As longtime readers will know I’m quite keen on Microsoft’s Azure platform and, whilst I haven’t released anything on it, I have a couple of projects running on it right now. For the most part it’s been great: previously I’d have to spend a lot of time getting my development environment right and then translate that onto another server to make sure everything worked as expected. Whilst this wasn’t beyond my capability it was time burnt on activities that weren’t pushing the project forward, and it was often the reason I stopped bothering with projects altogether.
Of course, as I continue down the Azure path I’ve run into the many limitations, gotchas and ideology clashes that have caused me several headaches over the past couple of years. Most of them can be traced back to my decision to use Azure Table Storage: my first post on Azure development covered how I ran up against limitations I wasn’t completely aware of, and several more posts were dedicated to overcoming the shortcomings of Microsoft’s NoSQL storage backend. Since then I’ve delved into other aspects of the Azure platform, but today I’m not going to talk about any of the technology per se. No, today I’m going to tell you what happens when you hit your subscription/spending limit, something that can happen with only a couple of mouse clicks.
I’m currently on a program called Microsoft BizSpark, a kind of partner program whereby Microsoft and several other companies provide resources to people looking to build their own start-ups. Among the many awesome benefits I get from this (including an MSDN subscription that gives me access to most of the Microsoft catalogue of software, all for free) Microsoft also provides me with an Azure subscription that gives me access to a certain amount of resources. Probably the best part of this offer is the 1500 hours of free compute time, which allows me to run 2 small instances 24/7. Additionally I’ve also got access to the upcoming Azure Websites functionality, which I used for a website I developed for a friend’s wedding. However just before the wedding the website suddenly became unavailable, and I went to investigate why.
As it turned out I had somehow hit my compute hours limit for that month, which results in all your services being suspended until the rollover period. It appears this was due to me switching the website from the free tier to the shared tier, which counts as consuming compute hours whenever someone hits the site. Removing the no-spend block did not immediately resolve the issue, however a support query to Microsoft saw the website back online within an hour. My other project, the one that would have been chewing up the lion’s share of those compute hours, seemed to have up and disappeared, even though the environment was still largely intact.
This is in fact expected behaviour when you hit either your subscription or spending limit for a particular month. Suspended VMs on Windows Azure don’t count as being inactive and will thus continue to cost you money even whilst they’re not in use. To get around this, should you hit your spending limit those VMs will instead be deleted, saving you money but also causing some potential data loss. This might not be an issue for most people (for me all it entailed was republishing them from Visual Studio) but should you be storing anything critical on the local storage of an Azure role it will be gone forever. Whilst the nature of the cloud should make you wary of keeping anything outside of durable storage (like Azure Tables, SQL or blob storage) it’s still a gotcha that you probably wouldn’t be aware of until you ran into a situation similar to mine.
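To make the maths above concrete, here’s a quick back-of-the-envelope sketch in Python. The 1500-hour figure comes from the BizSpark offer mentioned earlier; the instance counts and 31-day month are just illustrative, but they show how little headroom 2 small instances actually leave:

```python
# Back-of-the-envelope check of monthly compute-hour consumption.
# The 1500-hour allowance is from the BizSpark offer; the rest is
# illustrative.

HOURS_IN_MONTH = 31 * 24  # 744 hours in a long month

def hours_billed(instances, hours_each=HOURS_IN_MONTH):
    """A deployed instance bills for every hour it exists, even when idle."""
    return instances * hours_each

ALLOWANCE = 1500

base = hours_billed(2)  # two small instances running 24/7
print(base, ALLOWANCE - base)  # 1488 hours used, only 12 to spare

# Switching a website from the free tier to the shared tier effectively
# adds another billed instance, blowing past the allowance mid-month.
print(hours_billed(2) + hours_billed(1) > ALLOWANCE)  # True
```

With only 12 spare hours in the month it takes well under a day of shared-tier traffic to trip the limit, which matches what happened to me.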
Like any platform there are certain aspects of Windows Azure that you have to plan for, and chief among them is your spending limit. It’s pretty easy to simply put in your credit card details and go crazy provisioning as many VMs as you want, but sooner or later you’ll be looking to put limits on your spending and it’s then that you have the potential to run into these kinds of issues.
Stripping away certain aspects of a game is the norm for independent developers, as limited resources constrain what you’re able to accomplish. Whilst on the surface this sounds like it would make for an inferior game, it often results in a game that makes incredible use of its bare essentials, creating an overall experience that’s on par with much larger titles. Then there are those that eschew nearly all aspects of traditional games in order to focus on a single one. Notable entries include games like Gravity Bone, Thirty Flights of Loving, Auralux and, I’m most pleased to say, the new exploration game Proteus.
Like other exploration games Proteus is one where the narrative is primarily driven by your curiosity. Upon starting the game you’ll be greeted with your own little island (which I assume is procedurally generated, so each one is unique). There’s no voice telling you to walk towards it, nor any other indication that you should even go there, but of course there’s that little voice at the back of your head telling you to proceed. Should you do so, the next hour of your life will be dedicated to exploring a world that undergoes wild amounts of change and, eventually, so will you.
Proteus is unique in terms of graphical style, straddling the boundary between pixel art and early 3D first person shooters. What I found particularly interesting was that despite the bare bones nature everything was instantly recognizable, from the various types of plants and animals to the bits of miscellanea that covered my island. I’ll be honest: at first I just thought the developer was being lazy, but the more I played the more I began to appreciate the simplicity, as that kind of refinement doesn’t exactly come easy.
There are no real game mechanics to speak of; the whole point of Proteus is simply for you to explore the island it has created for you. Whilst the island isn’t particularly huge there’s definitely enough to keep you interested, especially with all the various animals that react in different ways to you approaching them. There’s also a weather system that changes from time to time which, again, changes the island. But this is all just a lead up to the best part of Proteus, and it only happens at night.
What could be considered plot spoilers follow:
I remember seeing this for the first time very clearly. The sun had gone down and little lights began to appear everywhere. Up until then I had wondered what the overall point of the game was as whilst it was cool to explore a procedurally generated island there wasn’t much more to it; no purpose, no story. But then the lights began to move in a strange way, they seemed to be all moving towards a single point on the island. Curious I walked towards it and they began to speed up with more and more lights appearing out of nowhere to join them.
The lights began congregating at one location, forming a kind of vortex centered on a point in the middle of the island. I walked towards it and they spun faster still, swarming around me until they erupted in a blinding flash of light. Afterwards I saw it was daytime once again, but the island had changed. New life had sprung up around me and the world looked very different. I realised then and there what had happened: I had been transported forward in time to the next season.
And so this process repeated itself several times over; each time night fell I would wait anxiously for the lights to reappear so I could advance to the next stage. Eventually winter came to my island and it instantly became a desolate wasteland, home to no perceptible life. I wandered my island aimlessly looking for a sign, something to show that it was still alive, but alas there was none. I again waited for night to come but the lights never appeared, so I kept exploring, hoping that I’d find the solution to a problem I felt I had created. It was then that my slow descent into the clouds began, and eventually my eyes closed and my island journey came to an end.
Proteus wins my praise for the simple fact it went from a slightly confusing experience to an incredibly magical one by the use of simple mechanics that forced me to build my own narrative. If Auralux is the essence of real time strategy then Proteus is the essence of an exploration game as it does away with pretty much all extraneous elements in favour of the exploration mechanic. It’s short and bittersweet and definitely not for everyone but if you’re a fan of creating your own narrative or exploring games that strip away all things in favour of one aspect then Proteus is definitely worth a look in.
Proteus is available on PC right now for $9.99. Total game time was approximately 1 hour.
The finance market in Australia is in a weird state at the moment. On the one hand we’re doing pretty well economically, with unemployment remaining low and our major trading partners still buying from us despite our strong dollar. On the other hand the finance market, specifically credit and lending, looks much like it did during the peak of the global financial crisis, with lending rates at record lows. Now it’s not like this is completely unexpected, considering the Eurozone crisis is still working itself out, but favourable economic conditions and low lending rates rarely go hand in hand.
Indeed it’s gotten to the point where the Reserve Bank of Australia doesn’t believe it can effect much more change by lowering the official rate and will likely hold off on any changes until sometime next year. At the same time banks’ funding conditions have continued to improve, which has led to calls from industry bodies for them to start cutting their rates independently of the RBA. Banks have never been shy about raising rates outside of official RBA decisions, but cutting them would be something new for all of the major lenders, especially considering the rather tumultuous funding environment we’ve had to endure over the past 5 years.
Now no one would expect these cuts to happen right now as there’s really no pressure on the market from either direction that would make such a move advantageous. Most industry analysts agree, however, that conditions would be favourable for banks to do this within the next year. If that’s the case then there’s a pretty simple way to check whether banks think there’ll be a rate cut, whether by them/their competition or the RBA, within the next year. All we have to do is compare the current fixed term rates with the current variable rates on offer and see what the difference is between the various fixed term lengths.
Right now the cheapest variable loan you can secure is about 4.99%, a bargain we haven’t really seen since the deepest parts of the GFC. Whilst there’s quite a spread between the lowest and highest rates there’s a pretty good chunk of the market hovering around the 5.25% region, so we’ll use that as our baseline for comparison. For 1 and 2 year fixed loans it’s looking pretty similar, with the rates basically remaining the same overall, although there seem to be more lenders willing to lock in at 4.99% for that amount of time. It’s only at 3 years that we start to see much change, with the average jumping up about 0.25%, a pretty small increase that’s essentially a hedged bet against any unforeseen circumstances.
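To put those rate differences in perspective, here’s a quick sketch using the standard amortising loan repayment formula. Only the rates come from the market snapshot above; the $300,000 principal and 25-year term are hypothetical figures of my own, purely to show the size of the gap:

```python
# Rough monthly repayment comparison at the rates quoted above.
# Loan size and term are hypothetical, for illustration only.

def monthly_repayment(principal, annual_rate, years):
    """Standard amortising-loan (annuity) repayment formula."""
    r = annual_rate / 12          # monthly interest rate
    n = years * 12                # total number of repayments
    return principal * r / (1 - (1 + r) ** -n)

principal, years = 300_000, 25
for label, rate in [("variable (baseline)", 0.0525),
                    ("1-2 year fixed",      0.0499),
                    ("3 year fixed",        0.0550)]:  # baseline + ~0.25%
    print(f"{label}: ${monthly_repayment(principal, rate, years):,.2f}/month")
```

The spread works out to a difference of a few tens of dollars a month either way, which is why I say the fixed versus variable choice is largely one of comfort right now.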
The takeaway from this is that by and large the banks don’t really expect the funding situation to change dramatically in the next couple of years, as their longer term loans aren’t really priced with that in mind. There are some examples of lenders offering very attractive rates around the 2 year mark (ones lower than their current variable rates) but they’re most certainly not the majority and consist primarily of smaller, non-bank lenders. Barring any drastic changes (like the Eurozone crisis escalating again) I can’t see any indication that the banks are thinking of moving rates in any meaningful direction for the next couple of years, nor do they expect the RBA to do so.
This doesn’t really mean much unless you’re currently in the market for a new loan or refinancing, but if you are then the choice between variable and fixed is essentially moot at this point and you should go with whatever makes you feel most comfortable. It’s actually a great time to get a home loan thanks to the widespread stagnation of house prices and cheap funding, both of which are set to continue for at least another year. Of course you probably shouldn’t dive in unless you’ve done the proper due diligence, but if you’ve been on the fence for a while I really can’t think of a better time to buy in the last 5 years.
Well apart from the darkest parts of the GFC, but that had a whole bunch of other issues associated with it.
When Google announced the Nexus 4 I was genuinely excited; my Lumia was showing its age and I was eager to get back to the platform I loved, especially on a handset delivered by Google. However month after month of delays, which had me hanging on the order page every day, eventually wore my patience down and I swore that Google wouldn’t be getting any money from me this time around. Whilst I’ll admit that I almost caved when they finally became available I stuck to my guns and kept searching for a replacement handset.
Initially I was sold on the ZTE Grand S as its release date wasn’t too far off and its specifications were really quite impressive. Still, being an impatient, instant gratification kind of guy I kept searching for other phones with similar specs but an earlier release date. It didn’t take long before I stumbled across the Sony Xperia Z, which not only matched the ZTE in every way but was also going to be available months earlier. Within a week I had dropped the requisite cash for one and not long after it arrived at my doorstep.
The Xperia Z is by far the largest phone I’ve ever owned, with a massive 5″ screen at an even more incredible 1080p resolution (yeah, that’s the same as my TV). For someone with large hands who struggled with the smaller screens on iPhones and my Samsung Galaxy S2 the increased screen real estate is just awesome, especially when it comes to typing. The screen itself is none too shabby either, with that high DPI making everything look clear and incredibly detailed. It is a TFT screen, which means its viewing angle is somewhat limited (not usually a problem, but certainly noticeable) and it’s a little rubbish in direct sunlight. This can be combated somewhat by turning on auto-brightness adjustment, which is strangely off by default.
Despite its size and glass casing the Xperia Z is quite light, especially compared to the hefty Nokia device I upgraded from. It’s not on the level of the Galaxy S2, where I’d sometimes forget I had it in my pocket; it’s far too large for that. I believe this is due to its rather unique construction where the glass layers are actually quite thin which, whilst reducing weight, does mean that pressing on the screen can sometimes cause the LCD to warp slightly, which is a little disconcerting. Having said that I’ve already managed to drop mine a couple of times and it’s survived with no noticeable consequences.
The hardware under the hood is great on paper (Snapdragon S4 Pro quad-core 1.5GHz processor with 2GB RAM and 16GB of on board storage) and it doesn’t fail to deliver in the real world either. Out of the box everything is buttery smooth, with all applications reveling in the insane amount of grunt the Xperia Z has behind it. The only time I’ve seen it struggle is when I’ve started making modifications (like a custom launcher and theme), but even that only seems to happen at very particular times and disappears as quickly as it started.
Surprisingly such grunt doesn’t come at the cost of battery life thanks to the massive 2400mAh battery that powers the Xperia Z. Whilst it will gladly chew through all that energy should you give it a reason to (like playing Minecraft on it, for instance) in its default state it’ll last for days on a single charge. I charge my battery every night but most of the time it’s above 50% when I do, showing that it’s quite capable of going for 2 days without requiring a charge. This is all without its crazy STAMINA mode enabled either which disables data connections when the screen is off which I can only assume would increase the battery life further.
The camera is none too bad either, being a 13MP Exmor RS chip similar to the ones that power Sony’s powerhouse pocket cams like the NEX-5. It’s capable of producing some pretty decent pictures, like the one you see above, however like all smartphone cameras it languishes in low light, where it tries to ramp up the ISO and just ends up creating a noisy mess. The HDR video also seems to be something of a gimmick, as turning it on doesn’t seem to have a noticeable impact on the resulting video, although I haven’t done any conclusive testing with it.
Sony took something of a light touch when it came to customizing the underlying Android OS, with their mobile theme being a thin veneer over the default Jelly Bean interface. They’ve also favoured the in-built applications over developing their own versions, which is great as whilst Samsung’s apps weren’t terrible they paled in comparison to others, including the stock Android versions. The only application that got a lot of work was the camera app, and realistically all that was done there was to support the not-so-standard features Sony packed into it. Overall I was quite pleased with Sony’s approach as it shows they’re focused on providing a great experience rather than shovelling in crapware.
However I can’t really give Sony all the credit for that as it really comes down to Android and the third party application ecosystem that’s developed around it. Whilst I hadn’t been gone from Android for long, the improvement in many of the applications I use daily is really impressive, and things that felt like a chore on other platforms are just so much better here. That, combined with the insane amount of customization Android allows, has enabled me to make my Xperia Z truly unique to me whilst regaining all the functionality I had been missing on my Lumia.
Sony has really come a long way with their line of phones, from way back when they launched their first Xperia (which I still have in a drawer at home) to today, when they’re building phones that are, in my opinion, best in class. I’ll admit I was a little worried that I had jumped the gun when I heard the S4 was going to be out soon, but the Xperia Z is not only comparable, it beats it in several categories. The fact that Sony was able to release a phone of this calibre ahead of the competition says a lot about their development team and I’m happy to say they’ve created the best phone I’ve used to date.
Last week I regaled you with a story of the inconsistent nature of Australia’s broadband and how the current NBN was going to solve that by replacing the aging copper network with optical fibre. However whilst the fundamental works to deliver it are underway it is still in its nascent stages and could easily be usurped by a government that didn’t agree with its end goals. With the election looking more and more like it’ll swing in the coalition’s favour there has been a real risk that the NBN we end up with won’t be the one we were promised at the start, although the lack of a concrete plan had left me biting my tongue whilst I awaited the proposal.
Today Malcolm Turnbull announced his NBN plan, and it’s not good at all.
Instead of rolling out fibre to 93% of Australians and covering the rest with satellite and wireless connections the Liberals’ NBN will only roll fibre out to 22%, with the remaining 71% covered by FTTN. According to Turnbull’s estimates this will give all Australians broadband speeds of up to 25Mbps by 2016, with a planned upgrade to up to 100Mbps by 2019. The total cost of this plan would be around $29 billion, about $15 billion less than the currently planned total expenditure for Labor’s FTTP NBN. If you’re of the mind that the NBN was going to be a waste of money that’d take too long to implement then these numbers would look great to you, but unfortunately they’re anything but.
For starters, speeds of up to 25Mbps aren’t much of an upgrade over what’s available with the current ADSL2+ infrastructure. Indeed most of the places they’re looking to cover can already get such services, so running fibre to their nodes will likely not net them much benefit. Predominantly this is because the last mile will still be on the copper network, which is the major limiting factor in delivering higher speeds to residential areas. They might be able to roll out FTTN within that time frame but it’s highly unlikely that you’ll see any dramatic speed increases, especially if you’re on an old line.
Under the Liberals’ plan you could, however, pay for the last mile run to your house which, going by estimates from other countries that have done similar, could range anywhere from $2500 to $5000. Now I know a lot of people who would pay that, indeed I would probably be among them, but I’d much rather it be rolled out to everyone indiscriminately, otherwise we end up in a worse situation than we have now. The idea behind the NBN was ubiquitous access to high speed Internet no matter where you are in Australia, so forcing users to pay for the privilege kind of defeats its whole purpose.
Probably the biggest issue for me though is how the coalition plans to get to 100Mbps without running FTTP. The technologies Turnbull has talked about in the past just won’t be able to deliver the speeds he’s talking about. Realistically the only way to reliably attain those speeds across Australia would be with an FTTP network, and upgrading an FTTN solution will cost somewhere on the order of $21 billion. All added up that makes the Liberals’ NBN around $6 billion more than the current Labor one, so it’s little wonder that they’ve been trying to talk up the cost in the past week or so.
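Totalling up the figures quoted above makes the problem plain; a quick sum (all figures in billions, taken from this post’s own numbers rather than any official costing):

```python
# Totalling the figures quoted above (all in $ billions). These are the
# post's own numbers, not official costings.

coalition_fttn = 29   # Turnbull's announced FTTN plan
fttp_upgrade = 21     # estimated cost to later upgrade FTTN to FTTP
labor_fttp = 29 + 15  # the FTTN plan is "about $15 billion less" than Labor's

coalition_total = coalition_fttn + fttp_upgrade
print(coalition_total, labor_fttp)   # 50 vs 44
print(coalition_total - labor_fttp)  # roughly $6 billion dearer overall
```

In other words, once you price in the upgrade that would be needed anyway, the "cheaper" plan ends up costing more for the same end state.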
You can have a look at their policy documents here but be warned it’s thin on facts and plays fast and loose with data. I’d do a step by step takedown of all the crazy in there but there are people who are much more qualified than me to do that and I’ll be sure to tweet links when they do.
Suffice to say the Liberal’s policy announcement has done nothing but confirm our worst fears about the Liberal party’s utter lack of understanding about why the FTTP NBN was a good thing for Australia. Their plan might be cheaper but it will fail to deliver the speeds they say it will and will thus provide a lot less value for the same dollars spent on a FTTP solution. I can only hope come election time we end up with a hung parliament again because the independents will guarantee that nobody fucks with the FTTP NBN.
When a technology company doesn’t get a whole lot of press it usually means one of two things: either it isn’t that interesting and no one really cares about it or, and this doesn’t happen often, the company simply doesn’t want or need the attention. Conversely if a product is a dismal failure it’s usually guaranteed to get a whole bunch of the wrong type of attention, especially with the Internet’s bent towards schadenfreude. With that in mind I wondered why I hadn’t heard more about D-Wave since I last wrote about them around this time last year, especially considering that Lockheed Martin had bought one of their D-Wave One systems a year prior to that.
Turns out they probably don’t really need the press as they’re doing just fine:
VANCOUVER — When the world’s largest defence contractor reportedly paid $10 million for a superfast quantum computer, the Burnaby, B.C., company that built it earned a huge vote of confidence.
Two years after Lockheed Martin acquired the first commercially viable quantum computer from D-Wave Systems, the American aerospace and technology giant is once again throwing its weight behind a technology many thought was still the stuff of science fiction.
You’d be forgiven for thinking this was just old news resurfacing 2 years later, but it isn’t: Lockheed Martin just purchased a D-Wave Two, their latest and greatest quantum computing offering. Details are a little scant as to what is actually in the latest system but, going off their product road map, it’s likely to be some variant of their Vesuvius chip, which contains 512 qubits. That’s 4 times the number of qubits in their previous system, which would make it considerably more powerful, all for the same cost as the first unit they sold.
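As a rough illustration of why that qubit count matters: n fully entangled qubits span a state space of 2^n amplitudes, so the jump from 128 to 512 is far more than a 4x increase in raw capability (assuming, of course, the qubits really are entangled):

```python
# Why qubit counts matter: n fully entangled qubits span a state space
# of 2**n amplitudes, whereas n independent qubits behave like n
# separate two-state systems. Counts are from D-Wave's product line as
# described above.

def entangled_state_space(n):
    """Number of basis states spanned by n fully entangled qubits."""
    return 2 ** n

for n in (128, 512):  # D-Wave One vs the Vesuvius-based D-Wave Two
    print(f"{n:>3} qubits -> 2^{n} = {float(entangled_state_space(n)):.2e} states")
```

That exponential relationship is exactly what’s at stake in the entanglement question I dig into below: without multi-qubit entanglement the scaling is only linear in the number of qubits.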
In my quest to find a little more information about their new system I stumbled across this page, which digs into the underlying architecture of the D-Wave One/Two systems. Back when I first wrote about D-Wave they weren’t exactly forthcoming with this kind of information, which drew them a considerable amount of criticism, but since then a lot of their loudest critics have renounced their positions. Interestingly though (and feel free to correct me if I’m interpreting this wrong) whilst they do claim to have produced a functioning qubit they haven’t managed to entangle several of them together. This doesn’t make their system useless, as single qubits daisy chained together will still be useful for some specific functions, but it does mean that the exponential scaling doesn’t really apply to D-Wave’s style of quantum computer. I could be wrong about this but their explanation only mentions entanglement-like properties in the qubit section, with their interconnecting grids only being used to “exchange information”, not to provide multi-qubit entanglement.
That doesn’t make it any less cool however, as I’m sure that as they continue to scale up their processors they’ll eventually start entangling more qubits together, which will increase their computational power exponentially. We won’t see consumer level processors using technology like this for a long time though, as they’re akin to CUDA units on graphics cards: highly specialized computational units that excel at their task and not so much at general computing. Still, D-Wave’s systems signal the beginning of the quantum computing era and that means it’s only a matter of time before we see them everywhere.
There are few games that have managed to reinvent themselves as successfully as the BioShock series has. Whilst the first and second did not differ much in terms of setting they played as wildly different games, and they both managed to explore different parts of the same universe. The Rapture universe was pretty much tapped out however (barring a prequel) and so it was with a sense of intrigue that I waited to see what Irrational had planned for their magical steampunk world. BioShock Infinite is the next instalment in the BioShock franchise and the first to be set outside of Rapture, but that’s not the only difference this game brings with it.
Set about 48 years prior to the original, BioShock Infinite puts you in the shoes of Booker DeWitt, a private security agent who works for the Pinkerton National Detective Agency. You are charged with a simple mission: retrieve a girl and your large gambling debt will be wiped clean. It’s not going to be that simple of course, as she’s being held captive in Columbia, a city that resides in the clouds and is ruled by a man everyone calls The Prophet. Thus a simple snatch and grab soon turns into much more as you unravel the events that led to you being here, and why your opponents seem to know so much about you.
BioShock Infinite has the same art deco feel as its predecessors and there’s been a notable step up in the graphics. Whilst Rapture is quintessential BioShock I can’t say that I missed it much when playing through Columbia as the wide open environments just felt a whole bunch better. Sure there wasn’t a lot more detail, with the world rapidly fading into the blue sky, but there was something refreshing about being out in the open. Combined with the excellent foley and music work the environment of BioShock Infinite is top notch and is something I’ve come to expect from Irrational’s games.
The way BioShock Infinite plays will feel instantly familiar to anyone who’s played the previous instalments. Lovers of the original’s more RPG-like elements will be disappointed to know that the simplification has continued, with many of the more complicated ideas being distilled down to their basics. The necessary elements of a BioShock game are still there (plasmids are called vigors and you use salts instead of EVE) but on a scale from Mass Effect 1 to Call of Duty we’re definitely leaning towards the latter in terms of overall game complexity.
Probably the biggest change to BioShock’s combat, at least in terms of its overall impact on the game, is the addition of a rechargeable shield. Now it’s not like this was just slapped on top of the previous combat system; its implementation seems to come at the cost of being able to carry consumables. The reasons behind this seem to be twofold: the primary one being to facilitate the overall simplification in aid of making the game more fluid, but at the same time it also encourages you to search around as you’ll often find yourself low on health, salts or both. However the infusion system, which allows you to upgrade your health/shield/salts capacity, would seem to heavily favour going for shields before anything else, at least if you were playing BioShock the way I was.
Many of the vigors will seem familiar, notably ones like Shock Jockey and Devil’s Kiss, but they’ve all got a unique twist that sets them apart from their predecessors. I’m not exactly sure why, but the alternate use mode for most of them, activated by charging up the power, is usually to create a trap version of said vigor. This can be useful if that’s your play style, but for someone like me they were mostly useless unless I was facing down one of Columbia’s larger enemies. The traps might come in handy if you’re playing on 1999 mode difficulty, but after a certain point I rarely found myself needing them due to the vigor/gear combination I found that made me feel completely broken.
My vigor of choice was Charge, which allowed me to get up close and personal with enemies who were usually quite a distance away. It wasn’t particularly great initially, however once it’s upgraded you not only get bonus damage on your target, your shields are instantly recharged and you’re made invulnerable for a couple of seconds. Combine this with some gear that gives you a 30% chance to possess things and a 400 damage fire nova when struck and you have a recipe for someone who’s essentially invulnerable in battle, with most of the enemies tearing each other apart if they’re not on fire already. Once I had that combo down there weren’t really enough enemies in Columbia to stop me, unless they weren’t grouped together.
Gone is the two tiered currency system where ADAM was used for plasmids and cash for everything else; now all you’ll deal in is cash. Again this seems to have been done in aid of simplifying the whole game, although it means that gear prioritizing cash rewards, like the Extra! Extra! hat that gives you cash from voxophones, is by far the smartest choice early on. This does feel a bit limiting to begin with, as passing up those sources of revenue in favour of other gear upgrades feels like cutting yourself off from a potential killer build.
This is in stark contrast to BioShock 2, where you were basically able to try out any build you wanted in the space of a single playthrough. In BioShock Infinite there’s no way you’ll be able to get the cash required to upgrade all the vigors and all the guns in a single playthrough (I say this as someone who found the vast majority of voxophones and much of the hidden coin stashes, and finished with 2 maxed weapons and vigors; I could have afforded 1 more of each though). This is possibly done to encourage additional playthroughs, as previous BioShocks could be done as one shot deals should you make the right choices. I don’t hold this against BioShock Infinite though, as it forces you to make choices about how you’re going to play rather than just picking and choosing whatever you need for the particular situation.
Minor-ish plot spoilers follow.
One very notable thing that's absent from BioShock Infinite is the franchise's moral choice system. Now it's not like you're completely without choice, there are many occasions where you're presented with similar binary choices that affect the game in some way, but the whole idea of crafting a good/bad/mixed character is gone. I believe this is mostly due to how the story is constructed, what with the whole pre-determined fate idea woven throughout the game's narrative, but it did remove a significant amount of the agency in Booker's character, which was previously one of the stronger points of the BioShock franchise.
This is not to say that the story suffers because of this, far from it. Whilst it would be easy to pick holes in the "tear" idea that's central to Elizabeth's character and the overall plot, it does function well as a plot device. This, combined with Ken Levine's brilliant writing and the voice actors' great performances, makes BioShock Infinite's story engaging, thrilling and, whilst ultimately tragic, beautifully executed. The only criticism I'd level at it is that it became somewhat predictable past a certain point, but the overall concept was still solid.
BioShock Infinite is another great instalment in the BioShock franchise, aptly demonstrating that Irrational is capable of delivering a fresh game experience when it would be all too easy to just crank out another Rapture. Whilst the game may have undergone a lot of simplification from its predecessors I don’t feel that it suffered because of it. Indeed BioShock Infinite feels a lot more fluid, the story flows better and rarely would I find my immersion broken by something in game. For both fans of the series and newcomers alike BioShock Infinite provides a gaming experience that’s hard to find a direct comparison to, one that’s incredibly enjoyable.
Rating: 9.25 /10
BioShock Infinite is available on PC, Xbox360 and PlayStation 3 right now for $69.99, $78 and $78 respectively. Game was played on the PC on Hard difficulty with 10 hours play time and 48% of the achievements unlocked.
The state of broadband Internet in Australia is one of incredible inconsistency. I lived without it for the better part of my youth, being stuck behind a dial up connection because my local exchange simply didn’t have the required number of people interested in getting broadband to warrant any telco installing the required infrastructure there. I was elated when we were provided a directional wireless connection that gave me speeds that were comparable to that of my city dwelling friends but to call it reliable was being kind as strong winds would often see it disconnect at the most inconvenient of times.
The situation didn't improve much when I moved into the city though, as whilst I was pretty much guaranteed ADSL wherever I lived, the speed at which it was delivered varied drastically. The connection in my first home, which was in an affluent and established suburb, usually capped out at well below half of its maximum speed. The second home fared much better despite being about as far away from the closest exchange as the other house was. My current residence is on par with the first, even with the technological jump from ADSL to ADSL2+. As to the reason behind this I cannot be completely sure, but there is no doubt that the aging copper infrastructure is likely to blame.
I say this because my parents, who still live out in the house that I grew up in, were able to acquire an ADSL2+ connection and have been on it for a couple years. They're not big Internet users though and I'd never really had the need to use it much when I'm out there visiting, but downloading a file over their connection last week revealed that their connection speeds were almost triple mine, despite the long distance between them and their exchange. Their connection is likely newer than most in Canberra thanks to their rural neighbourhood being a somewhat recent development (~30 years or so). You can then imagine my frustration with the current copper infrastructure, as it simply cannot be relied upon to provide consistent speeds, even in places where you'd expect it to be better.
There's a solution on the horizon however in the form of the National Broadband Network. The current plan of rolling out fibre to 93% of Australian households (commonly referred to as Fibre to the Premises/Home, or FTTP/H) eliminates the traditional instability that plagues the current copper infrastructure, along with providing an order of magnitude higher speeds. Whilst this is all well and good from a consumer perspective, it will also have incredible benefits for Australia economically. There's no denying that the cost is quite high, on the order of $37 billion, but not only will it pay itself back in real terms long before its useful life has elapsed, it will also provide benefits far exceeding that cost shortly after its completion.
Should this year's election go the way everyone is thinking it will, the glorious NBN future will look decidedly grim if the Coalition has their way with it. They've been opponents of it from the get go, criticising it as a wasteful use of government resources. Whilst their plan might not sound that much different on the surface, choosing to only run Fibre to the Node (FTTN) rather than to the premises, it is a decidedly inferior solution that will not deliver the same level of benefits as the currently envisioned NBN. The reason behind this is simple: it still uses the same copper infrastructure that has caused so many issues for current broadband users in Australia.
You don’t have to look much further than Canberra’s own FTTN network TransACT to know just how horrific such a solution is. After a decade of providing lackluster service, one that provided almost no benefit over ADSL2+, TransACT wrote down their capital investment and sold it to iiNet. If FTTN can’t survive in a region that is arguably one of the most affluent and tech savvy in Australia then it has absolutely no chance of surviving elsewhere, especially when current ADSL services can still be seen as competitive. You could make the argument that the copper could be upgraded/remediated but then you’re basically just building a FTTP solution using copper, so why not just go for optic fibre instead?
What really puts it in perspective is that the International Space Station, you know, that thing whizzing hundreds of kilometres above Earth at Mach 26, has faster Internet than the average Australian does. Considering your average satellite connection isn't much faster than dial up, the fact that the ISS can beat the majority of Australians speed wise shows just how bad staying on copper will be. FTTN won't remedy those last mile runs where all the attenuation happens, and that means you can't guarantee minimum speeds like you can with FTTP.
The NBN represents a great opportunity to turn Australia into a technological leader, transforming us from something of an Internet backwater to a highly interconnected nation with infrastructure that will last us decades. It will mean far more for Australia than faster loading web pages, but failing to go the whole way with FTTP will make it an irrelevant boondoggle. Whilst we only have party lines to go on at the moment, with the "fully detailed" plan still forthcoming, it's still safe to say that the Coalition are bad news for it, no matter which angle you view their plan from.
I’ll just put this here, a sunset on Mars as seen by the Curiosity rover:
I had one of those moments watching this video where I just considered the chain of events that led up to me being able to see this. There's a robot on another planet, hundreds of millions of kilometers away, that's beaming pictures back to Earth. Those pictures were then made available to the public via a vast, interconnected network that spans the entire globe. One person on that network decided to collate them into a video and make that available via said network. I then, using commodity hardware that anyone can purchase, was able to view that video. The chain of events leading up to that point seems so improbable when you look at it as a completed system, but they all exist and are all products of human innovation.
Isn’t that just mind blowingly awesome?
We Australians do love to pirate things. Those of us who live here can tell you why: we're either gouged extensively on the same products sold overseas or we're subject to incredible delays. The Internet has helped to remedy both these things, with the former being solved by having access to the same shops that everyone else does and the latter by eliminating most of the long delays. Still, even though we've come this far, we're still subject to the same artificial scarcity that just doesn't need to exist with certain goods, especially ones that can be purely digital.
Our tendency towards piracy hasn't gone unnoticed by the rights holders overseas, but all they've done in response is send scorn our way. There have been a couple shining examples of what they should do, like the ABC offering episodes of Dr. Who on iView before it shows on TV (that's no more for this season, unfortunately), but few seem to be following their lead. It seems that, at least for the near future, Australia will be viewed as nothing more than a pirate haven, a drain on the creative world that does nothing but take.
Or will it?
Any avid TV watcher will be aware of the blockbuster series Game of Thrones, which just aired episode one of season 3. Whilst the numbers aren't in yet, it's shaping up to be the most pirated show ever yet again, with Australia making up a decent portion of that. You would think then that its publishers would be aghast at these numbers, as the current executive thinking is that every download is somehow a missed sale, robbing them of untold millions that should be in their pockets. However an interview with HBO's programming president Michael Lombardo reveals that they're doing just fine in spite of it and, in fact, are kind of flattered by it:
“I probably shouldn’t be saying this, but it is a compliment of sorts,” HBO programming president Michael Lombardo told EW. “[Piracy is] something that comes along with having a wildly successful show on a subscription network.”
Last month Nikolaj Coster-Waldau, the actor who plays Jaime Lannister in the show, said that although people watch the show online, he hoped they would still go out and buy the DVD or Blu-ray. And guess what? According to HBO, they do.
“The demand is there,” Lombardo said. “And it certainly didn’t negatively impact the DVD sales.”
I think you could knock me over with a feather after I read that.
There's been a lot of research done into whether or not piracy, with respect to the online kind, is an overall negative influence on creative industries like TV, music and video games. Preliminary studies have shown that music pirates tend to spend much more than their non-pirating counterparts, and that appears to extend to other industries. Lombardo's revelation that the rampant piracy experienced by their flagship series didn't hurt their DVD sales fits in with this idea as well, and it's incredibly gratifying to see people at the executive level finally admitting that piracy isn't as big of an issue as they've made it out to be. Of course he's well aware that such a position isn't popular, even within his own company, but at least the seeds of dissent are starting to take root and hopefully it will continue on from there.
History has shown that attempting to eliminate piracy is a fool's errand and that the only reliable way to combat it is to provide a product that's competitive with what piracy offers. Valve, Netflix et al. saw this for their respective industries, and their success is a testament to the fact that people will pay good money once the price is set at the right point. Companies who attempt to fight this are going to find themselves routinely outclassed by these upstarts and it'll only be a matter of time before they find themselves on the wrong side of a bankruptcy hearing. So other executives should take note of Lombardo's stance and consider taking the same view of their own rights portfolios.