Technology

Chef Watson: Big Data Might Finally Be Usable.

The promise of Big Data has been courting many a CIO for years now, the allure being that all the data they have on everything can be fed into some giant engine that will then spit out insights for them. However, as with most things, the promise and the reality are vastly different beasts and, whilst there are examples of Big Data providing never-before-seen insights, it hasn't really revolutionized industries in the way other technologies have. A big part of that is that Big Data tools aren't push-button solutions, requiring a deep understanding of data science in order to garner the insights you seek. IBM's Watson, however, is a much more general purpose engine, one that I believe could deliver on the promises that its other Big Data compatriots have made.

The problem I see with most Big Data solutions is that they're not generalizable, i.e. a solution developed for a specific data set (say, a logistics company wanting to know how long it takes a package to get from one place to another) will likely not be applicable anywhere else. This means that whilst you have the infrastructure and capability to generate insights, the investment required to attain them needs to be reapplied every time you want to look at the data in a different way, or whenever you have other data that requires similar insights to be derived from it. Watson, on the other hand, falls more into the category of a general purpose data engine that can ingest all sorts of data and provide meaningful insights, even for things you wouldn't expect, like helping to author a cookbook.

The story behind how that came about is particularly interesting as it showed what I feel is the power of Big Data without needing a data science degree to exploit it. Essentially Watson was fed over 9000 (ha!) recipes from Bon Appétit's database, which was then supplemented with the knowledge it has around flavour profiles. It then used all this information to derive new combinations that you wouldn't typically think of and provided them back to the chefs to prepare. Compared to traditional recipes, the ingredient lists that Watson provided were much longer and more involved; however, the results (which should be mostly attributed to the chefs preparing them) were well received, showing that Watson did provide insight that would otherwise have been missed.

That'd just be an impressive demonstration of data science if it wasn't for the fact that Watson is now being used to provide similar levels of insight across a vast number of industries, from medicine to online shopping to even matching remote workers with employers seeking their skills. Whilst it's far short of what most people would class as a general AI (it's more akin to a highly flexible expert system over the data it's provided), Watson has shown that it can be fed a wide variety of data sets and can then be queried in a relatively straightforward way. It's that last part that I believe is the secret sauce to making Big Data usable, and it could be the next big thing for IBM.

Whether or not they can capitalize on that though is what will determine if Watson becomes the one Big Data platform to rule them all or simply an interesting footnote in the history of expert systems. Watson has already proven its capabilities numerous times over so fundamentally it’s ready to go and the responsibility now resides with IBM to make sure it gets in the right hands to further develop it. Watson’s presence is growing slowly but I’m sure a killer app isn’t too far off for it.

The New Battery Tech Conundrum.

The batteries in our portable devices never seem to be big enough; in fact, in some cases they seem to be getting worse. Gone are the days when forgetting to charge your phone for days at a time wasn't an issue, and you'll be lucky to get a full day's worth of use out of your laptop before it starts screaming to be plugged back into the wall. The cold hard fact of the matter is that storing electrical energy in a portable fashion is hard, as the amount of energy you can store is largely a function of the cell's volume, meaning those lovely slim smartphones you love are at odds with increasing their battery life. Of course there are always improvements to be made, however many breakthroughs in one aspect or another usually come at the cost of something else.

Take for instance the latest announcement to come out of Stanford University, which shows a battery that can be fully charged in under a minute and, if its creators are to be believed, could replace the current battery tech that powers all our modern devices. Their battery is based on a technology called aluminium-ion, which works in a very similar way to the lithium-ion technology that's behind most rechargeable devices. It's hard to deny the list of advantages that their battery tech has: cheaper components, safer operation and, of course, the fast charging times. However, those advantages start to look a lot less appealing when you see the two disadvantages that they currently have to work past.

The voltage and energy density.

As the battery tech stands now, the usable voltage the battery is able to put out is around 2 volts, which is about half the voltage that most devices currently use. Sure, you could get around this by using various tricks (a DC step-up converter, batteries in series, etc.), however these all reduce the efficiency of your battery and add complexity to the device you put them in. Thus, if these kinds of batteries are going to be used as drop-in replacements for the current lithium-ion tech, they're going to have to work out how to up the voltage significantly without impacting heavily on the other aspects that make it desirable.
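
To make that trade-off concrete, here's a minimal sketch; the cell capacity, target voltage and converter efficiency are my own illustrative assumptions, not figures from the Stanford work:

```python
# Illustrative numbers only: the cell capacity, target voltage and converter
# efficiency below are my own assumptions, not figures from the Stanford work.
CELL_VOLTAGE = 2.0        # volts, roughly what the aluminium-ion cell puts out
TARGET_VOLTAGE = 3.7      # volts, what a typical lithium-ion powered phone expects
CELL_CAPACITY_WH = 10.0   # watt-hours stored in one hypothetical cell
BOOST_EFFICIENCY = 0.90   # a DC step-up converter typically wastes ~10% as heat

# Option 1: one cell plus a step-up (boost) converter to reach the target voltage
usable_wh_boost = CELL_CAPACITY_WH * BOOST_EFFICIENCY

# Option 2: two cells in series reach ~4 V with no conversion loss,
# but you now carry twice the cells (weight, volume and cost)
usable_wh_series = 2 * CELL_CAPACITY_WH
series_voltage = 2 * CELL_VOLTAGE

print(f"Boost converter: {usable_wh_boost:.1f} Wh usable from a single cell")
print(f"Series cells:    {usable_wh_series:.1f} Wh at {series_voltage:.1f} V, double the bulk")
```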

The latter problem is the more difficult one and is something that all new battery technology struggles with. With any battery tech you're usually balancing quite a few factors in order to make the best trade-offs for your particular use case, however one of the most common is between charge times and the amount of energy you can store. In general, the quicker your battery can charge the less energy-dense it is, meaning that a fast charge time comes at the cost of usable life once it's off the charger. Indeed this is exactly the issue that the new aluminium-ion battery is struggling with, as its current energy density does not match that of lithium-ion.

Now this isn't to say that the idea is worthless, more that when you hear about these amazing kinds of batteries or supercapacitors (a different kind of technology, but an energy storage medium all the same) that have some kind of revolutionary property, your first reaction should be to ask what the trade-offs were. There's a reason why sealed lead-acid, nickel-metal hydride and other seemingly ancient battery technologies are still used the world over; they're perfect at the jobs they've found themselves in. Whilst charging your phone in a minute might be a great thing on paper, if that came with a battery life that was a mere 20% of its slower-charging competitors I'm sure most people would stick with the longer-lasting option. Hopefully the researchers can overcome the current drawbacks to make something truly revolutionary, but I'll stay skeptical until proven otherwise.

 

The Huawei Watch.

Back when I first saw the Motorola 360 I was pretty geared up to grab myself one, as it was the first smartwatch with a design that actually appealed to me. However the reviews for it were less than stellar, many of them citing poor battery life and lacklustre performance thanks to its incredibly outdated processor. This was enough to sour me on the idea as, whilst the design was still nice, I didn't want to burden myself with another device that I'd have to charge daily. With the Apple Watch failing to tickle my fancy I resigned myself to waiting for the next round of devices to see if anything came through. As it so happens there is one potential smartwatch I now have my eyes on, but I'm hesitant to get excited lest I get let down again.

The Huawei Watch bears a similar aesthetic to the Motorola 360 with a round face and a single button. The included Milanese strap is a nice addition especially considering that Apple would charge you an extra $600 for the privilege. However should that style not suit you then you’re free to change it to any standard 18mm or 21mm band that takes your fancy. It’s available in the standard array of colours (silver, black and gold) all of which share the same construction although the gold appears to come with a leather band rather than the Milanese style one.

Specifications-wise it's a definite step up from most of the competition, sporting a quad-core Qualcomm chip and a 400 x 400 AMOLED screen, protected by sapphire crystal, that covers the entire dial (unlike the 360, which has a black bar at the bottom). These differences might not sound like much, but the newer processor should be able to run a lot better in low power modes and the AMOLED screen handles being dimmed a lot better than the 360's IPS panel does. So whilst the Huawei Watch might have a slightly smaller battery it should, hopefully, be able to last significantly longer, battery life being the main complaint against the 360.

However I still have concerns about just how useful such a device will be for me as, whilst the array of sensors included in the device is impressive, it's still somewhat short of my idealized smartwatch. Sure, the list of features I laid out a while back might be a little extreme (indeed I no longer think including MYO technology is required, given that Google Glass isn't as great as I first thought it'd be), but I'd want something like this to be functional and useful. Perhaps I'm being too harsh a critic of the idea before I've tried it, as there's every chance that I'll find a myriad of uses for it once I have it, but I've used enough random bits of tech in the past to know that not all of them work out how everyone says they should.

Regardless, it's good to see more companies coming out with smartwatch designs that don't look like cheap plastic pieces of junk. Whilst I'll always question the value proposition of Rolex-priced smartwatches I can definitely see the value in having a piece of technology on your wrist. Whether the current generation of devices will be enough to satisfy me is something I'll have to find out, and the Huawei Watch might be the first one to make me shell out the requisite cash.

Windows 10 Brings Smaller Footprint, Better Updating.

Windows 10 is fast shaping up to be one of the greatest Windows releases, with numerous consumer-facing changes and behind-the-scenes improvements. Whilst Microsoft has been struggling somewhat to deliver on the rapid pace they promised with the Windows Insider program, there has been some progress as of late and a couple of new features have made their way into a leaked build. Technology-wise they might not be revolutionary ideas, indeed a couple of them are simply reapplications of tech Microsoft has had for years now, but the improvements they bring speak to Microsoft's larger strategy of trying to reinvent itself. That might sound awfully familiar to those with intimate knowledge of Windows 8 (Windows Blue, anyone?) so it'll be interesting to see how this plays out.

First cab off the rank in Windows 10's new feature set is a greatly reduced footprint, something that Windows has copped a lot of flak for in the past. Now this might not sound like a big deal on the surface, drives are always getting bigger these days, however the explosion of tablets and portable devices has brought renewed focus on Windows' rather large install size on these space-constrained devices. A typical Windows 8.1 install can easily consume 20GB which, on devices that have only 64GB worth of space, doesn't leave a lot for a user's files. Windows 10 brings a couple of improvements that free up a good chunk of that space and bring with them a couple of cool features.

Windows 10 can now compress system files, saving approximately 2GB on a typical install. The feature isn't on by default; instead, during the Windows install the system will be assessed to make sure that compression can happen without impacting the user experience. Whether current generation tablet devices will meet the minimum requirements for this is something I'm a little skeptical about, so it will be interesting to see how often this feature gets turned on or off.
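
If you want to see what the assessment decided on a given machine, the compression state can be queried from the command line. A minimal sketch, assuming the /CompactOS switch on compact.exe as it ended up shipping in release builds of Windows 10 (the leaked build discussed here may expose this differently):

```python
# A minimal sketch of checking the OS compression state from the command line.
# Assumes the /CompactOS switch on compact.exe as it shipped in release builds
# of Windows 10; the leaked build discussed above may expose this differently.
import subprocess

result = subprocess.run(
    ["compact.exe", "/compactos:query"],  # Windows-only, built-in tool
    capture_output=True,
    text=True,
)
print(result.stdout.strip())  # reports whether the system is in the compact state
```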

Additionally, Windows 10 does away with the recovery partition on the system drive, which is where most of the size savings come from. Instead of reserving part of the disk to hold a full copy of the Windows 10 install image, which was used for the refresh and repair features, Windows 10 can rebuild itself in place. This comes with the added advantage of keeping all your installed updates, so that refreshed PCs don't need to go through the hassle of downloading them all again. However, in the event that you do have to do that, they've included another great piece of technology that should make updating a new PC in your home a little easier.

Windows 10 will include the option of downloading PC updates via a P2P system, which you can configure to download updates only from your local network or also from PCs on the Internet. It's essentially an extension of the BranchCache technology that's been a part of Windows for a while now, but it makes it far more accessible, allowing home users to take advantage of it. If you're running a household full of Windows machines (like I am) this will make downloading updates far less painful and, for those of us who format regularly, help greatly when we need to grab a bunch of Windows updates again. The Internet-enabled option is mostly for Microsoft's benefit, as it'll take some load off their servers, but it should also help out users in regions that don't have great backhaul to the Windows Update servers.

If Microsoft continues to release features like this for Windows 10 then it definitely has a bright future ahead of it. These might not be the sexiest things to talk about, but they address real concerns that have plagued Windows for years. In the end they all amount to one thing: a better experience for the consumer, something which Microsoft has fervently increased its focus on of late. Whether they'll amount to a panacea for the ills of Windows 8 remains to be seen, but suffice it to say I'm confident that Windows 10 will line up well.

The Time Delta From Strange to Commonplace.

New technology always seems to border on being weird or creepy. Back in the 1970s and 80s it was weird to be into games, locking yourself away for hours at a time in a darkened room staring at a glowing screen. Then the children (and adults) of that time grew up and suddenly spending your leisure time doing something other than watching TV or reading a book became an acceptable activity. The same trend has played out more recently with the advent of social networks and smartphones, with people now divulging information on public forums at a rate that would've made the 1990s versions of them blush. What I've come to notice is that the time period between something being weird or creepy and becoming acceptable is getting smaller, and the rate at which it's shrinking is accelerating.

The smartphone you now carry with you everywhere is a constant source of things that were once considered on the borderline of acceptable but are now part of your life. Features like Google Now and Siri have their digital fingers through all your data, combing it for various bits of useful information that they can whip up into their slick interfaces. When these first came out everyone was apprehensive about them, I mean the fact that Google could pick up on travel itineraries and then display your flight times was downright spooky for some, but here we are a year or so later and features like that aren't so weird anymore, hell, they're even expected.

The factor that appears to melt down barriers for us consumers is convenience. If a feature or product borders on being creepy but provides us with a level of convenience we couldn't have otherwise, we seem to have a very easy time accommodating it. Take for instance Disney's new MagicBand, which you program with your itinerary, preferences and food choices before you arrive at one of their amusement parks. Sure, it might be a little weird to walk into a restaurant without having to order or pay, or to walk up to rides and bypass the queue, but you probably won't be thinking about how weird that is when you're in the thick of it. Indeed, things like MyMagic+ break down barriers that would otherwise impact on the experience and thus they work themselves easily into what we deem acceptable.

The same can be said for self-driving cars. Whilst techno junkies like myself can't wait for the day when taking the wheel to go somewhere is optional, the wider public is far more wary of what the implications of self-driving cars will be. This is why many companies have decided not to release a fully fledged vehicle first, instead opting to slowly incorporate pieces of the technology into their cars to see which features customers react positively to first. You'll know these features as things like automatic emergency braking, lane assist and smart cruise control. All of these are things you'd find in a fully fledged self-driving car, but instead of being some kind of voodoo magic they're essentially just augments to things you're already used to. In fact some of these systems are good enough that cars can effectively drive themselves in certain situations, although it's probably not advised to do what this guy does.

Measuring the time between cultural shifts is tricky, as it can really only be done in retrospect, but I feel the general idea holds: the journey from weird to accepted has been accelerating. Primarily this is a reflection of the accelerating pace of innovation, where technological leaps that took decades now take place in mere years. Thus we're far more accepting of change happening at such a rapid pace, and it doesn't take long for a feature that was once considered borderline to quickly seem passé. This is also a byproduct of how the majority of information is consumed now, with novelty and immediacy held above most other attributes. When this is all combined we become primed to accept changes at a greater rate, which produces a positive feedback loop that drives technology and innovation faster.

What this means, for me at least, is that the information-driven future we're currently hurtling towards might look scary on the surface, however it will likely be far less worrisome when it finally arrives. There are still good conversations to be had around privacy and how corporations and governments handle our data, but past that, the innovations that result are likely to be accepted much faster than anyone currently predicts. That is, if they adhere to the core tenet of providing value and convenience for the end user; should a product neglect that, it will quickly find itself in the realm of obsolescence.

DirectX 12, Vulkan Could Bring GPU Teaming Between Brands.

It's been a while between drinks for DirectX, with the latest release, 11, coming out some 6 years ago. This can be partly attributed to the consolization of PC games putting a damper on the demand for new features, however Vista having exclusivity on DirectX 10 was the biggest factor, ensuring that the vast majority of gamers simply didn't have access to it. Now that the majority of the gaming crowd has caught up and DirectX 11 titles abound, demand for a new graphics pipeline that can make the most of new hardware has started to ramp up, and Microsoft looks ready to deliver on that with DirectX 12. Hot on its heels, however, is Vulkan, the Khronos Group's successor to OpenGL, which grew out of AMD's Mantle API and is shaping up to be a solid competitor.

Underpinning both of these new technologies is a desire for the API to get out of the way of game developers by getting them as close to the hardware as possible. Indeed, if you look at the marketing blurb for either DirectX 12 or Vulkan it's clear that they want to position their new technology as lightweight, giving developers access to more of the graphical power than they would have had previously. The synthetic benchmarks making the rounds seem to confirm this, showing a lot less time spent sending jobs to the GPUs and thus eking out more performance from the same piece of hardware. However the one feature that's really intrigued me, and pretty much everyone else, is the possibility of these new APIs allowing SLI- or CrossFire-like functionality to work across different GPUs, even different brands.

The technology to do this is called Split Frame Rendering (SFR), an alternative way of combining graphics cards. The traditional way of doing SLI/CrossFire is called Alternate Frame Rendering (AFR), which sends odd frames to one card and even frames to the other. This is what necessitates the cards being identical and is the reason why you don't get a 100% performance boost from using 2 cards. SFR, on the other hand, makes both of the GPUs work in tandem, breaking up a scene into 2 halves and sending one half to each of the graphics cards. Such technology is already available for games that make use of the Mantle API and gamers who have AMD cards, with titles like Civilization: Beyond Earth supporting SFR.
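
For a rough feel for the difference, here's a toy sketch of the two dispatch schemes; the GPU labels and the even 50/50 split are placeholders of mine, and a real implementation lives in the driver (AFR) or, under DirectX 12/Vulkan, in the game or engine itself (SFR), usually weighting the split by each card's speed:

```python
# Toy illustration of the two multi-GPU schemes described above. The GPU
# labels and the even split are placeholders; a real SFR setup would weight
# each slice by how fast the card actually is.

def alternate_frame_rendering(frame_ids, gpus):
    """AFR: whole frames alternate between cards, so the cards need to
    behave identically and each frame is still limited to one card's speed."""
    return {frame: gpus[i % len(gpus)] for i, frame in enumerate(frame_ids)}

def split_frame_rendering(frame_height, gpus):
    """SFR: every card works on a slice of the *same* frame at once,
    which is what opens the door to mismatched (even mixed-brand) GPUs."""
    slice_height = frame_height // len(gpus)
    return {gpu: (i * slice_height, (i + 1) * slice_height)
            for i, gpu in enumerate(gpus)}

print(alternate_frame_rendering(range(4), ["GPU A", "GPU B"]))
# {0: 'GPU A', 1: 'GPU B', 2: 'GPU A', 3: 'GPU B'}

print(split_frame_rendering(1080, ["GeForce card", "Radeon card"]))
# {'GeForce card': (0, 540), 'Radeon card': (540, 1080)}
```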

For Vulkan and DirectX 12 this technology could be used to send partial frames to 2 distinct types of GPUs, negating the need for special drivers or bridges in order to divvy up frames between them. Of course this then puts the onus on the game developer (or the engine that's built on top of these APIs) to build in support for this, rather than it sitting with the GPU vendor to develop a solution. I don't think it will be long before we see the leading game engines support SFR natively, and so you'd likely see numerous titles able to take advantage of this technology without major updates required. This is all still speculative at this point, however, and we may end up with restrictions around SFR similar to those we currently have for AFR.

There are dozens more features set to come out with this new set of APIs and, whilst we won't see the results of them for some time to come, the possibilities they open up are quite exciting. I can definitely recall the marked jump in graphical fidelity between DirectX 10 and 11 titles, so hopefully 12 does the same thing when it graces our PCs. I'm interested to see how Vulkan goes since, having grown out of the Mantle API, which showed some very significant performance gains for the AMD cards that used it, there's every chance it'll be able to deliver on the promises it's making. It really harks back to the old days, when wars between supporters of OpenGL and DirectX were as fervent as those between vi and emacs users.

We all know that vi and DirectX are the superior platform, of course.

Talk About Sticker Shock: The $17,000 Watch.

It seems that even the announcement of the Apple Watch couldn't kill the rumour mill around it, as there's been rampant speculation about just what this device will be, what it will cost and what it will mean for tech consumers worldwide. I guess I shouldn't be surprised, any potential Apple product receives this treatment, but it still shocks me just how invested people are in potential rather than actual products. Yesterday Apple announced pricing for the Apple Watch range: it starts at the expected US$349 and rockets up to the absolutely crazy US$17,000. Needless to say those premium editions are far more premium than most people were expecting and it makes one question what the motives behind those devices are.

For starters, smartwatches are still in their nascent stages, with numerous companies still vying to find that killer design, app or whatever it is that catapults them to the top of the pile. For me it's still about aesthetics, something which the Apple Watch certainly doesn't have, and the only one that's managed to come close to winning in that regard (in my mind) is the Huawei Watch, and I'm even skeptical of that given how the Moto 360 turned out. For others though it's going to be about the features, something which the current Apple Watch seems to satisfy; however, as time goes on those $17,000 Watches are going to become decidedly dated, and this raises the question of Apple's strategy with these premium devices.

There's no doubt that there's a healthy dose of margin on the higher end devices, especially considering that their innards are identical to those of the models that cost a fraction of the premium editions' price. So potentially these higher end Watches are being used to subsidise the lower end, although honestly I can't remember a time when Apple has done this with another consumer product; a hefty premium on all hardware (and losses taken elsewhere) is their modus operandi. Whilst I can see the lower end models fitting well into Apple's yearly product cycle I can't say the same for these high end models, although I'll be the first to admit that someone paying that much for an Apple Watch obviously has a different sense of value to me.

The argument has been made that these luxury versions of the Apple Watch won't be bought for the functionality, which I agree with to a point; however there are far, far better purchases that can be made to serve the same purpose for a similar price. The differentiator between those products and the one Apple is peddling is the functionality, and it's highly unlikely that someone who wants a fashion accessory would pick a $17K Apple Watch over an equivalent Rolex or Patek. In that regard the functionality does matter, and these watches are going to be rapidly outpaced by their cheaper brethren just a year down the line. Apple could of course offer an upgrade service, although nothing of that nature has been forthcoming and they're not exactly a company that prides itself on upgradeable products.

Regardless of what I think though, it will be the market that decides how popular these things will be and whether or not Apple can break into the realm of high fashion with their luxury Watches. My personal opinion is they won't, given that whilst functionality might not be important in a luxury watch, it's Apple's only differentiator at this point. However, I was also highly critical of the iPad, so I'm not the greatest judge of what makes a product successful; maybe an Apple Watch with a gold case will be enough to sell people on the idea, even if the resulting watch will be replaced by a sleeker brother only 12 months later.

Wave Power Generators Now Powering Australia’s Grid.

Wave energy always seemed like one of those technologies that sounded cool but was perpetually 10 years away from a practical implementation. I think the massive rise in solar over the past decade or so is partly to blame for this as, whilst it has its disadvantages, it's readily available and at prices that make even the smallest installations worthwhile. However it seems that whilst the world may have turned its eyes elsewhere, an Australian company, Carnegie Wave Energy, has been busy working away in the background on their CETO technology, which can provide a peak power output of some 240kW. In fact they've just installed their first system here in Australia and connected it to the grid to provide power to Western Australia.

The way these pods work is quite fascinating as much of the technology they use has been adapted from offshore oil rigs and drilling platforms. The buoy sits a couple of metres under the surface and is anchored to the sea bed via a flexible tether. As waves move past, the buoy pulls on the tether, driving an attached pump that creates high-pressure seawater. This is then fed up through a pipe to an onshore facility where it can be used to drive a turbine or a desalination plant. These CETO pods also have some other cool technology in them to cope with rough sea conditions, allowing them to shed energy so that the pumps aren't overdriven and undue stress isn't put on the tether.

What's really impressive, however, are the power generation figures they're quoting for the current systems. The CETO 5 pod that they've been running for some 2000 hours has a peak generation capacity of about 240kW, which is incredibly impressive, especially when you consider what comparable renewable energy sources require to deliver that. Their next implementation is looking to quadruple that, putting the CETO 6 pod in the 1MW range. Considering that this is a prototype slated to cost about $32 million in total, that's not too far off what other renewables would cost to get to that capacity, so it's definitely an avenue worth investigating.
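
A quick back-of-envelope on those quoted figures (the numbers are the ones above; I've deliberately left out a comparison against other renewables, as installed costs vary wildly by technology and site):

```python
# Back-of-envelope using only the figures quoted above; prototype projects
# carry one-off R&D overhead, so a production figure would presumably be lower.
ceto6_capacity_kw = 1_000         # the ~1MW target quoted for the CETO 6 pod
ceto6_project_cost = 32_000_000   # the ~$32 million quoted for the prototype

cost_per_kw = ceto6_project_cost / ceto6_capacity_kw
print(f"Installed cost of the prototype: ~${cost_per_kw:,.0f} per kW")
```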

I'm very interested to see where Carnegie Wave Energy takes this idea as it looks like there's a lot of potential in the technology they're developing. With offshore wind always meeting resistance from NIMBYs and those who think turbines ruin the view, something like this has a lot of potential to work in places where the other alternatives aren't tenable. That, coupled with the fact that the pods can be run as either power generation units or desalination plants, means the technology has a very large potential market. Of course the final factor that will make or break it is the total installed cost per kW, however the numbers are already looking pretty good in that regard, so I'm sure we'll be seeing more of these CETOs soon.

Valve Pairs With HTC for Their VR Headset.

It's strange to think that just over 2 years ago the idea of VR headsets was still something of a gimmick that was unlikely to take off. Then came the Oculus Rift Kickstarter, which managed to grab almost 10 times the funds it asked for and revived an industry that really hadn't seen much action since the late 90s. Whilst consumer-level units are still a ways off, it's shaping up to be an industry with robust competition, with numerous competitors vying for the top spot. The latest of these comes to us via HTC, who've partnered with Valve to deliver their Steam VR platform.

Valve partnering with another company for the hardware isn't surprising, as they let go of a number of personnel in their hardware section not too long ago, although their choice of partner is quite interesting. Most of the other consumer electronics giants have already made a play in the VR game: Samsung with Gear VR, Sony with Project Morpheus and Google with their (admittedly limited) Cardboard. So whilst I wouldn't say that we've been waiting for HTC to release something, it's definitely not unexpected that they'd eventually make a play for this space. The fact that they've managed to partner with Valve, who already have major buy-in with nearly all PC gamers thanks to Steam, is definitely a win for them, and judging by the hardware it seems like Valve is pretty happy with the partnership too.

The HTC/Valve VR headset has been dubbed the Re Vive and looks pretty similar to the prototypes of the Oculus DK2. The specs are pretty interesting, with it sporting two 1200 x 1080 screens capable of a 90Hz refresh rate, well above what your standard computer monitor can manage. The front is also littered with numerous sensors, including your standard gyroscopes and accelerometers plus a laser position tracker, which all combine to provide head tracking to 1/10th of a degree. There are also additional Steam VR base stations which can provide full body tracking as well, allowing you to get up and move around in your environment.
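
To put the "well above a standard monitor" point into numbers, here's a quick comparison using the specs above (the 1080p/60Hz desktop monitor is my own baseline for "standard", not something from the announcement):

```python
# Rough pixel-throughput comparison based on the specs quoted above.
# The 1080p/60Hz desktop monitor is my own baseline, not from the announcement.
vive_pixels_per_sec = 2 * 1200 * 1080 * 90   # two panels refreshing at 90Hz
monitor_pixels_per_sec = 1920 * 1080 * 60    # a typical desktop display

print(f"Re Vive:      {vive_pixels_per_sec / 1e6:.0f} megapixels per second")
print(f"1080p @ 60Hz: {monitor_pixels_per_sec / 1e6:.0f} megapixels per second")
print(f"Ratio:        {vive_pixels_per_sec / monitor_pixels_per_sec:.1f}x")
```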

There have also been rumblings of additional "controllers" that come with the headset, although I've been unable to find any pictures of them or details on how they work. Supposedly they track your hand motions so you can interact with objects within the environment. Taking a wild guess here, I think they might be based on something like the MYO, as other solutions limit you to small spaces in order to do hand tracking properly, whilst the MYO seems to fit more in line with the Re Vive's idea of full movement tracking within a larger environment. I'll be interested to see what their actual solution is, as it has the potential to set Valve and HTC apart from everyone else who's yet to come up with one.

Suffice to say this piece of HTC kit has seen quite a bit of development work thrown into it, more than I think anyone had expected when this announcement was first made. It’ll be hard to judge the platform before anyone can get their hands on it as with all things VR you really don’t know what you’re getting yourself into until you give it a go. The pressure really is now on to be the first to market a consumer level solution that works seamlessly with games that support VR as all these prototypes and dev kits are great but we’re still lacking that one implementation that really sells the idea. HTC and Valve are well positioned to do that but so is nearly everyone else.

The Shambles That is The Liberal NBN.

It's no secret that I'm loudly, violently opposed to the Liberals' Multi-Technology Mix (MTM) NBN solution and I've made it my business to ensure that the wider Australian public is aware of how frightfully bad it will be. The reasons why the Liberals' solution is so bad are many, however they can almost all be traced back to the party wanting to cast anything Labor created in a poor light and to paint their own ideas as far better. Those of us in the know have remained unconvinced however, tearing into every talking point and line of rhetoric to expose the Liberals' NBN for the farce it is. Now, as the Liberals attempt to roll out their inferior solution, they are no longer able to hide behind bullshit reports, as the real world numbers paint an awfully bad picture for their supposedly better NBN.

The MTM NBN's slogan of "Fast. Affordable. Sooner." has become an easy target as the months have rolled on since the Liberal Party announced their strategy. Whilst the first point can always be debated (since 25Mbps should be "more than enough" according to Abbott), the latter two can be directly tied to real world metrics that we're now privy to. You see, with the release of the MTM NBN strategy all works that were planned, but not yet executed, were put on hold whilst a couple of FTTN trial sites were scheduled to be established. The thinking was that FTTN could be deployed much faster than a FTTP solution and, so the slogan went, much cheaper too. Well, here we are a year and a half later and it's not looking good for the Liberals and, unfortunately, by extension, us Australians.

It hasn't been much of a secret that the FTTN trials NBNCo have been conducting haven't exactly been stellar, with significant delays in getting them set up. Considering that the Liberals gave themselves a 2016 deadline for giving everyone 25Mbps+ speeds, these delays didn't bode well for getting the solution out in time. Those delays appear to have continued, with just 53 customers now connected to the original Umina trial and not a single one connected to the Epping trial. This is after they gave a timeline of "within a month" in October last year. Suffice to say, the idea that FTTN could be made available to the wider public by the end of 2016 is starting to look really shaky, and so is the 2019 timeframe for completion of the NBN.

Worse still, the idea that the MTM NBN would be significantly cheaper than the full FTTP NBN is yet again failing to stand up to scrutiny. Additional cost analysis conducted by NBNCo, which includes opex costs that were excluded under previous costing models, has seen the cost-per-premises estimate for brownfields (deployments to existing houses) rise to $4,316. That's a substantial increase, however it's a more accurate representation of how much it actually costs to get a single house connected. Taking that into account, the total cost for deploying the FTTP NBN comes out to about $47 billion, very close to the original budget that Labor had allocated for it. Whilst it was obvious that the Liberals' cost-benefit analysis was a crock of shit from the beginning, this just further proves the point and casts more doubt over the MTM NBN being significantly cheaper.
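
For what it's worth, the $47 billion figure falls out of the per-premises number once you multiply by a rough premises count; the ~11 million brownfields premises below is my own assumption for illustration, not a figure from NBNCo's analysis:

```python
# Reconstructing the ~$47 billion figure from the per-premises estimate above.
# The premises count is my own assumption for illustration (roughly the
# brownfields footprint usually quoted for the NBN), not a figure from NBNCo.
cost_per_premises = 4_316
assumed_brownfields_premises = 11_000_000

total_cost = cost_per_premises * assumed_brownfields_premises
print(f"~${total_cost / 1e9:.1f} billion")  # lands in the region of $47 billion
```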

I'm honestly not surprised by this anymore as it's clear that the Liberals really had no intent of adhering to their rhetoric and were simply trashing the FTTP NBN because it was Labor's idea. It's an incredibly short-sighted way of looking at it, honestly, as they would have won far more favour with a lot of people if they had just continued with the FTTP NBN as it was. Instead they're going to waste years and multiple billions of dollars on a system that won't deliver on its promises, and we'll be left to deal with the mess. All we can really hope for at this point is that we make political history and ensure the Liberals' reign is a single term, under the OneTermTony banner.