Back when I first saw the Moto 360 I was pretty geared up to grab one as it was the first smartwatch with a design that actually appealed to me. However the reviews for it were less than stellar, many of them citing poor battery life and lacklustre performance thanks to its incredibly outdated processor. That was enough to sour me on the idea: whilst the design was still nice I didn’t want to burden myself with another device that I’d have to charge daily. With the Apple Watch failing to tickle my fancy I resigned myself to waiting for the next round of devices to see if anything came through. As it happens there’s now one potential smartwatch I have my eyes on, but I’m hesitant to get excited lest I get let down again.
The Huawei Watch bears a similar aesthetic to the Moto 360, with a round face and a single button. The included Milanese strap is a nice addition, especially considering that Apple would charge you an extra $600 for the privilege. However should that style not suit you, you’re free to swap it for any standard 18mm or 21mm band that takes your fancy. It’s available in the usual array of colours (silver, black and gold), all of which share the same construction, although the gold appears to come with a leather band rather than the Milanese one.
Specifications-wise it’s a definite step up from most of the competition, sporting a quad-core Qualcomm chip and a 400 x 400 AMOLED screen that covers the entire dial (unlike the Moto 360, which has a black bar at the bottom), all protected by sapphire crystal. These differences might not sound like much but the newer processor should fare a lot better in low power modes and the AMOLED screen handles being dimmed far better than the 360’s IPS panel does. So whilst the Huawei Watch might have a slightly smaller battery it should, hopefully, last significantly longer, which was the main complaint against the 360.
However I still have concerns about just how useful such a device will be for me: whilst the array of sensors included in the device is impressive, it still falls somewhat short of my idealized smartwatch. Sure, the list of features I laid out a while back might be a little extreme (indeed I no longer think including MYO technology is required, given that Google Glass isn’t as great as I first thought it’d be) but I’d want something like this to be functional and useful. Perhaps I’m being too harsh a critic of the idea before I’ve tried it, as there’s every chance I’ll find a myriad of uses for it once I have it, but I’ve used enough random bits of tech in the past to know that not all of them work out how everyone says they should.
Regardless it’s good to see more companies coming out with smartwatch designs that don’t look like cheap plastic pieces of junk. Whilst I’ll always question the value proposition of Rolex-level smartwatches I can definitely see the value in having a piece of technology on your wrist. Whether the current generation of devices will be enough to satisfy me is something I’ll have to find out, and the Huawei Watch might be the first one to make me shell out the requisite cash.
Windows 10 is fast shaping up to be one of the greatest Windows releases, with numerous consumer-facing changes and behind-the-scenes improvements. Whilst Microsoft has been struggling somewhat to deliver on the rapid pace they promised with the Windows Insider program there has been some progress as of late, and a couple of new features have made their way into a leaked build. Technology-wise they might not be revolutionary ideas, indeed a couple of them are simply reapplications of tech Microsoft has had for years now, but the improvements they bring speak to Microsoft’s larger strategy of trying to reinvent itself. That might sound awfully familiar to those with intimate knowledge of Windows 8 (Windows Blue, anyone?) so it’s interesting to see how this will play out.
First cab off the rank in Windows 10’s new feature set is a greatly reduced footprint, something Windows has copped a lot of flak for in the past. This might not sound like a big deal on the surface, drives are always getting bigger these days, however the explosion of tablets and portable devices has brought renewed focus on Windows’ rather large install size on these space-constrained devices. A typical Windows 8.1 install can easily consume 20GB which, on devices that have only 64GB of space, doesn’t leave a lot for a user’s files. Windows 10 brings a couple of improvements that free up a good chunk of that space and bring with them a couple of cool features.
Windows 10 can now compress system files, saving approximately 2GB on a typical install. The feature isn’t on by default; instead, during the Windows install, the system is assessed to make sure that compression can happen without impacting the user experience. Whether current generation tablet devices will meet the minimum requirements for this is something I’m a little skeptical about, so it will be interesting to see how often this feature actually gets turned on.
Additionally Windows 10 does away with the recovery partition on the system drive, which is where most of the size savings come from. Instead of reserving part of the disk to hold a full copy of the Windows 10 install image, which was used for the refresh and repair features, Windows 10 can now rebuild itself in place. This comes with the added advantage of keeping all your installed updates, so refreshed PCs don’t need to go through the hassle of downloading them all again. However in the event that you do have to do that they’ve included another great piece of technology that should make updating a new PC in your home a little easier.
Windows 10 will include the option of downloading PC updates via a P2P system, which you can configure to download updates only from your local network or also from PCs on the Internet. It’s essentially an extension of the BranchCache technology that’s been a part of Windows for a while now, but it makes that capability far more accessible, allowing home users to take advantage of it. If you’re running a Windows home (like I am) this will make downloading updates far less painful and, for those of us who format regularly, help greatly when we need to pull down a bunch of Windows updates again. The Internet-enabled feature is mostly for Microsoft’s benefit as it’ll take some load off their servers, but it should also help users in regions that don’t have great backhaul to the Windows Update servers.
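The appeal of this kind of peer sourcing is easy to model. Below is a toy Python sketch (the function name and chunking scheme are my own illustration, not the actual Windows protocol) showing why, on a LAN of several PCs, only the first copy of each update chunk needs to cross the internet link:

```python
# Toy model of peer-to-peer update distribution on a home LAN.
# Illustrative only; not the real Windows update protocol.

def download_updates(pcs, chunks):
    """Return (wan_fetches, lan_fetches) for `pcs` machines that
    each need every chunk in `chunks`."""
    lan_cache = set()          # chunks some local peer already holds
    wan = lan = 0
    for _ in range(pcs):
        for chunk in chunks:
            if chunk in lan_cache:
                lan += 1       # served by a peer on the local network
            else:
                wan += 1       # fetched from the update servers
                lan_cache.add(chunk)
    return wan, lan

# Four PCs, an update split into 100 chunks: only the first copy of
# each chunk crosses the internet link.
wan, lan = download_updates(4, range(100))
print(wan, lan)  # 100 WAN fetches instead of 400
```

The same logic explains the Internet-wide mode: the more peers that already hold a chunk, the fewer requests ever reach Microsoft’s servers.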
If Microsoft continues to release features like this for Windows 10 then it definitely has a bright future ahead of it. Things like this might not be the sexiest to talk about but they address real concerns that have plagued Windows for years. In the end they all amount to one thing: a better experience for the consumer, something Microsoft has fervently increased its focus on as of late. Whether they’ll amount to a panacea for the ills of Windows 8 remains to be seen, but suffice to say I’m confident it’ll line up well.
New technology always seems to border on the edge of being weird or creepy. Back in the 1970s and 80s it was weird to be into games, locking yourself away for hours at a time in a darkened room staring at a glowing screen. Then the children (and adults) of that time grew up and suddenly spending your leisure time doing something other than watching TV or reading a book became an acceptable activity. The same trend has occurred more recently with the advent of social networks and smartphones, with people now divulging information onto public forums at a rate that would’ve made the 1990s versions of them blush. What I’ve come to notice is that the time period between something being weird or creepy and it becoming acceptable is getting smaller, and the rate at which it’s shrinking is accelerating.
The smartphone you now carry with you everywhere is a constant source of things that were once considered on the borderline of acceptable but are now part of your life. Features like Google Now and Siri have their digital fingers through all your data, combing it for various bits of useful information to whip up into their slick interfaces. When these features first came out everyone was apprehensive about them; the fact that Google could pick up on travel itineraries and then display your flight times was downright spooky for some. Yet here we are a year or so later and features like that aren’t so weird anymore, hell, they’re even expected.
The factor that appears to melt down barriers for us consumers is convenience. If a feature or product borders on the edge of being creepy but provides us with a level of convenience we couldn’t otherwise have, we seem to have a very easy time accommodating it. Take for instance Disney’s new MyMagic band, which you program with your itinerary, preferences and food choices before you arrive at one of their amusement parks. Sure it might be a little weird to walk into a restaurant without having to order or pay, or to walk up to rides and bypass the queue, but you probably won’t be thinking about how weird it is when you’re in the thick of it. Indeed things like MyMagic break down barriers that would otherwise impact the experience and thus they work themselves easily into what we deem acceptable.
The same can be said for self-driving cars. Whilst techno junkies like myself can’t wait for the day when taking the wheel is optional, the wider public is far more wary of what the implications of self-driving cars will be. This is why many companies have decided not to release a fully fledged vehicle first, instead opting to slowly incorporate pieces of the technology into their cars to see which features customers react positively to. You’ll know these features as things like automatic emergency braking, lane assist and smart cruise control. All of these are things you’d find in a fully fledged self-driving car, but instead of being some kind of voodoo magic they’re essentially just augments to things you’re already used to. In fact some of these systems are good enough that cars can drive themselves in certain situations, although it’s probably not advisable to do what this guy does.
Measuring the time between cultural shifts is tricky, it can really only be done in retrospect, but my general feeling is that the journey from weird to accepted has been accelerating. Primarily this is a reflection of the accelerating pace of innovation, where technological leaps that once took decades now take place in mere years. Thus we’re far more accepting of change happening at such a rapid pace, and it doesn’t take long for a feature that was once considered borderline to seem passé. This is also a byproduct of how the majority of information is consumed now, with novelty and immediacy held above most other attributes. Combined, these factors prime us to accept changes at a greater rate, producing a positive feedback loop that drives technology and innovation ever faster.
What this means, for me at least, is that the information-driven future we’re currently hurtling towards might look scary on the surface but will likely be far less worrisome when it finally arrives. There are still good conversations to be had around privacy and how corporations and governments handle our data, but past that, the innovations that follow are likely to be accepted much faster than anyone currently predicts. That is, if they adhere to the core tenet of providing value and convenience for the end user; should a product neglect that, it will fast find itself in the realm of obsolescence.
It’s been a while between drinks for DirectX, with the latest release, 11, coming out some 6 years ago. This can be partly attributed to the consolization of PC games putting a damper on demand for new features, however Windows Vista’s exclusivity on DirectX 10 was the biggest factor, ensuring that the vast majority of gamers simply didn’t have access to it. Now that the majority of the gaming crowd has caught up and DirectX 11 titles abound, demand for a new graphics pipeline that can make the most of new hardware has started to ramp up, and Microsoft looks ready to deliver on that with DirectX 12. Hot on its heels however is Vulkan, the new OpenGL successor that grew out of AMD’s Mantle API, which is shaping up to be a solid competitor.
Underpinning both of these new technologies is a desire to get the API out of the way of game developers, bringing them as close to the hardware as possible. Indeed if you look at the marketing blurb for either DirectX 12 or Vulkan it’s clear they want to position their new technology as lightweight, giving developers access to more of the graphical power than they would have had previously. The synthetic benchmarks making the rounds seem to confirm this, showing a lot less time spent submitting jobs to the GPU and thus eking out more performance from the same piece of hardware. However the one feature that’s really intrigued me, and pretty much everyone else, is the possibility of these new APIs allowing SLI or CrossFire-like functionality to work across different GPUs, even different brands.
The technology to do this is called Split Frame Rendering (SFR), an alternative way of combining graphics cards. The traditional way of doing SLI/CrossFire is Alternate Frame Rendering (AFR), which sends odd frames to one card and even frames to the other. This is what necessitates the cards being identical and is the reason you don’t get a 100% performance boost from using 2 cards. SFR on the other hand makes both GPUs work in tandem, breaking a scene up into 2 halves and sending one half to each of the graphics cards. Such technology is already available to gamers with AMD cards in games that make use of the Mantle API, with titles like Civilization: Beyond Earth supporting SFR.
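The difference between the two schemes is simple enough to sketch. The toy Python snippet below (labels only; nothing here touches a real graphics API, and real implementations live in the driver for AFR or, under DirectX 12/Vulkan, in the engine itself for SFR) contrasts AFR’s whole-frame alternation with SFR’s per-frame split:

```python
# Illustrative sketch of AFR vs SFR work assignment across two GPUs.
# "GPU" here is just an integer label, not a device handle.

def afr_schedule(frame_numbers):
    """Alternate Frame Rendering: whole frames alternate between
    the two GPUs, frame -> GPU index."""
    return {f: f % 2 for f in frame_numbers}

def sfr_schedule(frame_height):
    """Split Frame Rendering: each frame is split, here naively into
    a top and bottom half, with one half sent to each GPU."""
    top = (0, frame_height // 2)                # scanlines for GPU 0
    bottom = (frame_height // 2, frame_height)  # scanlines for GPU 1
    return {0: top, 1: bottom}

print(afr_schedule(range(4)))  # {0: 0, 1: 1, 2: 0, 3: 1}
print(sfr_schedule(1080))      # {0: (0, 540), 1: (540, 1080)}
```

Because SFR hands each GPU a portion of the same frame rather than a whole frame, the two halves don’t have to come back in lockstep from identical hardware, which is what opens the door to mixing dissimilar cards.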
For Vulkan and DirectX 12 this technology could be used to send partial frames to 2 distinct types of GPUs, negating the need for special drivers or bridges to divvy up frames between them. Of course this puts the onus on the game developer (or the engine built on top of these APIs) to build in support for this, rather than it sitting with the GPU vendor to develop a solution. I don’t think it will be long before the leading game engines support SFR natively, at which point numerous titles would be able to take advantage of this technology without major updates. This is still speculative at this point however, and we may end up with similar restrictions around SFR to those we currently have for AFR.
There are dozens more features set to come with these new APIs and whilst we won’t see the results of them for some time the possibilities they open up are quite exciting. I can definitely recall the marked jump in graphical fidelity between DirectX 10 and 11 titles, so hopefully 12 does the same thing when it graces our PCs. I’m interested to see how Vulkan goes: since it grew out of the Mantle API, which showed some very significant performance gains for AMD cards that used it, there’s every chance it’ll be able to deliver on the promises it’s making. It really harks back to the old days, when wars between supporters of OpenGL and DirectX were as fervent as those between vi and emacs users.
We all know that vi and DirectX are the superior platforms, of course.
It seems that even the announcement of the Apple Watch couldn’t kill the rumour mill around it, with rampant speculation about just what this device will be, what it will cost and what it will mean for tech consumers worldwide. I guess I shouldn’t be surprised, any potential Apple product receives this treatment, but it still shocks me just how invested people are in potential rather than actual products. Yesterday Apple announced the pricing for their range of Watches: they start at the expected US$349 and rocket up to the absolutely crazy price of US$17,000. Needless to say those premium editions are far more premium than most people were expecting, and it makes one question the motives behind those devices.
For starters smartwatches are still in their nascent stages, with numerous companies still vying to find that killer design, app or whatever it is that catapults them to the top of the pile. For me it’s still about aesthetics, something the Apple Watch certainly doesn’t have, and the only one that’s managed to come close to winning in that regard (in my mind) is the Huawei Watch, and I’m even skeptical of that given how the Moto 360 turned out. For others though it’s going to be about the features, something the current Watch seems to satisfy, however as time goes on those $17,000 Watches are going to become decidedly dated, and this raises the question of Apple’s strategy with these premium devices.
There’s no doubt that there’s a healthy dose of margin on the higher end devices, especially considering that the innards of those devices are identical to the ones that cost a fraction of the premium models. So potentially these higher end Watches are being used to subsidise the lower end, although honestly I can’t remember a time when Apple has done this with another consumer product; a hefty premium on all hardware (and losses elsewhere) is their modus operandi. Whilst I can see the lower end models fitting well into Apple’s yearly product cycle I can’t say the same for these high end models, although I’ll be the first to admit that someone paying that much for an Apple Watch obviously has a different sense of value to me.
The argument has been made that these luxury versions of the Watch won’t be bought for the functionality, which I agree with to a point, however there are far, far better purchases that can be made to serve the same purpose at a similar price. The differentiator between those products and the one Apple is peddling is the functionality, and it’s highly unlikely that someone who wants a pure fashion accessory would pick a $17K Watch over an equivalent Rolex or Patek. In that regard the functionality does matter, and these watches are going to be rapidly outpaced by their cheaper brethren just a year down the line. Apple could of course offer an upgrade service, although nothing of that nature has been forthcoming and they’re not exactly a company that prides itself on upgradeable products.
Regardless of what I think though, it will be the market that decides how popular these things will be and whether or not Apple can break into the realm of high fashion with their luxury Watches. My personal opinion is they won’t, given that whilst functionality might not be important in a luxury watch it’s Apple’s only differentiator at this point. However I was also highly critical of the iPad, so I’m not the greatest judge of what makes a product successful; maybe an Apple Watch with a gold case will be enough to sell people on the idea, even if the resulting watch will be replaced by a sleeker brother only 12 months later.
Wave energy always seemed like one of those technologies that sounded cool but was perpetually 10 years away from a practical implementation. I think the massive rise in solar over the past decade or so is partly to blame for this, as whilst solar has its disadvantages it’s readily available and at prices that make even the smallest installations worthwhile. However whilst the world may have turned its eyes elsewhere an Australian company, Carnegie Wave Energy, has been busy working away in the background on their CETO technology, which can provide a peak power output of some 240kW. In fact they’ve just installed their first system here in Australia and connected it to the grid to provide power to Western Australia.
The way these pods work is quite fascinating, as much of the technology they use has been adapted from offshore oil rigs and drilling platforms. The buoy sits a couple of metres under the surface and is anchored to the sea bed via a flexible tether. As waves move past, the buoy pulls on the tether, driving an attached pump that creates high pressure sea water. This is then fed up through a pipe to an onshore facility where it can be used to drive a turbine or a desalination plant. The CETO pods also have some other cool technology in them to cope with rough sea conditions, allowing them to shed energy so that the pumps aren’t overdriven and undue stress isn’t put on the tether.
What’s really impressive however are the power generation figures they’re quoting for the current systems. The current CETO 5 pod, which has been running for some 2,000 hours, has a peak generation capacity of about 240kW, which is incredibly impressive, especially when you consider what comparable renewable energy sources require to deliver that. Their next implementation is looking to quadruple that, putting the CETO 6 pod in the 1MW range. Considering that this is a prototype slated to cost about $32 million in total, that’s not too far off what other renewables would cost to get to that capacity, so it’s definitely an avenue worth investigating.
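As a rough back-of-envelope check on those figures, using only the numbers quoted above (and keeping in mind that one-off prototype costs like this usually fall considerably once production scales up):

```python
# Rough cost-per-kilowatt figure implied by the quoted CETO 6 numbers.
ceto6_cost_aud = 32_000_000   # slated total prototype project cost
ceto6_capacity_kw = 1_000     # ~1MW target capacity

cost_per_kw = ceto6_cost_aud / ceto6_capacity_kw
print(f"${cost_per_kw:,.0f} per kW of capacity")  # $32,000 per kW
```

That’s well above what mature renewables achieve per kW installed, but as a first grid-connected prototype the comparison that matters is against where the cost curve goes from here.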
I’m very interested to see where Carnegie Wave Energy takes this idea as there looks to be a lot of potential in the technology they’re developing. With offshore wind always meeting resistance from NIMBYs and those who think the turbines ruin the view, something like this has a lot of potential to work in places where the other alternatives aren’t tenable. That, coupled with the fact that they can be run as either power generation units or desalination plants, means the technology has a very large potential market. Of course the final factor that will make or break the technology is the total installed cost per kW, however the numbers are already looking pretty good in that regard, so I’m sure we’ll be seeing more of these CETOs soon.
It’s strange to think that just over 2 years ago the idea of VR headsets was still something of a gimmick that was unlikely to take off. Then came the Oculus Rift Kickstarter, which managed to grab almost 10 times the funds it asked for and revamped an industry that really hadn’t seen much action since the late 90s. Whilst consumer level units are still a ways off it’s shaping up to be an industry with robust competition, with numerous competitors vying for the top spot. The latest of these comes to us via HTC, who’ve partnered with Valve to deliver their Steam VR platform.
Valve partnering with another company for the hardware isn’t surprising, as they let go a number of personnel from their hardware section not too long ago, although their choice of partner is quite interesting. Most of the other consumer electronics giants have already made a play in the VR game: Samsung with Gear VR, Sony with Project Morpheus and Google with their (admittedly limited) Cardboard. So whilst I wouldn’t say we’ve been waiting for HTC to release something, it’s definitely not unexpected that they’d eventually make a play for this space. The fact that they’ve managed to partner with Valve, who already have major buy-in with nearly all PC gamers thanks to Steam, is definitely a win for them, and judging by the hardware it seems Valve is pretty happy with the partnership too.
The HTC/Valve VR headset has been dubbed the Re Vive and looks pretty similar to the prototypes of the Oculus DK2. The specs are pretty interesting, with it sporting two 1200 x 1080 screens capable of a 90Hz refresh rate, well above what your standard computer monitor can manage. The front is also littered with numerous sensors, including the standard gyroscopes and accelerometers plus a laser position tracker, which all combine to provide head tracking to 1/10th of a degree. There are also additional Steam VR base stations which can provide full body tracking as well, allowing you to get up and move around in your environment.
There have also been rumblings of additional “controllers” that come with the headset, although I’ve been unable to find any pictures of them or details on how they work. Supposedly they track your hand motions so you can interact with objects within the environment. Taking a wild guess here, I think they might be based on something like the MYO, as other solutions limit you to small spaces in order to do hand tracking properly, whilst the MYO seems to fit more in line with the Re Vive’s idea of full movement tracking within a larger environment. I’ll be interested to see what their actual solution is, as it has the potential to set Valve and HTC apart from everyone else who’s yet to come up with one.
Suffice to say this piece of HTC kit has had quite a bit of development work thrown into it, more than I think anyone expected when the announcement was first made. It’ll be hard to judge the platform before anyone can get their hands on it, as with all things VR you really don’t know what you’re getting yourself into until you give it a go. The pressure is now on to be the first to market with a consumer level solution that works seamlessly with games that support VR; all these prototypes and dev kits are great, but we’re still lacking that one implementation that really sells the idea. HTC and Valve are well positioned to do that, but so is nearly everyone else.
It’s no secret that I’m loudly, violently opposed to the Liberals’ Multi-Technology Mix NBN solution and I’ve made it my business to ensure the wider Australian public is aware of how frightfully bad it will be. The reasons why the Liberals’ solution is so bad are many, however they can almost all be traced back to them wanting to cast anything Labor created in a poor light and to paint their own ideas as far better. Those of us in the know have remained unconvinced, tearing into every talking point and line of rhetoric to expose the Liberals’ NBN for the farce it is. Now, as the Liberals attempt to roll out their inferior solution, they are no longer able to hide behind bullshit reports, as the real world numbers paint an awfully bad picture for their supposedly better NBN.
The MTM NBN’s slogan of “Fast. Affordable. Sooner.” has become an easy target in the months since the Liberal Party announced their strategy. Whilst the first point can always be debated (since 25Mbps should be “more than enough” according to Abbott) the latter two can be directly tied to real world metrics that we’re now privy to. You see, with the release of the MTM NBN strategy all works that were planned, but not yet executed, were put on hold whilst a couple of FTTN trial sites were established. The thinking was that FTTN could be deployed much faster than an FTTP solution and, so the slogan went, much cheaper too. Well here we are a year and a half later and it’s not looking good for the Liberals and, unfortunately, by extension, us Australians.
It hasn’t been much of a secret that the FTTN trials NBNCo have been conducting haven’t exactly been stellar, with significant delays in getting them set up. Considering that the Liberals gave themselves a 2016 deadline for giving everyone 25Mbps+ speeds these delays didn’t bode well for getting the solution out in time. Those delays appear to have continued, with just 53 customers connected to the original Umina trial and not a single one connected to the Epping trial. This is after they gave a timeline of “within a month” in October last year. Suffice to say the idea that FTTN could be made available to the wider public by the end of 2016 is starting to look really shaky, and so is the 2019 timeframe for the completion of their NBN.
Worse still, the idea that the MTM NBN would be significantly cheaper than the full FTTP NBN is yet again failing to stand up to scrutiny. Additional cost analysis conducted by NBNCo, which includes opex costs that were excluded under previous costing models, has seen the cost per premises estimate for brownfields deployments (existing houses) rise to $4,316. That’s a substantial increase, however it’s a more accurate representation of how much it actually costs to connect a single house. Taking that into account the total cost for deploying the FTTP NBN comes out to about $47 billion, very close to the original budget that Labor had allocated for it. Whilst it was obvious that the Liberals’ cost-benefit analysis was a crock of shit from the beginning, this further proves the point and casts more doubt over the MTM NBN being significantly cheaper.
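Running those quoted figures backwards gives a quick sanity check. Note this is my own arithmetic, not NBNCo’s costing model; the real total blends brownfields with cheaper greenfields deployments, so the result is only ballpark:

```python
# Implied premises count from the quoted per-premises and total costs.
cost_per_premises_aud = 4316           # revised FTTP brownfields estimate
total_fttp_cost_aud = 47_000_000_000   # quoted total FTTP build cost

implied_premises = total_fttp_cost_aud / cost_per_premises_aud
print(f"~{implied_premises / 1e6:.1f} million premises")  # ~10.9 million
```

An implied footprint on the order of 11 million premises is in the right neighbourhood for a national fibre footprint, which is why the $47 billion figure lands so close to Labor’s original budget.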
I’m honestly not surprised by this anymore as it’s clear the Liberals really had no intent of adhering to their rhetoric and were simply trashing the FTTP NBN because it was Labor’s idea. It’s an incredibly short sighted way of looking at it, honestly, as they would have won far more favour with a lot of people had they just continued with the FTTP NBN as it was. Instead they’re going to waste years and billions of dollars on a system that won’t deliver on its promises, and we’ll be left to deal with the mess. All we can really hope for at this point is that we make political history and cement the Liberals’ reign under the OneTermTony banner.
For as long as we’ve been using semiconductors one material has held the crown: silicon. One of the most abundant elements on Earth, its semiconductor properties made it perfectly suited to mass manufacture, and nearly all of the world’s electronics contain a silicon brain within them. Silicon isn’t the only material capable of performing this function, indeed there’s a whole smorgasbord of other semiconductors used for specific applications, however the amount of research poured into silicon means few of them are as mature. With our manufacturing processes shrinking, though, we’re fast approaching the limit of what silicon, in its current form, is capable of, and that may pave the way for a new contender for the semiconductor crown.
The road to the current 14nm manufacturing process has been a bumpy one, as the heavily delayed release of Intel’s Broadwell can attest. Mostly this was due to the low yields Intel was getting with the process, which is typical for die shrinks, however solving the issue proved more difficult than they had originally thought. This is likely due to the challenges Intel faced in making their FinFET technology work at the smaller scale, having only just introduced it in the previous 22nm generation of CPUs. The process will likely still work down at the 10nm level (as Samsung has just proven today) but beyond that there’s going to need to be a fundamental shift in order for the die shrinks to continue.
For this Intel has alluded to new materials which, keen observers have pointed out, won’t be silicon.
The likely candidate to replace silicon is a material called Indium Gallium Arsenide (InGaAs). It has long been used in photodetectors and high frequency applications like microwave and millimetre wave electronics. Transistors made on this substrate are called High Electron Mobility Transistors which, in simpler terms, means they can be made smaller, switch faster and be packed more densely into the same area. Whilst the foundries might not yet be able to create these kinds of transistors at scale, the fact that they’ve been manufactured in some volume for decades now makes them a far more viable alternative than some of the other, more exotic materials.
There is potential for silicon to hang around for another die shrink or two if Extreme Ultraviolet (EUV) lithography takes off, however that method has been plagued with developmental issues for some time now. The change from UV lithography to EUV isn’t a trivial one, as EUV light can’t be focused with conventional lenses and must be directed with specialised mirrors since most materials simply absorb it. Couple that with the rather large difficulty in generating EUV light in the first place (it’s rather inefficient) and looking at new substrates becomes much more appealing. Still, if TSMC, Intel or Samsung can figure it out then there’d be a bit more headroom for silicon, although maybe not enough to offset the investment cost.
Whatever direction the semiconductor industry takes, one thing is very clear: they all have plans that extend far beyond the short term to ensure we can keep up the rapid pace of technological development we’ve enjoyed for the past half century. I can’t tell you how many times I’ve heard others scream that the next die shrink would be our last, only to see some incredibly innovative solution come out soon after. The transition to InGaAs or EUV shows that we’re prepared for at least the next decade and I’m sure that, before we hit the limits of that tech, we’ll be seeing the next novel innovation to power us forward.
Why the Abbott government hasn’t abandoned its incredibly unpopular metadata policy yet is beyond me. Nearly all other developed nations that have pursued such a policy have abandoned it, mostly because attempting to pass something like this is akin to committing political suicide. Worse still, in its attempts to defend the policy from critics the Abbott government has resorted to scare tactics and sensationalist rhetoric, none of which has any bearing on the underlying issues the policy faces. Top this off with a cost estimate that seems to be based on back-of-the-napkin maths and you’ve got a recipe for bad legislation that will likely be implemented poorly and at great cost to all Australian citizens.
Conceptually the idea is simple: the government wants to mandate that all ISPs and communications providers keep all metadata they generate for a period of 2 years. Initially this was sold as not being an increase in the powers authorities already had, however that framing is incredibly misleading as it greatly increases their ability to exercise those powers. Worse still, obtaining access to metadata doesn’t require a warrant and isn’t just the realm of law enforcement or intelligence agencies, as even people on local councils can obtain this data. Suffice to say the gathering and retention of this data is a massive invasion of the privacy the general public expects from its government, which is exactly why nearly all developed nations have dropped such policies before they were implemented.
As expected the usual tropes for these kinds of policies have been trotted out, initially under the guise of national security. I’d concede that point if it weren’t for the fact that mass surveillance has not proved effective in combating terrorism, something the policy’s critics were quick to point out. The rhetoric has since shifted away from national security to local security, with Abbott saying the metadata will help track down paedophiles and child traffickers. Suffice to say, if surveillance of this nature doesn’t help at a national level then I highly doubt its effectiveness at lower levels, and “think of the children” arguments like this are nothing more than an appeal to emotion.
Yesterday Abbott was pressed to give some hard figures on just how much this scheme would end up costing and he retorted with the rather ineloquent quip that, without it, there would be an “explosion in unsolved crime”. When pressed the figure he gave was $300 million, estimated to be less than 1% of the $40 billion the entire telecommunications sector is said to be worth. That figure has apparently been sourced from PricewaterhouseCoopers (PwC), however the details behind it have not been made public. In all honesty I cannot see how that figure can be accurate given the amount of data we’re talking about and the retention times required.
To put it in perspective Australians consumed something on the order of 1 Exabyte of data in the 6 months to June last year, a 50% increase on the year before. If that growth continues, a 2 year retention window covers roughly 5 Exabytes of traffic and, taking the same 1% liberty that Abbott seems intent on using for the metadata fraction, you get something like 50 Petabytes of storage required. Couple that with the fact that it won’t be stored in one place (negating economies of scale), the infrastructure required to provide access to it and the personnel required to fulfil requests, and that $300 million figure starts to look quite shaky. Indeed the Communications Alliance in Australia has estimated the cost at between $500 million and $700 million, which casts doubt over just how accurate Abbott’s lowball figure is.
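That back-of-the-envelope estimate can be sketched out in a few lines. To be clear, the 50% yearly growth rate and the 1% metadata fraction are my own assumptions for illustration, not anything sourced from the PwC costing:

```python
# Rough estimate of metadata storage under a 2 year retention scheme.
# Assumptions (mine, not official): traffic grows 50% per year and
# metadata amounts to ~1% of the traffic it describes.

half_year_traffic_eb = 1.0   # ~1 EB consumed in the 6 months to June
annual_growth = 0.5          # ~50% year-on-year traffic growth
metadata_fraction = 0.01     # metadata assumed to be ~1% of traffic

# Sum traffic over the 2 year retention window (4 half-year periods),
# compounding the growth once per full year.
total_eb = 0.0
traffic = half_year_traffic_eb
for half_year in range(4):
    total_eb += traffic
    if half_year % 2 == 1:   # a full year has elapsed
        traffic *= 1 + annual_growth

metadata_pb = total_eb * metadata_fraction * 1000  # EB -> PB
print(f"Traffic over retention window: {total_eb:.1f} EB")
print(f"Metadata at 1%: {metadata_pb:.0f} PB")
```

Even with these generous simplifications the result lands around 50 Petabytes, and that’s before duplicating it across every provider required to keep their own copy.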
Honestly this legislation stinks no matter which way you cut it, and the rhetoric the incumbent government has been using to defend it speaks directly to that. Policies like these are simply not effective at what they set out to achieve, and the only tangible results we’ll ever see from them are an increased cost of accessing the Internet and a reduced expectation of privacy. I do hope Abbott keeps harping on about it though, as the more he talks the more likely it seems that we’ll cement the One Term Tony phrase in the history books.