I like gadgets, to the point where I can get a little weird about things if they have just the right technological bent. My geek lust has seen my wallet open itself for all sorts of purchases I wouldn’t typically have made for myself, just because the gadget geek in me fell in love with a piece of engineering or ingenious technology. It’s curbed somewhat by my desire for all things to have a useful function, but that still means my house is littered with various objects which have caught my fancy at one point or another. With that in mind you’d think I’d be something of a prime candidate for a smartwatch, but I just can’t see the point of having one.
I’ll admit I was somewhat impressed by the Pebble when I first saw it, mostly because it used an e-Ink screen rather than a small LCD (which are notorious for being crap). I came in late to the Kickstarter, however, and missed my chance to get one, but I figured it wouldn’t be too long before I could snag one at retail. Of course long delays ensued and many competitors have since released similar products, but strangely enough I found myself looking at all of them and wondering what the use case for them would be. Sure, some of them looked cool (I’m something of a sucker for watches) but I couldn’t see the advantage of getting one over a traditional watch, especially if looks were the deciding factor.
The majority of the functionality seems to be focused on at-a-glance information coming from your smartphone, like alerting you to messages or other application notifications. Whilst I can see some use for this, most of the time those messages would require some action on my behalf, something which these watches aren’t designed to accommodate. Using one as an external mic/speaker for my phone is something I don’t see myself doing either, as the quality is always going to be below what my phone itself can provide. Couple all this with the fact that it’s yet another device I’ll have to charge and I can’t really see the point of getting one, at least not in their current incarnations.
I could be convinced on the idea if smartwatches included functionality like the Fitbit One and Jawbone UP, possibly alongside an implementation of MYO. Whilst I’d love to do more metric tracking so that I could better hone my fitness program, the idea of having another wearable, chargeable device always poses a significant barrier. However if a combination of all this tech could find its way into a single device then I could see myself warming to the idea, as it would then provide a whole host of functionality that my phone does not. At the same time I probably wouldn’t even need the traditional smartwatch capabilities if a fitness tracker, MYO and watch were all combined into one, but if you’d already integrated that much tech it’d be inevitable to go that one step further.
Of course I can already hear the caterwauling of people thinking “Scratch your own itch! Build it yourself!” but honestly I’m not that wedded to the idea at all, just musing over what it would take for me to come over to the smartwatch camp. I’m happy for someone to try and sell me on the idea though, as I’m never averse to spending money on good tech, so long as it serves a purpose.
I’m not exactly a corporate jet setter (although the past couple of months would attest otherwise) but I’ve seen the inside of a plane enough times to know the law of the land. I spend the majority of my time buried in a book, right now it’s the Wheel of Time series, as I don’t really get a chance to read for pleasure at any other time. For long haul flights I’ll usually have my laptop in tow as well, although lately I’ve left that in the checked baggage, mostly because the in-flight entertainment systems have gotten a lot better. Still, I’ve had the pleasure of being on some flights that offer in-flight wireless and whilst its usability was on the low side it was an apt demonstration of how far aviation technology has come, and where it is heading.
Rewind a decade or so and the idea of allowing radio transmitting devices to operate on flights was akin to wanting to make the plane crash. The stance of the various aviation bodies was easy to understand, however: they were simply unable to test all of the available transmitting devices with their aircraft to ensure that no interference was possible and thus had to ban them all outright. Their relenting on wireless networking was due in large part to the rigorous specifications of 802.11a/g/n, which include transmission power limits as well as frequencies well outside any that aircraft use for critical functions. Of course not every device strictly adheres to those specifications, but there’s little to be gained from juicing up the power levels on your wireless, especially if it’s running on a battery.
However the use of these systems is usually restricted to after take-off through until the plane is making its final approach for landing. Whilst I’ve heard a lot of people say this was due to interference, I thought the reasoning was far simpler: it was to keep you alert during the riskiest points of flight, take-off and landing. Of course my theory falls apart in the face of reality, as I’ve not once been told to put my book away during these times, even during the safety demonstration, but have been told on numerous occasions that my laptop should be put away until I’m told it’s allowed again.
Recent announcements from the Federal Aviation Administration in the USA, however, show that the rules against electronic devices are slowly being relaxed to allow broader use cases, with devices now permitted during take-off and landing. They’re still limiting the use of wireless to the in-flight system (although whether the 10,000ft restriction is still in effect isn’t something I could ascertain) and the outright ban on all other transmitting devices remains in effect. It might surprise you to find out that I actually agree with the latter restriction, though not for the sake of the airlines; it’s for those poor cell towers.
You see when you’re on the ground your mobile phone has a finite transmission range that’s limited primarily by the numerous things that get in the signal’s way as it travels from the cell tower to you. As a consequence you’re likely only ever hitting a handful of different towers, something they deal with easily through hand-offs between each other. However when you’re in a plane those obstructions are no longer in your way and suddenly you’re effectively able to hit dozens of towers all at the same time. This, in effect, is like a small denial of service attack and they’re simply not designed to handle it. The best way to combat this would be to use some form of picocell on the plane itself, something which I had heard was in development a long time ago but can’t find any links to support now. Still, for the short term this is unlikely to change unless the telecommunications companies think it’s worth their while to support it and the FAA agrees to change the rules.
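To get a rough sense of the scale of the problem, the standard radio horizon formula (distance ≈ √(2Rh), ignoring refraction and terrain) shows just how much of the network becomes reachable at altitude. The numbers below are my own back-of-envelope illustration, nothing more:

```python
import math

EARTH_RADIUS_M = 6.371e6  # mean Earth radius in metres

def radio_horizon_km(height_m: float) -> float:
    """Approximate line-of-sight distance to the horizon for an
    antenna at the given height, ignoring atmospheric refraction."""
    return math.sqrt(2 * EARTH_RADIUS_M * height_m) / 1000

print(f"Phone on the ground (1.5m): ~{radio_horizon_km(1.5):.0f} km")    # ~4 km
print(f"Phone at cruise (10,700m):  ~{radio_horizon_km(10700):.0f} km")  # ~369 km
```

Going from a roughly 4km radius to a roughly 370km one means every tower with line of sight inside that circle can potentially hear you, which is exactly the kind of flood the ground network was never built to cope with.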
Personally though I’m far more interested in technology that makes those in-flight wireless systems more usable, like the new Ground to Orbit system that Gogo has been testing. Whilst the current 10Mbps of bandwidth might be enough for the odd Tweet or Facebook post it’s rarely usable for anything else, especially when there are a few people online at the same time. Of course some also take solace in the fact that they’re incommunicado for the duration of the flight, something I don’t mind much myself.
This requires no introduction, just watch:
As a performance this is pretty amazing, as the extensive use of optical illusions to generate a feeling of depth where there is none surpasses anything I’ve seen before. It gets even more impressive when you find out that all of it was done in camera, i.e. none of the effects you see were edited in. Initially I was a little sceptical of that, I mean this kind of stuff is child’s play to anyone with Blender and some 3D tracking software, but once I saw the robotic arms in the background I immediately understood how everything fit together, and it’s incredibly impressive.
There are two key components at work here, the first of which is the IRIS robotic arm from Bot & Dolly. These are essentially scaled down industrial robots with several pivot points allowing them to move freely in 3D space. They’re what hold the two white panels where most of the magic happens, and you can see that they’re quite agile even with their considerable bulk. The real magic, though, is that the camera is also held by one of them, which is what allows the next piece of technology to really shine.
As you can probably guess there are two projectors (at least, there could be more) responsible for all the visual imagery you see: one behind the camera and one pointing down onto the floor. What makes all of these crazy images possible is the fact that the IRIS arms can report their exact location in three dimensions, allowing the projectors to display images with the required perspective to generate the illusions. It’s similar to the WiiMote head tracking demo that came out a while back, which used the same principles to generate the illusion of depth.
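The core trick is simple enough to sketch: if you know exactly where the camera is, you can work out where on a flat panel to draw a virtual 3D point so that, from the camera’s viewpoint, it appears to float in space. That boils down to a ray-plane intersection; the snippet below is my own illustration of the principle, not anything from Bot & Dolly’s actual pipeline:

```python
import numpy as np

def project_to_panel(eye, point, panel_point, panel_normal):
    """Find where the ray from the camera eye through a virtual 3D
    point pierces the panel's plane. Drawing the image there makes
    the point appear to float -- but only from the eye's position,
    which is why the camera's location must be tracked precisely."""
    direction = point - eye
    t = np.dot(panel_point - eye, panel_normal) / np.dot(direction, panel_normal)
    return eye + t * direction

eye = np.array([0.0, 0.0, 2.0])      # tracked camera position (metres)
virtual = np.array([0.2, 0.1, 0.5])  # point that should appear to hang in mid-air
panel_p = np.array([0.0, 0.0, 0.0])  # any point on the panel
panel_n = np.array([0.0, 0.0, 1.0])  # panel facing the camera

print(project_to_panel(eye, virtual, panel_p, panel_n))  # lands on the z=0 plane
```

Recompute that for every frame as the arms move the camera and panels around and you get the seamless, physical-looking illusions in the video.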
Another cool application of robots like this is introducing motion into high speed camera shots. High speed video has traditionally remained static, as moving the camera fast enough to get any kind of good perspective is nigh on impossible. This demo reel from THE MARMALADE shows a very similar kind of robot that they use to shoot high speed video with significant amounts of motion in it. The result is so foreign that it feels like it’s at the bottom of the uncanny valley for me, but it’s still very impressive.
There’s been little doubt in the tech community that Malcolm Turnbull had it out for the FTTP NBN. He’s been quite critical of the program since its inception and has taken every opportunity to point out that it’s behind schedule (even though it’s 3 months into a 10+ year project). The FTTN policy which they campaigned on was universally derided, yet Turnbull fervently defended it at every possible opportunity. Whilst I was somewhat optimistic that it was all campaign blather just to secure votes from some select parties, especially considering its non-core status, I still couldn’t shake the feeling that Turnbull really thought his policy was worthwhile, especially when he claimed FTTN had superseded FTTP.
Turns out my predictions were largely correct.
In a stark reversal of his previous positions on the NBN, Turnbull has instead opted to conduct a full review to ascertain how long the current rollout will take and whether there’s any way that can be reduced. Whilst on the surface this would appear to be just the next logical step in taking the axe to the FTTP program, it’s been shown that FTTP would end up costing about the same, so any cost benefit analysis would conclude it’s the better option. Of course this also opens the door for Turnbull to take credit for the whole program by making only superficial changes to it. Whilst this is probably the best outcome I could hope for, especially considering that current fibre rollouts will continue until the review is completed (expected to take 6 months), it doesn’t make up for the fact that Turnbull has taken every opportunity to blast the NBN and now wants to take credit for it.
Of course there’s every chance he could still do a lot of damage to it without fundamentally changing the technology that underpins it. Now that the entire NBNCo board has resigned at his request, Turnbull has apparently tapped former Telstra CEO Ziggy Switkowski to head the new board. Anyone who lived through Ziggy’s tenure as CEO of Telstra will tell you that he’s bad news for a telecommunications company, as he proceeded to run Telstra into the ground and was ousted late in 2004. He hasn’t been involved in the telecommunications industry since, so any cred he had has long since lapsed, and he’d be far more likely to give a repeat performance of his time with Telstra. This could be made up for somewhat by the fact that NBNCo is still on the government’s leash, but I’d rather not have to get them involved every time Ziggy makes a poor business decision.
Talking this over with my more politically minded friends, it seems this will be the only avenue by which we’ll get the FTTP NBN we want: letting the Liberals claim it as their own. Personally that gives me the shits, as it shows that politicians aren’t interested in continuing large, multi-term infrastructure projects unless they can somehow claim ownership of them. Of course the tech community will always know it was Labor’s idea in the first place, but the larger voting public will likely see it as a beleaguered project which the Liberals valiantly fixed, something which is provably wrong. In the end I guess I don’t care what the public perception is as long as it gets built, but I’d rather not have to argue the point to convince people otherwise.
So hopefully 6 months from now I’ll be able to write a post about how the review has come back and magically convinced Turnbull of what we all knew: the FTTP NBN is the way to go. Whilst I’m struggling to figure out how NBNCo could do what they’re doing faster and more efficiently, I’m sure they’ll be able to find a few percent here or there, enough to ensure the overall structure doesn’t change dramatically. With that, Turnbull can claim victory for doing the exact same thing better than Labor and I’ll write another angry rant, albeit from behind a nice, fat 100Mbps pipe.
There’s no question that Microsoft’s attempt at the tablet market has been lacklustre. Whilst the hardware powering their tablets was decent, the nascent Windows Store lacked the diversity of its competitors, something which made the RT version even less desirable. This resulted in Microsoft writing down $900 million in Surface RT and associated inventory, something which many speculated would be the end of the Surface line. However it appears that Microsoft is more committed than ever to the Surface idea and recently announced the Surface 2, an evolutionary improvement over its predecessor.
The new Surface 2 looks pretty much identical to its predecessor, although it’s a bit slimmer and a bit lighter. It retains the built-in kickstand, but it now has two positions instead of one, something which I’m sure will be useful to some. The specifications under the hood have been significantly revamped for both versions of the tablet, with the RT (although it’s no longer called that) version sporting an NVIDIA Tegra 4 and the Pro one of the new Haswell i5 chips. Microsoft will also now let you choose how much RAM you get in your Pro model, allowing you to cram up to 8GB in there. The Pro also gets the luxury of larger drive sizes, up to 512GB should you want it (although you’ll be forced to get the 8GB RAM model if you do). Overall I’d say this is pretty much what you’d expect from a generation 2 product, and the Pro at least looks like it could be a decent laptop competitor.
Of course the issues that led Microsoft to write down nearly a billion dollars worth of inventory (after attempting to peddle as much of it as they could to TechEd attendees) still exist today, and the upgrade to Windows 8.1 won’t do much to solve them. Sure, in the time between the initial Surface release and now there’s been a decent number of applications developed for it, but the store still pales in comparison to its competitors. I still think the Metro interface is pretty decent on a touch screen, but Microsoft will really have to do something outrageous to convince everyone that the Surface is worth buying, otherwise it’s doomed to repeat its predecessor’s mistakes.
The Pro on the other hand looks like it’d be a pretty great enterprise tablet thanks to its full x86 environment. I know I’d much rather have those in my environment than Android tablets or iPads, which are much harder to integrate into all the standard management tools. A Surface 2 Pro would behave much like any other desktop, allowing me to deliver the full experience to anyone who had one. Of course it’s then more of a replacement for a laptop than anything else, but I do know a lot of users who would prefer a tablet device over the current fleet of laptops they’re given (even the ones who get ultrabooks).
Whilst the Pro looks like a solid upgrade I can’t help but feel that the upgrade to the RT is almost unnecessary, given that most of the complaints levelled at it had nothing to do with its performance. Indeed not once have I found myself wanting for speed on my Surface RT; instead I’ve been wanting my favourite apps to come across so that I don’t have to use their web versions which, on Internet Explorer, typically aren’t great. Maybe the ecosystem is mature enough now to tempt some people across, but honestly unless they already own one I can’t really see that happening, at least for the RT version. The Pro on the other hand could make some headway into Microsoft’s core enterprise market, but even that might not be enough for the Surface division.
One of the first ideas that an engineer in training is introduced to is modularity. This is the concept that every problem, no matter how big, can be broken down into a subset of smaller, interlinked problems. The idea is that you can design solutions specific to each problem space rather than trying to solve everything in one fell swoop, an approach that is guaranteed to be error prone and likely never to achieve its goals. Right after you’re introduced to that idea you’re also told that modularity done for its own sake can lead to the exact same problems, so its use must be tempered with moderation. It’s this latter point that I think the designers of Phonebloks might be missing, even though I really like the idea as a concept.
For the uninitiated the idea is relatively simple: you buy what equates to a motherboard which you can then plug various bits and pieces into, with one side dedicated to a screen and the other to all the components you’ve come to expect from a traditional smartphone. Essentially it takes the idea of building your own PC and applies it to the smartphone market, in the hope of reducing electronic waste since you’d only be upgrading parts of the phone rather than the whole device at a time. The lofty idea is that this will eventually become the platform for everyone, and smartphone component makers will be lining up to build additional blocks for it.
As someone who’s been building his own PCs for the better part of 3 decades now, I think the assumption that the base board, and by extension the interconnects on it, will never change is probably the largest fundamental flaw with Phonebloks. I’ve built many PCs with the latest CPU socket on them in the hope that I could upgrade on the cheap at a later date, only to find that, when it came time to upgrade, another newer and far superior socket was available. Whilst the Phonebloks board can likely be made to accommodate current requirements, it’s inevitable that further down the track some component will require more connections or a higher bandwidth interface, necessitating its replacement. Then, just as with all those PCs I bought, this will also necessitate re-buying all the additional components, putting us right back in the position we’re in currently.
That’s not to mention that hoping other manufacturers, ones that already have a strong presence in the smartphone industry, will build components for it is an endeavour likely to be met with heavy resistance, if it’s not outright ignored. Whilst there are a couple of companies that would be willing to sell various components (Sony with their EXMOR R sensor, ARM with their processors, etc.) they’re certainly not going to bother with the integration, something that would likely cost them much more than any profit they’d see from being on the platform.
Indeed I think that’s the biggest issue this platform faces. Whilst it’s admirable that they’re seeking to be the standard modular platform for smartphones, standardization in the PC industry did not come about overnight and took the collaboration of multiple large corporations to achieve. Without that kind of support I’m struggling to see how this platform can get the diversity it needs to become viable, and as far as I can tell the only backing they’ve got is from a bunch of people willing to tweet on their behalf.
Fundamentally I like the idea, as whilst I’m able to find a smartphone that suits the majority of my wants pretty easily, there are always things I’d like to trade for others. My current Xperia Z would be a lot better if the speakerphone wasn’t rubbish and the battery was capable of charging wirelessly, and I’d happily shuffle around some of the other components to get my device just right. However I’m also aware of the giant integration challenge that such a modular platform presents, and whilst they might be able to get a massive burst of publicity I’m sceptical it will turn into a viable product platform. I’d love to be wrong on this, but as someone who’s seen decades of modular platform development and the tribulations it entails I can’t say I’m setting aside money for my first Phonebloks device.
I haven’t been an iPhone user for many years now, my iPhone 3GS sitting disused in the drawer beside me ever since it was replaced, mostly because the alternatives presented by other companies have, in my opinion, outclassed it for a long time. This is not to say that I think everyone should replace their phone with an Xperia Z, that particular phone is definitely not for everyone, as I realise that the iPhone fills a need for many people. Indeed it’s the phone I usually recommend to my less technically inclined friends and family members because I know they’ll have a support system tailored towards them (meaning they’ll bug me less). So whilst today’s announcement of the new models won’t have me opening up my wallet anytime soon, it is something I feel I need to be aware of, if only for the small thrill I get from being critical of an Apple product.
So, as many had speculated, Apple announced 2 new iPhones today: the iPhone 5C, which is essentially the entry level model, and the iPhone 5S, the top of the line one with all the latest and greatest features. The most interesting difference between the two is the radical divergence in design, with the 5C looking more like a kid’s toy with its pastel colours and the 5S looking distinctly more adult with its muted tones of silver, grey and gold. As expected the 5C is the cheaper of the two, with the base model starting from AUD$739 and the 5S AUD$869, with prices ramping up steadily depending on how much storage you want.
The 5C is interesting because everyone was expecting a budget iPhone and Apple’s response is clearly not what most people had in mind. Sure it’s the cheapest model of the lot (bar the iPhone 4S), but should you want to upgrade the storage you’re already paying the same amount as the entry level 5S. The differences in features are also pretty minimal, the exceptions being an A6 vs A7 processor, slightly bulkier dimensions, the newfangled fingerprint home button and a slightly better camera. Of course those slight differences are usually enough to push any potential iPhone buyer to the higher end model, so the question then becomes: who is the 5C marketed towards?
It’s certainly not at the low end of the market, as most people were expecting, even though it looks the part with its all plastic finish (which we haven’t seen since I last used an iPhone). It might appeal to those who like those particular colours although realistically I can’t see that being much of a draw card considering you can buy any colour case for $10 these days. Indeed even when you factor in the typical on contract price for a new iPhone (~$200) the difference between an entry level 5C and 5S is so small that most would likely dole out the extra cash just to have the better version, especially considering how visually different they are.
Another thing running against the 5C is that the 5S shares the same dimensions as the original iPhone 5, allowing you to use all your old cases and accessories with it. I know this won’t be a dealbreaker for many but it seems obvious that the 5S is aimed at people coming from the iPhone 5, whereas the 5C doesn’t appear to have any particular market in mind that necessitates its differences. If this was Apple’s attempt to claw back some of the market that Android has been happily dominating then I can’t help but feel it’s completely misguided. Then again I lost my desire for Apple products years ago, so I might be missing what the appeal of a gimped, not-really-budget Apple handset might be.
The iPhone 5S does look like a decent phone, sporting most of the features you’d expect from a current generation smartphone. NFC is still missing which, if I’m honest, isn’t as big a deal as I used to make it out to be, as I’ve now got an NFC phone and I can’t use it for jack, so I don’t count it as a downer anymore. As always though the price compared to an equivalent Android handset is a big sore point, with the top of the line model topping out at an incredible AUD$1129. I know Apple is a premium brand but when the price difference between the high and low end is $260 and the only difference is storage, you really have to ask if it’s worth it, especially when comparable Android phones will have the same level of features and be cheaper (my 16GB Xperia Z was $768, for reference).
I will be really interested to see how the 5C pans out as many are billing it as the “budget” iPhone that everyone was after when in truth it’s anything but that. The 5S is your typical product refresh cycle from Apple, bringing in a few new cool things but nothing particularly revolutionary. Of course you should consider everything I’ve said through the eyes of a long time Android user and lover as whilst I’ve owned an iPhone before it’s been so long between drinks that I can barely remember the experience anymore. Still I’m sure at least the 5S will do well in the marketplace as all the flagship Apple phones do.
This blog has had a pretty good run as far as data retention goes. I’ve been through probably a dozen different servers over its life and every time I’ve managed to maintain continuity of pretty much everything. It’s not because I kept rigorous backups or anything like that; no, I was just good at making sure I had all my data moved over and working before I deleted the old server. Sure there are various bits of data scattered among my hard drives, but none of it is readily usable, so should the unthinkable happen I’d be up the proverbial creek without a paddle.
And, of course, late on Saturday night, the unthinkable happened.
Like a good little admin I thought it would be good to do a cleanup of the directory before I embarked on this, as I was going to have to move the backup file to my desktop, no small feat considering it was some 1.9GB big and I’m on Australian Internet (thanks Abbott!). I had a previous backup file there which I moved to my /var/www directory to make sure I could download it (I could) and so I looked to cleaning everything else up. I’ve had a couple of legacy directories in there for a while and so I decided to remove them. This would have been fine except I fat-fingered the command and typed rm -r, which happily went about its business deleting the entire folder contents. The next ls I ran sent me into a fit of rage as I struggled to figure out what to do next.
If this was a Windows box it would’ve been a minor inconvenience, as I’d just fire up Recuva (if CTRL + Z didn’t work) and get all the files restored; in Linux, however, restoring deleted files seems to be a right pain in the ass. Try as I might, extundelete couldn’t restore squat and every other application looked like it required a PhD to operate. The other option was to contact my VPS provider’s support to see if they could help out, however since I’m not paying a terrible amount for the service I doubt it would have been very expedient, nor would I have expected them to be able to recover anything.
In desperation I reached out to my old VPS provider to see if they still had a copy of my virtual machine. The service had only been cancelled a week ago and I know a lot of them keep copies for a little while just in case something like this happens, mostly because it’s a good source of revenue (I would’ve gladly paid $200 for it). However this morning the email came from them stating unequivocally that the files are gone and there’s no way to get them back, so I was left with very few options to get everything working again.
Thankfully I still had the database, which contains much of the configuration information required to get this site back up and running, so all that was required was to get the base WordPress install working and then reinstall all the necessary plugins. It was during this exercise that I stumbled across the potential attack vector that let whoever it was ruin my site in the first place: my permissions were all kinds of fucked, essentially allowing open slather to anyone who wanted it. Whilst I’ve since struggled to get everything working like it was before, I now know that my permissions are far better than they were, which should hopefully keep this from happening again.
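For anyone wanting to avoid the same mistake, the usual WordPress hardening advice is directories at 755 and files at 644, so the web server can read everything but nothing is world-writable. A minimal sketch of resetting that (the web root path is a placeholder, adjust for your own install):

```python
import os

WEB_ROOT = "/var/www/html"  # placeholder: wherever WordPress lives

# Conventional WordPress hardening: directories 755, files 644.
# Nothing ends up world-writable, but the web server can still read it all.
for root, dirs, files in os.walk(WEB_ROOT):
    for d in dirs:
        os.chmod(os.path.join(root, d), 0o755)
    for f in files:
        os.chmod(os.path.join(root, f), 0o644)
```

Some guides go further and lock wp-config.php down to 600, given it holds the database credentials.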
As for the rest of the content I have about half of the images I’ve uploaded over the past 5 years in a source folder and, if I was so inclined, could reupload them. However I’ve decided to leave that for the moment as the free CDN that WordPress gives you as part of Jetpack has most of those images in it anyway which is why everything on the front page is working as it should. I may end up doing it anyway just as an exercise to flex my PowerShell skills but it’s no longer a critical issue.
So what has this whole experience taught me? Mostly that I should practice what I preach, as if a customer came running to me in this situation I’d have little sympathy for them and would likely spend maybe 20% of the effort I’ve spent on this site trying to restore theirs. The unintentional purge has been somewhat good as I’ve dropped many of the plugins I no longer used, which has made the site substantially leaner, and I’ve moved from having my pants around my ankles, begging for attackers to take advantage of me, to at least holding them around my waist. I’ll also be implementing some kind of rudimentary backup solution so that if this happens again I at least have a point in time to restore to, as this whole experience has been far too stressful for my liking and I’d rather not repeat it.
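That rudimentary solution doesn’t need to be much: a nightly dump of the database plus a tarball of the web root would have covered everything this outage cost me. A rough sketch of what I have in mind (paths and database name are placeholders; it assumes mysqldump can authenticate via a credentials file):

```python
import datetime
import pathlib
import subprocess
import tarfile

# Placeholders -- adjust for the actual install
WEB_ROOT = pathlib.Path("/var/www/html")
DB_NAME = "wordpress"
DEST = pathlib.Path("/var/backups/blog")

stamp = datetime.date.today().isoformat()
DEST.mkdir(parents=True, exist_ok=True)

# Dump the database first: it holds the posts and all the configuration
with open(DEST / f"db-{stamp}.sql", "wb") as out:
    subprocess.run(["mysqldump", DB_NAME], stdout=out, check=True)

# Then archive the web root: uploads, themes and plugins
with tarfile.open(DEST / f"files-{stamp}.tar.gz", "w:gz") as tar:
    tar.add(WEB_ROOT, arcname="html")
```

Drop something like that in cron, copy the results somewhere off the box (the whole point is surviving a stray rm -r), and this post never gets written.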
One of the biggest arguments I’ve heard against developing anything for the Android platform is the problem of fragmentation. Now it’s no secret that Android is the promiscuous smartphone operating system, letting anyone and everyone have their way with it, but that has led to an ecosystem made up of numerous devices with varying capabilities. Worse still, the Android OS itself isn’t very standard either, with only a minority of users running the latest release at any point in time and no single version ever holding a true majority. Google has been doing a lot to combat this, but unfortunately the unified nature of the iOS platform is hard to deny, especially when you look at the raw numbers from Google themselves.
Android developers’ lives have been made somewhat easier by the fact that they can declare lists of required features and lock out devices that don’t have them, however that also limits your potential market, so many developers aren’t too stringent with their requirements. Indeed those settings can be circumvented by users as well, which can allow users you explicitly wanted to disallow to access your application (à la ChainFire3D emulating NVIDIA Tegra devices). This might not be an issue for most of the basic apps out there, but for things like games and applications that require certain performance characteristics it can be a real headache for developers to work with, let alone the sub-par user experience that comes as a result.
This isn’t made any easier by handset manufacturers and telecommunications providers dragging their feet every time an upgrade comes along. Even though I’ve always bought unlocked and unbranded phones, the time between Google releasing an update and me receiving it has been on the order of months, sometimes coming so late that I’ve upgraded to a new phone before it arrived. This is why the Nexus range of phones directly from Google is so appealing: you’re guaranteed those updates immediately and without any of the cruft that your manufacturer of choice might cram in. Of course then there was that whole issue with supply, but that’s another story.
For what it’s worth Google does seem to be aware of this and has tried to make inroads into solving it in the past. None of these attempts have been particularly successful, but their latest, called Google Play Services, might just be the first step in the right direction to eliminating at least one aspect of Android fragmentation. Essentially, instead of most new feature releases coming through Android updates like they have in the past, Google will instead deliver them via the new service. It’s done completely outside the Play store, heck it even has its own update mechanism (which isn’t visible to the end user), and is essentially Google’s solution to eliminate the foot dragging that carriers and handset manufacturers are renowned for.
On the surface it sounds great, as pretty much every Android device is capable of running this, which means that many features that just aren’t available to older versions can be made available via Google Play Services. This will also help developers immensely as they’ll be able to code against those APIs knowing they’ll be widely available. I’m a little worried about its clandestine nature, however, with its silent, non-interactive updating process seeming like a potential attack vector, but smarter people than me are working on it so I’ll hold off on bashing them until there’s a proven exploit.
Of course the one fragmentation problem this doesn’t solve is the one that comes from the varying hardware that the Android operating system runs on. Feature levels, performance characteristics and even screen resolution and aspect ratio are things that can’t be solved in software and will still pose a challenge to developers looking to create a consistent experience. It’s the lesser of the two problems, granted, but this is the price that Android has to pay for its wide market domination. Short of pulling a Microsoft and imposing design restrictions on manufacturers I don’t think there’s much that Google can do about this and, honestly, I don’t think they have any intentions to.
How this will translate into the real world remains to be seen however as whilst the idea is good the implementation will determine just how far this goes to solving Android’s fragmentation issue. Personally I think it will work well although not nearly as well as controlling the entire ecosystem, but that freedom is exactly what allowed Android to get to where it is today. Google isn’t showing any signs of losing that crown yet either so this really is all about improving the end user experience.
Have you ever read a software patent? They’re laborious things to read, often starting out by describing their claims at length and then attempting to substantiate them all with even more colourful and esoteric language. They do this not out of some sick pleasure they get from torturing people who dare to read them, but because the harder it is to compare a patent to prior art the better chance it has of getting through. Whilst a Dynamic Resolution Optimizer Algorithm might sound like something new and exciting, it’s quite likely just an image resizer, something that is trivial and has tons of prior art but, if such a patent were granted, would give its owner a lot of opportunity to squeeze people for licensing fees.
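To underline just how trivial that hypothetical “invention” would be, here it is in its entirety using the Pillow imaging library (the grandiose function name is, of course, mine):

```python
from PIL import Image  # the Pillow imaging library

def dynamic_resolution_optimizer(path: str, width: int, height: int) -> Image.Image:
    """The 'invention', in full: open an image and resize it."""
    return Image.open(path).resize((width, height), Image.LANCZOS)
```

A few lines, decades of prior art behind them, and yet wrap that in enough esoteric claim language and it stands a real chance of being granted.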
Indeed this kind of behaviour, patenting anything and everything that can be done in software, is what has allowed the patent troll industry to flourish. These are companies that don’t produce anything, nor do they use their patents for their intended purpose (i.e. a time limited monopoly to make use of said patent); all they do is seek licensing fees from companies who are infringing on their patent portfolio. The trouble is that with patent language being so deliberately obtuse and vague it’s nigh on impossible for anyone creating software products not to infringe on one of them, especially when they’re granted for things the wider programming community would consider obvious and trivial. It’s for this reason that I, and the vast majority of people involved in the creation of software, oppose patents like these, and it seems we may finally have the beginnings of support from governmental entities.
The New Zealand parliament just put the kibosh on software patents in a 117-4 vote. The language of the bill is a little strange, essentially declaring that a computer program doesn’t qualify as an invention, however a computer application that’s an implementation of a process (which itself can be patented) is patentable. The legislation is also not retroactive, which means that any software patents granted in New Zealand prior to its passing will remain in effect until their expiry dates. Whilst this isn’t the kind of clean sweep many of us would have hoped for, I think it’s probably the best outcome we could realistically hope for, and the work done in New Zealand will hopefully function as a catalyst for similar legislation elsewhere.
Unfortunately the place that it’s least likely to happen in is also the place where it’s needed the most: the USA. The vast majority of software patents and their ensuing lawsuits take place in the USA and unfortunately the guaranteed way of avoiding infringement (not selling your software there) means cutting out one of the world’s largest markets. The only way I can see the situation changing there is if the EU passed similar laws however I haven’t heard of them attempting to do anything of the sort. The changes passed in New Zealand might go a ways to influence them along the same lines, but I’m not holding my breath on that one.
So overall this is a good thing however we’re still a long way off from eradicating the evils of software patents. We always knew this would be a long fight, one that would likely take decades to see any real progress in, but the decision in New Zealand shows that there’s a strong desire from the industry for change in this area and people in power are starting to take notice.