Technology

Regin: The Spies’ Stuxnet.

It’s really hard to have anything but admiration for Stuxnet. It was the first piece of software that could be clearly defined as a weapon, one built with a very specific purpose in mind that used all manner of tricks to accomplish its task. Since its discovery there hasn’t been another piece of software that’s come close to it in terms of capability, although there are always rumours and speculation about what might be coming next. Regin, discovered by Symantec, has been infecting computers since at least 2008 and is the next candidate for the cyber-weapon title. Whilst its mode of operation is more clandestine (and thus a little more boring), it’s what Regin is not that interests me most.

Regin Architecture

Unlike Stuxnet, and most other malware you’ll encounter these days, Regin is designed to infect a single target with no further mechanism to spread itself. This is interesting because most run-of-the-mill malware wants to get itself onto as many machines as possible, furthering the chances that it’ll pick up something of value. Malware like this, for which we haven’t identified a specific infection vector, suggests a far more targeted purpose and that it was likely developed with specific victims in mind. Indeed the architecture of the software, which is highly modular in nature, indicates that Regin is deployed against a very specific subset of targets rather than being allowed to roam free and find targets of interest.

Regin has the ability to load various modules depending on what its command and control servers tell it to do. These range from interchangeable communication methods (one of which includes the incredibly insidious idea of encoding data within ping packets) to modules designed to target specific pieces of software. It’s quite possible that the list Symantec has created isn’t exhaustive either, as Regin attempts to leave very little data at rest. Indeed Symantec hasn’t been able to recover any of the data captured by this particular bit of malware, indicating that captured data is likely not stored for long, if at all.
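Symantec’s write-up doesn’t describe the exact encoding Regin uses, but the general trick of hiding data inside ping traffic is easy to illustrate. The sketch below is purely hypothetical (the chunk size, identifier and payload are my own inventions, and actually sending the packets would need a raw socket and elevated privileges); it just shows how arbitrary data can be wrapped in ICMP echo requests that look like ordinary pings.

```python
import struct

def icmp_checksum(data: bytes) -> int:
    """RFC 1071 Internet checksum over the ICMP header and payload."""
    if len(data) % 2:
        data += b"\x00"
    total = sum(struct.unpack(f"!{len(data) // 2}H", data))
    total = (total >> 16) + (total & 0xFFFF)
    total += total >> 16
    return ~total & 0xFFFF

def build_echo_request(chunk: bytes, ident: int, seq: int) -> bytes:
    """Wrap an arbitrary chunk of data in an ICMP echo request (type 8, code 0)."""
    header = struct.pack("!BBHHH", 8, 0, 0, ident, seq)
    checksum = icmp_checksum(header + chunk)
    return struct.pack("!BBHHH", 8, 0, checksum, ident, seq) + chunk

# Chunk the message into ping-sized payloads; a casual observer just sees pings.
secret = b"captured data heading for a command and control server"
packets = [build_echo_request(secret[i:i + 32], ident=0x1234, seq=seq)
           for seq, i in enumerate(range(0, len(secret), 32))]
print(f"{len(packets)} innocuous-looking pings carry {len(secret)} bytes")
```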

Due to its non-worm nature the range of targets that Regin has infected gives a pretty good indication of its intended purpose. The two largest groups of targets were individuals and telecommunications backbones, indicating that its purpose is likely information gathering on a large scale. The locations of infections suggest that this piece of software was likely Western in origin, as the primary targets were in Russia and Saudi Arabia with very few within Western countries. Given its modular nature it’s unlikely that this tool was developed for a single specific operation, however, so I don’t believe there’s any relationship between different infections apart from them using the same framework.

Just like Stuxnet I’m sure we won’t know the full story of Regin for some time to come as software of this nature is incredibly adept at hiding its true purpose. Whilst its capabilities appear to be rather run of the mill the way in which it achieves them is very impressive. More interesting though is its non-worm nature which, whilst it may have delayed its detection for some time, hints heavily at its true purpose and origin. I’m really looking forward to further analysis of this particular piece of software as it gives us a rare insight into the world of clandestine cyber warfare operations.

Google’s Solution to AdBlock Plus: Contributor.

I’d like to say that I’ve never run ads on my blog out of a principled stance against them but the reality is I just wouldn’t make enough out of them to justify their existence. Sure this blog does cost me a non-zero sum to maintain but it’s never been much of a burden and I wouldn’t feel right compromising the (now) good look of the website just to make a few bucks on the side. This hasn’t stopped me from wondering how I would go about making my living as a blogger, although unfortunately pretty much every road leads back to advertising. However that model might be set to change with one of Google’s latest products: Contributor.

Google Contributor

The idea behind it is simple: you select a monthly amount you want to contribute to the sites you frequent and, for sites participating in the Contributor program, you’ll see no ads from Google AdSense. It’s a slight tweak on the idea of services like Flattr with a much lower barrier to adoption since most people have a Google account already and most sites run AdSense in some form. You also don’t have to specify how much goes to each site you visit; Google handles that by counting up the pageviews and dividing your monthly contribution accordingly. In a world where AdBlock Plus has become one of the most installed browser extensions this could be a way for publishers to claw back a little revenue and, of course, for Google to bump up its own.
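Google hasn’t published the exact accounting behind that split, but the basic pageview-weighted division is simple enough to sketch. The site names and numbers below are made up purely for illustration:

```python
def split_contribution(monthly_amount, pageviews):
    """Divide a monthly contribution across sites in proportion to pageviews.
    Illustrative only -- not Google's actual allocation rules."""
    total_views = sum(pageviews.values())
    return {site: round(monthly_amount * views / total_views, 2)
            for site, views in pageviews.items()}

# A $3/month contribution spread over three hypothetical Contributor sites.
print(split_contribution(3.00, {"siteA.com": 120, "siteB.com": 60, "siteC.com": 20}))
# -> {'siteA.com': 1.8, 'siteB.com': 0.9, 'siteC.com': 0.3}
```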

This isn’t Google’s first foray into crowd funding publishers, as just a few months ago they released Fan Funding for YouTube channels. That was mostly a reaction to other crowd funding services like Patreon and Subbable whereas Contributor feels like a more fully thought out solution, one that has some real potential to generate revenue for content creators. Hopefully Google will scale the program into a more general solution as time goes on, as I can imagine a simple “pay $3 to disable all AdSense ads” kind of service would see an incredibly large adoption rate.

On the flip side though I’m wondering how many people would convert away from blocking ads completely to using Contributor or a similar service. I know those AdBlock-sensing scripts that put up guilt trip ads (like DotaCinema’s Don’t Make Sven Cry one) are pretty effective in making me whitelist certain sites, but going the next step to actually paying money is a leap I’m not sure I’d make. I know it’s nothing in the grand scheme of things, $36/year is a pittance for most people browsing the Internet, but it’s still a barrier. That being said, it’s a lower barrier than any of the other options available.

I think Contributor will be a positive thing for both publishers and consumers in the long run, it’ll just depend on how willing people are to fork over a couple bucks a month and how much of that makes its way back to the sites it supports. You’ll still need a decent sized audience to make a living off it but at least you’d have another tool at your disposal to have them support what you do. Meanwhile I and all the other aspiring small time bloggers will continue to fantasize about what it would be like to get paid for what we do, even though we know it’ll never happen.

But it could…couldn’t it? ;)

Netflix to Come to Australia in March 2015.

The age of the Internet has broken down the barriers that once existed between Australia and the rest of the world. We’re keenly aware that there are vast numbers of products and services available overseas that we want to take advantage of but either can’t, because companies won’t bring them to us, or won’t, because they’re far too expensive here. We’re a resourceful bunch though and whilst companies will try their darnedest to make us pay the dreaded Australia Tax we’ll find a way around it, legitimately or otherwise. Probably the most popular service of this kind is Netflix which, even though it’s not available here, attracts some 200,000 Australian subscribers. That number could soon rocket skywards as Netflix has finally announced that they’ll be coming to our shores early next year.

Netflix

Australia will be the 16th country to receive the Netflix service, 7 years after it originally launched in the USA. Whilst there’s been demand for them to come to Australia for some time now, the critical mass of semi-legitimate users, plus the maturity of the cloud infrastructure they’ll need to deliver it here (Netflix uses AWS), has finally reached a point where an actual presence is warranted. Details are scant on exactly what they’ll be offering in Australia but looking at the other 14 non-US countries to get Netflix we can get a pretty good idea of what to expect when they finally hit the go live button for the Australian service.

For starters the full catalogue of shows that the USA service has will likely not be available to Netflix Australia subscribers. Whilst original content, like House of Cards or Orange is the New Black, will be available, the content deals rights holders have inked with other companies in Australia will unfortunately take precedence over Netflix. This will likely change over time as rights holders move onto Netflix when old contracts expire, but it might put a damper on the initial uptake rate. Considering that there are numerous services to change your Netflix region to get the full catalogue though, I’m sure the restriction won’t have too much of an effect.

The DVD service probably won’t be making it here either, although I don’t think anyone really cares about that anyway.

Probably the biggest issue that Netflix will face coming to Australia is the dismal state of the Internet infrastructure here. Whilst most of us have enough speed to support some level of streaming, the number of us who can manage anything above 720p is far smaller. As long time readers will know I have little faith in the MTM NBN to provide the speeds required to support services like Netflix so I don’t think this is a problem that will be going away any time soon. Well, unless everyone realises their mistake at the next election.
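To put rough numbers on that, the sketch below compares a connection against the streaming bitrates Netflix recommended at the time, roughly 3 Mbps for SD, 5 Mbps for 1080p and 25 Mbps for 4K; treat those figures, and the headroom factor, as ballpark assumptions rather than gospel:

```python
# Approximate recommended streaming bitrates in Mbps (ballpark figures only).
RECOMMENDED_MBPS = {"SD": 3.0, "HD 1080p": 5.0, "4K UHD": 25.0}

def playable_tiers(connection_mbps, headroom=1.25):
    """Quality tiers a connection can comfortably sustain, allowing some
    headroom for protocol overhead and other household traffic."""
    return [tier for tier, mbps in RECOMMENDED_MBPS.items()
            if connection_mbps >= mbps * headroom]

for speed in (4, 8, 25, 100):  # typical ADSL2+ through to FTTP speeds
    print(f"{speed:>3} Mbps down: {playable_tiers(speed) or 'nothing, really'}")
```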

Overall this is good news for Australia as it has the potential to break the iron grip that many of the pay TV providers have on the content that Australians want. It might not be the service that many are lusting after but over time I can see Netflix becoming the dominant content platform in Australia. Hopefully other content providers will follow suit not long after this and Australia will finally get all the services it’s been wanting for far too long. Maybe then people will realise the benefits of a properly implemented FTTP NBN and I’ll finally be able to stop ranting about it.

.NET to be Fully Open Source.

Microsoft isn’t a company you’d associate with open source. Indeed if you wound back the clock 10 years or so you’d find a company that was outright hostile to the idea, often going to great lengths to ensure open source projects that competed with their offerings would never see the light of day. The Microsoft of today is vastly different, contributing to dozens of open source projects and working hard with partner organisations to develop their presence in the ecosystem. For the most part however this has usually been done with a view towards integration with their proprietary products, which isn’t exactly in line with the open source ethos. That may be set to change however as Microsoft will be fully open sourcing its .NET framework, the building blocks of a huge number of Windows applications.

Microsoft .Net logo

For the uninitiated, Microsoft .NET is a development framework that’s been around since the Windows XP days and exposes a consistent set of capabilities which applications can make use of. Essentially, writing a .NET application meant you could guarantee it would work on any computer running that framework, something which wasn’t entirely a given before its inception. It has since grown substantially in capability, allowing developers to create some very capable programs using nothing more than the functionality built directly into Windows. Indeed it was so successful in accomplishing its aims that there was already a project to make it work on non-Windows platforms, dubbed Mono, and it is with that community that Microsoft is seeking to release a full open source implementation of the .NET framework.

Whilst this still falls in line with Microsoft’s open source strategy of “things to get people onto the Microsoft platform” it does open up a lot of opportunities for software to be freed from the Microsoft platform. The .NET framework underpins a lot of applications that run on Windows, some that only run on Windows, and an implementation of that framework on another platform could quickly elevate them to cross platform status. Sure, the work to translate them would still likely be non-trivial, however it’ll be a damn sight easier with a full implementation available, possibly enough to tempt some companies to make the investment.

One particularly exciting application of an open sourced .NET framework is games which, traditionally, have an extremely high opportunity cost when porting between platforms. Whilst not everything about games development on Windows is strictly .NET, there are a lot of .NET based frameworks out there that will be readily portable to new platforms once the open sourcing is complete. I’m not expecting miracles, of course, but it does mean that the future of cross-platform releases is looking a whole bunch brighter than it was just a week ago.

This is probably one of Microsoft’s longest bets in a while as it’s going to be years before the .NET framework sees any kind of solid adoption among the non-Windows crowd. However this does drastically increase the potential of C# and .NET to become the cross-platform framework of choice for developers, especially considering the large .NET developer community that already exists today. It’s going to be an area that many of us will be watching with keen interest as it’s yet another signal that Microsoft isn’t the company it used to be, and likely never will be again.

The Modular Phone Idea is Still Alive in Project Ara.

There are two distinct schools of thought when it comes to the modular smartphone idea. The first is that it’s the way phones were meant to be made, giving users the ability to customize every aspect of their device and reducing e-waste at the same time. The other flips that idea on its head, stating that the idea is infeasible due to the limitations inherent in a modular platform and the reliance on manufacturers to build components specifically for it. Since I tend towards the latter I thought that Project Ara, Google’s (née Motorola’s) attempt at the idea, would likely never see the light of day but as it turns out the platform is very real and they even have a working prototype.

Project Ara Prototype

The essence of the idea hasn’t changed much since Motorola first talked about it at the end of last year, being a more restrained version of the Phonebloks idea. The layout is the same as the original design prototypes, giving you space on the back for about 7 modules and space on the front for a large screen and a speaker attachment. However they also showed off a new, slim version which has space for fewer modules but is a much sleeker unit overall. Google also mentioned that they were working on a phablet design as well, which was interesting considering that the current prototype was already looking to be almost phablet sized. The whole unit, dubbed Spiral 1, was fully functional including module removal and swapping, so the idea has definitely come a long way since its initial inception late last year.

There are a few things that stand out about the device in its current form, primarily the way in which some of the blocks don’t conform to the same dimensions as the others. Most notably you can see this with the blood oxygen sensor they have sticking out of the top, however you’ll also notice that the battery module is about twice the height of anything else. This highlights one of the bigger issues with modular design as much of the heft in modern phones is due to the increasingly large batteries they carry with them. The limited space of the modular blocks means that the batteries either have significantly reduced capacity or have to be bigger than the other modules, neither of which is a particularly desirable attribute.

In fact the more I think about Project Ara the more I feel it’s oriented more towards those looking to develop hardware for mobile platforms than towards actual phone users. Being able to develop your specific functionality without having to worry about the rest of the platform frees up a significant amount of time which can then be spent on getting said functionality into other phones. In that regard Project Ara is amazing, however that same flexibility is likely what will turn many consumers off such a device. Sure, having a phone tailored to your exact specifications has a certain allure, but I can’t help but feel that that market is vanishingly small.

It will be interesting to see how the Project Ara platform progresses as they have hinted that there’s a much better prototype floating around (called Spiral 2) which they’re looking to release to hardware developers in the near future. Whilst having a proof of concept is great there are still a lot of questions around module development, available functionality and, above all, the usability of the system when it’s complete. It’s looking like a full consumer version likely isn’t due out until late next year or early 2016 so we’re going to have to wait a while to see what the fully fledged modular smartphone will look like.

Windows XP Finally Meeting its End.

For the longest time, far too long in my opinion, XP had been the beast that couldn’t be slain. The numerous releases of Windows after it never seemed to make much more than a slight dent in its usage stats and it reigned as the most used operating system worldwide for an astonishing 10 years after its initial release. It finally lost its crown to Windows 7 back in October of 2011 but it still managed to hold on to a market share that dwarfed many of its competitors. Its decline was slow though, much slower than that of an operating system fast approaching end of life should have been. However last quarter saw it drop an amazing 6% in total usage, finally pushing it below the combined usage of Windows 8 and 8.1.

StatCounter-os-ww-monthly-201108-201410

The reasons behind this drop are wide and varied but it finally appears that people are starting to take seriously Microsoft’s warnings that the product is no longer supported and are looking for upgrades. Surprisingly though the vast majority of people transitioning away from the aging operating system aren’t going for Windows 7, they’re going straight to Windows 8.1. This isn’t to say that 8.1 is eating away at 7’s market share however, as Windows 7 is up about half a percent in the same time frame; the upgrade path is likely due to the fact that Microsoft has ceased selling OEM copies of Windows 7. Most of those new licenses do come with downgrade rights, though I’m sure few people actually use them.

If XP’s current downward trend continues along this path then it’s likely to hit low single digit usage figures sometime around the middle of next year. On the surface this would appear to be a good thing for Microsoft as it means that the majority of their user base will be on a far more modern platform. However at the same time the decline might just be a little too swift for people to consider upgrading to Windows 10, which isn’t expected to hit RTM until late next year. Considering the lacklustre uptake of Windows 8 and 8.1 this could be something of a concern for Microsoft although there is another potential avenue: Windows 7 users.
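As a back-of-the-envelope check on that timeline: if XP really did keep shedding six points a quarter (a big assumption, and the roughly 17% starting share is my own reading of the StatCounter chart above), it lands in low single digits within a couple of quarters:

```python
# Assumed starting share and a constant quarterly drop -- both rough assumptions.
share, quarterly_drop = 17.0, 6.0

quarters = 0
while share > 5.0:           # "low single digits"
    share -= quarterly_drop
    quarters += 1
print(f"Roughly {quarters} more quarters, i.e. around the middle of next year")
```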

The last time Microsoft had a disastrous release like Windows 8, the next version of Windows to take the majority of the market share was 7, a decade after the original had released. Whilst it’s easy to argue that this time will be different (like everyone does) a repeat performance of that nature would see Windows 7 being the dominant platform all the way up until 2019. Certainly this is something that Microsoft wants to avoid so it will be interesting to see how fast Windows 10 gets picked up and which segments of Microsoft’s business it will cannibalize. Should it be primarily Windows 7 based then I’d say everything would be rosy for them, however if it’s all Windows 8/8.1 then we could be seeing history repeat itself.

Microsoft is on the cusp of either reinventing itself with Windows 10 or being doomed to forever repeat the cycle which consumers have forced them into. To Microsoft’s credit they have been trying their best to break out of this mould, however it’s hard to argue with the demands of the consumer and there’s only so much they can do before they lose their customers’ faith completely. The next year will be very telling for how the Microsoft of the future will look and how much of history will repeat itself.

External GPUs are a Solution in Search of a Problem.

If you’re a long time PC gamer chances are that you’ve considered getting yourself a gaming laptop at one point or another. The main attraction of such a device is portability, especially back in the heyday of LANs where steel cases and giant CRTs were a right pain to lug around. However they always came at a cost, both financial and in opportunity, as once you bought yourself a gaming laptop you were locked into those specs until you bought yourself another one. Alienware, a longtime manufacturer of gaming laptops, has cottoned onto this issue and has developed what they’re calling the Graphics Amplifier in order to bring desktop level grunt and upgradeability to their line of laptops.

Alienware Graphics Amplifier

On the surface it looks like a giant external hard drive but inside are all the components required to run any PCIe graphics card. It contains a small circuit board with a PCIe x16 slot, a 450W power supply and a host of other connections because why not. There are no fans or anything else to speak of however, so you’re going to want a card with a blower style cooler, something you’ll only see on reference cards these days. This then connects back to an Alienware laptop through a proprietary connection (unfortunately) which allows the graphics card to act as if it were installed in the system. The enclosure retails for about $300 without a graphics card included, which means you’re up for $600+ once you buy a card to go in it. That’s certainly not out of reach for those who are already investing $1800+ in the requisite laptop but it’s certainly enough to make you reconsider the laptop purchase in the first place.

You see, whilst this external case does appear to work as advertised (judging by the various articles that have popped up about it), it essentially removes the most attractive thing about having a gaming capable laptop: the portability. Sure this is probably more portable than a mini tower and a monitor but at the same time this case is likely to weigh more than the laptop itself and won’t fit into your laptop carry bag. The argument could be made that you wouldn’t need to take this with you, that it’s only for home use or something, but even then I’d argue you’d likely be better off with a gaming desktop and a slim, far more portable laptop to take with you (both of which could be had for the combined cost of this and the laptop).
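Using the post’s own ballpark figures (and my own rough guesses for the alternative, clearly marked as such), the arithmetic backs that up:

```python
# Prices are rough illustrations: the laptop, enclosure and GPU figures come from
# the post; the desktop and slim laptop figures are assumptions.
amplifier_route = {"Alienware laptop": 1800, "Graphics Amplifier": 300, "graphics card": 300}
alternative     = {"gaming desktop": 1500, "slim portable laptop": 900}

print("Amplifier route:       $", sum(amplifier_route.values()))
print("Desktop + slim laptop: $", sum(alternative.values()))
```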

Honestly though the days have long since passed when it was necessary to upgrade your hardware on a near yearly basis in order to be able to play the latest games. My current rig is well over 3 years old now and is still quite capable of playing all current releases, even if I have to dial back a setting or two on occasion. With that in mind you’d be better off putting the extra cash that you’d sink into this device plus the graphics card into the actual laptop itself, which would likely net you the same overall performance. Then, when the laptop finally starts to show its age, you’ll likely be in the market for a replacement anyway.

I’m sure there’ll be a few people out there who’ll find some value in a device like this but honestly I just can’t see it. Sure it’s a cool piece of technology, a complete product where there have only been DIY solutions in the past, but its uses are extremely limited and not likely to appeal to those it’ll be marketed to. Indeed it feels much like Razer’s modular PC project, a cool idea that just simply won’t have a market to sell its product to. It’ll be interesting to see if this catches on, but since Alienware is the first (and only) company doing this I don’t have high hopes.

Google VP Alan Eustace Breaks Baumgartner’s Record.

It was just over 2 years ago that Felix Baumgartner leapt from the Red Bull Stratos capsule at a height of 39 km above the Earth, breaking a record that had stood for over 50 years. The amount of effort that went into creating that project left many, including myself, thinking that Baumgartner’s record would stand for a pretty long time as few have the resources and desire to do something of that nature. However as it turns out one of Google’s Senior Vice Presidents, Alan Eustace, had been working on breaking that record in secret for the past 3 years and on Friday last week he descended to Earth from a height of 135,890 feet (41.4 km), shattering Baumgartner’s record by an incredible 7,000 feet.

Alan Eustace Record Breaking Jump

The two jumps could not be more different, both technically and in how they were run. For starters the Red Bull Stratos project was primarily a marketing exercise for Red Bull; the science that happened on the side was just a benefit for the rest of us. Eustace’s project on the other hand was done largely in secret, with him eschewing any help from Google in order to avoid it becoming a marketing event. Indeed I don’t think anyone bar those working on the project knew that this was coming, and the fact that they managed to achieve what Stratos did with a fraction of the funding speaks volumes for the team Eustace assembled.

Looking at the above picture, which shows Eustace dangling from a tenuous tether as he ascends, it’s plain to see that their approach was radically different to Stratos. Instead of building a capsule to transport Eustace, like the Stratos and Kittinger projects both did, they instead went for a direct tether to his pressure suit. This meant he spent the long journey skywards dangling face down which, whilst being nightmare material for some, would’ve given him an unparalleled view of the Earth receding beneath him. It also means that the load the balloon needed to carry was greatly reduced by comparison, which likely allowed him to ascend much quicker.

Indeed the whole set up is incredibly bare bones, with Eustace’s suit lacking many of the ancillary systems that Baumgartner’s had. One thing that amazed me was the lack of any kind of cooling system, something which meant that any heat he generated would stick around for an uncomfortably long period of time. To get around this he essentially remained motionless for the entire ascent, responding to ground control by moving one of his legs which they could monitor on camera. They did include a specially developed parachute though, called Saber, which ensured that he didn’t suffer from the same control issues that Baumgartner did during his descent.

It’s simply astounding how Eustace and his team managed to achieve this, given their short time frame and comparatively limited budget. I’m also wildly impressed that they managed to keep the whole thing a secret for that long, as it would’ve been very easy for them to overshadow the Stratos project, especially given some of the issues Stratos encountered. Whilst we might not all be doing high altitude jumps any time soon the technology behind this could find its way into safety systems for the coming generation of private space flight vehicles, something they will all need in short order.

Windows 10 Brings Vastly Improved Security.

Windows has always had a troubled relationship with security. As the most popular desktop operating system it’s frequently the target of all sorts of weird and wonderful attacks which, to Microsoft’s credit, they’ve done their best to combat. However it’s hard to forget the numerous missteps along the way, like the abhorrent User Account Control system which, in its default state, did little to improve security and just added another level of frustration for users. However if the features coming out of the technical preview of Windows 10 are anything to go by, Microsoft might finally be making big boy steps towards improving security on their flagship OS.

Windows 10 Logo

Whilst there are numerous third party solutions for two-factor authentication on Windows, like smartcards or tokens, the OS itself has never had that capability natively. This means that for the vast majority of Windows users this heightened security mode has been unavailable. Windows 10 brings with it the Next Generation Credentials service which gives users (both consumer and corporate) the ability to enrol a device to function as a second factor for authentication. The finer mechanics of how this works are still being worked out, however the application has a PIN which would prevent unauthorized access to the code, ensuring that losing your device doesn’t mean someone automatically gains access to your Windows login. Considering this kind of technology has been freely available for years (hell, my World of Warcraft characters have had it for years) it’s good to see it finally making its way into Windows as native functionality.
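Microsoft hasn’t published the internals of Next Generation Credentials (and, as the next paragraph notes, it’s built on FIDO standards rather than shared-secret codes), but for a sense of what the familiar phone-generated rolling code looks like, here’s the generic RFC 6238 TOTP scheme that authenticators like the WoW one use. It’s a point of comparison only, not Microsoft’s implementation, and the secret below is just an example value:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, period: int = 30, digits: int = 6) -> str:
    """Generic RFC 6238 time-based one-time password: the rolling six-digit
    code most phone-as-a-second-factor apps generate."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = struct.pack(">Q", int(time.time()) // period)
    mac = hmac.new(key, counter, hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

print(totp("JBSWY3DPEHPK3PXP"))  # example shared secret, enrolled once per device
```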

There are also extensive customization options available thanks to Microsoft adopting the FIDO Alliance standard rather than developing their own proprietary solution. In addition to traditional code-generating two-factor auth you can also use your smartphone as a sort of smartcard, with it being automatically recognised when brought next to a Bluetooth enabled PC. This opens up the possibility of your phone being a second factor for a whole range of services and products that currently make use of Microsoft technology, like Active Directory integrated applications. Whilst some might lament that possibility, the fact that it’s based on open standards means that such functionality won’t be limited to the Microsoft family of products.

Microsoft has also announced a whole suite of better security features, many of which have been third party products for the better part of a decade. Encryption is now available for the open and save dialogs natively within the Windows APIs, allowing developers to easily integrate encryption functionality into their applications. This comes hand in hand with controls around which applications can access said encrypted data, ensuring that data handling measures can’t be circumvented by using non-standard applications. Device lock down is also now natively supported, eliminating the need for other device access control software like Lumension (which, if you’ve ever worked with it, you’ll be thankful for).

It might not be the sexiest thing happening in Windows 10 but it’s by far one of the more important. As the de facto platform for so many people, any increase in Windows security is very much welcome and hopefully this will lead to a much more secure computing world for us all. These measures aren’t a silver bullet by any stretch of the imagination but they’ll go a long way to making Windows far more secure than it has been in the past.

Nexus 6 Announced, Confirms 6 Inches is What Everyone Wants.

For the last 6 months I’ve been on the lookout for the next phone that will replace my Xperia Z. Don’t get me wrong, it’s still quite a capable phone, however not a year has gone by in the past decade that there hasn’t been one phone that triggered my geeky lust, forcing me to part ways with several hundred dollars. However the improvements made since I acquired my last handset have just been evolutionary steps forward, none of which have been compelling enough to make me get my wallet out. I had hoped that the Nexus 6 would be the solution to my woes and, whilst it’s not exactly the technological marvel I was hoping for, Google might just be fortunate enough to get my money this time around.

Nexus 6

The Nexus 6 jumps on the huge screen bandwagon, bringing us an (almost) 6″ display boasting a 2560 x 1440 resolution on an AMOLED panel. The specs under the hood are pretty impressive with it sporting a quad-core 2.7GHz SoC with 3GB of RAM and a 3,220mAh battery. The rest of it is a rather standard affair, including the standard array of sensors that everyone has come to expect, a decent camera (that can do usable 4K video) and a choice between 32GB and 64GB of storage. If you were upgrading every 2 years or so the Nexus 6 would be an impressive step up, however compared to what’s been available in the market for a while now it’s not much more than a giant screen.
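For a sense of how dense that panel is: pixel density is just the diagonal pixel count divided by the diagonal size. Assuming the “(almost) 6 inch” figure works out to about 5.96″ (an assumption on my part), the maths comes out to roughly 490 ppi:

```python
import math

def pixels_per_inch(width_px, height_px, diagonal_inches):
    """Pixel density: diagonal pixel count divided by the physical diagonal."""
    return math.hypot(width_px, height_px) / diagonal_inches

# 5.96 inches is an assumed diagonal for the "(almost) 6 inch" panel.
print(round(pixels_per_inch(2560, 1440, 5.96)))  # ~493 ppi
```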

You can’t help but compare this phone to the recently released iPhone 6+ which also sports a giant screen and similar specifications. In terms of who comes out ahead it’s not exactly clear as they both seem to win out in various categories (the Nexus 6 has the better screen, the iPhone 6+ is lighter) but then again the main driver of which one of these you’d go for would be more heavily driven by which ecosystem you’d already bought into. I’d be interested to see how these devices compare side by side however as there’s only so much you can tell by looking at spec sheets.

As someone who’s grown accustomed to his 5″ screen I was hoping there’d be a diminutive sister to the Nexus 6, much like the iPhone 6 is to the 6+. You can still get the Nexus 5, which now sports Android L, however the specs are the same as they ever were which means there’s far less incentive for people like me to upgrade. Talking to friends who’ve made the switch to giant phones like this (and seeing my wife, with her tiny hands, deftly use her Galaxy Note) it seems like the upgrade wouldn’t be too much of a stretch. Had there been a smaller screen I would probably be a little more excited about acquiring one as I don’t really have a use case for a much bigger screen than what I have now. That could change once I get some time with the device, though.

So whilst I might not be frothing at the mouth to get Google’s latest handset they might just end up getting my money anyway, as there are just enough new features for me to justify upgrading my nearly 2 year old handset. There’s no mistaking that the Nexus 6 is the iPhone 6+ for those on the Android ecosystem and I’m sure there will be many a water cooler conversation over which of them is the better overall device. For me though the main draw is the stock Android interface with updates that are unimpeded by manufacturers and carriers, something which has been the bane of my Android existence for far too long. Indeed that’s probably the only compelling reason I can see to upgrade to the Nexus 6 at the moment, which is likely enough for some.