For the longest time, far too long in my opinion, XP had been the beast that couldn’t be slain. The numerous releases of Windows after it never seemed to make much more than a slight dent in its usage stats and it reigned as the most used operating system worldwide for an astonishing 10 years after its initial release. It finally lost its crown to Windows 7 back in October of 2011 but it still managed to hold on to a market share that dwarfed many of its competitors. Its decline was slow though, much slower than that of an operating system fast approaching end of life should have been. However last quarter saw it drop an amazing 6% in total usage, finally pushing it below the combined usage of Windows 8 and 8.1.
The reasons behind this drop are wide and varied but it finally appears that people are taking Microsoft’s warnings that their product is no longer supported seriously and are looking to upgrade. Surprisingly though the vast majority of people transitioning away from the aging operating system aren’t going for Windows 7, they’re going straight to Windows 8.1. This isn’t to say that 8.1 is eating away at 7’s market share, however; 7 is actually up about half a percent in the same time frame. The upgrade path is likely due to the fact that Microsoft has ceased selling OEM copies of Windows 7. Most of those new licenses do come with downgrade rights, though I’m sure few people actually use them.
If XP’s current downward trend continues along this path then it’s likely to hit low single digit usage figures sometime around the middle of next year. On the surface this would appear to be a good thing for Microsoft as it means that the majority of their user base will be on a far more modern platform. However at the same time the decline might just be a little too swift for those users to hold out for Windows 10, which isn’t expected to be RTM until late next year. Considering the uptake performance of Windows 8 and 8.1 this could be something of a concern for Microsoft, although there is another potential avenue: Windows 7 users.
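As a rough sanity check on that timeline, the extrapolation can be sketched in a few lines. The ~24% starting share is my own illustrative assumption; the 6-point quarterly drop is the figure from last quarter mentioned above.

```python
# Back-of-the-envelope extrapolation of XP's decline.
share = 24.0            # assumed current XP share (illustrative, not a reported figure)
drop_per_quarter = 6.0  # the quarterly decline noted above
quarters = 0

while share >= 10.0:    # "low single digits" threshold
    share -= drop_per_quarter
    quarters += 1

print(quarters, share)  # 3 6.0 -> roughly three quarters, i.e. mid next year
```

At that pace the single-digit mark arrives after about three quarters, which lines up with the mid-next-year estimate; a slower, tapering decline would obviously push that date out.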
The last time Microsoft had a disastrous release like Windows 8, the next version of Windows to take the majority of the market share was 7, a decade after the original had been released. Whilst it’s easy to argue that this time will be different (like everyone does) a repeat performance of that nature would see Windows 7 remaining the dominant platform all the way up until 2019. Certainly this is something that Microsoft wants to avoid, so it will be interesting to see how fast Windows 10 gets picked up and which segments of Microsoft’s business it will cannibalize. Should it be primarily Windows 7 based then I’d say everything would be rosy for them, however if it’s all Windows 8/8.1 then we could be seeing history repeat itself.
Microsoft is on the cusp of either reinventing itself with Windows 10 or being doomed to forever repeat the cycle which consumers have forced them into. To Microsoft’s credit they have been trying their best to break out of this mould, however it’s hard to argue with the demands of the consumer and there’s only so much they can do before they lose their customers’ faith completely. The next year will be very telling for how the Microsoft of the future will look and how much of history will repeat itself.
Microsoft really can’t seem to win sometimes. If they stop making noticeable changes to their products everyone starts whining about how they’re no longer innovating and that people will start to look for alternatives. However should they really try something innovative everyone rebels, pushing Microsoft to go back to the way things ought to be done. It happened with Vista, the Ribbon interface and most recently with Windows 8. Usually what happens though is that the essence of the update makes it into the new version with compromises made to appease those who simply can’t handle change.
And with that, ladies and gentlemen, Microsoft has announced Windows 10.
Everyone seems to be collectively shitting their pants over the fact that Microsoft skipped a version number, somehow forgetting that most of the recent versions of Windows have come sans any number at all. If you want to get pedantic about it (and really, I do) the last 10 versions of Windows have been: Windows 3.1, Windows 95, Windows 98, Windows NT 4.0, Windows 2000, Windows ME (gag), Windows XP, Windows Vista, Windows 7 and Windows 8. If you were expecting them to release Windows 9 because the last 2 versions of Windows just happened to be in numerical order, I’m going to hazard a guess you ate a lot of paint as a child.
On a more serious note, the changes that many people were expecting to make up the 8.2 release appear to have been bundled into Windows 10. The start menu makes its triumphant return after 2 years on the sidelines, although those modern/metro apps that everyone loved to hate will now make an appearance on there. For someone like me who hasn’t really relied on the start menu since before Windows 8 arrived (pressing the Windows key and then typing in what I want is much faster than clicking my way through the menu) I’m none too bothered by its return. It will probably make Windows 10 more attractive to the enterprise though, as many of them are still in the midst of upgrading from XP (or purposefully delaying upgrading to 8).
The return of the start menu goes hand in hand with the removal of the metro UI that hosted those kinds of apps, which have now been given the ability to run in a window on the desktop. This is probably one of the better improvements as it no longer means you get a full screen app taking over your desktop if you accidentally click on something that somehow associated itself with a metro app. For me this most often seems to happen with mail as, even though I’ve got Outlook installed, the Mail app still seems to want to launch itself every so often. Whether or not this will make that style of app more palatable to the wider world remains to be seen, however.
There have also been a few other minor updates announced, like the inclusion of multiple desktops and improved Aero Snap. The command line has also received a usability update, now allowing you to use CTRL + C and CTRL + V to copy and paste respectively. In all honesty if you’re still doing your work in the command line on any version of Windows above Vista you’re doing it wrong, as PowerShell has been the shell of choice for everyone for the better part of 7 years. I’m sure some users will be in love with that change but the vast majority of us moved on long ago.
The release date is scheduled for late next year with a technical preview available right now for enterprising enthusiasts. It will be interesting to see what the take up rate is as that date might be a little too late for enterprises still running XP, who will most likely favour 7 instead. That being said the upgrade path from 7 to 10 is far easier, so there’s the possibility of Windows 10 seeing a surge in uptake a couple of years down the road. For those early adopters of Windows 7 this next release might just be hitting the sweet spot for them to upgrade so there’s every chance that 10 will be as successful as 7.
I’ll reserve my judgement on the new OS until I’ve had a good chance to sit down and use it for an extended period of time. Microsoft rarely makes an OS that’s beyond saving (I’d really only count ME in there) and whilst I might disagree with the masses on 8’s usability I can’t fault Microsoft for capitulating to them. Hopefully the changes aren’t just skin deep as this is shaping up to be the last major revision of Windows we’ll ever see and there’d be nothing worse than for Microsoft to build their future empire on sand.
It’s hard to argue that Windows 8 has been a great product for Microsoft. In the 2 years that it’s been on the market it’s managed to secure some 12% of total market share, which sounds great on the surface until you consider that its predecessor managed to nab some 40% in a similar time frame. The reasons behind this are wide and varied however there’s no mistaking that a large part of it was the Metro interface, which just didn’t sit well with primarily desktop users. Microsoft, to their credit, has responded to this criticism by giving consumers what they want but, like Vista, the product that Windows 8 is today is overshadowed by its rocky start. It seems clear now that Microsoft is done with Windows 8 as a platform and is looking towards its successor, codenamed Windows Threshold.
Not a whole lot is known about what Threshold will entail but what is known points to a future where Microsoft is distancing itself from Windows 8 in the hopes of getting a fresh start. It’s still not known whether Threshold will become known as Windows 9 (or whatever name they might give it), however the current release date is slated for sometime next year, in line with Microsoft’s new dynamic release schedule. This would also put it at 3 years after the initial release of Windows 8, which ties into the larger Microsoft product cycle. Indeed most speculators are pegging Threshold to be much like the Blue release of last year, with all Microsoft products receiving an update upon release. What interests me about this release isn’t so much what it contains, more what it’s going to take away from Windows 8.
Whilst Microsoft has made inroads into making Windows 8 feel more like its predecessors the experience is still deeply tied to the Metro interface. Pressing the Windows key doesn’t bring up the start menu and Metro apps still have that rather obnoxious behaviour of taking over your entire screen. Threshold however is rumoured to do away with this, bringing back the start menu with a Metro twist that will allow you to access those kinds of applications without having to open up the full interface. Indeed for desktop systems, those that are bound to a mouse and keyboard, Metro will be completely disabled by default. Tablets and other hybrid devices will still retain the UI, with the latter switching between modes depending on what actions occur (switching to desktop when docked, Metro when in tablet form).
From memory such features were actually going to make up parts of the next Windows 8 update, not the next version of Windows itself. Microsoft did add some similar features to Windows 8 in the last update (desktop users now default to the desktop on login, not Metro) but the return of the start menu and the other improvements are seemingly not for Windows 8 anymore. Considering just how poor the adoption rates of Windows 8 have been this isn’t entirely surprising, and Microsoft might be looking for a clean break away from Windows 8 in order to drive better adoption of Threshold.
It’s a strategy that has worked well for them in the past so it shouldn’t be surprising to see Microsoft doing this. For those of us who actually used Vista (after it was patched to remedy all the issues) we knew that Windows 7 was Vista under the hood, it was just visually different enough to break past people’s preconceptions about it. Windows Threshold will likely be the same, different enough from its direct ancestor that people won’t recognise it but sharing the same core that powered it. Hopefully this will be enough to ensure that Windows 7 doesn’t end up being the next XP as I don’t feel that’s a mistake Microsoft can afford to keep repeating.
The Surface has always been something of a bastard child for Microsoft. They were somewhat forced into creating a tablet device as everyone saw them losing to Apple in this space (even though Microsoft’s consumer electronics division isn’t one of their main profit centers) and their entry into the market managed to confuse a lot of people. The split between the Pro and RT lines was clear enough for those of us in the know, however consumers, who in the face of 2 seemingly identical choices will often prefer the cheaper one, were left with devices that didn’t function exactly as they expected. The branding of the Surface then changed slightly so that those seeking the device would likely end up with the Pro model and all would be right with the world. The Surface Pro 3, announced last week, carries on that tradition albeit with a much more extreme approach.
As you’d expect the new Surface is an evolutionary step up in terms of functionality, specifications and, funnily enough, size. You now have the choice of an Intel i3, i5 or i7, 4GB or 8GB of memory and up to 512GB of SSD storage. The screen has swelled to 12″ in size and now sports a pretty incredible 2160 x 1440 resolution, equal to that of many high end screens you’d typically find on a desktop. These additional features actually come with a reduction in weight from the Surface Pro 2, down from 900g to a paltry 790g. There are some other minor changes as well, like the multi-position kickstand and a redesigned pen, but those are small potatoes compared to the rest of the changes, which seem to have aimed the Surface more at being a laptop replacement than a tablet that can do laptop things.
Since I carry a laptop with me for work (a Dell Latitude E6430 if you were wondering) I’m most certainly sensitive to the issues that plague people like me and the Surface Pro has the answer to many of them. Having to lug my work beast around isn’t the most pleasant experience and I’ve long been a champion of moving everyone across to Ultrabooks in order to address many of those concerns. The Surface Pro is essentially an Ultrabook in a tablet form factor, which provides the benefits of both in one package. Indeed colleagues of mine who’ve bought a Surface for that purpose love them, and those who bought the original Surface Pro back at the TechEd fire sale all said similar things after a couple of days of use.
The one thing that would seal the deal for me on the Surface as the replacement for my now 2 year old Zenbook would be the inclusion of (or at least the option to include) a discrete graphics card. Whilst I don’t do it often I do use my (non-work) laptop for gaming and, whilst the Intel HD 4400 can play some games decently, the majority of them will struggle. However the inclusion of even a basic discrete chip would make the Surface a portable gaming powerhouse and the prime choice for when my Zenbook reaches retirement. That’s still a year or two away however so Microsoft may end up getting my money in the end.
What’s really interesting about this announcement is the profound lack of an RT version of the Surface Pro 3. Indeed whilst I didn’t think there was anything to get confused about between the two versions, it seems a lot of people did and that has led to a lot of disappointed customers. It was obvious that Microsoft was downplaying the RT version when the second one was announced last year but few thought that it would lead to Microsoft outright cancelling the line. Indeed the lack of an accompanying Surface RT would indicate that Microsoft isn’t so keen on that platform, something which doesn’t bode well for the few OEMs that decided to play in that space. On the flip side it could be a great in for them, as Microsoft eating up the low end of the market was always going to be a sore spot for their OEMs and Microsoft still seems committed to the idea from a purely technological point of view.
The Surface Pro 3 might not be seeing me pull out the wallet just yet but there’s enough to like about it that I can see many IT departments turning towards it as the platform of choice for their mobile environments. The lack of an RT variant could be construed as Microsoft giving up on the RT idea but I think it’s probably more to do with the confusion around each platform’s value proposition. Regardless it seems that Microsoft is committed to the Surface Pro platform, something which was heavily in doubt just under a year ago. It might not be the commercial success that the iPad et al were but it seems the Surface Pro will become a decent revenue generator for Microsoft.
The IT industry has always been one of rapid change and upheaval, with many technology companies only lasting as long as they could innovate. This is at odds with the traditional way businesses operated, preferring to stick to predictable cycles and seek gains through incremental improvements in process, procedure and marketing. The result was the traditional 3~5 year cycle that many enterprises engaged in, upgrading to the latest available technology usually years after it had been released. However the pace of innovation has increased to the point where such a cycle could leave an organisation multiple generations behind, and it’s not showing any signs of slowing down soon.
I mentioned last year how Microsoft’s move from a 3 year development cycle to a yearly one was a good move, allowing them to respond to customer demands much more quickly than they were previously able to. However the issue I’ve come across is that whilst I, as a technologist, love hearing about the new technology, customer readiness for this kind of innovation simply isn’t there. The blame for this lies almost wholly at the feet of XP’s 12 year dominance of the desktop market, a dominance that even the threat of ending support did little to dent. So whilst the majority may have made the transition now, they’re by no means ready for a technology upgrade cycle that happens on a yearly basis. There are several factors at play here (tools, processes and product knowledge being the key ones) but the main issue remains the same: there’s a major disconnect between Microsoft’s current release schedule and its adoption among its biggest customers.
Microsoft, to their credit, are doing their best to foster rapid adoption. Getting Windows 8.1 at home is as easy as downloading an app from the Windows Store and waiting for it to install, something you can easily do overnight if you can’t afford the down time. Similarly the tools available to do deployments on a large scale have improved immensely, something anyone who’s used System Center Configuration Manager 2012 (and its previous incarnations) will attest to. Still, even though the transition from Windows 7 to 8 or above is much lower risk than from XP to 7, most enterprises aren’t looking to make the move, and it’s not just because they don’t like Windows 8.
With Windows 8.2 slated for release sometime in August this year Windows 8 will regain an almost identical look and feel to that of its predecessors, allowing users to bypass the Metro interface completely and giving them back the beloved start menu. With that in place there’s almost no reason for people not to adopt the latest Microsoft operating system, yet it’s unlikely to see a spike in adoption due to the inertia of large IT operations. Indeed even those that have managed to make the transition to Windows 8 probably won’t be able to make the next move until 8.3 makes its debut, or possibly even Windows 9.
Once the Windows 8 family becomes the standard however I can see IT operations looking to move towards a more rapid pace of innovation. The changes between the yearly revisions are much less likely to break or change core functionality, removing much of the risk that came with adopting a new operating system (application remediation). Additionally, once IT sections have moved to better tooling, upgrading their desktops should also be a lot easier. I don’t think this will happen for another 3+ years however as we’re still in the midst of an XP hangover, one that’s not likely to subside until its market share is in the single digits. Past that, we administrators then have the unenviable job of convincing our businesses that engaging in a faster product update cycle is good for them, even if the cost is low.
As someone who loves working with the latest and greatest from Microsoft it’s an irritating issue for me. I spend countless hours trying to skill myself up only to end up working on 5+ year old technology for the majority of my work. Sure it comes in handy eventually but the return on investment feels extremely low. It’s my hope that the cloud movement, which has already driven a lot of businesses to look at more modern approaches to the way they do their IT, will be the catalyst by which enterprise IT begins to embrace a more rapid innovation cycle. Until then however I’ll just lament all the Windows Server 2012 R2 training I’m doing and wait until TechEd rolls around again to figure out what’s obsolete.
When Facebook first announced the Open Compute Project it was a very exciting prospect for people like me. Ever since virtualization became the de facto standard for servers in the data center, hardware density has been the name of the game. Client after client I worked for was always seeking out ways to reduce their server fleet’s footprint, both by consolidating through virtualization and by taking advantage of technology like blade servers. However whilst the past half decade has seen a phenomenal increase in the amount of computing power available, and thus an increase in density, there hasn’t been another blade-style revolution. That was until Facebook went open kimono on their data center strategies.
The designs proposed by the Open Compute Project are pretty radical if you’re used to traditional computer hardware, primarily because they’re so minimalistic and the fact that they expect a 12.5V DC input rather than the usual 240/120VAC that’s typical of all modern data centers. Other than that they look very similar to your typical blade server and indeed the first revisions appeared to get densities that were pretty comparable. The savings at scale were pretty tremendous however as you could gain a lot of efficiency by not running a power supply in every server and their simple design meant their cooling aspects were greatly improved. Apart from Facebook though I wasn’t aware of any other big providers utilizing ideas like this until Microsoft announced today that it was joining the project and was contributing its own designs to the effort.
On the surface they look pretty similar to the current Open Compute standards, although the big differences seem to come from the chassis. Rather than doing away with a power supply completely (like the current Open Compute servers advocate) it has a dedicated power supply in the base of the chassis shared by all the servers. Whilst I can’t find any details on it I’d expect this means it could operate in a traditional data center with a VAC power feed rather than requiring the more specialized 12.5V DC. At the same time the density they can achieve with their cloud servers is absolutely phenomenal, being able to cram 96 of them into a standard rack. For comparison the densest blade system I’ve ever supplied topped out at 64 servers and most wouldn’t go past 48.
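To put those figures side by side, here is the comparison worked through; the only assumption is that all three designs are being measured against the same standard rack.

```python
# Rack density comparison using the server counts quoted above.
ocp_cloud_servers = 96   # Microsoft's contributed cloud server design, per rack
densest_blade = 64       # densest blade system the author has supplied
typical_blade = 48       # what most blade deployments top out at

print(ocp_cloud_servers / densest_blade)   # 1.5 -> 50% denser than the best blades
print(ocp_cloud_servers / typical_blade)   # 2.0 -> double a typical blade rack
```

A 1.5x to 2x jump in servers per rack is exactly the kind of step change that blades themselves delivered over rack-mount servers, which is why these designs are so interesting.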
This then raises the question: when will we start to see server systems like this trickle down to the enterprise and consumer markets? Whilst we rarely have requirements at the scales these servers are typically used for, I can guarantee there’s a market for servers of this nature as enterprises continue on their never ending quest for higher densities and better efficiency. Indeed this feels like it would be advantageous for some of the larger server manufacturers to pursue since, if these large companies are investing in developing their own hardware platforms, it shows that there’s a niche the manufacturers haven’t yet filled.
Indeed if the system can also accommodate non-compute blades (like the Microsoft one shows with the JBOD expansion) such ideas would go toe to toe with system-in-a-box solutions like the Cisco UCS which, to my surprise, quickly pushed its way to the #2 spot for x86 blade servers last year. Of course there are already similar systems on the market from others, but in order to draw people away from that platform other manufacturers are going to have to offer something more and I think the answer to that lies within the Open Compute designs.
If I’m honest I think the real answer to the question posited in the title of this post is no. Whilst it would be possible for anyone working at Facebook and Microsoft levels of scale to engage in something like this, unless a big manufacturer gets on board Open Compute based solutions just won’t be feasible for the clients I service. It’s a shame because I think there are some definite merits to the platform, something which is validated by Microsoft joining the project.
As a poor student the last thing I wanted to pay for was software. Whilst the choice to pirate a base operating system is always questionable (it’s the foundation on which all your computing activities rely) it was either pay the high license cost or find an alternative. I’ve since found numerous legitimate alternatives of course (thank you BizSpark) but not everyone is able to take advantage of them. Thus for many the choice to upgrade their copy of Windows typically comes with the purchase of a new computer, something which doesn’t happen as often as it used to. I believe this is one factor that’s affected the Windows 8/8.1 adoption rates and it seems Microsoft might be willing to try something radical to change it.
Rumours have been making the rounds that Microsoft is potentially going to offer a low cost (or completely free) version of their operating system dubbed Windows 8.1 with Bing. Details as to what is and isn’t included are still somewhat scant but it seems like it will be a full version without any major strings attached. There are even musings about some of Microsoft’s core applications, like Office, being bundled in with the new version of Windows 8.1. This wouldn’t be unusual (they already do it with Office Core for the Surface), however it’s those consumer applications from which Microsoft draws a lot of its revenue in this particular market segment, so their inclusion would mean the revenue would have to be made up somewhere else.
Many are touting this release as being targeted mostly at Windows 7 users who are staving off making the switch to Windows 8. For them the barrier to entry is by far the lowest, although they’re also the ones who have the least to gain from the upgrade. Depending on the timing of the release though this could also be a boon to those XP laggards, who run out of support in just over a month. The transition from XP to Windows 8 is much more stark, both in terms of technology and user experience, however there are numerous things Microsoft could do in order to smooth it over.
Whilst I like the idea there’s still the looming question of how Microsoft would monetize something like this as releasing something for free and making up the revenue elsewhere isn’t really their standard business model (at least not with Windows itself). The “With Bing” moniker seems to suggest that they’ll be relying heavily on browser based revenue, possibly by restricting users to only being able to use Internet Explorer. They’ve got into hot water for doing similar things in the past although they’d likely be able to argue that they no longer hold a monopoly on Internet connected devices like they once did. Regardless it will be interesting to see what the strategy is as the mere rumour of something like this is new territory for Microsoft.
It’s clear that Microsoft doesn’t want Windows 7 to become the next XP and is doing everything they can to get users to make the switch. They’re facing an uphill battle as there’s still a good 30% of Windows users on XP, ones who are unlikely to change even in the face of imminent end of life. A free upgrade might be enough to coax some users across, however Microsoft needs to start selling the transition from any of their previous versions as a seamless affair, something that anyone can do on a lazy Sunday afternoon. Even then there will still be holdouts but at least it’d go a long way towards pushing the other versions’ market share down into the single digits.
In the time that Microsoft has been a company it has only known two Chief Executive Officers. The first was unforgettably Bill Gates, the point man of the company from its founding days who saw it grow from a small software shop to the industry giant of the late 90s. Then, right at the beginning of the new millennium, Bill Gates stood down and passed the crown to long time business partner Steve Ballmer, who then spent the better part of a decade and a half attempting to transform Microsoft from a software company into a devices and services one. Rumours had been spreading for some time about who was slated to take over from Ballmer and, last week, after much searching, Microsoft veteran Satya Nadella took over as the third CEO of the venerable company. Now everyone is wondering where he will take it.
For those who don’t know him, Nadella’s background at Microsoft is in the Server and Tools division, where he’s held several high ranking positions over a number of years. Most notably he’s been in charge of Microsoft’s cloud computing endeavours, including building out Azure, which hit $1 billion in sales last year, something I’m sure helped seal the deal on his new position. Thus many would assume that Nadella’s vision for Microsoft would trend along these lines, something which runs a little contrary to the more consumer focused business that Ballmer sought to deliver; however his request to have Bill Gates step down as Chairman of the Board so that he could have Gates as an advisor in this space says otherwise.
As with any changing of the guard many seek to impress upon the new bearer their wants for the future of the company. Nadella has already come under pressure to drop some of Microsoft’s less profitable endeavours, including things like Bing, Surface and even the Xbox division (despite it being quite a revenue maker, especially as of late). Considering these products are the culmination of the efforts of the 2 previous CEOs, both of whom will still be involved in the company to some degree, taking an axe to them would be an extraordinarily hard thing to do. These are the products they’ve spent years and billions of dollars building, so dropping them seems like a short sighted endeavour, even if it would make the books look a little better.
Indeed many of the business units which certain parties would look to cut are the ones that are seeing good growth figures. The Surface has gone from a $900 million write down disaster to losing a paltry $39 million over 2 quarters, an amazing recovery that signals profitability isn’t too far off. Similarly Bing, the search engine that we all love to hate on, saw a 34% increase in revenue in a single quarter. It’s not even worth mentioning the Xbox division as it’s been doing well for years now and the release of the Xbox One, with its solid initial sales, ensures that it remains one of Microsoft’s better performers.
The question then becomes whether Nadella, and the board he now serves, sees the ongoing value in these projects. Indeed much of the work Microsoft has done in the past decade has been focused on unifying many disparate parts of their ecosystem, heading towards that nirvana where everything works together seamlessly. Removing these products from the picture feels like Microsoft backing itself into a corner, one where it can be easily shoehorned into a narrative that will see it lose ground to those competitors it has been fighting for years. In all honesty I feel Microsoft is so dominant in those sectors already that there’s little to be gained from receding from perceived failures, and Nadella should take this chance to innovate on his predecessor’s ideas, not toss them out wholesale.
Ever since Nokia announced its partnership with (and subsequent acquisition by) Microsoft I had wondered when we’d start seeing a bevy of feature phones running the Windows Phone operating system behind the scenes. Sure there are a lot of cheaper Lumias on the market (the Lumia 520 can be had for $149 outright) but there isn’t anything in the low end where Nokia has been the undisputed king for decades. That section of the market is now dominated by Nokia’s Asha line of handsets, a curious new operating system that came into being shortly after Nokia canned all development on Symbian and their other alternative mobile platforms. However there have long been rumours circulating that Nokia was developing a low end Android handset to take over this area of the market, predominantly due to the rise of cheap Android handsets that were beginning to trickle in.
The latest leaks from engineers within Nokia appear to confirm these rumours, with the above pictures showcasing a prototype handset developed under the Normandy code name. Details are scant as to what the phone actually consists of but the notification bar does look distinctly Android, with the rest of the UI not bearing any resemblance to anything else on the market currently. This fits in with the rumours that Nokia was looking to fork Android and make its own version of it, much like Amazon did for the Kindle Fire, which would also mean they’d likely be looking to create their own app store as well. This would be where Microsoft could have its in, pushing Android versions of its Windows Phone applications through its own distribution channel without having to seek Google’s approval.
Such a plan relies almost wholly on the fact that Nokia is the trusted name in the low end space, managing to command a sizeable chunk of the market even in the face of numerous rivals. Even though Windows Phone has been gaining ground recently in developed markets it’s still been unable to gain much traction in emerging ones. Using Android as a trojan horse to get users onto their app ecosystem could potentially work, however it’s far more likely that those users would simply remain on the new Android platform. Still there would be a non-zero number who would eventually look towards moving upwards in terms of functionality, and when it comes to Nokia there’s only one platform to choose from.
Of course this all hinges on the idea that Microsoft is actively interested in pursuing this and it’s not simply part of the ongoing skunk works of Nokia employees. That being said Microsoft already makes a large chunk of change from every Android phone sold thanks to its licensing arrangements with numerous vendors, so it would have a slight edge in creating a low end Android handset. Whether it eventually uses that to try to leverage users onto the Windows Phone platform is something we’ll have to wait to see, as I imagine it’ll be a long time before an actual device sees the light of day.
We gamers tend to be hoarders when it comes to our game collections, with many of us amassing huge stashes of titles on our platforms of choice. My Steam library alone blew past 300 titles some time ago and anyone visiting my house will see the dozens of game boxes littering every corner. There’s something of a sunk cost in all this and it’s why the idea of being able to play those titles on a current generation system is always attractive to people like me: we like to go back sometimes and play through the games of our past. Whilst my platform of choice rarely suffers from this (PCs are the kings of backwards compatibility) my large console collection is in varying states of being able to play my library of titles and, if I’m honest, I don’t think it’s ever going to get better.
For the current kings of the console market the decision to do away with backwards compatibility has been something of a sore spot for many gamers. Whilst the numbers show that most people buy new consoles to play the new games on them¹ there’s a non-zero number who get a lot of enjoyment out of their previous gen titles. Indeed I probably would’ve actually used my PlayStation 4 for gaming if it had some modicum of backwards compatibility, as right now there aren’t any compelling titles for it. This doesn’t seem to have been much of a hindrance to adoption of the now current gen platforms however.
There does seem to be a lot of faith being poured into the idea that backwards compatibility will eventually come through cloud services, which only Sony has committed to developing. The idea is attractive, mainly because it enables you to play any time you want from a multitude of devices. However, as I’ve stated in the past, the feasibility of such an idea isn’t great, especially if it relies on server hardware being in many disparate locations around the world to make the service viable. Whilst both Sony and Microsoft have the capital to make this happen (and indeed Sony has a head start thanks to the Gaikai acquisition) the issues I previously mentioned are only compounded when it comes to providing a cloud based service for console games.
The easiest way of achieving this is to just run a bunch of the old consoles in a server environment and allow users to connect directly to them. This has the advantage of being cheaper from a capital point of view, as I’m sure both Sony and Microsoft have untold hoards of old consoles to take advantage of, however the service would be inherently unscalable and, past a certain point, unmaintainable. The better solution is to emulate the console in software, which would allow you to run it on whatever hardware you wanted, but this brings with it challenges I’m not sure even Microsoft or Sony are capable of solving.
You see, whilst the hardware of the past generation consoles is rather long in the tooth, emulating it in software is nigh on impossible. Whilst there are some experimental efforts by the emulation community to do this none of them have produced anything capable of running even the most basic titles. Indeed even with access to the full schematics of the hardware, recreating them in software would be a herculean effort, especially for Sony, whose Cell processor is a nightmare architecturally speaking.
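To get a feel for why emulation is so expensive, consider that at its core an emulator is an interpreter loop: every guest instruction has to be decoded and dispatched in software, every time it executes. This toy sketch in Python uses an invented three-field instruction set (nothing like the Cell’s actual ISA, which is vastly more complex) purely to show the shape of the problem:

```python
# Toy illustration of why software emulation is slow: every guest
# instruction goes through a decode-and-dispatch loop in software.
# The ISA here is invented for the example; a real console CPU
# (let alone the Cell's SPEs) is orders of magnitude more complex.

def run(program):
    """Interpret a list of (op, dst, src) tuples against a register file."""
    regs = {}
    pc = 0
    while pc < len(program):
        op, dst, src = program[pc]
        if op == "movi":          # load immediate: regs[dst] = src
            regs[dst] = src
        elif op == "add":         # regs[dst] += regs[src]
            regs[dst] = regs.get(dst, 0) + regs[src]
        elif op == "jnz":         # if regs[dst] != 0, jump to index src
            if regs[dst] != 0:
                pc = src
                continue
        else:
            raise ValueError(f"unknown op: {op}")
        pc += 1
    return regs

# Sum 3 + 2 + 1 into r1 by looping until the counter in r0 hits zero.
countdown = [
    ("movi", "r0", 3),     # counter
    ("movi", "r1", 0),     # accumulator
    ("movi", "r2", -1),    # decrement constant
    ("add",  "r1", "r0"),  # accumulator += counter
    ("add",  "r0", "r2"),  # counter -= 1
    ("jnz",  "r0", 3),     # loop back while counter != 0
]
print(run(countdown)["r1"])  # 6
```

Even this trivial loop burns dozens of host operations per guest instruction; scale that up to multiple exotic cores, a GPU and custom I/O all running in lockstep and the difficulty becomes apparent.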
There’s also the possibility that Sony has had the Gaikai team working on a Cell to x86 translation library, which could make the entire PlayStation 3 library available without too much hassle, although there would likely be a heavy trade-off in performance. In all honesty that’s probably the most feasible solution as it’d allow them to run the titles on commodity hardware, but you’d still have the problems of scaling out the service that I’ve touched on in previous posts.
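The rough idea behind a translation library, as opposed to straight emulation, is to pay the decode cost once ahead of time rather than on every execution. A minimal sketch, again using an invented instruction set with Python closures standing in for generated x86 (this is purely illustrative, not anything Sony or Gaikai have described):

```python
# Toy sketch of ahead-of-time binary translation: each guest instruction
# (from an invented ISA, nothing like the real Cell) is decoded once and
# turned into a native callable, rather than being re-decoded on every
# execution the way an interpreter would.

def translate(program):
    """Translate straight-line (op, dst, src) instructions into closures."""
    ops = []
    for op, dst, src in program:
        if op == "movi":    # load an immediate value into a register
            ops.append(lambda regs, d=dst, v=src: regs.__setitem__(d, v))
        elif op == "add":   # regs[dst] += regs[src]
            ops.append(lambda regs, d=dst, s=src:
                       regs.__setitem__(d, regs.get(d, 0) + regs[s]))
        else:
            raise ValueError(f"unknown op: {op}")

    def compiled(regs=None):
        regs = {} if regs is None else regs
        for fn in ops:      # no decode step here: just run the translations
            fn(regs)
        return regs

    return compiled

# Translate once, then run many times without paying the decode cost again.
guest_code = [("movi", "r0", 2), ("movi", "r1", 5), ("add", "r0", "r1")]
host_fn = translate(guest_code)
print(host_fn())  # {'r0': 7, 'r1': 5}
```

The catch, and the source of the performance trade-off mentioned above, is that translated code can only approximate quirks of the original hardware such as timing and memory behaviour, which is exactly where console titles tend to break.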
Whatever ends up happening we’re not going to hear much more about it until sometime next year and it’ll be a while after that before we can get our hands on it (my money is on 2016 for Australia). If you’re sitting on a trove of old titles and hoping that the next gen will allow you to play them I wouldn’t hold your breath, as it’s much more likely that the capability will be extremely limited, likely requiring an additional cost on top of your PlayStation Plus membership. That’s even if it works as everyone is speculating it will, as I can see it easily turning out to be something else entirely.
¹ I can’t seem to find a source for this but back when the PlayStation 3 Slim was announced (having that capability removed) I can remember a Sony executive saying something to this effect. It was probably a combination of factors that led up to him saying that though, as around that time the PlayStation 2 Slim was still being manufactured and was retailing for AUD$100, so it was highly likely that anyone who had the cash to splurge on a PlayStation 3 already owned a PlayStation 2.