Posts Tagged ‘microsoft’


Will Open Compute Ever Trickle Down?

When Facebook first announced the Open Compute Project it was a very exciting prospect for people like me. Ever since virtualization became the de facto standard for servers in the data center, hardware density has been the name of the game. Client after client I worked for was seeking out ways to reduce their server fleet’s footprint, both by consolidating through virtualization and by taking advantage of technology like blade servers. However whilst the past half decade has seen a phenomenal increase in the amount of computing power available, and thus an increase in density, there hasn’t been another revolution on the scale of blades. That was until Facebook went open kimono on its data center strategies.

The designs proposed by the Open Compute Project are pretty radical if you’re used to traditional computer hardware, primarily because they’re so minimalistic and because they expect a 12.5V DC input rather than the 240/120V AC that’s typical of modern data centers. Other than that they look very similar to your typical blade server, and indeed the first revisions appeared to achieve pretty comparable densities. The savings at scale were tremendous however: you gain a lot of efficiency by not running a power supply in every server, and the simple design greatly improved cooling. Apart from Facebook though I wasn’t aware of any other big providers utilizing ideas like this, until Microsoft announced today that it was joining the project and contributing its own designs to the effort.

On the surface they look pretty similar to the current Open Compute standards although the big differences seem to come from the chassis. Instead of doing away with a power supply completely (like the current Open Compute servers advocate) it has a dedicated power supply in the base of the chassis for all the servers. Whilst I can’t find any details on it I’d expect this means it could operate in a traditional data center with an AC power feed rather than requiring the more specialized 12.5V DC. At the same time the density they can achieve with their cloud servers is absolutely phenomenal, cramming 96 of them into a standard rack. For comparison the densest blade system I’ve ever supplied topped out at 64 servers and most wouldn’t go past 48.
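To get a feel for why centralizing power conversion matters at these scales, here’s a rough back-of-envelope sketch in Python. The efficiency figures and per-server load are illustrative assumptions of mine, not numbers from the Open Compute spec or Microsoft’s design:

```python
# Back-of-envelope only: assume a mid-range per-server PSU runs at ~85%
# efficiency under typical load, while a shared rack-level rectifier can
# be sized to run at ~94%. Both figures are assumptions for illustration.
SERVERS_PER_RACK = 96   # Microsoft's claimed cloud server density
SERVER_LOAD_W = 300     # hypothetical average draw per server

def wall_power(load_w, efficiency):
    """Power drawn from the grid to deliver load_w to the motherboard."""
    return load_w / efficiency

per_server_psu = wall_power(SERVER_LOAD_W, 0.85) * SERVERS_PER_RACK
shared_rectifier = wall_power(SERVER_LOAD_W, 0.94) * SERVERS_PER_RACK

print(f"Per-server PSUs:  {per_server_psu / 1000:.1f} kW per rack")
print(f"Shared rectifier: {shared_rectifier / 1000:.1f} kW per rack")
print(f"Saving:           {per_server_psu - shared_rectifier:.0f} W per rack")
```

A few kilowatts per rack sounds modest, but multiplied across tens of thousands of racks it’s power a provider no longer has to buy, and heat it no longer has to remove.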

This then raises the question: when will we start to see server systems like this trickle down to the enterprise and consumer markets? Whilst we rarely have requirements at the scales these servers are typically used for, I can guarantee there’s a market for servers of this nature as enterprises continue on their never-ending quest for higher densities and better efficiency. Indeed this feels like it would be advantageous for some of the larger server manufacturers to pursue, since if these large companies are investing in developing their own hardware platforms it shows there’s a niche the big manufacturers haven’t yet filled.

Indeed if the system can also accommodate non-compute blades (like the Microsoft one shows with the JBOD expansion) such ideas would go toe to toe with system-in-a-box solutions like the Cisco UCS which, to my surprise, quickly pushed its way to the #2 spot for x86 blade servers last year. Of course there are already similar systems on the market from others, but in order to draw people away from that platform other manufacturers are going to have to offer something more, and I think the answer lies within the Open Compute designs.

If I’m honest I think the real answer to the question posited in the title of this post is no. Whilst anyone working at Facebook or Microsoft levels of scale could engage in something like this, unless a big manufacturer gets on board Open Compute based solutions just won’t be feasible for the clients I service. It’s a shame because I think there are some definite merits to the platform, something validated by Microsoft joining the project.


Windows 8.1 With Bing: The First (Legal) Free Version of Windows?

As a poor student the last thing I wanted to pay for was software. Whilst the choice to pirate a base operating system is always questionable (it’s the foundation on which all your computing activities rely) it was either pay the high licence cost or find an alternative. I’ve since found numerous legitimate alternatives of course (thank you BizSpark) but not everyone is able to take advantage of them. Thus for many the choice to upgrade their copy of Windows typically comes with the purchase of a new computer, something which doesn’t happen as often as it used to. I believe this is one factor that’s affected Windows 8/8.1 adoption rates, and it seems Microsoft might be willing to try something radical to change it.

Rumours have been making the rounds that Microsoft is potentially going to offer a low cost (or completely free) version of their operating system dubbed Windows 8.1 with Bing. Details as to what is and isn’t included are still somewhat scant but it seems like it will be a full version without any major strings attached. There are even musings about some of Microsoft’s core applications, like Office, being bundled in with the new version of Windows 8.1. This wouldn’t be unusual (they already do it with Office Core for the Surface) however it’s those consumer applications that Microsoft draws a lot of its revenue from in this particular market segment, so their inclusion would mean the revenue would have to be made up somewhere else.

Many are touting this release as being targeted mostly at Windows 7 users who are putting off making the switch to Windows 8. In terms of barriers to entry theirs are by far the lowest, although they’re also the ones who have the least to gain from the upgrade. Depending on the timing of the release though this could also be a boon to those XP laggards who run out of support in just over a month. The transition from XP to Windows 8 is much more stark however, both in terms of technology and user experience, but there are numerous things Microsoft could do to smooth it over.

Whilst I like the idea there’s still the looming question of how Microsoft would monetize something like this, as releasing something for free and making up the revenue elsewhere isn’t really their standard business model (at least not with Windows itself). The “With Bing” moniker seems to suggest they’ll be relying heavily on browser based revenue, possibly by restricting users to Internet Explorer. They’ve gotten into hot water for doing similar things in the past, although they’d likely be able to argue that they no longer hold a monopoly on Internet connected devices like they once did. Regardless it will be interesting to see what the strategy is, as the mere rumour of something like this is new territory for Microsoft.

It’s clear that Microsoft doesn’t want Windows 7 to become the next XP and is doing everything it can to make the switch attractive. They’re facing an uphill battle as a good 30% of Windows users are still on XP, users who are unlikely to change even in the face of imminent end of life. A free upgrade might be enough to coax some of them across, however Microsoft needs to start selling the transition from any of their previous versions as a seamless affair, something anyone can do on a lazy Sunday afternoon. Even then there will still be holdouts, but it’d go a long way to pushing the other versions’ market share down into the single digits.

 


What Kind of Microsoft Can We Expect From Satya Nadella?

In the time that Microsoft has been a company it has only known two Chief Executive Officers. The first was, unforgettably, Bill Gates, the point man of the company from its founding days who saw it grow from a small software shop to the industry giant of the late 90s. Then, right at the beginning of the new millennium, Bill Gates stood down and passed the crown to long time business partner Steve Ballmer, who spent the next 14 years attempting to transform Microsoft from a software company into a devices and services one. Rumours had been spreading for some time about who was slated to take over from Ballmer and, last week, after much searching, Microsoft veteran Satya Nadella took over as the third CEO of the venerable company. Now everyone is wondering where he will take it.

For those who don’t know him, Nadella’s history at Microsoft comes from the Server and Tools department where he’s held several high ranking positions over a number of years. Most notably he’s been in charge of Microsoft’s cloud computing endeavours, including building out Azure which hit $1 billion in sales last year, something I’m sure helped seal the deal on his new position. Many would thus assume that Nadella’s vision for Microsoft would trend along these lines, something which runs a little contrary to the more consumer focused business Ballmer sought to deliver, however his request to have Bill Gates step down as Chairman of the Board so that he could have Gates as an advisor in this space says otherwise.

As with any changing of the guard many seek to impress upon the new bearer their wants for the future of the company. Nadella has already come under pressure to drop some of Microsoft’s less profitable endeavours including things like Bing, Surface and even the Xbox division (despite it being quite a revenue maker, especially as of late). Considering these products are the culmination of the efforts of the 2 previous CEOs, both of whom will still be involved in the company to some degree, taking an axe to them would be an extraordinarily hard thing to do. These are the products they’ve spent years and billions of dollars building, so dropping them seems like a short-sighted endeavour, even if it would make the books look a little better.

Indeed many of the business units which certain parties would look to cut are the ones seeing good growth figures. The Surface has gone from a $900 million write-down disaster to losing a paltry $39 million over 2 quarters, an amazing recovery that signals profitability isn’t too far off. Similarly Bing, the search engine we all love to hate on, saw a 34% increase in revenue in a single quarter. It’s not even worth mentioning the Xbox division as it’s been doing well for years now, and the release of the XboxOne with its solid initial sales ensures it remains one of Microsoft’s better performers.

The question then becomes whether Nadella, and the board he now serves, see the ongoing value in these projects. Much of the work Microsoft has done in the past decade has focused on unifying the many disparate parts of their ecosystem, heading towards that nirvana where everything works together seamlessly. Removing these products from the picture feels like Microsoft backing itself into a corner, one where it can be shoehorned into a narrative that sees it lose ground to the competitors it has been fighting for years. In all honesty I feel Microsoft is so dominant in those sectors already that there’s little to be gained from receding from perceived failures, and Nadella should take this chance to innovate on his predecessors’ ideas, not toss them out wholesale.

 


Microsoft’s Grab For The Low End Market.

Ever since Nokia announced its partnership with (and subsequent acquisition by) Microsoft I had wondered when we’d start seeing a bevy of feature phones running the Windows Phone operating system behind the scenes. Sure there are a lot of cheaper Lumias on the market, like the Lumia 520 which can be had for $149 outright, but there isn’t anything in the low end, where Nokia has been the undisputed king for decades. That section of the market is now dominated by Nokia’s Asha line of handsets, a curious new operating system that came into being shortly after Nokia canned all development on Symbian and its other alternative mobile platforms. However there have long been rumours circulating that Nokia was developing a low end Android handset to take over this area of the market, predominantly due to the rise of cheap Android handsets that were beginning to trickle in.

The latest leaks from engineers within Nokia appear to confirm these rumours, with the above pictures showcasing a prototype handset developed under the Normandy code name. Details are scant as to what the phone actually consists of but the notification bar does look distinctly Android, with the rest of the UI not bearing any resemblance to anything else on the market currently. This fits with the rumours that Nokia was looking to fork Android and make its own version of it, much like Amazon did for the Kindle Fire, which would also mean they’d likely be looking to create their own app store as well. This would be where Microsoft could have its in: pushing Android versions of its Windows Phone applications through its own distribution channel without having to seek Google’s approval.

Such a plan almost wholly relies on the fact that Nokia is the trusted name in the low end space, managing to command a sizable chunk of the market even in the face of numerous rivals. Even though Windows Phone has been gaining ground recently in developed markets it’s still been unable to gain much traction in emerging ones. Using Android as a Trojan horse to get users onto their app ecosystem could potentially work, however it’s far more likely that those users will simply remain on the new Android platform. Still there would be a non-zero number who would eventually look towards moving up in terms of functionality, and when it comes to Nokia there’s only one platform to choose from.

Of course this all hinges on the idea that Microsoft is actively interested in pursuing this idea and it’s not simply part of the ongoing skunk works of Nokia employees. That being said Microsoft already makes a large chunk of change from every Android phone sold thanks to its licensing arrangements with numerous vendors so they would have a slight edge in creating a low end Android handset. Whether they eventually use that to try and leverage users onto the Windows Phone platform though will be something that we’ll have to wait to see as I can imagine it’ll be a long time before an actual device sees the light of day.


I Wouldn’t Get Your Hopes Up For Backwards Compatibility.

We gamers tend to be hoarders when it comes to our game collections, with many of us amassing huge stashes of titles on our platforms of choice. My Steam library alone blew past 300 titles some time ago and anyone visiting my house will see the dozens of game boxes littering every corner of it. There’s something of a sunk cost in all this and it’s why the idea of being able to play old titles on a current generation system is always attractive to people like me: we like to go back sometimes and play through the games of our past. Whilst my platform of choice rarely suffers from this (PCs are the kings of backwards compatibility) my large console collection is in varying states of being able to play my library of titles and, if I’m honest, I don’t think it’s ever going to get better.

Just for the sake of example let’s have a look at the 3 consoles that are currently sitting next to my TV:

  • Nintendo Wii: It has the ability to play the GameCube’s games directly and various other titles are made available through the Virtual Console. Additionally all games for the Wii are forwards compatible with the WiiU, so you’re getting 3+ generations’ worth of backwards compatibility, not bad for a company that used to swap cartridge formats every generation.
  • Xbox360: Backwards compatibility appears to be software level emulation, as it requires you to periodically download emulation profiles from Microsoft and the list of supported titles has changed over time. There is no forwards compatibility with the XboxOne due to the change in architecture, although there are rumours of Microsoft developing its own cloud based service to provide it.
  • PlayStation4: There’s no backwards compatibility to speak of for this console, for much the same reasons as the XboxOne. My PlayStation 3 was a launch day model and thus had software emulation; American versions of the same console, however, had full hardware backwards compatibility, giving access to the full PlayStation library stretching back to the original PlayStation. Sony bought the cloud gaming company Gaikai in July last year, ostensibly to provide backwards compatibility via such a service.

For the current kings of the console market the decision to do away with backwards compatibility has been something of a sore spot for many gamers. Whilst the numbers show that most people buy new consoles to play the new games on them¹ there’s a non-zero number who get a lot of enjoyment out of their current gen titles. Indeed I probably would’ve actually used my PlayStation4 for gaming if it had some modicum of backwards compatibility, as right now there aren’t any compelling titles for it. This doesn’t seem to have been much of a hindrance to adoption of the now current gen platforms however.

There does seem to be a lot of faith being poured into the idea that backwards compatibility will eventually come through cloud services, which only Sony has committed to developing. The idea is attractive, mainly because it enables you to play any time you want from a multitude of devices, however, as I’ve stated in the past, the feasibility of such an idea isn’t great, especially when it relies on server hardware being in many disparate locations around the world to make the service viable. Whilst both Sony and Microsoft have the capital to make this happen (and indeed Sony has a head start thanks to the Gaikai acquisition) the issues I previously mentioned are only compounded when it comes to providing a cloud based service with console games.
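To see why geography matters so much here, consider the latency budget. The figures below are illustrative assumptions of mine rather than measurements, but they show how quickly distance eats into the time available to render, encode and deliver each frame:

```python
# A rough latency budget for cloud-delivered games, using assumed figures.
# At 60 frames per second a new frame is due every ~16.7ms, so every
# millisecond spent on the wire is one not available for anything else.
FRAME_BUDGET_MS = 1000 / 60

# Assumed one-way network latencies (illustrative, not measured):
scenarios = {
    "same city data center": 5,
    "interstate (e.g. Sydney to Perth)": 25,
    "trans-Pacific (e.g. Sydney to US west coast)": 75,
}

ENCODE_DECODE_MS = 10  # assumed cost to encode and decode a video frame

for name, one_way_ms in scenarios.items():
    round_trip = 2 * one_way_ms  # controller input up, video frame back
    total = round_trip + ENCODE_DECODE_MS
    frames_behind = total / FRAME_BUDGET_MS
    print(f"{name}: ~{total:.0f}ms added lag (~{frames_behind:.1f} frames)")
```

Anything much beyond a frame or two of added lag is very noticeable in fast games, which is why these services live or die on having data centers close to their players.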

The easiest way of achieving this is to just run a bunch of the old consoles in a server environment and allow users to connect directly to them. This has the advantage of being cheaper from a capital point of view, as I’m sure both Sony and Microsoft have untold hoards of old consoles to take advantage of, however the service would be inherently unscalable and, past a certain point, unmaintainable. The better solution is to emulate the console in software, which would allow you to run it on whatever hardware you wanted, but this brings with it challenges I’m not sure even Microsoft or Sony are capable of solving.

You see whilst the hardware of the past generation consoles is rather long in the tooth, emulating it in software is nigh on impossible. Whilst there are some experimental efforts by the emulation community to do this, none of them have produced anything capable of running even the most basic titles. Indeed even with access to the full schematics of the hardware, recreating it in software would be a herculean effort, especially for Sony whose Cell processor is a nightmare architecturally speaking.

There’s also the possibility that Sony has had the Gaikai team working on a Cell to x86 translation library, which could make the entire PlayStation3 library available without too much hassle, although there would likely be a heavy trade-off in performance. In all honesty that’s probably the most feasible solution as it’d allow them to run the titles on commodity hardware, but you’d still have the problems of scaling out the service that I’ve touched on in previous posts.
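To make the distinction concrete, here’s a toy sketch in Python of the two approaches using a made-up three instruction “console” ISA. A real architecture like the Cell’s is orders of magnitude more complex, but the principle is the same: an interpreter pays a decoding cost on every instruction every time it runs, while a translator pays it once up front:

```python
# Toy illustration only: a made-up ISA with load/add/store instructions.
PROGRAM = [("load", 5), ("add", 3), ("store", None)]

def interpret(program):
    """Software emulation: decode every instruction, every time it runs."""
    acc, memory = 0, []
    for op, arg in program:
        if op == "load":
            acc = arg
        elif op == "add":
            acc += arg
        elif op == "store":
            memory.append(acc)
    return memory

def translate(program):
    """Ahead-of-time translation: convert the foreign instructions into
    native code (here: Python) once, then run that directly."""
    lines = ["def native():", "    acc, memory = 0, []"]
    for op, arg in program:
        if op == "load":
            lines.append(f"    acc = {arg}")
        elif op == "add":
            lines.append(f"    acc += {arg}")
        elif op == "store":
            lines.append("    memory.append(acc)")
    lines.append("    return memory")
    namespace = {}
    exec("\n".join(lines), namespace)
    return namespace["native"]

# Both approaches produce the same result; translation just front-loads
# the decoding work instead of repeating it on every pass.
assert interpret(PROGRAM) == translate(PROGRAM)() == [8]
```

The hard part in reality isn’t the loop above, it’s faithfully reproducing every quirk of the original silicon, which is why even well-funded efforts struggle.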

Whatever ends up happening we’re not going to hear much more about it until sometime next year, and it’ll be a while after that before we can get our hands on it (my money is on 2016 for Australia). If you’re sitting on a trove of old titles and hoping that the next gen will allow you to play them I wouldn’t hold your breath, as it’s much more likely to be extremely limited, probably requiring an additional cost on top of your PlayStation Plus membership. That’s even if it works the way everyone is speculating it will, as I can see it easily turning out to be something else entirely.

¹ I can’t seem to find a source for this but back when the PlayStation3 Slim was announced (having that capability removed) I remember a Sony executive saying something to this effect. It was probably a combination of factors that led to him saying that though, as around that time the PlayStation2 Slim was still being manufactured and retailing for AUD$100, so anyone who had the cash to splurge on a PlayStation3 likely already owned a PlayStation2.

 


The Real Winner of the Console Wars: AMD.

In the general computing game you’d be forgiven for thinking there are 2 rivals locked in an even contest for dominance. Sure there are 2 major players, Intel and AMD, and whilst they are direct competitors there’s no denying that Intel is the Goliath to AMD’s David, trouncing them in almost every way possible. Of course if you’re looking to build a budget PC you really can’t go past AMD’s processors as they provide an incredible amount of value for the asking price, but there’s no denying that Intel has been the reigning performance and market champion for the better part of a decade now. However the next generation of consoles has proved to be something of a coup for AMD and it could be the beginning of a new era for the beleaguered chip company.

Both of the next generation consoles, the PlayStation 4 and XboxOne, utilize an almost identical AMD Jaguar chip under the hood. The reasons for choosing it seem to align with Sony’s previous architectural idea for Cell (i.e. having lots of cores working in parallel rather than fewer working faster) and AMD is the king of cramming more cores into a single consumer chip. The reasons for going with AMD over Intel likely stem from the fact that Intel isn’t too crazy about doing custom hardware, and the requirements Sony and Microsoft had for their own versions of Jaguar could simply not be accommodated. Considering how big the console market is this would seem like something of a misstep by Intel, especially judging by the PlayStation 4’s day one sales figures.

If you hadn’t heard, the PlayStation 4 managed to move an incredible 1 million consoles on its first day of launch, and that was limited to the USA. The Nintendo Wii by comparison took about a week to move 400,000 consoles, and it even had a global launch window to beef up the sales. Whether the trend will continue considering the XboxOne was just released yesterday is something we’ll have to wait to see, but regardless every one of those consoles being purchased contains an AMD CPU, and AMD is walking away with a healthy chunk of change from each one.

To put it in perspective, out of every PlayStation 4 sale (and by extension every XboxOne as well) AMD is reportedly taking away a healthy $100, which means that in that one day of sales AMD generated some $100 million for itself. For a company whose quarterly revenue is around the $1.5 billion mark this is a huge deal, and if the XboxOne launch is even half that AMD could have seen $150 million in the space of a week. If the previous console generation is anything to go by (roughly 160 million consoles between Sony and Microsoft) AMD is looking at a revenue stream of some $16 billion over the next 8 years, roughly $2 billion a year, an increase of around a third on their current revenue. Whilst it’s still a far cry from the kinds of revenue that Intel sees on a monthly basis it’s a huge win for AMD and something they will hopefully be able to use to leverage themselves more in other markets.
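Running those sums explicitly (the $100 per console cut is the figure being reported, not something AMD has confirmed):

```python
# Back-of-envelope console revenue estimate; all inputs are the rough
# figures quoted above, not confirmed numbers from AMD.
AMD_CUT_PER_CONSOLE = 100              # USD, reported estimate
PS4_DAY_ONE_UNITS = 1_000_000
LAST_GEN_COMBINED_UNITS = 160_000_000  # rough PS3 + Xbox 360 lifetime sales
GENERATION_YEARS = 8

day_one_revenue = AMD_CUT_PER_CONSOLE * PS4_DAY_ONE_UNITS
lifetime_revenue = AMD_CUT_PER_CONSOLE * LAST_GEN_COMBINED_UNITS

print(f"Day one:  ${day_one_revenue / 1e6:.0f} million")
print(f"Lifetime: ${lifetime_revenue / 1e9:.1f} billion "
      f"(~${lifetime_revenue / GENERATION_YEARS / 1e9:.1f} billion/year)")
```

Real sales won’t spread evenly across the generation, of course, but even as a rough average it’s a substantial new income stream for a company AMD’s size.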

Whilst I may have handed in my AMD fanboy badge after many deliriously happy years with my watercooled XP1800+ I still think they’re a brilliant chip company and their inclusion in both next generation consoles shows that the industry giants think the same way. The console market might not be as big as the consumer desktop space nor as lucrative as the high end server market but getting their chips onto both sides of the war is a major coup for them. Hopefully this will give AMD the push they need to start muscling in on Intel’s turf again as whilst I love their chips I love robust competition between giants a lot more.

 


A New AGIMO Policy is Great, But…

Canberra is a strange little microcosm. If you live here chances are you’re either working directly for the government as a member of the public service or you’re part of an organisation that’s servicing said government. This is especially true in the field of IT, as anyone with a respectable amount of IT experience can make a very good living working for any of the large departments’ headquarters. I have made my IT career in this place and have spent much of it lusting after the cutting edge of technology whilst dealing with the realities of what large government departments actually need to function. As long time readers will be aware I’ve been something of a cloud junkie for a while now, but not once have I been able to use it at my places of work, and there’s a good reason for that.

Not that you’d know that if you heard the latest bit of rhetoric from the current government, which has criticised the current AGIMO APS ICT Strategy for providing only “notional” guidelines for using cloud based services. Whilst I’ll agree that the financial implications are rather cumbersome (although this is true of any procurement activity within the government, as anyone who’s worked in one can tell you) what annoyed me was the idea that the security requirements were too onerous. The simple fact of the matter is that many government departments have regulatory and legal obligations not to use overseas cloud providers, due to legislation that restricts Australian government data from travelling outside our borders.

The technical term for this is data sovereignty, and the vast majority of Australia’s large government departments are legally bound to keep all their services, and the data they rely on, on Australian soil. The legislation is so strict in this regard that even data that’s not technically sensitive, like say specifications of machines or network topologies, in some cases can’t be given to external vendors and must instead be inspected on site. The idea that governments could take advantage of cloud providers, most of which don’t have availability zones here in Australia, is then completely ludicrous, and no amount of IT strategy policy can change that.

Of course cloud providers aren’t unaware of these issues (indeed I’ve met with several people behind some of the larger public clouds on this) and many of them are bringing availability zones to Australia. Amazon Web Services has already made itself available here and Microsoft’s Azure platform is expected to land on our shores sometime next year. The latter is probably the more important of the two, as if the next AGIMO policy turns out the way it’s intended the Microsoft cloud will be the de facto solution for lighter weight agencies thanks to the heavy amount of Microsoft products in use at those places.
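Once a local region exists, keeping data on Australian soil becomes a matter of deliberately pinning resources to it. As a minimal sketch of what that looks like in practice, assuming the boto3 library and a hypothetical bucket name:

```python
# Minimal sketch of region pinning for data sovereignty, assuming boto3
# and valid AWS credentials; the bucket name is hypothetical.
import boto3

# Target the Sydney region explicitly rather than a default US one.
s3 = boto3.client("s3", region_name="ap-southeast-2")

s3.create_bucket(
    Bucket="example-agency-records",  # hypothetical name
    CreateBucketConfiguration={"LocationConstraint": "ap-southeast-2"},
)
```

An object stored this way stays in the Sydney region unless replication elsewhere is explicitly configured, which is exactly the kind of control a data sovereignty obligation demands, and exactly the kind of thing a revised policy could mandate.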

Whilst I might be a little peeved at the rhetoric behind the review of the APS ICT Strategy I do welcome it, as even though it was only written a couple of years ago it’s still in need of an update due to the heavy shift towards cloud services and user centric IT that we’ve seen recently. The advent of Australian availability zones will mean that the government agencies most able to take advantage of cloud services will finally be able to, especially with AGIMO policy behind them. Still it will be up to the cloud providers to ensure their systems can meet the requirements of these agencies, and there’s every possibility that they will still not be enough for some departments.

We’ll have to see how that pans out, however.

 


Microsoft’s Surface 2: A Big Hole To Fill.

There’s no question that Microsoft’s attempt at the tablet market has been lacklustre. Whilst the hardware powering their tablets was decent, the nascent Windows Store lacked the diversity of its competitors, something which made the RT version even less desirable. This has since resulted in Microsoft writing down $900 million in Surface RT and associated inventory, something which many speculated would be the end of the Surface line. However it appears that Microsoft is more committed than ever to the Surface idea and recently announced the Surface 2, an evolutionary improvement over its predecessor.

The new Surface 2 looks pretty much identical to its predecessor although it’s a bit slimmer and a bit lighter. It retains the built-in kickstand, which now has 2 positions instead of one, something I’m sure will be useful to some. The specifications under the hood have been significantly revamped for both versions of the tablet, with the RT version (although it’s no longer called that) sporting an NVIDIA Tegra 4 and the Pro one of the new Haswell i5 chips. Microsoft will also now let you choose how much RAM you get in your Pro model, allowing you to cram up to 8GB in there. The Pro also gets the luxury of larger drive sizes, up to 512GB should you want it (although you’ll be forced to get the 8GB RAM model if you do). Overall I’d say this is pretty much what you’d expect from a generation 2 product and the Pro at least looks like it could be a decent laptop competitor.

Of course the issues that led Microsoft to write down nearly a billion dollars worth of inventory (after attempting to peddle as much of it as they could to TechEd attendees) still exist today, and the upgrade to Windows 8.1 won’t do much to solve them. Sure, in the time between the initial Surface release and now there’s been a decent number of applications developed for it, but the selection still pales in comparison to its competitors’. I still think the Metro interface is pretty decent on a touch screen but Microsoft will really have to do something outrageous to convince everyone that the Surface is worth buying, otherwise it’s doomed to repeat its predecessor’s mistakes.

The Pro on the other hand looks like it’d be a pretty great enterprise tablet thanks to its full x86 environment. I know I’d much rather have those in my environment than Android tablets or iPads, as the latter are much harder to integrate into all the standard management tools. A Surface 2 Pro on the other hand would behave much like any other desktop, allowing me to deliver the full experience to anyone who had one. Of course it’s then more of a replacement for a laptop than anything else, but I do know a lot of users who would prefer a tablet device to the current fleet of laptops they’re given (even the ones who get ultrabooks).

Whilst the Pro looks like a solid upgrade I can’t help but feel that the upgrade to the RT is almost unnecessary, given that most of the complaints levelled at it had nothing to do with its performance. Indeed not once have I found myself wanting for speed on my Surface RT; instead I’ve been wanting my favourite apps to come across so that I don’t have to use their web versions which, on Internet Explorer, typically aren’t great. Maybe the ecosystem is mature enough now to tempt some people across but honestly, unless they already own one, I can’t really see that happening, at least for the RT version. The Pro on the other hand could make some headway into Microsoft’s core enterprise market, but even that might not be enough for the Surface division.

 


Microsoft Buys Nokia: That’s Great, For One of Them.

If you’re old enough to remember a time when mobile phones weren’t commonplace you also likely remember the time when Nokia was the brand to have, much like Apple is today. I myself owned quite a few of them, my very first phone ever being the (then) ridiculously small Nokia 8210. I soon gravitated towards other, more shiny devices as my disposable income allowed, but I did find myself in possession of an N95 because, at the time, it was probably one of the best handsets around for techno-enthusiasts like myself. However it’s hard to deny that they’ve struggled to compete in today’s smartphone market and, unfortunately, their previous domination of the feature phone market has also slipped away from them.

Their saving grace was meant to come from partnering with Microsoft, and indeed I attested as much at the time. Casting my mind back to when I wrote that post I was actually of the mind that Nokia was going to be the driving force for Microsoft; in retrospect it seems the partnership was done in the hope that both of their flagging attempts in the smartphone market could be combined into one, potentially viable, product. Whilst I’ve praised the design and quality of Windows Phone based Nokias in the past, it’s clear that the amalgamation of 2 small players hasn’t resulted in a viable strategy to accumulate a decent amount of market share.

You can then imagine my surprise when Microsoft up and bought Nokia’s Devices and Services business as it doesn’t appear to be a great move for them.

So Nokia as a company isn’t going anywhere, as they still retain control of a couple of key businesses (Solutions and Networks, HERE/Navteq and Advanced Technologies, which I’ll talk about in a bit), however they’re not going to be making phones anymore as that entire capability has been transferred to Microsoft. That’s got a decent amount of value in itself, mostly in the manufacturing and supply chains, and Microsoft’s numbers will swell by 32,000 when the deal is finished. However whether that’s going to result in any large benefit for Microsoft is debatable, as they arguably got most of this in their 2011 strategic partnership; the difference is they can now do all the same things without the Nokia branding on the final product.

If this type of deal sounds familiar then you’re probably remembering the nearly identical acquisition Google made of Motorola back in 2011. Google’s reasons and subsequent use of the company were quite different however and, strangely enough, they have yet to use them to make one of their Nexus phones. Probably the biggest difference, and this is key to why this deal is great for Nokia and terrible for Microsoft, is the fact that Google got all of Motorola’s patents; Microsoft hasn’t got squat.

As part of the merger a new section is being created in Nokia called Advanced Technologies which, as far as I can tell, is going to be the repository for all of Nokia’s technology patents. Microsoft has been granted a 10 year licence to all of these (and when that’s expired they’ll get a perpetual one), however Nokia keeps ownership of all of them and the licence they gave Microsoft is non-exclusive. So since Nokia is really no longer a phone company they’re now free to start litigating against anyone they choose without much fear of counter-suits harming any of their products. Indeed they’ve stated that the patent suits will likely continue post acquisition, signalling that Nokia is going to look a lot more like a patent troll than a technology company in the near future.

Meanwhile Microsoft has been left with a flagging handset business, one that’s failed to reach the kind of growth that would be required to make it sustainable long term. Now there’s something to be said for Microsoft being able to release Lumia branded handsets (they get that branding in this deal) but honestly their other forays into the consumer electronics space haven’t gone so well, so I’m not sure what they’re going to accomplish here. They’ve already got the capability and distribution channels to get products out there (go into any PC store and you’ll find Microsoft branded peripherals, guaranteed) so whilst it might be nice to get Nokia’s version of that all built and ready, I’m sure they could have built one themselves for a similar amount of cash. Of course the Lumia tablet might be able to change consumers’ minds on that one, but most of the user complaints around Windows RT weren’t about the hardware (as evidenced in my review).

In all honesty I have no idea why Microsoft would think this is a good move, let alone a move that would let them do anything more than they’re currently doing. If they had acquired Nokia’s vast portfolio of patents in the process I’d be singing a different tune, as Microsoft has shown how good they are at wringing licence fees out of people (so much so that the revenue they get from Android licensing exceeds that of their Windows Phone division). However that hasn’t happened and instead we’ve got Nokia lining up to become a patent troll of epic proportions and Microsoft left with a $7 billion patent licensing deal that comes with its own failing handset business. I’m not alone in this sentiment either, as Microsoft’s shares dropped 5% on the announcement, which isn’t great news for this deal.

I really want to know where they’re going with this because I can’t for the life of me figure it out.

 


Microsoft’s Surface RT: It’s Nice, But…

Just like any new tech gadget I’ve been ogling tablets for quite some time. Now I’m sure there will be a few who are quick to point out that I said long ago that an ultrabook fills the same niche, at least for me, but that didn’t stop me from lusting after them for one reason or another. I’d held off on buying one for a long time though, as the price for something I would only have a couple of uses for was far too high, even if I was going to use it for game reviews, so for a long time I simply wondered at what could be. Whilst I was at TechEd North America the opportunity to snag a Surface RT came up for the low price of $99 and I, ignoring the fiscal conservative in me and relenting to my tech lust, happily handed over my credit card so I could take one home with me.


It’s quite a solid device with a noticeable amount of heft despite its rather slim figure. Of particular note is the built-in kickstand which allows you to sit the Surface upright, something I’ve heard others wish for with their tablets. It’s clear that the Surface has been designed to be used primarily in landscape mode, in opposition to most other tablets which utilize the portrait view. For someone like me who’s been a laptop user for a long time this didn’t bother me too much, but I can see how it’d be somewhat irritating if you were coming from another platform, as it’d be just another thing you’d have to get used to. Other than that it seems to be your pretty standard tablet affair with a few tweaks to give it a more PC feel.

The specifications are pretty decent, boasting a WXGA (1366 x 768) 16:9 screen powered by an NVIDIA Tegra 3 with 2GB of RAM behind it. I’ve got the 64GB model, which reports 53GB available and 42GB free, something of a contentious point for many who felt they weren’t getting what they paid for (although at $99 I wasn’t going to complain). The hardware is enough that I never noticed any stutter or slow down, even when playing some of the more graphically intense games on it. I didn’t really try any heavy productivity work because I much prefer my desktop for tasks of that nature, but I get the feeling it could handle 90% of what I could throw at it. The battery life also appears to be relatively decent, although a couple of times it mysteriously came up on 0 charge, which might have been due to my fiddling with the power settings.
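The storage complaints are worth a quick aside, because a chunk of the “missing” space is simply the decimal-versus-binary gigabyte gap rather than anything Microsoft took away. A rough sketch, where the recovery partition size is my assumption rather than an official figure:

```python
# Why a "64GB" tablet reports 53GB: marketing gigabytes are decimal
# (10^9 bytes) while the OS reports binary gibibytes (2^30 bytes).
ADVERTISED_GB = 64
binary_gib = ADVERTISED_GB * 1e9 / 2**30  # what the OS actually reports

print(f"64GB decimal = {binary_gib:.1f}GB as reported by Windows")
# ~59.6GB; the further drop to 53GB is roughly consistent with a
# recovery partition (assumed size), and the Windows RT and Office
# install accounts for the gap between 53GB available and 42GB free.
```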

Since I’ve been a Windows 8 user for a while the RT interface on the Surface wasn’t much of a shock to me, although I was a little miffed that I couldn’t run some of my chosen applications, even in desktop mode, notably Google Chrome. That being said, applications that have been designed for the Metro interface are usually pretty good, the OneNote app and Cocktail Flow being good examples, however the variety of applications available is unfortunately low. This is made up for a little by the fact that the browser on the Surface is far more usable than the one on Windows Phone 7, enabling many of the web apps to work fine. I hope for Microsoft’s sake this changes soon as the dearth of applications really limits the Surface’s appeal.

The keyboard that came with the Surface gets a special mention because of just how horrid it is. Whilst it does a good job of being a protective cover, one that has a rather satisfying click as the magnets snap in, it’s absolutely horrendous as an input device, akin to typing on a furry piece of cardboard. Since there’s no feedback it’s quite hard to type fast on it and, worse still, it seems to miss key presses every so often. Probably the worst part is that if your Surface locks itself with the cover attached and you then remove it, you will have no way to unlock the device until you re-attach it, even if you’ve set a PIN code up. I’ve heard that the Type Cover is a lot better but since it was going for $100 at the time I wasn’t too keen on purchasing it.

The Surface does do a good job of filling the particular niche I had for it, which was mainly watching videos and remoting into my servers, but past that I haven’t found myself using it much. Indeed the problem seems to be that the Surface, at least the non-Pro version, is stuck halfway between being a true tablet and a laptop, as many of its features are still computer-centric. This means potential customers on either side of the equation will probably feel like they’re missing something, which I think is one of the main reasons the Surface has struggled to gain market share. The Pro seems to be much closer to being a laptop, enough so that the people I talked to at TechEd seemed pretty happy with their purchase. Whether that translates into Microsoft refocusing their strategy with the Surface remains to be seen, however.

The Surface is a decent little device, offering the capabilities you’ve come to expect from a tablet whilst still having that Microsoft Windows feel about it. It’s let down by the lack of applications and the dissonance it suffers from being stuck between the PC and tablet worlds, something that can’t be easily remedied by a software fix. The Touch Cover is also quite atrocious and should be avoided at all costs, even if you’re just going to use it as a cover. For the price I got it for I think it was worth the money, however getting it at retail is another story, and unless you’re running a completely Microsoft house already I’d struggle to recommend it over an ultrabook or similarly portable computing device.