For as long as we’ve been using semiconductors there’s been one material that’s held the crown: silicon. As one of the most abundant elements on Earth, with semiconductor properties perfectly suited to mass manufacture, it’s no surprise that nearly all of the world’s electronics contain a silicon brain. Silicon isn’t the only material capable of performing this function; indeed there’s a whole smorgasbord of other semiconductors used for specific applications, however the amount of research poured into silicon means few of them are as mature. With our manufacturing processes shrinking, though, we’re fast approaching the limit of what silicon, in its current form, is capable of and that may pave the way for a new contender for the semiconductor crown.
The road to the current 14nm manufacturing process has been a bumpy one, as the heavily delayed release of Intel’s Broadwell can attest. Mostly this was due to the low yields Intel was getting with the process, which is typical for die shrinks, however solving the issue proved more difficult than they had originally thought. This is likely due to the challenges Intel faced in making their FinFET technology work at the smaller scale, as they had only just introduced it in the previous 22nm generation of CPUs. This process will likely still work down at the 10nm level (as Samsung has just proven today) but beyond that there’s going to need to be a fundamental shift in order for the die shrinks to continue.
For this Intel has alluded to new materials which, keen observers have pointed out, won’t be silicon.
The likely candidate to replace silicon is a material called Indium Gallium Arsenide (InGaAs). It has long been used in photodetectors and high frequency applications like microwave and millimetre wave devices. Transistors made from this substrate are called High-Electron Mobility Transistors which, in simpler terms, means they can be made smaller, switch faster and be packed more densely into the same area. Whilst the foundries might not yet be able to create these kinds of transistors at scale, the fact that they’ve been manufactured at some scale for decades makes them a more viable alternative than some of the other, more exotic materials.
There is potential for silicon to hang around for another die shrink or two if Extreme Ultraviolet (EUV) lithography takes off, however that method has been plagued with developmental issues for some time now. The change from UV lithography to EUV isn’t a trivial one as EUV light can’t be generated by a conventional laser and must be directed with mirrors, since most materials simply absorb it. Couple that with the rather large difficulty of generating EUV light in the first place (it’s rather inefficient) and looking at new substrates becomes much more appealing. Still, if TSMC, Intel or Samsung can figure it out then there’d be a bit more headroom for silicon, although maybe not enough to offset the investment cost.
Whatever direction the semiconductor industry takes one thing is very clear: they all have plans that extend far beyond the current short term to ensure that we can keep up the rapid pace of technological development that we’ve enjoyed for the past half century. I can’t tell you how many times I’ve heard others scream that the next die shrink would be our last, only to see some incredibly innovative solutions come out soon after. The transition to InGaAs or EUV shows that we’re prepared for at least the next decade and I’m sure before we hit the limit of that tech we’ll be seeing the next novel innovation that will continue to power us forward.
Why the Abbott government hasn’t abandoned their incredibly unpopular metadata policy yet is beyond me. Nearly all other developed nations that have pursued such a policy have abandoned it, mostly because attempting to pass something like this is akin to committing political suicide. Worse still, in their attempts to defend the policy from its critics the Abbott government has resorted to scare tactics and sensationalist rhetoric, none of which has any bearing on the underlying issues that this policy faces. Top this off with a cost estimation that seems to be based on back-of-the-napkin math and you’ve got a recipe for bad legislation that will likely be implemented poorly and at great cost to all Australian citizens.
Conceptually the idea is simple: the government wants to mandate that all ISPs and communications providers keep all metadata they generate for a period of 2 years. Initially this was sold as not being an increase in the powers that authorities have, however that idea is incredibly misleading as it greatly increases their ability to exercise those powers. Worse still, obtaining access to metadata doesn’t require a warrant and isn’t just the realm of law enforcement or intelligence agencies, as even people on local councils can obtain this data. Suffice to say the gathering and retention of this data is a massive invasion of the privacy that the general public expects from its government, and that is exactly why nearly all developed nations have dropped such policies before they’ve been implemented.
As expected the usual tropes for these kinds of policies have been trotted out, initially under the guise of a requirement for national security. I’d concede that point if it wasn’t for the fact that mass surveillance has not proved to be effective in combating terrorism, something which the critics of the policy were quick to point out. The rhetoric has since shifted away from national security to local security, with Abbott saying that the metadata will help them track down paedophiles and child traffickers. Suffice to say if surveillance of this nature doesn’t help at a national level then I highly doubt its effectiveness at the lower levels, and “think of the children” arguments like this are nothing more than an appeal to emotion.
Yesterday Abbott was pressed to give some hard figures on just how much this scheme would end up costing and he retorted with the rather ineloquent quip that without it there would be an “explosion in unsolved crime”. When pressed further the figure he gave was $300 million, less than 1% of the $40 billion that the entire telecommunications sector is estimated to be worth. That figure has apparently been sourced from PricewaterhouseCoopers (PwC) however the details behind it have not been made public. In all honesty I cannot see how that figure can be accurate given the amount of data we’re talking about and the retention times required.
To put it in perspective, Australians consumed something on the order of 1 Exabyte of data in the 6 months to June last year, a 50% increase on the year previous. The metadata on that traffic would be a fraction of that and, taking the same 1% liberty that Abbott seems intent on using, you get something like 50 Petabytes worth of storage required over the retention period. Couple that with the fact that it won’t be stored in one place (negating economies of scale), the infrastructure requirements to provide access to it and the personnel required to fulfil requests, and that $300 million figure starts to look quite shaky. Indeed the Communications Alliance in Australia has estimated it to be between $500 million and $700 million, which casts doubt over how accurate Abbott’s lowball figure is.
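A rough back-of-the-envelope check of that storage figure, sketched in Python. The traffic and metadata-fraction numbers are the same assumptions used above, not official estimates:

```python
# Back-of-the-envelope check of the metadata storage estimate.
# All inputs are assumptions from the discussion above, not official figures.
HALF_YEAR_TRAFFIC_EB = 1.0   # ~1 Exabyte consumed in the 6 months to June
RETENTION_YEARS = 2          # mandated retention period
METADATA_FRACTION = 0.01     # assume metadata is ~1% of traffic volume

traffic_eb = HALF_YEAR_TRAFFIC_EB * 2 * RETENTION_YEARS  # ~4 EB over the window
metadata_pb = traffic_eb * 1000 * METADATA_FRACTION      # EB -> PB
print(f"~{metadata_pb:.0f} PB of metadata to retain")    # ~40 PB, in the 50 PB ballpark
```

Even this crude estimate lands in the tens-of-petabytes range before accounting for redundancy, indexing or access infrastructure, which is why the $300 million figure looks so optimistic.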
Honestly this legislation stinks no matter which way you cut it and the rhetoric that the incumbent government has been using to defend it speaks directly to that. These policies are just simply not effective in what they set out to achieve and the only tangible result we’ll ever see from them will be an increased cost to accessing the Internet and a reduction in the expectation of privacy. I do hope Abbott keeps harping on about it though as the more he talks the more it seems likely that we’ll be able to cement the One Term Tony phrase in the history books.
The discovery of Stuxnet in the wild was a watershed moment, signalling the first known salvo sent across the wires of the Internet to strike at an enemy far away. The fact that a piece of software could wreak such destruction in the real world was what drew most people’s interest, however the way in which it achieved this was, in my opinion, far more interesting than the results themselves. Stuxnet showed that nation state sponsored malware was capable of things far beyond what we’ve attributed to malicious hackers in the past and made us wonder what its creators were really capable of. Thanks to Kaspersky Labs we now have a really good (read: scary) idea of what a nation state could develop and it’s beyond what many of us thought would be possible.
The Equation Group has been identified as being linked to several different pieces of malware that have surfaced in various countries around the world. They’ve been in operation for over a decade and have continuously improved their toolset over that time. Interestingly this group appears to have ties to the development teams behind both Stuxnet and Regin as some of the exploits found in early versions of Equation Group’s tools were also found in those pieces of malware. However those zero day exploits were really just the tip of the spear in Equation Group’s arsenal, as what Kaspersky Labs has discovered is far beyond anything else we’ve ever seen.
Perhaps the most fascinating capability the group has developed is the ability to rewrite hard disk firmware, which allows them to persist their malware through reboots, operating system reinstalls and even low level formats. If that wasn’t nasty enough there’s actually no way (currently) to detect an infection of that nature as few hard drives include the capability to read the firmware back once it’s been written. That means once the firmware has wormed its way into your system there’s very little you could do to detect and remove it, save buying a whole new PC from a random vendor and keeping it isolated from every other device.
This then feeds into their other tools which give them unprecedented control over every facet of a Windows operating system. GrayFish, as it has been dubbed, completely replaces the bootloader and from there controls how Windows loads and operates. Essentially once a system is under GrayFish control it no longer uses any of its core boot processes, which are replaced by GrayFish’s toolkit. This allows Equation Group to inject malware into almost every aspect of the system, preventing detection and giving them complete control to load any of their other malware modules. This shows a level of understanding of the operating system that would rival top Microsoft technicians, even those who have direct access to the source code. Although to be honest I wouldn’t be surprised if they had access to the source code themselves given the level of sophistication here.
These things barely begin to describe the capabilities that the Equation Group has developed over the past couple years as their level of knowledge, sophistication and penetration into world networks is well above anything the general public has known about before. It would be terrifying if it wasn’t so interesting as it shows just what can be accomplished when you’ve got the backing of an entire nation behind you. I’m guessing that it won’t be long before we uncover more of what the Equation Group is capable of and, suffice to say, whatever they come up with next will once again set the standard for what malware can be capable of.
For us long time PC gamers, those of us who grew up in a time where games were advancing so fast that yearly upgrades were a given, getting the most bang for your buck was often our primary concern. Often the key components like the CPU, RAM and GPU would get upgraded first, with other components falling by the wayside. However over the past few years technological advances for some pieces of technology, like SSDs, provided such a huge benefit that they became the upgrade that everyone wanted. Now I believe I’ve found the next upgrade everyone should get, and it comes to us via NVIDIA’s new monitor technology: G-Sync.
For the uninitiated, G-Sync is a monitor technology from NVIDIA that allows the graphics card (which must be an NVIDIA card) to directly control the refresh rate of your monitor. This allows the graphics card to write each frame to the monitor as soon as it’s available, dynamically altering the refresh rate to match the frame rate. G-Sync essentially gives you the benefits of having vsync turned off and on at the same time: there’s no frame tearing and no stutter or slowdown. As someone who can’t stand either of those graphical artefacts G-Sync sounded like the perfect technology for me and now that I’m the proud owner of a GTX970 and two AOC G2460PGs I think that position is justified.
After getting the drivers installed and upping the refresh rate to 144Hz (more on that in a sec) the NVIDIA control panel informed me that I had G-Sync capable monitors and, strangely, told me to go enable it even though when I went there it was already done. After that I dove into some old favourites to see how the monitor and new rig handled them and, honestly, it was like I was playing on a different kind of computer. Every game I threw at it that typically had horrendous tearing or stuttering ran like a dream without a hint of those graphical issues in any frame. It was definitely worth waiting as long as I did so that I could get a native G-Sync capable monitor.
One thing G-Sync does highlight however is slowdown that’s caused by other factors like a game engine trying to load files or performing some background task that impedes the rendering engine. These things, which would have previously gone unnoticed, are impossible to ignore now when everything else runs so smoothly. Thankfully most issues like that are few and far between as I’ve only noticed them shortly after loading into a level but it’s interesting to see issues like that bubbling up now, signalling that the next must-have upgrade might be drive related once again.
I will admit that some of these benefits come from the hugely increased refresh rate of my new monitors, jumping me from a paltry 60Hz all the way up to 144Hz. The difference is quite stark when you turn it on in Windows and, should you have the grunt to power it, astounding in games. After spending so long with content running in the 30~60Hz spectrum I had forgotten just how smooth higher frame rates are and, whilst I don’t know if there’s much benefit going beyond 144Hz, that initial bump up is most certainly worth it. Not a lot of other content (like videos, etc.) takes advantage of the higher frame rates however, something I didn’t think would bother me until I started noticing it.
Suffice to say I’m enamored with G-Sync and consider the premium I paid for these TN panel monitors well worth it. I’m willing to admit that high frame rates and G-Sync isn’t for everyone, especially if you’re lusting after the better colour reproduction and high resolutions of IPS panels, but for someone like me who can’t help but notice tearing and stuttering it’s a dream come true. If you have the opportunity to see one in action I highly recommend it as it’s hard to describe just how much better it is until you see it for yourself.
The current project I’m on has a requirement for being able to determine a server’s overall performance before and after a migration, mostly to make sure that it still functions the same or better once it’s on the new platform. Whilst it’s easy enough to get raw statistics from perfmon, getting an at-a-glance view of how a server is performing before and after a migration is a far more nuanced concept, one that’s not easily accomplished with some Excel wizardry. With that in mind I thought I’d share my idea for creating such a view as well as outlining the challenges I’ve hit when attempting to collate the data.
At a high level I’ve focused on the 4 core resources that all operating systems consume: CPU, RAM, disk and network. For the most part these metrics are easily captured by the counters that perfmon has however I wanted to go a bit further to make sure that the final comparisons represented a more “true” picture of before and after performance. To do this I included some additional qualifying metrics which would show if increased resource usage was negatively impacting on performance or if it was just the server consuming more resources because it could since the new platform had much more capacity. With that in mind these are the metrics I settled on using:
Essentially these metrics can be broken down into 3 categories: quantitative, qualitative and qualifying. Quantitative metrics are the base metrics which will form the main part of the before and after analysis. Qualitative metrics are mostly just informational (being the Top 5 consumers of X resource) however they’ll provide some useful insight into what might be causing an issue. For example if an SQL box isn’t showing the SQL process as a top consumer then it’s likely something is causing that process to take a dive before it can actually use any resources. Finally the qualifying metrics indicate whether increased usage of a certain metric signals an impact to the server’s performance: for example, if memory usage is high and the memory balloon size is also high, it’s quite likely the system isn’t performing very well.
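To make the three categories concrete, here is a hypothetical sketch of how such a metric set might be organised. The perfmon counter paths are real counter names, but which specific counters the project actually settled on is my assumption:

```python
# Hypothetical organisation of the metric set into the three categories
# described above. Counter paths are standard perfmon counters; the exact
# selection here is illustrative, not the project's actual list.
METRICS = {
    "quantitative": [  # base resource usage for before/after comparison
        r"\Processor(_Total)\% Processor Time",
        r"\Memory\Available MBytes",
        r"\PhysicalDisk(_Total)\Avg. Disk sec/Transfer",
        r"\Network Interface(*)\Bytes Total/sec",
    ],
    "qualitative": [   # informational: which processes dominate each resource
        "Top 5 CPU consumers",
        "Top 5 memory consumers",
        "Top 5 disk I/O consumers",
        "Top 5 network consumers",
    ],
    "qualifying": [    # contention signals: is the usage actually hurting?
        r"\Memory\Pages/sec",
        r"\System\Processor Queue Length",
        r"\PhysicalDisk(_Total)\Current Disk Queue Length",
    ],
}

# Sanity check: every metric belongs to exactly one category.
all_metrics = [m for group in METRICS.values() for m in group]
assert len(all_metrics) == len(set(all_metrics))
```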
The vast majority of these metrics are provided in perfmon however there were a couple that I couldn’t seem to get through the counters, even though I could see them in Resource Monitor. As it turns out Resource Monitor makes use of the Event Tracing for Windows (ETW) framework which gives you an incredibly granular view of all events that are happening on your machine. What I was looking for was a breakdown of disk and network usage per process (in order to generate the Top 5 users list) which is unfortunately bundled up in the IO counters available in perfmon. In order to split these out you have to run a Kernel Trace through ETW and then parse the resulting file to get the metrics you want. It’s a little messy but unfortunately there’s no good way to get those metrics separated. The resulting perfmon profile I created can be downloaded here.
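The trace post-processing itself can be sketched roughly in Python. This is a hypothetical example: it assumes the ETW kernel trace has already been converted to CSV (e.g. with tracerpt) and that the export includes per-event ProcessName and IOSize columns; real column names will vary with the export tool:

```python
import csv
from collections import Counter

def top_io_consumers(trace_csv_path, n=5):
    """Aggregate per-process I/O bytes from an exported kernel trace.

    Assumes the ETW trace has been exported to CSV beforehand and that
    the export contains 'ProcessName' and 'IOSize' columns; both names
    are assumptions that depend on the conversion tool used.
    """
    totals = Counter()
    with open(trace_csv_path, newline="") as f:
        for row in csv.DictReader(f):
            try:
                totals[row["ProcessName"]] += int(row["IOSize"])
            except (KeyError, ValueError):
                continue  # skip events that carry no per-process I/O data
    return totals.most_common(n)  # [(process, total_bytes), ...] largest first
```

The same aggregation would work for the network side by pointing it at the network I/O events instead of the disk ones.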
The next issue I’ve run into is getting the data into a readily digestible format. You see not all servers are built the same and not all of them run the same amount of software. This means that when you open up the resulting CSV file from different servers the column headers won’t line up so you’ve got to either do some tricky Excel work (which is often prone to failure) or get freaky with some PowerShell (which is messy and complicated). I decided to go for the latter as at least I could maintain and extend the script somewhat easily whereas an Excel spreadsheet has a tendency to get out of control faster than anyone expects. That part is still a work in progress however but I’ll endeavour to update this post with the completed script once I’ve got it working.
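The gist of what that script needs to do can be sketched in Python: build the union of every server’s column headers so the rows line up, leaving blanks where a server lacked a given counter. The function name and file layout here are illustrative assumptions, not the actual work-in-progress script:

```python
import csv

def merge_perfmon_csvs(paths, out_path):
    """Merge perfmon CSV exports whose column headers differ per server.

    Builds the union of all column names across every input file so the
    merged rows line up, writing an empty cell wherever a server didn't
    collect a particular counter.
    """
    rows, all_fields = [], []
    for path in paths:
        with open(path, newline="") as f:
            reader = csv.DictReader(f)
            for name in reader.fieldnames:
                if name not in all_fields:
                    all_fields.append(name)  # preserve first-seen column order
            rows.extend(reader)              # consume rows while file is open
    with open(out_path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=all_fields, restval="")
        writer.writeheader()
        writer.writerows(rows)
```

With the columns normalised like this, the per-server comparisons become straightforward lookups rather than fragile positional Excel formulas.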
After that point it’s a relatively simple task of displaying everything in a nicely formatted Excel spreadsheet and doing comparisons based on the metrics you’ve generated. If I had more time on my hands I probably would’ve tried to integrate it into something like a SharePoint BI site so we could do some groovy tracking and intelligence on it but due to tight time constraints I probably won’t get that far. Still a well laid out spreadsheet isn’t a bad format for presenting such information, especially when you can colour everything green when things are going right.
I’d be keen to hear other people’s thoughts on how you’d approach a problem like this as trying to quantify the nebulous idea of “server performance” has proven to be far more challenging than I first thought it would be. Part of this is due to the data manipulation required but it was also ensuring that all aspects of a server’s performance were covered and converted down to readily digestible metrics. I think I’ve gotten close to a workable solution with this but I’m always looking for ways to improve it, or for a magical tool out there that will do this all for me.
Despite the massive inroads that other virtualization providers have made into the market VMware still stands out as the king of the enterprise space. Part of this is due to the maturity of their toolset which is able to accommodate a wide variety of guests and configurations but they’ve also got the largest catalogue of value adds which helps vastly in driving adoption of their hypervisor. Still the asking price for any of their products has become something of a sore point for many and their proprietary platform has caused consternation for those looking to leverage public cloud services. With their latest release of their vSphere product VMware is looking to remedy at least the latter issue, embracing OpenStack compatibility for one of their distributions.
The list of improvements that are coming with this new release are numerous (and I won’t bother repeating them all here) but suffice to say that most of them were expected and in-line with what we’ve gotten previously. Configuration maximums have gone up for pretty much every aspect, feature limitations have been extended and there’s a handful of new features that will enable vSphere based clusters to do things that were previously impossible. In my mind the key improvements that VMware have made in this release come down to Virtual SAN 6, Long Distance vMotion and, of course, their support for OpenStack via their VMware Integrated OpenStack release.
Virtual SAN always felt like a bit of an also-ran when it first came out due to the rather stringent requirements around its deployment. I remember investigating it as part of a deployment I was doing at the time, only to be horrified at the fact that I’d have to deploy a vSphere instance at every site that I wanted to use it at. The subsequent releases have shifted the product’s focus significantly and it now presents a viable option for those looking to bring software defined datacenter principles to their environment. The improvements that come in 6 are most certainly cloud focused, with things like Fault Domains and All Flash configurations. I’ll be very interested to see how the enterprise reacts to this offering, especially for greenfields deployments.
Long Distance vMotion might sound like a minor feature but, as someone who’s worked in numerous large, disparate organisations, the flexibility it will bring is phenomenal. Right now the biggest issue most organisations face when maintaining two sites (typically for DR purposes) is getting workloads between them, often requiring a lengthy outage process. With Long Distance vMotion both sites can be kept active with workloads simply vMotioned between them, a vastly superior solution that provides many of the benefits of SRM without the required investment and configuration.
The coup here though is, of course, the OpenStack compatibility through VMware’s integrated distribution. OpenStack is notorious for being a right pain in the ass to get running properly, even if you already have staff that have had some experience with the product set in the past. VMware’s solution to this is to provide a pre-canned build which exposes all the resources in a VMware cloud through the OpenStack APIs for developers to utilize. Considering that OpenStack’s lack of good management tools has been, in my mind, one of the biggest challenges to its adoption this solution from VMware could be the kick in the pants it needs to see some healthy adoption rates.
It’s good to see VMware jumping on the hybrid cloud idea as I’ve long been of the mind that it will be the solution going forward. Cloud infrastructure is great and all but there are often requirements it simply can’t meet due to its commodity nature. Going hybrid with OpenStack as the intermediary layer will allow enterprises to take advantage of these APIs whilst still leveraging their investment in core infrastructure, utilizing the cloud on an as-needed basis. Of course that’s the nirvana state but it seems to get closer to realisation with every new release so here’s hoping VMware will be the catalyst to finally see it succeed.
It’s undeniable that the freewheeling nature of the Internet is behind the exponential growth that it has experienced. It was a communications platform unencumbered by corporate overlords, free from gatekeepers, that enabled people around the world to communicate with each other. However the gatekeepers of old have always tried to claw back some semblance of control at every point they can by imposing data caps, premium services and charging popular websites a premium to give their customers preferred access. Such things go against the pervasive idea of Net Neutrality that is a core tenet of the Internet’s strength, however the Federal Communications Commission (FCC) in the USA is looking to change that.
FCC chairman Tom Wheeler has announced today that they will be seeking to classify Internet services under their Title II authority which would see them regulated in such a way as to guarantee the idea of net neutrality, ensuring open and unhindered access. The rules wouldn’t just be limited to fixed line broadband services either as Mr Wheeler stated this change in regulation would also cover wireless Internet services. The motion will have to be voted on before it can be enacted in earnest (and there’s still the possibility of Congress undermining it with additional legislation) however given the current makeup of the FCC board it’s almost guaranteed to pass which is a great thing for the Internet in the USA.
This will go a long way to combatting the anti-competitive practices that a lot of ISPs are engaging in. Companies like Netflix have been strong-armed in the past into paying substantial fees to ISPs to ensure that their services run at full speed for their customers, something which only benefits the ISP. Under the Title II changes it would be illegal for ISPs to engage in such behaviour, ensuring that all packets that traverse the network are given the same priority. This would then ensure that no Internet based company would have to pay ISPs to ensure that their services ran acceptably, which is hugely beneficial to Internet based innovators.
Of course ISPs have been quick to paint these changes in a negative light, saying that with this new kind of regulation we’re likely to see an increase in fees and all sorts of things that will trash anyone’s ability to innovate. Pretty much all of their concerns stem from the fact that they will be losing revenue from the deals that they’ve cut, ones that are directly in competition with the idea of net neutrality. Honestly I have little sympathy for them as they’ve already profited heavily from investment from the government and regulation that ensured competition between ISPs was kept at a minimum. The big winners in all of this will be consumers and open Internet providers like Google Fiber, things which are the antithesis to their outdated business models.
Hopefully this paves the way for similar legislation and regulation to make its way around the world, paving the way for an Internet free from the constraints of its corporate overlords. My only fear is that Congress will mess with these provisions after the changes are made but hopefully the current administration, which has gone on record in support of net neutrality, will put the kibosh on any plans to that effect. In any case the future of the Internet is looking brighter than it ever has and hopefully that trend will continue globally.
It’s not widely known that Microsoft has been in the embedded business for quite some time now with various versions of Windows tailored specifically for that purpose. Not that Microsoft has a particularly stellar reputation in this field, as most of the time people only find out that something was running Windows when it crashes spectacularly. If you wanted to tinker with it yourself the process was pretty arduous, which wasn’t very conducive to generating much interest in the product. Microsoft seems set to change that however, with the latest version of Windows 10 set to run on the beefed up Raspberry Pi 2 and, best of all, it will be completely free to use.
Windows has supported the ARM chipset that powers the Raspberry Pi since the original Windows 8 release, however the diminutive specifications of the board precluded it from running even the cut down RT version. With the coming of Windows 10 Microsoft is looking to develop an Internet of Things (IoT) line of Windows products specifically geared towards low power platforms such as the Raspberry Pi. Better still the product team behind those versions of Windows has specifically included the Raspberry Pi 2 as one of their supported platforms, meaning that it will work out of the box without needing to mess with drivers or other configuration details. Whilst I’m sure the majority of Raspberry Pi 2 users will likely stick to their open source alternatives, the availability of a free version of Windows for the platform does open it up to a whole host of developers who might not have considered it previously.
The IoT version of Windows is set to come in three different flavours: Industry, Mobile and Athens; with a revision of the .NET Micro framework for other devices that don’t fall into one of those categories. Industry is essentially the full version of Windows with features geared towards the embedded platform. The Mobile version is, funnily enough, geared towards always-on mobile devices but still retains much of the capabilities of its fully fledged brethren. Athens, the version that’s slated to be released on the Raspberry Pi 2, is a “resource focused” version of Windows 10 that still retains the ability to run Universal Apps. There’ll hopefully be some more clarity around these delineations as we get closer to Windows 10’s official release date but suffice to say if the Raspberry Pi 2 can run Universal Apps it’s definitely a platform I could see myself tinkering with.
These new flavours of Windows fit into Microsoft’s broader strategy of trying to get their ecosystem into as many places as they can, something they attempted to start with the WinRT framework and have reworked with Universal Apps. Whilst I feel that WinRT had merit it’s hard to say that it was successful in achieving what it set out to do, especially with the negative reception Metro Apps got with the wider Windows user base. Universal Apps could potentially be the Windows 7 to WinRT’s Vista, a similar idea reworked and rebranded for a new market that finds the feet its predecessors never had. The IoT versions of Windows are simply another string in this particular bow but whether or not it’ll pan out is not something I feel I can accurately predict.
Flash, after starting out its life as one of a bevy of animation plugins for browsers back in the day, has become synonymous with online video. It’s also got a rather terrible reputation for using an inordinate amount of system resources to accomplish this feat, something which hasn’t gone away even in the latest versions. Indeed even my media PC, which has a graphics card with accelerated video decoding, struggles with Flash, its unoptimized format monopolizing every skerrick of resources for itself. HTML5 sought to solve this problem by making video a part of the base HTML specification which, everyone had hoped, would see an end to proprietary plug-ins and the woes they brought with them. However the road to getting that standard widely adopted hasn’t been an easy one, as YouTube’s 4 year road to making HTML5 the default shows.
Google had always been on the “let’s use an open standard” bandwagon when it came to HTML5 video which was at odds with other members of the HTML5 board who wanted to use something that, whilst being more ubiquitous, was a proprietary codec. This, unfortunately, led to a deadlock within the committee with none of them being able to agree on a default standard. Despite what YouTube’s move to HTML5 would indicate there is still no defined standard for which codec to use for HTML5 video, meaning that there’s no way to guarantee that a video you’ve encoded in one way will be viewable by HTML5 compliant browsers. Essentially it looks like a format war is about to begin where the wider world will decide the champion and the HTML5 committee will just have to play catch up.
YouTube has, unsurprisingly, decided to go for Google's VP9 codec for its HTML5 videos, a standard which Google fully controls. Whilst YouTube has had HTML5 video available as an option for some time now, it never enjoyed the widespread support required to make it the default. It seems they've now got buy-in from most of the major browser vendors, so people running Safari 8, IE 11, Chrome and (beta) Firefox will be given the Flash-free experience. This has the potential to set up VP9 as the de facto codec for HTML5, although I highly doubt it'll be officially crowned anytime soon.
Google has also been hard at work ensuring that VP9 enjoys wide support across platforms, with several major chip producers' System on a Chip (SoC) designs already supporting the codec. Without that hardware support the mobile experience of VP9 encoded videos would likely be extremely poor, hindering adoption substantially.
Whilst a codec that's almost entirely under the control of Google might not have been the ideal solution that the Open Source evangelists were hoping for (although it seems pretty open to me) it's probably the best solution we were going to get. I have not heard of the other competing standards, apart from H.264, having such widespread support as Google's VP9 does now. It's likely that the next few years will see many people adopting a couple of standards whilst consumers duke it out in the next format war, with the victor not clear until it's been over for a couple of years. For me though I'm glad it's happened and hopefully soon we can do away with the system hog that is Flash.
Microsoft's hardware business has always felt like something of an also-ran, with the notable exception being the Xbox of course. It's not that the products were bad per se, indeed many of my friends still swear by the Microsoft Natural ergonomic keyboard, more that it just seemed to be an aside that never really saw much innovation or effort. The Surface seemed like an attempt to change that perception, pitting Microsoft directly against the venerable iPad whilst also attempting to bring consumers across to the Windows 8 way of thinking. Unfortunately the early years weren't kind to it at all, with the experiment resulting in a $900 million write down for Microsoft which many took to indicate that the Surface (or at the very least the RT version) wasn't long for this world. The 18 months that have followed however have seen that particular section of Microsoft's business make a roaring comeback, much to my and everyone else's surprise.
Microsoft's quarterly earnings report released today shows that the company is generally in a good position, with revenue and gross margin up on the same quarter of last year. The internal make up of those numbers is a far more mixed story (covered in much better detail here), however the standout point was that the Surface division alone brought in $1.1 billion in revenue for the quarter, up a staggering $211 million on the previous quarter. This is most certainly on the back of the Surface Pro 3, which was released in June 2014, but for a device that was almost certainly headed for the trash heap it's a pretty amazing turnaround: from $900 million in the hole to $1.1 billion in revenue just 1.5 years later.
The question that interests me then is: What was the driving force behind this comeback?
To start off with, the Surface Pro 3 (and all its Surface Pro predecessors) is actually a pretty great piece of kit, widely praised for its build quality and overall usability. They were definitely premium devices, especially if you went for the higher spec options, but they are infinitely preferable to carting your traditional workhorse laptop around with you. The lines get a little blurry when you compare them to an ultrabook of similar specifications, at least if you're someone like me who's exacting about what they want, however if you didn't really care about that the Surface was a pretty easy decision. So the hardware was great; what was behind the initial write down then?
That lies entirely at the feet of the WinRT version, which simply failed to be the iPad competitor it was slated to be. Whilst I'm sure I'd have about as much use for an iPad as I would for a Surface RT, it simply didn't have the appeal of its fully fledged Pro brethren. Sure you'd be spending more money on the Pro but you'd be getting the full Windows experience rather than the cut down version, which felt like it was stuck between being a tablet and a laptop replacement. Microsoft tried to stick with the RT idea for the Surface 2, however they've now gone to great lengths to reposition the device as a laptop replacement, not an iPad competitor.
You don't even have to go far to see this repositioning in action: the Microsoft website for the Surface Pro 3 puts it in direct competition with the MacBook Air. It's a market segment the device is far more likely to win in as well, considering that Apple's entire Mac product line made about $6.6 billion last quarter, a figure which includes everything from the Air all the way to the Mac Pro. Apple has never been the biggest player in this space however, so the comparison might be a little unfair, but it still puts the Surface's recent revival into perspective.
It might not signal Microsoft becoming the next big thing in consumer electronics, but it's definitely not something I expected from a division that endured a near billion dollar write off. Whether Microsoft can capitalize on this momentum is something we'll have to watch closely, as I'm sure no one is going to let them forget the failure that was the original Surface RT. I still probably won't buy one however, well, unless they decide to include a discrete graphics chip in a future revision.
Hint hint, Microsoft.