It’s not widely known that Microsoft has been in the embedded business for quite some time now, with various versions of Windows tailored specifically for that purpose. Not that Microsoft has a particularly stellar reputation in this field, as most of the time people only find out that something was running Windows when it crashes spectacularly. If you wanted to tinker with it yourself, however, the process was pretty arduous, which wasn’t very conducive to generating much interest in the product. Microsoft seems set to change that with the latest version of Windows 10 slated to run on the beefed up Raspberry Pi 2 and, best of all, it will be completely free to use.
Windows has supported the ARM chipset that powers the Raspberry Pi since the original Windows 8 release, however the diminutive specifications of the board precluded it from running even the cut down RT version. With the coming of Windows 10, however, Microsoft is looking to develop an Internet of Things (IoT) line of Windows products specifically geared towards low power platforms such as the Raspberry Pi. Better still the product team behind those versions of Windows has specifically included the Raspberry Pi 2 as one of their supported platforms, meaning that it will work out of the box without needing to mess with drivers or other configuration details. Whilst I’m sure the majority of Raspberry Pi 2 users will likely stick to their open source alternatives, the availability of a free version of Windows for the platform does open it up to a whole host of developers who might not have considered the platform previously.
The IoT version of Windows is set to come in three different flavours: Industry, Mobile and Athens; with a revision of the .NET Micro framework for other devices that don’t fall into one of those categories. Industry is essentially the full version of Windows with features geared towards the embedded platform. The Mobile version is, funnily enough, geared towards always-on mobile devices but still retains much of the capabilities of its fully fledged brethren. Athens, the version that’s slated to be released on the Raspberry Pi 2, is a “resource focused” version of Windows 10 that still retains the ability to run Universal Apps. There’ll hopefully be some more clarity around these delineations as we get closer to Windows 10’s official release date but suffice to say if the Raspberry Pi 2 can run Universal Apps it’s definitely a platform I could see myself tinkering with.
These new flavours of Windows fit into Microsoft’s broader strategy of trying to get their ecosystem into as many places as they can, something they attempted to start with the WinRT framework and have reworked with Universal Apps. Whilst I feel that WinRT had merit it’s hard to say that it was successful in achieving what it set out to do, especially with the negative reception Metro Apps got with the wider Windows user base. Universal Apps could potentially be the Windows 7 to WinRT’s Vista, a similar idea reworked and rebranded for a new market that finds the feet its predecessors never had. The IoT versions of Windows are simply another string in this particular bow but whether or not it’ll pan out is not something I feel I can accurately predict.
Flash, after starting out its life as one of a bevy of animation plugins for browsers back in the day, has become synonymous with online video. It’s also got a rather terrible reputation for using an inordinate amount of system resources to accomplish this feat, something which hasn’t gone away even in the latest versions. Indeed even my media PC, which has a graphics card with accelerated video decoding, struggles with Flash, its unoptimized format monopolizing every skerrick of resources for itself. HTML5 sought to solve this problem by making video a part of the base HTML specification which, everyone had hoped, would see an end to proprietary plug-ins and the woes they brought with them. However the road to getting that standard widely adopted hasn’t been an easy one, as YouTube’s 4 year road to making HTML5 the default shows.
Google has always been on the “let’s use an open standard” bandwagon when it comes to HTML5 video, a stance at odds with other members of the HTML5 committee who wanted something that, whilst more ubiquitous, was a proprietary codec. This, unfortunately, led to a deadlock within the committee, with its members unable to agree on a default standard. Despite what YouTube’s move to HTML5 would indicate there is still no defined standard for which codec to use for HTML5 video, meaning that there’s no way to guarantee that a video you’ve encoded one way will be viewable by all HTML5 compliant browsers. Essentially it looks like a format war is about to begin where the wider world will decide the champion and the HTML5 committee will just have to play catch up.
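In practice sites route around this deadlock by offering the same video in several codecs and letting the browser pick the first one it can play. A minimal sketch of that fallback pattern (the file names and codec strings here are purely illustrative):

```html
<!-- The browser walks the <source> list top to bottom and plays
     the first format it supports, so no single codec has to win. -->
<video controls width="640">
  <!-- WebM container carrying VP9, the codec YouTube chose -->
  <source src="clip.webm" type='video/webm; codecs="vp9"'>
  <!-- MP4/H.264 fallback for browsers without VP9 support -->
  <source src="clip.mp4" type='video/mp4; codecs="avc1.42E01E"'>
  Your browser does not support HTML5 video.
</video>
```

The cost of this flexibility is borne by the publisher, who has to encode and store every video multiple times, which is partly why a de facto winner would still be welcome.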
YouTube has unsurprisingly decided to go for Google’s VP9 codec for their HTML5 videos, a standard which they fully control. Whilst they’ve had HTML5 video available for some time now as an option it never enjoyed the widespread support required in order for them to make it the default. It seems now they’ve got buy in from most of the major browser vendors in order to be able to make the switch so people running Safari 8, IE 11, Chrome and (beta) Firefox will be given the Flash free experience. This has the potential to set up VP9 as the de facto codec for HTML5 although I highly doubt it’ll be officially crowned anytime soon.
Google has also been hard at work ensuring that VP9 enjoys wide support across platforms as there are already several major chip producers whose System on a Chip (SoC) already supports the codec. Without that the mobile experience of VP9 encoded videos would likely be extremely poor, hindering adoption substantially.
Whilst a codec that’s almost entirely under the control of Google might not have been the ideal solution that the Open Source evangelists were hoping for (although it seems pretty open to me) it’s probably the best solution we were going to get. None of the other competing standards, apart from H.264, enjoy the kind of widespread support that Google’s VP9 does now. It’s likely that the next few years will see many sites adopting a couple of standards whilst the consumers duke it out in the next format war, with the victor not clear until it’s been over for a couple of years. For me though I’m glad it’s happened and hopefully soon we can do away with the system hog that Flash is.
Microsoft’s hardware business has always felt like something of an also-ran, with the notable exception being the Xbox of course. It’s not that the products were bad per se, indeed many of my friends still swear by the Microsoft Natural ergonomic keyboard, more that it just seemed to be an aside that never really saw much innovation or effort. The Surface seemed like an attempt to change that perception, pitting Microsoft directly against the venerable iPad whilst also attempting to bring consumers across to the Windows 8 way of thinking. Unfortunately the early years weren’t kind to it at all, with the experiment resulting in a $900 million write down for Microsoft which many took to indicate that the Surface (or at the very least the RT version) wasn’t long for this world. The 18 months that have followed however have seen that particular section of Microsoft’s business make a roaring comeback, much to my and everyone else’s surprise.
The Microsoft quarterly earnings report released today shows that Microsoft is generally in a good position, with revenue and gross margin up on the same quarter of last year. The internal makeup of those numbers is a far more mixed story (covered in much better detail here) however the standout point was the fact that the Surface division alone brought in $1.1 billion in revenue for the quarter, up a staggering $211 million on the previous quarter. This is most certainly on the back of the Surface Pro 3, which was released in June 2014, but for a device that was almost certainly headed for the trash heap it’s a pretty amazing turnaround from $900 million in the hole to $1.1 billion in revenue just 1.5 years later.
The question that interests me then is: What was the driving force behind this comeback?
To start off with the Surface Pro 3 (and all its Surface Pro predecessors) are actually pretty great pieces of kit, widely praised for their build quality and overall usability. They were definitely a premium device, especially if you went for the higher spec options, but they are infinitely preferable to carting your traditional workhorse laptop around with you. The lines get a little blurry when you compare them to an ultrabook of similar specifications, at least if you’re someone like me who’s exacting with what they want, however if you didn’t really care about that the Surface was a pretty easy decision. So the hardware was great; what was behind the initial write down then?
That lies entirely at the feet of the RT version, which simply failed to be the iPad competitor it was slated to be. Whilst I’m sure I’d have about as much use for an iPad as I would for my Surface RT it simply didn’t have the appeal that its fully fledged Pro brethren had. Sure you’d be spending more money on the Pro but you’d be getting the full Windows experience rather than the cut down version, which felt like it was stuck between being a tablet and a laptop replacement. Microsoft tried to stick with the RT idea with the Surface 2, however they’ve gone to great lengths now to reposition the device as a laptop replacement, not an iPad competitor.
You don’t even have to go far to see this repositioning in action: the Microsoft website for the Surface Pro 3 puts it in direct competition with the MacBook Air. It’s a market segment that the device is far more likely to win in as well, considering that Apple’s entire Mac product line made about $6.6 billion last quarter, which includes everything from the Air all the way to the Mac Pro. Apple has never been the biggest player in this space however so the comparison might be a little unfair, but it still puts the Surface’s recent revival into perspective.
It might not signal Microsoft being the next big thing in consumer electronics but it’s definitely not something I expected from a sector that endured a near billion dollar write off. Whether Microsoft can continue along these lines to capitalize on this is something we’ll have to watch closely as I’m sure no one is going to let them forget the failure that was the original Surface RT. I still probably won’t buy one however, well unless they decide to include a discrete graphics chip in a future revision.
Hint hint, Microsoft.
The rumour mill has been running strong for Microsoft’s next Windows release, fuelled by the usual sneaky leaks and the intrepid hackers who relentlessly dig through preview builds to find things they weren’t meant to see. For the most part though things have largely been as expected with Microsoft announcing the big features and changes late last year and drip feeding minor things through the technical preview stream. Today Microsoft held their Windows 10 Consumer Preview event in Redmond, announcing several new features that would become part of their flagship operating system as well as confirming the strategy for the Windows platform going forward. Suffice to say it’s definitely a shake up of what we’d traditionally expect from Microsoft, especially when it comes to licensing.
The announcement that headlined the event was that Windows 10 would be a free upgrade for all current Windows 7, 8, 8.1 and Windows Phone 8.1 customers who upgrade in the first year. This is obviously an attempt to ensure that Windows 10’s adoption rate doesn’t languish in the Vista/8 region as, even though every other version of Windows seems to do just fine, Windows 10 is still different enough for it to cause issues. I can see the adoption rate for current Windows 8 and 8.1 users being very high, thanks to the integration with the Windows store, however for Windows 7 stalwarts I’m not so sure. Note that this also won’t apply to enterprises, who are responsible for an extremely large chunk of the Windows 7 market currently.
Microsoft also announced Universal Applications, which are essentially the next iteration of the WinRT framework that was introduced with Windows 8. However instead of relegating some applications to a functional ghetto (like all Metro apps were) Universal Apps instead share a common base set of functionality with additional code paths for the different platforms they support. Conceptually it sounds like a great idea as it means that the different versions of the applications will share the same codebase, making it very easy to bring new features to all platforms simultaneously. Indeed if this platform can be extended to encompass Android/iOS it’d be an incredibly powerful tool, although I wouldn’t count on that coming from Microsoft.
Xbox Live will also be making a prominent appearance in Windows 10 with some pretty cool features coming for Xbox One owners. Chief among these, at least for me, is the ability to stream Xbox One games from your console directly to your PC. As someone who currently uses their PC as a monitor for their PS4 (I have a capture card for reviews and my wife didn’t like me monopolizing the TV constantly with Destiny) I think this is a great feature, one I hope other console manufacturers replicate. There’s also cross-game integration for games that use Xbox Live, an inbuilt game recorder and, of course, another iteration of DirectX. This was the kind of stuff Microsoft had hinted at doing with Windows 8 but it seems like they’re finally committed to it with Windows 10.
Microsoft is also expanding its consumer electronics business with new Windows 10 enabled devices. The Microsoft HoloLens is their attempt at a Google Glass-like device, although one that’s more aimed at being used with the desktop rather than on the go. There’s also the Surface Hub, which is Microsoft’s version of the smart board, integrating all sorts of conferencing and collaboration features. It will be interesting to see if these things see any sort of meaningful adoption rate as, whilst they’re not critical to Windows 10’s success, they’re certainly devices that could increase adoption in areas that traditionally aren’t Microsoft’s domain.
Overall the consumer preview event paints Windows 10 as an evolutionary step forward for Microsoft, taking the core of the ideas that they attempted with previous iterations and reworking them with a fresh perspective. It will be interesting to see how the one year free upgrade approach works for them as gaining that critical mass of users is the hardest thing for any application, even the venerable Windows platform. The other features coming along are more nice-to-haves than anything else, things that will likely help Microsoft sell people on the Windows 10 idea. Getting this launch right is crucial for Microsoft to execute on their strategy of it being the one platform for years to come, as the longer it takes to get the majority of users on Windows 10 the harder it will be to invest heavily in it. Hopefully Windows 10 can be the Windows 7 to Windows 8’s Vista, as Microsoft has a lot riding on this coming off just right.
Technological enablers aren’t good or evil, they simply exist to facilitate whatever purpose they were designed for. Of course we always aim to maximise the good they’re capable of whilst diminishing the bad, however changing their fundamental characteristics (which are often the sole purpose for their existence) in order to do so is, in my mind, abhorrent. This is why I think things like Internet filters and other solutions which hope to combat the bad parts of the Internet are a fool’s errand as they would seek to destroy the very thing they set out to improve. The latest instalment of which comes to us courtesy of David Cameron who is now seeking to have a sanctioned backdoor to all encrypted communications and to legislate against those who’d resist.
Like most election waffle Cameron is strong on rhetoric but weak on substance; you can get the gist of it from this quote:
“I think we cannot allow modern forms of communication to be exempt from the ability, in extremis, with a warrant signed by the home secretary, to be exempt from being listened to.”
Essentially what he’s referring to is the fact that encrypted communications, the ones that are now routinely employed by consumer level applications like WhatsApp and iMessage, shouldn’t be allowed to exist without a method for intelligence agencies to tap into them. It’s not like these communications are exempt from being listened to currently, just that it’s infeasible for the security agencies to decrypt them once they’ve got their hands on them. The problem that arises here though is that, unlike with other means of communication, introducing a mechanism like this, a backdoor by which encrypted communications can be decrypted, fundamentally breaks the utility of the service and introduces a whole slew of potential threats that will be exploited.
The crux of the matter stems from the trust relationships that are required for two way encrypted communications to work. For the most part you’re relying on the channel between both parties being free from interference and monitoring by third parties. This is what allows corporations and governments to spread their networks over the vast reaches of the Internet, as they can ensure that information passing through untrusted networks isn’t subject to prying eyes. Under this proposal any encrypted communications which pass through the UK’s networks could be intercepted, something which I’m sure a lot of corporations wouldn’t like to sign on for. This is not to mention the millions of regular people who rely on encrypted communications in their daily lives, like anyone who’s used Facebook or a secure banking site.
Indeed I believe the risks posed by introducing a backdoor into encrypted communications far outweigh any potential benefits that you’d care to mention. You see any backdoor into a system, no matter how well designed it is, will severely weaken the encrypted channel’s ability to resist intrusion from a malicious attacker. No matter which way you slice it you’re introducing another attack vector into the equation: where there were, at most, 2 before you now have at least 3 (the 2 endpoints plus the backdoor). I don’t know about you but I’d rather not increase my risk of being compromised by 50% just because someone might’ve said plutonium in my private chats.
The idea speaks volumes about David Cameron’s lack of understanding of technology as, whilst you might be able to get some commercial companies to comply with this, you will have no way of stopping peer to peer encrypted communications using open source solutions. Simply put if the government, somehow, managed to work a backdoor into PGP it’d be a matter of days before it was no longer used and another solution put in its place. Sure, you could attempt to prosecute all those people using illegal encryption, but they said the same thing about BitTorrent and I haven’t seen mass arrests yet.
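To see why legislating against open encryption is futile, consider how little code a key exchange actually takes. The toy Diffie-Hellman sketch below (with deliberately small, insecure parameters — real systems use vetted 2048-bit+ groups) is the kind of thing anyone can rewrite from a textbook in minutes, backdoor-free:

```python
import secrets

# Toy Diffie-Hellman key exchange. Parameters are far too small for
# real use; this is purely an illustration of how simple the core
# protocol is to reimplement from scratch.
P = (1 << 127) - 1  # a Mersenne prime, publicly agreed modulus
G = 3               # publicly agreed generator

# Each party picks a secret exponent and publishes G^secret mod P.
alice_secret = secrets.randbelow(P - 3) + 2
bob_secret = secrets.randbelow(P - 3) + 2
alice_public = pow(G, alice_secret, P)
bob_public = pow(G, bob_secret, P)

# Each side combines its own secret with the other's public value;
# both arrive at the same shared key without ever transmitting it.
alice_shared = pow(bob_public, alice_secret, P)
bob_shared = pow(alice_public, bob_secret, P)

assert alice_shared == bob_shared
```

Since the shared key never crosses the wire, there is nothing for an intermediary to seize; the only way to mandate access is to compromise the software itself, which open source users can simply route around.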
It’s becoming painfully clear that the conservative governments of the world are simply lacking a fundamental understanding of how technology works and thus concoct solutions which simply won’t work in reality. There are far easier ways for them to get the data that they so desperately need (although I’m yet to see the merits of any of these mass surveillance networks) however they seem hell bent on getting it in the most ham-fisted way possible. I would love to say that my generation would be different when they get into power but stupid seems to be an inheritable condition when it comes to conservative politics.
There are few industries that can claim to have been disrupted by the Internet as much as the media industry has. In the span of a couple decades they’ve gone from having fine grained control over what content goes where to a world that’s keenly aware of what’s available and will take it if it’s not given to them at the right price. At the same time however we’re far more likely to spend more than we would have done in the media world of the past, just that now we’re asking for much more value for our money. This back and forth battle between the Internet’s innate ability to break down geographical barriers and the rights holder’s business models that rely on them ultimately leaves both sides feeling hard done by, but it doesn’t have to be this way.
The latest shot fired in this battle comes in the form of Netflix cracking down on subscribers that use VPN services to circumvent its geographical restrictions. For countries where the Netflix service is available this is usually done to access the broader catalogue but for places like Australia it’s necessary just to access the service at all. Indeed the user figures for Australia are pretty strong, enough so that a blanket ban on VPN users would see Netflix lose millions of dollars per month in subscriber revenue. The rights holders don’t seem to be too fazed about this however, likely thinking that we’ll revert to the other, far more expensive, options when our Netflix is taken away from us.
However that’s likely to be the last thing that any of the current Australian Netflix subscribers would do. You see setting up a VPN to get Netflix to work is, whilst not exactly hard, a non-trivial affair, requiring just as much technical know-how as your average piracy enabling client. Thus when their legitimate source of media is cut off they’ll likely turn to the illegitimate sources, either their old haunts of Usenet and BitTorrent or the new world of media piracy provided through Popcorn Time. I honestly don’t know how you’d expect anything different, especially considering that Australia consistently rates as the highest consumer of illegitimate media worldwide.
These kinds of idiotic decisions are driven by business models that are simply no longer viable in the Internet driven world. Sure, back in the days when physical media was king there was an argument to be made for this style of business, however now, when digital media reigns supreme, it just doesn’t make any sense. It’s not that consumers are unwilling to pay for it, indeed the hundreds of thousands of Netflix subscribers in Australia are testament to that, it’s that the companies that hold the rights to that media are simply unwilling to provide it. It’s been shown time and time again that should no reasonably priced alternative be provided users will simply turn to other sources and won’t stop until such a service materializes.
Not that it really matters what Netflix, or any other service for that matter, does to try and block people; it’s only a matter of time until someone figures out how to defeat the detection methods used, allowing everyone to use it once again. This is a game of cat and mouse that no service provider can win as there are far more individuals out in the Internet’s ether working to crack such schemes than Netflix has to create them. I’m sure eventually the rights holders will come around and give up this crusade to protect their outdated business models but until then things like this are just going to cost them paying consumers and swell the ranks of those filthy pirates who won’t give them one red cent.
Filtering Australians’ Internet is something all good politicians learned to avoid long ago after the fiasco that was Labor’s Clean Feed. It quickly turned from what seemed like an easily defensible policy (Think of the children!) into the horrendous mess that it was, something that threatened the very core of what the Internet was built on. Thus any policy that dares to tread similar ground has, for the most part, been put down long before the legislation makes it to the floor of our parliament. However it seems that, in true Liberal fashion, our current government wants to put a filter in whilst flatly denying that that’s what they’re doing.
Last year Brandis and Turnbull got in cahoots with each other to start devising some reforms to Australia’s copyright system, most likely in response to some of the secret Trans-Pacific Partnership talks that have been going on. These reforms largely ignored the actual problem and instead adopted the reactionary measures that other countries have tried, all of which have proven ineffective in curbing copyright infringement. However one of the measures, the requirement for ISPs to block links to infringing content when contacted, had a strange air of familiarity about it.
It sounded an awful lot like an Internet filter.
When he was made aware of this comparison Turnbull was quick to distance it from the idea, calling it “complete BS”. However whilst you might not want to call it a filter (obviously for fear of being tarred with the same brush, but I’m about to do that anyway) it, unfortunately, has all the makings of an Internet filter. It’ll be overseen by the courts, which likely means there’ll be some kind of central list of blocked content, which all ISPs will be required to block using whatever means they have. If you cast your mind back a few years you’ll see that this is pretty much identical to Labor’s voluntary mandatory system, the one that was dumped for “budgetary” reasons.
The time has long since passed when this was just an issue for the technical elite and freedom of speech warriors of Australia as the entire country is far more invested in its access to the Internet than it ever has been. We want it to be fast and unfettered, ideals which the current government seems hellbent on trashing in order to appease big businesses both here and overseas. Unfortunately for them it looks like they’re slow learners, unable to recognise the mistakes of their predecessors and are simply dooming themselves to repeat them. Not that this was entirely unexpected, but that doesn’t stop it all from being just as rage inducing.
Roll back the clock a decade or so and the competition for what kind of processor ended up in your PC was at a fever pitch with industry heavyweights Intel and AMD going blow for blow. The choice of CPU, at least for me and my enthusiast brethren, almost always came down to what was fastest but the lines were often blurry enough that brand loyalty was worth more than a few FPS here or there. For the longest time I was an AMD fan, sticking stalwartly to their CPUs which provided me with the same amount of grunt as their Intel brethren for a fraction of the cost. However over time the gap between what an AMD CPU could provide and what Intel offered was too wide to ignore, and it’s only been getting wider since then.
The rift is seen in adoption rates across all products that make use of modern CPUs, with Intel dominating nearly any sector you find them in. When Intel first retook the crown all those years ago the reasons were clear, Intel just performed well enough to justify the cost, however as time went on it seemed like AMD was willing to let that gap continue to grow. Indeed if you look at them from a pure technology basis they’re stuck about 2 generations behind where Intel is today, with the vast majority of their products being produced on a 28nm process whilst Intel’s latest release came out on 14nm. Whilst they pulled a major coup in winning over all three major consoles that success hasn’t had much flow-on to the rest of the business. Indeed since they’ll be producing the exact same chips for the next 5+ years for those consoles they can’t really do much with them anyway and I doubt they’d invest in a new foundry process unless Microsoft or Sony asked them nicely.
What this has translated into is a monopoly by default, one where Intel maintains its massive market share without having to worry about any upstarts rocking the boat. Thankfully the demands of the industry are pressure enough to keep them innovating at the rapid pace they set way back when AMD was still biting at their heels, but there’s a dangerously real chance that they could just end up doing the opposite. It’s a little unfair to put the burden on AMD to keep Intel honest, however it’s hard to think of another company who has the required pedigree and experience to be the major competition to their platform.
The industry is looking towards ARM as being the big competition for Intel’s x86 platform although, honestly, they’re really not in the same market. Sure nearly every phone under the sun is now powered by some variant of the ARM architecture, however when it comes to consumer or enterprise compute you’d be struggling to find anything that runs on it. There is going to have to be an extremely compelling reason for everyone to want to transition to that platform and, as it stands right now, mobile and low power are the only places where it really fits. For ARM to really start eating Intel’s lunch it’d need to make some serious inroads into the consumer and enterprise spaces, something which I don’t see happening for decades at least.
There is some light in the form of Kaveri, however its less than stellar performance when compared to Intel’s less tightly coupled solution does leave a lot to be desired. At a high level the architecture does feel like the future of all computing, well excluding radical paradigm shifts like HP’s The Machine (which is still vaporware at this point), but until it equals the performance of discrete components it’s not going anywhere fast. I get the feeling that if AMD had kept up with Intel’s die shrinks Kaveri would be looking a lot more attractive than it is currently, but who knows what it might have cost them to get to that stage.
In any other industry you’d see this kind of situation as one that was ripe for disruption however the capital intensive nature, plus an industry leader who isn’t resting on their laurels, means that there are few who can hold a candle to Intel. The net positive out of all of this is that we as consumers aren’t suffering however we’ve all seen what happens when a company remains at the top for far too long. Hopefully the numerous different sectors which Intel is currently competing in will be enough to offset their monopolistic nature in the CPU market but that doesn’t mean more competition in that space isn’t welcome.
If there’s one thing that turns an otherwise professional looking document into a piece of horrifying garbage it’s clip art. Back in the days when graphics on computers were still a nascent field, one populated with people with little artistic style, it was the go-to source for images to convey a message. Today however, with clip art’s failure to modernize in any way (mostly due to the users who desperately cling to its disgustingly iconic style) it’s become a trademark of documents that have had little to no thought put into them. Microsoft has been aware of this for some time, drastically reducing the amount of clip art present in Office 2010 and moving the entire library online in Office 2013. Now that library no longer contains any clip art at all; it just points to Bing Images.
As someone who’s had to re-enable access to clip art more times than he’d have liked I’m glad Microsoft has made this move as, whilst it won’t see everyone become a graphic designer overnight, it will force them to think long and hard about the images they’re putting into their documents. The limited set of images provided as part of clip art usually meant people would try to shoehorn multiple images together in order to convey what they were after, rather than attempting to create something in Visio or just searching through the Internet. Opening it up to the Bing Image search engine, which by default filters to images with the appropriate Creative Commons licensing, is obviously done in the hope that more people will use the service, although whether they will or not remains to be seen.
However what’s really interesting about this is what it says about where Microsoft is looking to go in the near term with its Office line of products. Most people wouldn’t know it but Microsoft has been heavily investing in developing Office into a much more modern set of documentation tools, retaining its trademark backwards compatibility whilst making it far easier to create documents that are clean, professional and, above all, usable. The reason most people wouldn’t know about it is that their latest product, Sway, isn’t yet part of the traditional Office product suite, but with Microsoft’s push to get everyone onto Office 365 I can’t see that being the case for too long.
Sway is essentially a replacement for PowerPoint, yet another Microsoft product that’s been derided for its gaudy design principles and gross overuse in certain situations. However instead of focusing just on slides and text it’s designed to be far more interactive and interoperable, able to gather data from numerous different sources and present it in a format that’s far more pleasing than any PowerPoint presentation I’ve seen. Unfortunately it’s still in closed beta for the time being so I can’t give you my impressions of it (I’ve been on the waiting list for some time now), but suffice to say if Sway is the future of Microsoft’s Office products then the ugly history of clip art might end up being just a bad memory.
It’s just more evidence that the Microsoft of today is nothing like the one of the past. Microsoft is still a behemoth of a company, one that’s more beholden to its users than it’d like to admit, but we’re finally starting to see some genuine innovation from them rather than their old strategy of embrace, extend, extinguish. Whether its users will embrace the new way of doing things or cling to the old (as they continue to do) will be the crux of Microsoft’s strategy going forward, but either way it’s an exciting time if you’re a Microsoft junkie like myself.
It’s really hard to have anything but admiration for Stuxnet. It was the first piece of software that could be clearly defined as a weapon, one with a very specific purpose in mind that used all manner of tricks to accomplish its task. Since its discovery there hasn’t been another piece of software that’s come close to it in terms of capability, although there are always rumours and speculation about what might be coming next. Regin, discovered by Symantec, has been infecting computers since at least 2008 and is the next candidate for the cyber-weapon title and, whilst its mode of operation is more clandestine (and thus, a little more boring), it’s what Regin isn’t that interests me most.
Unlike Stuxnet, and most other malware you’ll encounter these days, Regin is designed to infect a single target with no further mechanism to spread itself. This is interesting because most run-of-the-mill malware wants to get itself onto as many machines as possible, increasing the chances that it’ll pick up something of value. Malware of this nature, for which we haven’t identified a specific infection vector, suggests that its purpose is far more targeted and that it was likely developed with specific targets in mind. Indeed the architecture of the software, which is highly modular in nature, indicates that Regin is deployed against a very specific subset of targets rather than being allowed to roam free and find targets of interest.
Regin has the ability to load various different modules depending on what its command and control servers tell it to do. These functions range from interchangeable communication methods (one of which includes the incredibly insidious idea of encoding data within ping packets) to modules designed to target specific pieces of software. It’s quite possible that the list Symantec has created isn’t exhaustive either, as Regin attempts to leave very little data at rest. Indeed Symantec hasn’t been able to recover any of the data captured by this particular bit of malware, indicating that captured data is likely not stored for long, if at all.
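To get a feel for why the ping trick is so insidious, here’s a toy sketch of the general technique: an ICMP echo request has an arbitrary data field, so anything stuffed into it rides along with what looks like ordinary ping traffic. This is purely an illustration of the concept using the standard ICMP packet layout, not Regin’s actual protocol, which has never been fully documented.

```python
import struct

def icmp_checksum(data: bytes) -> int:
    # Standard RFC 1071 ones'-complement checksum over 16-bit words.
    if len(data) % 2:
        data += b"\x00"
    total = sum(struct.unpack(f"!{len(data) // 2}H", data))
    total = (total >> 16) + (total & 0xFFFF)
    total += total >> 16
    return ~total & 0xFFFF

def encode_ping(payload: bytes, ident: int = 0x1234, seq: int = 1) -> bytes:
    # Build an ICMP echo request (type 8, code 0) whose data field
    # smuggles the hidden payload; on the wire it looks like a ping.
    header = struct.pack("!BBHHH", 8, 0, 0, ident, seq)
    checksum = icmp_checksum(header + payload)
    return struct.pack("!BBHHH", 8, 0, checksum, ident, seq) + payload

def decode_ping(packet: bytes) -> bytes:
    # The receiver simply strips the 8-byte ICMP header to
    # recover the smuggled data.
    return packet[8:]

pkt = encode_ping(b"exfiltrated secret")
assert decode_ping(pkt) == b"exfiltrated secret"
assert icmp_checksum(pkt) == 0  # a valid checksum sums to zero
```

Because most networks let echo requests through unchallenged, and the data field of a ping is rarely inspected, this kind of covert channel can blend into normal traffic almost indefinitely.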
Due to its non-worm nature the range of targets that Regin has infected gives a pretty good indication as to what its intended purpose is. The two largest groups of targets were individuals and telecommunications backbones, indicating that its purpose is likely information gathering on a large scale. The location of infections suggests that this piece of software was likely western in origin, as the primary targets were Russia and Saudi Arabia with very few targets within western countries. Due to its modular nature, however, it’s unlikely that this tool was developed for a specific operation, so I don’t believe there’s any relationship between different infections apart from them using the same framework.
Just like Stuxnet I’m sure we won’t know the full story of Regin for some time to come, as software of this nature is incredibly adept at hiding its true purpose. Whilst its capabilities appear to be rather run of the mill, the way in which it achieves them is very impressive. More interesting though is its non-worm nature which, whilst it may have prevented its detection for some time, hints heavily at its true purpose and origin. I’m really looking forward to further analysis of this particular piece of software as it gives us a rare insight into the world of clandestine cyber warfare operations.