Equation Group Malware is Beyond Anything We’ve Seen.

The discovery of Stuxnet in the wild was a watershed moment, signalling the first known salvo sent across the wires of the Internet to strike at an enemy far away. The fact that a piece of software could wreak such destruction in the real world was what drew most people’s interest; however, the way in which it achieved this was, in my opinion, far more interesting than the results it caused. Stuxnet showed that nation state sponsored malware was capable of things far beyond anything we’d attributed to malicious hackers in the past, and it made us wonder what these groups were really capable of. Thanks to Kaspersky Labs we now have a really good (read: scary) idea of what a nation state can develop, and it’s beyond what many of us thought possible.

Equation Group Victims Map

The Equation Group has been identified as being linked to several different pieces of malware that have surfaced in various countries around the world. They’ve been in operation for over a decade and have continuously improved their toolset over that time. Interestingly, the group appears to have ties to the development teams behind both Stuxnet and Regin, as some of the exploits found in early versions of Equation Group’s tools were also found in those pieces of malware. However those zero day exploits were really just the tip of the spear in Equation Group’s arsenal, as what Kaspersky Labs has discovered goes far beyond anything else we’ve ever seen.

Perhaps the most fascinating capability the group has developed is the ability to rewrite hard drive firmware, which allows them to persist their malware through reboots, operating system reinstalls and even low level formats. If that wasn’t nasty enough, there’s currently no way to detect an infection of that nature as few hard drives include the capability to read the firmware back once it’s been written. That means once the firmware has wormed its way into your system there’s very little you can do to detect and remove it, save buying a whole new PC from a random vendor and keeping it isolated from every other device.

This then feeds into their other tools, which give them unprecedented control over every facet of a Windows operating system. GrayFish, as it has been dubbed, completely replaces the bootloader and from there controls how Windows loads and operates. Essentially, once a system is under GrayFish’s control it no longer uses any of its core boot processes, which are replaced by GrayFish’s toolkit. This allows Equation Group to inject malware into almost every aspect of the system, preventing detection and giving them free rein to load any of their other malware modules. It shows a level of understanding of the operating system that would rival Microsoft’s top engineers, even those who have direct access to the source code. Although, to be honest, I wouldn’t be surprised if they had access to the source code themselves given the level of sophistication on display here.

These things barely begin to describe the capabilities that the Equation Group has developed over the past decade, as their level of knowledge, sophistication and penetration into networks worldwide is well above anything the general public has known about before. It would be terrifying if it wasn’t so interesting, as it shows just what can be accomplished when you’ve got the backing of an entire nation behind you. I’m guessing it won’t be long before we uncover more of what the Equation Group is capable of and, suffice to say, whatever they come up with next will once again set the standard for what malware can do.

G-Sync is Love, G-Sync is Life.

For us long time PC gamers, those of us who grew up in a time when games were advancing so fast that yearly upgrades were a given, getting the most bang for our buck was often the primary concern. The key components, like the CPU, RAM and GPU, would get upgraded first, with other components falling by the wayside. Over the past few years, however, some pieces of technology, like SSDs, have advanced so much that they became the upgrade everyone wanted. Now I believe I’ve found the next upgrade everyone should get, and it comes to us via NVIDIA’s new monitor technology: G-Sync.

AOC G2460PG

For the uninitiated, G-Sync is a monitor technology from NVIDIA that allows the graphics card (which must be an NVIDIA card) to directly control the refresh rate of your monitor. This lets the graphics card write each frame to the monitor as soon as it’s available, dynamically altering the refresh rate to match the frame rate. G-Sync essentially gives you the benefits of having vsync turned off and on at the same time: there’s no frame tearing and no stutter or slowdown. As someone who can’t stand either of those graphical artefacts, G-Sync sounded like the perfect technology for me and, now that I’m the proud owner of a GTX970 and two AOC G2460PGs, I think that position is justified.
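
To make the difference concrete, here’s a tiny back-of-the-envelope sketch (a PowerShell toy with made-up frame times, not anything NVIDIA actually exposes) showing how a fixed 60Hz panel quantises frame delivery to refresh boundaries while an adaptive refresh simply follows the render times:

    # Toy illustration only: hypothetical frame render times in milliseconds
    $frameTimes = 14.2, 18.9, 15.1, 22.4, 16.0
    $refresh    = 1000 / 60    # a fixed 60Hz panel refreshes every ~16.7ms

    $tFixed = 0.0; $tAdaptive = 0.0
    foreach ($ft in $frameTimes) {
        # With vsync on, a finished frame waits for the next refresh boundary
        $tFixed = [math]::Ceiling(($tFixed + $ft) / $refresh) * $refresh
        # With adaptive sync the panel refreshes the moment the frame is ready
        $tAdaptive += $ft
        "{0,7:N1} ms (fixed 60Hz)  {1,7:N1} ms (adaptive)" -f $tFixed, $tAdaptive
    }

The uneven gaps in the fixed-refresh column are the stutter you see in practice; the adaptive column just tracks the frame times.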

After getting the drivers installed and upping the refresh rate to 144Hz (more on that in a sec), the NVIDIA control panel informed me that I had G-Sync capable monitors and, strangely, told me to go enable it even though when I went there it was already done. After that I dove into some old favourites to see how the monitor and new rig handled them and, honestly, it was like I was playing on a different kind of computer. Every game I threw at it that typically had horrendous tearing or stuttering ran like a dream, without a hint of those graphical issues in any frame. It was definitely worth waiting as long as I did so that I could get a native G-Sync capable monitor.

One thing G-Sync does highlight, however, is slowdown caused by other factors, like a game engine loading files or performing some background task that impedes the rendering engine. These things, which would previously have gone unnoticed, are impossible to ignore when everything else runs so smoothly. Thankfully issues like that are few and far between, as I’ve only noticed them shortly after loading into a level, but it’s interesting to see them bubbling up now, signalling that the next must-have upgrade might be drive related once again.

I will admit that some of these benefits come from the hugely increased refresh rate of my new monitors, jumping me from a paltry 60Hz all the way up to 144Hz. The difference is quite stark when you turn it on in Windows and, should you have the grunt to power it, astounding in games. After spending so long with content running in the 30~60Hz spectrum I had forgotten just how smooth higher frame rates are and, whilst I don’t know if there’s much benefit going beyond 144Hz, that initial bump up is most certainly worth it. Not a lot of other content (like videos, etc.) takes advantage of the higher refresh rate, however, something I didn’t think would bother me until I started noticing it.

Suffice to say I’m enamoured with G-Sync and consider the premium I paid for these TN panel monitors well worth it. I’m willing to admit that high frame rates and G-Sync aren’t for everyone, especially if you’re lusting after the better colour reproduction and higher resolutions of IPS panels, but for someone like me, who can’t help but notice tearing and stuttering, it’s a dream come true. If you have the opportunity to see one in action I highly recommend it, as it’s hard to describe just how much better it is until you see it for yourself.

Capturing a Before and After Performance Report for Windows Servers.

The current project I’m on has a requirement to determine a server’s overall performance before and after a migration, mostly to make sure that it still functions the same or better once it’s on the new platform. Whilst it’s easy enough to get raw statistics from perfmon, getting an at-a-glance view of how a server is performing before and after a migration is a far more nuanced concept, one that’s not easily accomplished even with some Excel wizardry. With that in mind I thought I’d share my idea for creating such a view, as well as outlining the challenges I’ve hit when attempting to collate the data.

Perfmon Data

At a high level I’ve focused on the four core resources that all operating systems consume: CPU, RAM, disk and network. For the most part these metrics are easily captured by perfmon’s counters; however, I wanted to go a bit further to make sure that the final comparisons represented a more “true” picture of before and after performance. To do this I included some additional qualifying metrics which show whether increased resource usage is actually hurting performance or whether the server is simply consuming more resources because the new platform has more capacity to give. With that in mind, these are the metrics I settled on (a rough collection sketch follows the list):

  • Average CPU usage (24 hours), Percentage, Quantitative
  • CPU idle time on the VM’s virtual host (24 hours), Percentage, Qualifying
  • Top 5 services by CPU usage, List, Qualitative
  • Average memory usage (24 hours), Percentage, Quantitative
  • Average balloon driver memory usage (24 hours), MB consumed, Qualifying
  • Top 5 services by memory usage, List, Qualitative
  • Average network usage (24 hours), Percentage, Quantitative
  • Average TCP retransmissions (24 hours), Total, Qualifying
  • Top 5 services by network bandwidth utilized, List, Qualitative
  • Average disk usage (24 hours), Percentage, Quantitative
  • Average disk queue depth (24 hours), Total, Qualifying
  • Top 5 services by storage IOPS/bandwidth utilized, List, Qualitative
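
Most of the quantitative numbers above can be pulled straight from perfmon. As a rough illustration only (the counter paths and sampling interval here are my own assumptions, not the exact counters in the profile linked later in the post), a 24-hour collection in PowerShell might look something like this:

    # Core quantitative counters; these are standard perfmon counter paths
    $counters = @(
        '\Processor(_Total)\% Processor Time',
        '\Memory\% Committed Bytes In Use',
        '\Network Interface(*)\Bytes Total/sec',
        '\PhysicalDisk(_Total)\% Disk Time',
        '\PhysicalDisk(_Total)\Avg. Disk Queue Length'
    )

    # One sample per minute for 24 hours, saved off for later collation
    Get-Counter -Counter $counters -SampleInterval 60 -MaxSamples 1440 |
        Export-Counter -Path "C:\PerfLogs\$($env:COMPUTERNAME)-baseline.blg" -FileFormat BLG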

Essentially these metrics can be broken down into three categories: quantitative, qualitative and qualifying. Quantitative metrics are the base metrics which form the main part of the before and after analysis. Qualitative metrics are mostly just informational (being the Top 5 consumers of a given resource), however they provide some useful insight into what might be causing an issue. For example, if an SQL box isn’t showing the SQL process as a top consumer then something is likely causing that process to fall over before it can actually use any resources. Finally, the qualifying metrics indicate whether increased usage of a resource actually signals an impact on the server’s performance: if, say, memory usage is high and the memory balloon size is also high, it’s quite likely the system isn’t performing very well.
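
As a toy example of that last point (the thresholds are entirely arbitrary, purely to illustrate how a qualifying metric changes the reading of a quantitative one):

    # Arbitrary thresholds, purely to illustrate how a qualifying metric is read
    $avgMemoryPct = 92      # quantitative: average memory usage over 24 hours
    $avgBalloonMB = 1500    # qualifying: average balloon driver usage over 24 hours

    if ($avgMemoryPct -gt 85 -and $avgBalloonMB -gt 0) {
        "High memory usage with active ballooning: the guest is likely memory constrained."
    } elseif ($avgMemoryPct -gt 85) {
        "High memory usage but no ballooning: probably just making use of the extra capacity."
    } else {
        "Memory usage looks unremarkable."
    }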

The vast majority of these metrics are provided by perfmon; however, there were a couple that I couldn’t seem to get through the counters, even though I could see them in Resource Monitor. As it turns out, Resource Monitor makes use of the Event Tracing for Windows (ETW) framework, which gives you an incredibly granular view of all the events happening on your machine. What I was looking for was a breakdown of disk and network usage per process (in order to generate the Top 5 users list), which is unfortunately bundled up in the IO counters available in perfmon. In order to split these out you have to run a kernel trace through ETW and then parse the resulting file to get the metrics you want. It’s a little messy but unfortunately there’s no good way to get those metrics separated. The resulting perfmon profile I created can be downloaded here.
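
For what it’s worth, the general shape of that process looks something like the sketch below; the exact kernel keywords you need and how cleanly the events decode will vary, so treat it as a starting point rather than the finished parser:

    # Start the NT Kernel Logger with process, disk and network keywords (run elevated).
    # The keyword list is quoted so PowerShell passes it straight through to logman.
    logman start "NT Kernel Logger" -p "Windows Kernel Trace" "(process,disk,net)" -o C:\PerfLogs\kernel.etl -ets

    # ...let it run for the capture window, then stop the session
    logman stop "NT Kernel Logger" -ets

    # Very rough first pass: read the trace back and see which processes generated the most events
    Get-WinEvent -Path C:\PerfLogs\kernel.etl -Oldest |
        Group-Object ProcessId |
        Sort-Object Count -Descending |
        Select-Object -First 5 Name, Count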

The next issue I’ve run into is getting the data into a readily digestible format. You see, not all servers are built the same and not all of them run the same set of software. This means that when you open up the resulting CSV files from different servers the column headers won’t line up, so you’ve got to either do some tricky Excel work (which is often prone to failure) or get freaky with some PowerShell (which is messy and complicated). I decided to go for the latter as at least I can maintain and extend a script somewhat easily, whereas an Excel spreadsheet has a tendency to get out of control faster than anyone expects. That part is still a work in progress, but I’ll endeavour to update this post with the completed script once I’ve got it working.
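
In the meantime, here’s a minimal sketch of one way to tackle the mismatched headers (the paths are placeholders and this isn’t the finished script, just the header-union idea):

    # Hypothetical folder of per-server perfmon CSV exports
    $files = Get-ChildItem C:\PerfLogs\*.csv

    # Build the union of every column header seen across all of the files
    $allHeaders = $files |
        ForEach-Object { (Import-Csv $_.FullName | Get-Member -MemberType NoteProperty).Name } |
        Sort-Object -Unique

    # Re-emit every row against the full header set so the columns line up
    $files |
        ForEach-Object { Import-Csv $_.FullName } |
        Select-Object $allHeaders |
        Export-Csv C:\PerfLogs\combined.csv -NoTypeInformation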

After that it’s a relatively simple task of displaying everything in a nicely formatted Excel spreadsheet and doing comparisons based on the metrics you’ve generated. If I had more time on my hands I probably would’ve tried to integrate it into something like a SharePoint BI site so we could do some groovy tracking and intelligence on it, but due to tight time constraints I probably won’t get that far. Still, a well laid out spreadsheet isn’t a bad format for presenting such information, especially when you can colour everything green when things are going right.

I’d be keen to hear other people’s thoughts on how you’d approach a problem like this, as trying to quantify the nebulous idea of “server performance” has proven far more challenging than I first thought it would be. Part of this is due to the data manipulation required, but it was also about ensuring that all aspects of a server’s performance were covered and converted down to readily digestible metrics. I think I’ve gotten close to a workable solution with this but I’m always looking for ways to improve it, or for a magical tool out there that will do this all for me ;)

VMware Targets OpenStack with vSphere 6.

Despite the massive inroads that other virtualization providers have made into the market, VMware still stands out as the king of the enterprise space. Part of this is due to the maturity of their toolset, which is able to accommodate a wide variety of guests and configurations, but they’ve also got the largest catalogue of value-adds, which helps vastly in driving adoption of their hypervisor. Still, the asking price for any of their products has become something of a sore point for many, and their proprietary platform has caused consternation for those looking to leverage public cloud services. With the latest release of their vSphere product VMware is looking to remedy at least the latter issue, embracing OpenStack compatibility in one of their distributions.

The list of improvements coming with this new release is a long one (and I won’t bother repeating them all here) but suffice to say that most of them were expected and in line with what we’ve gotten previously. Configuration maximums have gone up for pretty much every aspect, feature limitations have been extended and there’s a handful of new features that will enable vSphere based clusters to do things that were previously impossible. In my mind the key improvements VMware has made in this release come down to Virtual SAN 6, Long Distance vMotion and, of course, support for OpenStack via the VMware Integrated OpenStack release.

Virtual SAN always felt like a bit of an also-ran when it first came out due to the rather stringent requirements it had around its deployment. I remember investigating it as part of a deployment I was doing at the time, only to be horrified at the fact that I’d have to deploy a vSphere instance at every site I wanted to use it at. The subsequent releases have shifted the product’s focus significantly and it now presents a viable option for those looking to bring software defined datacenter principles to their environment. The improvements that come in 6 are most certainly cloud focused, with things like Fault Domains and All-Flash configurations. I’ll be very interested to see how the enterprise reacts to this offering, especially for greenfield deployments.

Long Distance vMotion might sound like a minor feature but, as someone who’s worked in numerous large, disparate organisations, the flexibility it will bring is phenomenal. Right now the biggest issue most organisations face when maintaining two sites (typically for DR purposes) is getting workloads between those sites, often requiring a lengthy outage process to do so. With Long Distance vMotion, making both sites active and simply vMotioning workloads between them is a vastly superior solution, providing many of the benefits of SRM without the required investment and configuration.
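
As a rough idea of what that looks like from the admin’s side, here’s a minimal PowerCLI sketch (all of the names are placeholders, and it assumes both sites’ hosts are managed by the same vCenter):

    # Minimal PowerCLI sketch; server, VM, host and datastore names are placeholders
    Connect-VIServer vcenter.example.local

    $vm       = Get-VM "app-server-01"
    $destHost = Get-VMHost "esxi01.site-b.example.local"
    $destDs   = Get-Datastore "SiteB-Datastore01"

    # Live-migrate the VM's compute and storage to the second site in one operation
    Move-VM -VM $vm -Destination $destHost -Datastore $destDs

The point being that what used to be a change-window exercise becomes a single, live operation.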

The coup here though is, of course, the OpenStack compatibility through VMware’s integrated distribution. OpenStack is notorious for being a right pain in the ass to get running properly, even if you already have staff who have had some experience with the product set in the past. VMware’s solution is to provide a pre-canned build which exposes all the resources in a VMware cloud through the OpenStack APIs for developers to utilize. Considering that OpenStack’s lack of good management tools has been, in my mind, one of the biggest challenges to its adoption, this solution from VMware could be the kick in the pants it needs to see some healthy adoption rates.

It’s good to see VMware jumping on the hybrid cloud idea, as I’ve long been of the mind that it will be the solution going forward. Cloud infrastructure is great and all but there are often requirements it simply can’t meet due to its commodity nature. Going hybrid with OpenStack as the intermediary layer will allow enterprises to take advantage of these APIs whilst still leveraging their investment in core infrastructure, utilizing the cloud on an as-needed basis. Of course that’s the nirvana state, but it seems to get closer to realisation with every new release, so here’s hoping VMware will be the catalyst to finally see it succeed.

FCC to Solidify Net Neutrality Under Title II Provisions.

It’s undeniable that the freewheeling nature of the Internet is behind the exponential growth it has experienced. It was a communications platform unencumbered by corporate overlords and free from gatekeepers, one that enabled people around the world to communicate with each other. However the gatekeepers of old have always tried to claw back some semblance of control wherever they can: imposing data caps, selling premium services and charging popular websites a premium to give their customers preferred access. Such things go against the idea of Net Neutrality, a core tenet of the Internet’s strength, but the Federal Communications Commission (FCC) in the USA is looking to put a stop to them.

FCC Chairman Tom Wheeler

FCC chairman Tom Wheeler announced today that the commission will seek to classify Internet services under its Title II authority, which would see them regulated in a way that guarantees net neutrality, ensuring open and unhindered access. The rules wouldn’t just be limited to fixed line broadband services either, as Mr Wheeler stated this change in regulation would also cover wireless Internet services. The motion will have to be voted on before it can be enacted in earnest (and there’s still the possibility of Congress undermining it with additional legislation); however, given the current makeup of the FCC board it’s almost guaranteed to pass, which is a great thing for the Internet in the USA.

This will go a long way towards combatting the anti-competitive practices that a lot of ISPs are engaging in. Companies like Netflix have been strong-armed in the past into paying substantial fees to ISPs to ensure that their services run at full speed for their customers, something which only benefits the ISP. Under the Title II changes it would be illegal for ISPs to engage in such behaviour, requiring all packets that traverse the network to be given the same priority. No Internet based company would then have to pay ISPs just to keep their services running acceptably, which is hugely beneficial to Internet based innovators.

Of course ISPs have been quick to paint these changes in a negative light, saying that with this new kind of regulation we’re likely to see an increase in fees and all sorts of things that will trash anyone’s ability to innovate. Pretty much all of their concerns stem from the fact that they will be losing revenue from the deals they’ve cut, ones that are directly at odds with the idea of net neutrality. Honestly I have little sympathy for them, as they’ve already profited heavily from government investment and from regulation that kept competition between ISPs to a minimum. The big winners in all of this will be consumers and open Internet providers like Google Fiber, both of which are the antithesis of their outdated business models.

Hopefully this paves the way for similar legislation and regulation around the world, freeing the Internet from the constraints of its corporate overlords. My only fear is that Congress will mess with these provisions after the changes are made, but hopefully the current administration, which has gone on record in support of net neutrality, will put the kibosh on any plans to that effect. In any case the future of the Internet is looking brighter than it ever has and hopefully that trend will continue globally.

Raspberry Pi 2 to Run Windows 10.

It’s not widely known that Microsoft has been in the embedded business for quite some time now, with various versions of Windows tailored specifically for that purpose. Not that Microsoft has a particularly stellar reputation in this field, as most of the time people only find out something was running Windows when it crashes spectacularly. If you wanted to tinker with it yourself the process was pretty arduous, which wasn’t very conducive to generating much interest in the product. Microsoft seems set to change that, however, with a version of Windows 10 slated to run on the beefed-up Raspberry Pi 2 and, best of all, it will be completely free to use.

Raspberry Pi 2

Windows has supported ARM chipsets since the original Windows 8 release; however, the diminutive specifications of the original Raspberry Pi precluded it from running even the cut-down RT version. With the coming of Windows 10, Microsoft is looking to develop an Internet of Things (IoT) line of Windows products specifically geared towards low power platforms such as the Raspberry Pi. Better still, the product team behind those versions of Windows has specifically included the Raspberry Pi 2 as one of their supported platforms, meaning it will work out of the box without needing to mess with drivers or other configuration details. Whilst I’m sure the majority of Raspberry Pi 2 users will likely stick to their open source alternatives, the availability of a free version of Windows does open the board up to a whole host of developers who might not have considered the platform previously.

The IoT version of Windows is set to come in three different flavours (Industry, Mobile and Athens), with a revision of the .NET Micro Framework for other devices that don’t fall into one of those categories. Industry is essentially the full version of Windows with features geared towards the embedded platform. The Mobile version is, funnily enough, geared towards always-on mobile devices but still retains many of the capabilities of its fully fledged brethren. Athens, the version slated to be released on the Raspberry Pi 2, is a “resource focused” version of Windows 10 that still retains the ability to run Universal Apps. There’ll hopefully be some more clarity around these delineations as we get closer to Windows 10’s official release date but, suffice to say, if the Raspberry Pi 2 can run Universal Apps it’s definitely a platform I could see myself tinkering with.

These new flavours of Windows fit into Microsoft’s broader strategy of trying to get their ecosystem into as many places as they can, something they attempted to start with the WinRT framework and have reworked with Universal Apps. Whilst I feel that WinRT had merit, it’s hard to say it was successful in achieving what it set out to do, especially with the negative reception Metro apps got from the wider Windows user base. Universal Apps could potentially be the Windows 7 to WinRT’s Vista, a similar idea reworked and rebranded for a new market that finds the feet its predecessor never had. The IoT versions of Windows are simply another string in this particular bow, but whether or not they’ll pan out is not something I feel I can accurately predict.

YouTube Now HTML5 by Default*.

Flash, after starting out its life as one of a bevy of animation plugins for browsers back in the day, has become synonymous with online video. It’s also got a rather terrible reputation for using an inordinate amount of system resources to accomplish this feat, something which hasn’t gone away even in the latest versions. Indeed even my media PC, which has a graphics card with accelerated video decoding, struggles with Flash, its unoptimized player monopolizing every skerrick of resources for itself. HTML5 sought to solve this problem by making video a part of the base HTML specification which, everyone had hoped, would see an end to proprietary plug-ins and the woes they brought with them. However the road to getting that standard widely adopted hasn’t been an easy one, as YouTube’s four year journey to making HTML5 the default shows.

Google had always been on the “let’s use an open standard” bandwagon when it came to HTML5 video, which put it at odds with other members of the HTML5 committee who wanted to use something that, whilst more ubiquitous, was a proprietary codec. This, unfortunately, led to a deadlock, with no agreement reached on a default standard. Despite what YouTube’s move to HTML5 would indicate, there is still no defined standard codec for HTML5 video, meaning there’s no way to guarantee that a video encoded one way will be viewable in every HTML5 compliant browser. Essentially it looks like a format war is about to begin, where the wider world will decide the champion and the HTML5 committee will just have to play catch up.

YouTube has unsurprisingly decided to go with Google’s VP9 codec for their HTML5 videos, a standard which they fully control. Whilst they’ve had HTML5 video available as an option for some time now, it never enjoyed the widespread support required for them to make it the default. It seems they’ve now got buy-in from most of the major browser vendors to make the switch, so people running Safari 8, IE 11, Chrome and (beta) Firefox will be given the Flash free experience. This has the potential to set up VP9 as the de facto codec for HTML5, although I highly doubt it’ll be officially crowned anytime soon.

Google has also been hard at work ensuring that VP9 enjoys wide support across platforms, as several major chip producers have System on a Chip (SoC) designs that already support the codec. Without that, the mobile experience of VP9 encoded videos would likely be extremely poor, hindering adoption substantially.

Whilst a codec that’s almost entirely under the control of Google might not have been the ideal solution the Open Source evangelists were hoping for (although it seems pretty open to me), it’s probably the best solution we were going to get. None of the other competing standards, apart from H.264, seem to have the widespread support that Google’s VP9 now enjoys. It’s likely that the next few years will see many people adopting a couple of standards whilst consumers duke it out in the next format war, with the victor not clear until it’s been over for a couple of years. For my part, I’m glad it’s happened and hopefully soon we can do away with the system hog that Flash is.

Microsoft’s Surface is…Doing Ok? What?

Microsoft’s hardware business has always felt like something of an also-ran, with the notable exception being the Xbox of course. It’s not that the products were bad per se, indeed many of my friends still swear by the Microsoft Natural ergonomic keyboard, more that it just seemed to be an aside that never really saw much innovation or effort. The Surface seemed like an attempt to change that perception, pitting Microsoft directly against the venerable iPad whilst also attempting to bring consumers across to the Windows 8 way of thinking. Unfortunately the early years weren’t kind to it at all, with the experiment resulting in a $900 million write-down for Microsoft, which many took to indicate that the Surface (or at the very least the RT version) wasn’t long for this world. The 18 months since, however, have seen that particular section of Microsoft’s business make a roaring comeback, much to my, and everyone else’s, surprise.

Surface Pro 3

The Microsoft quarterly earnings report released today shows that Microsoft is generally in a good position, with revenue and gross margin up on the same quarter last year. The internal make-up of those numbers is a far more mixed story (covered in much better detail here); however, the standout point was that the Surface division alone brought in $1.1 billion for the quarter, up a staggering $211 million on the previous quarter. This is most certainly on the back of the Surface Pro 3, which was released in June 2014, but for a device that was almost certainly headed for the trash heap it’s a pretty amazing turnaround: from $900 million in the hole to $1.1 billion in quarterly revenue just 18 months later.

The question that interests me then is: What was the driving force behind this comeback?

To start off with, the Surface Pro 3 (and all its Surface Pro predecessors) are actually pretty great pieces of kit, widely praised for their build quality and overall usability. They were definitely premium devices, especially if you went for the higher spec options, but they’re infinitely preferable to carting your traditional workhorse laptop around with you. The lines get a little blurry when you compare them to an ultrabook of similar specifications, at least if you’re someone like me who’s exacting about what they want, but if you didn’t really care about that the Surface was a pretty easy decision. So if the hardware was great, what was behind the initial write-down?

That lies entirely at the feet of the Windows RT version, which simply failed to be the iPad competitor it was slated to be. Whilst I’m sure I’d have about as much use for an iPad as I would for a Surface RT, it simply didn’t have the appeal of its fully fledged Pro brethren. Sure, you’d be spending more money on the Pro, but you’d be getting the full Windows experience rather than a cut-down version that felt like it was stuck between being a tablet and a laptop replacement. Microsoft tried to stick with the RT idea for the Surface 2; however, they’ve now gone to great lengths to reposition the device as a laptop replacement, not an iPad competitor.

You don’t even have to go far to see this repositioning in action: the Microsoft website for the Surface Pro 3 puts it in direct competition with the MacBook Air. It’s a market segment the device is far more likely to win in as well, considering that Apple’s entire Mac product line made about $6.6 billion last quarter, and that includes everything from the Air all the way to the Mac Pro. Apple has never been the biggest player in this space, however, so the comparison might be a little unfair, but it still puts the Surface’s recent revival into perspective.

It might not signal Microsoft being the next big thing in consumer electronics, but it’s definitely not something I expected from a sector that endured a near billion dollar write-down. Whether Microsoft can continue along these lines and capitalize on this is something we’ll have to watch closely, as I’m sure no one is going to let them forget the failure that was the original Surface RT. I still probably won’t buy one, however; well, unless they decide to include a discrete graphics chip in a future revision.

Hint hint, Microsoft.

Windows 10: Free (For a Year), Packed With…Stuff.

The rumour mill has been running strong for Microsoft’s next Windows release, fuelled by the usual sneaky leaks and the intrepid hackers who relentlessly dig through preview builds to find things they weren’t meant to see. For the most part though things have largely been as expected, with Microsoft announcing the big features and changes late last year and drip feeding minor things through the technical preview stream. Today Microsoft held their Windows 10 Consumer Preview event in Redmond, announcing several new features that will become part of their flagship operating system as well as confirming the strategy for the Windows platform going forward. Suffice to say it’s definitely a shake-up of what we’d traditionally expect from Microsoft, especially when it comes to licensing.

The announcement that headlined the event was that Windows 10 will be a free upgrade for all current Windows 7, 8, 8.1 and Windows Phone 8.1 customers who upgrade in the first year. This is obviously an attempt to ensure that Windows 10’s adoption rate doesn’t languish in the Vista/8 region as, even though every other version of Windows seems to do just fine, Windows 10 is still different enough to cause issues. I can see the adoption rate for current Windows 8 and 8.1 users being very high, thanks to the integration with the Windows Store; for Windows 7 stalwarts, however, I’m not so sure. Note that this also won’t apply to enterprises, which are currently responsible for an extremely large chunk of the Windows 7 market.

Microsoft also announced Universal Applications, which are essentially the next iteration of the WinRT framework that was introduced with Windows 8. However, instead of relegating some applications to a functional ghetto (like all Metro apps were), Universal Apps share a common base set of functionality with additional code paths for the different platforms they support. Conceptually it sounds like a great idea, as it means the different versions of an application will share the same codebase, making it very easy to bring new features to all platforms simultaneously. Indeed if this platform can be extended to encompass Android/iOS it’d be an incredibly powerful tool, although I wouldn’t count on that coming from Microsoft.

Xbox Live will also be making a prominent appearance in Windows 10, with some pretty cool features coming for Xbox One owners. Chief among these, at least for me, is the ability to stream Xbox One games from your console directly to your PC. As someone who currently uses their PC as a monitor for their PS4 (I have a capture card for reviews and my wife didn’t like me monopolizing the TV constantly with Destiny) I think this is a great feature, one I hope other console manufacturers replicate. There’s also cross-game integration for games that use Xbox Live, an inbuilt game recorder and, of course, another iteration of DirectX. This was the kind of stuff Microsoft had hinted at doing with Windows 8, but it seems like they’re finally committed to it with Windows 10.

Microsoft is also expanding its consumer electronics business with new Windows 10 enabled devices. The Microsoft HoloLens is their attempt at a Google Glass-like device, although one aimed more at being used at the desktop than on the go. There’s also the Surface Hub, which is Microsoft’s version of the smart board, integrating all sorts of conferencing and collaboration features. It will be interesting to see if these things achieve any sort of meaningful adoption as, whilst they’re not critical to Windows 10’s success, they’re certainly devices that could increase adoption in areas that traditionally aren’t Microsoft’s domain.

Overall the Consumer Preview event paints Windows 10 as an evolutionary step forward for Microsoft, taking the core of the ideas they attempted with previous iterations and reworking them with a fresh perspective. It will be interesting to see how the one year free upgrade approach works for them, as gaining that critical mass of users is the hardest thing for any application, even the venerable Windows platform. The other features are more nice-to-haves than anything else, things that will likely help Microsoft sell people on the Windows 10 idea. Getting this launch right is crucial for Microsoft to execute on their strategy of Windows 10 being the one platform going forward, as the longer it takes to get the majority of users onto it the harder it will be to justify investing heavily in it. Hopefully Windows 10 can do for Windows 8 what Windows 7 did for Vista, as Microsoft has a lot riding on this coming off just right.

What’s Worse Than a Filter? A Backdoor Courtesy of David Cameron.

Technological enablers aren’t good or evil; they simply exist to facilitate whatever purpose they were designed for. Of course we always aim to maximise the good they’re capable of whilst diminishing the bad; however, changing their fundamental characteristics (which are often the sole reason for their existence) in order to do so is, in my mind, abhorrent. This is why I think things like Internet filters and other solutions that hope to combat the bad parts of the Internet are a fool’s errand, as they seek to destroy the very thing they set out to improve. The latest instalment comes to us courtesy of David Cameron, who is now seeking a sanctioned backdoor into all encrypted communications and to legislate against those who’d resist.

David Cameron Shifty Lookin Fella

Like most election waffle, Cameron’s pitch is strong on rhetoric but weak on substance; you can get the gist of it from this quote:

“I think we cannot allow modern forms of communication to be exempt from the ability, in extremis, with a warrant signed by the home secretary, to be exempt from being listened to.”

Essentially what he’s referring to is the idea that encrypted communications, the kind now routinely employed by consumer level applications like WhatsApp and iMessage, shouldn’t be allowed to exist without a method for intelligence agencies to tap into them. It’s not that these communications are currently exempt from being intercepted, just that it’s infeasible for the security agencies to decrypt them once they’ve got their hands on them. The problem, though, is that introducing a mechanism like this, a backdoor by which encrypted communications can be decrypted, fundamentally breaks the utility of the service and introduces a whole slew of potential threats that will be exploited.

The crux of the matter stems from the trust relationships required for two way encrypted communications to work. For the most part you’re relying on the channel between both parties being free from interference and monitoring by third parties. This is what allows corporations and governments to spread their networks over the vast reaches of the Internet, as they can ensure that information passing through untrusted networks isn’t subject to prying eyes. Under this proposal any encrypted communications which pass through the UK’s networks could be opened up, something which I’m sure a lot of corporations wouldn’t like to sign on for. That’s not to mention the millions of regular people who rely on encrypted communications in their daily lives, like anyone who’s used Facebook or a secure banking site.

Indeed I believe the risks posed by introducing a backdoor into encrypted communications far outweigh any potential benefits you’d care to mention. You see, any backdoor into a system, no matter how well designed it is, will severely weaken the encrypted channel’s ability to resist intrusion from a malicious attacker. No matter which way you slice it you’re introducing another attack vector into the equation: where there were, at most, two before (the two endpoints) you now have at least three (the two endpoints plus the backdoor). I don’t know about you, but I’d rather not increase my risk of being compromised by 50% just because someone might’ve said “plutonium” in my private chats.

The idea speaks volumes to David Cameron’s lack of understanding of technology as, whilst you might be able to get some commercial companies to comply with this, you will have no way of stopping peer to peer encrypted communications built on open source solutions. Simply put, if the government somehow managed to get a backdoor worked into PGP it’d be a matter of days before it was no longer used and another solution took its place. Sure, you could attempt to prosecute all those people using illegal encryption, but they said the same thing about BitTorrent and I haven’t seen mass arrests yet.

It’s becoming painfully clear that the conservative governments of the world are simply lacking a fundamental understanding of how technology works and thus concoct solutions which simply won’t work in reality. There are far easier ways for them to get the data they so desperately need (although I’m yet to see the merits of any of these mass surveillance networks), however they seem hell bent on getting it in the most ham-fisted way possible. I would love to say that my generation will be different when it gets into power, but stupid seems to be an inheritable condition when it comes to conservative politics.