Technology

HTC-Vive_White

Valve Pairs With HTC for Their VR Headset.

It’s strange to think that just over two years ago the idea of VR headsets was still something of a gimmick that was unlikely to take off. Then came the Oculus Rift Kickstarter, which managed to grab almost 10 times the funds it asked for and revamped an industry that really hadn’t seen much action since the late 90s. Whilst consumer level units are still a ways off, it’s shaping up to be an industry with robust competition, with numerous companies vying for the top spot. The latest of these comes to us via HTC, who’ve partnered with Valve to deliver their Steam VR platform.


Valve partnering with another company for the hardware isn’t surprising, as they let go of a number of personnel in their hardware section not too long ago, although their choice of partner is quite interesting. Most of the other consumer electronics giants have already made a play in the VR game: Samsung with Gear VR, Sony with Project Morpheus and Google with their (admittedly limited) Cardboard. So whilst I wouldn’t say that we’ve been waiting for HTC to release something, it’s definitely not unexpected that they’d eventually make a play for this space. The fact that they’ve managed to partner with Valve, who already has major buy-in with nearly all PC gamers thanks to Steam, is definitely a win for them, and judging by the hardware it seems like Valve is pretty happy with the partnership too.

The HTC/Valve VR headset has been dubbed the Re Vive and looks pretty similar to the prototypes of the Oculus DK2. The specs are pretty interesting, with it sporting two 1200 x 1080 screens capable of a 90Hz refresh rate, well above what your standard computer monitor is capable of. The front is also littered with numerous sensors, including your standard gyroscopes and accelerometers plus a laser position tracker, which all combine to provide head tracking to 1/10th of a degree. There are also additional Steam VR base stations which can provide full body tracking as well, allowing you to get up and move around in your environment.

There’s also been rumblings of additional “controllers” that come with the headset, although I’ve been unable to find any pictures of them or details on how they work. Supposedly they work to track your hand motions so you can interact with objects within the environment. Taking a wild guess here I think they might be based on something like the MYO, as other solutions limit you to small spaces in order to do hand tracking properly whilst the MYO seems to fit more in line with the Re Vive’s idea of full movement tracking within a larger environment. I’ll be interested to see what their actual solution for this is as it has the potential to set Valve and HTC apart from everyone else who’s yet to come up with a solution.

Suffice to say this piece of HTC kit has seen quite a bit of development work thrown into it, more than I think anyone had expected when this announcement was first made. It’ll be hard to judge the platform before anyone can get their hands on it as, with all things VR, you really don’t know what you’re getting yourself into until you give it a go. The pressure really is on now to be first to market with a consumer level solution that works seamlessly with games that support VR, as all these prototypes and dev kits are great but we’re still lacking that one implementation that really sells the idea. HTC and Valve are well positioned to do that, but so is nearly everyone else.

internet-access-vpn

The Shambles That is The Liberal NBN.

It’s no secret that I’m loudly, violently opposed to the Liberal’s Multi-Technology Mix NBN solution and I’ve made it my business to ensure that the wider Australian public is aware of how frightfully bad it will be. The reasons as to why the Liberal’s solution is so bad are many, however they can almost all be traced back to a desire to cast anything that Labor created in a poor light whilst painting their own ideas as far better. Those of us in the know have remained unconvinced however, tearing into every talking point and line of rhetoric to expose the Liberal’s NBN for the farce it is. Now, as the Liberals attempt to roll out their inferior solution, they are no longer able to hide behind bullshit reports as the real world numbers paint an awfully bad picture for their supposedly better NBN.


The slogan of the MTM NBN being “Fast. Affordable. Sooner.” has become an easy target as the months have rolled on since the Liberal Party announced their strategy. Whilst the first point can always be debated (since 25Mbps should be “more than enough” according to Abbott) the latter two can be directly tied to real world metrics that we’re now privy to. You see, with the release of the MTM NBN strategy all works that were planned, but not yet executed, were put on hold whilst a couple of FTTN trial sites were scheduled to be established. The thinking was that FTTN could be deployed much faster than a FTTP solution and, so the slogan went, much cheaper too. Well here we are a year and a half later and it’s not looking good for the Liberals and unfortunately, by extension, us Australians.

It hasn’t been much of a secret that the FTTN trials NBNCo have been conducting haven’t exactly been stellar, with them experiencing significant delays in getting set up. Considering that the Liberals gave themselves a 2016 deadline for giving everyone 25Mbps+ speeds these delays didn’t bode well for getting the solution out before the deadline. Those delays appear to have continued, with just 53 customers connected to the original Umina trial and not a single one connected to the Epping trial. This is after they gave a timeline of “within a month” in October last year. Suffice to say the idea that FTTN could be made available to the wider public by the end of 2016 is starting to look really shaky, and so is the 2019 timeframe for completion of the NBN.

Worse still, the idea that the MTM NBN would be significantly cheaper than the full FTTP NBN is yet again failing to stand up to scrutiny. Additional cost analysis conducted by NBNCo, which includes opex costs that were excluded under previous costing models, has seen the cost per premises estimate for brownfields (deployments to existing houses) rise to $4316. That’s a substantial increase, however it’s a more accurate representation of how much it actually costs to get a single house connected. Taking that into account the total cost for deploying the FTTP NBN comes out to about $47 billion, very close to the original budget that Labor had allocated for it. Whilst it was obvious that the Liberal’s cost-benefit analysis was a crock of shit from the beginning, this just further proves the point and casts more doubt over the MTM NBN being significantly cheaper.

I’m honestly not surprised by this anymore as it’s clear that the Liberals really had no intent of adhering to their rhetoric and were simply trashing the FTTP NBN because it was Labor’s idea. It’s an incredibly short sighted way of looking at it, honestly, as they would have won far more favour with a lot of people if they had just continued with the FTTP NBN as it was. Instead they’re going to waste years and multiple billions of dollars on a system that won’t deliver on its promises and we’ll be left to deal with the mess. All we can really hope for at this point is that we make political history and cement the Liberal’s reign under the OneTermTony banner.

wpid-Intel

10nm May be The Last Hurrah for Silicon.

For as long as we’ve been using semiconductors there’s been one material that’s held the crown: silicon. It’s one of the most abundant elements on Earth and its semiconductor properties made it perfectly suited to mass manufacture, which is why nearly all of the world’s electronics contain a silicon brain within them. Silicon isn’t the only material capable of performing this function, indeed there’s a whole smorgasbord of other semiconductors that are used for specific applications, however the amount of research poured into silicon means few of them are as mature as it is. With our manufacturing processes shrinking, though, we’re fast approaching the limit of what silicon, in its current form, is capable of and that may pave the way for a new contender for the semiconductor crown.


The road to the current 14nm manufacturing process has been a bumpy one, as the heavily delayed release of Intel’s Broadwell can attest. Mostly this was due to the low yields that Intel was getting with the process, which is typical for die shrinks, however solving the issue proved to be more difficult than they had originally thought. This is likely due to the challenges Intel faced with making their FinFET technology work at the smaller scale, as they had only just introduced it in the previous 22nm generation of CPUs. This process will likely still work down at the 10nm level (as Samsung has just proven today) but beyond that there’s going to need to be a fundamental shift in order for the die shrinks to continue.

For this Intel has alluded to new materials which, keen observers have pointed out, won’t be silicon.

The most likely candidate to replace silicon is a material called Indium Gallium Arsenide (InGaAs). It has long been used in photodetectors and in high frequency applications like microwave and millimeter wave devices. Transistors made from this substrate are called High-Electron Mobility Transistors which, in simpler terms, means they can be made smaller, switch faster and be packed more densely into the same area. Whilst the foundries might not yet be able to create these kinds of transistors at scale, the fact that they’ve been manufactured in some capacity for decades now makes them a far more viable alternative than some of the other, more exotic materials.

There is potential for silicon to hang around for another die shrink or two if Extreme Ultraviolet (EUV) lithography takes off however that method has been plagued with developmental issues for some time now. The change between UV lithography and EUV isn’t a trivial one as EUV can’t be made into a laser and needs mirrors to be directed since most materials will simply absorb the EUV light. Couple that with the rather large difficulty in generating EUV light in the first place (it’s rather inefficient) and it makes looking at new substrates much more appealing. Still if TSMC, Intel or Samsung can figure it out then there’d be a bit more headroom for silicon, although maybe not enough to offset the investment cost.

Whatever direction the semiconductor industry takes one thing is very clear: they all have plans that extend far beyond the current short term to ensure that we can keep up the rapid pace of technological development that we’ve enjoyed for the past half century. I can’t tell you how many times I’ve heard others scream that the next die shrink would be our last, only to see some incredibly innovative solutions come out soon after. The transition to InGaAs or EUV shows that we’re prepared for at least the next decade, and I’m sure before we hit the limit of that tech we’ll be seeing the next novel innovation that will continue to power us forward.

Equation Group Victims Map

Equation Group Malware is Beyond Anything We’ve Seen.

The discovery of Stuxnet in the wild was a watershed moment, signalling the first known salvo sent across the wires of the Internet to strike at an enemy far away. The fact that a piece of software could wreak such destruction in the real world was what drew most people’s interest, however the way in which it achieved this was, in my opinion, far more interesting than the damage it caused. Stuxnet showed that nation state sponsored malware was capable of things far beyond what we’ve attributed to malicious hackers in the past and made us wonder what its creators were really capable of. Thanks to Kaspersky Labs we now have a really good (read: scary) idea of what a nation state could develop and it’s beyond what many of us thought would be possible.


The Equation Group has been identified as being linked to several different pieces of malware that have surfaced in various countries around the world. They’ve been in operation for over a decade and have continuously improved their toolset over that time. Interestingly this group appears to have ties to the development teams behind both Stuxnet and Regin, as some of the exploits found in early versions of Equation Group’s tools were also found in those pieces of malware. However those zero day exploits were really just the tip of the spear in the Equation Group’s arsenal, as what Kaspersky Labs has discovered is far beyond anything else we’ve ever seen.

Perhaps the most fascinating capability the group has developed is the ability to rewrite disk firmware, which allows them to persist their malware through reboots, operating system reinstalls and even low level formats. If that wasn’t nasty enough there’s actually no way (currently) to detect an infection of that nature, as few hard drives include the capability to read the firmware back once it’s been written. That means once the firmware has wormed its way into your system there’s very little you could do to detect and remove it, save buying a whole new PC from a random vendor and keeping it isolated from every other device.

This then feeds into their other tools which give them unprecedented control over every facet of a Windows operating system. GrayFish, as it has been dubbed, completely replaces the bootloader and from there controls how Windows loads and operates. Essentially once a system is under GrayFish control it no longer uses any of its core boot processes, which are replaced by GrayFish’s toolkit. This allows the Equation Group to inject malware into almost every aspect of the system, preventing detection and giving them complete control to load any of their other malware modules. This shows a level of understanding of the operating system that would rival top Microsoft technicians, even those who have direct access to the source code. Although, to be honest, I wouldn’t be surprised if they had access to the source code themselves given the level of sophistication here.

These things barely begin to describe the capabilities that the Equation Group has developed over the past couple years as their level of knowledge, sophistication and penetration into world networks is well above anything the general public has known about before. It would be terrifying if it wasn’t so interesting as it shows just what can be accomplished when you’ve got the backing of an entire nation behind you. I’m guessing that it won’t be long before we uncover more of what the Equation Group is capable of and, suffice to say, whatever they come up with next will once again set the standard for what malware can be capable of.

AOC G2460PG

G-Sync is Love, G-Sync is Life.

For us long time PC gamers, those of us who grew up in a time when games were advancing so fast that yearly upgrades were a given, getting the most bang for your buck was often the primary concern. Often the key components would get upgraded first, like the CPU, RAM and GPU, with other components falling by the wayside. However over the past few years technological advances for some pieces of technology, like SSDs, provided such a huge benefit that they became the upgrade that everyone wanted. Now I believe I’ve found the next upgrade everyone should get, and it comes to us via NVIDIA’s new monitor technology: G-Sync.


For the uninitiated, G-Sync is a monitor technology from NVIDIA that allows the graphics card (which must be an NVIDIA card) to directly control the refresh rate of your monitor. This allows the graphics card to write each frame to the monitor as soon as it’s available, dynamically altering the refresh rate to match the frame rate. G-Sync essentially gives you the benefits of having vsync turned off and on at the same time, as there’s no frame tearing and no stutter or slowdown. As someone who can’t stand either of those graphical artefacts G-Sync sounded like the perfect technology for me and, now that I’m the proud owner of a GTX970 and two AOC G2460PGs, I think that position is justified.

After getting the drivers installed and upping the refresh rate to 144Hz (more on that in a sec) the NVIDIA control panel informed me that I had G-Sync capable monitors and, strangely, told me to go enable it even though when I went there it was already done. After that I dove into some old favourites to see how the monitor and new rig handled them and, honestly, it was like I was playing on a different kind of computer. Every game I threw at it that typically had horrendous tearing or stuttering ran like a dream without a hint of those graphical issues in any frame. It was definitely worth waiting as long as I did so that I could get a native G-Sync capable monitor.

One thing G-Sync does highlight however is slowdown that’s caused by other factors, like a game engine trying to load files or performing some background task that impedes the rendering engine. These things, which would have previously gone unnoticed, are impossible to ignore now that everything else runs so smoothly. Thankfully issues like that are few and far between, as I’ve only noticed them shortly after loading into a level, but it’s interesting to see them bubbling up now, signalling that the next must-have upgrade might be drive related once again.

I will admit that some of these benefits come from the hugely increased refresh rate of my new monitors, jumping me from a paltry 60Hz all the way up to 144Hz. The difference is quite stark when you turn it on in Windows and, should you have the grunt to power it, astounding in games. After spending so long with content running in the 30~60Hz spectrum I had forgotten just how smooth higher frame rates are and, whilst I don’t know if there’s much benefit going beyond 144Hz, that initial bump up is most certainly worth it. Not a lot of other content (like videos, etc.) takes advantage of the higher frame rates however, something I didn’t think would bother me until I started noticing it.

Suffice to say I’m enamored with G-Sync and consider the premium I paid for these TN panel monitors well worth it. I’m willing to admit that high frame rates and G-Sync isn’t for everyone, especially if you’re lusting after the better colour reproduction and high resolutions of IPS panels, but for someone like me who can’t help but notice tearing and stuttering it’s a dream come true. If you have the opportunity to see one in action I highly recommend it as it’s hard to describe just how much better it is until you see it for yourself.

Perfmon Data

Capturing a Before and After Performance Report for Windows Servers.

The current project I’m on has a requirement to determine a server’s overall performance before and after a migration, mostly to make sure that it still functions the same or better once it’s on the new platform. Whilst it’s easy enough to get raw statistics from perfmon, getting an at-a-glance view of how a server is performing before and after a migration is a far more nuanced concept, one that’s not easily accomplished with just some Excel wizardry. With that in mind I thought I’d share with you my idea for creating such a view as well as outlining the challenges I’ve hit when attempting to collate the data.


At a high level I’ve focused on the 4 core resources that all operating systems consume: CPU, RAM, disk and network. For the most part these metrics are easily captured by the counters that perfmon provides, however I wanted to go a bit further to make sure that the final comparisons represented a more “true” picture of before and after performance. To do this I included some additional qualifying metrics which would show if increased resource usage was negatively impacting performance or if the server was just consuming more resources because it could, since the new platform has much more capacity. With that in mind these are the metrics I settled on using (a rough sketch of how some of them can be collected follows the list):

  • Average CPU usage (24 hours), Percentage, Quantitative
  • CPU idle time on the VM’s virtual host (24 hours), Percentage, Qualifying
  • Top 5 services by CPU usage, List, Qualitative
  • Average memory usage (24 hours), Percentage, Quantitative
  • Average balloon driver memory usage (24 hours), MB consumed, Qualifying
  • Top 5 services by memory usage, List, Qualitative
  • Average network usage (24 hours), Percentage, Quantitative
  • Average TCP retransmissions (24 hours), Total, Qualifying
  • Top 5 services by network bandwidth utilized, List, Qualitative
  • Average disk usage (24 hours), Percentage, Quantitative
  • Average queue depth (24 hours), Total, Qualifying
  • Top 5 services by storage IOPS/bandwidth utilized, List, Qualitative
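
For the quantitative counters, here’s a minimal PowerShell sketch of how they could be pulled with Get-Counter. The counter paths, sample interval and output file are assumptions for illustration only, and the qualifying hypervisor metrics (host CPU idle time, balloon driver usage) would come from the virtualization layer rather than perfmon.

    # Sketch only: sample a handful of the quantitative counters every 5 minutes
    # for 24 hours (288 samples) and average each one. Counter paths assume an
    # English locale; network usage comes back as bytes/sec rather than a
    # percentage, so it would need to be divided by the link speed afterwards.
    $counters = @(
        '\Processor(_Total)\% Processor Time',
        '\Memory\% Committed Bytes In Use',
        '\PhysicalDisk(_Total)\% Disk Time',
        '\Network Interface(*)\Bytes Total/sec'
    )

    $samples = Get-Counter -Counter $counters -SampleInterval 300 -MaxSamples 288

    # Flatten the sample sets and average each counter path
    $samples.CounterSamples |
        Group-Object -Property Path |
        ForEach-Object {
            [pscustomobject]@{
                Counter = $_.Name
                Average = [math]::Round(($_.Group | Measure-Object -Property CookedValue -Average).Average, 2)
            }
        } |
        Export-Csv -Path .\baseline-counters.csv -NoTypeInformation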

Essentially these metrics can be broken down into 3 categories: quantitative, qualitative and qualifying. Quantitative metrics are the base metrics which will form the main part of the before and after analysis. Qualitative metrics are mostly just informational (being the top 5 consumers of each resource), however they’ll provide some useful insight into what might be causing an issue. For example, if an SQL box isn’t showing the SQL process as a top consumer then it’s likely something is causing that process to take a dive before it can actually use any resources. Finally the qualifying metrics are used to indicate whether or not increased usage of a certain resource signals an impact to the server’s performance: if memory usage is high and the memory balloon size is also high, for example, it’s quite likely the system isn’t performing very well.

The vast majority of these metrics are provided in perfmon however there were a couple that I couldn’t seem to get through the counters, even though I could see them in Resource Monitor. As it turns out Resource Monitor makes use of the Event Tracing for Windows (ETW) framework which gives you an incredibly granular view of all events that are happening on your machine. What I was looking for was a breakdown of disk and network usage per process (in order to generate the Top 5 users list) which is unfortunately bundled up in the IO counters available in perfmon. In order to split these out you have to run a Kernel Trace through ETW and then parse the resulting file to get the metrics you want. It’s a little messy but unfortunately there’s no good way to get those metrics separated. The resulting perfmon profile I created can be downloaded here.
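
For reference, a kernel trace of that sort can be kicked off with the built-in logman and tracerpt tools from an elevated prompt. The sketch below (the output paths are placeholders and the keyword list is just the subset relevant here) starts the NT Kernel Logger with process, disk and network events, then dumps the resulting ETL file to CSV for parsing.

    # Start the NT Kernel Logger with process, disk and network events enabled
    # (run elevated; only one kernel logger session can exist at a time)
    logman start "NT Kernel Logger" -p "Windows Kernel Trace" "(process,disk,net)" -o C:\Temp\kernel.etl -ets

    # ...leave it running for the capture window, then stop the session
    logman stop "NT Kernel Logger" -ets

    # Convert the binary trace to CSV so the per-process disk and network
    # events can be summed into the Top 5 lists
    tracerpt C:\Temp\kernel.etl -o C:\Temp\kernel.csv -of CSV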

The next issue I’ve run into is getting the data into a readily digestible format. You see, not all servers are built the same and not all of them run the same amount of software. This means that when you open up the resulting CSV files from different servers the column headers won’t line up, so you’ve got to either do some tricky Excel work (which is often prone to failure) or get freaky with some PowerShell (which is messy and complicated). I decided to go for the latter as at least I can maintain and extend a script somewhat easily, whereas an Excel spreadsheet has a tendency to get out of control faster than anyone expects. That part is still a work in progress (the general shape of it is sketched below) however I’ll endeavour to update this post with the completed script once I’ve got it working.
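
The rough idea, assuming a folder of per-server CSV exports (the folder and file names here are made up for the example), is to build the union of every column header first and then re-select each row against that superset so the files line up:

    # Sketch: normalise a folder of per-server perfmon CSV exports so they all
    # share the same columns, then combine them into a single file.
    $files = Get-ChildItem -Path .\exports -Filter *.csv

    # Build the union of every column header seen across all of the files
    $allColumns = foreach ($file in $files) {
        $firstRow = Import-Csv -Path $file.FullName | Select-Object -First 1
        if ($firstRow) { $firstRow.PSObject.Properties.Name }
    }
    $allColumns = $allColumns | Select-Object -Unique

    # Re-select every row against the combined column list; counters a server
    # doesn't have simply come through as empty cells instead of shifting the
    # layout, and the server name is tacked on so the rows stay identifiable.
    $combined = foreach ($file in $files) {
        $serverColumn = @{ Name = 'Server'; Expression = { $file.BaseName } }
        Import-Csv -Path $file.FullName | Select-Object -Property (@($serverColumn) + $allColumns)
    }

    $combined | Export-Csv -Path .\combined-perfmon.csv -NoTypeInformation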

After that point it’s a relatively simple task of displaying everything in a nicely formatted Excel spreadsheet and doing comparisons based on the metrics you’ve generated. If I had more time on my hands I probably would’ve tried to integrate it into something like a SharePoint BI site so we could do some groovy tracking and intelligence on it but due to tight time constraints I probably won’t get that far. Still a well laid out spreadsheet isn’t a bad format for presenting such information, especially when you can colour everything green when things are going right.

I’d be keen to hear other people’s thoughts on how you’d approach a problem like this as trying to quantify the nebulous idea of “server performance” has proven to be far more challenging than I first thought it would be. Part of this is due to the data manipulation required but it was also ensuring that all aspects of a server’s performance were covered and converted down to readily digestible metrics. I think I’ve gotten close to a workable solution with this but I’m always looking for ways to improve it or if there’s a magical tool out there that will do this all for me ;)

vmware_vsphere

VMware Targets OpenStack with vSphere 6.

Despite the massive inroads that other virtualization providers have made into the market, VMware still stands out as the king of the enterprise space. Part of this is due to the maturity of their toolset, which is able to accommodate a wide variety of guests and configurations, but they’ve also got the largest catalogue of value-adds which helps vastly in driving adoption of their hypervisor. Still, the asking price for any of their products has become something of a sore point for many and their proprietary platform has caused consternation for those looking to leverage public cloud services. With the latest release of their vSphere product VMware is looking to remedy at least the latter issue, embracing OpenStack compatibility for one of their distributions.


The list of improvements coming with this new release is a long one (and I won’t bother repeating them all here) but suffice to say most of them were expected and in line with what we’ve gotten previously. Configuration maximums have gone up for pretty much every aspect, feature limitations have been extended and there’s a handful of new features that will enable vSphere based clusters to do things that were previously impossible. In my mind the key improvements that VMware have made in this release come down to Virtual SAN 6, Long Distance vMotion and, of course, their support for OpenStack via their VMware Integrated OpenStack release.

Virtual SAN always felt like a bit of an also-ran when it first came out due to the rather stringent requirements around its deployment. I remember investigating it as part of a deployment I was doing at the time, only to be horrified at the fact that I’d have to deploy a vSphere instance at every site I wanted to use it at. The subsequent releases have shifted the product’s focus significantly and it now presents a viable option for those looking to bring software defined datacenter principles to their environment. The improvements that come in 6 are most certainly cloud focused, with things like Fault Domains and All Flash configurations. I’ll be very interested to see how the enterprise reacts to this offering, especially for greenfields deployments.

Long Distance vMotion might sound like a minor feature but as someone who’s worked in numerous large, disparate organisations the flexibility that this feature will bring is phenomenal. Right now the biggest issue most organisations face when maintaining two sites (typically for DR purposes) is the ability to get workloads between the sites, often requiring a lengthy outage process to do it. With Long Distance vMotion making both sites active and simply vMotioning workloads between sites is a vastly superior solution and provides many of the benefits of SRM without the required investment and configuration.

The coup here though is, of course, the OpenStack compatibility through VMware’s integrated distribution. OpenStack is notorious for being a right pain in the ass to get running properly, even if you already have staff that have had some experience with the product set in the past. VMware’s solution to this is to provide a pre-canned build which exposes all the resources in a VMware cloud through the OpenStack APIs for developers to utilize. Considering that OpenStack’s lack of good management tools has been, in my mind, one of the biggest challenges to its adoption this solution from VMware could be the kick in the pants it needs to see some healthy adoption rates.

It’s good to see VMware jumping on the hybrid cloud idea as I’ve long been of the mind that it will be the solution going forward. Cloud infrastructure is great and all but there are often requirements it simply can’t meet due to its commodity nature. Going hybrid with OpenStack as the intermediary layer will allow enterprises to take advantage of these APIs whilst still leveraging their investment in core infrastructure, utilizing the cloud on an as-needed basis. Of course that’s the nirvana state but it seems to get closer to realisation with every new release, so here’s hoping VMware will be the catalyst to finally see it succeed.

tom-wheeler-fcc

FCC to Solidify Net Neutrality Under Title II Provisions.

It’s undeniable that the freewheeling nature of the Internet is behind the exponential growth that it has experienced. It was a communications platform unencumbered by corporate overlords and free from gatekeepers, one that enabled people around the world to communicate with each other. However the gatekeepers of old have always tried to claw back some semblance of control at every point they can, imposing data caps, offering premium services and charging popular websites a premium to give their customers preferred access. Such things go against the pervasive idea of Net Neutrality that is a core tenet of the Internet’s strength, however the Federal Communications Commission (FCC) in the USA is looking to change that.


FCC chairman Tom Wheeler has announced today that they will be seeking to classify Internet services under their Title II authority which would see them regulated in such a way as to guarantee the idea of net neutrality, ensuring open and unhindered access. The rules wouldn’t just be limited to fixed line broadband services either as Mr Wheeler stated this change in regulation would also cover wireless Internet services. The motion will have to be voted on before it can be enacted in earnest (and there’s still the possibility of Congress undermining it with additional legislation) however given the current makeup of the FCC board it’s almost guaranteed to pass which is a great thing for the Internet in the USA.

This will go a long way towards combatting the anti-competitive practices that a lot of ISPs are engaging in. Companies like Netflix have been strong-armed in the past into paying substantial fees to ISPs to ensure that their services run at full speed for their customers, something which only benefits the ISP. Under the Title II changes it would be illegal for ISPs to engage in such behaviour, ensuring that all packets that traverse the network are given the same priority. This would then ensure that no Internet based company would have to pay ISPs to make sure their services ran acceptably, which is hugely beneficial to Internet based innovators.

Of course ISPs have been quick to paint these changes in a negative light, saying that with this new kind of regulation we’re likely to see an increase in fees and all sorts of things that will trash anyone’s ability to innovate. Pretty much all of their concerns stem from the fact that they will be losing revenue from the deals they’ve cut, ones that are directly at odds with the idea of net neutrality. Honestly I have little sympathy for them as they’ve already profited heavily from government investment and from regulation that ensured competition between ISPs was kept to a minimum. The big winners in all of this will be consumers and open Internet providers like Google Fiber, things which are the antithesis of their outdated business models.

Hopefully this paves the way for similar legislation and regulation to make its way around the world, freeing the Internet from the constraints of its corporate overlords. My only fear is that Congress will mess with these provisions after the changes are made, but hopefully the incumbent government, which has gone on record in support of net neutrality, will put the kibosh on any plans to that effect. In any case the future of the Internet is looking brighter than it ever has and hopefully that trend will continue globally.

Raspberry Pi 2

Raspberry Pi 2 to Run Windows 10.

It’s not widely known that Microsoft has been in the embedded business for quite some time now, with various versions of Windows tailored specifically for that purpose. Not that Microsoft has a particularly stellar reputation in this field, as most of the time people only find out that something was running Windows when it crashes spectacularly. If you wanted to tinker with it yourself the process was pretty arduous, which wasn’t very conducive to generating much interest in the product. Microsoft seems set to change that however, with the latest version of Windows 10 set to run on the beefed up Raspberry Pi 2 and, best of all, it will be completely free to use.


Windows has supported the ARM chipset that powers the Raspberry Pi since the original Windows 8 release, however the diminutive specifications of the board precluded it from running even the cut down RT version. With the coming of Windows 10 Microsoft is looking to develop an Internet of Things (IoT) line of Windows products which are specifically geared towards low power platforms such as the Raspberry Pi. Better still the product team behind those versions of Windows has specifically included the Raspberry Pi 2 as one of their supported platforms, meaning that it will work out of the box without needing to mess with drivers or other configuration details. Whilst I’m sure the majority of Raspberry Pi 2 users will likely stick to their open source alternatives, the availability of a free version of Windows for the platform does open it up to a whole host of developers who might not have considered it previously.

The IoT version of Windows is set to come in three different flavours: Industry, Mobile and Athens, with a revision of the .NET Micro framework for other devices that don’t fall into one of those categories. Industry is essentially the full version of Windows with features geared towards the embedded platform. The Mobile version is, funnily enough, geared towards always-on mobile devices but still retains many of the capabilities of its fully fledged brethren. Athens, the version that’s slated to be released on the Raspberry Pi 2, is a “resource focused” version of Windows 10 that still retains the ability to run Universal Apps. There’ll hopefully be some more clarity around these delineations as we get closer to Windows 10’s official release date but suffice to say, if the Raspberry Pi 2 can run Universal Apps it’s definitely a platform I could see myself tinkering with.

These new flavours of Windows fit into Microsoft’s broader strategy of trying to get their ecosystem into as many places as they can, something they attempted to start with the WinRT framework and have reworked with Universal Apps. Whilst I feel that WinRT had merit it’s hard to say that it was successful in achieving what it set out to do, especially with the negative reception Metro Apps got from the wider Windows user base. Universal Apps could potentially be the Windows 7 to WinRT’s Vista, a similar idea reworked and rebranded for a new market that finds the feet its predecessor never had. The IoT versions of Windows are simply another string to this particular bow, but whether or not it’ll pan out is not something I feel I can accurately predict.