Posts Tagged ‘microsoft’

Windows Threshold

Windows Threshold: Burying Windows 8 for the Sake of 9.

It’s hard to argue that Windows 8 has been a great product for Microsoft. In the 2 years that it’s been on the market it’s managed to secure some 12% of total market share, which sounds great on the surface; however its predecessor managed to nab some 40% in a similar time frame. The reasons behind this are wide and varied but there’s no mistaking that a large part of it was the Metro interface, which just didn’t sit well with primarily desktop users. Microsoft, to their credit, has responded to this criticism by giving consumers what they want but, like Vista, the product that Windows 8 is today is overshadowed by its rocky start. It seems clear now that Microsoft is done with Windows 8 as a platform and is now looking towards its successor, codenamed Windows Threshold.

Not a whole lot is known about what Threshold will entail but what is known points to a future where Microsoft is distancing itself from Windows 8 in the hopes of getting a fresh start. It’s still not known whether Threshold will become known as Windows 9 (or whatever name they might give it), however the current release date is slated for sometime next year, in line with Microsoft’s new dynamic release schedule. This would also put it at 3 years after the initial release of Windows 8, which ties into the larger Microsoft product cycle. Indeed most speculators are pegging Threshold to be much like the Blue release of last year, with all Microsoft products receiving an update upon release. What interests me about this release isn’t so much what it contains as what it’s going to take away from Windows 8.

Whilst Microsoft has made inroads into making Windows 8 feel more like its predecessors the experience is still deeply tied to the Metro interface. Pressing the Windows key doesn’t bring up the start menu and Metro apps still have that rather obnoxious behaviour of taking over your entire screen. Threshold however is rumoured to do away with this, bringing back the start menu with a Metro twist that will allow you to access those kinds of applications without having to open up the full interface. Indeed for desktop systems, those that are bound to a mouse and keyboard, Metro will be completely disabled by default. Tablets and other hybrid devices will still retain the UI, with the latter switching between modes depending on what actions occur (switch to desktop when docked, Metro when in tablet form).

From memory such features were actually going to make up parts of the next Windows 8 update, not the next version of Windows itself. Microsoft did add some similar features to Windows 8 in the last update (desktop users now default to the desktop on login, not Metro) but the return of the start menu and the other improvements are seemingly not for Windows 8 anymore. Considering just how poor the adoption rates of Windows 8 have been this isn’t entirely surprising, and Microsoft might be looking for a clean break away from Windows 8 in order to drive better adoption of Threshold.

It’s a strategy that has worked well for them in the past so it shouldn’t be surprising to see Microsoft doing this. Those of us who actually used Vista (after it was patched to remedy all the issues) knew that Windows 7 was Vista under the hood; it was just visually different enough to break past people’s preconceptions about it. Windows Threshold will likely be the same, different enough from its direct ancestor that people won’t recognise it but sharing the same core that powered it. Hopefully this will be enough to ensure that Windows 7 doesn’t end up being the next XP as I don’t feel that’s a mistake Microsoft can afford to keep repeating.

 

Surface Pro 3

Microsoft’s Surface 3: It’s Interesting Because of What It’s Not.

The Surface has always been something of a bastard child for Microsoft. They were somewhat forced into creating a tablet device as everyone saw them losing to Apple in this space (even though Microsoft’s consumer electronics division isn’t one of their main profit centers) and their entry into the market managed to confuse a lot of people. The split between the Pro and RT lines was clear enough for those of us in the know; however consumers, who in the face of 2 seemingly identical choices will often prefer the cheaper one, were left with devices that didn’t function exactly as they expected. The branding of the Surface then changed slightly so that those seeking the device would likely end up with the Pro model and all would be right with the world. The Surface Pro 3, announced last week, carries on that tradition albeit with a much more extreme approach.

Surface Pro 3

As you’d expect the new Surface is an evolutionary step up in terms of functionality, specifications and, funnily enough, size. You now have the choice of an Intel i3, i5 or i7, 4GB or 8GB of memory and up to 512GB of SSD storage. The screen has swelled to 12″ in size and now sports a pretty incredible 2160 x 1440 resolution, equal to that of many high end screens you’d typically find on a desktop. These additional features actually come with a reduction in weight from the Surface Pro 2, down from 900g to a paltry 790g. There are some other minor changes as well, like the multi-position kickstand and a revised pen, but those are small potatoes compared to the rest of the changes, which seem to aim the Surface more at being a laptop replacement than a tablet that can do laptop things.

Since I carry a laptop with me for work (a Dell Latitude E6430 if you were wondering) I’m most certainly sensitive to the issues that plague people like me, and the Surface Pro has the answer to many of them. Having to lug my work beast around isn’t the most pleasant experience and I’ve long been a champion of moving everyone across to Ultrabooks in order to address many of these concerns. The Surface Pro is essentially an Ultrabook in a tablet form factor, which provides the benefits of both in one package. Indeed colleagues of mine who’ve bought a Surface for that purpose love them, and those who bought the original Surface Pro back at the TechEd fire sale all said similar things after a couple of days of use.

The one thing that would seal the deal for me on the Surface as the replacement for my now 2 year old Zenbook would be the inclusion of (or at least the option to include) a discrete graphics card. Whilst I don’t do it often I do use my (non-work) laptop for gaming, and whilst the Intel HD 4400 can play some games decently the majority of them will struggle. The inclusion of even a basic discrete chip would make the Surface a portable gaming powerhouse and the prime choice for when my Zenbook reaches retirement. That’s still a year or two away however, so Microsoft may end up getting my money in the end.

What’s really interesting about this announcement is the profound lack of an RT version of the Surface Pro 3. Whilst I didn’t think there was anything to get confused about between the two versions it seems a lot of people did, and that has led to a lot of disappointed customers. It was obvious that Microsoft was downplaying the RT version when the second one was announced last year but few thought that it would lead to Microsoft outright cancelling the line. Indeed the lack of an accompanying Surface RT would indicate that Microsoft isn’t so keen on that platform, something which doesn’t bode well for the few OEMs that decided to play in that space. On the flip side it could be a great in for them, as Microsoft eating up the low end of the market was always going to be a sore spot for their OEMs and Microsoft still seems committed to the idea from a purely technological point of view.

The Surface Pro 3 might not be seeing me pull out the wallet just yet but there’s enough to like about it that I can see many IT departments turning towards it as the platform of choice for their mobile environments. The lack of an RT variant could be construed as Microsoft giving up on the RT idea but I think it’s probably more to do with the confusion around each platform’s value proposition. Regardless it seems that Microsoft is committed to the Surface Pro platform, something which was heavily in doubt just under a year ago. It might not be the commercial success that the iPad et al were but it seems the Surface Pro will become a decent revenue generator for Microsoft.

Windows 8.2 Start Bar

How Long Will it Take for Enterprise IT to Embrace Rapid Innovation?

The IT industry has always been one of rapid change and upheaval, with many technology companies only lasting as long as they could keep innovating. This is at odds with the traditional way businesses operate, preferring to stick to predictable cycles and seek gains through incremental improvements in process, procedure and marketing. The result was the traditional 3 to 5 year cycle that many enterprises engaged in, upgrading to the latest available technology usually years after it had been released. However the pace of innovation has increased to the point where such a cycle can leave an organisation multiple generations behind, and it’s not showing any signs of slowing down soon.

Windows 8.2 Start Bar

I mentioned last year how Microsoft’s move from a 3 year development cycle to a yearly one was a good move, allowing them to respond to customer demands much more quickly than they were previously able to. However the issue I’ve come across is that whilst I, as a technologist, love hearing about the new technology, customer readiness for this kind of innovation simply isn’t there. The blame for this almost wholly lies at the feet of XP’s 12 year dominance of the desktop market, something even the threat of no support did little to dent. So whilst the majority may have made the transition now, they’re by no means ready for a technology upgrade cycle that happens on a yearly basis. There are several factors at play here (tools, processes and product knowledge being the key ones) but the main issue remains the same: there’s a major disconnect between Microsoft’s current release schedule and the rate of adoption among their biggest customers.

Microsoft, to their credit, are doing their best to foster rapid adoption. Getting Windows 8.1 at home is as easy as downloading an app from the Windows Store and waiting for it to install, something you can easily do overnight if you can’t afford the downtime. Similarly the tools available for doing deployments on a large scale have improved immensely, something anyone who’s used System Center Configuration Manager 2012 (and its previous incarnations) will attest to. Still, even though the transition from Windows 7 to 8 or above is much lower risk than from XP to 7, most enterprises aren’t looking to make the move, and it’s not just because they don’t like Windows 8.

With Windows 8.2 slated for release sometime in August this year Windows 8 will gain an almost identical look and feel to that of its predecessors, allowing users to bypass the Metro interface completely and giving them back the beloved start menu. With that in place there’s almost no reason for people not to adopt the latest Microsoft operating system, yet it’s unlikely to see a spike in adoption due to the inertia of large IT operations. Indeed even those that have managed to make the transition to Windows 8 probably won’t be able to make the move until 8.3 makes its debut, or possibly even Windows 9.

Once the Windows 8 family becomes the standard, however, I can see IT operations looking to move towards a more rapid pace of innovation. The changes between the yearly revisions are much less likely to break or change core functionality, eliminating much of the risk that came with adopting a new operating system (application remediation). Additionally, once IT sections have moved to better tooling, upgrading their desktops should also be a lot easier. I don’t think this will happen for another 3+ years however as we’re still in the midst of an XP hangover, one that’s not likely to subside until its market share is in the single digits. Past that we administrators then have the unenviable job of convincing our businesses that engaging in a faster product update cycle is good for them, even if the cost is low.

As someone who loves working with the latest and greatest from Microsoft it’s an irritating issue for me. I spend countless hours trying to skill myself up only to end up working on 5+ year old technology for the majority of my work. Sure it comes in handy eventually but the return on investment feels extremely low. It’s my hope that the cloud movement, which has already driven a lot of businesses to look at more modern approaches to the way they do their IT, will be the catalyst by which enterprise IT begins to embrace a more rapid innovation cycle. Until then however I’ll just lament all the Windows Server 2012 R2 training I’m doing and wait until TechEd rolls around again to figure out what’s obsolete.

Open Compute Project Logo

Will Open Compute Ever Trickle Down?

When Facebook first announced the Open Compute Project it was a very exciting prospect for people like me. Ever since virtualization became the de facto standard for servers in the data center, hardware density has been the name of the game. Client after client I worked for was always seeking out ways to reduce their server fleet’s footprint, both by consolidating through virtualization and by taking advantage of technology like blade servers. However whilst the past half decade has seen a phenomenal increase in the amount of computing power available, and thus an increase in density, there hasn’t been another revelation on the scale of blade servers. That was until Facebook went open kimono on their data center strategies.

The designs proposed by the Open Compute Project are pretty radical if you’re used to traditional computer hardware, primarily because they’re so minimalistic and because they expect a 12.5V DC input rather than the 240/120V AC that’s typical of all modern data centers. Other than that they look very similar to your typical blade server and indeed the first revisions appeared to achieve densities that were pretty comparable. The savings at scale were pretty tremendous however, as you could gain a lot of efficiency by not running a power supply in every server, and their simple design meant their cooling aspects were greatly improved. Apart from Facebook though I wasn’t aware of any other big providers utilizing ideas like this, until Microsoft announced today that it was joining the project and contributing its own designs to the effort.

On the surface they look pretty similar to the current Open Compute standards, although the big differences seem to come from the chassis. Instead of doing away with a power supply completely (like the current Open Compute servers advocate) it instead has a dedicated power supply in the base of the chassis for all the servers. Whilst I can’t find any details on it I’d expect this would mean that it could operate in a traditional data center with an AC power feed rather than requiring the more specialized 12.5V DC. At the same time the density they can achieve with their cloud servers is absolutely phenomenal, being able to cram 96 of them into a standard rack. For comparison the densest blade system I’ve ever supplied would top out at 64 servers and most wouldn’t go past 48.
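To put those rack densities side by side, here’s a quick back-of-envelope comparison using only the server counts quoted above (the counts are this post’s rough figures, not vendor specifications):

```python
# Servers per standard rack, using the rough counts quoted above.
microsoft_ocp_per_rack = 96   # Microsoft's contributed cloud server design
densest_blade_per_rack = 64   # densest blade system the author has supplied
typical_blade_per_rack = 48   # more typical blade density

print(f"vs densest blades: {microsoft_ocp_per_rack / densest_blade_per_rack - 1:.0%} more servers per rack")
print(f"vs typical blades: {microsoft_ocp_per_rack / typical_blade_per_rack:.1f}x the density")
```

That works out to 50% more servers per rack than the densest blade system mentioned, and double the more typical figure.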

This then begs the question: when will we start to see server systems like this trickle down to the enterprise and consumer markets? Whilst we rarely have requirements at the scales these servers are typically used for, I can guarantee there’s a market for servers of this nature as enterprises continue on their never ending quest for higher densities and better efficiency. Indeed this feels like something that would be advantageous for some of the larger server manufacturers to pursue, since if these large companies are investing in developing their own hardware platforms it shows that there’s a niche the manufacturers haven’t yet filled.

Indeed if the system can also accommodate non-compute blades (like the Microsoft one shows with the JBOD expansion) such ideas would go toe to toe with system-in-a-box solutions like the Cisco UCS which, to my surprise, quickly pushed its way to the #2 spot for x86 blade servers last year. Of course there are already similar systems on the market from others, but in order to draw people away from that platform other manufacturers are going to have to offer something more, and I think the answer to that lies within the Open Compute designs.

If I’m honest I think the real answer to the question posed in the title of this post is no. Whilst anyone working at Facebook or Microsoft levels of scale could engage in something like this, unless a big manufacturer gets on board Open Compute based solutions just won’t be feasible for the clients I service. It’s a shame because I think there are some definite merits to the platform, something which is validated by Microsoft joining the project.

Windows 8.1 With Bing wzor.net

Windows 8.1 With Bing: The First (Legal) Free Version of Windows?

As a poor student the last thing I wanted to pay for was software. Whilst the choice to pirate a base operating system is always questionable (it’s the foundation on which all your computing activities rely), it was either pay the high licence cost or find an alternative. I’ve since found numerous legitimate alternatives of course (thank you BizSpark) but not everyone is able to take advantage of them. Thus for many the choice to upgrade their copy of Windows typically comes with the purchase of a new computer, something which doesn’t happen as often as it used to.

Rumours have been making the rounds that Microsoft is potentially going to offer a low cost (or completely free) version of their operating system dubbed Windows 8.1 with Bing. Details as to what is and isn’t included are still somewhat scant but it seems like it will be a full version without any major strings attached. There are even musings that some of Microsoft’s core applications, like Office, could be bundled in with the new version of Windows 8.1. This wouldn’t be unusual (they already do it with Office Core for the Surface), however it’s those consumer applications from which Microsoft draws a lot of its revenue in this particular market segment, so their inclusion would mean the revenue would have to be made up somewhere else.

Many are touting this release as being targeted mostly at Windows 7 users who are staving off making the switch to Windows 8. In terms of barriers to entry they face by far the lowest, although they’re also the ones who have the least to gain from the upgrade. Depending on the timing of the release though this could also be a boon to those XP laggards who run out of support in just over a month. The transition from XP to Windows 8 is much more stark however, both in terms of technology and user experience, but there are numerous things Microsoft could do in order to smooth it over.

Whilst I like the idea there’s still the looming question of how Microsoft would monetize something like this, as releasing something for free and making up the revenue elsewhere isn’t really their standard business model (at least not with Windows itself). The “With Bing” moniker seems to suggest that they’ll be relying heavily on browser based revenue, possibly by restricting users to only being able to use Internet Explorer. They’ve got into hot water for doing similar things in the past, although they’d likely be able to argue that they no longer hold a monopoly on Internet connected devices like they once did. Regardless it will be interesting to see what the strategy is, as the mere rumour of something like this is new territory for Microsoft.

It’s clear that Microsoft doesn’t want Windows 7 to become the next XP and is doing everything they can to make the switch attractive to users. They’re facing an uphill battle as there’s still a good 30% of Windows users on XP, ones who are unlikely to change even in the face of imminent end of life. A free upgrade might be enough to coax some users across, however Microsoft needs to start selling the transition from any of their previous versions as a seamless affair, something that anyone can do on a lazy Sunday afternoon. Even then there will still be holdouts but at least it’d go a long way to pushing the other versions’ market share down into the single digits.

 

New Microsoft Chief Executive Officer, Satya Nadella

What Kind of Microsoft Can We Expect From Satya Nadella?

In the time that Microsoft has been a company it has only known two Chief Executive Officers. The first was unforgettably Bill Gates, the point man of the company from its founding days who saw it grow from a small software shop to the industry giant of the late 90s. Then, right at the beginning of the new millennium, Bill Gates stood down and passed the crown to long time business partner Steve Ballmer, who has since spent well over a decade attempting to transform Microsoft from a software company into a devices and services one. Rumours had been spreading for some time about who was slated to take over from Ballmer and, last week, after much searching, Microsoft veteran Satya Nadella took over as the third CEO of the venerable company. Now everyone is wondering where he will take it.

For those who don’t know him, Nadella’s heritage in Microsoft comes from the Server and Tools department where he’s held several high ranking positions over a number of years. Most notably he’s been in charge of Microsoft’s cloud computing endeavours, including building out Azure which hit $1 billion in sales last year, something I’m sure helped to seal the deal on his new position. Thus many would assume that Nadella’s vision for Microsoft would trend along these lines, something which runs a little contrary to the more consumer focused business that Ballmer sought to deliver. However his request to have Bill Gates step down as Chairman of the Board so that he could serve as an advisor in this space says otherwise.

As with any changing of the guard many seek to impress upon the new bearer their wants for the future of the company. Nadella has already come under pressure to drop some of Microsoft’s less profitable endeavours, including things like Bing, Surface and even the Xbox division (despite it being quite a revenue maker, especially as of late). Considering these products are the culmination of the efforts of the 2 previous CEOs, both of whom will still be involved in the company to some degree, taking an axe to them would be an extraordinarily hard thing to do. These are the products they’ve spent years and billions of dollars building, so dropping them seems like a short sighted endeavour, even if it would make the books look a little better.

Indeed many of the business units which certain parties would look to cut are the ones that are seeing good growth figures. The Surface has gone from a $900 million write down disaster to losing a paltry $39 million over 2 quarters, an amazing recovery that signals profitability isn’t too far off. Similarly Bing, the search engine that we all love to hate on, saw a 34% increase in revenue in a single quarter. It’s not even worth mentioning the Xbox division as it’s been doing well for years now and the release of the XboxOne, with its solid initial sales, ensures it remains one of Microsoft’s better performers.

The question then becomes whether Nadella, and the board he now serves, sees the ongoing value in these projects. Indeed much of the work they’ve been doing over the past decade has been focused on unifying the many disparate parts of their ecosystem, heading towards that nirvana where everything works together seamlessly. Removing them from the picture feels like Microsoft backing itself into a corner, one where it can be shoehorned into a narrative that sees it easily lose ground to the competitors it has been fighting for years. In all honesty I feel Microsoft is so dominant in those sectors already that there’s little to be gained from receding from perceived failures, and Nadella should take this chance to innovate on his predecessors’ ideas, not toss them out wholesale.

 

Normandy Prototype

Microsoft’s Grab For The Low End Market.

Ever since Nokia announced its partnership with (and subsequent acquisition by) Microsoft I had wondered when we’d start seeing a bevy of feature phones running the Windows Phone operating system behind the scenes. Sure there are a lot of cheaper Lumias on the market, like the Lumia 520 which can be had for $149 outright, but there isn’t anything in the low end where Nokia has been the undisputed king for decades. That section of the market is now dominated by Nokia’s Asha line of handsets, running a curious new operating system that came into being shortly after Nokia canned all development on Symbian and their other alternative mobile platforms. However there have long been rumours circulating that Nokia was developing a low end Android handset to take over this area of the market, predominantly due to the rise of cheap Android handsets that were beginning to trickle in.

The latest leaks from engineers within Nokia appear to confirm these rumours, with the pictures above showcasing a prototype handset developed under the Normandy code name. Details are scant as to what the phone actually consists of but the notification bar does look distinctly Android, with the rest of the UI not bearing any resemblance to anything else currently on the market. This fits in with the rumours that Nokia was looking to fork Android and make its own version of it, much like Amazon did for the Kindle Fire, which would also mean that they’d likely be looking to create their own app store as well. This would be where Microsoft could have its in, pushing Android versions of its Windows Phone applications through its own distribution channel without having to seek Google’s approval.

Such a plan almost wholly relies on the fact that Nokia is the trusted name in the low end space, managing to command a sizable chunk of the market even in the face of numerous rivals. Even though Windows Phone has been gaining ground recently in developed markets it’s still been unable to gain much traction in emerging markets. Using Android as a trojan horse to get users onto their app ecosystem could potentially work, however it’s far more likely that those users will simply remain on the new Android platform. Still there would be a non-zero number who would eventually look towards moving upwards in terms of functionality, and when it comes to Nokia there’s only one platform to choose from.

Of course this all hinges on Microsoft being actively interested in pursuing the idea and it not simply being part of the ongoing skunk works of Nokia employees. That being said Microsoft already makes a large chunk of change from every Android phone sold thanks to its licensing arrangements with numerous vendors, so they would have a slight edge in creating a low end Android handset. Whether they eventually use that to try and leverage users onto the Windows Phone platform though is something we’ll have to wait to see, as I can imagine it’ll be a long time before an actual device sees the light of day.

Gaikai Background

I Wouldn’t Get Your Hopes Up For Backwards Compatibility.

We gamers tend to be hoarders when it comes to our game collections, with many of us amassing huge stashes of titles on our platforms of choice. My Steam library alone blew past 300 titles some time ago and anyone visiting my house will see the dozens of game boxes littering every corner of it. There’s something of a sunk cost in all this and it’s why the idea of being able to play these games on a current generation system is always attractive to people like me: we like to go back sometimes and play through games of our past. Whilst my platform of choice rarely suffers from this (PCs are the kings of backwards compatibility) my large console collection is in varying states of being able to play my library of titles and, if I’m honest, I don’t think it’s ever going to get better.

Just for the sake of example let’s have a look at the 3 consoles that are currently sitting next to my TV:

  • Nintendo Wii: It has the ability to play GameCube games directly and various other titles are made available through the Virtual Console. Additionally all games for the Wii are forwards compatible with the WiiU, so you’re getting 3+ generations’ worth of backwards compatibility, not bad for a company that used to swap cartridge formats every generation.
  • Xbox360: Backwards compatibility appears to be software level emulation, as it requires you to periodically download emulation profiles from Microsoft and the list of supported titles has changed over time. There is no forwards compatibility with the XboxOne due to the change in architecture, although there are rumours of Microsoft developing their own cloud based service to provide it.
  • PlayStation4: There’s no backwards compatibility to speak of for this console, for much the same reasons as the XboxOne. My PlayStation 3, however, was a launch day model and thus had software emulation; American versions of the same console had full hardware backwards compatibility, giving access to the full PlayStation library stretching back to the original PlayStation. Sony bought the cloud gaming company Gaikai in July last year, ostensibly to provide backwards compatibility via that service.

For the current kings of the console market the decision to do away with backwards compatibility has been something of a sore spot for many gamers. Whilst the numbers show that most people buy new consoles to play the new games on them¹ there’s a non-zero number who get a lot of enjoyment out of their current gen titles. Indeed I probably would’ve actually used my PlayStation4 for gaming if it had some modicum of backwards compatibility, as right now there aren’t any compelling titles for it. This doesn’t seem to have been much of a hindrance to adoption of the now current gen platforms however.

There does seem to be a lot of faith being poured into the idea that backwards compatibility will eventually come through cloud services, something only Sony has committed to developing. The idea is attractive, mainly because it then enables you to play any time you want from a multitude of devices, however, as I’ve stated in the past, the feasibility of such an idea isn’t great, especially as it relies on server hardware being in many disparate locations around the world to make the service viable. Whilst both Sony and Microsoft have the capital to make this happen (and indeed Sony has a head start on it thanks to the Gaikai acquisition) the issues I previously mentioned are only compounded when it comes to providing a cloud based service with console games.
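To give a feel for why server locations matter so much for a service like this, here’s a rough latency sketch. The distance and fibre propagation figures below are my own ballpark assumptions for an Australian player hitting US-hosted servers, not anything published by Sony or Gaikai:

```python
# Rough, illustrative lower bound on network round trip time for an
# overseas-hosted game streaming service, compared with the per-frame
# budget of a 60fps game. All inputs are ballpark assumptions.

SPEED_OF_LIGHT_IN_FIBRE_KM_S = 200_000  # roughly 2/3 of c in optical fibre
SYDNEY_TO_US_WEST_COAST_KM = 12_000     # approximate great-circle distance

one_way_ms = SYDNEY_TO_US_WEST_COAST_KM / SPEED_OF_LIGHT_IN_FIBRE_KM_S * 1000
round_trip_ms = 2 * one_way_ms          # input goes up, video comes back

frame_budget_ms = 1000 / 60             # one frame at 60fps is ~16.7ms

print(f"Best-case round trip: {round_trip_ms:.0f}ms "
      f"(~{round_trip_ms / frame_budget_ms:.0f} frames of added latency)")
```

That 120ms or so is before any encoding, routing or rendering overhead is added, which is several frames’ worth of extra input lag on its own and why such a service pretty much has to have hardware close to its players.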

The easiest way of achieving this is to just run a bunch of the old consoles in a server environment and allow users to connect directly to them. This has the advantage of being cheaper from a capital point of view, as I’m sure both Sony and Microsoft have untold hordes of old consoles to take advantage of, however the service would be inherently unscalable and, past a certain point, unmaintainable. The better solution is to emulate the console in software, which would allow you to run it on whatever hardware you wanted, but this brings with it challenges I’m not sure even Microsoft or Sony are capable of solving.

You see whilst the hardware of the past generation consoles is rather long in the tooth, emulating it in software is nigh on impossible. Whilst there are some experimental efforts by the emulation community to do this, none of them have produced anything capable of running even the most basic titles. Indeed even with access to the full schematics of the hardware, recreating it in software would be a herculean effort, especially for Sony whose Cell processor is a nightmare architecturally speaking.

There’s also the possibility that Sony has had the Gaikai team working on a Cell to x86 translation library, which could make the entire PlayStation3 library available without too much hassle, although there would likely be a heavy trade off in performance. In all honesty that’s probably the most feasible solution as it’d allow them to run the titles on commodity hardware, but you’d still have the problems of scaling out the service that I’ve touched on in previous posts.

Whatever ends up happening we’re not going to hear much more about it until sometime next year and it’ll be a while after that before we can get our hands on it (my money is on 2016 for Australia). If you’re sitting on a trove of old titles and hoping that the next gen will allow you to play them I wouldn’t hold your breath, as it’s much more likely that the service will be extremely limited, likely requiring an additional cost on top of your PlayStation Plus membership. That’s even if it works the way everyone is speculating it will, as I can see it easily turning out to be something else entirely.

¹ I can’t seem to find a source for this but back when the PlayStation3 Slim was announced (with that capability removed) I can remember a Sony executive saying something to this effect. It was probably a combination of factors that led to him saying it though, as around that time the PlayStation2 Slim was still being manufactured and was retailing for AUD$100, so anyone who had the cash to splurge on a PlayStation3 most likely already owned a PlayStation2.

 

AMD Logo

The Real Winner of the Console Wars: AMD.

In the general computing game you’d be forgiven for thinking there are 2 rivals locked in an even contest for dominance. Sure there are 2 major players, Intel and AMD, and whilst they are direct competitors there’s no denying that Intel is the Goliath to AMD’s David, trouncing them in almost every way possible. Of course if you’re looking to build a budget PC you really can’t go past AMD’s processors, as they provide an incredible amount of value for the asking price, but there’s no denying that Intel has been the reigning performance and market champion for the better part of a decade now. However the next generation of consoles has proved to be something of a coup for AMD and it could be the beginning of a new era for the beleaguered chip company.

Both of the next generation consoles, the PlayStation 4 and XboxOne, utilize an almost identical AMD Jaguar chip under the hood. The reasons for choosing it seem to align with Sony’s previous architectural idea for Cell (i.e. having lots of cores working in parallel rather than fewer working faster) and AMD is the king of cramming more cores into a single consumer chip. The reasons for going with AMD over Intel likely also stem from the fact that Intel isn’t too crazy about doing custom hardware, so the requirements that Sony and Microsoft had for their own versions of Jaguar could simply not be accommodated. Considering how big the console market is this would seem like something of a misstep by Intel, especially judging by the PlayStation 4’s day one sales figures.

If you hadn’t heard, the PlayStation 4 managed to move an incredible 1 million consoles on its first day of launch, and that was limited to the USA. The Nintendo Wii by comparison took about a week to move 400,000 consoles and it even had a global launch window to beef up the sales. Whether the trend will continue, considering that the XboxOne was only released yesterday, is something we’ll have to wait to see, but regardless every one of those consoles being purchased contains an AMD CPU and AMD is walking away with a healthy chunk of change from each one.

To put it in perspective, out of every PlayStation 4 sale (and by extension every XboxOne as well) AMD is taking away a healthy $100, which means that in that one day of sales AMD generated some $100 million for itself. For a company whose annual revenue is around the $1.5 billion mark this is a huge deal, and if the XboxOne launch is even half that AMD could have seen $150 million in the space of a week. If the previous console generation is anything to go by (roughly 160 million consoles between Sony and Microsoft) AMD is looking at a revenue stream of some $1.6 billion over the next 8 years, a 13% increase to their bottom line. Whilst it’s still a far cry from the kinds of revenue that Intel sees on a monthly basis it’s a huge win for AMD and something they will hopefully be able to use to leverage themselves more in other markets.
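For anyone who wants to check the launch week arithmetic, here’s a quick sketch using only the rough figures quoted above; the $100 per console take and the sales numbers are this post’s estimates, not official figures from AMD, Sony or Microsoft:

```python
# Back-of-envelope launch revenue for AMD, using the rough estimates above.
amd_take_per_console = 100           # estimated AMD revenue per console sold ($)
ps4_day_one_sales = 1_000_000        # US-only day one sales
xbox_one_assumed_week_one = 500_000  # the "even half that" scenario for the XboxOne

ps4_day_one_revenue = amd_take_per_console * ps4_day_one_sales
week_one_revenue = ps4_day_one_revenue + amd_take_per_console * xbox_one_assumed_week_one

print(f"PlayStation 4 day one: ${ps4_day_one_revenue / 1e6:.0f} million")  # $100 million
print(f"Both consoles, week one: ${week_one_revenue / 1e6:.0f} million")   # $150 million
```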

Whilst I may have handed in my AMD fanboy badge after many deliriously happy years with my watercooled XP1800+ I still think they’re a brilliant chip company and their inclusion in both next generation consoles shows that the industry giants think the same way. The console market might not be as big as the consumer desktop space nor as lucrative as the high end server market but getting their chips onto both sides of the war is a major coup for them. Hopefully this will give AMD the push they need to start muscling in on Intel’s turf again as whilst I love their chips I love robust competition between giants a lot more.

 

AGIMO ICT Strategy Summary

A New AGIMO Policy is Great, But…

Canberra is a strange little microcosm. If you live here chances are you’re either working directly for the government as a member of the public service or you’re part of an organisation that’s servicing said government. This is especially true in the field of IT, as anyone with a respectable amount of IT experience can make a very good living working at any of the large departments’ headquarters. I have made my IT career in this place and in my time here I’ve spent much of it lusting after the cutting edge of technology whilst dealing with the realities of what large government departments actually need to function. As long time readers will be aware I’ve been something of a cloud junkie for a while now, but not once have I been able to use it at my places of work, and there’s a good reason for that.

Not that you’d know that if you heard the latest bit of rhetoric from the current government, which has criticised the current AGIMO APS ICT Strategy for providing only “notional” guidelines for using cloud based services. Whilst I’ll agree that the financial implications are rather cumbersome (although this is true of any procurement activity within the government, as anyone who’s worked in one can tell you) what annoyed me was the idea that the security requirements were too onerous. The simple fact of the matter is that many government departments have regulatory and legal obligations not to use overseas cloud providers due to the legislation that restricts Australian government data from travelling outside our borders.

The technical term for this is data sovereignty: the vast majority of Australia’s large government departments are legally bound to keep all their services, and the data they rely on, on Australian soil. The legislation is so strict in this regard that even data that’s not technically sensitive, like say specifications of machines or network topologies, in some cases can’t be given to external vendors and must instead be inspected on site. The idea then that these governments could take advantage of cloud providers, most of which don’t have availability zones here in Australia, is completely ludicrous and no amount of IT strategy policy can change that.

Of course cloud providers aren’t unaware of these issues, indeed I’ve met with several people behind some of the larger public clouds on this, and many of them are bringing availability zones to Australia. Indeed Amazon Web Services has already made itself available here and Microsoft’s Azure platform is expected to land on our shores sometime next year. The latter is probably the more important of the two as, if the next AGIMO policy turns out the way it’s intended, the Microsoft cloud will be the de facto solution for light user agencies thanks to the heavy use of Microsoft products at those places.

Whilst I might be a little peeved at the rhetoric behind the review of the APS ICT Strategy I do welcome it, as even though it was only written a couple of years ago it’s still in need of an update due to the heavy shift towards cloud services and user centric IT that we’ve seen recently. The advent of Australian availability zones will mean that the government agencies most able to take advantage of cloud services will finally be able to, especially with AGIMO policy behind them. Still it will be up to the cloud providers to ensure their systems can meet the requirements of these agencies, and there’s every possibility that they will not be enough for some departments to take advantage of.

We’ll have to see how that pans out, however.