Posts Tagged ‘microsoft’


I Wouldn’t Get Your Hopes Up For Backwards Compatibility.

We gamers tend to be hoarders when it comes to our game collections, with many of us amassing huge stashes of titles on our platforms of choice. My Steam library alone blew past 300 titles some time ago and anyone visiting my house will see the dozens of game boxes littering every corner of the place. There’s something of a sunk cost in all this and it’s why the idea of being able to play those titles on a current generation system is always attractive to people like me: we like to go back sometimes and play through the games of our past. Whilst my platform of choice rarely suffers from this (PCs are the kings of backwards compatibility) my large console collection is in varying states of being able to play my library of titles and, if I’m honest, I don’t think it’s ever going to get better.

Just for the sake of example let’s have a look at the 3 consoles that are currently sitting next to my TV:

  • Nintendo Wii: It can play the GameCube’s games directly and various other titles are made available through the Virtual Console. Additionally all games for the Wii are forwards compatible with the WiiU, so you’re getting 3+ generations’ worth of backwards compatibility, not bad for a company that used to swap cartridge formats every generation.
  • Xbox360: Appears to use software level emulation, as it requires you to periodically download emulation profiles from Microsoft and the list of supported titles has changed over time. There’s no forwards compatibility with the XboxOne due to the change in architecture, although there are rumours of Microsoft developing their own cloud based service to provide it.
  • PlayStation4: There’s no backwards compatibility to speak of for this console, for much the same reasons as the XboxOne. My PlayStation 3, however, was a launch day model and thus had software emulation; American versions of the same console had full hardware backwards compatibility, giving access to the full PlayStation library stretching back to the original PlayStation. Sony bought the cloud gaming company Gaikai in July last year, ostensibly to provide backwards compatibility via this kind of service.

For the current kings of the console market the decision to do away with backwards compatibility has been something of a sore spot for many gamers. Whilst the numbers show that most people buy new consoles to play the new games on them¹ there’s a non-zero number who get a lot of enjoyment out of their current gen titles. Indeed I probably would’ve actually used my PlayStation4 for gaming if it had some modicum of backwards compatibility as right now there aren’t any compelling titles for it. This doesn’t seem to have been much of a hindrance to adoption of the now current gen platforms, however.

There does seem to be a lot of faith being poured into the idea that backwards compatibility will eventually come through cloud services, which only Sony has committed to developing. The idea is attractive, mainly because it would let you play any time you want from a multitude of devices, however, as I’ve stated in the past, the feasibility of such an idea isn’t great, especially as it relies on server hardware being in many disparate locations around the world to make the service viable. Whilst both Sony and Microsoft have the capital to make this happen (and indeed Sony has a head start on it thanks to the Gaikai acquisition) the issues I previously mentioned are only compounded when it comes to providing a cloud based service with console games.
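To put rough numbers on why server placement matters so much, here’s a back-of-the-envelope latency budget for a streamed game. All the figures are illustrative assumptions on my part, not anything Sony or Gaikai have published:

```python
# Rough input-to-display latency budget for cloud game streaming.
# A commonly cited target for playable latency is somewhere around 150 ms.

def total_latency_ms(network_rtt_ms):
    capture_input = 5    # controller sampling and client-side processing
    server_frame = 33    # one frame of simulation/rendering at 30 fps
    encode = 10          # hardware video encode of the rendered frame
    decode_display = 20  # client-side decode plus display refresh
    return capture_input + server_frame + encode + decode_display + network_rtt_ms

for location, rtt in [("same city", 10), ("interstate", 40), ("Australia to US west coast", 160)]:
    print(f"{location}: ~{total_latency_ms(rtt)} ms input-to-display")
```

Only the first scenario comes in comfortably under that threshold, which is why any viable service needs servers near every major population centre it wants to cover.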

The easiest way of achieving this is to just run a bunch of the old consoles in a server environment and allow users to connect directly to them. This has the advantage of being cheaper from a capital point of view, as I’m sure both Sony and Microsoft have untold hoards of old consoles to take advantage of, however the service would be inherently unscalable and, past a certain point, unmaintainable. The better solution is to emulate the console in software, which would allow you to run it on whatever hardware you wanted, but this brings with it challenges I’m not sure even Microsoft or Sony are capable of solving.

You see whilst the hardware of the past generation consoles is rather long in the tooth, emulating it in software is nigh on impossible. Whilst there are some experimental efforts by the emulation community to do this none of them have produced anything capable of running even the most basic titles. Indeed even with access to the full schematics of the hardware, recreating it in software would be a herculean effort, especially for Sony, whose Cell processor is a nightmare architecturally speaking.

There’s also the possibility that Sony has had the Gaikai team working on a Cell to x86 translation layer, which could make the entire PlayStation3 library available without too much hassle, although there would likely be a heavy trade off in performance. In all honesty that’s probably the most feasible solution as it’d allow them to run the titles on commodity hardware, but you’d still have the problems of scaling out the service that I’ve touched on in previous posts.
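To illustrate where that performance trade off comes from, here’s a deliberately toy sketch (entirely hypothetical, with a made-up instruction set; real translation layers compile guest code to host code in blocks rather than interpreting it): every instruction the original hardware executed natively has to be decoded and re-expressed as several host operations.

```python
# Toy interpreter for a made-up, SPU-flavoured vector instruction set.
# Each guest instruction incurs decode and dispatch overhead on the host,
# which is why translated code runs well below native speed.

REGS = {f"v{i}": [0, 0, 0, 0] for i in range(8)}  # 4-lane vector registers

def execute(instr):
    op, dst, a, b = instr
    if op == "vadd":   # a single vector add on the guest...
        REGS[dst] = [x + y for x, y in zip(REGS[a], REGS[b])]
    elif op == "vmul":
        REGS[dst] = [x * y for x, y in zip(REGS[a], REGS[b])]
    else:
        raise ValueError(f"unhandled opcode: {op}")

REGS["v0"], REGS["v1"] = [1, 2, 3, 4], [10, 20, 30, 40]
for instr in [("vadd", "v2", "v0", "v1"), ("vmul", "v3", "v2", "v2")]:
    execute(instr)  # ...becomes many host operations here
print(REGS["v3"])   # [121, 484, 1089, 1936]
```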

Whatever ends up happening we’re not going to hear much more about it until sometime next year and it’ll be a while after that before we can get our hands on it (my money is on 2016 for Australia). If you’re sitting on a trove of old titles and hoping that the next gen will allow you to play them I wouldn’t hold your breath, as it’s much more likely that any such service will be extremely limited, likely requiring an additional cost on top of your PlayStation Plus membership. That’s even if it works the way everyone speculates it will, as I can see it easily turning out to be something else entirely.

¹ I can’t seem to find a source for this but back when the PlayStation3 Slim was announced (having that capability removed) I can remember a Sony executive saying something to this effect. It was probably a combination of factors that led him to say that though, as around that time the PlayStation2 Slim was still being manufactured and retailing for AUD$100, so anyone who had the cash to splurge on a PlayStation3 most likely already owned a PlayStation2.



The Real Winner of the Console Wars: AMD.

In the general computing game you’d be forgiven for thinking there are 2 rivals locked in an even contest for dominance. Sure there are 2 major players, Intel and AMD, and whilst they are direct competitors there’s no denying that Intel is the Goliath to AMD’s David, trouncing them in almost every way possible. Of course if you’re looking to build a budget PC you really can’t go past AMD’s processors, as they provide an incredible amount of value for the asking price, but there’s no denying that Intel has been the reigning performance and market champion for the better part of a decade now. However the next generation of consoles has proved to be something of a coup for AMD and it could be the beginning of a new era for the beleaguered chip company.

Both of the next generation consoles, the PlayStation 4 and XboxOne, utilize an almost identical AMD Jaguar chip under the hood. The reasons for choosing it seem to align with Sony’s previous architectural idea for Cell (i.e. having lots of cores working in parallel rather than fewer working faster) and AMD is the king of cramming more cores into a single consumer chip. The reasons for going with AMD over Intel likely stem from the fact that Intel isn’t too crazy about doing custom hardware, so the requirements that Sony and Microsoft had for their own versions of Jaguar could simply not be accommodated. Considering how big the console market is this would seem like something of a misstep by Intel, especially judging by the PlayStation4’s day one sales figures.

If you hadn’t heard, the PlayStation 4 managed to move an incredible 1 million consoles on its first day of launch, and that was limited to the USA. The Nintendo Wii by comparison took about a week to move 400,000 consoles and it even had a global launch window to beef up the sales. Whether the trend will continue, considering that the XboxOne was released just yesterday, is something we’ll have to wait to see, but regardless every one of those consoles being purchased contains an AMD CPU and AMD is walking away with a healthy chunk of change from each one.

To put it in perspective, out of every PlayStation 4 sale (and by extension every XboxOne as well) AMD is taking away a healthy $100, which means that in that one day of sales AMD generated some $100 million for itself. For a company whose quarterly revenue is around the $1.5 billion mark this is a huge deal, and if the XboxOne launch is even half that AMD could have seen $150 million in the space of a week. If the previous console generation is anything to go by (roughly 160 million consoles between Sony and Microsoft) AMD is looking at a revenue stream of some $16 billion over the next 8 years, around $2 billion a year, a sizeable boost to their bottom line. Whilst it’s still a far cry from the kinds of revenue that Intel sees on a monthly basis it’s a huge win for AMD and something they will hopefully be able to use to leverage themselves more in other markets.
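As a sanity check, here’s the back-of-the-envelope arithmetic behind those figures (the ~$100 per console take is the estimate cited above; everything else follows from it):

```python
# Back-of-the-envelope maths on AMD's console windfall (illustrative only).
per_console_take = 100            # estimated AMD revenue per console, USD
ps4_day_one_units = 1_000_000     # reported day one PlayStation 4 sales (USA)

day_one = ps4_day_one_units * per_console_take
print(f"Day one: ${day_one / 1e6:.0f} million")               # $100 million

lifetime_units = 160_000_000      # rough PS3 + Xbox360 combined lifetime sales
years = 8                         # typical console generation length
lifetime = lifetime_units * per_console_take
print(f"Generation: ${lifetime / 1e9:.0f} billion, "
      f"~${lifetime / years / 1e9:.0f} billion/year")         # $16B, ~$2B/year
```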

Whilst I may have handed in my AMD fanboy badge after many deliriously happy years with my watercooled XP1800+ I still think they’re a brilliant chip company and their inclusion in both next generation consoles shows that the industry giants think the same way. The console market might not be as big as the consumer desktop space nor as lucrative as the high end server market but getting their chips onto both sides of the war is a major coup for them. Hopefully this will give AMD the push they need to start muscling in on Intel’s turf again as whilst I love their chips I love robust competition between giants a lot more.



A New AGIMO Policy is Great, But…

Canberra is a strange little microcosm. If you live here chances are you’re either working directly for the government as a member of the public service or you’re part of an organisation that’s servicing said government. This is especially true in the field of IT, as anyone with a respectable amount of IT experience can make a very good living working for any of the large departments’ headquarters. I’ve made my IT career in this place and I’ve spent all of it lusting after the cutting edge of technology whilst dealing with the realities of what large government departments actually need to function. As long time readers will be aware I’ve been something of a cloud junkie for a while now but not once have I been able to use it at my places of work, and there’s a good reason for that.

Not that you’d know that if you heard the latest bit of rhetoric from the current government, which has criticised the AGIMO APS ICT Strategy for providing only “notional” guidelines for using cloud based services. Whilst I’ll agree that the financial implications are rather cumbersome (although this is true of any procurement activity within the government, as anyone who’s worked in one can tell you) what annoyed me was the idea that the security requirements were too onerous. The simple fact of the matter is that many government departments have regulatory and legal obligations not to use overseas cloud providers, due to legislation that restricts Australian government data from travelling outside our borders.

The technical term for this is data sovereignty: the vast majority of Australia’s large government departments are legally bound to keep all their services, and the data those services rely on, on Australian soil. The legislation is so strict in this regard that even data that’s not technically sensitive, like say specifications of machines or network topologies, in some cases can’t be given to external vendors and must instead be inspected on site. The idea that such departments could take advantage of cloud providers, most of which don’t have availability zones here in Australia, is completely ludicrous and no amount of IT strategy policy can change that.

Of course cloud providers aren’t unaware of these issues, indeed I’ve met with several of the people behind some of the larger public clouds on this, and many of them are bringing availability zones to Australia. Amazon Web Services has already made itself available here and Microsoft’s Azure platform is expected to land on our shores sometime next year. The latter is probably the more important of the two, as if the next AGIMO policy turns out the way it’s intended the Microsoft cloud will be the de facto solution for light user agencies thanks to the heavy amount of Microsoft products in use at those places.
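For the agencies that do make the jump, data sovereignty tends to end up as a hard rule baked into deployment tooling rather than a line in a policy document. Here’s a minimal sketch of what that guard rail might look like (the region name is AWS’s real Sydney identifier; everything else is hypothetical):

```python
# Hypothetical deployment guard: reject any resource outside Australian regions.
ALLOWED_REGIONS = {"ap-southeast-2"}  # AWS Sydney; extend as local zones arrive

def validate_deployment(resources):
    """Raise if any resource in the deployment plan would leave Australian soil."""
    offshore = [r for r in resources if r["region"] not in ALLOWED_REGIONS]
    if offshore:
        names = ", ".join(r["name"] for r in offshore)
        raise ValueError(f"data sovereignty violation: {names} deployed offshore")

validate_deployment([
    {"name": "records-db", "region": "ap-southeast-2"},  # fine: stays in Sydney
    # {"name": "backup-store", "region": "us-west-2"},   # would be rejected
])
```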

Whilst I might be a little peeved at the rhetoric behind the review of the APS ICT Strategy I do welcome it, as even though it was only written a couple of years ago it’s still in need of an update due to the heavy shift towards cloud services and user centric IT that we’ve seen recently. The advent of Australian availability zones will mean that the government agencies most able to take advantage of cloud services will finally be able to, especially with AGIMO policy behind them. Still it will be up to the cloud providers to ensure their systems can meet the requirements of these agencies and there’s every possibility that they still won’t be enough for some departments to take advantage of.

We’ll have to see how that pans out, however.



Microsoft’s Surface 2: A Big Hole To Fill.

There’s no question that Microsoft’s attempt at the tablet market has been lacklustre. Whilst the hardware powering their tablets was decent, the nascent Windows Store lacked the diversity of its competitors’ offerings, something which made the RT version even less desirable. This has since resulted in Microsoft writing down $900 million in Surface RT and associated inventory, something which many speculated would be the end of the Surface line. However it appears that Microsoft is more committed than ever to the Surface idea and recently announced the Surface 2, an evolutionary improvement over its predecessor.

The new Surface 2 looks pretty much identical to its predecessor although it’s a bit slimmer and a bit lighter. It retains the built in kick stand but it now has 2 positions instead of one, something which I’m sure will be useful to some. The specifications under the hood have been significantly revamped for both versions of the tablet, with the RT version (although it’s no longer called that) sporting a NVIDIA Tegra 4 and the Pro one of the new Haswell i5 chips. Microsoft will also now let you choose how much RAM you get in your Pro model, allowing you to cram up to 8GB in there. The Pro also gets the luxury of larger drive sizes, up to 512GB should you want it (although you’ll be forced to get the 8GB RAM model if you do). Overall I’d say this is pretty much what you’d expect from a generation 2 product and the Pro at least looks like it could be a decent laptop competitor.

Of course the issues that led Microsoft to write down nearly a billion dollars worth of inventory (after attempting to peddle as much of it as they could to TechEd attendees) still exist today and the upgrade to Windows 8.1 won’t do much to solve them. Sure in the time between the initial Surface release and now there’s been a decent number of applications developed for it, but the selection still pales in comparison to its competitors’. I still think that the Metro interface is pretty decent on a touch screen but Microsoft will really have to do something outrageous to convince everyone that the Surface is worth buying, otherwise it’s doomed to repeat its predecessor’s mistakes.

The Pro on the other hand looks like it’d be a pretty great enterprise tablet thanks to its full x86 environment. I know I’d much rather have those in my environment than Android tablets or iPads, as those devices are much harder to integrate into all the standard management tools. A Surface 2 Pro on the other hand would behave much like any other desktop, allowing me to deliver the full experience to anyone who had one. Of course it’s then more of a replacement for a laptop than anything else, but I do know a lot of users who would prefer a tablet device over the current fleet of laptops they’re given (even the ones who get ultrabooks).

Whilst the Pro looks like a solid upgrade I can’t help but feel that the upgrade to the RT is almost unnecessary, given that most of the complaints levelled at it had nothing to do with its performance. Indeed not once have I found myself wanting for speed on my Surface RT; instead I’ve been wanting my favourite apps to come across so that I don’t have to use their web versions which, on Internet Explorer, typically aren’t great. Maybe the ecosystem is mature enough now to tempt some people across but honestly unless they already own one I can’t really see that happening, at least for the RT version. The Pro on the other hand could make some headway into Microsoft’s core enterprise market, but even that might not be enough for the Surface division.



Microsoft Buys Nokia: That’s Great, For One of Them.

If you’re old enough to remember a time when mobile phones weren’t commonplace you likely also remember the time when Nokia was the brand to have, much like Apple is today. I myself owned quite a few of them, my very first phone ever being the (then) ridiculously small Nokia 8210. I soon gravitated towards other, shinier devices as my disposable income allowed but I did find myself in possession of an N95 because, at the time, it was probably one of the best handsets around for techno-enthusiasts like myself. However it’s hard to deny that they’ve struggled to compete in today’s smartphone market and, unfortunately, their previous domination of the feature phone market has also slipped away from them.

Their saving grace was meant to come from partnering with Microsoft and indeed I attested as much at the time. Casting my mind back to when I wrote that post I was actually of the mind that Nokia was going to be the driving force for Microsoft; in retrospect it seems the partnership was done in the hopes that both of their flagging attempts in the smartphone market could be combined into one, potentially viable, product. Whilst I’ve praised the design and quality of Windows Phone based Nokias in the past it’s clear that the amalgamation of 2 small players hasn’t resulted in a viable strategy to accumulate a decent amount of market share.

You can then imagine my surprise when Microsoft up and bought Nokia’s Devices and Services business as it doesn’t appear to be a great move for them.

So Nokia as a company isn’t going anywhere, as they still retain control of a couple of key businesses (Solutions and Networks, HERE/Navteq and Advanced Technologies, which I’ll talk about in a bit), however they’re not going to be making phones anymore as that entire capability has been transferred to Microsoft. That’s got a decent amount of value in itself, mostly in the manufacturing and supply chains, and Microsoft’s headcount will swell by 32,000 when the deal is finished. However whether that’s going to result in any large benefits for Microsoft is debatable, as they arguably got most of this in their 2011 strategic partnership; the difference is that they can now do all the same things without the Nokia branding on the final product.

If this type of deal sounds familiar then you’re probably remembering the nearly identical acquisition Google made of Motorola back in 2011. Google’s reasons for, and subsequent use of, the company were quite different however and, strangely enough, they have yet to use it to make one of their Nexus phones. Probably the biggest difference, and this is key to why this deal is great for Nokia and terrible for Microsoft, is the fact that Google got all of Motorola’s patents; Microsoft hasn’t got squat.

As part of the merger a new section is being created in Nokia called Advanced Technologies which, as far as I can tell, is going to be the repository for all of Nokia’s technology patents. Microsoft has been granted a 10 year license to all of these, and when that expires they’ll get a perpetual one, however Nokia keeps ownership of all of them and the license they gave Microsoft is non-exclusive. So since Nokia is really no longer a phone company they’re now free to start litigating against anyone they choose without much fear of counter-suits harming any of their products. Indeed they’ve stated that the patent suits will likely continue post acquisition, signalling that Nokia is going to look a lot more like a patent troll than a technology company in the near future.

Meanwhile Microsoft has been left with a flagging handset business, one that’s failed to reach the kind of growth that would be required to make it sustainable long term. Now there’s something to be said for Microsoft being able to release Lumia branded handsets (they get the branding in this deal) but honestly their other forays into the consumer electronics space haven’t gone so well, so I’m not sure what they’re going to accomplish here. They’ve already got the capability and distribution channels to get products out there (go into any PC store and you’ll find Microsoft branded peripherals there, guaranteed) so whilst it might be nice to get Nokia’s version of that all built and ready I’m sure they could have built one themselves for a similar amount of cash. Of course the Lumia tablet might be able to change consumers’ minds on that one, but most of the user complaints around Windows RT weren’t about the hardware (as evidenced in my review).

In all honesty I have no idea why Microsoft would think this is a good move, let alone a move that would let them do anything more than they’re currently doing. If they had acquired Nokia’s vast portfolio of patents in the process I’d be singing a different tune, as Microsoft has shown how good they are at wringing license fees out of people (so much so that the revenue they get from Android licensing exceeds that of their Windows Phone division). However that hasn’t happened and instead we’ve got Nokia lining up to become a patent troll of epic proportions and Microsoft left with a $7 billion patent licensing deal that comes with its own failing handset business. I’m not alone in this sentiment either, as Microsoft’s shares dropped 5% on the announcement, which isn’t great news for this deal.

I really want to know where they’re going with this because I can’t for the life of me figure it out.



Microsoft’s Surface RT: It’s Nice, But…

Just like any new tech gadget I’ve been ogling tablets for quite some time. Now I’m sure there will be a few who are quick to point out that I said long ago that an ultrabook fills the same niche, at least for me, but that didn’t stop me from lusting after them for one reason or another. I’d held off on buying one for a long time though, as the price for something I would only have a couple of uses for was far too high, even if I was going to use it for game reviews, so for a long time I simply wondered at what could be. Well whilst I was at TechEd North America the opportunity to snag a Windows Surface RT came up for the low price of $99 and I, ignoring the fiscal conservative in me and relenting to my tech lust, happily handed over my credit card so I could take one home with me.


It’s quite a solid device with a noticeable amount of heft to it despite its rather slim figure. Of particular note is the built in kick stand which allows you to sit the Surface upright, something which I’ve heard others wish for with their tablets. It’s clear that the Surface has been designed to be used primarily in landscape mode, in opposition to most other tablets which favour the portrait view. For someone like me who’s been a laptop user for a long time this didn’t bother me too much but I can see how it’d be somewhat irritating if you were coming from another platform, as it’d be just another thing you’d have to get used to. Other than that it seems to be your pretty standard tablet affair with a few tweaks to give it a more PC feel.

The specifications are pretty decent, boasting a WXGA (1366 x 768) 16:9 screen powered by a NVIDIA Tegra3 with 2GB RAM behind it. I’ve got the 64GB model, which reports 53GB available and 42GB free, something of a contentious point for many as they felt they weren’t getting what they paid for (although at $99 I wasn’t going to complain). The hardware is enough that when using it I never noticed any stutter or slow down, even when I was playing some of the more graphically intense games on it. I didn’t really try any heavy productivity stuff on it because I much prefer my desktop for work of that nature but I get the feeling it could handle 90% of the tasks I could throw at it. The battery life also appears to be relatively decent, although I have had a couple of times where it mysteriously came up on 0 charge; that might have been due to my fiddling with the power settings (more on why I did that later).
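Part of that 64GB-to-53GB gap is simply decimal-versus-binary accounting before the OS and recovery image take their share. A quick sketch of the conversion (the split between the two causes is my assumption, not Microsoft’s published breakdown):

```python
# Why a "64GB" tablet never shows 64GB: storage is sold in decimal gigabytes
# but Windows reports binary gibibytes (while still labelling them "GB").
advertised_bytes = 64 * 10**9
as_windows_counts_it = advertised_bytes / 2**30
print(f"64GB decimal = {as_windows_counts_it:.1f}GB in Windows")  # ~59.6GB

# The drop from ~59.6GB to the 53GB reported is the recovery partition and
# system overhead; Windows RT plus Office then eat down to the ~42GB free.
```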

Since I’ve been a Windows 8 user for a while the RT interface on the Surface wasn’t much of a shock to me, although I was a little miffed that I couldn’t run some of my chosen applications, even in desktop mode, notably Google Chrome. That being said applications that have been designed for the Metro interface are usually pretty good (the OneNote app and Cocktail Flow are good examples of this), however the variety of applications available is unfortunately low. This is made up for a little by the fact that the browser on the Surface is far more usable than the one on Windows Phone 7, enabling many of the web apps to work fine. I hope for Microsoft’s sake this changes soon as the dearth of applications on the Surface really limits its appeal.

The keyboard that came with the Surface gets a special mention because of just how horrid it is. Whilst it does a good job of being a protective cover, one that has a rather satisfying click as the magnets snap in, it’s absolutely horrendous as an input device, akin to typing on a furry piece of cardboard. Since there’s no feedback it’s quite hard to type fast on it and worse still it seems to miss key presses every so often. Probably the worst part about it is that if your Surface locks itself with the cover attached and you then remove it you’ll have no way to unlock your device until you re-attach it, even if you’ve set a PIN code up. I’ve heard that the type cover is a lot better but since it was going for $100 at the time I wasn’t too keen on purchasing it.

The Surface does do a good job of filling the particular niche I had for it, which was mainly watching videos and using it to remote into my servers, but past that I haven’t found myself using it that much. Indeed the problem seems to be that the Surface, at least the non-pro version, is stuck halfway between being a true tablet and a laptop as many of its features are still computer-centric. This means that potential customers on either side of the equation will probably feel like they’re missing something which I think is one of the main reasons that the Surface has struggled to get much market share. The Pro seems to be much closer to being a laptop, enough so that the people I talked to at TechEd seemed pretty happy with their purchase. Whether that translates into Microsoft refocusing their strategy with the Surface remains to be seen, however.

The Surface is a decent little device, having the capabilities you’ve come to expect from a tablet whilst still having that Microsoft Windows feel about it. It’s let down by the lack of applications and the dissonance it suffers from being stuck between the PC and tablet worlds, something that can’t be easily remedied by a software fix. The touch cover is also quite atrocious and should be avoided at all costs, even if you’re just going to use it as a cover. For the price I got it for I think it was worth the money, however getting it at retail is another story and unless you’re running a completely Microsoft house already I’d struggle to recommend it over an ultrabook or similarly portable computing device.


Microsoft Still Playing Catch Up With Policy Updates.

It’s been a rough few months for Microsoft’s gaming division, with them copping flak from every angle about nearly all aspects of their next generation console, the Xbox One. I’ve tried to remain mostly neutral on the whole ordeal as I had originally put myself down for both consoles at release, but that changed when I couldn’t find a compelling reason to get both. Since then Microsoft has tried to win back the gamers it alienated with its initial announcements, although it was clear that the damage was done in that respect and all this did was help keep the loyalists happy with a choice they were always going to make. After that it went all quiet from Microsoft, perhaps in the hope that silence would do more to help than anything else they could say at this point.


However a recent announcement from Microsoft has revealed that not only will Microsoft be allowing independent developers to self-publish on the Xbox One platform, they’ll also be able to use a retail console as a debug unit. Considering that development kits have traditionally been on the order of a couple of thousand dollars (the PlayStation3 one was probably the most expensive I ever heard of, at $20,000 on release day) this announcement is something of a boon for indie developers, as those looking to do cross platform releases now don’t have to spend a significant chunk of change in order to develop on Microsoft’s console. On the surface that would seem to be a one up on Nintendo and Sony but, as it turns out, Microsoft isn’t doing something truly notable with this announcement; they’re just playing catch up yet again.

Sony announced at E3 that they’d allow indie developers to self publish on the PlayStation4, however you’ll still need to get your hands on a development kit if you want to test your titles properly. This presents a barrier of course, especially if they retain the astronomical release day pricing (I wouldn’t expect that though), however Sony has a DevKit Loaner program which provides free development kits to studios that need them. They also have a whole bunch of other benefits for devs signing up to their program, which would seem to knock out some of the more significant barriers to entry. I’ll be honest: when I first started writing this I didn’t think Sony had any of this, so it’s a real surprise that they’ve become this welcoming to indie developers.

Nintendo has a pretty similar level of offerings for indies, although it wasn’t always that way. Updates are done for free and the review process, whilst still mandatory, is apparently a lot faster than on other platforms. Additionally if you get into their program (which has requirements that I could probably meet, seriously) you’ll also find yourself with a copy of Unity 4 Pro at no extra charge, which allows you to develop titles for multiple platforms simultaneously. Sure this might not be enough to convince a developer to go full tilt on a WiiU exclusive but those considering a multiplatform release after seeing some success on one platform might give it another look after seeing what Nintendo has to offer.

Probably the real kicker, at least for us Australians, is that even though indies will be able to self publish on the new platform after testing on retail consoles, we still won’t be able to see their games here thanks to our lack of XBLIG. Microsoft are currently not taking a decisive stand on whether this will change (it seems most of the big reveals they want to make will be at Gamescom next month) but the smart money is on no, mostly due to the rather large fees required to get a game classified in Australia. This was supposed to be mitigated somewhat by co-regulation by the industry as part of the R18+ classification reforms and it has been, to some extent, although it seems to be aimed at larger enterprises currently as I couldn’t find any fee for service assessors (there were a few jobs up on Seek for some though, weird). Whilst I’m sure that wouldn’t stop Australian indie devs from having a crack at the Xbox One I’m sure it’d be a turn off for some, as who doesn’t want to see their work released in their own country?

I’m getting the feeling that Microsoft has a couple of aces up its sleeve for Gamescom so I’ll hold back on beating the already very dead horse and instead say I’m interested to see what they have to say. I don’t think there’s anything at this point that would convince me to get one but I’m still leagues away from writing it off as a dead platform. Right now the ball is in Microsoft’s court and they’ve got a helluva lot of work to do if they want their next gen’s launch day to look as good as Sony’s.


Why Gamers Are “Stuck” In The Disc Era.

There was an awful lot of noise last month around the whole XboxOne DRM/features/whatever debacle that ended up with Microsoft doing a 180 on their often-on DRM stance. Ostensibly it was reactionary, due to the amount of praise that Sony was getting at Microsoft’s expense, even though they’d managed to hold fast during the initial PR stampede. There were a few though, certainly not the majority but a non-zero number, who lamented this change by Microsoft, saying that they had capitulated to the crowd and were essentially keeping gaming services in the dark ages. There’s a little meat to this story as the removal of the daily check-in requirement meant that some of the features that came along with it had to go away. Initially the things people were talking about didn’t require a daily check-in to achieve (like worlds that “live on” between game sessions, I think Animal Crossing had that covered pretty well) but there was one that was so revolutionary I thought people were just making it up.

That was the ability to sell your digital-only games.


Now as someone who’s got a massive library of these kinds of games on Steam (last count was in the realm of 300+) the ability to sell, or even just transfer, these games would be a pretty great feature. It’s possible that residents of EU countries might end up getting this by default thanks to a 2012 CURIA ruling, but the idea that this could come to the XboxOne, regardless of territory, would be very appealing to a lot of gamers. The often-on check is then required to make sure you haven’t sold the game through one channel and then continued to play it offline, which makes some sense in context, although I’d argue that the number of people who’d do such a thing would be in the minority (and you could just check whenever they did eventually get online anyway). However all that still has the one enormous caveat that I think was the crux of the issue for everyone: you have to rely on a service that may or may not be there in the future.
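To make the trade off concrete, here’s a minimal sketch of how such a check-in scheme might hang together. It’s entirely hypothetical; the class, the 24 hour window and the revocation flow are my assumptions based on what Microsoft described, not any real Xbox Live API:

```python
# Hypothetical daily check-in licence scheme for resellable digital games.
from datetime import datetime, timedelta

CHECK_IN_WINDOW = timedelta(hours=24)  # the much-maligned daily check-in

class CachedLicence:
    def __init__(self, game_id):
        self.game_id = game_id
        self.revoked = False
        self.last_verified = datetime.now()

    def check_in(self, server_licences):
        """Phone home: a game resold through any channel gets revoked here."""
        self.revoked = self.game_id not in server_licences
        self.last_verified = datetime.now()

    def can_play_offline(self):
        # Offline play only works inside the window; this is what stops
        # someone selling a game and quietly keeping an offline copy.
        fresh = datetime.now() - self.last_verified < CHECK_IN_WINDOW
        return fresh and not self.revoked

lic = CachedLicence("some-digital-title")
lic.check_in(server_licences={"some-digital-title"})
print(lic.can_play_offline())             # True: verified within 24 hours
lic.last_verified -= timedelta(hours=25)  # simulate a day and a bit offline
print(lic.can_play_offline())             # False: console must phone home again
```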

“Ah ha”, I hear you say, “but that’s the same for Steam and everyone just accepts it there!” and you’re right, to a point. That was probably the biggest thing Steam had going against it at the time, as PC gamers were most certainly not welcoming of it; I know I certainly wasn’t. However once the value proposition became very attractive, mostly through the sales, ease of use and increasing broadband penetration, we started to warm to the service. There was also the assurance from Gabe Newell (although trying to source a direct quote on this is proving elusive) that should Steam have to shut down there’d be a patch issued that would free your game library from its decaying hands. With Microsoft’s announcement there was no equivalent assurance, or at least it wasn’t communicated well, that gamers would be able to continue playing such games past the time the Xbox Live service disappeared.

Indeed this problem faces all gamers as many titles move towards a more connected model, which could mean that core features become unusable the second the developer can no longer support running the back end infrastructure. For some titles, ones that are traditionally multiplayer only, this is kind of expected, but the difference between Diablo and Diablo III for instance is that in 20 years I can almost guarantee the former will still be able to be run by anyone with the disc; the latter I’m not sure will see the end of this decade. Sure the number of people doing this might not be in the majority but they’re a vocal one and the sole reason why services like GOG exist. Had Microsoft given some assurances to the contrary they might not be in the position they are today and those features might still be available to Xbox customers.

It may seem like we’re just being backwards Luddites bent on keeping the status quo but it’s far more than that: we just want to be able to play our games long into the future, like we can with so many titles we grew up on. I see no technical reason why systems can’t be built to enable both sides of the equation, allowing us to sell/trade digital games whilst also giving us the opportunity to play offline whenever we want; the obstacles are far more likely business in nature. It’s a real shame as Microsoft could have really outdone Sony on this particular front but it seems like they’re instead gearing up for second place, capitulating just enough so they don’t end up competing with the Wii U for scraps of market share.


Cancelling TechNet is a Bad Move, Microsoft.

It’s no secret that I’m a Microsoft guy, owing much of my current career to their products, which have been the staple of my computing experience since I was 5 years old. In that time I’ve gone from a simple user, to a power user who tweaked his system for the ultimate gaming experience, to the administrator I am today, one who has seen almost everything Microsoft has to offer. I won’t lie, much of that foundational experience was built on the back of pirated software, but once I had a proper job that gave me access to all the software I needed I found myself not often needing much more than my employer provided. That was until I became a contractor, which necessitated some external learning on my part.

Enter TechNet subscriptions.


They’re essentially a golden ticket to Microsoft’s entire software library. Back when I first bought in there was only one level, which got you everything but Visual Studio (that privilege is reserved for MSDN subscribers) and came with a handful of licenses for every Windows version out there, and I do mean every version as you could get MS-DOS 1.0 should you be so inclined. I, like most TechNet subscribers at the time, got it because the cost was roughly equivalent to the Windows desktop licensing cost of covering all my home machines, and the server OSes and business software were an added bonus that’d help me professionally. I didn’t end up renewing it, mostly because I then got an MSDN account through work, but I know several people who are still subscribers today, usually for the same reasons I was.

It was with mixed feelings then that I read today’s announcement that Microsoft was going to stop selling the program effective August 31st, 2013. If you’re so inclined you can buy yourself a subscription (or renew your current one) all the way up to this date so you can continue to use the service for another year after that, putting the end date of the service at late 2014. After that your only option to get a similar level of access to Microsoft’s catalogue will be to go through MSDN which at current pricing levels is out of reach for infrastructure professionals like myself. Whilst the price difference is justified by a lot of the extra features you get (like the super cheap Azure pricing) those benefits aren’t exactly aligned with the current TechNet crowd.

The suggested replacement for TechNet is now the Evaluation Center, which provides access to time limited versions of the same software (although how comprehensive the library is in comparison isn’t something I can comment on). Ironically there’s still a text blurb pointing you to buy a TechNet subscription should you want to “enjoy software for longer”, something which I’m sure won’t remain there for long. In all honesty the reason TechNet was so useful was the lack of time and feature limitations, allowing you to work freely with a product without having to plan around some arbitrary cut-off. For people like me who like to evaluate different bits of software at different times this was great, as I could have an environment set up with all the basics and just install the application under evaluation on top of it. Time limited software doesn’t provide this, making evaluation at the individual professional level essentially pointless.

The rationale is that people are looking more towards free services for evaluation and deployment. No one but Microsoft has the stats to back that argument up so we’ll just have to take their word for it, but I get the feeling this is about them trying to realign their professional network more than anything else. Sure I’m in the camp that says admins will need to skill themselves up on dev related things (PowerShell and C# would not go astray) but semi-forcing them onto MSDN isn’t the right way to go about it. They’ve committed to expanding the services offered through the Evaluation Center but I doubt the best feature of TechNet, the lack of time and feature limitations, will ever come to it. Perhaps if they were to do a TechNet cloud edition, one where all the software had to be run on Azure, I might sing a different tune but I doubt that’ll ever happen.

As much as I praise Microsoft here I can’t help but feel this is a bad move on their part as it will only help to alienate a dedicated part of their user base that serves as the front line advocates for their products. I may not be a subscriber anymore, nor will I likely be one in the near future thanks to the benefits granted by my job, but I know many people who find a lot of value in the service, people who are de facto product evangelists because of it. I can only hope that they revamp the MSDN subscriptions to provide a similar level of service as otherwise there’s really only one place people will turn to and I know Microsoft doesn’t approve of it.


Microsoft Backtracks on DRM Stance.

Whilst it’s easy to argue to the contrary, Microsoft really is a company that listens to its customers. Many of the improvements I wrote about during my time at TechEd North America were the direct result of them consulting with their users and integrating their requests into their updated product lines. Of course this doesn’t make them immune to blundering down the wrong path, as they have done with the XboxOne (and a lot would argue Windows 8 as well, something I’m finding hard to ignore these days), a misstep which Sony gleefully capitalized on. Their initial attempts at damage control did little to help their image and it was looking like they were just going to wear it until launch day.

And then they did this:

[Image: Microsoft’s “Your Feedback Matters” Xbox One announcement]

Essentially it’s a backtrack to the way things are done today, with the removal of the need for the console to check in every day in order for you to be able to play installed/disc based games. This comes hand in hand with Microsoft now allowing you to trade/sell/gift your disc based games to anyone, just like you can today. They’re keeping the ability to download games directly from Xbox Live, although it seems the somewhat convoluted sharing program has been nixed, meaning you can no longer share games with your family members nor share downloaded titles with friends. Considering that not many people found that particular feature attractive I’m not sure it will be missed, but it does look like Microsoft wanted to put the boot in a little to show us what we could have had.

I’ll be honest and say I didn’t expect this, as Microsoft had been pretty adamant that the policy was going to stick around regardless of what consumers thought. Indeed actions taken by other companies like EA seemed to indicate that the move was going to be permanent, hence them abandoning things that would now be part of the platform. There’s been a bit of speculation that this was somehow planned all along; that Microsoft was gauging the market’s reaction and would react based on that, but if that was the case this policy would have been reversed a lot sooner, long before the backlash reached its crescendo during E3. The fact that they’ve made these changes shows that they’re listening now, but there’s nothing to suggest that this was their plan all along.

Of course this doesn’t address some of the other issues that gamers have taken with the XboxOne, most notably the higher cost (even if it’s semi-justified by the included Kinect) and the rather US centric nature of many of the media features. Personally the higher price doesn’t factor into my decision too much, although I do know that’s a big deal for some, but since the XboxOne’s big selling points were its media features it feels like a lot of the value I could derive from it is simply unavailable to me. Even those in the USA get a bit of a rough ride with Netflix being behind the Xbox Live Gold wall (when it’s always available on the PS4) but since both consoles require a subscription for online play it’s not something I can really fault or praise either of them for.

For what it’s worth this move might be enough to bring those who were on the fence back into the fold but as the polls and preorders showed there’s a lot of consumers who have already voted with their wallets. If this console generation has the same longevity as the current one then there’s every chance for Microsoft to make up the gap over the course of the next 8 years and considering that the majority of the console sales happen after the launch year it’s quite possible that all this outrage could turn out to be nothing more than a bump in the road. Still the first battle in this generation of console wars has been unequivocally won by Sony and it’s Microsoft’s job to make up that lost ground.