Ever since Steam reached a certain level of functionality, any game distributed on it has been somewhat expected to make use of it. This isn’t a hard requirement from Valve or anything like that; rather it’s an expectation from gamers that, should Steam provide a service like user login, any game requiring them to sign up all over again will be met with derision and, in my mind, rightly so. Whilst there have been numerous examples of game developers using their own login systems (Ubernet being one of the first to come to mind), by far the worst offender in this category was the Games for Windows Live service, which always managed to weasel its way into any game that came out of or was published by Microsoft Studios.
Games for Windows Live attracted the most negative attention because it directly replicated Steam’s technology, including things like the screen overlay, which made for a somewhat confused user experience. The benefits it provided were also pretty slim: the only one I could see was integration with my Live account, giving me achievement points, but considering most such games were cross platform, intrepid achievement point hunters would likely prefer their Xbox. Making matters worse, since most PC gamers didn’t use it often, the client usually needed to update itself, requiring multiple game restarts to get it working.
So you can imagine that there was no love lost when rumors started circulating that it was to be shut down next year.
The news comes from an unwitting source, Age of Empires Online, which mistakenly made the announcement as a courtesy to users who’d no longer be able to play the game after that point. The announcement was taken down almost immediately, although of course in the age of the Internet there’s always someone with a screenshot, which lends a little credence to the idea that this was something Microsoft didn’t want everyone to know just yet. In a strange coincidence it was also announced today that Arkham Origins would not be using the Games for Windows Live framework, odd considering that the previous two installments in that franchise did. Looking at the list of Games for Windows Live titles also reveals something of a dearth of releases on the platform this year, which would seem to confirm its imminent demise.
If the title of this post wasn’t a dead giveaway as to my feelings about this I’m honestly glad to see it go. The service never provided me any value and only served to get in the way of me playing the games, something which I don’t take kindly to. I’m sure this sentiment is shared by a lot of gamers, especially those who’ve made huge investments in the Steam platform like I have. Whilst I’m always wary of monopolies I’d hope that game developers took note of this and eschewed their own login systems in favor of something more standard and accepted.
Of course there’s also a dark side to this as Games for Windows Live going down will mean that games which rely on those services will simply stop working. Whilst I’m somewhat hopeful that the bigger titles might see a patch come through to remove it, at least enabling single player, I can’t imagine every title will see the same amount of effort put into it. There is a slim hope that Microsoft might make a general patch available however since a lot of the CD key authentication stuff was tied up with those servers I’m not too hopeful.
There is every chance that the Age of Empires guys got this wrong and Games for Windows Live will be sticking around, but the evidence seems to say otherwise. Whilst I believe this is an overall positive for PC gamers, the downsides of losing a hosted service like this are a painful reminder of the trade-offs that come with using them. We all like to believe that Steam is invincible, immune to things like this happening to it, but there’s every chance the same will happen to it in the future. How the companies deal with this situation will be telling, as I’m sure this won’t be the last time we see such a service go down.
Security is one of those things that many people put aside when developing a new product, since it doesn’t get you any closer to launching and adds no face value for your end users. For many it’s the last thing on their mind until they have an incident, after which it becomes the top priority (as we’ve seen with Sony recently). With the average data breach costing a company something in the order of $7 million you can see why a lot of companies go belly up once they’ve been hit, and that’s why I still find it frustrating when new start-ups and companies put security on the backburner. They’re really shooting themselves in the foot.
It’s not like basic security is that hard either. I’ve said in the past that SSL isn’t that hard and I stand by those comments, especially if you’re building on any of the popular frameworks. SSL is just the beginning though, as you can still fall prey to problems like SQL injection and cross-site scripting even if your site uses SSL for the more sensitive parts. Again, since the vast majority of new web applications are built on some kind of framework, most of this legwork is taken care of for you, as long as you make a token effort to use the protections they provide.
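As a concrete illustration of how little effort this legwork actually takes, parameterised queries are all it takes to shut down classic SQL injection. A minimal sketch, using Python’s built-in sqlite3 module as a stand-in for whatever datastore your framework talks to (the table and data here are purely illustrative):

```python
import sqlite3

# In-memory database standing in for a real application's datastore.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, email TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'alice@example.com')")

def find_user(name):
    # The ? placeholder lets the driver bind the value safely, so user
    # input is never spliced into the SQL string itself.
    cur = conn.execute("SELECT email FROM users WHERE name = ?", (name,))
    return cur.fetchall()

# A classic injection payload is treated as a literal string and matches nothing.
print(find_user("alice"))            # [('alice@example.com',)]
print(find_user("' OR '1'='1"))      # []
```

The same idea applies whatever the database: let the driver bind values rather than building query strings by hand, which is exactly what most frameworks’ data layers do for you.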
I think the reason I get so uppity about this is that some of the institutions you’d expect to be the most secure, like banks, fail to implement security on the level that others, say game developers, manage quite well and surprisingly cheaply. The best example of this would have to be Blizzard, who implemented their authenticator program to combat the constant problem of accounts being hacked. Compare this to the 3 or 4 banks I’ve dealt with over the past couple of years, none of which have offered me such a service, and you can begin to understand why I’m a little annoyed that my World of Warcraft character’s epics are more secure than the cash I use to pay for them.
It’s not all bad news however, as the era of the smartphone has made it possible to replicate two factor authentication quite cheaply. Both Google and Facebook now let you log in to their services using two factor authentication via an application on your smartphone. Whilst I’m sure the vast majority of people won’t bother (until after something bad happens, of course) it still shows that they’re at least thinking in the right direction, unlike many other services which just don’t bother.
What really surprises me is how this isn’t a commodity service yet. The idea behind two factor authentication is simple: you have to know something (your password) and have something (your smartphone) in order to gain access to the system with the specified user account. Realistically the password problem is already solved, and the second factor is really just a simple pseudo-random number generator seeded by a value that both you and the server know. Couple that with decent time synching (easily done on any phone with GPS) and you’re well on your way to better security. Sure, there’s a bit more to it than that, but the fact that I’ve been considering doing this as a weekend project ever since I thought of it should give you a clue as to just how easy it is to put decent security into an online service.
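The second factor really is that simple. Here’s a minimal sketch of a time-based one-time code in the style of RFC 6238 (the shared secret and parameters below are purely illustrative, not any real service’s values):

```python
import hashlib
import hmac
import struct
import time

def totp(secret: bytes, timestamp: float, step: int = 30, digits: int = 6) -> str:
    """Time-based one-time password, sketched after RFC 6238."""
    counter = int(timestamp) // step         # both sides derive the same counter
    msg = struct.pack(">Q", counter)         # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F               # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# The server and the phone share the secret once; after that each
# independently computes the same code as long as their clocks agree.
shared_secret = b"example-shared-secret"     # hypothetical seed
now = time.time()
assert totp(shared_secret, now) == totp(shared_secret, now)
```

As long as the phone’s clock is within the 30-second window the codes match, which is why the time synching matters as much as the secret itself.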
I’m hardly an expert in this whole security field, hell I bet if you hacked away at any of my projects for 10 minutes you’d find some awesome exploit, but even in this day and age of malware/crimeware/scamware I find it surprising just how lax some people can be when it comes to rudimentary security measures. You’re never going to be able to stop the most determined of intruders, but it’s the casual hacker tourists that you want to keep out. Realistically you only need to be more secure than the next guy they have a go at, and judging by the terrible level of security present online these days that’s not going to be too hard. So, you developers of online web services, you have no excuse for not at least attempting to put security into your product, and should I catch you sending my login details in clear text over the Internet you can be sure I’ll be the first in line to blast you for making such mistakes.
Yeah that’s right, I’m going to blog about you and there’s nothing you can do about it… TAKE IT!
There were so many times when I was coding up early versions of Lobaco that I didn’t give any thought to security. Mostly it was because the features I was developing weren’t capable of divulging anything that wasn’t already public, so I happily kept on coding, leaving the tightening up of security for another day. Afterwards I started using some of the built-in authentication services available with Windows Communication Foundation, but I realised that whilst it was easy to use with the Silverlight client it wasn’t really designed for anything that wasn’t Windows based. After spending a good month off from programming what would be the last version of Geon, I decided that I would have to build my own services from the ground up, and with that my own security model.
You’d think, with security being such a big aspect of any service that holds personal information about users, that there would be dozens of helpful articles about it. Well there are articles, but none of them were particularly helpful, and I spent a good couple of days researching various authentication schemes. Finally I stumbled upon this post by Tim Greenfield, who laid out the basics of what has now become the authentication system for Lobaco. Additionally he made the obvious (but oh so often missed) point that when you’re sending any kind of user name and password over the Internet you should make sure it’s done securely using encryption. Whilst that was a pain in the ass to implement, it meant that I could feel confident about my system’s security and could focus on developing more features.
However when it comes down to the crunch new features will often beat security in terms of priority. There were so many times I wanted to just go and build a couple new features without adding any security into them. The end result was that whilst I got them done they had to be fully reworked later to ensure that they were secure. Since I wasn’t really working under any deadline this wasn’t too much of a problem, but when new features trump security all the way to release you run the risk of releasing code into the wild that could prove devastating to your users.
No example of this has been more prominent than the recent security issues that have plagued the popular micro-blogging service Twitter. Both of them came hot on the heels of the new Twitter website, which enables quite a bit more functionality and with it the potential to open up holes for exploitation. The first was intriguing, as it basically allowed someone to force the user’s browser to execute arbitrary JavaScript. Due to Twitter’s character limit the impact was minimised, but it didn’t take long before malicious attackers got hold of it and used it for various nefarious purposes. This was a classic example of something that could have easily been avoided had they sanitised user input, rather than checking for malicious behaviour and coding against it.
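Sanitising here can be as simple as escaping user content on output, so the browser renders a payload as text instead of executing it as markup. A minimal sketch using Python’s standard library (the render_tweet helper is hypothetical, not Twitter’s actual code):

```python
import html

def render_tweet(text: str) -> str:
    # Escape on output: <, >, &, and quotes become entities, so the
    # payload is displayed as literal text, never parsed as markup.
    return '<p class="tweet">{}</p>'.format(html.escape(text))

payload = '<script>stealSession()</script>'
safe = render_tweet(payload)
assert "<script>" not in safe    # the tag can no longer execute
print(safe)                      # <p class="tweet">&lt;script&gt;...&lt;/script&gt;</p>
```

Escaping everything by default and whitelisting the markup you actually want is far more robust than trying to enumerate every malicious pattern and code against it.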
The second one was a bit uglier, as it had the potential to do some quite nasty things to a user’s account. It used the session details that Twitter stores in your browser to send messages via your account. Like the other Twitter exploit, it relied on the typical behaviour of users following links posted by the people they follow. This exploit cannot be squarely blamed on Twitter either, as link shortening services that hide the actual destination behind a short URL make it that much harder for normal users to distinguish the malicious from the mundane. Still, Twitter should have expected such session jacking (I know I have) and built in countermeasures to stop it.
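One standard countermeasure against this kind of session riding is a per-session token embedded in the page, which a forged cross-site request has no way of knowing. A minimal sketch (the helpers and names below are hypothetical, not what Twitter actually runs):

```python
import hmac
import secrets

# Hypothetical per-session CSRF token, generated when the session starts.
session_token = secrets.token_hex(16)

def render_form() -> str:
    # The token is embedded in the page; another site can trigger a
    # request with your cookies but cannot read this page to copy it.
    return '<input type="hidden" name="csrf" value="{}">'.format(session_token)

def accept_request(submitted_token: str) -> bool:
    # Constant-time comparison against the token held in the session.
    return hmac.compare_digest(submitted_token, session_token)

assert accept_request(session_token)        # genuine form submission
assert not accept_request("not-the-token")  # forged cross-site request
```

The session cookie proves who the browser belongs to; the token proves the request actually came from your own page rather than a malicious link.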
Any large public system will attract those looking to exploit it for nefarious means; that’s part and parcel of doing business on the web. The key, then, is to build your systems with the expectation that they will be attacked rather than waiting for an incident to arise. As a developer I can empathise that writing code that’s resistant to every kind of attack is next to impossible, but there are so many things that can be done to ensure that the casual hackers steer clear. Twitter is undergoing a significant amount of change with a vision to scale themselves up for the big time, right up there with Google and Facebook. Inevitably this will mean they’ll continue to have security concerns as they scale out, and hopefully these last two exploits have shown them that security is something they should consider more closely than they have in the past.