I heap a lot of praise on Windows Azure here, enough for me to start thinking that it’s making me sound like a Microsoft shill, but honestly I think it’s well deserved. As someone who’s spent the better part of a decade setting up infrastructure for applications to run on, and who then began developing said applications in my spare time, I really do appreciate not having to maintain another set of infrastructure. Couple that with the fact that I’m a full Microsoft stack kind of guy and it’s really hard to beat the tight integration between all of the products in the cloud stack, from the development tools to the back end infrastructure. So, like many of my recent weekends, I spent the previous one coding away on the Azure platform, and it was filled with some interesting highs and rather devastating lows.
For the uninitiated, Azure Web Sites are essentially a cut down version of the Azure Web Role, allowing you to run pretty much full-scale web apps for a fraction of the cost. Of course this comes with limitations and, unless you’re running at the Reserved tier, you’re essentially sharing a server with a bunch of other people (i.e. a common multi-tenant scenario). For this site, which isn’t going to receive a lot of traffic, it’s perfect, and I wanted to deploy the first run app onto this platform. Like any good admin I simply dove in head first without reading any documentation on the process, and to my surprise I was up and running in a matter of minutes. It was pretty much: create web site, download the publish profile, click Publish in Visual Studio, import the profile and wait for the upload to finish.
Deploying a web site on my own infrastructure would be a lot more complicated as I can’t tell you how many times I’ve had to chase down dependency issues or missing libraries that I have installed on my PC but not on the end server. The publishing profile coupled with the smarts in Visual Studio was able to resolve everything (the deployment console shows the whole process, it was actually quite cool to watch) and have it up and running at my chosen URL in about 10 minutes total. It’s very impressive considering this is still considered preview level technology, although I’m more inclined to classify it as a release candidate.
Other Azure users can probably guess what I’m going to write about next. Yep, the horrific storage problems that Azure had for about 24 hours.
I noticed some issues on Friday afternoon when my current migration (yes that one, it’s still going as I write this) started behaving… weirdly. The migration is in its last throes and I expected the CPU usage to start ramping down as the multitude of threads finished their work, and this lined up with what I was seeing. However I noticed the number of records migrated wasn’t climbing at the rate it was previously (usually indicative of some error happening that I suppressed in order for the migration to run faster), but the logs showed that it was still going, just at a snail’s pace. Figuring it was just the instance dying I reimaged it, and then the errors started flooding in.
Essentially I was disconnected from my NoSQL storage, so whilst I could browse my migrated database I couldn’t keep pulling records out. This also had the horrible side effect of not allowing me to deploy anything, as every attempt would come back with SSL/TLS connection issues. Googling this led to all sorts of random posts, as the error is also shared by the libraries that power the WebClient in .NET, so it wasn’t until I stumbled across the ZDNet article that I knew the problem wasn’t on my end. Unfortunately you were really up the proverbial creek without a paddle if your Azure application depended on this, as the temporary fixes for the issue, either disabling SSL for storage connections or overriding the certificate handler, left your application rather vulnerable to all sorts of nasty attacks. I’m one of the lucky few who could simply do without until it was fixed, but it certainly highlighted the issues that can occur with PaaS architectures.
Honestly though, that’s the only issue (that’s not been directly my fault) I’ve had with Azure since I started using it at the end of last year, and compared to other cloud services it doesn’t fare too badly. It has made me think about what contingency strategy I’ll need to implement should any part of the Azure infrastructure go away for an extended period of time, though. For the moment I don’t think I’ll worry too much, as I’m not going to be earning any income from the things I build on it, but it will definitely be a consideration as I begin to unleash my products onto the world.
Security is one of those things that many people put aside when developing a new product, since it doesn’t get you any closer to launching and adds no visible value for your end users. For many people it’s usually the last thing on their mind until they have an incident, and then afterwards it becomes the top priority (as we’ve seen with Sony recently). With the average data breach costing a company something in the order of $7 million you can see why a lot of companies go belly up once they’ve been hit, and that’s why I still find it frustrating when new start-ups and companies put security on the backburner. They’re really shooting themselves in the foot.
It’s not like basic security is that hard either. I’ve said in the past that SSL isn’t that hard and I stand by those comments, especially if you’re building something on any of the popular frameworks. SSL is just the beginning, though, as you can still fall prey to security problems like SQL injection and cross-site scripting attacks even if your site is using SSL for the more sensitive aspects. Again, since the vast majority of new web applications are built on some kind of framework, most of this leg work is taken care of for you, as long as you make a token effort to use the protections on offer.
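To make the “token effort” concrete, here’s a minimal sketch (not from the post, and using Python’s standard library rather than any particular web framework) of the two basics mentioned above: parameterised queries for SQL injection and output escaping for cross-site scripting.

```python
import sqlite3
import html

# Parameterised queries: user input is passed as data, never spliced
# into the SQL string, so injection attempts match nothing.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, email TEXT)")
conn.execute("INSERT INTO users VALUES (?, ?)", ("alice", "alice@example.com"))

attacker_input = "alice' OR '1'='1"
rows = conn.execute(
    "SELECT email FROM users WHERE name = ?", (attacker_input,)
).fetchall()
print(rows)  # [] -- the whole string is treated as a literal name

# Escaping output before rendering defends against cross-site scripting
# in the same spirit: hostile markup becomes inert text.
comment = "<script>alert('xss')</script>"
print(html.escape(comment))
```

Every mainstream framework’s ORM and template engine does both of these by default; the mistake is usually bypassing them with hand-built SQL strings or raw HTML output.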
I think the reason I get so uppity about this is that institutions which should be among the most secure, like banks, fail to implement security at the level that others, say game developers, manage quite well and surprisingly cheaply. The best example of this would have to be Blizzard, who implemented their authenticator program to combat the constant problem of accounts being hacked. Compare this to the 3 or 4 banks I’ve had dealings with over the past couple of years, none of which have offered me such a service, and you can begin to understand why I’m a little annoyed that my World of Warcraft character’s epics are more secure than the cash I use to pay for them.
It’s not all bad news however, as the era of the smartphone has made it possible to replicate two factor authentication quite cheaply. Both Google and Facebook have now made it possible to log in to their services using two factor authentication via an application on your smartphone. Whilst I’m sure the vast majority of people won’t bother (until after something bad happens, of course), it still shows that they’re at least thinking in the right direction, unlike many other services which just don’t bother at all.
What really surprises me is how this isn’t a commodity service yet. The idea behind two factor authentication is simple: you have to know something (your password) and have something (your smartphone) in order to gain access to the system with the specified user account. Realistically the password problem is already solved, and the second factor is really just a simple random number generator that’s seeded by a particular value that both you and the server know. Couple that with decent time syncing (easily done on any phone with GPS) and you’re well on your way to better security. Sure, there’s a bit more to it than that, but the fact that I’ve been considering doing this as a weekend project ever since I thought of it should give you a clue as to just how easy it is to put decent security in an online service.
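The “seeded generator plus time sync” scheme described above was later standardised as TOTP (RFC 6238, the algorithm Google Authenticator uses). As a rough illustration of how little code it takes, here’s a sketch in Python’s standard library; the shared secret below is the RFC’s published test key, not anything from a real service.

```python
import hashlib
import hmac
import struct
import time


def totp(secret: bytes, at=None, digits: int = 6, step: int = 30) -> str:
    """Time-based one-time password per RFC 6238 (HMAC-SHA1 variant).

    Both the client and the server derive the same short code from the
    shared secret and the current 30-second time counter.
    """
    counter = int(time.time() if at is None else at) // step
    msg = struct.pack(">Q", counter)  # counter as 8-byte big-endian
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    # Dynamic truncation (RFC 4226): pick 4 bytes based on the last nibble.
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)


# RFC test secret at timestamp 59 -> counter 1 -> code "287082".
print(totp(b"12345678901234567890", at=59))  # 287082
```

The only genuinely hard parts in production are distributing the secret safely and allowing a small window of adjacent counters to absorb clock drift, which supports the post’s point that the core mechanism is a weekend project.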
I’m hardly an expert at this whole security thing, hell, I bet if you hacked away at any of my projects for 10 minutes you’d find some awesome exploit, but even in this day and age of malware/crimeware/scamware I find it surprising just how lax some people can be when it comes to rudimentary security measures. You’re never going to be able to stop the most determined of intruders, but it’s the casual hacker tourists that you want to keep out. Realistically you only need to be more secure than the next guy they have a go at, and judging by the terrible level of security present online these days that’s not going to be too hard. So, developers of online web services, you have no excuse for not at least attempting to put security into your product, and should I catch you sending my login details in clear text over the Internet you can be sure I’ll be first in line to blast you for making such a mistake.
Yeah that’s right, I’m going to blog about you and there’s nothing you can do about it… TAKE IT!
Now I don’t consider myself to be some uber-programmer, more like your garden variety enthusiast who knows how to work his way through a Google search to find what he’s after. Still I’m often amazed to find those who call themselves programmers (and even more worrying, convince others to pay them) falling for things that really should be obvious to anyone with half a brain about them. Sure I’m not immune to making some serious logic errors or just plain WTFery but something as fundamental as not sending your users’ passwords across the Internet in such a way that anyone with freely available packet capture software or even a Firefox plugin can read them is one of those things that really should go without saying. Traditionally this is done by encrypting the connection between you and the user using SSL so that anyone listening in just sees garbage and not your user’s password.
Securing a web connection between a user and your server, in the Microsoft world at least, doesn’t take too much configuration to get working. For my pet project it was little more than adding a line of code at the top of the API implementation, installing an SSL certificate on my server and creating a client access policy file to enable cross domain communication. All in all I went from an API that sent everything in clear text to a fully secured API in a little under 2 hours, with a good half of that spent googling and sussing out which SSL provider I was going to go with. Still it seems that nearly every month I hear of at least one big start-up or long running service that fails to implement encryption for their login details, potentially endangering their users.
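The “line of code at the top of the API” idea translates across stacks. Here’s a hypothetical sketch (in Python rather than the .NET code the post describes, with invented request/response shapes) of what that one guard looks like: a decorator that refuses to serve any handler over plain HTTP.

```python
def require_https(handler):
    """Reject any request that didn't arrive over an encrypted connection.

    Assumes the framework exposes the request scheme ("http"/"https"),
    which every mainstream web framework does in some form.
    """
    def wrapped(request):
        if request.scheme != "https":
            return {"status": 403, "body": "HTTPS required"}
        return handler(request)
    return wrapped


@require_https
def login(request):
    # Credentials in the request body are only ever read over TLS.
    return {"status": 200, "body": "ok"}


class Request:  # stand-in for a framework request object
    def __init__(self, scheme):
        self.scheme = scheme


print(login(Request("http"))["status"])   # 403
print(login(Request("https"))["status"])  # 200
```

The certificate installation and cross-domain policy are server configuration rather than code, which is why the in-application part really can be this small.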
The first such company that I heard about was Foursquare, a popular geo-social networking application. Now I had been using that application for quite some time before I heard about them not encrypting anything, so you can imagine how I felt when I found out they had let that little detail slip their minds for well over a year. Sure, they were quick to fix it, but who knows how long it would have gone unfixed had no one said anything about it. Their close rival Gowalla also neglected to implement any sort of secure communications for almost 3 years, making me wonder how something like that could go unnoticed for so long.
It doesn’t stop there either. Last month saw not one but two companies outed for passing login information around in clear text. The first was Napster (yeah, even I’m surprised they’re still around), which not only had no encryption on its login forms but also sent users their login credentials when trying to get them to renew. Then just 2 weeks later it was revealed that the recent hit photo sharing app Instagram was also spreading information over the web that it shouldn’t be. To Instagram’s credit they were quick to get a fix out, but it still seems like a fundamental error to make when you’re sending sensitive data over the Internet.
For all the vitriol that I’m launching at these companies, I can understand the mindset that leads up to this kind of mistake. For the longest time I developed everything without SSL, as it made debugging the whole application that much easier. Even with Fiddler’s SSL decrypting feature it still doesn’t seem to work quite right when cracking open encrypted communications, so the solution of just turning SSL off works much better. Then when it comes time to deploy, not only is your app not configured to use SSL, but all your API calls are made to the unsecured endpoint. If you follow good coding practices the latter shouldn’t be too hard to fix (your API URL should be a global variable), but getting the web server to serve out an SSL connection can take a bit of wrangling, especially if you don’t control the web server yourself. So you deploy the code and hope that no one notices, as at least 5 companies have gotten away with such things for years at a time.
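The “API URL as a global variable” practice amounts to very little code. A minimal sketch (hypothetical names and URL, not from the post): one constant that every call site builds its URL from, so switching from the unsecured to the secured endpoint at deploy time is a one-line change.

```python
# Flip this single constant at deploy time; every API call follows it.
# "https://api.example.com" is a placeholder, not a real service.
API_BASE = "https://api.example.com"


def endpoint(path: str) -> str:
    """Build a full API URL from the single configured base."""
    return f"{API_BASE.rstrip('/')}/{path.lstrip('/')}"


print(endpoint("login"))  # https://api.example.com/login
```

In practice the constant would come from a config file or environment variable per environment, which removes the temptation to ship the debug-time `http://` endpoint to production.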
Security is one of those things that’s always the lowest priority until something happens that forces your hand. It’s one of the most laborious aspects of developing a system, as it’s usually not very interesting and only serves to increase the amount of work you have to do. Still, it’s so fundamental to get these things right from the get-go that it shocks me how many multi-developer companies manage to let them slip through the cracks. Perhaps it’s just my system administrator background that’s made security such a primary focus for me, but really it should be one of the prime considerations for anyone looking to build a system with users on the Internet.