Windows has always had a troubled relationship with security. As the most popular desktop operating system it’s frequently the target of all sorts of weird and wonderful attacks which, to Microsoft’s credit, they’ve done their best to combat. However it’s hard to forget the numerous missteps along the way, like the abhorrent User Account Control system which, in its default state, did little to improve security and just added another level of frustration for users. But if the features coming in the technical preview of Windows 10 are anything to go by, Microsoft might finally be making big boy steps towards improving security on their flagship OS.
Whilst there are numerous third party solutions for two factor authentication on Windows, like smartcards or tokens, the OS itself has never had that capability natively. This means that for the vast majority of Windows users this heightened security mode has been unavailable. Windows 10 brings with it the Next Generation Credentials service which gives users (both consumer and corporate) the ability to enrol a device to function as a second factor for authentication. The larger mechanics of how this works are still being nutted out, however the enrolled device is protected by a PIN to prevent unauthorized access to the code, ensuring that losing your device doesn’t mean someone automatically gains access to your Windows login. Considering this kind of technology has been freely available for years (hell, my World of Warcraft characters have had it for years) it’s good to see it finally making its way into Windows as native functionality.
There’s also extensive customization available thanks to Microsoft adopting the FIDO Alliance standard rather than developing their own proprietary solution. In addition to the traditional code-generation two factor auth you can also use your smartphone as a sort of smartcard, with it being automatically recognised when brought near a Bluetooth enabled PC. This opens up the possibility of your phone being a second factor for a whole range of services and products that currently make use of Microsoft technology, like Active Directory integrated applications. Whilst some might lament that possibility, the fact that it’s based on open standards means such functionality won’t be limited to the Microsoft family of products.
Microsoft has also announced a whole suite of better security features, many of which have been third party products for the better part of a decade. Encryption is now available for the open and save dialogs natively within the Windows APIs, allowing developers to easily integrate encryption functionality into their applications. This comes hand in hand with controls around which applications can access said encrypted data, ensuring that data handling measures can’t be circumvented by using non-standard applications. Device lock down is also now natively supported, eliminating the need for other device access control software like Lumension (which, if you’ve ever worked with it, you’ll likely be thankful for).
It might not be the sexiest thing happening in Windows 10 but it’s by far one of the more important. As the de facto platform for many people, increases in Windows security are very much welcome and hopefully this will lead to a much more secure computing world for us all. These measures aren’t a silver bullet by any stretch of the imagination but they’ll go a long way to making Windows far more secure than it has been in the past.
In my travels through the USA I became intimately acquainted with their high level of airport security. Upon entering the country we were fingerprinted, photographed and grilled about the purpose of our trip. There were also the long lines for getting through the metal detectors and full body scanners, usually taking up a good 45 minutes of my time. I was never chosen to go through the backscatter x-ray machines (nor did I see any of the newer millimetre wave ones) but I did see many people go through them. Most of them weren’t exactly what you’d call a security risk (mostly people in wheelchairs) but I knew exactly why those machines were there: to make everyone feel safer without actually being so.
This is what is referred to as security theatre. These scanners are supposedly better at detecting things that slip by metal detectors, which they accomplish by using low-energy x-rays that penetrate through clothing. Solid objects then become obvious and, should something suspicious be identified, the passenger can be taken aside for further searching. Trouble is the machines aren’t terribly effective at what they’re designed to do, and the back-scatter x-ray machines emit ionizing radiation (not a lot, mind you, but there’s been minimal research done into them). Using them then seems like a pointless exercise and indeed, even though they’ve been in operation in the USA for quite some time, the jury is still out on whether they’re actually effective.
So you can then imagine my surprise when I find out that we’ll be getting these scanners at all international airports in Australia:
PASSENGERS at airports across Australia will be forced to undergo full-body scans or be banned from flying under new laws to be introduced into Federal Parliament this week.
In a radical $28 million security overhaul, the scanners will be installed at all international airports from July and follows trials at Sydney and Melbourne in August and September last year.
The Government is touting the technology as the most advanced available, with the equipment able to detect metallic and non-metallic items beneath clothing.
Now we won’t be getting the dubious back-scatter style ones here; instead we’ll have the newer millimetre wave ones that don’t emit ionizing radiation. That’s the only good news though, as they’ve also amended the legislation that allowed you to turn down things like this in favour of a pat down, with the penalty for refusing to go through one being that you’ll be barred from your flight. To top it all off the transport minister, Anthony Albanese, sealed it with this choice quote: “I think the public understands that we live in a world where there are threats to our security and experience shows they want the peace of mind that comes with knowing government is doing all it can”.
It’s almost like he knows these things are a useless piece of security theatre, but is going ahead with them anyway.
More than a decade has passed since the events of 11/9/2001 and we’ve yet to see a repeat, or an attempted repeat, of the events that led up to that tragedy here or overseas. Health and privacy concerns aside, the reality is that these scanners don’t really accomplish what they’re designed to do and are thus just another inconvenience and waste of taxpayer dollars. I can understand that there are some who will feel safer by seeing them there but that doesn’t change the fact that they’re just another piece of security theatre, and a costly one at that.
It’s nigh on impossible to make a system completely secure from outside threats, especially if it’s going to be available to the general public. Still there are certain measures you can take that will make it a lot harder for a would-be attacker to get at your users’ private data, which is usually enough for them to give up and move on to another, more vulnerable target. However, as my previous posts on matters of security have shown, many companies (especially start-ups) eschew security in favour of working on new features or improving user experience. This might help in the short term to get users in the door, but you run the very real risk of being compromised by a malicious attacker.
The attacker might not even be entirely malicious, as appears to be the case with one of the newest hacker groups, calling themselves LulzSec. There’s a lot of speculation as to who they actually are, but their Twitter alludes to them originally being part of Anonymous before leaving, since they disagreed with the targets being chosen and were more in it for the lulz than anything else. Their targets range drastically from banks to game companies and even the US Senate, with the causes changing just as wildly, from simply doing it for the fun of it to retaliation for wrongdoings by corporations and politicians. It would be easy to brand them as anarchists just out to cause trouble for the reaction, but some of their handiwork has exposed serious vulnerabilities in what should have been very secure web services.
One of their recent attacks compromised more than 200,000 Citibank accounts through the online banking system. The attack was nothing sophisticated (although authorities seem to be spinning it as such), with the attackers gaining access by simply changing the identifying URL and then automating the process of downloading all the information they could. In essence Citibank’s system wasn’t verifying that the user accessing a particular URL was authorized to do so; it would be like logging onto Twitter, typing, say, Ashton Kutcher’s account name into the URL bar and then being able to send tweets on his behalf. It’s a failure of authorization at its most fundamental level and LulzSec shouldn’t have been able to exploit such a rudimentary security hole.
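To illustrate just how basic the fix is (using made-up account data and Python purely for the sketch), this whole class of bug, known as an insecure direct object reference, comes down to a single missing ownership check on the server:

```python
# Sketch of the check Citibank's system apparently lacked: the server must
# verify the logged-in user owns the resource named in the URL, not merely
# that they're logged in. Accounts and balances here are made up.
ACCOUNTS = {
    "acct-100": {"owner": "alice", "balance": 4200},
    "acct-101": {"owner": "bob", "balance": 1337},
}

def get_account(session_user, account_id):
    account = ACCOUNTS.get(account_id)
    if account is None or account["owner"] != session_user:
        # Deny without revealing whether the account even exists
        raise PermissionError("not authorized")
    return account
```

With that one check in place, walking the URL space just yields a string of denials instead of a database dump.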
There are many other examples of LulzSec hacking various other organisations, with the latest round of them all being games development companies. This has drawn the ire of many gamers, which just spurred them on to attack even more game and related media outlets just so they could watch the reaction. Whilst it’s kind of hard to take the line of “if you ignore them they’ll go away” when they’re unleashing a DDoS or downloading your users’ data, the attention lavished on them by the press and butthurt gamers alike is exactly what they’re after (and yes, I do get the irony of mentioning that :P). Still, had they not been catapulted to Internet stardom so quickly I can’t imagine they would have continued being as brash as they are now, although there is the possibility they might have started out doing even more malicious attacks in order to get attention.
Realistically though the companies that are getting compromised by rudimentary URL and SQL injection attacks only have themselves to blame, since these are the most basic security issues, with well known solutions, and shouldn’t pose a risk to them. Nintendo showed they could withstand an attack without any disruption or loss of sensitive data, and LulzSec was quick to post the security hole and then move on to more lulzy pastures. The DDoSing of others is a bit more troublesome to deal with, however there are many services (some of them even free) designed to mitigate the impact of such an incident. So whilst LulzSec might be a right pain in the backside for many companies and consumers alike, their impact would be greatly softened by a strengthening of security at the most rudimentary level and perhaps giving them just a little less attention when they do manage to break through.
Security is one of those things that many people put aside when developing a new product, since it doesn’t get you any closer to launching and adds no face value for your end users. For many it’s the last thing on their mind until they have an incident, after which it becomes the top priority (as we’ve seen with Sony recently). With the average data breach costing a company something in the order of $7 million you can see why a lot of companies go belly up once they’ve been hit, and that’s why I still find it frustrating when new start-ups and companies put security on the backburner. They’re really shooting themselves in the foot.
It’s not like basic security is that hard either. I’ve said in the past that SSL isn’t that hard and I stand by those comments, especially if you’re building something on any of the popular frameworks. SSL is just the beginning though, as you can still fall prey to security problems like SQL injection and cross-site scripting attacks even if your site is using SSL for the more sensitive aspects. Again though, since the vast majority of new web applications are built on some kind of framework, most of this legwork is taken care of for you, as long as you make a token effort to use them.
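On the SQL injection front, the protection frameworks hand you for free is the parameterized query. A quick sketch using Python’s built-in sqlite3 module, with a made-up users table:

```python
import sqlite3

# Throwaway in-memory database purely to demonstrate the pattern;
# the table and its contents are invented for this example.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, password_hash TEXT)")
conn.execute("INSERT INTO users VALUES (?, ?)", ("alice", "0123abcd"))

def find_user(name):
    # The ? placeholder hands `name` to the driver as data, never as SQL,
    # so a classic payload like "alice' OR '1'='1" matches nothing.
    return conn.execute(
        "SELECT name FROM users WHERE name = ?", (name,)
    ).fetchone()
```

The injection-vulnerable version is the one that builds the query with string concatenation; the placeholder form is no harder to write and closes the hole entirely.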
I think the reason I get so uppity about this is that some of the institutions that should be the most secure, like banks, fail to implement security on the level that others, say game developers, manage quite well and surprisingly cheaply. The best example of this would have to be Blizzard, who implemented their authenticator program to combat the constant problem of accounts being hacked. Compare this to the 3 or 4 banks I’ve had dealings with over the past couple of years, none of which have offered me such a service, and you can begin to understand why I’m a little annoyed that my World of Warcraft character’s epics are more secure than the cash I use to pay for them.
It’s not all bad news however, as the era of the smartphone has made it possible to offer two factor authentication quite cheaply. Both Google and Facebook have now made it possible to log in to their services using two factor authentication via an application on your smartphone. Whilst I’m sure the vast majority of people won’t use it (until after something bad happens, of course) it still shows that they’re at least thinking in the right direction, unlike many other services which just don’t bother.
What really surprises me is how this isn’t a commodity service yet. The idea behind two factor authentication is simple: you have to know something (your password) and have something (your smartphone) in order to gain access to the system with the specified user account. Realistically the password problem is already solved and the second factor is really just a simple random number generator that’s seeded by a particular value that both you and the server know. Couple that with decent time synching (easily done on any phone with GPS) and you’re well on your way to better security. Sure there’s a bit more to it than that, but the fact that I’ve been considering doing this as a weekend project ever since I thought of it should give you a clue as to just how easy it is to put decent security in an online service.
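To back up that “weekend project” claim, here’s roughly what the second factor boils down to, sketched in Python using the standard HOTP/TOTP construction (RFC 4226/6238) rather than any particular vendor’s implementation:

```python
import hashlib
import hmac
import struct
import time

def totp(secret, timestep=30, digits=6, now=None):
    """Time-based one-time password: HMAC the current time window,
    then truncate to a short decimal code (RFC 4226 / RFC 6238)."""
    counter = int((time.time() if now is None else now) // timestep)
    msg = struct.pack(">Q", counter)  # the "seed both sides know" is secret + time
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F        # dynamic truncation per RFC 4226
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)
```

The server and the enrolled device share `secret`; both derive the same 6 digit code within each 30 second window, so the server just checks the submitted code against the current (and usually adjacent) windows. That really is the whole trick.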
I’m hardly an expert at this whole security stuff (hell, I bet if you hacked away at any of my projects for 10 minutes you’d find some awesome exploit) but even in this day and age of malware/crimeware/scamware I find it surprising just how lax some people can be when it comes to rudimentary security measures. You’re never going to be able to stop the most determined of intruders, but it’s the casual hacker tourists that you want to keep out. Realistically you only need to be more secure than the next guy they have a go at and, judging by the terrible level of security present online these days, that’s not going to be too hard. So, you developers of online web services, you have no excuse for not at least attempting to put security into your product, and should I catch you sending my login details in clear text over the Internet you can be sure I’ll be the first in line to blast you for making such mistakes.
Yeah that’s right, I’m going to blog about you and there’s nothing you can do about it… TAKE IT!
I’m not really sure I could call myself a fan boy of any technology or company any more. Sure there are some companies whose products I really look forward to, but if they do something completely out of line I won’t jump to their defense, instead choosing to openly criticize them in the hopes that they will get better. Still I like to make known which companies I may look upon with a rose tint, just so that anyone reading these posts knows what they’re getting themselves into. One such company is Sony, who I’ve been a long time fan of but have still criticized when I’ve felt they’ve done me wrong.
Today I’ll be doing that once again.
As you’re probably already aware, the Playstation Network (PSN), the online network that allows PS3 owners to play with each other and buy digital content, was recently compromised by an external entity. The attackers appear to have downloaded all account and credit card information stored on Sony’s servers, prompting Sony to shut down the service for an unknown amount of time. The breach is of such a large scale that it has received extensive coverage in both online and traditional news outlets, raising questions about how such a breach could occur and what safeguards Sony actually had in place to prevent it.
Initially there was little information as to what this breach actually entailed. Sony had chosen to shut down the PSN to prevent any further breaches and left customers in the dark as to the reason. It took them a week to notify the general public that there had been a breach and another 4 days to contact customers directly. Details were still scant until Sony sent an open letter to Congress detailing their current level of knowledge. Part of the letter hinted that the hacktivist group Anonymous may have played a part as well, without blaming them directly. More details have made themselves public since then.
It has also recently come to light that the servers Sony was using for the PSN were running outdated versions of the popular Apache web server and lacked even the most rudimentary security provisions you’d expect an online service to have. This information was public knowledge several months before the breach occurred, with posts on Sony’s forums detailing the PSN servers’ status. As a long time system administrator I find it ludicrous that the servers were allowed to operate in such a fashion, and I’m pretty sure I know where to lay the blame.
Whilst Anonymous aren’t behind this attack they may have unwittingly provided cover for part of the operation. Their planned DDoS on the PSN servers did go ahead and would’ve provided a timely distraction for any would-be attacker looking to exploit the network. Realistically they wouldn’t have been able to get much of the data out at this point (or so I assume; Sony’s servers could have shrugged off the DDoS) but it would have given them ample opportunity to set the system up for the data dump in the second breach that occurred a few days later.
No, the blame here lies squarely with those in charge, namely the PSN architects and executives. The reason I say this is simple: an engineer worth his salt wouldn’t allow servers to run unpatched without strict security procedures in place. To build something on the scale of the PSN requires at least a modicum of expertise, so I can’t believe they would build a system like that unless they were instructed to do so. I believe this stems from Sony’s belief that the PS3 was unhackable and as such could be trusted as a secure endpoint. Security 101 teaches you, however, that no client can be trusted with the data it sends you, and this explains why Sony became so paranoid when even the most modest of hacks showed the potential for the PS3 to be exploited. In the end it was Sony’s superiority complex that did them in, pretending their castle was impregnable.
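That Security 101 rule of never trusting the client boils down to the server recomputing anything the client claims. A trivial sketch (the catalogue, prices and function are all hypothetical, not Sony’s actual system):

```python
# Never trust the client: whatever a "secure endpoint" claims, the server
# recomputes the answer from its own authoritative data. Items and prices
# here are invented for illustration.
PRICES = {"gem": 500, "mount": 2500}

def checkout(item_id, client_claimed_total):
    server_total = PRICES[item_id]
    if client_claimed_total != server_total:
        # A tampered client (say, an exploited console) claimed a bogus total
        raise ValueError("price mismatch; rejecting request")
    return server_total
```

A system designed around this assumption survives its clients being hacked; one designed around an “unhackable” client does not.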
The fallout from this incident will be long and wide reaching and Sony has a helluva lot of work to do if they’re going to fully recover from the damage. Whilst they’re doing the right thing in offering some restitution to everyone who was affected, it will still take them a long time to rebuild all the good will they’ve burned on this incident. Hopefully this teaches them some valuable lessons on security and they’ll stop thinking they’re atop an impregnable ivory tower. In the end it will be worth it for Sony, if they choose to learn from their mistakes.
It’s been a long time since I used a Norton product. Way back when I had just started working for Dick Smith Electronics I can remember happily recommending their products to nearly every customer that walked through the door, and rarely did I get any complaints back. That all changed when I moved on to actually fixing people’s computers, whereupon I discovered that Norton’s latest incarnation (then 2004) was actually worse than the problems it was trying to solve. So many times I’d fully clean up a PC only to have it bog down again when I put Norton back on, so you can imagine my scepticism when I was approached to review their latest version, Norton Internet Security 2011. Still, I figured they couldn’t have survived if their product range had continued down the path it was on all those years ago, so I decided to give it a go and see how far (or not) they had come.
Still, I wasn’t entirely ready to risk my main machine on this, so I fired up a Windows 7 virtual machine on my server and began the installation process there. Installing Norton took just under 10 minutes, including the time it took to download updates. Interestingly the installer updated itself before attempting to install on my system, which is definitely a welcome change from updating afterwards. Doing so before installation means Norton should be capable of detecting threats that might try to subvert the installation process if you’re trying to clean an already compromised system. Unfortunately the install won’t complete until you provide your registration key, meaning there’s no free trial should you want to give friends the software to try before they buy. Still, the retail copy allows you to protect up to 3 PCs with the one purchase, enough to cover most households. Part of the installation process will also ask if you want to participate in the Norton Community, which I’d definitely recommend you do (more on this later).
The user interface is worlds away from the Norton that I remembered. The main screen is very well laid out with all the needed features available right up front; I rarely had to dig more than one or two layers deep to find a setting I was looking for. The map at the bottom of the screen shows recent cyber crime incidents across the world (although how they define these is a bit of a mystery) and is pretty cool to watch as it ticks slowly over the past 24 hours. By itself though it doesn’t really add much value for the regular user, apart from possibly piquing their curiosity about the events.
At this point a regular user could close the program and leave it at that since everything else is taken care of automatically by Norton Internet Security. This was why I used to recommend Norton products to people as they required the least amount of intervention from users to ensure that they kept working as intended. For the super and power users however there’s a fair bit more value that can be unlocked if you want to go digging a little deeper into Norton Internet Security, as I’ll show you below.
Before I get into the guts of this program let me talk about its performance. Talk to any long time Windows administrator and they’ll tell you that anti-virus programs can be some of the most performance degrading applications you can install on your PC. This isn’t through any fault of their own; rather it’s because to provide the maximum level of security they have to be constantly active, ensuring they’re ready for any incoming threats. Norton used to be the worst of the lot in this regard, often bringing top of the line equipment to its knees in order to keep it safe.
Norton Internet Security 2011 however has progressed quite significantly since my encounters with its previous incarnations. Keen readers would’ve noticed that the main screen of Norton had a Performance link on it which reveals the screen shown above. The period shown before the two large spikes was completely idle and you can see that Norton does a good job of keeping its resource usage low during these periods. The two large spikes are from me performing a scan across about 600GB of data and doing that will use up most of your available system resources whilst the scan is running its course. This isn’t unique to Norton however and the scanning itself was quite quick, taking just under an hour to complete. The System Insight section provides an overview of what has been happening on your system over the past month. For an administrator like me such information can be quite valuable especially when trying to diagnose when some problem may have originated.
The meat of any AV program however is in its ability to catch potential problems before they can do any harm, which Norton Internet Security seems quite capable of doing.
The EICAR file is a virus test file designed to trigger any AV product. Upon downloading it I was greeted with a little pop up in my browser saying the file was being scanned for viruses, and not too long after I was presented with this. As you can see, not only does Norton identify the file and remove it before it has a chance to inflict any damage, it also provides a wealth of information about the potential threat it removed from your system. This is where the power of the Norton Community comes in, as it gives you some idea of how widespread a threat might be and what it would do to your system if it got in. This kind of information is great for empowering users, making them aware of what’s happening and hopefully educating them to avoid such things in the future. Most users probably won’t take advantage of it, but it’s still quite useful for power users or system administrators.
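For the curious, the EICAR file is nothing more than a 68 character string that AV vendors have agreed to treat as if it were malware, which makes it perfect for safely testing on-access scanning. A quick Python snippet to produce one (the string itself is harmless by design, though any resident scanner should pounce on the file the moment it’s written):

```python
# The standardized EICAR anti-virus test string: a plain-text, harmless
# 68-byte sequence that scanners are expected to flag as a "virus".
EICAR = (
    r"X5O!P%@AP[4\PZX54(P^)7CC)7}$"
    "EICAR-STANDARD-ANTIVIRUS-TEST-FILE!$H+H*"
)

with open("eicar_test.txt", "w") as handle:
    handle.write(EICAR)  # an on-access scanner should quarantine this file
```

If your AV is doing its job you’ll never get the chance to open the file afterwards.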
The feature even extends to running processes which becomes quite handy for something you might be suspicious of but aren’t quite sure about. Again this kind of information might not be particularly useful to the user directly but it could prove quite valuable to administrators or super users attempting to troubleshoot issues.
The second feature set is the network protection section which encompasses two interesting features: Vulnerability Protection and the Network Security Map.
Vulnerability protection is an interesting idea. In essence Norton Internet Security can protect against flaws in particular programs, preventing the exploit from working. Whilst the vast majority of these exploits have been patched not all users are rigorous with their updates and Norton can help cover the gap for them. Additionally this also allows Norton to respond to threats quite quickly, nullifying their effects whilst the software vendors work on releasing a patch. Since there’s usually a month between patch cycles this feature goes a long way to securing a user against imminent threats that they might not even be aware of.
The network security map gives you a broad overview of the network you’re on and the other devices connected to it. This kind of thing can be helpful for users on public internet connections who want to be sure they’re safe. Whilst it can’t detect any of the more advanced threats (like a compromised access point running a man in the middle attack), it does give users some much needed guidance on when they should and shouldn’t be doing things over a public connection. The information on other hosts is interesting too, as it’s basically an IP and port scanner. Normal users probably won’t care about the information contained in here, but after the hassle I went through to spoof a MAC address for free wifi in Los Angeles this kind of thing is quite valuable (if for all the wrong reasons ;)).
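At its core that host information comes from the same trick any basic scanner uses: attempt a TCP connection to each port and see what answers. A minimal sketch in Python (and the usual caveat: only point it at machines you’re allowed to probe):

```python
import socket

def scan_ports(host, ports, timeout=0.2):
    """Return the subset of `ports` accepting TCP connections on `host`."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
            sock.settimeout(timeout)
            # connect_ex returns 0 on success instead of raising,
            # which keeps the loop simple for closed/filtered ports
            if sock.connect_ex((host, port)) == 0:
                open_ports.append(port)
    return open_ports
```

Real products layer service fingerprinting and timing tricks on top, but the core loop is no more complicated than this.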
Lastly there’s the Web Protection section, which contains an identity safe, a credit card store and parental controls. Whilst there are already many password saving solutions out there, the fact that Norton includes one is a good step towards improving a user’s security. Using a password store means that should you be compromised by a keylogger, a malicious attacker won’t be able to get hold of your passwords as you type them. Sure, there’s the possibility they’ll crack the store, but it’s another layer of security that can help reduce the impact of a compromised system. The same goes for the credit card store: whilst credit card details are one of the few things you don’t want to store anywhere on your computer, using it provides similar benefits to the password safe.
I didn’t get into the parental controls much as they’re very much geared towards fretting parents who require fine grained control over their child’s online experience. They provide all the usual goodies of being able to see what your kids are doing online and creating rule sets for browsing, but probably the most useful part is the online resources for educating children on safe web behaviour. Personally I’m a fan of keeping the PCs in a communal area and being an active online participant yourself, instead of approaching the problem at arm’s length with tools like this. Still, it wouldn’t be in the product if users hadn’t been begging for it, so I’m sure many will appreciate its inclusion.
To be honest I went into this review with a great deal of scepticism, thinking that Norton wouldn’t have changed their sinful ways despite their continued existence. I’m glad to say that my experience with their latest product, Norton Internet Security 2011, changed all that; they’ve delivered a program I wouldn’t hesitate to recommend and use myself. Harnessing the power of their large user base to empower users with the information they gather is an excellent way to improve security, and for power users like me it’s something that gives just that little bit of an edge when dealing with unknown issues. Before this review I didn’t think I’d ever need to pay for anti-virus again, as things like Microsoft Security Essentials covered all the required functionality. Now, however, I can see the vast difference between a paid product like this and its free cousins, and I couldn’t bring myself to say that buying Norton Internet Security would be money wasted. If you’re looking for a paid anti-virus product with a wealth of features you wouldn’t go wrong with Norton Internet Security 2011.
Norton Internet Security 2011 is available from most software stores and online for AU$69.99. A copy of this software was provided to me free of charge for the purposes of reviewing it. All testing was conducted on a Windows 7 virtual machine running on VMware ESXi with 2 vCPUs, 2GB RAM and a 40GB HDD.
Now I don’t consider myself to be some uber-programmer, more your garden variety enthusiast who knows how to work his way through a Google search to find what he’s after. Still I’m often amazed to find those who call themselves programmers (and, even more worryingly, convince others to pay them) falling for things that really should be obvious to anyone with half a brain about them. Sure, I’m not immune to making some serious logic errors or just plain WTFery, but something as fundamental as not sending your users’ passwords across the Internet in a form that anyone with freely available packet capture software, or even a Firefox plugin, can read really should go without saying. Traditionally this is done by encrypting the connection between you and the user using SSL, so that anyone listening in just sees garbage instead of your user’s password.
Securing a web connection between a user and your server, in the Microsoft world at least, doesn’t take much configuration. For my pet project it was little more than adding a line of code at the top of the API implementation, installing an SSL certificate on my server and creating a client access policy file to enable cross domain communication. All in all I went from an API that sent everything in clear text to a fully secured one in a little under 2 hours, with a good half of that spent googling and sussing out which SSL provider I was going to go with. Still, it seems that nearly every month I hear of at least one big start-up or long running service that fails to encrypt its login details, potentially endangering its users.
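Outside the Microsoft stack the “one line at the top of the API” idea translates to refusing to serve sensitive endpoints over plain HTTP. It can be sketched as a tiny WSGI middleware (the wrapped app is hypothetical; Python is used purely for illustration):

```python
# Minimal WSGI middleware sketch: bounce any plain-HTTP request over to
# the HTTPS endpoint so credentials are never accepted in the clear.
def require_https(app):
    def middleware(environ, start_response):
        if environ.get("wsgi.url_scheme") != "https":
            host = environ.get("HTTP_HOST", "localhost")
            path = environ.get("PATH_INFO", "/")
            start_response("301 Moved Permanently",
                           [("Location", "https://" + host + path)])
            return [b""]
        return app(environ, start_response)  # secure: pass through untouched
    return middleware
```

Wrap your application once at startup and every login form, cookie and API call gets herded onto the encrypted connection, with the actual certificate handling left to the web server.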
The first such company that I heard about was Foursquare, a popular geo-social networking application. Now I had been using that application for quite some time before I heard about them not encrypting anything, so you can imagine how I felt when I found out they had let that little detail slip their minds for well over a year. Sure they were quick to fix it, but who knows how long it would have gone unfixed had no one said anything about it. Their close rival Gowalla also neglected to implement any sort of secure communications for almost 3 years, making me wonder how something like that could go unnoticed for so long.
It doesn’t just stop there either. Last month saw not one but two companies being outed for passing login information around in clear text. The first was Napster (yeah, even I’m surprised they’re still around) who not only had no encryption on their login forms but also sent users their login credentials when trying to get them to renew. Then just 2 weeks later it was revealed that the recent hit photo sharing app Instagram was also sending information over the web in ways it shouldn’t be. To Instagram’s credit they were quick in getting a fix out, but it still seems like a fundamental error to make when you’re handling sensitive data over the Internet.
For all the vitriol that I’m launching at these companies I can understand the mindset that leads to this kind of mistake. For the longest time I developed everything without SSL as it made debugging the whole application that much easier. Even with Fiddler’s SSL decrypting feature things still don’t seem to work quite right when cracking open encrypted communications, so the solution of just turning SSL off works much better. Then when it comes time to deploy, not only is your app not configured to use SSL, all your API calls are also made to the unsecured endpoint. If you follow good coding practices the latter shouldn’t be too hard to fix (your API URL should be a global variable) but getting the web server to serve out an SSL connection can take a bit of wrangling, especially if you don’t control the web server yourself. So you deploy the code and hope that no one notices, as at least 5 companies have gotten away with doing for years at a time.
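That “API URL as a global variable” habit is worth a concrete illustration. A minimal sketch (Python, with a hypothetical endpoint, not the real service) of keeping the base URL in one place, so moving every call to the secured endpoint is a one-line change at deploy time:

```python
# Sketch: keep the API base URL in a single setting so switching
# from the debug (HTTP) endpoint to the secured (HTTPS) one touches
# exactly one line. The domain and paths are made up for illustration.

API_BASE = "https://api.example.com/v1"   # was "http://..." while debugging


def endpoint(path):
    """Build a full URL for an API call from the single base setting."""
    return f"{API_BASE}/{path.lstrip('/')}"
```

Every call site then uses `endpoint("login")` and the like, and nothing in the codebase hard-codes the scheme or host.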
Security is one of those things that’s always the lowest priority until something happens that forces your hand. It’s one of the most laborious aspects of developing a system as it’s usually not very interesting and only serves to increase the amount of work you have to do. Still it is so fundamental to get these things right from the get go that it still shocks me how many multi-developer companies manage to let things like that slip through the cracks. Perhaps it’s just my system administrator background that’s made security such a primary focus for me but really it should be one of the prime considerations for anyone looking to build a system with users on the Internet.
There were so many times when I was coding up early versions of Lobaco that I didn’t give any thought to security. Mostly it was because the features I was developing weren’t really capable of divulging anything that wasn’t already public, so I happily kept on coding, leaving the tightening up of the security for another day. Afterwards I started using some of the built in authentication services available with Windows Communication Foundation, but I realised that whilst they were easy to use with the Silverlight client they weren’t really designed for anything that wasn’t Windows based. After spending a good month off from programming what would be the last version of Geon I decided that I would have to build my own services from the ground up, and with that my own security model.
You’d think that with security being such a big aspect of any service holding personal information about users there would be dozens of articles about it. Well, there are, but none of them were particularly helpful and I spent a good couple of days researching various authentication schemes. Finally I stumbled upon this post by Tim Greenfield, who laid out the basics of what has now become the authentication system for Lobaco. Additionally he made the obvious (but oh so often missed) point that when you’re sending any kind of user name and password over the Internet you should make sure it’s done securely using encryption. Whilst that was a pain in the ass to implement it did mean that I could feel confident about my system’s security and could focus on developing more features.
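Tim’s full scheme isn’t reproduced here, but the one rule from it that this post keeps hammering on can be sketched in a few lines: never let credentials leave the client over an unencrypted channel. A Python illustration (hypothetical function and URL, assumptions all mine):

```python
from urllib.parse import urlsplit


def post_credentials(url, username, password):
    """Refuse to send credentials anywhere that isn't HTTPS.

    A sketch of the principle only; it builds the request rather
    than actually sending it, and the names are illustrative.
    """
    if urlsplit(url).scheme != "https":
        raise ValueError("refusing to send credentials over an insecure channel")
    # A real client would POST these over the verified TLS connection.
    return {"url": url, "body": {"user": username, "pass": password}}
```

Putting the guard in the client library means a misconfigured endpoint fails loudly in development instead of silently leaking passwords in production.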
However when it comes down to the crunch new features will often beat security in terms of priority. There were so many times I wanted to just go and build a couple new features without adding any security into them. The end result was that whilst I got them done they had to be fully reworked later to ensure that they were secure. Since I wasn’t really working under any deadline this wasn’t too much of a problem, but when new features trump security all the way to release you run the risk of releasing code into the wild that could prove devastating to your users.
No example of this has been more prominent than the recent security issues that have plagued the popular micro-blogging service Twitter. Both of them came hot on the heels of the recently released new Twitter website, which enables quite a bit more functionality and with it the potential to open up holes for exploitation. The first was intriguing as it basically allowed someone to force the user’s browser to execute arbitrary JavaScript. Due to Twitter’s character limit the impact this could have was minimised, but it didn’t take long before malicious attackers got hold of it and used it for various nefarious purposes. This was a classic example of something that could have easily been avoided had they sanitised user input, rather than trying to spot malicious behaviour and code against it.
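The exploit worked because tweet text was reflected into the page markup unescaped. The fix isn’t a blacklist of “bad” strings; it’s escaping everything on output. A minimal Python sketch of the idea (Twitter’s actual stack is obviously different):

```python
import html

# An attribute-breakout payload of the kind used in the Twitter
# exploit: the quote closes the href attribute and injects a
# JavaScript event handler.
tweet = 'x" onmouseover="alert(1)'

# Escaping on output neutralises it regardless of what the
# attacker typed; no blacklist required.
safe = html.escape(tweet, quote=True)
```

The escaped text renders as the literal characters the user typed, while the raw version would have let the browser treat the quote as markup.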
The second one was a bit more ugly as it had the potential to do some quite nasty things to a user’s account. It used the session details that Twitter stores in your browser to send messages via your account. Like the other Twitter exploit it relied on the user’s typical behaviour of following links posted by the people they follow. This exploit cannot be blamed squarely on Twitter either, as link shortening services that hide the real destination behind a short URL make it that much harder for normal users to distinguish the malicious from the mundane. Still, Twitter should have expected this kind of session jacking (I know I have) and built in countermeasures to stop it.
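The standard countermeasure for this kind of session riding is to pair the session cookie with a secret token that a forged request can’t know: the browser sends cookies automatically, but an attacker’s link can’t read or guess the token. A hedged Python sketch of the check (illustrative names, not Twitter’s implementation):

```python
import hmac
import secrets


def issue_csrf_token():
    """Generate a per-session secret to embed in legitimate forms."""
    return secrets.token_hex(16)


def is_valid_request(session_token, submitted_token):
    """A state-changing request must echo the session's token back.

    compare_digest avoids leaking the token via timing differences.
    """
    return hmac.compare_digest(session_token, submitted_token)
```

A request riding on the session cookie alone fails the check, because the attacker never saw the token that was issued to the victim’s session.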
Any large public system will attract those looking to exploit it for nefarious ends; that’s part and parcel of doing business on the web. The key then is to build your systems with the expectation that they will be attacked rather than waiting for an incident to arise. As a developer I can empathise that writing code that’s resistant to every kind of attack is next to impossible, but there are so many things that can be done to ensure that the casual hackers steer clear. Twitter is undergoing a significant amount of change with a vision to scale themselves up for the big time, right up there with Google and Facebook. Inevitably this will mean they’ll continue to have security concerns as they work to scale themselves out, and hopefully these last two exploits have shown them that security is something they should consider more closely than they have in the past.