Monthly Archives: September 2010

When Features Trump Security: Twitter’s Ongoing Security Concerns.

There were so many times when I was coding up early versions of Lobaco that I didn’t give any thought to security. Mostly it was because the features I was developing weren’t really capable of divulging anything that wasn’t already public, so I happily kept on coding, leaving the tightening up of the security for another day. Afterwards I started using some of the built-in authentication services available with Windows Communication Foundation, but I realised that whilst they were easy to use with the Silverlight client they weren’t really designed for anything that wasn’t Windows based. After spending a good month off from programming what would be the last version of Geon I decided that I would have to build my own services from the ground up, and with that my own security model.

You’d think that with security being such a big aspect of any service that contains personal information about users there would be dozens of articles about it. Well there are, but none of them were particularly helpful and I spent a good couple of days researching various authentication schemes. Finally I stumbled upon this post by Tim Greenfield, who laid out the basics of what has now become the authentication system for Lobaco. Additionally he made the obvious (but oh so often missed) point that when you’re sending any kind of username and password over the Internet you should make sure it’s done securely using encryption. Whilst that was a pain in the ass to implement, it did mean that I could feel confident about my system’s security and could focus on developing more features.
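To make that encryption point concrete, here’s a rough sketch in Python (not the actual Lobaco code; the endpoint and field names are made up) of what submitting credentials safely looks like. The only real requirement is that the request travels over HTTPS so the password is never sent in the clear:

```python
import requests  # third-party HTTP client

# Hypothetical login endpoint -- the real Lobaco API isn't shown here.
LOGIN_URL = "https://api.example.com/login"

def login(username: str, password: str) -> str:
    """Send credentials over TLS and return a session token."""
    # Because the URL is https:// the credentials are encrypted in
    # transit, and certificate verification (on by default) guards
    # against man-in-the-middle attacks.
    response = requests.post(
        LOGIN_URL,
        json={"username": username, "password": password},
        timeout=10,
    )
    response.raise_for_status()
    # Assume the service returns a token for subsequent requests, so
    # the password itself never has to be sent again.
    return response.json()["token"]
```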

However when it comes down to the crunch, new features will often beat security in terms of priority. There were so many times I wanted to just go and build a couple of new features without adding any security to them. The end result was that whilst I got them done they had to be fully reworked later to ensure that they were secure. Since I wasn’t really working under any deadline this wasn’t too much of a problem, but when new features trump security all the way to release you run the risk of putting code into the wild that could prove devastating to your users.

No example of this has been more prominent than the recent security issues that have plagued the popular micro-blogging service Twitter. Both of them came hot on the heels of the new Twitter website released recently, which enables quite a bit more functionality and with it the potential to open up holes for exploitation. The first was intriguing as it basically allowed someone to force the user’s browser to execute arbitrary JavaScript. Due to Twitter’s character limit the impact this could have was minimised, but it didn’t take long before malicious attackers got hold of it and used it for various nefarious purposes. This was a classic example of something that could have easily been avoided if they had sanitised user input rather than checking for malicious behaviour and coding against it.
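To illustrate the sanitisation point (a generic Python sketch, not Twitter’s code), escaping user-supplied text before it’s rendered neutralises script injection no matter what the attacker typed, which beats trying to enumerate every possible malicious pattern:

```python
import html

def render_tweet(raw_text: str) -> str:
    """Escape user input before placing it into an HTML page."""
    # html.escape converts <, >, & and quotes into entities, so a
    # payload full of script tags is displayed as plain text instead
    # of being executed by the victim's browser.
    safe_text = html.escape(raw_text, quote=True)
    return "<p class='tweet'>{}</p>".format(safe_text)

# A crafted tweet that would otherwise run in the reader's browser:
print(render_tweet('"><script>alert(document.cookie)</script>'))
```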

The second one was a bit more ugly as it had the potential to do some quite nasty things to a user’s account. It used the session details that Twitter stores in your browser to send messages via your account. Like the other Twitter exploit it relied on the user’s typical behaviour of following links posted by the people they follow. This exploit can’t be blamed squarely on Twitter either, as the use of link shortening services that hide the actual link behind a short URL makes it that much harder for normal users to distinguish the malicious from the mundane. Still, Twitter should have expected such session jacking (I know I have) and built in countermeasures to stop it.
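One of the standard countermeasures here is a per-session token that a forged request can’t supply, since an attacker’s link only gets to ride on the browser’s cookie. The sketch below is a minimal illustration of the idea in Python, not Twitter’s implementation:

```python
import hmac
import secrets

def new_csrf_token() -> str:
    """Generate a random token when the session is created.

    The token is stored server-side against the session and embedded
    in any form or request that the site itself renders.
    """
    return secrets.token_hex(32)

def is_request_legitimate(session_token: str, submitted_token: str) -> bool:
    """Reject state-changing requests that don't echo the session's token."""
    # A malicious link can piggyback on the victim's session cookie,
    # but it has no way of knowing this token, so the forged request
    # fails the comparison and gets dropped.
    return hmac.compare_digest(session_token, submitted_token)
```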

Any large public system will attract those looking to exploit it for nefarious means; that’s part and parcel of doing business on the web. The key then is to build your systems with the expectation that they will be exploited rather than waiting for an incident to arise. As a developer I can empathise that writing code that’s resistant to every kind of attack is next to impossible, but there are so many things that can be done to ensure that the casual hackers steer clear. Twitter is undergoing a significant amount of change with a vision to scale themselves up for the big time, right up there with Google and Facebook. Inevitably this will mean they’ll continue to have security concerns as they work to scale themselves out, and hopefully these last two exploits have shown them that security is something they should consider more closely than they have in the past.

When Best Practices Became I Have No Fucking Clue.

When you’re implementing an IT system there are usually a couple of well-known paths you can follow that will ensure it operates pretty much as expected. In the past this was what a good IT administrator would have been hired for, as they would have been down these paths before and would know what should and shouldn’t be done. Over time companies began producing their own sets of guidelines, which they would refer to as best practices, serving as the baseline from which administrators could create their own. With systems ever increasing in complexity these evolved from simple articles that would fit in a blog post to massive how-to manuals that detail nearly every step required to make sure you don’t bollocks up a system. This was the point where the excellent notion of best practices turned into a manual for those who had no fucking clue what they were doing, and it shows in the level of competence I’ve seen in the various IT departments I’ve been privy to over the years.

Nearly every good system administrator I’ve met has been, for the most part, self-trained. It usually starts out with a general interest in computers at home, where they tinker away with their machines and usually end up breaking them in the most catastrophic fashion. This natural curiosity is what drives them to figure out the root cause of problems and is essential when they get involved in larger systems. Whilst training courses are all well and good they are unfortunately narrow in their focus and are, for the most part, designed to give you a set of rules to use when first approaching a problem. After that the skills required (critical thinking, logical deduction, et al.) can’t really be taught, and those seeking a career in this space without such skills typically don’t make it past second level support.

However the distillation of industry knowledge into best practice documents has blurred the line somewhat between those who have the necessary skills to work at the third level and those below. The documents themselves aren’t to blame for this; indeed they are actually responsible for the industry as a whole becoming more reliable at delivering repeatable results. Rather it is the use of these documents by those who would not otherwise have the knowledge required to perform the tasks that these best practices outline. Whilst best practices give you a good idea of which direction to head in when you’re implementing or troubleshooting an IT system, they do not cover the issues specific to your organisation. They can’t, simply because they are unable to account for the almost infinite number of possible configurations, and that’s where those key skills become a necessity.

A classic example of this that’s rife within the IT industry is the implementation of ITIL. Serving as the best practice to underpin all best practices within IT departments, the ITIL framework has found its way into nearly every organisation I’ve had the pleasure of working for. As a basis for IT processes it works great, serving as a reference point that anyone who’s been trained in it can relate to. However by its very nature ITIL is just a framework, a skeleton of what an IT department or organisation should look like. Too many times I have seen ITIL taken literally and business processes shoehorned into the bare bones framework with little thought to how much sense it makes to do so. Realistically, whilst it is desirable to be ITIL compliant it’s more desirable to have business processes that work for your organisation. That is the difference between using best practices as gospel and using them as a functional baseline on which to improve.

The blame can’t be wholly laid on those administrators either, as it is the business’ responsibility to hold them accountable when the system doesn’t deliver as expected. Unfortunately too often best practices are used as a convenient scapegoat, which wrongly puts the blame back on the business (“It doesn’t work like you expected? But it’s built to industry best practices! Change your process.”). In reality tighter specifications and rigorous testing are required to ensure that a best practice charlatan doesn’t get away with such behaviour, but that unfortunately adds cost which doesn’t pass muster with the higher ups.

In the end those best practice sticklers are both a boon and a curse to people like me. Because of them I’m able to find employment anywhere and at a very considerable rate. However when I’m working with them they can make doing the right thing by the customer/business next to impossible, instead insisting that best practices be followed or the house of cards will come tumbling down. Thankfully, because my chosen specialisation is quite new there aren’t a whole lot of best practice ninjas floating around and actual experience with the technology still reigns supreme. With time that will change and I’ll be forced to deal with them, but that’s long enough into the future that I’m not worrying about it yet.

By then I’ll be working for myself, hopefully :)

From The Outside: An Analysis of the Virgin Blue IT Disaster.

Ah Melbourne, you’re quite the town. After spending the weekend visiting you and soaking myself deep in your culture I’ve come to miss your delicious cuisine and exquisite coffee now that I’m back at my Canberran cubicle, but the memories of the trip still burn vividly in my mind. From the various pubs I frequented with my closest friends to perusing the wares of the Queen Victoria Markets, I just can’t get enough of your charm and, university admissions willing, I’ll be making you my home sometime next year. The trip was not without its dramas however, and none was more far reaching than my attempt to depart Melbourne via my airline of choice: Virgin Blue.

Whilst indulging in a few good pizzas and countless pints of Bimbo Blonde we discovered that Virgin Blue was having problems checking people in, resulting in them having to resort to manual check-ins. At the time I didn’t think it was such a big deal, since initial reports hadn’t yet mentioned any flights actually being cancelled and my flight wasn’t scheduled to leave until 9:30PM that night. So we continued to indulge ourselves in the Melbourne life as was our wont, cheerfully throwing our cares to the wind and ordering another round.

Things started to go all pear shaped when I thought I’d better check up on the situation and put a call in to the customer care hotline to see what the deal was. My first attempt was stonewalled by an automatic response stating that they weren’t taking any calls due to the large volume of people trying to get through. I managed to get into a queue about 30 minutes later, and even then I was still on the phone for almost an hour before getting through. My attempts to get solid information out of them were met with the same response: “You have to go to the airport and then work it out from there”. Luckily for me and my travelling compatriots it was a public holiday on the Monday, so a delay, whilst annoying, wouldn’t be too devastating. We decided to proceed to the airport, and what I saw there was chaos on a new level.

The Virgin check-in terminals were swamped with hundreds of passengers, all of them in varying levels of disarray and anger. Attempts to get information out of the staff wandering around were usually met with reassurance and directions to keep checking the information board whilst listening for announcements. On the way over I’d managed to work out that our flight wasn’t on the cancelled list so we were in with a chance, but seeing the sea of people hovering around the terminal didn’t give us much hope. After grabbing a quick dinner and sitting around for a while our flight number was called for manual check-in and we lined up to get ourselves on the flight. You could see why so many flights had to be cancelled, as boarding that one flight manually took well over an hour, and that wasn’t even a full flight of passengers. Four hours after arriving at the airport we were safe and sound in Canberra, which I unfortunately can’t say for the majority of people who chose Virgin as their carrier that day.

Throughout the whole experience the blame was being squarely aimed at a failure in the IT system that took out their client facing check-in and online booking systems. Knowing a bit about mission critical infrastructure I remarked on how a single failure could take out a system like this, one that when it goes down costs them millions in lost business and compensation. Going through it logically I came to the conclusion that it had to be some kind of human failure that managed to wipe some critical shared infrastructure, probably a SAN that was live replicating to its disaster recovery site. I mean, anything that has the potential to cause that much drama must have a recovery time of less than a couple of hours or so, and it had been almost 12 hours since we first heard the reports of it being down.

As it turns out I was pretty far off the mark. Virgin just recently released an initial report of what happened, and although it’s scant on the details, what we’ve got to go on is quite interesting:

At 0800 (AEST) yesterday the solid state disk server infrastructure used to host Virgin Blue failed resulting in the outage of our guest facing service technology systems.

We are advised by Navitaire that while they were able to isolate the point of failure to the device in question relatively quickly, an initial decision to seek to repair the device proved less than fruitful and also contributed to the delay in initiating a cutover to a contingency hardware platform.

The service agreement Virgin Blue has with Navitaire requires any mission critical system outages to be remedied within a short period of time. This did not happen in this instance. We did get our check-in and online booking systems operational again by just after 0500 (AEST) today.

Navitaire is a subsidiary of Accenture, one of the largest suppliers of IT outsourcing in the world with over 177,000 employees worldwide and almost $22 billion in revenue. Having worked for one of their competitors (Unisys) for a while I know no large contract like this goes through without some kind of Service Level Agreement (SLA) in place, which dictates certain metrics and the penalties should they not be met. Virgin has said that they will be seeking compensation for the blunder, but to their credit they were more focused on getting their passengers sorted first before playing the blame game with Navitaire.

Still, as a veteran IT administrator I can’t help but look at this disaster and wonder how it could have been avoided. A disk failure in a server is common enough that your servers are usually built around the idea of at least one of them failing. Additionally, if this was based on shared storage there would have been several spare disks ready to take over in the event that one or more failed. Taking this all into consideration it appears that Navitaire had a single point of failure in the client facing parts of the system they ran for Virgin, and a disaster recovery process that hadn’t been tested prior to this event. All of these coalesced into an outage that lasted 21 hours, when most mission critical systems like that wouldn’t tolerate anything more than 4.
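To put those numbers in context, here’s some quick back-of-the-envelope arithmetic (my own figures, not anything from Navitaire’s actual SLA) comparing the 21 hour outage against a notional 4 hour recovery target:

```python
HOURS_PER_YEAR = 24 * 365

def annual_availability(downtime_hours: float) -> float:
    """Availability (%) assuming this is the only outage for the year."""
    return 100 * (1 - downtime_hours / HOURS_PER_YEAR)

print(f"21 hour outage: {annual_availability(21):.2f}% availability")  # ~99.76%
print(f" 4 hour target: {annual_availability(4):.2f}% availability")   # ~99.95%
```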

Originally I had thought that Virgin ran all their IT systems internally, and this kind of outage seemed like pure incompetence. However upon learning about their outsourced arrangement I know exactly why this happened: profit. In an outsourced arrangement you’re always pressured to deliver exactly to the client’s SLAs whilst keeping your costs to a minimum, thereby maximising profit. Navitaire is no different, and their cost saving measures meant that a failure in one place and a lack of verification testing in another led to a massive outage for one of their big clients. Their other clients weren’t affected because they likely have independent systems for each client, but I’d hazard a guess that all of them are at least partially vulnerable to the same kind of outage that affected Virgin on the weekend.

In the end Virgin handled the situation well all things considered, opting first to take care of their customers rather than pointing fingers right from the start. To their credit all the airport staff and plane crew stayed calm and collected throughout the ordeal, and apart from the delayed check-in there was little difference between my flight down and the one back up. Hopefully this will trigger a review of their disaster recovery processes and end up with a more robust system for not only Virgin but all of Navitaire’s customers. It won’t mean much to us as customers, since if that does happen we won’t notice anything, but it does mean that in the future such outages shouldn’t have as big an impact as the one over the weekend just gone.

Resistance is Futile, Integration is Inevitable.

Enabling your users to interact with your application through the use of open APIs has been a staple of the open web since its inception over a decade ago. Before that as well, the notion of letting people modify your product helped to create vast communities of people dedicated to either improving the user experience or creating features that the original creators overlooked. I can remember my first experience with this vividly, creating vast levels in the Duke Nukem 3D level creator and showing them off to my friends. Some of these community developed products can even become the killer feature of the original application, and whilst this is a boon for the application itself it poses some issues for the developer.

Probably the earliest example of this I can think of would have to be World of Warcraft. The client has a pretty comprehensive API available that enables people to create modifications that do all sorts of wonderful things, from the more mundane inventory managers to boss timer mods that help keep a raid coordinated. After a while many mods became must-haves for any regular player, and for anyone who wanted to join in the 40-person raids they became critical to achieving success. Over the years many of these staple mods were replaced by Blizzard’s very own implementations of them, ensuring that anyone who was able to play the game was guaranteed to have them. Whilst most of the creators weren’t enthused that all their hard work was being usurped by their corporate overlords, many took it as a challenge to create even more interesting and useful mods, ensuring their user base stayed loyal.

More recently this issue has come to light with Twitter, who are arguably popular due to the countless hours of work done by third parties. Their incredibly open API has meant that anything they were able to do others could do too, even to the point of doing it better than Twitter themselves. In fact it’s at the point where only a quarter of their traffic is actually on their main site; the other three quarters comes through their API. This shows that whilst they’ve built an incredibly useful and desirable service they’re far from the best providers of it, with their large ecosystem of applications filling in the areas where it falls down. More recently however Twitter has begun incorporating features into its product that used to be provided by third parties, and the developer community hasn’t been too happy about it.

The two most recent bits of technology that Twitter has integrated have been the new Tweet button (previously provided by TweetMeme) and their new link shortening service t.co, which was previously handled by dozens of others. The latter wasn’t unique to Twitter at all, and whilst many of the newcomers to the link shortening space made their name on Twitter’s platform, many of them report that it’s no longer their primary source of traffic. The t.co shortener is then really about Twitter taking control of the platform that they developed and possibly using the extra data they can gather from it as leverage in brokering advertising and partnership deals. The Tweet button however is a little bit more interesting.
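For what it’s worth, the core of a link shortener is almost trivially simple, which is part of why so many of them sprang up. The sketch below is a generic illustration (nothing to do with how t.co actually works internally), but it does show where the click data Twitter is after gets collected:

```python
import secrets

_links = {}   # short code -> original URL
_clicks = {}  # short code -> click count

def shorten(url: str) -> str:
    """Store the URL under a short random code and return the short link."""
    code = secrets.token_urlsafe(4)   # e.g. 'xq3Zg_'
    _links[code] = url
    _clicks[code] = 0
    return "https://sho.rt/" + code   # hypothetical short domain

def resolve(code: str) -> str:
    """Look up the redirect target, counting the click on the way through."""
    # This is the valuable bit: every single click passes through the
    # shortener before the user ever reaches the destination.
    _clicks[code] += 1
    return _links[code]
```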

Way back when, news aggregator sites were all the rage. From Digg to Del.icio.us to Reddit there were all manner of different sites designed around the central idea of sharing online content with others. Whilst the methods of story aggregation differed from service to service, most of them ended up implementing some kind of “Add this story to X” button that could be put on your website. This served two purposes: first it helped readers show a little love for the article by giving it some attention on another site, and second it gave the other site content to link to with little involvement from the user. The TweetMeme button then represented a way to drive Twitter adoption further and at the same time gather even more data on their users than they previously had. Twitter, for what it’s worth, said they licensed some of the technology from TweetMeme for their button, however they have still in essence killed off one of the platform’s popular services and that’s begun to draw the ire of some developers.

The issue many developers take with Twitter building these services into their main product is that it puts a chilling effect on products based on Twitter’s ecosystem. Previously if you had built something that augmented their service chances were you could build yourself quite the web property. Unlike other companies which would acquire these innovators’ companies in order to integrate their technology, Twitter has instead taken to developing the same products themselves, in direct competition with those innovators. The reason behind this is simple: Twitter doesn’t have the cash available to do acquisitions like the big guys do. They’re kind of stuck between a rock and a hard place, as whilst they need to encourage innovation on their platform they can’t let it go on forever, lest they become irrelevant past delivering an underlying service. Realistically the best option for them is to start generating some cash in order to start acquiring innovators’ technology rather than out-competing them, but they’re still too cash poor for this to be viable.

In the end, if you build your product around someone else’s service you’re really putting yourself at their mercy. The chill that Twitter is putting on their developers probably won’t hurt them in the long run, provided they don’t keep copying others’ solutions to their problems, however their fledgling advertising based business model is at odds with all the value-add developers. Twitter is quite capable of doing some impressive innovation on their own (see #newtwitter), but their in-house development is nothing compared to the hordes of third parties who’ve been doing their part to improve the ecosystem. I’m interested to see what direction they go with this, especially so since I’m working on what could be classed as a competing service.

Although I’m hoping people don’t see it that way :P

Move, Kinect and Wii: Not in my Living Room, Please.

On a technical level I’m in love with motion controllers. They represent quite a few innovations that until just recently were out of the reach of the everyday consumer. The release of the Wii put cheap, relatively accurate motion detection in the hands of hackers all over the world and saw the technology spread to many other sectors. Whilst I haven’t given any love to Microsoft’s Kinect, the possibility of being able to do your own in-home motion capture with the camera that powers it is a pretty cool prospect, and I know it won’t be long before the hackers get their hands on similar tech and start wowing us with the applications. You already know my stance on the PlayStation Move, with its oodles of technology packed into a hand sized magic wand.

Still, if you walk into my living room, adorned as it is with consoles, computers and all kinds of gadgets and gizmos, the only evidence you’ll find of me having any interest in this area is a single Wiimote hidden away in a drawer with no console in sight. I only have the controller because my previous housemate was the one who bought the Wii and stubbornly refused to buy any more controllers for it. Wanting to actually play some games I forked out the $100 to get one, but later ended up co-opting it for all sorts of nefarious purposes, using it to play World of Warcraft and in a semi-successful attempt at head tracking in EVE Online. After we parted ways though I haven’t had any compelling reason to buy a Wii console, save for maybe Trauma Center which I was only ever able to locate twice but never made the jump to purchase.

It’s not like I’m above buying an entire console for a single game either; I bought an Xbox 360 just for the chance to play Mass Effect the day it came out. More it’s that nearly every game on the Wii that I’ve wanted to play has either had a cross platform release or has been nothing more than a passing curiosity. I’d even told myself at one point that when they brought the black version of the Wii out I’d purchase one (it would match my PS3 and new Xbox 360 if I got one), but even after that happened I still couldn’t pony up the cash to get one, it just felt like a waste of money.

It could be that I really haven’t been giving my consoles a whole lot of love lately. The last two console games I played were Red Dead Redemption and Alan Wake, both engaging games, but since then my attention has almost entirely been captured by Starcraft 2. I must admit I was intrigued by the prospect of replaying Heavy Rain using the Move controller, but other than that I don’t think there are any other games out there that make use of motion controllers that I’d actually find appealing. In fact, looking over the catalogue they all look to be aimed at a certain demographic: those who are traditionally non-gamers.

This really shouldn’t come as a surprise, as that’s the exact strategy Nintendo had when they first released the Wii, focusing more on the non-gamer crowd and heavily promoting the social aspect of it. As the Kinect and Move are both reactions to the Wii’s success it follows that their target demographic is identical as well. For long time gamers like myself this hasn’t really endeared the motion controllers to us, as the games really aren’t designed for us. Sure there are some notable exceptions, but for the most part those who identify themselves as gamers probably won’t be investing too much in these newfangled exercise inducing devices. That doesn’t mean they won’t be successful however.

There is the chance that these motion controllers will make their way into my living room by virtue of integration with other products. I’ve been eyeing off one of the newer Xbox 360s for a while now as it’s quite a looker and has the benefit of not sounding like a jet engine when it’s loading a game. My natural engineering curiosity will probably see a Move controller work its way into my living room sometime in the future as well, but until someone demos some cool hack that I just have to try it will be a while before that comes to pass. The Wii will more than likely stay on the back burner for a long time to come, but there’s always the chance of a Mass Effect event happening that overrides the more frugal parts of my brain.

Don’t Anthropomorphize Computers, They Hate it When You do That.

My parents always used to tell me that bad things come in threes. When I thought about it there were always two other bad things that had happened around the same time, so it seemed to make sense. Of course it’s just a convenient way of rationalising away coincidences, as something bad will always end up happening to you and the rule is so loose that those three things could cover quite a large time period. Still, yesterday seemed to be one of those days where I had at least three things go completely tits up on me in quick succession, sullying what would have otherwise been quite a cheerful day. The common thread of this whole debacle was of course computers; the one thing I get paid to be an expert on is most often the cause of my troubles.

The day started off pretty well. My MacBook Pro had arrived and I cheerfully went down to the depot to pick it up. A quick chat and a signature later I had my shiny new toy, which I was all too eager to get my hands on. There was enough for me to do at work that I wasn’t completely bored, yet I had enough time to not feel pressured about anything. A few good emails from close friends reassured me that my not-so-secret project was on track to actually be useful to some people, rather than just me deluding myself into thinking that. It all came undone about 15 minutes before I was about to leave work and cascaded on from there.

Part of the environment I’m responsible for went, for lack of a better word, completely berko. People couldn’t access some machines whilst others just refused to start. After spending an hour trying various solutions I knew that I wouldn’t solve the problem within the next 3 hours, so I set up a few things that would hopefully get the system to rectify itself and ran out the door as quickly as I could. After getting stuck in traffic for nearly an hour I was finally at home and ready to unbox the prize I had been waiting so long for, and it was well worth it.

Whilst I’ll do a full review of the MacBook Pro a little later (once I’ve got to know it better) I will say that it’s quite a slick piece of hardware. After fooling around in OS X for all of 20 minutes I fired up Boot Camp and started the unholy process of installing Windows 7 on it. To Apple’s credit this process was quite smooth, and in under an hour from first unboxing it I was up and running without a single hiccup along the way. After declaring that a success I decided that I should reward myself with a little Starcraft 2, and that’s when my PC got jealous.

You see, I have a rather… chequered record when it comes to my personal PCs. They almost always have their quirks in one way or another, usually from me either doing something to them or not bothering to fix or replace a certain piece of hardware. My current desktop is no exception, and up until recently it suffered from a hard drive that would randomly erase the MBR every so often along with being slow as a wet dog in molasses. Before that it was memory problems that would cause it to lock up not 10 seconds into any game, and before that it was a set of 8800GTs that would work most of the time then repeatedly crash for no apparent reason. Anyone who talked to me about it knew I had a habit of threatening the PC into working, which seemed to work surprisingly often. I wasn’t above parading around the gutted corpses of its former companions as a warning to my PC should it not behave, much to the puzzlement of my wife.

For the most part though the last couple of months have been pretty good. Ever since upgrading the drives in my PC to two Samsung Spinpoint F3s (faster than Raptors and cost almost nothing) I’ve had a pretty good run, with the issues being only software related. The past few days though my PC has decided to just up and shut itself down randomly without so much as a hint as to what went wrong. Initially I thought it was overheating, so I upped the fan speeds and everything seemed to run smoothly again. Last night however saw the same problem happen again (right in the middle of a game no less), but this time the PC failed to recover afterwards, not even wanting to POST.

You could say that it was serendipitous that I managed to get myself a new laptop just as my PC carked it but to me it just feels like my trouble child PC throwing a jealousy fit at the new arrival in the house. My server and media PC both know that I won’t take any of that sort of shenanigans from them as I’ll gut one of them to fix the other should the need arise. My PC on the other hand seems to know that no matter how much shit it drags me through I’ll always come crawling back with components in hand, hoping to revive it.

My house is a testament to the adage that a mechanic’s car will always be on the verge of breaking down. My PC deciding to die last night was frustrating, but it also let me indulge in some good old fashioned hardware ogling, filling my head with dreams of new bits of hardware and the joys they may bring. My quick research into the problem has shown there will probably be an easy fix, so it’s not all bad. Still, at 10:00PM last night part of my head was still screaming the rule of three at me, but I managed to drown that out with some good beer and an episode of Eureka.

Now to prepare the sacrificial motherboard for the ritual tonight… :D

Using Technology to Control Others.

If there’s one notion that just doesn’t seem to die it’s that email is always a bane to someone’s productivity. Personally, after using the Internet daily for the better part of 15 years I’ve got the whole email thing down pretty well and I don’t find it a distraction. Still, no matter how many people I talk to they all seem to struggle with their inbox every day, with people inundating them with request after request or including them in a discussion that they just have to respond to. This is just one of the great many examples of people using technology to control someone else’s behaviour, and it surprises me how many people still fall for it.

In the most traditional sense email was to be the electronic replacement for good old fashioned letters. In that sense they do carry a sense of urgency about them, as when someone takes the time to write to you about something you can be sure they want a response. However the low barrier to entry of writing an email as opposed to a real letter opened the floodgates for those who would not usually take the time to write, letting them unleash their fury on unsuspecting victims. For myself, I’ve noticed that in a workplace many people will often forego face to face contact with someone who’s mere metres away by using email instead, turning a 5 minute conversation into a 2 hour email ordeal that still doesn’t satisfy either party. This could also be due to my career being almost wholly contained within the public service, but I’ve seen similar behaviour at large private entities.

I think the problem many people have with electronic mediums is the urgency they associate with them. When you get a real, physical letter from someone or some corporation there’s a real sense of “I have to do something about this”, and that feeling translates into its electronic form. Seeing your inbox with dozens of emails left unread conveys that sense of leaving something important undone, as each one of them is a call for your attention, begging for a response. The key is to recognise the low barrier to entry that electronic forms of communication have and to treat them as such. Of course simply ignoring your emails doesn’t solve the problem, but establishing rules of engagement for people contacting you through various mediums ensures that you cut the unnecessary communications to a minimum, freeing yourself from their technological grasp.

I experienced this myself just recently when experimenting with “proper” Twitter use. The second I dropped my rules of engagement with the service was the second I became a slave to it and the people on the other side. Sure, this might be considered the norm when using Twitter, but frankly the value I derive from the service is rendered moot when it diverts my attention away from what I consider to be more valuable exploits. The same should be said for any form of communication you use: if the value you’re deriving or creating from a communication method is less than that of the most valuable thing you could be doing in lieu of it, well, maybe you should reconsider replying to those 50 emails that came in over lunch.

It’s gotten to the point where whole companies are being founded on the idea of streamlining communication, like Xobni, an email inbox search tool. Google has also attempted to fix the email problem by developing Priority Inbox, which is a clever yet completely unnecessary tool. Whilst it does do a good job of showing me the emails I need to see, I’d argue the problem is more that the ones it doesn’t promote simply did not need to be written. Thus we have a technological solution to a problem that’s entirely caused by its human users and would be better solved with a switch in mindset.

In the end it comes down to people letting themselves be controlled by something rather than the other way around. People know that if they want me to do something immediately they’ll come and see me or phone me. If they want it done whenever I damn well feel like it they’ll send an email, and no amount of important flags or all-caps titles will change that. The upshot is that people actually think about what they want before approaching me, meaning that the time I do spend communicating with them is productive and we can both get back to our priorities without too much interruption.


iPad Cannibalising Netbook Sales? Please Put Down the Kool-Aid.

If you didn’t spend 5 minutes talking to me about Apple you’d probably assume I was one of their fanboys. Whilst I don’t own that many of their products I can count quite a few of them littering my house, with a shiny MacBook Pro scheduled to be delivered sometime soon. Long time readers of this blog will know that I’ve launched my share of both vitriol and praise in their general direction over the past couple of years, with most of it tending towards the former, almost wholly due to them rubbing the caged libertarian in my head the wrong way. I’d say the other part is from the more fanatical sections of their fan base, who seem to do more work than Apple’s own PR department.

Today’s rant comes to you courtesy of the latter who have recently taken to stating that the iPad, in all its wondrous “magical” glory, has begun chomping away at netbook sales as demonstrated by some recent sales figures:

Look at the figures, things seemed to be on the rise over the previous eight months with only two monthly declines that are explained by the drop off after holiday sales (Dec to Jan decline) and the drop off after back-to-school sales (Sep to Oct decline). The moment consumers were able to put down the money for an iPad, the number of notebook sales started to fall.

Best Buy CEO Brian Dunn also backed up this data telling the Wall Street Journal that Best Buy is seeing iPad sales taking as much as 50% away from notebook computer sales!

Indeed the way the data is presented would make you think that even the mere mention of a computing product from Apple would be enough to scare people out of buying a netbook. However this is one of those times when you need to remember that correlation does not mean causation, i.e. whilst there’s data showing these two variables moving together, that does not imply that one has affected the other. In fact I’d argue that to say so ignores a wealth of data that was pointing to netbook sales stagnating a long time ago, with a plunge to follow soon after.

2007 was the first year we saw a significant amount of traction in the netbook market, with around 400,000 units being sold. The year that followed saw a stratospheric rise in sales, to the tune of almost 3,000%, with 11.4 million units sold. Whilst I can’t find a hard figure on sales for 2009, most articles around the time pegged an increase of around 100%, or 22.8 million units moved. That kind of growth, as any economist will tell you, is completely and utterly unsustainable and it was inevitable that netbooks would reach a point where their sales growth hit a ceiling. It appears that the time is now, which just so happens to coincide with a release from Apple. Whilst I’ll admit that there may be some influence from people forgoing a netbook refresh in lieu of an iPad, I’d hazard a guess that that number is vanishingly small.
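Running the rough numbers shows just how abnormal that growth curve was (the 2009 figure below is the estimate mentioned above, not a hard number):

```python
# Approximate worldwide netbook unit sales, in millions.
sales = {2007: 0.4, 2008: 11.4, 2009: 22.8}  # 2009 is an estimate

for year in (2008, 2009):
    growth = (sales[year] / sales[year - 1] - 1) * 100
    print("{}: {:,.0f}% year-on-year growth".format(year, growth))

# 2008: 2,750% year-on-year growth
# 2009: 100% year-on-year growth
```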

The trouble with using such figures as a tell for the iPad’s influence is that these are comparative figures (growth is compared to the year previous). If you take a look at that graph above you’ll see that the previous year’s growth was quite massive, hovering around the 30% region for all of the months that are now showing a decline. I wouldn’t be surprised if next year, when we’re able to do the same comparison, we see a much more sustainable growth rate in the single figures. Growing at double digit rates for extended periods of time just isn’t doable, especially in an industry where hardware is usually expected to have a useful life of 3 years or more. The drop in sales is likely a combination of the market reaching saturation, netbooks falling out of favour (to be replaced with games consoles, new cameras and 3D TVs apparently) and an overall reduction in discretionary spending thanks to a bleak economic outlook in the USA. Somewhere in the midst of all those factors are the few people who were looking to buy a netbook but decided to go for an iPad instead, but they don’t swing as much weight as the other factors that have put downward pressure on netbook sales this past year.

Look, I get it: Apple made a product that a lot of people think is pretty darn spiffy, so anything that could be classed as a competitor will obviously be decimated by it. But since we’ve yet to see the media revolution it was meant to spawn (amongst other things), it seems rather premature to claim that a device that hasn’t achieved its other goals is already decimating a market it’s only casually related to. The stories then come from those who are toeing the Jobs party line that netbooks are nothing more than cheap laptops, with little regard for the actual facts. Luckily it appears that not all of them are getting sucked in by the easy pageviews, and hopefully the FUD will eventually be drowned out, leaving only the deluded fanboys holding onto dubious claims and long debunked statements.

“Everyone Else is Doing it” is Not an Excuse, Mr Conroy.

As far as I’m concerned the Internet filter is dead, never to see the light of day again. With the Greens holding the balance of power in the Senate and the minority Labor government relying on one Green and three independents in order to pass anything, the proposed filter has absolutely no chance of getting through. On the flip side, the amendments that would be required to get it through the Senate would render the legislation pointless (even more so than it is now), and I don’t think Labor wants to be seen pushing such things through after all the black eyes they’ve copped over the past year or so. Still, it seems the dead horse has a few good beatings left in it, and from time to time Senator Conroy will pop up to remind us that it’s still on the table, despite how toxic it has been for them in the past.

Conroy has had the unfortunate luck of getting former Liberal party leader Malcolm Turnbull as his shadow minister, who has wasted no time ripping into Labor’s policies. Whilst there are some points on which I agree with Conroy, his idea that other countries filtering somehow justifies the government’s proposal is just plain wrong:

“In Finland, in Sweden, in a range of Western countries, a filter is in place today, and 80, 90, 95 per cent of citizens in those countries, when they use the internet, go through that filter.

“It has no impact on speed and anybody who makes a claim that it has an impact on speed is misleading people.

“If you want to be a strict engineer, it’s 170th of the blink of an eye, but no noticeable effect for an end user. So there is no impact and the accuracy is 100 per cent.”

For all my belly aching about the filter on this blog I’d never touched on the point that, yes, some modern Western countries have implemented some kind of filter. Sweden’s scheme is the most innocuous of the lot, being merely a DNS blacklist that makes banned sites simply not respond (circumvented by using a different DNS provider). Finland’s is similar in that it is also DNS based, but it has been mired in controversy over its accuracy and the performance issues that have arisen from its use. The UK’s is probably the worst of the lot, requiring all traffic to be passed through a filter that identifies sites based on URLs provided by the Internet Watch Foundation, a group of 14 people (including 4 police officers) responsible for maintaining the blacklist. Most people in the UK don’t know about this as it’s been around for quite some time, and it too has been mired in controversy over its accuracy and accountability.

Depending on the scheme that’s used there are definite performance impacts to consider. DNS based filtering has the least impact of the lot, as a failed DNS query returns quite quickly, although it has the potential to slow down sites that load content from blacklisted places¹. The UK’s URL filtering scheme is horrible as it requires each request to be intercepted, inspected and then compared against the list to see whether or not it should be blocked. For small lists and low volumes of traffic this is quite transparent and I have no doubt that it would work. However, even in tests commissioned by Conroy himself, these filters have been shown to be unable to cope with high traffic sites should they make it onto the filter. ACMA’s own blacklist has several high traffic sites that would swamp any filter attempting to block them, drastically affecting performance for everyone on that filtered connection.
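The difference in cost between the two approaches is easy to see in a sketch. This is purely an illustration of the concepts (no country’s filter actually looks like this), but a DNS blacklist is only consulted when a domain name is resolved, whereas a URL filter has to inspect every single request that passes through it:

```python
BLOCKED_DOMAINS = {"banned.example"}            # DNS-style blacklist
BLOCKED_URLS = {"http://ok.example/bad-page"}   # URL-style blacklist

def dns_is_blocked(domain: str) -> bool:
    """Consulted once, when a domain name is first resolved."""
    return domain in BLOCKED_DOMAINS

def url_is_blocked(url: str) -> bool:
    """Consulted on every HTTP request the filter proxies, so every
    page load pays the inspection cost."""
    return url in BLOCKED_URLS

# The DNS approach is also trivially sidestepped by pointing your
# machine at a resolver that doesn't use the blacklist at all.
print(dns_is_blocked("banned.example"))          # True
print(url_is_blocked("http://ok.example/page"))  # False
```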

Justifying your actions based on the fact that others are doing it does not make what you’re doing right. Conroy carefully steered clear of mentioning the states whose censorship schemes more closely align with what his legislation proposes (like China and North Korea). The fact remains that any kind of Internet filter will prove ineffectual and inaccurate, and will only serve to hurt legitimate users of the Internet. I applaud Conroy’s dedication to his ideas (namely the NBN), but the Internet filter is one bit of policy that he just needs to let go. It’s not winning them any favours anymore, and the Labor government really needs all the help it can get over the next 3 years; dropping this turd of a policy would be the first step to reforming themselves, at least in the tech crowd’s eyes.

¹This is a rather contentious point, as you could say that any site loading content from a blacklisted site more than likely requires blacklisting itself. I’d agree with that somewhat, however the big issue is when a legitimate site gets blacklisted and ends up impacting a wider range of sites. With all of these filters there have been admissions that some material has been inappropriately blocked, meaning there’s always at least the potential for performance impacts.

Single Stage to Orbit: The Model T of Space.

No matter which way you cut it space is still the playground of governments, large corporations and the world’s wealthy. The reasons behind this are obvious: the amount of effort required to get someone or something into space is enormous, and beyond applications that result in either scientific or monetary gain there’s little interest in taking the everyman up there. That has rapidly changed over the past few years, with several companies now making serious investments in the private spaceflight sector. Now nearly anyone who wishes to make the journey out of Earth’s atmosphere can very well do so, a privilege that until today has been reserved for mere hundreds of people. Still, we’re a long way from space being just another part of everyday life like flying has become, but that doesn’t mean the seeds of such things aren’t already taking hold. In fact I believe that with the right investment we could well see the Model T Ford equivalent of space within the next few decades.

Right now all commercial and governmental space endeavours use some form of chemical rocket. They generate thrust by throwing their propellant out the back at extremely high speeds, and whilst they’re capable of delivering enormous amounts of thrust they’re also incredibly fuel hungry and require that the craft being propelled by them carry its oxidiser¹ with it. Putting this into perspective, the Space Shuttle’s external tank (the giant rust coloured cylinder) carries around 6 times more oxidiser than it does fuel, to the tune of 630 tonnes. That’s about 30% of the total launch mass of a complete Space Shuttle launch system, and this has caused many to look for alternatives that draw their oxidiser directly from the atmosphere, much like the engine in your car does today.
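The rough numbers bear this out. The figures below are ballpark published values for the Shuttle stack (roughly 630 tonnes of liquid oxygen, about 106 tonnes of liquid hydrogen, and a little over 2,000 tonnes sitting on the pad), so treat them as approximations:

```python
lox_tonnes = 630     # liquid oxygen in the external tank (approx.)
lh2_tonnes = 106     # liquid hydrogen fuel (approx.)
stack_tonnes = 2030  # full Shuttle stack at lift-off (approx.)

print("Oxidiser-to-fuel ratio: {:.1f}x".format(lox_tonnes / lh2_tonnes))            # ~5.9x
print("Oxidiser share of lift-off mass: {:.0%}".format(lox_tonnes / stack_tonnes))  # ~31%
```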

Most solutions I’ve seen that use the atmosphere to achieve orbital speeds rely on a technology called scramjets. From a design standpoint they look a lot simpler than their turbojet/turbofan predecessors, as there are no moving parts used to compress the air. Scramjets rely on extremely high speeds to do the compression for them, meaning that they can’t be operated at lower speeds, somewhere below the realm of Mach 6 for a pure scramjet design. This means they need some kind of supplementary propulsion to get them up to operating speed.

One such solution is that of an aerospike engine. Apart from looking like something straight out of science fiction, aerospike engines differ from regular rocket engines in that they don’t use the traditional bell shaped exhaust nozzles that adorn nearly every rocket today. Instead they use a spike shape that in essence forms a virtual bell with the outside air pressure. This has the effect of levelling off the performance of the engine across all altitudes, although they suffer at lower Mach numbers due to the reduced pressure. Still, they complement scramjets quite well in that they can be used in the situations where the scramjet can’t function (vacuum and low speed) whilst remaining more efficient than current rocket designs.

Both of these ideas have been proposed as base technologies for a single stage to orbit (SSTO) launch system. All orbital capable launch systems today are staged, whereby part of the rocket is discarded when it is no longer required. The Space Shuttle for example is a two stage system, shedding its SRBs whilst it is still within Earth’s atmosphere. A SSTO craft would not shed any weight as it climbed its way into space, and the main driver for doing so would be to make the craft fully reusable. As it stands right now there are no truly reusable launch systems available, as the only one that comes close (the Space Shuttle) requires a new tank and complete refurbishment between flights. A fully reusable craft has the potential to drastically reduce the cost and turnaround time of putting payloads into orbit, a kind of holy grail for space flight.

SSTO isn’t without its share of problems however. Due to the lack of staging, any dead weight (like empty fuel tanks) is carried with you for the full duration of the flight. Nearly every SSTO design carries with it some form of traditional chemical rocket, and that means the oxidiser tanks can’t be eliminated even though they’re not required for the full flight. Additionally, much of the technology that a SSTO solution relies on is either still highly experimental or has not yet entered commercial use. This means anyone attempting to develop such a solution faces huge unknown risks, and not many are willing to make that jump.
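The arithmetic behind why that dead weight hurts so much falls straight out of the ideal rocket equation. Plugging in ballpark numbers (roughly 9.4 km/s of delta-v to reach low Earth orbit once losses are included, and a vacuum specific impulse of about 450 seconds for a good hydrogen/oxygen engine) shows how little of a single stage vehicle is left over for structure and payload:

```python
import math

G0 = 9.81         # standard gravity, m/s^2
DELTA_V = 9400.0  # rough delta-v to low Earth orbit including losses, m/s
ISP = 450.0       # vacuum specific impulse of a good LH2/LOX engine, s

# Tsiolkovsky: delta_v = Isp * g0 * ln(m_initial / m_final)
mass_ratio = math.exp(DELTA_V / (ISP * G0))
propellant_fraction = 1 - 1 / mass_ratio

print("Required mass ratio: {:.1f}".format(mass_ratio))           # ~8.4
print("Propellant fraction: {:.0%}".format(propellant_fraction))  # ~88%
# Only ~12% of the lift-off mass is left for tanks, engines, wings,
# heat shielding AND payload, which is why SSTO concepts lean so
# heavily on air-breathing engines to cut the oxidiser they carry.
```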

Despite all this there are those who are working on incorporating these principles into up and coming designs. NASA recently announced a plan to develop a horizontal launcher that would use a maglev-based track to accelerate a scramjet plane up to the required Mach number before launching it, after which it could put small payloads into space:

As NASA studies possibilities for the next launcher to the stars, a team of engineers from Kennedy Space Center and several other field centers are looking for a system that turns a host of existing cutting-edge technologies into the next giant leap spaceward.

An early proposal has emerged that calls for a wedge-shaped aircraft with scramjets to be launched horizontally on an electrified track or gas-powered sled. The aircraft would fly up to Mach 10, using the scramjets and wings to lift it to the upper reaches of the atmosphere where a small payload canister or capsule similar to a rocket’s second stage would fire off the back of the aircraft and into orbit. The aircraft would come back and land on a runway by the launch site.

Such a system would significantly reduce the cost of getting payloads into orbit and would pave the way for larger vehicles carrying bigger payloads, like us humans. Whilst a fully working system is still a decade or so away, it does show that there’s work being done to bring the cost of orbital transport down to more reasonable levels.

A SSTO system would be the beginnings of every sci-fi geek’s dream of being able to fly their own spaceship into space. The idea of making our spacecraft reusable is what will bring the costs down to levels that make them commercially viable. After that point it’s a race to the bottom as to who can provide spacecraft the cheapest, and with several companies already competing in the sub-orbital space I know that competition would be fierce. We’re still a long way from seeing the first mass produced spacecraft, but it no longer feels like a whimsical dream, more like an inevitability that will come to pass in our lifetimes. Doesn’t that just excite you? :D

¹As any boy scout will tell you, a fire needs three things to burn: fuel, oxygen and a spark. Rockets are basically giant flames and require oxygen to burn. Thus oxidiser basically means oxygen, and carrying it on board is what lets rocket engines operate in a vacuum.