Monthly Archives: April 2009

First One’s Free.

In this rapidly changing, technology-driven world many new up-and-comers find it hard to differentiate themselves from the hundreds of similar projects. In an effort to drive people to use their services we’re seeing more and more companies provide some or all of their products completely free to the end user. Whilst I believe this is a great idea there are, of course, always some catches when it comes to accepting free gifts from corporate overlords.

A great example I can think of is the good old de facto corporate communication device, the Crack(Black)Berry. Recently at my current gig for the Australian government my department decided to trial these to see if there was any value in implementing them. Of course Telstra came to the table offering a free 3 month trial with pretty much everything included. The handsets were sent out to the executives and we went through about 2 days of configuration work to get it all set up for them. It didn’t matter that we’d already installed Exchange ActiveSync, which would allow them to use any Windows Mobile device and wouldn’t cost them a cent since we’d bought the license in a bundle. Because the BlackBerrys had been in the Qantas lounge magazines we were basically stuck with trialling this technology for them, and we all knew where it was going.

Fast forward to the end of the trial and we have half the execs praising the new system, a few dissenters and the rest on the fence. It was pretty obvious from the outset that once this was in place they would not give it up, even though the corporate directive is to investigate all possible solutions and judge them on their merits.

The same tactic has been used with many online services. LinkedIn used to be a completely free service for professional social networking, and it did a great job at that. It was basically a no-frills Facebook, something which is handy when you’re browsing it at work. Of course the creators saw that they could add extra features and offer them as premium accounts, something which is akin to buying an expensive car in real life. Sure, it will probably improve people’s impression of you (if they’ve never met you before) but past that its value is rather small. Since many people use LinkedIn to build a professional network and hopefully generate business from it, the paid services might hold some value there. There’s still no substitute for good old fashioned real life networking though, but that doesn’t stop people from trying to charge for that, either.

However, there are those that still buck the trend when it comes to providing services for free and staying away from the premium service charge. Google has released service after service that, whilst most of them still carry the beta tag, remain free after many years in service. This can all be put down to their ruthless precision in refining an advertising model that appeals to every business, built upon their dominant position as a search engine.

In reality most new up-and-coming technologies these days are being offered as a free baseline with the additional features costing you a couple of pennies more. It’s all done to drive up market adoption and it’s a great thing for consumers, who get a lot more for their dollars since they can try before they buy. Just don’t be too shocked when your favourite free service starts asking for your credit card 😉

Apollo to Shuttle: The Missing Years.

Talk to anyone on the street and mention either the Apollo moon landings or the Shuttle and most of them will know what you’re talking about. Whilst both of these are iconic bits of space history, if you do the maths on the time between the two programs you’ll come up with about 9 years where most people won’t be able to tell you what NASA and the Russian Federal Space Agency (at the time the Soviet space programme) were doing. Whilst it didn’t capture the imagination of entire countries like the lunar landings did, even by today’s standards the work carried out in those 9 years was nothing short of revolutionary, and it is a shame that it has gone so unnoticed.

Enter America’s first ever space station, Skylab. During the planning for the Apollo missions NASA had kept a long term view of other goals that might be achieved in space once Kennedy’s vision had been realised. This led to the development of ideas for long duration space flight, which would initially begin in Low Earth Orbit. After many different design proposals, some with up to a 20-astronaut capacity, a design for a 3 man orbital laboratory and observation station was accepted and Skylab started to become real.

Overall the mission was a success as it showed that NASA was capable of putting people up into space for long periods of time and bringing them back down safely. Comparing it to today’s standards makes the achievement even more remarkable, as the whole Skylab station was launched on a single modified Saturn V rocket, with a living volume that was about 38% of the International Space Station today. Whilst that might not sound impressive by itself, the fact that it was done in one hit is definitely something we would struggle to repeat today. With the return of heavy lift launchers in the form of the Constellation program we may see NASA attempt something like this again in the future, but not until the ISS has outlived its usefulness.

The project was not without its problems though. The station suffered major damage during liftoff, which caused one of the solar panels to become inoperable and the combined sun/micrometeoroid shield to be lost. The station also suffered from over-heating issues, which the first crew fixed by deploying a replacement sunshade. For a first attempt at long duration space flight it was bound to have issues, and NASA managed to continue Skylab’s presence in space despite these problems. If it wasn’t for the unexpected deterioration of its orbit the Space Shuttle would have been used to service and expand the station. However, due to delays in the shuttle program this could never be done, and Skylab was left to re-enter and break up in 1979.

One more mission was flown before the days of the Space Shuttle, and that was the Apollo-Soyuz (pronounced “Sah-yoouz”) Test Project. The first spaceflight flown as an international collaboration, it saw the previous space rivals docking and celebrating the joys of space travel together. The mission was a complete success, with many different scientific experiments completed, and it laid the groundwork for the future of international space endeavours.

So when you hear about the Shuttle or the Apollo missions, remember those who went to space in between. Whilst they may not be as inspiring or as iconic as the missions that have made the news past and present, without them we wouldn’t be where we are today.

Unlocking the Hidden Value in your Technology.

Most of the time when you’re buying the latest widget you’re buying it with a purpose already in mind. I know the majority of the things I’ve bought were initially bought to fill a need (like the server this web page is coming to you from; it was a testbed for all sorts of wonderful things) and were then left at that. But what about that hidden little bit of value that’s inside pretty much every tech purchase these days: can we essentially get more for the money we’ve already spent?

With technology moving at such a rapid pace these days pretty much every gadget you can think of has what amounts to a small computer inside it. A great example of this would be your stock standard iPod; whilst Apple is always coy about what is actually under the hood in these devices, a little searching brings up a list showing that the majority of them run on a re-branded Samsung ARM processor. While this might not mean much to most people, a couple of intrepid hackers took it upon themselves to port the world’s most popular free operating system, Linux, onto the device. Whilst this at first might seem like an exercise in futility, a quick glance at their applications page shows many homebrew applications that have been developed for the platform.

This is not the only occurrence of something being used way outside its original purpose. Way back in 2005 Sony released the PlayStation Portable, an amazing piece of hardware that was basically a PlayStation 1 console made portable. Thanks to my working in retail at the time I had one in my pocket the day it was released, but it wasn’t until a couple of years later that I discovered the huge hacking scene behind this device. I then found that I could run emulators, media streaming programs (I was able to wow my housemates by streaming media over WiFi to my PSP), homebrew games and so much more. Sure, I was running the risk of completely destroying the device in the process, but the additional value I got out of it was worth the risk. Well, it was out of warranty anyway 😉

This kind of value-add is something I now seek in pretty much all of my technology purchases. Recently I bought myself a Sony Ericsson Xperia X1 mobile, but not before hitting up my favourite HTC hacking site, XDA-Developers. A quick look at their Xperia section shows all sorts of wonderful things you can do with this handset. One of the most amazing is running Google’s Android platform on it, something which sealed the deal on the phone instantly. It’s things like this that help me justify such huge tech purchases (that and the fact that my work paid for the mobile 😉 ).

So I encourage you: look around your room, see if there’s anything you wouldn’t mind tinkering with, and have a look around on the Internet to see what can be done. I’m sure you’ll be pleasantly surprised.

Cloud Computing: How I Learned to Stop Worrying and Love the SaaS.

A few years ago someone had the bright notion to sell Software as a Service (SaaS) instead of a product. Built off the idea of things like Google Docs, it seemed like a great way to get software into an organisation without having to convince them to outlay thousands of dollars on hardware or licenses. Coupled with its synergy with the other buzzwords of the time (thank you, Service Oriented Architecture) it seemed like a great idea. Having your applications and data available over the Internet greatly increased their portability, and it was a viable way for some companies to provide collaboration tools to their remote workers.

However, it never really took off in large enterprises. Primarily this was due to privacy concerns, as many companies could not trust the SaaS providers to keep their data safe and secure. Additionally, with many SaaS clients you had to have a stable Internet connection, otherwise your data was completely unavailable to you. A lot of providers then tried to shift the focus away from completely online solutions and moved part of the infrastructure in-house for their clients, attempting to alleviate the issues people had raised.

Then, for a couple of quiet, blissful years no one really talked about SaaS any more. That was until someone found a new buzzword for it: Cloud Computing.

Behold the almighty cloud of the Internet. We can put all your services on here and provide you with infinitely scalable and customizable solutions! We’ve taken the ideals of SaaS and translated them onto your infrastructure (IaaS) and platforms (PaaS) to create the mighty Cloud!

In essence there’s just a bit more abstraction in terms of implementation, but Cloud Computing is just SaaS reborn.

Cloud computing takes the idea that if we abstract away all the layers involved in delivering a service, end users can take advantage of huge amounts of infrastructure without the huge initial investment. The idea works well for things that experience high peak loads but low baselines, say a website that gets slashdotted. The cloud would be able to detect that there’s a sudden surge and provision more resources on the fly, something all high traffic sites like the sound of. Additionally the cloud allows users to be agnostic in their decisions about infrastructure, since cloud applications are designed to run on an abstracted layer that sits above the underlying hardware and software.
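A minimal sketch of that surge-detection idea: the per-server capacity and headroom figures below are invented purely for illustration, not drawn from any real cloud provider’s scaling policy.

```python
import math

def scale_decision(requests_per_sec: float,
                   capacity_per_server: float = 100.0,
                   headroom: float = 0.2) -> int:
    """Servers needed to carry the observed load plus some headroom (minimum one)."""
    needed = requests_per_sec * (1 + headroom) / capacity_per_server
    return max(1, math.ceil(needed))

print(scale_decision(50))    # quiet baseline: 1
print(scale_decision(5000))  # slashdotted surge: 60
```

A real autoscaler would run this kind of decision in a loop against live metrics and smooth it to avoid flapping, but the core is just this: load in, server count out, no human in the loop.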

It’s the concept of “We do all the hard work for you so you don’t have to worry about X” where X is the IT problem du jour.

Don’t get me wrong though, Cloud Computing has quite a lot of uses, and the additional abstraction at the platform and infrastructure layers makes it a lot easier for developers and engineers to design solutions for end users. It also gets everyone out of that mindset of “I have this nail, so I need this hammer” when in fact they should be asking “What’s the best way to secure this board to my house?”.

There are some pretty good applications coming out based on the cloud idea. One, which made waves at this year’s GDC, was OnLive:

This is what the cloud is all about: you no longer have to worry about what hardware you’re running on and you have hundreds of games at your fingertips. Unfortunately it suffers from the same problems as other cloud services in that its scope is somewhat limited by the few issues that plague it, and they’re planning to monetise it straight away. I’m sure there will be some kind of trial period where everyone can have a go, but if they provided an ad-supported free version of this it would be a huge hit instantly. Trying to charge people right off the bat will slow adoption, but it would help to keep the debt collectors at bay.

Overall Cloud Computing looks like a great idea and it is getting a lot more traction than its predecessor SaaS did. I think at the time SaaS came out people still didn’t trust these newfangled Web 2.0 apps enough to give their corporate data to them. After many years of Facebook, YouTube and Google Docs we’ve started to come to grips with what the web can provide, and so have the business execs.

Just remember that it’s still SaaS at heart. 🙂

The National Broadband Network.

Another day, another multi-billion dollar proposal to stimulate the economy and conveniently distract everyone from the shambles of a proposal that was the Great Firewall of Australia. The newsbots are in a flurry about this one and with this being right up my alley, I can’t help but throw my few cents in ;). So let’s take a good look at this proposal and see what it will mean for Australia, the public at large and of course, Senator Conroy.

Australia is about average when it comes to broadband penetration, with the majority of our users on ADSL, some on cable and the rest on other connections (usually satellite or 3G wireless). This is quite comparable to many other countries; the norm seems to be a majority on ADSL, with only Japan and Korea having a large proportion of customers on fibre/cable speeds. What this proposal aims to do is bring fibre connections to 90% of all homes in Australia. By my estimates, with approximately 8 million households in Australia that will mean fibre speeds to about 7.2 million houses, with 800,000 left in the digital dark age. Whilst this is a very aggressive target to meet, you’d still be pretty annoyed if you were in one of those 800,000 homes that were left out. Hopefully the extra fibre being run everywhere will also spur others to upgrade the DSLAMs in local exchanges for those poor people who miss out.
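The household arithmetic is simple enough to sketch; remember the 8 million household count and the 90% target are my own back-of-the-envelope estimates, not official figures:

```python
def fibre_coverage(households: int, coverage_pct: int) -> tuple[int, int]:
    """Return (covered, left_out) household counts for a coverage target."""
    covered = households * coverage_pct // 100  # integer maths, no float rounding
    return covered, households - covered

covered, left_out = fibre_coverage(8_000_000, 90)
print(covered, left_out)  # 7200000 800000
```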

The current proposal is slated to run for about 8 years. Now anyone in IT will tell you that a time frame like that for a project in this field means it will inevitably be outdated by the time it is completed. Using Moore’s Law as a basis (a doubling roughly every 2 years), average computing power will have increased by about 16 times over that period, with data rates and storage capacities following suit. If this kind of project is to be undertaken the network must be able to scale with newer technologies, otherwise it will be useless by the time it is implemented. Whilst they haven’t described what kind of fibre technology they’re going to be using, I would recommend single-mode fibre, which should scale up to 10Gb/s, so the network isn’t outdated the day it’s switched on.
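The 16-times figure falls straight out of Moore’s Law if you assume a doubling roughly every two years over the 8-year build:

```python
def moores_law_factor(years: float, doubling_period: float = 2.0) -> float:
    """Growth factor in computing power over a span of years, per Moore's Law."""
    return 2 ** (years / doubling_period)

print(moores_law_factor(8))  # 16.0 over the 8-year rollout
```

Use an 18-month doubling period instead and the factor is closer to 40, which only strengthens the point about scalability.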

I rejoiced when I heard that the whole thing would be government controlled, hoping to avoid the catastrophe that Telstra has become. However, it became apparent that the initial investment from the government will be $4.7 billion, with the rest to be raised from private investors. Once the network is complete they will sell down their holdings in the company, thereby releasing all control over it. I don’t think I have to make it any clearer that they are basically creating a monopoly on the network by allowing this one mega-corp to own all the infrastructure instead of the government. Unless there are strict provisions in place to ensure that other ISPs will be able to tap into this network and use it fairly, we’ll just end up with yet another Telstra which won’t have much incentive to be competitive, let alone co-operative with others.

Overall for Australia this proposal is mediocre at best. Whilst I applaud the idea of upgrading Australia’s broadband and making us a market leader in terms of broadband penetration, the way Senator Conroy is going about it is, as usual, confused and misguided. When it was obvious that his attempt at fibre to the node was not going to win him the right amount of political points he turned his attention to the Internet filter. Now that the filter is dying on the vine he’s taken the $4.7 billion that was allocated for the new broadband network and tried to make it look like ten times more by saying that investors will make up the rest. Maybe he is just trying to make everyone think that they’re dreaming…

Luckily it appears that the IT community is remaining sceptical, as it should with anything that Conroy proposes. Triple J’s Hack program ran an excellent show yesterday exploring the new proposal and even interviewed the man himself. Conroy is awkward at the best of times, but when he was confronted on the issue of the Internet filter and the new broadband network he seemed to hit a few brick walls:

Senator Conroy: We said if the trial shows that this cannot be done, then we won’t do it.

Interviewer: And what’s the definition of cannot be done? What would be the acceptable amount to slow the internet down?
Senator Conroy: Well now you’re asking me to preempt the outcome of the trial.

Interviewer: No I’m not, you’ve got to have an understanding of what’s a pass and what’s a fail. You can’t wait ’til the trial finishes and then look back and decide how you’re going to measure the outcome.

Senator Conroy: Well actually that’s how you conduct a trial. You wait to see what the result is and then you make a decision based on the result. If the trial shows that it cannot be done without slowing the internet down then we will not do it.

I’m not sure I can comprehend what he thinks a trial actually is. If you follow the scientific method you’d know that first you formulate a hypothesis, design the test, define the thresholds for success and failure, and only then perform the test. You don’t make up your pass/fail criteria from the data; that’s just bad science. I once defended Conroy as just a figurehead for a bad idea put forth by Labor to win votes; now I’m sure that isn’t the case.
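The point about fixing the criteria up front can be made concrete. The 10% threshold and the throughput figures here are invented purely for illustration, not anything from the actual filter trial:

```python
# Fix the pass/fail criterion *before* the trial runs.
MAX_ACCEPTABLE_SLOWDOWN = 0.10  # hypothetical: at most 10% throughput lost

def trial_passes(baseline_mbps: float, filtered_mbps: float) -> bool:
    """Did the filtered connection stay within the pre-agreed slowdown?"""
    slowdown = 1 - filtered_mbps / baseline_mbps
    return slowdown <= MAX_ACCEPTABLE_SLOWDOWN

print(trial_passes(20.0, 19.0))  # True: 5% slowdown, within threshold
print(trial_passes(20.0, 15.0))  # False: 25% slowdown, trial fails
```

The threshold constant exists before a single measurement is taken; deciding it afterwards is exactly the mistake Conroy describes.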

An amazing idea has been twisted and contorted into something that will, at best, create another mega-monopoly on Australia’s telecommunications network. It seems no one will listen to George Santayana:

Those who cannot remember the past are condemned to repeat it.

Windows 7, What Vista Could Have Been.

Ask people on the street about Windows Vista and you’ll usually get the response “Isn’t Vista crap?”, even though many of them have never used it nor have a clue about what you actually get from it. For the past year I’ve been using Vista as my main desktop and I found it to be quite usable. Sure, there were some things that were obviously change for change’s sake (where did my up folder button go?!?!) but overall the new UI was pretty appealing, and once you were past about 2GB of system memory there wasn’t much of a performance difference between it and XP. However, the initial fiasco of it requiring such an exotic system just to run and the incompatibility with many legacy devices led to bad perceptions all over the place, something Microsoft hasn’t been able to shake to this day.

Enter the ring Windows 7, Microsoft’s evolutionary step into the next world of operating systems. With a much shorter development time than Vista, this wasn’t going to be the new revolution of the computing world that Vista was supposed to be. No, this plucky little system was supposed to build on Vista whilst making the whole user experience much more pleasant and secure. As it turns out, Windows 7 might actually achieve what Vista set out to do.

I’ve been using Windows 7 exclusively as my main computer for the past couple of weeks, and there are a couple of things I’d love to share with you.

First off, the installation of Windows 7 takes much less time than any other OS I’ve installed. From booting from the disk to a usable system the install time was just shy of 20 minutes, with most of that spent with me away from the computer. They’ve even gone to the trouble of making the loading screen look pretty, which, while completely useless, is a nice touch.

Boot times have been significantly reduced, including time to usable¹. Vista had a nasty habit of showing the initial loading screen and then a black screen for a while before letting you log in. They’ve bypassed this part, and once the initial logo disappears the login screen comes up seconds later.

They’ve redesigned the UI for Windows 7, which I initially groaned at. Most of the time I encounter UI changes, things have been moved around for no good reason (hello Facebook) and it takes me more time to figure out how to do something than to actually do it. However, the Windows 7 UI is a refreshing change from this, with many of the changes being evolutionary steps forward rather than a whole paradigm shift (Ha! Correctly used buzzwords). The augmentations to the start menu are very useful, especially for things like the Remote Desktop client. If you could navigate your way around Vista you won’t find Windows 7 hard at all. In fact, I think you’ll find it easier.

Windows 7 really does seem to be everything that Vista should have been. It’s fast, very usable but different enough from the previous version to really set it apart. However, under its sleek and shiny exterior it really is a revamped Vista at heart, which leads me to my main point for this post.

Vista got blackballed from the first day it was released and unfortunately could never shake the negative press associated with it. Microsoft in its wisdom tried to remedy this with the Mojave Experiment, a sneaky little project that gussied up Vista as some new and exciting OS from Microsoft. Whilst this proved Microsoft’s point that Vista was actually a very capable and usable OS, it did not improve the market’s perception of the product. Windows 7, on the other hand, is Microsoft’s next genuine attempt at a new OS, and as much as I’d like to say it’s changed, it’s still Vista underneath.

Sure, there are many changes between the two and not just in terms of UI. The revised UAC model is a tad more usable although still fundamentally useless from a security perspective, and there are several other administration tweaks. Device Stage sounds like a great idea, and hopefully the driver writers step up to the plate to take advantage of this.

Overall I’m very happy with the way Windows 7 is going. I believe more frequent releases of operating systems lead to them being far more in tune with the market, and it will help ease the transition pain we saw from XP to Vista. The great news is that Microsoft is offering Windows 7 as a free upgrade to all Vista users and downgraders, something that will definitely work wonders for its initial adoption. Additionally, most applications that have been reworked for Vista will work with 7, so there should be far fewer compatibility issues moving from XP to 7.

It does raise the question, however: was Vista really the failure it was made out to be, or was it the failure they had to have in order to get everyone in the mindset for a change?

That is an exercise left up to the reader 🙂

¹This is the time taken from pressing the power button to actually being able to use the computer. Vista would start up quickly but wouldn’t be usable for quite some time.

Tell ’em They’re Dreaming.

The Global Financial Crisis is hitting everyone, and with each passing day it would appear that it is hitting more and more people directly. With the unemployment rate hitting 5.2% back in March the figures do support that idea, with many economic forecasters saying that it could hit as high as 10% next year. Primarily this will hit blue collar workers first, as companies seek to reduce output in order to keep themselves afloat. Whilst that is a valid business strategy in times like these, I often wonder what would happen if we simply forgot that any of this was happening.

Australia as a whole is in a strong position to weather the storm. Our economy is based strongly on resources (rather than services) and with our main export being coal for power generation and heating, which people will still want during a recession, we are well placed to continue on as per normal. However, economic growth was down 0.5% for the December quarter, with the greatest decline in non-farm GDP (whilst farm GDP grew a whopping 10%!). Could it be that companies and consumers are cutting back just because of the threat of an economic downturn, and not because they are actually feeling the hardship?

Up until around September last year interest rates had been steadily rising in order to combat the extraordinarily high inflation Australia was experiencing. It seemed that no amount of interest rate hikes could rein in consumers, even with fuel and transportation costs soaring at the same time. However, once people were told of dark economic times ahead suddenly all that changed, and the Reserve Bank was forced to try to spur the economy on by cutting interest rates in quick succession for months on end. Did everyone really lose all their spending power in under a month?

All the stimulus packages are based around the same thing: injecting cash into consumers and corporations so that they’ll spend it, hopefully spurring the markets on so they’ll recover through normal means. How is this so different from having the media say to everyone “The economic crisis is over, we’ve done X and changed our policies Y… etc etc” and then having everyone return to their normal ways of spending? The average Joe has already been led by the media to believe that the world is coming to an end; what’s stopping the media from telling them that everything is OK?

There is, of course, a happy middle ground between what the government is doing now and blatantly lying to everyone about the current economic situation. Large government owned and funded projects, like say a light rail system for Canberra (we’ll have it one day, folks!), will create jobs and provide that first step towards repairing the market, tempting private companies back in. I say government owned and funded mostly because of the recent catastrophe that was BrisConnections, a privately owned but government subsidised project.

So I’d recommend a two-pronged approach: cease the constant reporting on the GFC and have the government start up a large number of projects in order to create some sustainable jobs for the battlers out there. It’s not the easy route, and if a change of government happens next election Kevin Rudd will be hard pressed to take credit for his work. However, should he do it and get re-elected he will be remembered as the herald of the new economic good times, something that people like me will find hard to forget.

The Dot Com Bust and the Social Web.

Back in the heyday of the early Internet, companies were all looking to exploit this new means of marketing their ideas. This saw the meteoric rise of many Internet firms who specialised in either creating an online presence for a company or building web enabled apps. I liken it to when you were a child and one of your friends got the latest and greatest widget: you just had to have it for yourself. It was this kind of me-tooism that led America’s technology stock index, the NASDAQ, to a dizzying height of 5048 points on March 10, 2000.

Anyone can tell you that the only place to go from the peak of achievement is back down, and boy it did.

After the rush that was the Y2K problem many companies found their IT fully kitted out for the next couple of years. Generally speaking most IT equipment has a life of between 3 and 5 years in terms of major upgrades. This left many of the companies who had based themselves around selling equipment and services for the web and Y2K compliance without clients for years. Combine that with the dodgy accounting practices and the excessive IT culture that had developed (Aeron chairs, anyone?) and many IT companies fell in a heap very quickly, with a lot of them declaring bankruptcy and flooding the market with IT professionals.

In reality this was a good thing for the IT industry. With any new market you get a period when investors go crazy over it because it’s the latest and greatest, which leads to an asset price bubble. Once people realise that the market is based purely on speculation (or someone reveals it’s just a fancy Ponzi scheme) it will inevitably crash. However, once the crash is complete and the vultures have flown away, the new market seeks to establish itself as a true discipline. I can tell you that the quality of many Internet based companies and applications improved dramatically after the dot com bust, as businesses struggled to entice investors back.

Whilst I can’t remember who said this to me first, I do have a great quote from one of the engineers who rode out the dot com bubble (probably paraphrased):

I remember sitting down with some executives and explaining their new accounting system to them. About 10 minutes into the meeting one of the execs said to me “That’s all great, but can you put it on the web?”

Using this as an example, can you think of a current trend that also lends itself to this quote? (I’ve already given it away with the title of this blog post.)

Right now social networking sites and services are growing rapidly in popularity, and it seems every other week some newfangled Web 2.0 application comes out that will revolutionise the way we communicate with each other online. The popularity of these services is now starting to affect business decisions, with many companies wanting to increase their online presence by utilising them in some way or another. There are some benefits to this, however: since many companies want their services available through social networking tools they have to improve their ability to interoperate with the world at large, and openness in communication is always a great thing.

It would seem the quote for the Social Web Bubble would be “That’s all great, but can it update my Facebook page?”.

However, thanks to the global financial crisis I don’t believe we’ll see another dot-com-bust style drop in technology stocks like we saw back in 2000. All the speculative value that was created in the short time between the dot com bust and now has been effectively killed by the crisis, but with the strange side effect of leaving many of the companies intact. The GFC may be a blessing in disguise for the companies who have based their wealth on social technologies, hopefully leading them to establish themselves properly as times get better.

When I first thought about writing this blog post I was reminded of the old saying “Those who are ignorant of history are doomed to repeat it”. With only 8 short years between the dot com bust and the GFC, you’d think many tech companies would know better than to hop on a bandwagon to make a quick buck. The answer lies in the pioneers of the new social technology. Primarily these are people who, whilst they have a rich technological background, were not in the industry at the time of the crash. A great example would be Facebook’s CEO Mark Zuckerberg, who would have been only 16 at the time of the bust. I’d bet my bottom dollar that whilst he created the idea, there are many engineers working on his team who were around for the dot com bust, but make their money on their skills rather than their ideas.

It would have been interesting to see what would have happened to the Social Web had the GFC not come along.

Technology Integration Testing.

Just to see how this goes, I’ve created a horrible mess of web 2.0 applications so that several different websites will update themselves when I post on this blog.

I believe this is what those crazy web kids call “mash ups” these days, when really it’s just programs talking to each other. Or maybe I’m getting cynical in my old (HA!) age 🙂

Expect a few more of these kinds of posts if I find I’ve broken something.


Appears that I’ve made it work. All it took was an hour and a few non-G-rated words yelled at my server to get it going 😉

Solid State Drives, Not Just All Talk.

Last year Intel made headlines by releasing the X25-E, an amazing piece of hardware that showed everyone it was possible to get a large amount of flash and use it as a main disk drive without having to spend thousands of dollars on custom hardware. Even though the price tag was outside most enthusiasts’ budgets, it still came out as the piece of hardware that everyone wanted and dreamed about.

Fast forward a year and several other players have entered the SSD market space. Competition is always a good thing, as it leads to companies fighting it out by offering products at varying price points in order to entice people into the market. However, although there appeared to be competition on the outside, a deeper look showed that most of the other drives shared a controller (the JMicron JMF602B, paired with MLC flash), with only Samsung and Intel going their own way. Unfortunately these drives focused on sequential throughput (transferring big files and the like) at the cost of random write performance. This in turn made any operating system installed on them appear to freeze for seconds at a time, since an operating system is constantly writing small things to disk in the background.
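The difference between the two access patterns is easy to demonstrate yourself. Here’s a rough sketch in Python (the file path, block size and write counts are all just illustrative choices, not anything from a real benchmark suite) that times a batch of small synced writes laid out sequentially versus scattered randomly; on a drive with poor random write performance the second number balloons:

```python
import os
import random
import tempfile
import time

def time_writes(path, offsets, block):
    """Write `block` at each offset, calling fsync after every write so
    the drive itself (not the OS cache) has to absorb the cost."""
    fd = os.open(path, os.O_WRONLY | os.O_CREAT)
    start = time.perf_counter()
    for off in offsets:
        os.lseek(fd, off, os.SEEK_SET)
        os.write(fd, block)
        os.fsync(fd)
    os.close(fd)
    return time.perf_counter() - start

block = b"\0" * 4096                      # 4 KB, a typical small OS write
count = 256
path = os.path.join(tempfile.gettempdir(), "ssd_write_test.bin")

# Sequential: each write lands directly after the previous one.
seq = time_writes(path, [i * 4096 for i in range(count)], block)

# Random: the same writes scattered across a 256 MB region.
rand_offsets = [random.randrange(0, 65536) * 4096 for _ in range(count)]
rnd = time_writes(path, rand_offsets, block)

print(f"sequential: {seq:.3f}s  random: {rnd:.3f}s")
os.remove(path)
```

On the JMicron-based drives of the day the random figure is what made the whole machine feel like it was hanging, even though the sequential figure looked great on the box.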

However, as a recent AnandTech review showed, one company has stepped up to the plate and addressed these issues, giving a low cost option (circa $400 for a 60GB drive, as opposed to Intel’s $900 for 32GB) for people wanting to try SSDs without putting up with a freezing computer. One of my tech friends just informed me that a recent firmware update for the drive brought improvements of up to 3–4 times over the original, an amazing improvement by any metric.

So are these things worth the money? Pretty much everyone I’ve talked to believes they are. These things really aren’t meant to be your main storage drive, and once the paradigm shifts away from disks being slow I believe you’ll see many more systems built around a tiered storage arrangement: have your OS and favourite applications on the SSD, and keep your giant lumbering magnetic disks trundling along in the background holding all your photos, music and the like. There’s always been a strong disconnect between the blistering fast memory of your computer and the slow crawl of the hard disk, and it would seem that SSDs will bridge that gap, making the modern PC a much more usable device.

I am fortunate enough to be working with some of the latest gear from HP, which includes solid state drives (for work, of course! :)). For the hardware geeks out there, we’ve just taken delivery of 2 HP C7000 Blade Chassis, 4 BL495c FLEX10 blades with 32GB of memory and dual 32GB SSD drives (they’re Samsung SLC drives), and all the bits and bobs needed to hook this up as our new VMware environment. It is a pity that they won’t let me put them together myself (how dare they tempt a geek with a myriad of boxes of components!) but I can understand my boss’ requirement of having someone else do it, just so we can blame them should anything go wrong.

So we’ve seen what SSDs can do for the consumer market, I’ll let you know how they go in the corporate world 🙂