Monthly Archives: May 2010

The Not-So-Expert Experts.

I’ve always had a healthy amount of respect for anyone who’s dedicated a significant part of their life to becoming well versed in some area of knowledge. From the intricacies of woodworking to the complex mathematical modelling of two black holes meeting each other¹, there’s something to be said for pursuing a subject to a depth that few ever attempt. I’d like to think I’m pretty well versed in the world of IT and general technological matters, and few would disagree. What’s been getting my goat lately, however, is those who think along the same lines yet are completely clueless about the things they believe they are the experts in.

Now this particular gripe stems from a specific scenario, so you’ll have to stick with me here. Currently I’m in charge of moving a whole bunch of systems (current count is 73, down from 150) from the old network to the new network. On the outside it looks pretty simple, as I’m just changing the address of the system and maybe its physical location, nothing more. The service these systems provide will not change in the slightest, save for appearing at another address. The technically inclined amongst us would be thinking “No worries, DNS (the thing that translates human readable names into machine readable addresses and vice versa) should take care of most of your issues” and you’d be right, it should. However there are always hard coded references to certain things, and some software vendors think it’s a great idea to tie their licensing to the IP address. So in the course of moving the 73-odd systems I’ve managed to break a couple of things along the way, but the trouble began long before things started breaking.
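Most of the migration pain boils down to one question: does the name everyone uses still resolve to the box’s new address? As a minimal sketch of how you’d check (the hostnames and addresses here are made up for illustration), a short script using Python’s standard `socket` resolver can flag the systems whose DNS records haven’t caught up:

```python
import socket

# Hypothetical mapping of hostnames to their post-migration addresses;
# in reality this would come from the migration plan.
EXPECTED = {
    "app01.example.com": "10.1.0.21",
    "db01.example.com": "10.1.0.22",
}

def check_migration(expected, resolve=socket.gethostbyname):
    """Return a dict of hostnames whose current DNS answer doesn't match
    the expected new address (value is what it resolves to now, or None
    if the name doesn't resolve at all)."""
    stale = {}
    for host, new_ip in expected.items():
        try:
            current = resolve(host)
        except socket.gaierror:
            current = None  # name no longer resolves
        if current != new_ip:
            stale[host] = current
    return stale
```

Of course this only catches the DNS side; the hard coded references and IP-tied licences the paragraph above mentions won’t show up in any resolver query, which is exactly why they bite.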

You see I’m not the one who developed, implemented or even (for the most part) supported any of these systems. Naturally I sought out the experts for these systems because I’d rather not break something if I have a chance to avoid it. Unfortunately it seems that many of the so called “experts” were far from it, falling into one of two very distinct categories:

  • The Scapegoat: Now I can’t really lay too much blame on these people, as in the past they probably had only a passing association with the system they’re now responsible for. Through some twist of fate they were lumped with sole responsibility for their system, and 99% of the time they have no idea how anything in it works (beyond straight rote memorisation of some features). They wouldn’t be an issue if they said this up front, but most act like an expert up until something breaks. Thereafter they absolve themselves of the situation by announcing their scapegoat status, leaving me to look like I’m responsible for the whole mess.
  • The Dangerous Idiot: The one who says they know everything but is nothing more than a glorified super user. These guys are far more common, especially outside the ranks of IT. More often than not they’re in charge of some archaic system they’ve managed to hold together thanks to various “tricks” they’ve learnt over the years, but should something happen that they’ve never encountered before they’ll instantly throw in the towel. Whilst I’m usually able to perform some impressive CYA manoeuvres to ensure that all blame for the ensuing mess lands on their shoulders (as it rightly should), that doesn’t stop the mess from happening, which is why I was consulting with them in the first place.

Thinking about this idea this morning I came to the conclusion that it is in fact our education system that is to blame for these people. Much of the core curriculum is based around rote memorisation and regurgitation; very little is dedicated to developing critical thinking and fact checking, meaning that for the most part people are trained to operate in a very set way. When thrust into the workplace, people who have memorised a great deal about certain things are usually held up as experts which, to a point, they are. The problem comes when the unexpected occurs which, unfortunately, falls to me when it’s related to IT.

There’s also that nasty social aspect that comes into play: when you’re considered an expert on something, you don’t want to admit there’s anything about it you don’t know. You can usually pick up on this though, as the answers to tough questions (like “is this licence tied to the IP?”) are vague, peppered with shoulds, coulds and I-think-sos. A scapegoat usually won’t pretend to know something they don’t (since they’ve got nothing to lose), which is precisely why I labelled the second type the dangerous idiot. Their answers could very well lead to everything falling in a crying heap.

I’d love to say that this problem was confined to the public sector, but unfortunately it’s not. Many of my friends who’ve worked in some of the smaller IT consulting shops have told me they’ve been forced into the role of the dangerous idiot because their company marketed them as such. Granted I’d trust them more than I’d trust any other dangerous idiot, but the fact remains that they were trotted out as experts in things they may never have touched before. The private sector isn’t immune to legacy systems either, and with them come many of those not-so-expert experts.

Maybe I just have a low tolerance for this sort of thing thanks to the not-so-closet sceptic that’s been cultivating in me for the past year, but it doesn’t seem to be an isolated phenomenon. I guess the only thing I can take away from this is that I shouldn’t trust any expert wholly, and should make sure I’ve done enough CYA work that should everything blow up in my face the blame is squarely levelled at the experts. Still, it won’t stop me from bitching about it every so often, nor from using other people’s idiocies as blog fodder 😉

¹I actually studied under Bartnik for a semester. It was both awe inspiring and completely confusing; I did, however, learn a heck of a lot.

I Need Less Problems.

I’m a stickler for solving problems, much to the dismay of my better half. I’d blame it wholly on the fact that I’m male and an engineer, so anything that comes to me in the form of a gripe or whine instantly sets itself up as a problem just waiting for the right solution to come along and fix it. It’s gotten to the point where many people don’t want to discuss any kind of problem with me, lest they get a volley of solutions when all they really wanted was a sympathetic ear and 10 minutes of my time. This became quite obvious last night when I spent a great deal of time working on another project of mine, one created out of a problem of my own: my incredibly disorganised media collection.

Far be it from me to actually spend a day or so rifling through the hard drive, cleaning everything up and instituting a filing system (that would be the easy way out!). No, instead I decided to build an application that would do that and 100 other things for me, neglecting the fact that I really should be dedicating my time to other, more mature projects. Still, I’ve managed to come up with yet another project that has the possibility of being rather cool and useful to a select bunch of people (HTPC nerds, currently), and subsequently felt another chunk of spare time disappear into the ether.

The easy solution would be to take the easy option out: either do nothing or just organise my damn files. Being the egotistical person I am, though, I can’t really let this slide, since I’m always telling people to act on their ambitions rather than putting them off for another day. I can’t stand feeling like a hypocrite, and the second I start talking to people about an idea I have I feel compelled to start working towards its realisation. It’s quite disastrous for my work ethic, since I always feel like my time would be better spent on my own projects rather than fixing someone else’s problems. It all comes back to the idea of scratching your own itch, since the reward for solving your own problem is infinitely higher than the reward for solving someone else’s.

Taking a step back for a second, I could also reclassify this as a function of time. Right now I spend 40 of my prime time hours working for someone else, mostly so I can pay the bills and keep enjoying the lifestyle I’m accustomed to. The last 6 months have seen me attempt to put into motion several plans to alleviate this requirement, with the hope of spending a solid 3+ months on developing and marketing my own ideas. That would’ve worked too, save for my desire to travel to the US to see the last shuttle launch (budgeting is a bitch when you forget to account for something like that!). Putting this all together, it becomes rather obvious that I’ve managed to get myself tangled up in a web of inspiration, workaholism and my own ego.

Some say that of all the problems I could get myself tangled up in, this one is probably one of the best. I’d have to agree with them, as I’m never starved for something to do (unless I’m at work, of course ;)) and when people are interested in what you’re doing there’s a real sense of achievement. I’m still a long way from the dream of working in my own startup but, as I said over beers with a group of friends recently, “shit’s starting to get real”, as the roots of everything are starting to take hold. The rest of this year is going to be interesting to say the least and I can’t wait to see how everything pans out.

Space Shuttle Atlantis: Down But Not Out.

Late last night space shuttle Atlantis streaked across the night sky in a brilliant blaze of fiery glory on its way back down to Earth. After writing about the mission just over a week ago I’d been dreading this moment for quite some time, as it meant Atlantis would be the first shuttle to enter official retirement. Still, we can’t dwell on the negatives for too long: Atlantis has served humanity well, running 32 missions, travelling over 180 million kilometres and spending a total of 282 days in space. So what else is there to say about the first of our iconic spacecraft to hit retirement? Well, there’s the fact that this might not be its last flight at all.

Readers of this blog will more than likely remember me detailing some of the standard operating procedures of a shuttle flight. One of those is that should the shuttle sustain enough damage to make returning to Earth too risky, the crew must have somewhere to stay whilst a rescue mission is mounted. Traditionally they can take refuge at the ISS, as it is more than capable of handling the extra load for a month or so whilst another shuttle is rolled out. Now this doesn’t mean that NASA can just whip up an entire shuttle mission within a month, far from it. In fact all rescue missions are planned well in advance, with many of the critical components ready to go, including things like the external fuel tank and SRBs. For our soon to be retired friend Atlantis this means that whilst it’s completed its final official mission, its job is far from over.

STS-134 is the last planned flight for any space shuttle, and that means should Endeavour not be able to return to Earth the astronauts would be trapped at the ISS. Whilst we could ferry them down in Soyuz craft, it would take an extremely long time and would tax the resources of both Russia and the USA considerably. As such NASA has designated a special Launch on Need (LON) mission, STS-335, that would be launched to rescue them should Endeavour be stranded in space. Atlantis is the designated craft for that mission, and this has led to an interesting proposition:

“We need to go through the normal de-servicing steps, obviously, after the orbiter comes home…We have to prepare Atlantis and the stack as if we’re going to fly again because it is (launch-on-need) mission,” said shuttle launch director Mike Leinbach.

“We’ll be processing her as if it’s a real flight to begin with. Somewhere along the line we expect to hear whether we’re going to launch or not, and at that point in time either press on or stop that processing. But in order to support that, obviously, we have press on when she gets home.”

But with long odds that a rescue would really be required, there’s a notion to fly Atlantis crewed by just four astronauts on a regular mission and a large logistics module to service the International Space Station with supplies and more science equipment.

Designing such a mission, getting the cargo pulled together and training a crew would take many months, so the clock is ticking for a “go” or “no go” decision.

In essence we’d have a shuttle that was fully flight ready; all it would be missing is a payload. The crew would intentionally be kept small to ensure that in the event of an emergency they could all return aboard the attached Soyuz craft, which would also limit what kind of payload you could send up there. Still, having a shuttle tricked out and ready to fly is not an opportunity that will come around again, and this is what has got tongues wagging about whether or not NASA should in fact fly Atlantis one last time.

There’s no denying that any flight into space has an enormous amount of value. Whilst every precaution is taken to ensure the ISS has everything it needs, there’s no harm in bringing up extras. Even with the reduced crew there’s still the opportunity to fly up some additional hardware, like some of the cancelled ISS modules (a few of which were partially built). Still, if such a mission were to go ahead it would more than likely be a strict logistics mission, as anything else would require extensive amounts of planning, something I’m not quite sure Congress would be willing to approve (flying Atlantis just for logistics would be costly enough).

So whilst the great Atlantis might have been the first to return to Earth from its final official flight, there’s still a chance we’ll see this bird fly once again. Though I might lament it if Atlantis does fly again (making my trip to the US to see the last shuttle flight moot), it would still make my heart soar to see it lifting into the sky one more time. Such is the awesome beauty of the space shuttle.

The Me-Tooism of the Internet.

The Internet is a bit of an oddity when you try to compare it to its real world counterparts. Take for example this blog: in the real world it would be akin to a column in a newspaper, or perhaps a small publication done off my own bat. The big difference is the barrier to entry, as writing a regular column for a newspaper takes either the right connections or some kind of journalistic training or merit. With the Internet you have the equivalent of a publisher offering to distribute your content for free to unlimited numbers of people (blog networks), no matter what you actually end up writing. The extremely low barrier to entry extends to other markets as well, with online businesses able to replicate their much larger real world competitors at a fraction of the cost.

I’ve always said that one of the fears that nags away at the back of my head (which also makes it one of my biggest motivators) is that some genius kid will stumble across this blog, see Geon and the value it represents, and code the whole thing in a weekend marathon hack session. Then before I have a chance to release it upon the world they’ll release theirs, and I’ll be left here holding my proverbial, sobbing quietly in a corner somewhere. This comes back to the low barrier to entry which, when coupled with a successful but not-too-technical service, leads to a flurry of me-too services all hoping to grab a share of the emerging market.

To give you some examples, I can name two types of services that until recently no one would’ve thought there was a use for, yet now there are at least half a dozen examples of each out in the wild. The first, which I blogged about a week ago, is the new social networking practice of checking in to locations to alert your friends you’re there, usually coupled with some kind of gaming aspect to hook you in. The list of services making use of this idea seems to grow daily, with a few examples being Foursquare, Gowalla, BrightKite, Booyah, Yelp and Scvngr. That’s not even mentioning other services that, whilst not focused on check-ins, include them as part of their overall product.

The second is URL shortening services. Whilst long and cumbersome URLs have plagued the Internet for many years, they weren’t really a problem when URLs were shared mostly over email and IM, which don’t impose character limits. With the explosion of micro-blogging services and their artificial limits on post size, people sought solutions that would let them share content on these networks. I can remember back in 2002 when TinyURL debuted their service; whilst it was nice to have short links (especially considering mod_rewrite still appeared to be black magic to most people), I didn’t need one unless the link was obscenely long or had characters that broke on copy and paste. Today TinyURL is still going strong (rated the 711th most visited site on Alexa) and its list of imitators is long, from general purpose services through to site-specific URL shorteners.
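Part of why the imitators multiplied so quickly is that the core trick is tiny: map an ever-increasing counter onto a short alphanumeric code, and keep a lookup table for the reverse trip. A toy sketch of the idea (in-memory only; a real service would persist the table and worry about abuse, custom aliases and the rest) might look like:

```python
import string

# Base-62 alphabet (0-9, a-z, A-Z), the style many shorteners use.
ALPHABET = string.digits + string.ascii_letters

class Shortener:
    """Toy URL shortener: each URL gets the next integer ID,
    encoded in base 62 to keep the code short."""

    def __init__(self):
        self._urls = []  # index in this list == the URL's ID

    def shorten(self, url):
        self._urls.append(url)
        n = len(self._urls) - 1
        code = ""
        while True:  # encode n in base 62
            n, r = divmod(n, 62)
            code = ALPHABET[r] + code
            if n == 0:
                break
        return code

    def expand(self, code):
        n = 0
        for ch in code:  # decode base 62 back to the integer ID
            n = n * 62 + ALPHABET.index(ch)
        return self._urls[n]
```

With 62 characters per position, a six character code covers over 56 billion URLs, which is why the short links stay short even at TinyURL’s scale.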

At its heart this is the real power of the Internet as a medium. With traditional forms of media and business the barrier to entry is quite high, to the point of being out of reach for the everyman. On the Internet however, where resources are near limitless and the currency of choice is not the almighty dollar, the only real limitation is how much effort you’re willing to put in. That also leads to a rampant world of copy-cats, where any service that enjoys even mild success will be duplicated to no end by people across the Internet. The key then is how you differentiate yourself from the competition, as you won’t be unique for very long. It appears that for the majority of services there’s room for one giant and a myriad of others that cater to a specific need or location. There’s nothing wrong with being one of many, but if you’re thinking of doing something new online it’s best to think about how to deal with the competition before they arrive, rather than pretending you’re the only one who can do what you do.

Hyperbole, Rhetoric and Backflips: A Stephen Conroy Story.

Regular readers of this blog will know I’m no fan of Senator Conroy and his proposed Internet filter, even though I have him to thank for the original creation of this blog and its subsequent success. Apart from delay after delay there’s been little to no movement from Conroy on the policy, despite it being increasingly unpopular. Initially I was able to write him off as just a figurehead for the Rudd government’s slight bent towards a nanny state for Australia, but as time has gone by Conroy has dissolved what small amount of hope I held that that was true. Conroy believes in the policy wholly, and damn those who would oppose him.

Most recently the biggest talking point about the Internet filter was that it was going to be delayed until after the election, hoping to skirt some of the backlash over the unpopular policy. Not only did that ignore the fact that the tech crowd saw this move for what it was (and would likely vote accordingly), soon after the announcement they backpedalled with almost breakneck speed. Then, in a move that didn’t surprise anyone, they went ahead and delayed it anyway:

Communications Minister Stephen Conroy says he plans to introduce legislation for the Federal Government’s internet filter in the second half of the year.

Senator Conroy had intended to introduce the legislation in the first half of 2010.

The Government announced the filter two years ago as part of its cyber safety program to protect children from pornography and offensive material. Last year it ran tests on the system.

But the plan has been criticised by internet users who claim it will slow download speeds and lead to unwarranted censorship.

Right, so you prematurely announced that you would delay introducing the legislation (in a vain effort to save votes), backflipped on that position (to try and save face over delaying the policy), and then went ahead and delayed the policy anyway (in an effort to save votes?!). Not only has Conroy shown dedication to an incredibly unpopular policy, he’s beginning to show complete disrespect for the very people he’s meant to be representing. The tech crowd had little love for Conroy before, and any support for the man has now vanished in a public display of incompetence. Whilst there are many bigger issues that will cost the Rudd government votes, they really can’t afford to lose yet another bloc of voters, and Conroy isn’t doing them any favours.

Still, all of that could be easily written off as political games, save for the fact that Conroy has launched multiple vitriolic attacks on several Internet giants. Now granted, the ones who wield the most power in the Internet world are the ones who carry the most responsibility, and none are as big as Google. Yet the culture and policies implemented by Google are some of the best on the Internet when it comes to user privacy and security. This didn’t stop Conroy from launching several attacks at them, with the latest ratcheting up the crazy to whole new levels:

Instead, Conroy launched tirades on search giant Google and social networking site Facebook over privacy issues raised with both corporations over the past week. The Senator called Google’s collection of Wi-Fi data the “single greatest privacy breach in history“, and attacked the social networking site over a failure to keep user’s data private.

That classy one liner about the “single greatest privacy breach in history” is probably one of the best bits of hyperbolic rhetoric I’ve seen Conroy spew forth. The Wi-Fi data that Google collected was initially only meant to be SSIDs (wireless network names), which they could then use to augment their geo-location software, à la Skyhook. Unfortunately they also captured some payload data in the course of their collection, and got slammed by the German government because of it. Realistically though, the data was fairly useless to them: the cars couldn’t have been in range of any given access point for a meaningful amount of time, so the data captured couldn’t have been more than a few MB at most. Additionally, if you had set up security on your wireless access point then any data they hold is completely and utterly unusable, as it would appear encrypted to anyone who captured it. Calling this a breach of privacy is at best misleading and at worst completely ignorant of the actual facts.

Conroy doesn’t stop there either, hoping to drum up support by lambasting yet another Internet giant with his choice brand of ignorant vitriol:

The Communications Minister, Stephen Conroy, has attacked the social networking site Facebook and its former college student founder for what he says is its ”complete disregard” for privacy.

Senator Conroy is under fire from many in the internet industry for his proposed mandatory net filter. He has previously attacked Google, a key critic of the filtering plan, but last night in a Senate estimates hearing turned his attention to Facebook.

”Facebook has also shown a complete disregard for users’ privacy lately,” Senator Conroy said in response to a question from a government senator.

I’ll relent for a second and say that Facebook has had some trouble recently when it comes to users’ privacy. However the fact remains that they can’t reveal any information about you that you didn’t give them in the first place, and putting information online that you don’t expect anyone else to see is akin to leaving your belongings on the sidewalk and expecting them not to be taken. Facebook may have had trouble finding its feet when it comes to user privacy, but its response has been rapid, albeit somewhat confused. They’ve heard the criticisms and are responding to them, hardly what I would call a “complete disregard” for user privacy.

Conroy has shown time and time again that he has little respect for the industry he’s meant to represent as the Minister for Broadband, Communications and the Digital Economy. His constant, vitriolic attacks on those who’ve been in the industry for a long time (much longer than he’s been a minister for such things) show a flawed belief that his vision for Australia’s digital future is the right one. I and the vast majority of the technical crowd have opposed Conroy and his Internet filter from the start, and in the coming election I’d bet my bottom dollar that you’ll see a noticeable swing against him for his repeated blows against us. It would seem the only way to kill the Internet filter is to remove him from office, and it is my fervent hope that the good people of Victoria will do Australia a service and vote accordingly this year.

NASA’s New Vision: Flagship Technology Demonstrations.

For almost two decades NASA, and by association every other space faring nation, has been treading water when it comes to pioneering new space technologies. Granted we have not been without achievement, far from it, but the blazing progress that once propelled NASA and its constituents forward is a distant memory. The benefits from the first space race are still being felt today (it’s likely you’re viewing this blog post on one of them), so you can see why there are so many lofty space enthusiasts like myself who look back at a time when science and inspiration went hand in hand to achieve something considered impossible only a decade previously. The future has been looking a lot brighter of late thanks to the private space industry finally coming up to speed with NASA’s achievements, but this morning it looked positively blinding.

Just on three months ago President Obama announced a new vision for the future of NASA. My initial reactions were mostly negative, but after considering the place NASA holds in our world, that of a pioneer in space, I came to see that it wasn’t a fall from what they currently are but rather a return to what they should be. It appears the next step towards accomplishing this has been taken with the announcement of the Flagship Technology Demonstrations:

The latest in a series of requests for information (RFIs) from NASA under its proposed Fiscal 2011 budget lists six “flagship” space testbeds costing $400 million to $1 billion each that would push technologies needed for exploration beyond low Earth orbit.

The first would be launched by 2014, with three more to go by 2016 and one every 12 to 18 months after that. Technologies include in-space fuel depots; advanced solar-electric propulsion; lightweight modules, including inflatables; aerocapture and/or landing at asteroids and larger bodies; automated rendezvous and docking; and closed-loop life support systems.

• Concepts for spacecraft buses that could use NASA’s NEXT ion propulsion system and an advanced solar array for a 30-kilowatt solar-electric propulsion stage, and which would be scalable to higher power levels.

• Flight architecture suggestions for on-orbit cryogenic fuel storage and transfer within a single vehicle and between separate vehicles, with a list of detailed questions to be answered.

• Inflatable-module concepts that would follow earlier in-house work at NASA, with an inflatable shell opening around a central core that would be pressurized at launch.

• Mission concepts using inflatable or deployable aeroshells for aerocapture at Mars and return to Earth of 10-ton vehicles, as well as precision landing on “both low-G and high-G worlds.”

• Concepts for demonstrating closed-loop life support in a module on the International Space Station (ISS), and perhaps on an inflatable module flown under a separate flagship demonstration.

• Concepts for using the ISS as a target for automated rendezvous and docking missions, accomplishing the docking with the low-impact docking system under development at Johnson Space Center.

All of these points echo the original vision as previously laid out by Obama. This is fantastic news and the aggressive timeline for debuting these technologies means that NASA will be once again at the forefront of space exploration. To give you an idea of just how revolutionary these ideas are I’ll give you a run down of how each of them will change the way we explore space.

The first point hints at what would be a high powered ion drive, something which would be of great value for long duration flights. If you think you’ve heard this before you’d be right, as VASIMR (which is not of NASA origin) is a very similar concept scheduled to be flown to the ISS either next year or the year after. Such propulsion systems allow for very efficient use of propellant which, to use the ISS as an example, could reduce the fuel required once on orbit by up to 90%. Reducing the mass you take with you to orbit is always one of the goals when taking things into space, and developing this kind of technology is one of the best ways to accomplish that.
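The propellant savings fall straight out of the Tsiolkovsky rocket equation: the fraction of a vehicle’s mass that must be propellant shrinks exponentially as specific impulse rises. As a back-of-the-envelope sketch (the delta-v and Isp figures below are illustrative, not NASA’s numbers, with chemical thrusters typically around Isp = 300 s and ion drives an order of magnitude higher):

```python
from math import exp

def propellant_fraction(delta_v, isp, g0=9.81):
    """Fraction of initial vehicle mass that must be propellant to
    achieve a given delta-v (m/s), from the Tsiolkovsky rocket equation:
        delta_v = Isp * g0 * ln(m_initial / m_final)
    """
    return 1 - exp(-delta_v / (isp * g0))

# Illustrative comparison: ~2 km/s of delta-v delivered by a chemical
# thruster (Isp ~ 300 s) versus an ion drive (Isp ~ 3000 s).
chemical = propellant_fraction(2000, 300)
ion = propellant_fraction(2000, 3000)
```

With these assumed numbers the chemical thruster needs roughly half the vehicle’s mass to be propellant, whilst the ion drive needs well under a tenth, which is where claims of up to 90% less fuel come from.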

On-orbit fuel depots are going to be a must for any long duration space flight, including missions with us squishy humans aboard. Right now many craft are limited in their payload by the fact that they have to carry substantial amounts of fuel with them. With on-orbit fuel depots they can be made quite a lot lighter, thereby increasing their effective payload significantly. Couple this with the high efficiency ion drives and you’ve got yourself a recipe for much cheaper and infinitely more productive missions, helping us push the boundaries of human exploration once again.

One of the decisions handed down by the United States Congress was the banning of any further development of the TransHab inflatable module design back in 2000. The idea was that you could launch the module deflated and then inflate it on orbit, keeping the launch payload size down whilst giving you an enormous amount of space once deployed. Compare the largest module currently on the ISS, the Kibo laboratory at 4.4m in diameter, to TransHab’s ginormous 8.2m diameter and you can see what I mean: that thing is massive. So whilst it’s taken a decade for them to come full circle and realise the tech has some real potential (we’ve got Mr Bigelow to thank for that), we may soon see such modules attached to the ISS or its successor. I think current and future astronauts would welcome the additional space.

The aeroshell idea is nothing new, but the weight of the craft they’re planning to use it with is. The most famous examples of the aeroshell design would be the Mars Exploration Rovers, Spirit and Opportunity. These little guys weigh in at only around 180kg each, and the idea of anything much larger using the design has, for the most part, been laughed off. The most recent expedition to Mars, the Phoenix lander, was around 350kg and instead used rockets to perform the final landing. Scaling the design up to larger payloads would enable much larger missions to planets with significant atmospheres, as well as paving the way for future astronauts to land on such places.

The final two points are merely an extension of ongoing activities. Many of the life support systems currently aboard the ISS are squarely aimed at making it more self sufficient, with things like the water recovery system flown up last year. Many of the Russian vehicles that visit the ISS use automated docking already, and Europe demonstrated that it is capable of such feats too when the Jules Verne ATV docked last year. This would more than likely end up with a few modifications to the US parts of the ISS, but nothing too drastic.

All in all, these are some damned good goals to be shooting for and they really can't come soon enough. Whilst we won't have any flag-planting moments for a while to come, I can see milestones coming thick and fast shortly after we achieve these goals. It might not look like the plan we had a decade ago but it's one that we'll need to stick to if we want the future of space to look as bright as it did over 40 years ago.

When SmartPhones Became Phones.

The last two years have seen a very impressive upward trend in the functionality you can fit in your pocket. It didn't seem like too long ago that streaming a YouTube video to your phone would take half an hour to load and cost you at least $5. Compare that to today, when my phone is actually a usable substitute for my fully fledged computer when I'm on the move. For the everyman this has led to even the cheapest of phones being filled to the brim with technology, with even sub-$100 phones having features like GPS and 3G connectivity. Even more interesting, the line that once separated smartphones from regular phones has become increasingly blurry, to the point where consumers rarely make the distinction anymore.

Realistically the initiator of this paradigm shift¹ was Apple, as they brought technology that was usually out of reach to everyone. Sure, they did it whilst making a decent buck off everyone, but they broke down the barrier many people held that paying over $200 for a phone was something of an extravagance. Now it's not unusual for people to shell out up to $1000 on a phone, especially when that cost is hidden away in the form of a 2 year contract. The flow-on effect was not limited to Apple however, and now we have yet another booming industry with many large corporations vying for our wallets.

For the most part Apple still reigns supreme in this world. Whilst they're by no means the largest competitor in the smartphone market, that title still belongs to Symbian, they still carry the lion's share of mobile Internet traffic. That hasn't stopped Google's competing platform from sneaking up on them, with Android taking 24% of that traffic to Apple's 50%. The growth is actually becoming something of a talking point amongst the tech crowd: whilst Google has floundered in its attempts to replicate Apple's success with its Nexus One, its platform is surging forward with little sign of slowdown. Could it be that the line Google has been touting, that open wins out in the long run, has some truth to it?

Amongst developers the one thing that gets trotted out against programming for Android is market segmentation. With the specs on Android devices not tightly controlled you have many different variables (screen size, whether it supports multi-touch, whether it has a keyboard, etc.) to account for when building your application. With the iPhone (and soon Windows Phone 7) those variables are eliminated and your development time is cut by a significant amount. Still, the flexibility granted by the Android platform means that manufacturers are able to make a wide variety of handsets that can cater to almost any need and budget, opening up the market considerably. So whilst Apple might have broken through the initial barrier to get people to buy smartphones, it would appear that people are now starting to crave something a little more.

For every person that has an iPhone there's quite a few who want something similar but can't afford it or justify the expense. The Android platform, with over 60 handsets available, gives those people an option for a feature rich phone that doesn't necessarily attract the Apple premium. In essence Apple pioneered demand for devices that it had no interest in developing, and Google, with its desire to be in on the ground floor of such a market, took the easier road of developing an open platform (well, really they bought it) and leaving the handsets up to the manufacturers. At the time this was a somewhat risky move as despite Google's brand power they had no experience in the mobile world, but it seems to be paying off in spades.

Real competition in any market is always a good thing for end consumers, and the mobile phone space is no exception. Today almost any handset you buy is as capable as a desktop PC was 10 years ago, fueling demand for instant access to information and Internet enabled services. Google and Apple are gearing up to fight for the top spot in this space and I for one couldn't be more excited to see them duke it out. I never really want to see either of them win though, because as long as they're fighting to keep their fans loyal I know the mobile world will keep innovating at the already blistering pace they've managed to sustain over the past 2 years.

¹Don’t you dare call buzzword bingo here, that’s a proper use of the term.

Geo-Social Networking (or Why I Don’t Get Check-Ins).

I'd like to think of myself as knowing a bit about the geo space and how it can be used as a basis for new applications or how it can augment existing ones. I've been elbow deep in developing such an application for over 6 months now and I've spent the last couple of months checking out every service that could possibly be considered a competitor of mine (there's not many, if you're wondering). Because of this I've started to notice a couple of trends amongst up-and-coming web applications, and it seems that the social networking world is going ballistic for any service that incorporates the idea of "check-ins" at locations around the world. After spending some time with these applications (even ones that are still in private beta) I can't seem to grasp why they're so popular. Then again I didn't get Facebook for a long time either.

The basic idea that powers almost all of these applications is that you use your phone to determine your location. Based on that, the application will present you with a list of places which you can "check in" to. If your friends are on the application they'll get a notification that you've checked in there, presumably to get them to comment on it or to help you get people together. It's a decent trade-off between privacy and letting people know your location, as you control when and where the application checks in and most of them allow you to share the updates with only your friends (or no one at all). The hook for most of the services seems to be the addition of some kind of game element, with many of them adding in achievements and points. For someone like me it falls into the "potentially useful" category, although my experience with them has led me to think that saying "potentially" was probably being kind.
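As a sketch of that flow, here's roughly how such a service might work under the hood. Everything here is hypothetical, the venue list, the function names, the notification format, and not any particular service's actual API; it just illustrates the location-to-venues-to-notification pipeline described above:

```python
import math

# Hypothetical in-memory venue list; a real service would query a places database.
VENUES = [
    {"name": "Cafe One", "lat": -35.2809, "lon": 149.1300},
    {"name": "Library", "lat": -35.2820, "lon": 149.1285},
]

def distance_m(lat1, lon1, lat2, lon2):
    """Approximate great-circle distance in metres (haversine formula)."""
    r = 6371000  # mean Earth radius in metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def nearby_venues(lat, lon, radius_m=500):
    """Step 1: turn the phone's coordinates into a nearest-first list of places."""
    return sorted(
        (v for v in VENUES if distance_m(lat, lon, v["lat"], v["lon"]) <= radius_m),
        key=lambda v: distance_m(lat, lon, v["lat"], v["lon"]),
    )

def check_in(user, venue, friends):
    """Step 2: record the check-in and build notifications for the user's friends."""
    event = {"user": user, "venue": venue["name"]}
    notifications = [f"{f}: {user} checked in at {venue['name']}" for f in friends]
    return event, notifications
```

The privacy trade-off mentioned above lives entirely in who ends up in that `friends` list: nothing is broadcast unless the user explicitly checks in and has chosen who gets notified.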

The services themselves seem to be doing quite well, with Foursquare and Gowalla both managing to wrangle deals with companies to reward users of their applications. In fact it seems that check-in based services are the latest darling of venture capitalists, with funding flowing thick and fast for any and all services that implement this idea. For the most part I'd attribute most of their success to their ability to hook into Facebook through Connect, as building a user base from scratch for a social networking site is nigh impossible lest you tangle yourself up with Zuckerberg's love child. It also helps improve user trust in the application, although that benefit is on shaky ground as of late.

Still, the value they provide seems to be rather limited. After hearing that a couple of my tech inclined friends had ventured onto Foursquare (and I got bored of reading about them every day in my RSS reader) I decided to download their iPhone app and give it a go. The integration with other social networking services was quite good and it instantly picked up a couple of people I didn't know were using Foursquare. Playing around with it I began checking in to various places, accumulating points and my first badge. Still I didn't feel like I really got anything out of using the application, apart from some virtual points which don't appear to be worth anything to anyone (although the same could be said of Xbox GamerScore and PSN levels). This hasn't stopped Foursquare from reaching over 1 million users in just over a year, which is quite impressive when compared to the current giants (Twitter took twice as long to reach a similar milestone).

It's no secret that I've shied away from calling Geon a social networking application, despite the obvious social implications it has. Primarily this is because I don't want to be lumped in as yet another social app, but more and more I find myself needing to incorporate such features into the application, as that's what people are coming to expect. There's also the point that many of these ideas make a lot of sense when translated properly into my application. Two recent suggestions were a rework of Twitter's trending topics and the ability to follow people and locations. The first wouldn't exactly be considered a social networking feature but the latter is pretty much the bread and butter of many social networking services. Still I don't think people will be looking for check-ins in up-and-coming social apps, even after Facebook introduces their Foursquare-killing service.

It's true that although I might not get it, that doesn't matter when so many others do. For as long as I develop Geon I'll be keeping an eye on these services to see how they evolve as their user bases grow, mostly to see if there's anything I should be doing that I'm not already. It's going to be interesting to see how this all changes when Facebook finally unveils its location based service to the world, and you never know, I might have the penny drop moment that so many people seem to be having about check-ins.

Until then however my Foursquare app will be little more than an interesting talking point to bring up amongst friends.

The Web Standards War: Apple vs Adobe.

I talk a lot about the Internet on this blog but I'd hardly call myself an expert when it comes to actually building something on it. Back when I was first learning to develop applications I was never actually introduced to the world of web programming, and the small bits I learnt on networking were nowhere near sufficient to prepare me for developing any kind of web service. Still, after being out of university for 2 years I found myself administering many web sites and then took it upon myself to learn the ins and outs of developing for the web. It's been a bit of a roller coaster ride since then, having to switch my mindset from designing and building applications that will only run on a client to making something for the world wide web. This is when I was introduced to the lovely world of web standards.

You see, for a long time Internet Explorer was the king of the web browser world. Thanks to Microsoft's mentality of embrace, extend, extinguish they initially focused on becoming the dominant force in the web browser market. They had stiff competition from the people at Netscape for quite a long time and were forced to be innovators to compete. It worked quite well, with them debuting Internet Explorer 6 back in late 2001 which, at the time, was quite a revolutionary piece of software. Granted most of the market adoption was driven by the browser being installed by default with any Windows installation, but with such strong competition they were forced to develop something better in order to become the de facto standard of the web.

With the war won in 2003, when AOL shut down Netscape, Microsoft was free to rest on its laurels, and boy did it ever. The next 7 years saw little innovation from Microsoft, and with IE6 having widespread adoption most web sites were designed to support it first and alternative browsers later. Whilst at its initial release IE6 was considered somewhat of a technical marvel it was, for the most part, not compliant with most web standards. With Mozilla rising from the ashes of the Netscape fire, the need to comply with a widely agreed standard became a talking point amongst web developers, although it was largely ignored by the Internet community at large. Fast forward to 2008 and we have the search giant Google weighing in to the browser market, trumpeting standards compliance coupled with a well known and trusted brand name. Such was the beginning of the end of Internet Explorer's dominance over the web browser market, and the rise of widely accepted web standards.

Unfortunately though, web standards are a slow moving beast. IE6 was revolutionary because it provided functionality that you didn't find within the web standards, and it enabled many developers to create things that they would have otherwise not been able to. This gave rise to many of the browser plugins that we're familiar with such as Flash, Java and more recently things like Silverlight and WebUnity. Such additions allow web browsers to unlock functionality typically reserved for desktop applications whilst granting it the portability of the world wide web. Such plugins have been the focus of intense debate recently, and none more so than Adobe's Flash.

I blogged last year about the apparent curiosity of the iPhone's immunity to Flash. Back then the control over the platform was easily justified by the commentary and speculation that was common knowledge amongst the tech crowd. Most understood that enabling Flash on Apple's devices had the potential to both corrupt the user experience (a sin Apple would never commit) and strike a devastating blow to the cash cow that is the App Store. Still, whenever an Apple device is advertised as lacking the capability it's probably the first thing the critics will trot out, and it's a valid criticism as much of the web makes use of this technology (this blog included). Still the argument for using web standards rather than proprietary plugins makes sense too, although with Android getting full Flash support in 2010 it would seem Apple is in a minority here.

That's not to say Adobe hasn't tried to play ball with Apple. Their current flagship product, Adobe Creative Suite 5, was touted to have the ability to generate an iPhone application from any Flash program that you created with it. Honestly I thought that was a pretty sweet deal for both Adobe and Apple. Adobe would get its content on another platform (thereby caving in to Apple) and Apple would see a flood of applications on the App Store, and with it a whole swath of revenue. Sure many applications would need to be rewritten for the touch interface, but for all intents and purposes you could have Flash on the iPhone.

Apple, not willing to give any ground on this matter, fired the first salvo in what's turned into a very public debate. Just over a month ago they changed their developer license agreement to rule out the use of any cross-platform frameworks. Whilst this initially looked like it would kill off a good chunk of developers (especially those who used WebUnity to do games on the iPhone) it turns out that it was directly aimed at disabling CS5's ability to export Flash to an iPhone app. This has since sparked comments from both sides, with fingers being pointed at all sorts of things, but the main one is web standards.

Both Apple and Adobe claim that they're supporting the open web, with varying levels of truth. Flash is somewhat open and Apple has developed a widely adopted browser framework called WebKit, which powers both their Safari browser and even Google's Chrome. However these are small parts of much larger companies where everything else is completely proprietary, so this is really a case of the pot calling the kettle black.

So why is this such a big issue? Well as it turns out you don't really have to dig too deep to find the answer: money and power. Just like the heyday of IE6 and Microsoft's market domination of the web, Adobe and Apple are fighting over what the next dominant technology of the web will be. Adobe is well placed to become that standard as Google has backed them on both their Android and Chrome platforms (Flash is now native in Chrome) and over 90% of Internet capable devices can display Flash to their users. Apple on the other hand has dominated the mobile market and is seeking to push the boundaries further with products like the iPad. Their reluctance to play nice has resulted in a fist fight of epic proportions, and it's one that's going to play out over the next few years.

Personally though, I think Apple's picked the wrong bear to poke with their anti-Flash stick. Whilst Adobe is a bit of a lightweight when compared to Apple (they have about a quarter of the employees, to give you an idea) they've got the support of a large install base plus many large players who use their technology exclusively. Whilst many of the functions provided by Flash are superseded by HTML5, Adobe have made a point that if they can't get Flash on the iPhone, they'll just make the best damned tools for HTML5 and dominate there again. That might sound like chest beating, but when your flagship product is the de facto standard for artists to create in a technical field you know they've got some market pull behind them. If they were to port the artist friendly interface they developed for Flash to HTML5 I'm sure Apple would have to rethink its whole position with Adobe very quickly, lest they alienate those who've been dedicated to the Mac name for a long time.

Apple's idea of using their Cocoa framework as the new standard for the open web is a noble but flawed notion. Whilst the focus on the end user experience means that any application written on this framework will be at least usable and intelligible, its application to the wider Internet doesn't seem feasible. Sure, Cocoa has helped bring the mobile Internet experience out of the dark ages, but its usefulness past that is limited, especially when Android provides a similar experience that just so happens to include Flash. In the end Apple merely seeks to draw developers and users into their walled garden and will spite anyone who questions them on it. Whilst I can appreciate that it has been working for them I'm not so sure it will continue like that for long into the future, even when HTML5 gains critical mass.

It may seem like a small thing in the grand scope of the Internet but it’s always interesting to see what happens when two giants go at each other. We’re by no means at the end of this battle of the standards and I expect to see them publicly duking it out for at least a few more months to come. Still I’ll be putting my money on Adobe winning out in one way or another, with either Apple relenting or the iPhone losing its crown to the burgeoning Android market. I’m no market analyst though so there’s a good chance I’ll be completely wrong about this, but that won’t stop me from using the fallout as blog fodder 😉

Paper Certs, Brain Dumps and the Qualification Generation.

I and nearly all of my generation would have had the notion that having a university degree was the key to unlocking a successful future. With around 63% of all Australians having enrolled for tertiary education at some stage in their lives we can easily assume that this is a commonly held belief. It even got to the point where the trade industries were suffering due to the lack of people enrolling in apprenticeships, which led the Howard government to attempt to sway people over to a trade in the 2007 budget. So for the most part you're more likely to find a young Australian with a tertiary qualification of some sort than not, and it appears that this qualification-required mentality has spread to at least one other industry.

The IT industry overall is almost completely unregulated. There's no formal body for qualifying someone as an IT professional, nor are there any large established organisations which we can apply to, like the IEEE for engineers as an example. For the most part then, when an employer is looking for someone they don't have any standard guidelines for determining if someone who claims to have experience is the real deal, nor do they have a third party with which to verify a candidate's story. This poses a significant problem for employers as resumes are easily faked, interviews can be coached to near perfection and you have to trust that their references aren't just their mates doing them a favour. How then, apart from hiring them and throwing them in the deep end to see them sink or swim, do you determine if a candidate is worth your time?

The answer, for many, lies in vendor certifications.

The world of IT is full of competing technologies and implementations. For every piece of equipment that makes up your computer there are multiple companies who produce an almost identical part in form and function. As consumers this is a fantastic thing as it gives us a variety of choice and low prices whilst the companies compete to ensure that their product is the one we buy. However, diving into the dark world of corporate IT infrastructure shows that a company's desire to distinguish themselves from a competitor usually leads to products that are, for the most part, worlds apart from each other even if they strive to serve the same purpose. Therefore experience with one product does not readily translate to another, save for a few fundamental skills.

Couple these 2 points, the lack of a formal accreditation process and disparate technologies, and you can see why most companies create their own certification programs to verify that someone is competent with their brand of technology. For example Microsoft has their MCITP program (for demonstrated competence with their Windows line of products), VMware the VCP program and so on. Any IT professional seeking to demonstrate their expertise with a product will probably undergo a program like these to formally certify their experience. For those just beginning in the world of IT, certifications can provide that foot in the door that many are seeking, much like those of us who got a degree for similar reasons. Still, ask anyone who has a degree how much it has helped them in their professional career (putting aside academia for the time being) and you'd be surprised how many retort that it was their experience that mattered, not the piece of paper they once held so highly.

Logically that makes sense, even outside the IT industry. It's all well and good to have every accreditation under the sun, but as many will tell you theory is usually only good in a perfect world with ideal conditions, which are quite rare in the real world. Previous experience in the field means that you at least understand the nuances of real world implementations of theory and you should have developed your own set of algorithms to deal with the common problems that arise in your chosen field. Still, if you cast your eye over the current job market you'll see many positions requiring varying levels of qualifications in addition to industry experience, and this has led to a kind of grey market for qualifications.

I am, of course, referring to brain dumps.

Their name gives up almost all you need to know about them. Brain dumps are either straight copies of real world tests, complete with questions and answers, or guides akin to the most incredible study guide ever created. You'd think that these kinds of things would be relegated to the dark recesses of some private BitTorrent tracker or secret FTP server hiding on a darknet somewhere, but that's far from the case; it's actually quite a booming industry. Take any IT certification¹ and you can guarantee that at least part of the test or lab documents will be available online. What value can we then draw from people who have acquired these paper (i.e. nothing but paper backing them up) certifications?

The answer is rather complicated. For the most part we don’t really have anything else to fall back on, save for actually throwing someone in the job and seeing if their skills line up with their apparent qualifications. Many say that the qualifications help weed out those that would flood their inbox with useless applications, yet in my whole career I’ve only ever had 1 employer ask me for my academic record and exactly 0 have asked to verify any of my vendor certifications (I even had one who had to Google what one of them was, yet he still didn’t ask for proof it was real). Others cast their nets wide in order to scare off potential paper certs, who couldn’t hope to cover all their bases should an interview bring up every technology in question. Thus we end up in a world where the certs can be readily attained by those willing to shell out the dollars for them and employers use them only in a feeble attempt to weed those same people out.

For most employers the solution usually lies within good interviewing technique. There are certain things you can’t fake (like sound critical thinking) and using questions that have no definitive right answer is one way I’ve seen the paper certs separated from the real deal. Rote memorization or coaching won’t help you in these areas and for the most part those with experience will shine when presented with such questions (having been in such situations before).

It all seems to boil down to the fact that as a whole we're becoming far more educated. With such a large number of people seeking higher education, the value once granted by those pieces of paper from the hallowed halls has been diminished. In the world of IT, the ease and availability of shortcuts to qualification heaven (and, some would say, our generation's entitlement mentality) has, ironically, led the industry's attempts at formal certification down the exact same path, at a pace that matches the industry's speed for innovation. They still hold some value of course, but they are far from being the bastion of truth they are too often made out to be.

¹Apart from the Cisco certifications. They appear to be the only vendor that's remained unblemished by the brain dump market. Their tests are also considered to be amongst the most difficult in the world, with the lab component having an 80% first time failure rate.