Monthly Archives: February 2011

Absence Makes The Heart Grow Fonder (or Not).

My time spent developing my passion project hasn’t been continuous since I first started working on it. The first iteration lasted about a month and was a mad rush to cobble something together to mark the momentous “milestone” of 100 blog posts. I then spent the next couple of months experimenting with Silverlight, managing to replicate and extend the base feature set to a point where I felt I was making progress. I then went on a 6 week hiatus from developing Geon to work on The Plan which, whilst making me a decent sized profit, never turned out to be the ticket to freedom I had hoped it would be. After taking a further month off and coming back to look at Geon I couldn’t help but think that I was going about things in all the wrong ways, and I came up with a completely new design.

This, I’ve found, is a common trend for me. Unless I continually work on a project I’ll always end up questioning the idea, eventually wondering what the point of doing it in the first place was. Initially this was quite good as, whilst the first few iterations of Geon showed solid progress, they were in all honesty horrid applications. However it was devastating for overall progress, as the paradigm shifts I underwent during these times of developmental absence meant that the new vision was wholly incompatible with the old and I could see no way other than starting anew to get them back in line again. This is why the first 2 iterations didn’t have any form of user login and the third had such a horrible signup process that I don’t blame anyone for not signing up.

I had thought that short breaks were immune to this effect, as I had often taken a weekend or two off when a family event called or I was starting to feel burned out. However I hadn’t had the chance to do much work on Lobaco over the past 2 weeks thanks to being otherwise occupied, and those little tendrils of otherworldly perspective started to creep in. Maybe it was the booze-fueled weekend where I came up with a list of 5 other potentially marketable ideas, or maybe it was just me pining for another break, but suddenly I felt like there were so many other things I should be doing than pursuing my almost 2 year old idea. I let myself think that I could take part of the weekend off to work on one of those ideas, but for some reason I just kept working on Lobaco.

I’m not sure if it was my persistence or hitting submit on my application to Y-Combinator that did it, but instead of pursuing those ideas that had tempted me all week long I just fired up Xcode and started plugging away. Whilst it wasn’t my most productive weekend ever I did manage to tick off 2 more features for the iPhone client, leaving about 3 to go before my deadline of the end of March. I think the combination of a solid code base (with all those rudimentary things done so I don’t have to spend time researching them) and almost half a year of iOS development under my belt is enough to keep the momentum going, making sure I don’t give up on this version until it reaches 1.0.

I used to think that time away from coding was just as valuable as time spent in code, but that doesn’t seem to be holding as true as it used to. Sure, my first breaks led to radical changes in my vision for the end product (and are responsible for the Lobaco that exists today), but once you hit that sweet spot time away can be quite destructive, especially if you’re as prone as I am to distraction by new ideas. Thankfully the last 6 months of momentum aren’t lost on me and 2 weeks away wasn’t enough to distract me from my end goal. It would have been too easy to start procrastinating again without realizing it.


STS-133: A Heart Renewed.

It was almost 4 months ago that I woke up in Orlando, Florida, eagerly awaiting my trip to the fabled Kennedy Space Center and a day to be filled with all manner of space related fun. It was that same day that I had a dream torn from me, leaving my heart broken and me wanting to get as far away from that place as possible. Reading over the post today brought the whole day flooding back, along with the emotions that came with it. Still, despite the pain of a dream not realized, I couldn’t pull myself away from Twitter and the NASA TV stream, eagerly devouring each and every little detail of Discovery’s final launch into outer space.

And less than 30 minutes ago STS-133 launched from the Kennedy Space Center launch complex 39A.

Discovery’s final flight has been marred by a multitude of technical problems. The first 2 scrubs were due to leaks in the Orbital Maneuvering System, which is used to control the space shuttle whilst it’s in orbit. The system consists of two pods at the rear of the orbiter, each housing a low thrust engine that burns hypergolic propellant, and a leak in these would mean the shuttle would be unable to dock with the International Space Station. The leak was thought to be fixed and the launch was good to go on that fateful day, but Discovery wasn’t going without a fight.

The next launch window was scrubbed due to a problem with the backup main engine controller. Initial diagnostics showed that there was some transient contamination and that a reboot brought everything back into line. However after troubleshooting further and again finding nothing wrong, they did notice an unexpected voltage drop. This led them to delay the launch for 24 hours in order to find the issue. The next attempt was delayed due to weather and, since I was there on the day, I could see why. The final day of this launch window saw a hydrogen leak from the main tank that was outside acceptable mission limits, and the mission was scrubbed until today.

The external tank on Discovery had multiple issues. The first was the connector used to vent off excess hydrogen during fueling, which was what caused the final delay before Discovery’s final launch. During the investigation into why there was such a substantial leak, cracks were discovered in some of the external tank’s insulation, and upon further inspection it was found that many parts of the external tank had cracks through them. The construction of these particular parts of the external tank was different from what had been used previously, and NASA has stated that this contributed to the cracking. Extensive repairs were carried out on the tank and it was only declared flight ready earlier this year. This meant that the turnaround time for Discovery was the longest of any shuttle bar STS-35, at 170 days.

What’s so special about STS-133, however, is the sheer amount of payload it will be delivering to the ISS. The first item is the Permanent Multipurpose Module, a modified version of one of the Multi-Purpose Logistics Modules that have flown on many previous shuttle missions. Not only will this deliver almost 8 tons worth of cargo to the space station, it will also add a significant amount of livable space to the ISS, rivaling that of the Kibo module. Many future crew missions are dedicated to configuring the PMM and it’s sure to prove valuable to the ISS.

Another interesting bit of cargo that’s making its way to the ISS is Robonaut2, the first humanoid robot ever to visit the station. The idea behind it is that a humanoid robot could be capable of performing many of the tasks that an astronaut does, such as space station maintenance. Initially it will be housed inside the ISS and will undergo strict testing to see how it copes in the harsh environment of space. After a while its capabilities could be expanded, and it might not be long before you see Robonaut working alongside astronauts on EVAs. This could be quite a boon for the astronauts on the ISS as planning repairs can be quite time consuming and Robonaut could provide a speedy alternative in the event of an emergency.

The last, but certainly not least, bit of Discovery’s final payload is the SpaceX DragonEye sensor. This isn’t the first time that NASA has flown something for SpaceX, having taken the same sensor up on board STS-127 and STS-129, but it is likely to be the last time the sensor is flown before a real Dragon capsule attempts to use it to dock with the space station. The DragonEye sensor is an incredibly sophisticated bit of kit: it provides a 3D image based on LIDAR readings and can determine range and bearing information. The whole system went from concept to implementation in just on 10 months, showing the skill the SpaceX guys have when it comes to getting things done.

To be honest I was going to put off doing this post for a couple of days just because I didn’t want to think about STS-133 any more than I needed to. But the second I saw that the NASA TV stream was live I couldn’t help but be glued to it for the entire broadcast. Sure I might not have been there to see it in person, but I’ve finally remembered why I became so enamored with space in the first place: it’s just so damned exciting and inspiring. I may have had my heart broken in the past but when a simple video stream of something I’ve seen dozens of times before can erase all that hurt I know that I’m a space nut at heart and I’ll keep coming back to it no matter what.

Sometimes You Have to Ignore Your Users.

My mum isn’t the most technical person around. Sure, she’s lived with my computer-savvy father for the better part of 3 decades, but that still doesn’t stop her from griping about new versions of software being confusing or stupid, much like any regular user would. Last night I found out that her work had just switched over to Windows 7 (something I’ve yet to do at any office, sigh) and Office 2010. Having come from XP and Office 2003 she lamented the new layout of everything and how it was impossible to get tasks done. I put forth that it was a fantastic change and that whilst she might fight it now she’d eventually come around.

I didn’t do too well at convincing her of that, though ;)

You see, when I first saw Vista I was appreciative of the eye candy and various other tweaks, but I was a bit miffed that things had been jumbled around for seemingly no reason. Over time though I came to appreciate the new layout and the built in augmentations (start menu search is just plain awesome) that helped me do things that used to be quite laborious. Office 2007 was good too, as many of the functions that used to be buried in an endless stream of menu trees were now easily available and I could create my own ribbon with my most used functions on it. Most users didn’t see it that way however, and the ribbon interface received heavy criticism, on par with that leveled at Vista. You’d then think that Microsoft would’ve listened to their users and made Windows 7 and Office 2010 closer to the XP experience, but they didn’t and continued along the same lines.

Why was that?

For all the bellyaching about Vista it was actually a fantastic product underneath. Many of the issues were caused by manufacturers not providing Vista-compatible drivers, magnified by the fact that Vista was the first consumer-level operating system to support 64 bit operation on a general level (XP64 was meant for Itaniums). Over the years drivers matured and Vista became quite a capable operating system, although by then the damage had already been done. Still it laid the groundwork for the success that Windows 7 has enjoyed thus far and will continue to enjoy long after the next iteration of Windows is released (more on that another day ;)).

Office, on the other hand, was a different beast. Microsoft routinely consults with customers to find out what kind of features they might be looking for in future products. For the past decade or so, 80% of the most requested features were already in the product; users just weren’t able to find them. In order to make them more visible Microsoft created the ribbon system, putting nearly all the features less than one click away. Quite a lot of users found this to be quite annoying since they were used to the old way of doing things (and many old shortcuts no longer worked), but in the end it won over many of its critics, as showcased by its return in Office 2010.

What can this experience tell us about users? Whilst they’re a great source of ideas and feedback that you can use to improve your application, sometimes you have to make them sit down and take their medicine so that their problems can go away. Had Microsoft bowed to the demands of some of their more vocal users we wouldn’t have products like Windows 7 and Office 2010 that rose from the ashes of their predecessors. Of course many of the changes were initially driven by user feedback, so I’m not saying that their input was completely worthless, more that sometimes in improving a product you’ll end up annoying some of your loyal users even if the changes are for their benefit.

I’m More Agile Than I Thought.

It was a beautifully warm night in late December down in Jervis Bay. My friends and I had rented a holiday house for the New Years weekend and we had spent most of the day drinking and soaking in the sun, our professional lives a million miles away. We had been enjoying all manner of activities, from annoying our significant others with 4 hour bouts of Magic: The Gathering to spending hours talking on whatever subject crossed our minds. Late into the evening, in a booze fueled conversation (only on my end, mind you), we got onto the subject of Agile development methodologies and almost instantly I jumped on the negative bandwagon, something that drew the ire of one of my good mates.

You see, I’m a fan of more traditional development methodologies, or at least what I thought were traditional ideas. I began my software development career back in 2004, 3 years after the Agile Manifesto was brought into existence. Many of the programming methodologies I was taught back then centered around iterative and incremental methods, and using them in our projects seemed to fit the bill quite well. There was some talk of the newer agile methods but most of it was written off as experimentation, and my professional experience as a software developer mirrored this.

My viewpoint in that boozy conversation was that Agile methodologies were a house of cards: beautiful and seemingly robust so long as no external forces act on them. This hinged heavily on the idea that some of the core Agile practices (scrum, pair programming and its inherent inability to document well) are detrimental in an environment with skilled programmers. The research done seems to support this, however it also shows that there are significant benefits for average programmers, who you are more likely to encounter. I do consider myself to be a decently skilled programmer (how anyone can be called a programmer and still fail FizzBuzz is beyond me), which is probably why I saw Agile as being more of a detriment to my ability to write good code.
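For anyone who hasn’t come across it, FizzBuzz is about as basic a programming test as they come: print the numbers 1 to 100, swapping in “Fizz” for multiples of 3, “Buzz” for multiples of 5 and “FizzBuzz” for both. A throwaway C# version (my own, purely for reference) looks like this:

```csharp
// The classic FizzBuzz screening question: print 1 to 100, substituting "Fizz"
// for multiples of 3, "Buzz" for multiples of 5 and "FizzBuzz" for both.
using System;

class FizzBuzz
{
    static void Main()
    {
        for (int i = 1; i <= 100; i++)
        {
            if (i % 15 == 0) Console.WriteLine("FizzBuzz");
            else if (i % 3 == 0) Console.WriteLine("Fizz");
            else if (i % 5 == 0) Console.WriteLine("Buzz");
            else Console.WriteLine(i);
        }
    }
}
```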

After taking a step back however, I realised I was more agile than I previously thought.

There are two methodologies I use when programming that are aligned with the Agile Manifesto. The first of these is Extreme Programming, which underpins the idea of “release early, release often”, something I consider necessary to produce good working code. Even though I don’t announce it on this blog every time I get a new feature working, I push it straight into production to see how it fares in the real world. I also carry with me a copy of the latest iPhone client for testing in the wild to make sure the program will function as expected once it’s deployed. Whilst I don’t yet have any customers other than myself to get feedback from, it still keeps me in the mindset of making sure whatever I produce is workable and viable.

The second is Feature Driven Development, a methodology I believe goes hand in hand with Extreme Programming. I usually have a notepad filled with feature ideas sitting in front of me and when I get time to code I’ll pick one of them and set about getting it implemented. This helps keep me from pursuing too many goals at once, making sure that they can all be completed within a certain time frame. Since I’m often coding on the weekends I’ll usually aim to get at least one feature implemented per weekend, accelerating towards multiple features per weekend as the project approaches maturity.

Whilst I haven’t yet formally conceded to my friend that Agile has its merits (you can take this post as such, Eamon ;)), after doing my research into what actually constitutes an Agile methodology I was surprised to find how much I had in common with them. Since I’m only programming on my own at the moment many of the methods don’t apply, but I can’t honestly say that when I go about forming a team I won’t consider using all of the Agile Manifesto to start off with. I still have my reservations about it for large scale solutions, but for startups and small teams I’m beginning to think it could be quite valuable. Heck, it might even be the only viable solution for small scale enterprises.

Man I hate being wrong sometimes.

Passion Misplaced: The Dark World of Game Leaks.

The last thing you want as a developer is for your code to go out into the wild before it’s ready. When that happens people start to build expectations on a product that’s not yet complete and form assumptions that, for better or worse, don’t align with the vision you had so carefully constructed. Most often this happens as a result of management pressure, and there’s been many a time in my career where I’ve seen systems moved up into production long before they were ready for prime time. However the damage done there pales in comparison to what can be done to a game that’s released before it’s ready, and I’m almost ashamed to admit that I’ve delved into this dark world of game leaks before.

The key word there is, of course, almost.

I remember my first steps into this world quite well. It was late 2002 and news began to make the rounds that someone had leaked an early alpha build of Doom 3, the first new installment in the series in almost a decade. I was incredibly intrigued and began my search for the ill-gotten booty, scouring the vast recesses of e-Donkey and Direct Connect, looking for someone who had the magical files. Not long after I was downloading the 380MB file over my dial up connection, sitting back whilst I waited for it to come down.

After it finished downloading I unzipped the package and waited whilst the crazy compression program they had used did its work, feverishly reassembling the code so that I could play it. This took almost an hour and the eventual result was close to double the size of the file I had downloaded, something I was quite thankful for. After a few tension filled seconds of staring at the screen I double clicked the executable and was greeted with the not yet released version of Doom 3. The game ran extremely poorly on my little box but even then I was awestruck, soaking up every second until it crashed on me. Satisfied, I sank back into my chair and hopped onto Trillian to talk to my friends about what I had just seen.

It wasn’t long until I jumped back into this world again. Just under a year later rumors started to make the rounds that none other than Valve had been subjected to a sophisticated attack and the current build of Half Life 2 copied. The gaming community’s reaction was mixed, as we had been promised that the game was ready to be released that year but as far as anyone could tell the leaked build was nowhere near ready. Instead of jumping straight in this time, however, I sat back and considered my position. Whilst I was extremely eager to see Valve’s latest offering I had seen the damage that had been done by Doom 3’s premature release, and my respect for Valve gave me much trepidation when considering taking the plunge once again. Upon seeing the files on someone’s computer at a LAN, though, I couldn’t let the opportunity go by and I snagged myself a copy.

The game I played back then, whilst by no means a full game, still left a long lasting impression on me. The graphics and environments were beautiful and the only level I got to work properly (I believe it was the beach level) was made all the more fun by the inclusion of the makeshift jeep. I couldn’t bring myself to play it for long though: whilst I knew that the code leak wasn’t the sole reason Valve delayed Half Life 2, playing it wasn’t going to bring the game to me any faster. This time around I deleted my copy of the leaked game and waited patiently for its final release.

Most recently it came to my attention that the Crysis 2 source, which apparently includes the full game and a whole host of other goodies, had made its way onto most popular BitTorrent sites. This time around, however, I haven’t even bothered to go and download the game, even just for curiosity’s sake. There’s less than a month to go until the official release and really I’d rather wait that long to play it legitimately than dive back into that dark world I left behind so long ago. The temptation was definitely there though, especially considering how much fun I had in the original Crysis, but a month isn’t a long time to wait, especially with the other games I’ve got on my current backlog.

If there’s one common theme I’ve seen when these leaks come out it’s the passion that the community has for these game development companies and their flagship titles. Sure it’s misplaced, but the fever pitch that was reached in each of these leaks shows just how much people care about these games. Whilst it might damage the project initially, many of them go on to be quite successful, as both Half Life 2 and Doom 3 did. Crysis 2 should be no different, but I can still understand the heartache that those developers must be going through; I don’t know what I’d do if someone nicked off with the source code to Lobaco.

Will I ever download a leaked copy of a game before its release? I can’t be sure, in all honesty. Although I tend to avoid the hype these days I still get really excited when I hear about some titles (Deus Ex: Human Revolution for example) and that could easily overwhelm my sensibility circuits, forcing me to download the game. I do make good on purchasing the games when they’re released however, and since I’m a bit of a collector’s edition nut I believe I’ve paid my penance for delving into the darker side of the gaming world. I can completely understand if game developers don’t see eye to eye with me on this issue, but I hope they recognize passion, however misplaced, when they see it.

Focused Simplicity.

It’s really easy to fall into the trap of trying to build something you think is simple that ends up being a complicated mess. We engineers are amongst the most common offenders in this regard, often taking a simple idea and letting feature creep run out of hand until the original idea is coated in 10 layers of additional functionality. I’d say that this is partly due to our training, as modular design and implementation was one of the core engineering principles drilled into me from day 1, although to be fair they also taught us how quickly the modular idea fell apart if you took it too far. There’s also the innate desire to cram as much functionality as you can into your product or service in the belief that it will appear more appealing to the end user; however, that’s not always the case.

When Geon was starting out I had a rough idea of what I wanted to do: see what was going on in a certain location. That in itself is a pretty simple idea and the first revisions reflected that, although that was probably due to my lack of coding experience more than anything else. As time went on I got distracted by other things that forced me away from my pet project, and upon returning I had one of those brainwaves for improving Geon in ways I had not yet considered. This led to the first version that actually had a login and a whole host of other features, something I was quite proud of. However it lacked focus, was confusing to use and ultimately, whilst it satisfied some of the core vision, it wasn’t anything more than a few RSS feeds tied together in a Silverlight front end, with a badly coded login and messaging framework hidden under the deluge of other features.

Something needed to change and thus Lobaco was born.

Increasingly I’m seeing that simplicity is the key to creating an application that users will want to use. On a recent trip to Adelaide my group of friends decided to use Beluga to co-ordinate various aspects of the trip. Beluga really only does one thing, group messaging, but it does it so well and in such a simple way that we constantly found ourselves coming back to it. Sure, many of its functions are already covered by, say, SMS or an online forum, but having a consistent view for all group members that just plain worked made organizing our band of bros that much easier. It’s this kind of simplicity that keeps me coming back to Instagr.am as well, even though similar levels of functionality are included in the Twitter client (apart from the filters).

Keeping an idea simple sounds like it would be easy enough, but the fact that so many fail to do so shows how hard it is to refine a project down to its fundamental base in order to develop a minimum viable product. Indeed this is why I find time away from developing my projects to be nearly as valuable as the time I spend with them, as often it will get me out of the problem space I’ve been operating in and allow me to refine the core idea. I’ve also found myself wanting simple products in lieu of those that do much more, just because the simple ones tend to do it better. This has started to lead me down the interesting path of finding things I think I can do better by removing the cruft from a competing product, and I have one to test out once I get the first iteration of the Lobaco client out of the way.

I guess that will be the true test to see if simplicity and focus are things customers desire, and not just geeks like me.

Zero Value Blogging: News From Non-News.

I’ve found that no matter how hard you try to keep the quality of your blog high you’ll eventually end up posting something that’s utter crap, even more so if you go for the silly idea of blogging on a regular basis. This particular blog is a good example of that as whilst I’m overall satisfied with the level of quality stuff I’ve written over the years there’s more than a few examples of me trying to shit when I didn’t have to go and ending up posting something that does little more than keep this blog alive in Google’s search algorithms. Still this won’t stop me from pointing out when others crap out posts that add nothing of any value to anyone, especially when the articles are pulled directly out of their asses.

One of my favorite blogs, and one that I regularly use as a punching bag here, is TechCrunch. Don’t get me wrong, there’s a reason that I keep coming back to them every day for my fix on up and coming companies (I’m mostly watching for competitors), but they do have a habit of making news out of innocuous crap in order to generate some page views. From creating recursive posts with zero content to wild speculation on new products with little to no research, they’re no strangers to peddling out shit to their readers and then seemingly acting surprised when a vocal bunch of them begin trolling. With the volume they put out it’s inevitable that a percentage of their content will end up like this, but that doesn’t make up for the fact that it adds nothing to the value of their site or the wider Internet.

Admittedly, meta-blogging like this is similarly of low to zero value, as all I’m really doing here is bellyaching about a much more successful tech blog. I try to avoid posts like these, wanting instead to give my readers the information behind the news so that my posts can stand by themselves (and as a result age well), but a combination of lack of inspiration, seeing one of these zero value posts and having this thing in draft for a couple of weeks finally pushed me over the edge. You’d think the irony would be getting to me, but I’m just happy that I can satisfy my OCD for the day by getting this thing written.

I think the biggest issue I have with this kind of blogging is that when big sites do it the smaller ones follow suit, turning the non-news story into a story in itself. I pride myself on not laying on the bullshit too strongly here, and if I can’t verify something I just don’t write about it (or I flag it as opinion). Unfortunately in the fast paced world of online news there’s really little time to allocate to fact checking a story when it hits, leaving you with the undesirable option of either reporting it verbatim or missing the boat by attempting to verify it. My rule of posting once a day negates this problem (and also helps keep me sane) but it also removes any benefit of posting on hot news, as I’m often behind the times. I’m not a news reporting site however, so the impact on me is quite minimal.

When you’re making a living from the number of page views that come to your site it is understandable that you’ll do anything to keep that number high. Hell even just having a higher page view count can make you feel pretty good (like it did back when I first started this site) but in the end being proud of your work feels a lot better. I might change my tune when I finally think about monetizing this site, which could be coming soon since I moved this to a proper server, but that won’t change the fact that I’ll hate on those who aren’t providing any real value and I encourage anyone to point back to this post should I start playing fast and loose with the quality content just to keep the page views up.


Deep Blue, Watson and The Evolution of AI.

I’m not sure why, but I get a little thrill every time I see something that used to require manual intervention from start to finish become completely automated. It’s probably because the more automated something is the more time I have to do other things, and there’s always that little thrill in watching something you built trundle along its way, even if it falls over part way through. My most recent experiment in this area was crafting a rudimentary trainer for Super Meat Boy to get me past a nigh on impossible part of the puzzle, co-ordinating the required keystrokes with millisecond precision and ultimately wresting me free of the death grip that game held on me.
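For the curious, the core of a trainer like that is surprisingly mundane: script the key presses and time them tightly. Below is a minimal sketch of the idea in C#, assuming Windows and the Win32 keybd_event API; the key codes, hold times and delays are invented purely for illustration and this is not the actual trainer I wrote.

```csharp
// A minimal sketch of scripted, tightly timed key presses on Windows.
// NOT the actual trainer; all timings and key choices are illustrative only.
using System;
using System.Diagnostics;
using System.Runtime.InteropServices;
using System.Threading;

class KeySequencer
{
    [DllImport("user32.dll")]
    static extern void keybd_event(byte bVk, byte bScan, uint dwFlags, UIntPtr dwExtraInfo);

    const uint KEYEVENTF_KEYUP = 0x0002;
    const byte VK_RIGHT = 0x27; // run right
    const byte VK_SPACE = 0x20; // jump

    // Thread.Sleep is only accurate to roughly 15ms by default, so busy-wait
    // on a Stopwatch when the timing needs to be closer to a millisecond.
    static void WaitMs(double ms)
    {
        var sw = Stopwatch.StartNew();
        while (sw.Elapsed.TotalMilliseconds < ms) { /* spin */ }
    }

    // Hold a key down for the given number of milliseconds, then release it.
    static void Press(byte key, double holdMs)
    {
        keybd_event(key, 0, 0, UIntPtr.Zero);               // key down
        WaitMs(holdMs);
        keybd_event(key, 0, KEYEVENTF_KEYUP, UIntPtr.Zero); // key up
    }

    static void Main()
    {
        Thread.Sleep(3000); // a few seconds to alt-tab into the game window

        // Hypothetical sequence: a run-up followed by a precisely timed jump.
        Press(VK_RIGHT, 450);
        WaitMs(30);
        Press(VK_SPACE, 120);
    }
}
```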

The world of AI is an extension of the automation idea, using machines to perform tasks that we would otherwise have to do ourselves. The concept has always fascinated me as more and more we’re seeing various forms of AI creeping their way into our everyday lives. Most people won’t recognize them as AI simply because they’re routine, but in reality many of the functions these weak AIs perform used to be in the realms of science fiction. We’re still a long way from having a strong AI like we’re used to seeing in the movies, but that doesn’t mean many facets of it aren’t already in widespread use today. Most people wouldn’t think twice when a computer asks them to speak their address, yet go back only a few decades and that would have been classed as strong AI, not the expert system it has evolved into today.

What’s even more interesting is when we create machines that are more capable than ourselves at performing certain tasks. The most notable example (thus far) of a computer being able to beat a human at a non-trivial task is Deep Blue, the chess playing computer that managed to beat the world chess champion Kasparov, albeit under dubious circumstances. Still, the chess board is a limited problem set, and whilst Deep Blue was a supercomputer in its time, today you’d find as much power hidden under the hood of your PlayStation 3. IBM’s research labs have been no slouch in developing Deep Blue’s successor, and it’s quite an impressive beast.

Watson, as it has come to be known, is the next step in the evolution of AIs performing tasks that have previously been solely in the realm of humans. The game of choice this time around is Jeopardy, a game show whose answers are in the form of a question and which makes extensive use of puns and colloquialisms. Jeopardy represents a unique challenge to AI developers as it involves complex natural language processing, searching immense data sets and creating relationships between disparate sources of information to finally culminate in an answer. Watson can currently determine whether or not it can answer a question within a couple of seconds, but that’s thanks to the giant supercomputer that’s backing it up. The demonstration round showed Watson was quite capable of playing with the Jeopardy champions, winning the round with a considerable lead.
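That “can I answer this?” step comes down to confidence: generate candidate answers, score them, and only buzz in if the best candidate clears a threshold. Here’s a toy sketch of that final decision in C#; the candidates, scores and threshold are all invented for illustration and are a far cry from IBM’s actual pipeline.

```csharp
// Toy illustration of a confidence-gated "buzz in" decision.
// The candidates and threshold below are made up; this is not IBM's code.
using System;
using System.Collections.Generic;
using System.Linq;

class BuzzDecision
{
    const double BuzzThreshold = 0.7; // illustrative value only

    static void Main()
    {
        // Hypothetical candidate answers with confidence scores in [0, 1],
        // as if produced by an upstream search-and-scoring stage.
        var candidates = new Dictionary<string, double>
        {
            { "What is Toronto?", 0.32 },
            { "What is Chicago?", 0.81 },
            { "What is Boston?",  0.15 },
        };

        var best = candidates.OrderByDescending(c => c.Value).First();

        if (best.Value >= BuzzThreshold)
            Console.WriteLine("Buzz in with: " + best.Key);
        else
            Console.WriteLine("Stay quiet and let the humans take this one.");
    }
}
```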

What really interested me in all of this, though, was the reaction from other people when I mentioned Watson to them. It seemed that a computer playing Jeopardy (and beating the human players) wasn’t really a big surprise at all; in fact it was expected. This says a lot about how we humans view computers, as most people expect them to be able to accomplish anything, despite the limitations that are obvious to us geeks. I’d say this has to do with the ubiquity of computers in our everyday lives and how much we use them to perform rudimentary tasks. The idea that a computer is capable of beating a human at anything isn’t a large stretch of the imagination if you treat them as mysterious black boxes, but it still honestly surprised me to learn that this is how many people think.


Last night saw Watson play its first real game against the Jeopardy champions and, whilst it didn’t repeat its performance of the demonstration round, it did tie for first place. The second round is scheduled to air sometime tomorrow (Australia time) and whilst I’ve not yet had a chance to watch the entire round I can’t tell you how excited I am to see the outcome. Either way the realm of AI has taken another step towards the ultimate goal of creating intelligence born not out of flesh but silicon, and whilst some might dread the prospect I for one can’t wait and will follow all developments with bated breath.

4G and The National Broadband Network: They’re not in Competition.

Telstra was a brilliant example of why natural monopolies should never be put in the hands of private shareholders. Whilst the situation has improved quite dramatically over the past decade thanks to strict regulation and enhanced competition, we’re still suffering a few headaches from not jumping on the broadband bus as early as we should have. Still, the Australian government is being no slouch when it comes to charging forward into the future with the National Broadband Network which, if fully implemented, will see Australia able to count itself amongst the top tier of Internet enabled nations. With the high cost and long implementation timeline, though, many are looking at alternatives that can provide similar benefits, and the first place they turn to is wireless.

Today the issue was brought into the spotlight again as Telstra announced plans for a nationwide rollout of 4G LTE (Long Term Evolution) wireless broadband services. The comparisons to the NBN flowed thick and fast, with many questioning the benefits of having both:

Telstra will significantly upgrade its mobile network to take advantage of fast 4G technology that will allow users to obtain speeds similar to home broadband connections while on the go.

The announcement comes on the back of a government-commissioned report warning uptake to its $36 billion network could be stifled by wireless technologies.

Long time readers will know I’ve touched on this issue briefly in the past after having a few long conversations with fellow IT workers about the NBN. On a purely theoretical level 4G wins out, simply because you get similar speeds without having to invest in a large scale fiber network, and you get those speeds wherever you have coverage. The problem is that whilst the 4G specification does make provisions for such high speeds, there are a lot of caveats around actually delivering them, and they’re not all just about signal strength.

Upgrading the current 3G network to support 4G is no small task in itself, requiring all towers to be upgraded with additional transceivers, antennas and supporting infrastructure. Whilst upgrading the towers themselves won’t be too difficult, the real problem comes when people start wanting to use this new connection to its fullest potential, attempting to get NBN speeds from their wireless broadband. This requires, at the very least, an infrastructure upgrade on the scale of Fiber to the Node (FTTN), as the bandwidth requirements will outstrip the current backhaul infrastructure if wireless is used as a replacement for the NBN. Most critics looking to replace the NBN with wireless neglect this fact, and in the end not upgrading the backhauls from the towers means that whilst NBN speeds would be theoretically possible they’d never be realised in practice.
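To put some rough (and entirely made up) numbers on that: even a heavily contended tower starts demanding a gigabit or more of backhaul once its users expect NBN-class speeds, as the quick calculation below shows.

```csharp
// Back-of-the-envelope sketch of the tower backhaul problem. All three input
// figures are assumptions picked purely for illustration, not real Telstra data.
using System;

class BackhaulEstimate
{
    static void Main()
    {
        double activeUsersPerTower = 200;  // assumed busy-hour active users
        double targetSpeedMbps     = 100;  // NBN-class headline speed
        double contentionRatio     = 20;   // assumed 20:1 oversubscription

        // Even with heavy contention the tower still needs this much backhaul.
        double requiredBackhaulMbps = activeUsersPerTower * targetSpeedMbps / contentionRatio;

        // With these assumptions that's 1,000 Mbps per tower, well beyond the
        // legacy microwave or copper links many towers were built with.
        Console.WriteLine("Required backhaul per tower: " + requiredBackhaulMbps + " Mbps");
    }
}
```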

Wireless is also no replacement for fixed line as it is much harder to provide a guaranteed level of service, something businesses and government entities rely on. Sure many of the limitations can be worked around with good engineering but it will still lack the scalability of a fixed fiber solution that already has implementations in the multi-gigabit range. Wireless might make sense for some low use consumer products (I’d love to get my mobile videos faster) but the fact is that if you’re relying on your Internet connection for critical business functions you’re not going to be doing them over wireless. Heck I don’t think anyone in the 4G enabled parts of the USA is even attempting to do that.

In reality the NBN and Telstra’s 4G network shouldn’t really be seen as being in competition with each other, they’re really 2 completely different products. The NBN is providing the ground level infrastructure for an Internet revolution in Australia, something that will bring extremely high speed Internet access to the masses. 4G should be seen as an evolutionary step in the mobile sector, enabling much more rich Internet services to be delivered to our handsets whilst offering some of the capability of a fixed line when you’re on the go. The sooner everyone realizes this the better as playing them off each other is just a waste of time and won’t lead to anything positive for Australia as a nation.


Nokia and Windows Phone 7: A Force to be Reckoned With.

I’ve had quite a few phones in my time but only 2 of them have ever been Nokias. The first was the tiny 8210 I bought purely because everyone else was getting a phone, so of course I needed one as well. The second was an ill-fated N95 which, despite being an absolutely gorgeous media phone, failed to work on my network of choice thanks to it being a regional model that the seller neglected to inform me about. Still, I always had a bit of a soft spot for Nokia devices because they got the job done and they were familiar to anyone who had used one before, saving many phone calls when my parents upgraded their handsets. I’ve even wondered aloud why developers ignore Nokia’s flagship mobile platform despite its absolutely ridiculous install base that dwarfs all of its competitors, acknowledging that it’s mostly due to their lack of innovation on the platform.

Then on the weekend a good friend of mine told me that Nokia had teamed up with Microsoft to replace Symbian with Windows Phone 7. I had heard about Nokia’s CEO releasing a memo signalling drastic changes ahead for the company, but I really didn’t expect it to result in something this dramatic:

Nokia CEO Stephen Elop announced a long-rumored partnership with Microsoft this morning that would make Windows Phone 7 Nokia’s primary mobile platform.

The announcement means the end is near for Nokia’s aging Symbian platform, which many (myself included) have criticized as being too archaic to compete with modern platforms like the iPhone OS or Android. And Nokia’s homegrown next-generation OS, MeeGo, will no longer be the mythical savior for the Finnish company, as it’s now being positioned more as an experiment.

We’ve argued for some time that a move to Windows Phone 7 would make the most sense for Nokia, and after Elop’s dramatic “burning platform” memo last weekend, it was all but certain that the company would link up with Microsoft.

It’s a bold move for both Nokia and Microsoft as, separately, they’re not much of a threat to the two other giants in the mobile industry. By combining forces, however, Nokia ensures that Windows Phone 7 reaches many more people than it currently can, delivering handsets at price ranges that other manufacturers just won’t touch. This should have a positive feedback effect, making the platform more attractive to developers, which in turn drives more users to the platform when their applications of choice are ported or emulated. Even their concept phones are looking pretty schmick.

The partnership runs much deeper than just another vendor hopping onto the WP7 bandwagon, however. Nokia has had a lot more experience than Microsoft in the mobile space and, going by the open letter that the CEOs of both companies wrote together, it looks like Microsoft is hoping to use that experience to further refine the WP7 line. There’s also deep integration in terms of Microsoft services (Bing for search and adCenter for ads) and, interestingly enough, Bing Maps won’t be powering Nokia’s WP7 devices; they’ll still run OVI Maps. I’m interested to see where this integration heads because Bing Maps is actually a pretty good product and I was never a fan of the maps on Nokia devices (mostly because of the subscription fee required). Nokia will also be porting all their content streams and application store across to the Microsoft Marketplace, which is expected considering the level of integration they’re going for.

Of course the question has been raised as to why they didn’t go for one of the alternatives, namely their own MeeGo platform or Google’s Android. MeeGo, for all its open source goodness, hasn’t experienced the same traction that Android has and has firmly been in the realm of “curious experiment” for the past year, even if Nokia is only admitting to it today. Android on the other hand would’ve made a lot of sense, however it appears that Nokia wanted to be an influencer of their new platform of choice rather than just another manufacturer. They’d never get this level of integration from Google unless they put in all the work, and realistically that would do nothing to help the Nokia brand; it would all be for Google. Thus WP7 is really the only choice with these considerations in mind, and I’m sure Microsoft was more than happy to welcome Nokia into the fray.

For a developer like me this just adds fuel to the WP7 fire that’s been burning in my head for the past couple of months. Although it didn’t take me long to become semi-competent with the iPhone SDK, the lure of easy WP7 development has been pretty hard to ignore, especially when I have to dive back into Visual Studio to make API changes. Nokia’s partnership with Microsoft means there’s all the more chance that WP7 will be a viable platform for the long term, and as such any time spent developing on it is time well spent. Still, if I was being truly honest with myself I’d just suck it up and do Android anyway, but after wrangling with Objective-C for so long I feel like I deserve a little foray back into the world of C# and Visual Studio goodness, and this announcement justifies that even more.