Monthly Archives: July 2010

Come NASA, Let us Torch the Pork Barrel.

It never fails to surprise me how much meddling the American Congress does in NASA's affairs, given that the agency's budget makes up a whopping 0.58% of total US government spending. The past 3 decades have seen many of NASA's great ideas turned on their heads, either through horrible design-by-committee or through directives from people with absolutely zero aerospace knowledge. More recently though I'd grown to appreciate the new direction that Obama had laid out for NASA because, unlike Bush's vision for space exploration, it was achievable and would lay the groundwork for future missions that would reach further into space than ever before. It seems however that NASA is still struggling to shrug off some of the pork barrel politics that plagued it in the past and which are now threatening to ruin NASA's future completely.

Specifically, recent news tells us that the Senate subcommittee in charge of NASA oversight is preparing a bill to derail Obama's new vision for space:

Though the bill effectively cancels the delayed and over-budget Constellation moon-rocket program — as Obama requested in his NASA budget — it would repurpose that money to build a new heavy-lift rocket while largely ignoring the president’s call to fund new space-faring technology and commercial rockets that would send humans into space.

But his dramatic overhaul of the human-spaceflight program has faced fierce resistance on Capitol Hill, especially from lawmakers in states with other NASA centers or with big NASA contracts like Utah, where the solid-rocket motor that would have powered Constellation’s Ares rockets is manufactured.

The Senate bill, which if passed would lay out the direction of the space program for the next three years, would revive the fortunes of Utah’s solid-rocket maker, ATK, by requiring NASA to keep using its solid-rocket motors for a new heavy-lift rocket.

Alright, I can understand that it would be hard for any congress critter not to fight for the jobs of his constituents, but realistically the writing has been on the wall for some time for these folk. The retirement of the shuttles and the infrastructure they rely on was announced over 5 years ago, but since the end date was well outside the election term at the time there was little resistance to it then. Now that we're halfway through the current term (with the scheduled end looking to occur just a year before the next election) dropping all those jobs that the shuttle program supports doesn't look too good, and they're fighting it by any means necessary.

Realistically though it's just an exercise in pork barrel politics. If you take a look at the shuttle's components you'll notice they're not all made in the same area. That would be fair enough if it were simply a matter of where the infrastructure happened to be, but the reason behind it was pure politics, with districts far beyond those surrounding the Kennedy Space Center each wanting a piece of the shuttle pie. As a result the external tanks are made in New Orleans, the SRBs in Utah and the Space Shuttle Main Engines in California¹, with each component having to be shipped over for assembly at the KSC. It spreads the pork around a fair bit, but the efficiency of the NASA program suffers as a result.

There are of course those who are taking this as a signal that Congress supports an alternative vision proposed by a group of NASA engineers, called DIRECT. Now I've always cast a skeptical eye over the DIRECT proposal: whilst it does take advantage of a lot of current infrastructure and reduces the launch gap considerably (on paper), it has never really gained any official traction. Additionally it keeps NASA in the business of designing rockets for the rather rudimentary activities that are now being taken over by private space organisations. Thus whilst there might be significant cost savings in comparison to the Ares series of rockets, they still pale in comparison to commercial offerings. I still support the idea of NASA developing a new heavy lift launch system, solely because it has no current commercial application, but while DIRECT does give this as an option it fails to get away from the inefficiencies that plague the shuttle program (namely the giant standing army of people required to support it).

Hopefully this bill doesn't get any traction, as it would just ruin the solid plan that Obama laid down for the future of humanity in space. It's time for NASA to break the chains that have been holding it back for so long by handing over some of its capabilities to those who can do the job cheaper, safer and faster. Only then can NASA hope to return to the days of being a pioneer in space, rather than languishing as a glorified taxi service to the ISS, as many would have it be.

¹I can't 100% guarantee the build location of the SSMEs as Rocketdyne has several locations and I can't seem to find an official source. As far as I can tell, however, they're built somewhere different again from both New Orleans and Utah.

Multi-Platform Development: Wise or Chumptastic?

Choosing a target platform when you develop an application is a big decision, as your choice will influence many design decisions, make certain tasks easier and relegate some things to the realm of the impossible. For those of us with a managerial bent this dilemma is usually solved with a simple cost/benefit analysis, i.e. which of these platforms can net us the greatest revenue for the smallest cost? Usually this comes down to how large a platform's user base is (although that's not everything, as I've pointed out), as that translates into the largest number of potential sales. However the advent of application distribution channels such as the App Store and Xbox LIVE Marketplace has complicated this metric somewhat, especially for those developers making it on their own.
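
To make that cost/benefit sum concrete, here's a minimal sketch in TypeScript of the kind of back-of-the-envelope comparison I mean. Every figure in it is a made-up assumption for illustration, not real market data:

```typescript
// Back-of-the-envelope platform comparison. Every figure below is
// hypothetical; substitute real market research before trusting the output.

interface Platform {
  name: string;
  userBase: number;       // reachable users on the platform
  conversionRate: number; // fraction of that base expected to buy
  price: number;          // sale price per copy
  devCost: number;        // estimated cost to build and ship for it
}

// Expected revenue minus the cost of targeting the platform.
function expectedProfit(p: Platform): number {
  return p.userBase * p.conversionRate * p.price - p.devCost;
}

const candidates: Platform[] = [
  { name: "iPhone",  userBase: 50_000_000,  conversionRate: 0.0005,  price: 0.99, devCost: 15_000 },
  { name: "Android", userBase: 10_000_000,  conversionRate: 0.0005,  price: 0.99, devCost: 20_000 },
  { name: "Symbian", userBase: 390_000_000, conversionRate: 0.00001, price: 0.99, devCost: 40_000 },
];

// Rank the candidates by expected profit, highest first.
for (const p of [...candidates].sort((a, b) => expectedProfit(b) - expectedProfit(a))) {
  console.log(`${p.name}: ${expectedProfit(p).toFixed(0)}`);
}
```

The interesting part isn't the output, it's that every term in that formula (user base, conversion, price, development cost) shifts when you change platforms, which is why the biggest user base doesn't automatically win.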

For large development houses the biggest market still appears to be the way that many of them gauge which platform to target first. One of the greatest examples of this is Call of Duty: Modern Warfare 2, a franchise that owes its success to its beginnings on the PC. The lack of dedicated servers angered the PC crowd, who felt the omission was a travesty against them and that their outrage would be enough to sway Infinity Ward to change their minds. However if you took a look at the sales numbers, PC copies of the game accounted for a very small percentage of overall sales, putting that platform squarely in the minority. The developers rightly (from a managerial perspective) ignored the complaints, as the additional work to develop the requested features would have far outweighed the potential sales they could have derived. Still they catered to 3 platforms simultaneously, as the opportunity cost of cross development was low for them thanks mostly to code portability between the Xbox and PC.

When you switch over to the other end of the spectrum the cost vs benefit analysis takes on a different form. You see, in large organisations you have the benefit of being able to hire people with the various skill sets required to develop a product for a targeted platform. If you're striking out on your own you are faced with the choice of either developing for what you know or training yourself up on the platform you wish to target. Whilst most skilled developers won't take long to become proficient, when you're looking to generate income every moment you spend not developing product is time and money you won't get back. Logic then dictates that you stay with what you know first before attempting to branch out into other areas, lest you waste a significant amount of time developing a product that doesn't suit your target platform.

You can see this kind of behaviour quite clearly in the mobile development space with a mere 2.6% of Android and iPhone developers having it both ways:

The answer (approximated in the graphic below) surprised us: of the nearly 55,000 mobile developers in our database over 1,000 (1,412, to be exact) had already published apps on both iOS and Android. This represents more than 3% of the published iOS developer population, and nearly 15% of the published Android developer group.

Despite the impressive total, we worried that it would be too easy for the fanboys on both sides of the aisle to dismiss these numbers as a crackpot minority. So we dug a little deeper, using our AppRank methodology to stack rank this cross-platform developer group based on the total volume and quality of coverage they’d received among the leading tech blogs worldwide.

Anecdotally the pattern I've seen is that most cross platform applications actually have their roots in a single platform. Take for instance the indie smash hit Braid, which made its debut on the Xbox LIVE platform. Whilst it was initially planned for a Windows release it took quite a while for it to make it to that platform. It has also made it onto the PS3, but not until long after it had proven a success on 2 other platforms. My inkling then is that many of these cross platform developers started out as single platform developers, and as they found success on one they decided it was worth their time to attempt another. Few, if any at all, would attempt to do more than one platform right from the get go.

So the question remains, is cross platform development actually worth it? For people like me, probably not initially. The additional work to create a product for multiple platforms not only increases the initial amount of work required, it also increases the ongoing maintenance time for each of the individual versions. It seems like the best idea is to write and polish your application on your platform of choice and then, should you find success, begin the process of porting it over. This goes double if you're looking to make some cash off a platform, as the up-front costs of development and reskilling are one of the quickest ways to kill a potential revenue stream before it's fully realized.

Red Dead Redemption: Bury Me Not on the Lone Prairie.

At first glance Red Dead Redemption was a game that wasn't up my alley at all. For starters, whilst I love open worlds and the opportunities they allow for emergent gameplay, I'm always cautious when it comes to sandbox style games. Rockstar has arguably mastered the format with their Grand Theft Auto series, but even their most compelling release to date (GTA IV) failed to capture me long enough to play the game all the way to the end. Additionally I've never been much of a western fan, finding the genre a little too bland for my tastes and instead engulfing myself in science fiction and pure fantasy. Still the hype and critical acclaim that Red Dead Redemption managed to garner was not lost on me, and not having delved into a good console game in a while I set myself the goal of playing through this title to the bitter end. What followed was a highly engrossing tale that ultimately left me with feelings that I'm still working through as I write this post.


The story begins with you playing a grizzled cowboy named John Marston, who is forced onto a train against his will by some upper class looking folks. As the story progresses you find out that he used to run in a gang, and the government is using him to track his former friends down to either capture or kill them. His initial attempts don't go so well, but thanks to the kindness of some local strangers he makes it through. The tale leads on from there in usual Rockstar style, with story missions appearing on the radar marked with a letter and random missions popping up in the form of strangers asking for help, events happening as you ride by and a variety of mini-games to pass the time. The free form nature of the game enables you to craft your own unique story for John Marston as he wanders the wild west looking for the pals of a life he's trying to leave behind.

Now, credit where it's due: Rockstar have created a world that feels alive, open and deceptively real. There are vast, breathtaking vistas around almost every corner, and even though you could ride across the entire place in less than half an hour you still have this undeniable feeling that you're in a world a million times bigger than yourself. The NPCs, whilst extremely shallow in their interactivity, make the areas come alive with their sound bites of commentary and, once you hit a certain point, make you feel like a living legend. The wildlife, which forms the basis of many mini-games, adds that extra bit of flavour that makes you feel like you're actually out in the west, able to make your living off the land.

The gameplay of Red Dead Redemption is actually quite a complicated beast, but in true Rockstar form it's progressively revealed to you over the course of the introductory missions so that it doesn't overwhelm you completely. The meat of the game lies within the storyline missions, which can be activated by approaching any of the giant letters on your map. In addition to the storyline missions there are also "stranger" missions, where you can help out various people you've only just met. When you've tapped out all of these options there are also the mini-games, which take the form of various leisure activities you'd expect in the wild west (poker, blackjack, horseshoes, etc), as well as jobs which can include things like breaking horses, herding cattle and chasing down bounties.

Now I won't lie to you: whilst there is an incredible breadth to the number of activities you can do, after a while they do start to meld into each other. Many of the storyline missions are quite similar in that you'll go to the mission giver, see a cut scene, ride for about 5 minutes whilst Marston and whoever you picked up share some dialog, and then arrive at your destination to either shoot up some bad guys or play one of the mini-games. It's enjoyable the first couple of times, and the trip to the destination is quite reminiscent of the various GTA incarnations, but after a while you get bored of having to spend so long riding everywhere just so they can flesh out the characters a bit more. This is where the sandbox genre falls down in my opinion: while you can do almost anything in this world, in the end that detracts from the uniqueness of the storyline missions, making everything feel like just another obstacle that needs to be passed.

Combat in Red Dead Redemption is nothing revolutionary in terms of what it accomplishes, but it does give enough variety to make sure you're not left feeling like a one trick pony. Rockstar took the tried and true Gears of War style of combat, in that you'll be running and gunning from behind cover with no visible health bar (save for the sound going muted and the screen being covered in blood splatters). Shooters on consoles are notoriously fiddly, and to combat this Rockstar added an aimbot that locks onto a target if you aim in their general direction. Whilst I appreciated the addition (the game would've been tiresome without it), when it was taken away for certain things, like using a gatling gun, I found myself hating those sequences rather than reveling in them. This was wholeheartedly made up for by the ability to lasso and hogtie people, which I used with reckless abandon whenever I had the chance. Strangely though you can't hogtie any animal, not even a hog! You are able to lasso them though and, in what I assume is a bug, glide blissfully over any terrain as your prey runs scared from you. You can also do this with other people's horses, which is probably my favourite way to travel somewhere random when feeling bored in Red Dead Redemption.

PLOT SPOILERS FOLLOW BELOW HERE:

Now as for the story and its conclusion, those of you who followed me on Twitter can already guess how I felt about the whole ordeal. After spending 20 hours getting to know the man that was John Marston I'll admit I became sentimentally attached to the former criminal trying to mend his ways. After chasing down the last of his former gang and riding home to the tear inducing song Compass by Jamie Lidell, I fully expected to see the credits roll as John embraced Abigail for the first time in what felt like forever. However the missions that followed felt hollow, as they put you right back at the start of the game and strip you of a few key things (like being able to change your outfit). I knew that in the end something bad was coming for him, but what eventuated was worse than I had imagined.

You see, in the final moments of John's life, where he's gunned down by no fewer than 20 American soldiers, there was nothing noble about the manner of his death. I can appreciate the sacrifice for his wife and son (who are now free from his past), and the harsh reality is that it probably rings true to what would have happened back in those days. Still, I wanted at least the opportunity to make a last stand that would end in a shoot out I couldn't win, instead of Marston walking out to be gunned down where he stood. I'll also admit that my anger at John's end stems from a real feeling of grief at his loss, as just writing that down has me fighting back a tear.

In the end I did what I always do when that happens: I looked for answers. After looking around for a bit I found that there was a stranger mission available after the end where Jack gets revenge for his father. I went and did it, and whilst I felt somewhat redeemed by the fact that Edgar Ross finally got what he deserved (with me emptying at least 15 bullets into him), there was still this hollow feeling I couldn't shake, almost to the point of loading up my last saved game with Marston still alive in it so I could pretend it never happened.

SPOILERS OVER!

In the end Rockstar have made yet another great game that has captured the hearts of nearly everyone who's played it. Whilst I might be uncomfortable with how my last few hours with it played out, I can't deny that I spent a good 20 hours of my life on the game and don't regret a single minute of it. The game is not without its issues, but if you're a fan of Rockstar and the sandbox worlds they create then you won't feel out of place in the wild west world of Red Dead Redemption.

Rating: 8.5/10

Red Dead Redemption is available right now on PlayStation 3 and Xbox 360 for AU$88 on either platform. Game was played on the PlayStation 3 with around 21 hours of reported play time and 73% overall completion.

There’s No One Device To Change The World.

I consider myself pretty lucky to be living in a time when technical advancements are happening so rapidly that the world as we knew it 10 years ago seems distant enough to almost be a dream. Today I carry in my pocket as much computing power as a high end desktop used to hold, and if I so desire I can tap into untold pools of resources from cloud based companies for a fraction of what the same capability would've cost me even a couple of years ago. With technology moving forward at such a feverish pace it's not surprising that we manage to come up with an almost infinite number of ways to utilize it. Within this continuum of possibilities there are trends towards certain aspects that resonate with a need or want of a particular audience, thereby driving demand for a product centered around them. As such we've seen the development of many devices touted as the next revolution in technology, even as the future of computing itself.

Two such ideas spring to mind when I consider recent advances in computing technology and both of them, on the surface, appear to be at odds with each other.

The first is the netbook. I can remember clearly the day they first started making the rounds in the tech news circles I frequent, with community sentiment clearly divided over this new form of computing. In essence a netbook is a rethink of traditional computing ideals: the latest and greatest computer is no longer required for the vast majority of tasks that users perform. It took me back to my years as a retail salesman, as I can remember even back then telling over 90% of my customers that any computer they bought from us would satisfy their needs, since all they were doing was web browsing, email and documents. The netbook then was the embodiment of the majority of users' requirements, with the added benefits of being portable and, most importantly, cheap. The market exploded as the low barrier to entry brought portable computing to the masses who, before netbooks, never saw a use for a portable computer.

The second is tablets. These kinds of devices aren't particularly new, although I'll forgive you if your first ever experience with one was the iPad. I remember when I was starting out at university I looked into getting a tablet as an alternative to carrying around notepads everywhere and was sorely disappointed at the offerings. Back then the tablet idea was more of a laptop with a swivel touchscreen added to it. Couple that with the fact that, in order to keep costs down, they were woefully underpowered, and you had devices that, whilst they had their niche, never saw widespread adoption. The introduction of a more appliance focused device in the form of the iPad arguably got the other manufacturers developing devices for consumption rather than general computing. Now the tablet market has exploded with a flurry of competing devices, all looking to capture this next computing revolution.

Both of these types of devices have been touted as the future of computing at one point or another, and both have been pushed as being in direct competition with each other. In fact the latest industry numbers and predictions would have you believe that the tablet market has caused a crash in netbook sales. The danger in drawing such conclusions is that you're comparing what amounts to an emerging market with an established, maturing industry. Slowing growth might sound like a death knell, but it's actually more to do with the fact that as a market matures there are more people not buying the devices because they already have one, i.e. the market is reaching saturation point. Additionally the percentages give the wrong idea, since they ignore the market size: in 2010 alone there have already been 20 million netbooks sold, over 6 times that of the iPad and similar devices. Realistically these devices aren't even in competition with each other.
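
A quick toy calculation shows why the percentages mislead. The unit counts echo the figures above, but both growth rates are numbers I've simply made up to illustrate the arithmetic:

```typescript
// Illustrative arithmetic only: the unit counts echo the article above,
// but both growth rates are invented for the sake of the example.

const netbookUnits = 20_000_000;      // netbooks sold in 2010 so far
const tabletUnits = netbookUnits / 6; // "over 6 times" fewer, ~3.3M iPads

const netbookGrowth = 0.10; // mature market: headlines call this a "crash"
const tabletGrowth = 2.00;  // emerging market: headlines call this a "boom"

// Units each market would ship next year under those rates:
const netbooksNextYear = netbookUnits * (1 + netbookGrowth); // 22 million
const tabletsNextYear = tabletUnits * (1 + tabletGrowth);    // 10 million

// The "crashing" market still ships more than twice the units of the
// "booming" one. Slowing percentage growth is saturation, not death.
console.log({ netbooksNextYear, tabletsNextYear });
```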

So why did I choose the rather grandiose title for this post rather than say “Tablets vs Netbooks, Facts and Figures”? The answer, strangely enough, lies within spaghetti sauce:

(I wholeheartedly encourage you to watch that entire video, it’s quite fantastic)

The talk focuses on the work of Howard Moskowitz, who is famous for reinventing the canned spaghetti sauce industry. Companies approached him to find out what the perfect product would be for their target markets. After following traditional scientific methods he found that his data bore no correlation to the variables he had to play with, until he realised that there could be no perfect product; there had to be perfect products. The paradigm shift he brought on in the food industry can be seen in almost all the products it produces today, with specific sets of offerings that cater to the various clumps of consumers who desire them.

How the heck does this relate to tablets and netbooks? Simple: neither one of these types of products is the perfect solution to end user computing, and neither were any of the products that came before them. Over time we've discovered trends that seem to work well in worldwide markets and we've latched onto those, with companies attempting to find the perfect solution to their users' needs by trying to aggregate all possible options. However no one product can satisfy everyone, and thus we have a diverse range of devices that fit our various needs. To borrow the three-sauces analogy, there are those who like their computing focused on consumption (tablets, MIDs, consoles), creation (desktops, laptops, netbooks) and integration (smartphones). These are of course wholly unresearched categories, but they seem to ring true from my anecdotal experience with friends and their varying approaches to computing.

So whilst we may have revolutions and paradigm shifts in the computing world, no one of them will end up being the perfect solution to all our needs. As time goes by we will notice the trends and clumps of users that share certain requirements and develop solutions for them, and the offerings from companies will become increasingly focused on these key areas. For the companies it means more work as they play catch up with each revolution as it happens; for us it means a greater computing experience than we've ever had before, and that's something that never fails to excite me.

Boredom Breeds Jerks.

If I'm seriously playing a game I find it hard to take the evil/jerk options when I'm given the choice. Maybe it's because I like to think of myself as an upstanding member of society and being a total ass in games runs counter to that image, but it's probably because I like being the hero loved by everyone rather than the dark tyrant conquering the world. Still, if there are marked differences between the good and evil choices and the game is good enough to warrant a second playthrough (like Mass Effect 1 was; I haven't done it with 2) I'll usually go the other way just to get that experience. However I've found that, usually in sandbox type games, once I get bored with certain aspects of the game I have a tendency to switch into what I call Jerk Mode, where I start messing with the game and its people in any way possible, usually with hilarious results (for me anyway).

I hadn't really done this in quite a while until I recently began trying to play through Red Dead Redemption. I had fully expected the game to be done in about 15 hours, but after spending that long slogging through the storyline missions I started to get a little bored with the world I had been in for so long. What followed was a classic example of Jerk Mode engaging, as I began hogtying the entire town of Blackwater, punching up horses and eventually letting off hundreds of rounds in the middle of town just so I could find where the last free roaming citizens were hiding, only to add them to my pile of hogtied comrades. Why the in game police take offense when I look at them the wrong way while holding a knife, but barely give me a second look when I have a pile of 20 hostages tied up, is beyond me, but it was quite comical when they'd walk past saying "Good day Mr Marston".

I've also found myself in Jerk Mode whenever I'm watching someone play a game that allows you to break things in extremely funny ways. I remember watching one of my housemates play Fallout 3 just after it was released, and he remarked on how he could kill anyone in the game, even the core story NPCs. What ensued was an hour of me watching over his shoulder and telling him to beat up everyone he came across, just because it would be funny. To his credit he never relented, although what followed was me installing the game afterwards and acting out my twisted sense of humour on the poor citizens of the Fallout world, much to his dismay.

Looking back at all the games that were privy to my jerky behaviour, I've come to realise how much it endeared those games to me. Once I had reached that point of boredom in any other game I would have simply stopped playing and found something else to fill my time. With the ability to change my playstyle completely and fool around for a while I'd end up spending quite a lot more time with these games than I usually would and, most interestingly, enjoying them quite a lot more. It could be that I'm just suppressing my inner jerk and these few times are the moments when he comes out to play, but there's something to be said for a game that allows a player who has lost interest to immediately rekindle it, even if that means torturing the poor NPCs of the game's virtual world.

My gut feeling is that this behaviour stems from the fact that open worlds with emergent properties really didn't exist until about 5 years ago, and now that I have the opportunity I'm reveling in a new found freedom. As someone who's been a gamer for as long as he's been able to muster the hand eye coordination required, I lived through the days when games were barely able to stray from the linear formula. Today however it seems odd when games don't incorporate real world physics, meaningful choices and at least the feeling of a big wide world that you can bend to your whim. Sure, there are still great experiences to be had with strictly linear games, but I'll always have a soft spot for the games that keep me hanging around for a little while after I'm done with them, unleashing my inner jerk on the world.

Symbian: The Ignored Giant.

Market research is a great way to procrastinate. I've spent quite a lot of time getting to know which platforms I should be targeting, just so that I don't waste my actual development time building something that no one will bother using. In this time that would have been better spent actually coding something, I've come to notice an interesting trend in the world of mobile applications: everyone seems to be ignoring the biggest market of them all, Symbian. Owned by Nokia, Symbian smartphones still dominate the market with over 45% market share, which dwarfs all competitors to the point of being more than RIM (BlackBerry) and the iPhone combined. So why isn't every other developer jumping at the opportunity to exploit this market the way they have for the likes of Android and the iPhone? The answer, to me at least, starts simply enough but quickly gets convoluted.

At its heart the neglect of the Symbian platform can be traced back to one thing: money. Symbian has been around for quite some time (its ancestors can be found as far back as the late 1980s), although its current incarnation in the world of smartphones made its first appearance back in 2001, opening up a world where a phone's capabilities could be expanded by installing third party applications. Its release was closely followed by the first release of Pocket PC (later renamed Windows Mobile) to support smartphones, but Symbian still had the upper hand thanks to its uptake by many of the large phone manufacturers. As time went on Symbian found its way onto nearly all of Nokia's advanced handsets which, coupled with their easy to use interfaces and overwhelming feature sets, led to astonishing popularity: the 100 millionth Symbian handset was sold only 5 years later, with total shipments today exceeding 390 million.

Still, unlike the iPhone or Android platforms, there really wasn't any incentive to develop for them. The segmentation of both the Symbian and Windows Mobile markets was, and still is, quite vast, with no real guarantee of what features or specifications any given phone might have. Whilst there were still many applications that could be developed despite these limitations, many developers shunned the mobile space because, apart from corporate applications, there was no tangible way to monetize their efforts. Then along came the iPhone, with one standard set of hardware, a large fanbase and a distribution channel with built in monetization for any developer willing to shell out the $99 fee. After that the mobile space began to open up considerably, but Symbian, even with its giant market share, has yet to capitalize on the mobile application market.

This means that whilst the Symbian market might be the largest of them all, it's also the one a developer is least likely to profit from. Symbian handsets cater to a much larger market than any other, including the lower end that even Android fails to capture. Unlike Apple, which deliberately targeted a market with cash to spare, Symbian users are the least likely to pony up cash for an application. Additionally, since there's been no real central, easy to use medium for users to get applications onto their Symbian phones (I know, I tried it on my N95), the vast majority of them won't be in the mindset to go looking for such an application, favouring web based alternatives instead.

There is also, of course, the technical challenge of building an application on these platforms. Whilst I've only dabbled in Windows Mobile (which for a C# developer was incredibly easy), recent reports show that Symbian is not only the hardest to develop for, it also requires two to three times the amount of code needed to complete the same application on an iPhone or Android handset respectively. Whilst learning another language is really just a lesson in semantics, it still slows your development time down considerably, and when you've got your eye on making money from your venture a steep learning curve is a major barrier to entry. There has been some work to reduce this with the integration of the S60 platform with the open source cross platform framework Qt, but my previous experiences with that framework don't make me hopeful that it will make developing for Symbian much easier.

The ignored giant Symbian is an interesting phenomenon, as intuition would tell you that the largest install base should drive the largest secondary markets. As a developer I still find it hard to ignore the call of almost 400 million devices that could possibly run my software, but knowing a few people who own Symbian devices (read: they use their phone as a phone, not much else) I still feel my effort would be better spent elsewhere. As time goes by it will be interesting to see whether Symbian can hold onto its dominance in this space or whether it will eventually lose out to the young upstarts, Android and iOS.

Are Micro-Niche Businesses the Future of Commerce?

Maybe I've just been reading far too much about the world of startups and small business recently, but there seems to be a trend towards developing niche businesses that are profitable due to their small size and low overheads. It's a good model, as it forces founders to make sure their core business model is solid, since their relative size typically prevents them from diversifying their offerings (although it is possible to create multiple niche businesses with the right planning). The vast majority of them appear to be lifestyle businesses created by their owners to escape the drudgery of corporate life, and it's only become possible in the last decade or so thanks in part to the information conduit that is the Internet.

I'm not the only one noticing this trend either. US census data indicates that the past couple of years have seen a phenomenal number of new businesses pop up each and every year:

If you took the time to sit down and sift through the US Census Bureau data, you'd see that over the past few years, entrepreneurs are starting new businesses at an unprecedented rate. Consistently, the number of existing businesses at the end of the year has increased by between 500,000 and 1 million.

That means that before subtracting out the number of startups that fail, the gross number of new businesses started is actually much higher than 1 million per year.  And that’s in the U.S. alone.

Why are entrepreneurs starting new businesses in record numbers?  The first chapter of my new book, Conquer the Chaos, makes the case we’re in an “Entrepreneurial Revolution” and it’s happening due to five big reasons.

The global financial crisis was a wake up call for many people, showing that even the largest corporate entities weren't immune to the economy. As such people have become increasingly disillusioned with the traditional notion of being employed by a large company for the majority of their lives and have begun to seek alternatives. Traditionally there weren't many alternatives, as the capital costs of starting up a business were out of reach of the everyman. Today however you can drop in an ecommerce site, set up a PayPal account, find a drop shipper with your desired product, and have your entire business ready to take orders in less than a week for orders of magnitude less than what it used to cost. If you can tolerate the risk and are dedicated to achieving your goal there's really nothing stopping you from trying, and as many have proven it really does work.

This got me thinking: with so many small companies sprouting up to target specific niches, how long will it be before all the niches are covered? Realistically I know there are certain industries where a small company can't really make it, usually in capital intensive markets (say, high performance computing clusters). But there is an almost endless supply of other markets that can be directly targeted by small companies offering products and services specifically tailored to them. For us consumers it means we (hopefully) get a much better product or service, since it's aimed directly at our needs rather than shoehorning in something built for a wider audience.

The reality is though that when a niche company provides a product or service effectively it will begin to draw customers away from the incumbent suppliers. Initially this can be ignored, as larger companies can absorb such losses without drastically affecting their business. Depending on how successful the new niche business is, however, the larger corporation will often look to acquire it, an offer most good businesses find hard to pass up. Usually this ends with the business being melded into the larger corporate entity, although in some cases they keep operating independently with all the profits heading upstream. The latter is the better of the two options for the consumer, but it is hardly the norm, as it does nothing to strengthen the parent's brand power.

In the end it seems that whilst it's infinitely easier to strike out on your own, traditional business models will inevitably stick around for a long time to come. For us as consumers it means we will always be spoilt for choice when it comes to finding the right product or service for our needs, and should we come up empty handed we'll be staring down the barrel of a new market. Whether you take advantage of that opportunity is completely up to you, but as the trend of over 1 million businesses being started each year in the US alone shows, it's more than likely that someone else will do it if you don't.

And now excuse me while I whip myself back into developing shape before some smart ass in a garage codes up my ideas 😉

Femtocells: I’m Not Paying For Your Infrastructure.

I spent the vast majority of my life living out in the country, where mobile phone reception was scarce even on top of the highest hill you could find. For many years I stayed with Telstra because they were the only ones who could provide me with a connection that wouldn't drop out most of the time and, thanks to my employment at a retail establishment that peddled their wares, I was able to get a very decent plan that kept me going until about 2 years ago. Since moving into the city I've always felt spoiled having mobile phone reception wherever I go, and I'm still mildly surprised when I get coverage indoors, since the corrugated iron roof we had back home would kill any signal. I know I'm not the only one who's had these kinds of issues, but since I was at home I had many other ways to contact people; it was more a convenience factor for those few who didn't have IM or email.

The problem hasn't gone away for my rural comrades, who still languish with poor cell phone reception. Since the population is spread so sparsely it's not worth any mobile provider's time and money to try to improve the signal out there, as the potential customer base is quite small. It's the same reason they haven't bothered upgrading many rural exchanges with the DSLAM architecture required to give those same people broadband, although there are other companies providing directional wireless broadband solutions to cover them (that's not the same as 3G broadband, just in case you were thinking that). The solution that overseas carriers seem to be peddling to those who don't get the mobile reception they want lies with the introduction of femtocells, but I can't really see how that fixes anything, nor why anyone would actually pay for the privilege.

A femtocell is basically a miniature version of those giant cell towers you see every so often. They work off the idea of routing voice and data traffic over a broadband connection, usually one provided by the person who purchased the femtocell. From a technical point of view it's actually quite a simple and elegant solution, as it makes use of existing infrastructure to provide a service that some people lack. When deployed into the real world however there are some issues that I just can't see a simple solution for, especially when you consider those in a situation similar to mine all those years ago.

Firstly there's the dependency on a broadband connection. Whilst I'm not terribly familiar with the broadband situation in the USA, here in Australia if you're lucky enough to get any kind of broadband chances are you're within a short distance of a telephone exchange, which typically has its own cell tower. If you're unable to get cell phone reception but you have broadband connected, you're either inside a building (which usually only kills 3G) or in some kind of freakish blackspot. Either way you're still connected to the outside world via the Internet and possibly a landline or VOIP phone, the latter of which could be your mobile phone if it's capable of running Skype or similar. Additionally, for those of us who lived with little to no mobile reception and lacked proper broadband, a femtocell is useless, since it simply can't operate in those conditions.

There's also the fact that, should Australian mobile carriers follow the USA's lead, femtocells will have to be purchased by the end user. Now it's always nice to have full bars on your phone, but realistically if you're at home there's not much need for it. The data aspect is fully covered by having wifi in the house, which even the cheapest of ADSL routers come with these days. I can understand the voice aspect somewhat, although if you have broadband in Australia you either have a landline, which you can divert your mobile to when you're out of range of a tower, or you have naked DSL and VOIP, which could be used in much the same way. Additionally, if you've got a smartphone there's the possibility of using something like Skype, which would still be contactable via the Internet should you lose signal at home. Really the mobile carriers should provide the customer with an outdoor picocell instead, as coverage blackspots like that tend not to be isolated to a single household.

I guess I'm approaching this problem from the view of someone technically inclined, as I can see the attraction for someone who's stuck in a blackspot and doesn't want to mess around with diverts and VOIP on their phone. Still, the limited application of such devices really makes me think the cost should be borne by the carrier, as realistically it's their infrastructure that the customer is paying for; even if the device were free there's still the broadband connection, bandwidth and power these things require. The problem would be rendered completely moot if a service like Google Voice came to Australia, but for now it seems we're stuck with less than ideal solutions to poor signal in residential areas.

Did I Miss Short Sighted Management 101?

I know I don't have much real world experience when it comes to managing people, with the majority of my experience coming from 3 short stints of project management. Still, when I couple that with my 2 years of formal management training I feel I've got a good idea of how to organise a team of people to achieve a certain goal. Additionally I've had a fair bit of experience working under many different kinds of people, all with their own distinct management styles, so I know what works practically and what doesn't. So believe me when I say that one of the most common problems in management (apart from not knowing your own weaknesses) is a clear lack of strategic direction and planning¹. Whilst I'd love to say that this is distinctly a public sector problem, thanks wholly to our 3 year parliamentary terms, it is also rife in private industry.

At its heart the issue stems from the immediate needs of the organisation being given a much higher priority than its long term goals. Logically this is understandable, as the immediate issues generally have real impacts that can be measured and benefits that can be realised in short time frames. However it also means that, unless you're extremely lucky, you'll be sacrificing your long term sustainability for those short term gains. For the public sector this kind of behaviour is almost ingrained, as goals that stretch beyond the current incumbent's term don't usually get a whole lot of traction. For the private sector it usually comes down to maximising the quarterly or annual figures, which often makes the decisions even worse than those in the public sector.

A brilliant example of this fell into my lap yesterday when the Department of Education, Employment and Workplace Relations (DEEWR) decided that it no longer required the services of 17% of its contractor workforce and promptly told them they didn't have a job anymore:

The federal Department of Education, Employment and Workplace Relations ended the contracts of 51 of its 300 IT contractors, some of whom had worked at its head office for years.

Staff in other workplaces across Canberra are anticipating similar news this month as the bureaucracy seeks to cut its technology budget by $400 million this financial year.

Stunned Education Department contractors told The Canberra Times that several staff were still unaware they had no job to return to today.

Now I haven't been able to source any of the reasoning behind the decision², despite having 2 family members working in the department, but even without that I can tell you that the decision was made with no regard for the strategic direction of the department (although I'll also tell you how they'd argue the opposite). It all comes down to a game of numbers that went horribly wrong, coupled with a distinct disconnect in the lines of communication between the contractors' direct managers and those who made the decision to let them go.

Taking a long term view of this situation would have you plan this kind of move out at least a couple of months beforehand. The argument could be made that they didn't have the work for them anymore, but my people on the inside tell me that's not the case, as they're still understaffed for the workload they have. The flip side could be traced back to the Gershon Report, which advocated slashing contractor numbers and replacing them with permanent staff. That move however would have required advertising those positions months ahead of pulling such a stunt which, if you checked the agency's job listings, you'd know hasn't happened. The only remaining explanations are either a huge management stuff up or an attempt to slim down their budget.

Both trains of thought completely disregard the long term goals of the department. Dropping that many staff with so little notice means that the work they were responsible for is no longer being taken care of. Additionally, such short notice of termination means there could not have been a handover to other staff, leaving quite a lot of work in a state of limbo, either having to be redone completely or shoehorned in to meet its milestones. The quick termination would not endear the organisation to the contractors it gave the shaft to either, and DEEWR already had a somewhat shaky reputation when it came to its contracted staff.

As it turns out though it was probably a massive management stuff up, since they've publicly apologised for what they did and appear to be working to get them all back on board.

What is there to be learnt from all this? The overarching point is that major decisions should be made with a vision that stretches beyond any immediate time frame. In my example a misinterpreted directive was applied without regard to either its short or long term consequences. Had they stuck to their guns the decision would have had long lasting effects on the department's ability to meet its goals. As it stands they've already managed to make 51 contractors uncomfortable with their working situation, and the buzz has already had others questioning their positions. Realistically no contractor who's aware of this news will look at DEEWR seriously from now on.

Snap decisions should never be made when they have the potential for consequences that stretch beyond the immediate time frame. The most common type of manager, the one who rose from the ranks of their fellow employees, is unfortunately the most prone to lacking strategic vision for their team. There is unfortunately little that we underlings can do to steer our managers clear of these kinds of mistakes, but you can minimize their impact by providing sound, timely advice to influence their decisions in the right way. Hopefully if you're in a position to make such decisions you've either already identified this problem or taken my advice on board, but I'm happy to discuss your points here should you disagree with me 😉

¹Strategic in this sense relates to long term ideas, on the order of 3-5 years. Ask your current boss whether they have a strategic plan for your area/section/division; you might be surprised at what they give you back.

²As it turns out they pointed to the Gershon Report as the source for firing the contractors, despite the report specifically recommending that contractors be replaced with permanent staff, not fired outright. So the real reason was a little from both my trains of thought: a huge management stuff up done in the hope of slashing their budget. Words fail me to describe how idiotic this is.

Web Standards: They All Have Their Agenda.

It really should come as no surprise that anything a large corporation does is usually done in its own best interests. By definition a corporation's existence is centered around increasing profit for its shareholders within the bounds of the law, and a company operating outside that definition won't be long for this world. Still, we manage to suspend disbelief for certain companies with qualities we aspire to, but make no mistake: they are, in the end, driven primarily by the profit motive. Nearly all other secondary activities are conducted to further that primary directive, even if on the surface they don't appear that way.

Take for instance the current web standards war that's brewing between Apple and Adobe. Whilst both companies would have you believe that their stance is the only answer, the fundamental issue is not one of ubiquitous web standards; rather it is about control over the future of the Internet and who will be the dominant player. I'm on record as stating that Adobe will win out thanks to its current market penetration and support from many big players. It's no secret that Google is more on Adobe's side in this war than Apple's, as a recent post from one of their (well, their subsidiary's) employees states:

There’s been a lot of discussion lately about whether or not the HTML5 <video> tag is going to replace Flash Player for video distribution on the web. We’ve been excited about the HTML5 effort and <video> tag for quite a while now, and most YouTube videos can now be played via our HTML5 player. This work has shown us that, while the <video> tag is a big step forward for open standards, the Adobe Flash Platform will continue to play a critical role in video distribution.

It's important to understand what a site like YouTube needs from the browser in order to provide a good experience for viewers as well as content creators. We need to do more than just point the browser at a video file like the image tag does – there's a lot more to it than just retrieving and displaying a video. The <video> tag certainly addresses the basic requirements and is making good progress on meeting others, but the <video> tag does not currently meet all the needs of a site like YouTube:

All of the points Harding makes add quite a lot of fuel to the fire in the whole web standards debate. He's quite right that the current version of HTML5 does not (and most likely cannot) provide all the features required by sites like YouTube. As such there will always be a need for plugins that fill the functionality gap between the web standards and what is technically possible. The richer the standards become, the less need there is for plugins, but as it stands right now the features provided by third party plugins are almost a necessity for a lot of sites on the Internet, and it will be a long time before the standards catch up.

However if you read on you'll see that YouTube's apprehension about switching over to a full HTML5 based site is fueled not only by the lack of features but also by the fact that their bread and butter, video, still lacks agreement on some core components. One of those is the codec that will be used as the standard for all content delivered with the <video> tag. Usually you would go with the most popular codec of the lot, which is currently H.264. The problem with that codec is that, while it is currently royalty free, it is encumbered by a number of patents held by a consortium of companies. This poses a problem for browser developers, as it means they will eventually have to pay fees to implement the video part of the web standard, which doesn't really fit with the overall vision of the HTML5 standard. Google of course has its own open codec, VP8, which it has garnered support for, and that brings us full circle back to my original point: they're only developing it to further their bottom line.
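
This codec stalemate is also why sites can't simply assume <video> will work; they have to probe for it. Here's a minimal sketch, in TypeScript, of the kind of feature detection a video site of this era has to run before choosing a player. The function and its fallback behaviour are my own illustrative assumptions, not YouTube's actual code:

```typescript
// A minimal sketch of 2010-era player selection: probe the <video> element
// and its codec support, falling back to Flash when either is missing.
// The naming and structure here are hypothetical, not any site's real code.

function pickPlayer(): "html5-h264" | "html5-webm" | "flash" {
  const probe = document.createElement("video");

  // Browsers without <video> support expose no canPlayType at all.
  if (typeof probe.canPlayType !== "function") {
    return "flash";
  }

  // canPlayType answers "probably", "maybe" or "" - never a firm yes,
  // which is part of why Flash stuck around as the safe default.
  if (probe.canPlayType('video/mp4; codecs="avc1.42E01E, mp4a.40.2"') !== "") {
    return "html5-h264"; // patent-encumbered, but widely hardware-decoded
  }
  if (probe.canPlayType('video/webm; codecs="vp8, vorbis"') !== "") {
    return "html5-webm"; // Google's open alternative
  }
  return "flash"; // no agreed codec: back to the plugin
}
```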

Ultimately it will be the market that decides the winner out of all this. Web standards will always lag behind what Internet enabled devices are capable of, and that means there will always be third party plugins to bridge the gap. Whether that gap is bridged by Adobe, Apple or some other company remains to be seen, but so far the market still seems to side with Adobe, as the vast majority of sites (including this one) make use of Flash in one way or another. Many sites will still go to the effort of making their content more accessible to mobile devices (like this one!), but in the end we'd have to do that even if Apple ends up losing the war on Flash.

I guess what I'm trying to say is: if a company tells you they're doing something that seems to be for your benefit, ask yourself what they have to gain from doing it. In the end you'll notice that they benefit from it far more than you ever could.