Monthly Archives: August 2011

How Would We Defend Against a Killer Asteroid?

Earth is constantly being bombarded with all sorts of things from space. The sun batters us with solar wind and radiation, asteroids routinely make their fiery descents, and every so often one of our own bits of equipment comes back down once it's reached the end of its life (or sometimes sooner). Thankfully our atmosphere does a pretty good job of breaking these things up before they reach the ground, and most of the time debris from space lands in an unpopulated area, causing little to no harm. Still, there's evidence littering our planet that tells us large objects from space do make their way down to the surface, often with very deadly consequences.

Probably the most famous piece of evidence to support this, even though people don't usually know its name, is the Chicxulub crater on the Yucatan peninsula. This is the crater currently believed to be responsible for the mass extinction event that happened approximately 65 million years ago, the one that wiped out the dinosaurs. The impactor, a fancy name for the asteroid that made that giant crater, is estimated to have been about 10 km in diameter. The collision has been estimated to have a total energy output of something like 96 teratons of TNT, 2 million times more powerful than the largest nuclear weapon ever detonated. With that kind of power being unleashed it's very plausible that it was responsible for the extinction of many species.
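As a quick sanity check on those figures (assuming a rocky impactor with a density of around 3,000 kg/m³ arriving at roughly 20 km/s, both assumed values rather than anything from the crater studies), the kinetic energy works out to:

$$m = \tfrac{4}{3}\pi r^3 \rho \approx \tfrac{4}{3}\pi (5\,000\ \mathrm{m})^3 (3\,000\ \mathrm{kg/m^3}) \approx 1.6 \times 10^{15}\ \mathrm{kg}$$

$$E = \tfrac{1}{2} m v^2 \approx \tfrac{1}{2} (1.6 \times 10^{15}\ \mathrm{kg}) (2 \times 10^{4}\ \mathrm{m/s})^2 \approx 3 \times 10^{23}\ \mathrm{J} \approx 75\ \text{teratons of TNT}$$

That lands in the same ballpark as the 96 teraton estimate, and dividing 96 teratons by the roughly 50 megaton yield of the Tsar Bomba gives a factor of about 1.9 million, so the "2 million times" comparison checks out.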

The most recent example we have of something like this, although many orders of magnitude less severe, is the Tunguska event which happened in Russia back in 1908. Whilst not technically an impact, since the object (an asteroid or possibly a comet) is believed to have exploded some 5 to 10 km above the surface, it still managed to level an area of over 2,000 square kilometres. That's powerful enough to take out a major metropolitan area, so you'd hope that we'd have some strategies for dealing with potential events like this.

Turns out, we do.

Now many people would say "Why wouldn't you just nuke the bastard?", figuring that our most powerful weapon would be more than enough to vaporize a potential threat before it could materialize. The thing is, though, that whilst nuclear weapons are immensely powerful they derive much of their power from the blast wave they create upon detonation. In space there's nothing for them to create a blast wave with, so much of the nuke's devastating power is lost, leaving just the thermal radiation to do its work. Depending on the type of asteroid¹ that will either make the problem worse or simply do nothing at all.

The better option is something called a Gravity Tug: a specially designed spacecraft launched well in advance of the potential impact event to steer the asteroid off course. In essence it's a simple idea: the spacecraft approaches the asteroid and then stays next to it, using ion thrusters to hold a set distance between them. Whilst the gravitational effect of the spacecraft on the asteroid is minuscule, over time it adds up to be enough to steer the asteroid away from its collision course with Earth. Indeed this exact idea has been proposed to deflect the potential impactor Apophis, which has a small chance of hitting Earth in 2036. Of course this only works for asteroids we know about, but our tracking is good enough now that it's quite hard for a potential disaster-causing asteroid to slip through unnoticed.
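To get a feel for just how minuscule-but-sufficient that pull is, here's a back-of-the-envelope sketch in C#. The spacecraft mass, hover distance and tug duration are all illustrative assumptions, not parameters from any real proposal:

```csharp
using System;

// Rough Gravity Tug estimate. Craft mass, hover distance and duration
// below are illustrative assumptions, not real mission parameters.
class GravityTugEstimate
{
    const double G = 6.674e-11; // gravitational constant (m^3 kg^-1 s^-2)

    static void Main()
    {
        double craftMass = 20000.0; // kg: assume a ~20 tonne spacecraft
        double distance = 200.0;    // m: assumed hover distance from the asteroid
        double years = 10.0;        // assumed tug duration

        // Newton's law of gravitation: the craft's pull accelerates the
        // asteroid by a = G * m_craft / d^2, independent of the asteroid's mass.
        double accel = G * craftMass / (distance * distance);

        double seconds = years * 365.25 * 24 * 3600;
        double deltaV = accel * seconds;                 // accumulated velocity change (m/s)
        double drift = 0.5 * accel * seconds * seconds;  // displacement over the tug (m)

        Console.WriteLine("Acceleration: {0:E2} m/s^2", accel);
        Console.WriteLine("Delta-v after {0} years: {1:F2} cm/s", years, deltaV * 100);
        Console.WriteLine("Displacement: {0:F0} km", drift / 1000);
    }
}
```

Those numbers come out to around a centimetre per second and well over a thousand kilometres of drift, which is why the tug has to launch so many years in advance: the effect is tiny, but applied early enough it turns a direct hit into a clean miss.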

When it comes down to it, having an asteroid cause significant damage is a distinctly rare event, with our first line of defence (our atmosphere) doing a pretty good job of breaking up would-be impactors. Still it's good to know that despite the vanishingly small possibility of such a thing happening we're prepared for it, even if it means having to launch something years in advance. Maybe we'll eventually be able to adapt that technology to capture asteroids into Earth orbit so we could utilize them as bases for further operations in space. I'm not holding my breath for that though, but it's a nice fantasy to have nonetheless.

¹There are 3 main types of asteroid. The first is basically solid rock compressed together, so the asteroid is one solid object. The second is a collection of rubble held together by the tenuous gravity between all the small fragments. The last are iron asteroids, solid lumps of metal, which are the really scary ones.


The Hybrid Cloud Paradigm Clash.

Maybe it's my corporate IT roots but I've always thought that the best cloud strategy would be a combination of in-house resources with the ability to offload elsewhere when extra capacity is required. Such a deployment would mean that organisations could design their systems around base loads and have the peaks handled by public clouds, saving them quite a bit of cash whilst still delivering services at an acceptable level. It would also gel well with management types, as not many are completely comfortable being totally reliant on a single provider for any particular service, which in light of recent cloud outages is quite prudent. For my part, I was more interested in setting up a few Azure instances so I could test my code against the real thing rather than the emulator that comes with Visual Studio, as I've always found there are certain gotchas that don't show up until you're running on a real instance.

Now the major cloud providers (Rackspace, AWS, et al.) haven't really expressed much interest in supporting configurations like this, which makes business sense for them since doing so would more than likely eat into their sales targets. They could license the technology of course, but that brings with it a whole bunch of other problems, like defining supported configurations and relinquishing some measure of control over the platform so that end users can deploy their own nodes. However I had long thought Microsoft, who has a long history of letting users install stuff on their own hardware, would eventually allow Azure to run in some scaled-down fashion to facilitate this hybrid cloud idea.

Indeed many developments in their Azure product seemed to support this, the strongest of which being the VM role, which allowed you to build your own virtual machine and then run it on their cloud. Microsoft have also offered their Azure Appliance product for a while, giving large-scale companies and providers the opportunity to run Azure on their own premises. Taking all this into consideration you'd think that Microsoft wasn't too far away from offering a solution for medium-sized organisations and developers who wanted to move to the Azure platform but also wanted to maintain some form of control over their infrastructure.

After talking with a TechEd-bound mate of mine however, it seems that idea is off the table.

VMware has had their hybrid cloud product (vCloud) available for quite some time and, whilst it satisfies most of the things I've been talking about so far, it doesn't have the sexy cloud features like an in-built scalable NoSQL database or binary object storage. Since Microsoft had their Azure product I had assumed they weren't interested in competing with VMware on the same level, but after seeing one of the TechEd classes and subsequently browsing their cloud site it looks like they're launching SCVMM 2012 as a direct competitor to vCloud. This means Microsoft is taking much the same route by letting you build your own private cloud, essentially just a large pool of shared resources, foregoing any of the features that make Azure so gosh darn sexy.

Figuring that out left me a little disappointed, but I can understand why they’re doing it.

Azure, as great as I think it is, probably doesn't make sense in a deployment scenario of anything less than a couple hundred nodes. Much of Azure's power, like that of any cloud provider, comes from its large number of distributed nodes which provide redundancy, flexibility and high performance. The Hyper-V based private cloud then is more tailored to the lower end, where enterprises likely want more control than Azure would provide, not to mention that experience in deploying Azure instances is limited to Microsoft employees and precious few others from the likes of Dell, Fujitsu and HP. Hyper-V then is the better solution for those looking to deploy a private cloud, and should they want to burst out to a public cloud they'll have to code their application to be able to do that. Such a feature isn't impossible, but it is an additional cost that will need to be considered.
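To illustrate what coding for burst-out might look like, here's a minimal sketch assuming the application itself decides which work stays on-premises and which overflows to a public cloud queue. Every type and name here is hypothetical, not any real Hyper-V or Azure API:

```csharp
using System;

// Hypothetical sink for jobs: one implementation would wrap the local
// worker pool, another a public cloud queue. Names are illustrative only.
public interface IJobSink
{
    void Enqueue(string payload);
}

public class BurstingDispatcher
{
    private readonly IJobSink _local;  // on-premises workers (base load)
    private readonly IJobSink _cloud;  // public cloud queue (peak load)
    private readonly int _localCapacity;
    private int _inFlight;

    public BurstingDispatcher(IJobSink local, IJobSink cloud, int localCapacity)
    {
        _local = local;
        _cloud = cloud;
        _localCapacity = localCapacity;
    }

    public void Dispatch(string payload)
    {
        // Keep the base load in-house; burst the overflow out.
        if (_inFlight < _localCapacity)
        {
            _inFlight++;
            _local.Enqueue(payload);
        }
        else
        {
            _cloud.Enqueue(payload);
        }
    }

    // Called when a local job finishes, freeing capacity.
    public void JobCompleted()
    {
        _inFlight = Math.Max(0, _inFlight - 1);
    }
}
```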

Online Identity, Google+ and Anonymity.

Whilst we're still in the very early days of Google's latest attempt to break into the social networking scene they've already managed to create quite the stir, at least with the technically inclined crowd. The combination of a decidedly non-Google-esque interface coupled with the simple fact that it's not Facebook was more than enough to draw a large crowd over to the service, to the tune of over 25 million users in the short time it's been available to the public. The launch has been mostly trouble-free for Google, with rock-solid engineering providing a fast, bug-free experience and straightforward privacy policies to match. There has been one sticking point causing considerable grief however, enough that some users don't see it as a viable platform.

That issue is the fact that you have to use your real (legal) name on Google+.

Now for most of us this isn't much of a problem, especially if you've been on a social networking site before. For the past 4 years or so I've been using my real name, or some abbreviation thereof, online for the simple fact that it helped build my online presence rather than hiding it behind the thin veil of a pseudonym. For the most part I haven't needed anonymity (thanks in no small part to living in Australia), since if I feel the need to express my opinion online I also feel the need to attach my name to it. Of course I still have pseudonyms that I use (Nalafang and PYROMANT|C are the 2 most prolific) but they're more part of my gamer heritage than anything else, as I don't really use them in any other context.

Still I understand that many people have built relationships and authority on their pseudonyms rather than their real names, and this is where Google+ struggles. A great example of this is Digg's top user MrBabyMan who, despite quite the following thanks to his heavy involvement in the news aggregator, has a much smaller following on Google+ due to the restriction that he use his real name. Of course dedicated followers are able to suss this out, but the point remains that people are far more aware of his online presence as MrBabyMan than they are as Andrew Sorcini. The question then is why Google is being so pedantic about real-name use on their new social network.

You could trace it back to Google attempting to mimic what Facebook has, where it's almost a given that anyone on there is using their real name. Of course many people don't use their real name (for many reasons) but Facebook doesn't seem to take much of a stance when they do, and will even let you change your name on a whim should you feel the need to do so. Google's stance, at least according to chairman Eric Schmidt, is that they built Google+ primarily as an identity service, not the social network that everyone is making it out to be. That's an interesting notion but, for me at least, it doesn't answer the question of why Google won't let people use pseudonyms on Google+.

There are many people who want to use Google+ as another platform for their online presence, and for some this means using it under the guise of a pseudonym. Whilst the case can be made that people tend towards being fuckwads when given some degree of anonymity, many have their online identities closely tied to the pseudonyms they created. If Google was really serious about being an identity service then these sorts of people should have no issue, since their identity, at least online, is their pseudonym. The question then becomes: what's the benefit of forcing them to use their real name rather than the one they have so much invested in, and could this become a big issue for Google's new identity service?

For Google the benefits are pretty clear. Since your Google+ account is heavily intertwined with all other Google services, the second you opt into their social network all those other services, nearly all of which support pseudonyms, now have your real name attached to them. Whilst Google had already built up a pretty good profile of you thanks to those other services, they now have a vastly more critical bit of information that ties them all together. There's nothing particularly sinister about this motive, it's mostly so they can sell more expensive, better-targeted ads, but suffice to say there's a non-zero benefit to Google in requiring your real name on their social network.

Those seeking to join the network under a pseudonym are at a distinct disadvantage however, as they're basically leaving their current online identity at the door. Of course the argument could be made that they'll transition fine and that Google+ is still in its nascent stages, but that doesn't detract from the fact that Google is doing potential users a disservice by not allowing pseudonyms. There's a happy middle ground for both Google and potential users in the form of verified accounts (which they're already doing for celebrities), or say letting users display a nickname whilst keeping the real name hidden, but Google doesn't seem to be amenable to these ideas, at least not yet.

For a social network that's been basically issue-free since day one it's a real shame to see Google get stuck on something that's been so ingrained in the Internet community since its inception. I don't think it will be the nascent social network's undoing, but it's definitely not getting them any positive press and has the potential to keep many power users away from the service. It will be interesting to see how they deal with this going forward, as right now their focus is (rightly) on growing their network rather than dealing with edge cases like this. They could win themselves a lot of good press by simply allowing pseudonyms on their network; whether they will do that or not is something only Google can answer.

Progress, Proton and The Future of the ISS.

Russia's space program has a reputation for sticking with ideas once they've got them right. Their Soyuz (pronounced sah-yooz) craft are a testament to this, having undergone 4 iterations since their initial inception whilst still sharing many of the base characteristics that were developed decades ago. The Soyuz family is also the longest serving series of spacecraft in history and, with only 2 fatal accidents in that time, is well regarded as the safest spacecraft around. It's no wonder then that 2 Soyuz capsules remain permanently docked to the International Space Station to serve as escape pods in the event of a catastrophe, a testament to the confidence the space industry has in them.

Recent news however has brought other parts of the Russian space program into question. Last week a Proton-launched communications satellite ended up in the wrong orbit when its upper orbital-insertion module failed to deliver it to the proper geostationary orbit. Then just this week another Russian payload, this time a Progress craft bound for the ISS atop a Soyuz U rocket, crashed shortly after launch:

The robotic Progress 44 cargo ship blasted off atop a Soyuz U rocket at 9 a.m. EDT (1300 GMT) from the central Asian spaceport of Baikonur Cosmodrome in Kazakhstan and was due to arrive at the space station on Friday.

“Unfortunately, about 325 seconds into flight, shortly after the third stage was ignited, the vehicle commanded an engine shutdown due to an engine anomaly,” NASA station program manager Mike Suffredini told reporters today. “The vehicle impacted in the Altai region of the Russian Federation.”

Now an unmanned spacecraft failing after launch wouldn't usually be much of a problem (apart from investigating why it happened), but the reason this particular failure has everyone worried is the similarity between the human-carrying Soyuz and the Progress, both the craft themselves and the rockets that loft them. In essence they're near-identical craft, with the Progress swapping the crew capsule for cargo and fuel modules so it can resupply the ISS on orbit. A failure on a Progress launch therefore calls the Soyuz into question as well, especially when there have been 2 problem launches so close to each other.

From a crew safety perspective however the Soyuz should still be considered a safe craft. Had an event like this week's happened with a Soyuz rather than a Progress on top of the rocket, the crew would have been safe thanks to the Soyuz's abort modes: early in flight the launch escape system that flies on top of all manned Soyuz capsules fires and pulls the capsule safely away from the rest of the launch stack, and later in flight the capsule can separate and descend back to Earth on its usual ballistic trajectory. It's not the softest of landings, but it's easily survivable.

The loss of cargo bound for the ISS does mean that some difficult decisions have to be made. Whilst the station isn't exactly strapped for supplies at the moment (current estimates give them a year of breathing room), the time required to do a full investigation into the failure pushes other resupply and crew replacement missions back significantly. Russia currently has the only launch system capable of getting humans to and from the ISS and, since the Soyuz is only a 3-person craft, this presents the very real possibility that the ISS crew will be scaled back. Whilst I'm all aflutter for SpaceX, their manned flights aren't expected to come online until the middle of the decade, and they're the most advanced alternative at this point. If the problems with the Soyuz launch stack can be sorted expediently then the ISS may remain fully crewed, but only time will tell if this is the case.

The Soyuz and Progress series have proven to be some of the most reliable spacecraft developed to date and I have every confidence that Russia will be able to overcome these problems as they have in the past. Incidents like this demonstrate how badly commercialization of routine space activities is required, especially when one of the former space powers doesn't seem that interested in space anymore. Thankfully the developing private space industry is more than up to the challenge, and we're only a few short years away from these sorts of problems boiling down to switching providers rather than curtailing our efforts in space completely.

Steve Jobs Resigns, Tim Cook Takes Over.

No beating around the bush on this one, Steve Jobs has resigned:

To the Apple Board of Directors and the Apple Community:

I have always said if there ever came a day when I could no longer meet my duties and expectations as Apple’s CEO, I would be the first to let you know. Unfortunately, that day has come.

I hereby resign as CEO of Apple. I would like to serve, if the Board sees fit, as Chairman of the Board, director and Apple employee.

As far as my successor goes, I strongly recommend that we execute our succession plan and name Tim Cook as CEO of Apple.

I believe Apple’s brightest and most innovative days are ahead of it. And I look forward to watching and contributing to its success in a new role. 

I have made some of the best friends of my life at Apple, and I thank you all for the many years of being able to work alongside you.

Steve 

The news shouldn't come as a shock to anyone. Jobs has been dealing with health problems for many years now and has had to scale back his involvement with the company as a result. The appointment of Tim Cook as the new CEO shouldn't come as a surprise either, as Cook has been acting CEO during Jobs' absences over the past few years. Jobs' involvement in Apple won't completely cease either, if the board approves his appointment as chairman, which I doubt they'll think twice about doing. The question on everyone's lips is, of course, where Apple goes from here.

The stock market understandably reacted quite negatively, with Apple shares down a whopping 5.23% at the time of writing. The reasons behind this are many, but primarily it comes down to the fact that Apple, for better or for worse, has built much of their image around their iconic CEO. Jobs has also had a strong influence over the design of new products, whereas Cook, whilst more than capable of stepping up, has no such design pedigree, being more of a traditional operations guy. Of course no idea exists in a vacuum and I'm sure the talented people at Apple will be more than capable of continuing to deliver winning products, just as they did with Jobs at the helm.

But will that be enough?

For the most part I'd say yes. Whilst the Jobs fan club might be one of the loudest and proudest out there, the vast majority of Apple users are just interested in the end product. Whilst they might lose Jobs' vision for product design (although even that's debatable, since he's still on the board) Apple has enough momentum with their current line of products to carry them over any rough patches whilst they find their feet in a post-Jobs world. The stock market's reaction is no indicator of consumer confidence in Apple and I'm sure only a minority of people have decided to stop buying Apple products now that Jobs isn't at the helm.

Apple's current success is undeniably down to Jobs' influence and his absence will prove to be a challenge for Apple to overcome. I highly doubt that Apple will suffer much because of this (the share price really only affects the traders and speculators), with a year or two of products in the pipeline that Jobs would have presided over. The question is whether their new CEO, or any public face of Apple, will be able to cultivate an image on the same level as Jobs did.

Microsoft's Jupiter: A Panacea for Developers' Ills.

If you're a Windows developer, the past few months of Microsoft's various announcements about Windows 8 and the future of their developer ecosystem haven't been particularly kind to you. With Microsoft announcing that their new Windows Phone 7-inspired UI for Windows 8 will be based on HTML5 and JavaScript, many were left wondering if the heavy investment they had made in Silverlight and .NET technologies was going to be wasted. It didn't help matters much when Microsoft told everyone to wait until BUILD in September for more details, which let speculation run rampant amongst the community.

However it has come to my attention that Microsoft has been hinting at a potential panacea for all these woes for quite some time now.

Back in January there were many rumours circulating about the new features we could look forward to in Windows 8. Like any speculation on upcoming products there are usually a couple of facts amongst the rumour mill, usually from those who are familiar with the project. Two features which got some air time were Mosh and Jupiter, interesting ideas that at the time were easily written off as either speculation or things that would never eventuate. However Mosh, rumoured at the time to be a “tile-based interface”, turned out to be the feature which caused the developer uproar just a couple of months ago. Indeed the speculation was pretty much spot on, since it's basically the tablet interface for Windows 8, but it also has a lot of potential for nettops and netbooks since the full Windows 8 experience is still available underneath.

The Jupiter rumour then can be taken a little more seriously, but I can see why many people passed it over back at the start of this year. In essence Jupiter just looked like yet another technology platform from Microsoft, just like Windows Presentation Foundation and Silverlight before it. Some did recognize it as having the potential to be the bridge for Windows 8 onto tablets, which again shoehorned it into being just another platform. However some speculated that Jupiter could be much more than that, going as far as to say that it could be the first step towards a unified development platform across the PC, tablet and mobile phone space. If Microsoft could pull that kind of stunt off they'd not only have one of the most desirable platforms for developers, they'd also be taking a huge step towards realizing their Three Screens philosophy.

I’ll be honest and say that up until yesterday I had no idea that Jupiter existed, so it doesn’t surprise me that many of the outraged developers wouldn’t have known about it either. However yesterday I caught wind of an article from TechCrunch that laid bare all the details of what Jupiter could be:

  • It is a new user interface library for Windows. (source)
  • It is an XAML-based framework. (source)
  • It is not Silverlight or WPF, but will be compatible with that code. (source)
  • Developers will write immersive applications in XAML/C#/VB/C++. (source, source, source, source)
  • It will use IE 10's rendering engine. (source)
  • DirectUI (which draws the visual elements on the screen, arrived in Windows Vista) is being overhauled to support the XAML applications. (source, source)
  • It will provide access to Windows 8 elements (sensors, networking, etc.) via a managed XAML library. (source)
  • Jupiter apps will be packaged as AppX application types that could be common to both Windows 8 and Windows Phone 8. (source, source, source, source)
  • The AppX format is universal, and can be used to deploy native Win32 apps, framework-based apps (Silverlight, WPF), Web apps, and games. (source)
  • Jupiter is supposed to make all the developers happy, whether .NET (i.e., re-use XAML skills), VB, old-school C++ or Silverlight/WPF. (Source? See all the above!)

Why does Jupiter matter so much? If it’s not clear from the technical details above, it’s because Jupiter may end up being the “one framework” to rule them all. That means it might be possible to port the thousands of Windows Phone apps already written with Silverlight to Windows 8 simply by reusing existing code and making small tweaks. Or maybe even no tweaks. (That part is still unclear). If so, this would be a technical advantage for developers building for Windows Phone 8 (code-named “Apollo” by the way, the son of “Jupiter”) or Windows 8.

In a nutshell it looks like Microsoft is looking to unify all of the platforms that run Windows under the Jupiter banner, enabling developers to port applications between them without massive reworks of their code. Of course the UI would probably need to be redone for each target platform, but since the same design tools will work regardless of the platform the redesigns will be far less painful than they currently are. The best part about Jupiter though is that it leverages current developer skill sets, enabling anyone with experience on the Windows platform to code for the new framework.
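Until BUILD we won't know what Jupiter code will actually look like, but the skill set it reportedly reuses is just today's garden-variety XAML with a C# code-behind, along the lines of the Silverlight/WPF sketch below (all names illustrative):

```xml
<!-- MainPage.xaml: plain Silverlight/WPF-style XAML, the skill set the
     Jupiter rumours say will carry across. Names here are illustrative. -->
<UserControl x:Class="JupiterDemo.MainPage"
    xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
    xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml">
    <StackPanel>
        <TextBlock Text="Hello from XAML" FontSize="24" />
        <Button Content="Click me" Click="OnButtonClick" />
    </StackPanel>
</UserControl>
```

```csharp
// MainPage.xaml.cs: the C# code-behind wired up to the XAML above.
using System.Windows;
using System.Windows.Controls;

namespace JupiterDemo
{
    public partial class MainPage : UserControl
    {
        public MainPage()
        {
            InitializeComponent();
        }

        private void OnButtonClick(object sender, RoutedEventArgs e)
        {
            MessageBox.Show("Same XAML + C# pattern, whatever the runtime.");
        }
    }
}
```

If the rumours hold, code like this would need little more than a recompile (and perhaps a UI pass) to move between desktop, tablet and phone.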

Jupiter then represents a fundamental shift in the Windows developer ecosystem, one that's for the better of everyone involved.

We’ll have to wait until BUILD in September to find out the official word from Microsoft on what Jupiter will actually end up being, but there’s a lot of evidence mounting that it will be the framework to use when building applications for Microsoft’s systems. Microsoft has a proven track record of creating some of the best developer tools around and that, coupled with the potential to have one code base to rule them all, could make all of Microsoft’s platforms extremely attractive for developers. Whether this will translate into success for Microsoft on the smartphone and tablet space remains to be seen, but they’ll definitely be giving Apple and Google a run for their developers. 

Flow, Optimization and Making Progress.

I believe everyone is familiar with the concept of being “in the zone”, i.e. that state you attain when you're so intensely focused on something that time becomes irrelevant and all you care about is achieving a certain goal. I find myself in this state quite often, usually when I'm writing here, gaming or programming. Whilst I knew it was a common phenomenon I only learnt recently that it's also recognised in psychology, where it's termed Flow. The concept itself is interesting and most recently I've started to grapple with one of its more subtle aspects, defined as point number 8 of the conditions of Flow: “The activity is intrinsically rewarding, so there is an effortlessness of action”.

Now this weekend just gone saw me back, as I almost always am, coding away on my PC. Since I'm somewhat of a challenge junkie I'll always seek out the novel parts of an application first rather than the rudimentary, and the first day saw me implementing some new features. This always goes well and I'll be firmly in Flow for hours at a time, effortlessly jumping through reams of documentation and masses of Google searches as I nail down my problem. Once the new feature is done I'll choose another to start work on, thereby maintaining my Flow and project progress.

However I've found that certain programming challenges are like kryptonite to achieving Flow. I discovered this on the second day of my weekend when I sat down to start work again, only to notice that one of the tasks on my TODO list was to rework one of the earlier pages I had built to use less JavaScript and more ASP.NET Razor. The reason behind this is simple: I'm really atrocious at JavaScript. The page in question looked good and did the job it was meant to, but much of its content was generated by some JavaScript code I had found on the Internet and hacked into working for me. That meant maintaining it was going to be an issue, so I set out to rework it, as sketched below.
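The rework amounts to something like the following: letting Razor build the markup on the server rather than assembling it with JavaScript on the client. The model and its members here are hypothetical stand-ins, not the actual page:

```cshtml
@* Before: a <ul> filled in client-side by hacked-together JavaScript.
   After: Razor renders the same list on the server. Model members
   here (Entries, Title, Published) are hypothetical. *@
<ul>
@foreach (var entry in Model.Entries)
{
    <li>@entry.Title (@entry.Published.ToShortDateString())</li>
}
</ul>
```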

Of course the optimization process was fraught with the perils of trying to replicate in Razor what I had hacked together in JavaScript with only a half understanding of what I was doing at the time. That meant untangling the mess of code that someone else had written and then translating it into another language that was more maintainable for someone like me. From a Flow perspective this kind of work isn't very rewarding, since I'm not achieving anything new and the benefits will only be realised by future me, that jerk who's always off at some indeterminate point in the future. However the perfectionist in me knows that time saved at this point could mean multiples more saved later on, and therein lies the conundrum.

There's a great quote from Donald Knuth (of The Art of Computer Programming fame), “premature optimization is the root of all evil”, which is basically a warning to avoid over-optimizing your code whilst it's still in the early stages. I'm a firm believer in the idea that you shouldn't act like you have problems of scale until you actually have them, but there are some fundamental differences between regular and scalable code that could prove incompatible with your codebase should you not make the decision early on in the piece. Of course optimization comes at the cost of progress on other pieces of work, thus a balancing act between the two is required if your code is ever to see the light of day.

I guess I find it strange that optimizing my own code was so detrimental to achieving that state of coding nirvana. It's quite possible it was just the problem I was working on, as a previous optimization I had done, developing a cache system for a web service I was querying, seemed to have no ill effects. That particular challenge was quite novel though, as I hadn't created anything like it previously and the feedback was quite clear when I finally achieved my goal. Unfortunately I have the feeling that most optimization problems will be more like the former example than this one, but so long as I write half-decent code in the first place I hopefully won't have to deal with them as much.

HP, WebOS and the Future of the Tablet Space.

So last Friday saw the announcement that HP was spinning off their WebOS/tablet division, a move that sent waves through the media and blogosphere. Despite being stuck for decent blog material on the day I didn't feel the story had enough legs to warrant investigation; anyone but the most dedicated of WebOS fans knew the platform wasn't going anywhere fast. Heck, it took me all of 30 seconds on Google to find the latest figures that have it pegged at somewhere around 2%, right there with Symbian (those are smartphone figures, not overall mobile), trailing the apparently “failing” Windows Phone 7 platform by a whopping 7%. Thus the announcement that they were going to dump the whole division wasn't much of a surprise, and I set about trying to find something more interesting to write about.

Over the weekend though the analysts have got their hands on some juicy details that I can get stuck into.

Now alongside the announcement that WebOS was getting the boot, HP also announced that it was considering exiting the PC hardware business completely. At first glance that would seem like a ludicrous idea, as that division is their largest with almost $10 billion in revenue, but their enterprise services division (which is basically what used to be EDS) is creeping up on that quite quickly. Such a move wouldn't see them exit the server hardware business either, which would be rather suicidal considering they're the second largest player there with 30% of the market. Rather it seems that HP wants out of the consumer end of the market and wants to focus on enterprise software, services and the hardware that supports them.

It's a move that several similar companies have taken in the past when faced with downwards-trending revenues in the hardware sector. Back when I worked at Unisys I remember being told how they derived around 70% of their revenue from outsourcing initiatives and only 30% from their mainframe hardware sales. They used to be a mostly hardware-oriented company but switched to professional services and outsourcing after several years of negative growth. HP on the other hand doesn't seem to be suffering any of these problems, which raises the question: why would they bother exiting what seems to be a lucrative market for them?

It was a question I hadn't really considered until I read this post from John Gruber. Now I'd known that HP had gotten a new CEO after Mark Hurd was ejected over that thing with former Playboy model Jodie Fisher (and his expense account, but that's nowhere near as fun to write), but I hadn't caught up with who they'd hired as his replacement. It turns out to be former SAP CEO Leo Apotheker. Suddenly the decisions to spin off WebOS (and potentially the PC division) make a lot of sense, as enterprise software is exactly the kind of business Apotheker has a lot of experience in. Couple that with their decision to buy Autonomy, another enterprise software company, and it seems almost certain that HP is heading towards the end goal of being a primarily services-based company.

Of course with HP exiting the consumer market after being in it for such a short time, people started to wonder if there would ever be a serious competitor to Apple's offerings, especially in the tablet market. Indeed it doesn't look good for anyone trying to crack that market, as it's pretty much all Apple all the time, and if a multi-billion dollar company can't do it then there's not much hope for anyone else. However Android has made some impressive inroads into this Apple-dominated niche, securing a solid 20% of the market. Just as with the iPhone before it, no single vendor will completely unseat Apple in this space; Android's dominance will instead come from the sheer variety on offer. We've yet to see a Galaxy S2-esque release in the Android tablet space, but I'm sure one's not too far off.

It'll be interesting to see how HP evolves over the next year or so under Apotheker's leadership, as its current direction is vastly different to that of the HP of the past. This isn't necessarily a good or bad thing for the company either: whilst they might not have any cause for concern right now, making the transition early could avoid the pain of attempting it further down the track. The WebOS spin-off is just the first step in this long journey for HP and there will be many more for them to take if they're to make the transition to a professional services company.

I’m a Terrible Judge of Popularity.

It seems that no matter how long I keep doing this whole blogging thing I'm still unable to judge which of my posts will end up being popular, controversial or simply falling flat on their faces. The most popular post on my site (excluding the home page), for some bizarre reason, appears to be my April Fools post from a couple of years ago, which seems to draw in several hundred people a month simply because it has 2 pictures of ponies in it. The second is the only piece that was ever linked to by a reputable news organisation, my original post on Bitcoins. Even then the post wasn't popular until a month after I had written it, an eternity here on the Internet.

What confuses me most though is that the posts I considered forced, rushed pieces of work (usually ones I write when I can't find anything good to write about) usually end up being some of the most commented on and thought provoking pieces. It could be that I'm just somewhat self-defeatist in this regard, thinking that if I can't hit that creative spark in under an hour then obviously anything I'm plonking down is going to be crap. Still, those particular posts are usually the ones where I've spent the least amount of effort researching, proofreading and polishing, which would make you think they'd be below average.

Normally I'd just write that off as confirmation bias, since there have been many posts from both sides of the equation that have had varying levels of success. The perceived failure of a well-researched post sticks much more clearly in my mind however, because I feel like so much more effort was put into it. A great example of this was last week's post on eSports, which was a massive undertaking for me, taking a good 4 hours to research, analyse and write. Of course it could end up being a surprising success story a month down the line, but for a post that managed to generate such energetic conversation amongst my peers I had thought it would strike a chord with enough people to see a bit more light than it did. I might've missed the boat on that one though, as I often do with my strict “one post per day at the regularly scheduled time” routine.

Realistically though I don’t dwell too much on whether a post will be popular or not. My giant backlog of 600+ posts seems to attract a variety of people looking for posts on varying topics and there’s a good collection of posts that bring people back consistently. I am getting better at recognizing which posts will do better in the longer term but it still seems to be a guessing game for the most part. It might be a different game for bloggers who have a larger audience as right now my sample size is probably too small to draw any proper conclusions from, but until such time as I reach those dizzying heights of blogging stardom I’ll have to make do with working in the uncertainty of what the wider world would like to see from me.

SpaceX Set To Make History Before The Year Is Out.

Whenever I find myself getting frustrated with the sorry state of government-funded space programs overseas I don't have to look much further than SpaceX to feel inspired once again. From their humble beginnings back in 2002 they have shown they are capable of designing, building and launching rockets on a fraction of the budget that was previously required. Their ambition also seems to know no bounds, with their CEO, Elon Musk, eyeing off a trip to Mars with the intent of retiring there. SpaceX is also the only US launch provider with a roadmap for delivering humans to the International Space Station, a real necessity now that the shuttle fleet has retired.

You can then imagine how exciting it is to hear that SpaceX has received in-principle approval from NASA to combine the next 2 Commercial Orbital Transportation Services (COTS) demonstration flights into one. That might not sound like much on the surface, but it means that SpaceX's Dragon capsule could be docking with the ISS this year:

Over the last several months, SpaceX has been hard at work preparing for our next flight — a mission designed to demonstrate that a privately-developed space transportation system can deliver cargo to and from the International Space Station (ISS). NASA has given us a Nov. 30, 2011 launch date, which should be followed nine days later by Dragon berthing at the ISS.

NASA has agreed in principle to allow SpaceX to combine all of the tests and demonstration activities that we originally proposed as two separate missions (COTS Demo 2 and COTS Demo 3) into a single mission. Furthermore, SpaceX plans to carry additional payloads aboard the Falcon 9’s second stage which will deploy after Dragon separates and is well on its way to the ISS. NASA will grant formal approval for the combined COTS missions pending resolution of any potential risks associated with these secondary payloads. Our team continues to work closely with NASA to resolve all questions and concerns.

That's right, if everything stays on schedule (which, I'll admit, isn't very likely) then we'll see a Dragon capsule dock with the ISS, the first time in history that a private company's craft has docked with a space station. The mission will test all of the flight avionics, communication systems and docking procedures that SpaceX have designed for the Dragon capsule. Whilst the Dragon going up doesn't appear to have a cargo manifest, it will be bringing cargo back down from the ISS, which will be a good test to see if the current design has any flaws that can be rectified for future missions.

The current docking procedure for the Dragon capsule is surprisingly similar to that of JAXA's HTV. For the COTS Demonstration 2 flight at least, the Dragon capsule will fly very close to the ISS where it will be captured by Canadarm2, which will then guide it into a berthing port. It's interesting because from the past few missions I had assumed that the Dragon was capable of automated docking, especially with the (seemingly) rather advanced DragonEye sensor having been tested on previous shuttle flights. Still, automated docking is quite a challenge and the capture route is a lot safer, both for SpaceX and the astronauts aboard the ISS.

The announcement also comes hand in hand with some improvements that SpaceX has made to their launch stack. They've installed new liquid oxygen pumps that allow them to fully fuel the Falcon 9 in under 30 minutes, a third of the time it used to require. This means that SpaceX could roll out, fuel and launch a Falcon 9 in under an hour, something that hasn't been possible with liquid-fueled rockets in the past. They're also ramping up their production facilities with an eye to performing up to 16 launches per year, a phenomenal amount by any measure.

SpaceX continues to show that the private sector is quite capable of providing services that were for the longest time considered too expensive for anyone but the superpower governments of the world. The announcement that a Dragon capsule could be visiting the ISS this year shows how much confidence NASA has in their capabilities, and I'm sure SpaceX will not disappoint. We're on the verge of a revolution in the space travel game and SpaceX are the pioneers who will lead us there.