Monthly Archives: July 2010

White iPhone 4: Not Until The End of the Year.

Because a good number of people are coming to this blog looking for a white iPhone 4 (thanks in part to me hosting a picture with the name) I thought I’d put up a quick post to say you’re out of luck, as many outlets reported a while back:

Those of you holding out for white iPhone 4s will have to continue your practice in patience: Apple says that they continue to be a challenge to manufacture and won’t be available until later this year. That’s right—no longer will they be available in the second half of July, which was already pushed back from late June when the iPhone 4 originally launched. Apple has pushed the date back again by apparently several months, and is no longer committing to a date.

Apple’s latest statement follows one made one month ago just before the iPhone 4 hit the streets. At that time, Apple said that the white version had “proven more challenging to manufacture than expected,” and was therefore being delayed for at least half a month. Steve Jobs even said during last week’s iPhone 4 press conference, where he discussed the iPhone’s antenna issues, that the white iPhone was still on track to ship at the end of July, so either he was stretching the truth or the latest statement was a last-minute decision.

As a fellow white iPhone lover I knew this would upset a few people, but delays seem to be a recurring theme with the white versions of Apple’s popular phone. Still, you might be in luck: hopefully Apple will push out an antenna fix at the same time, avoiding the death grip problem that’s plagued many of its handsets.

Makes me a little happier that I bought the 3GS…. a little. 😉

Election 2010: Let Me Educate You, Australia.

I resisted getting into politics in any way for most of my adult life. For the most part I thought it was just a popularity contest that I had no intention of getting involved with, nor of forming an opinion on more than once every 3 years. Fortunately I can count amongst my friends a highly skilled academic whose area of study is politics, and his constant pontificating about the subject eventually pushed me into figuring the whole thing out, lest I be unable to communicate with him (and subsequently be utterly bored). Today I pride myself on taking an engineer’s approach to the world of politics, figuring out the variables and breaking it down into manageable chunks upon which I can base my ultimate decision. It’s no secret I tend towards liberal ideals, with perhaps a touch of the libertarian in me, much like most of my generation.

This year though presented quite a conundrum, as neither of the two major parties nor any of the others could logically get my full support. Labor continues to push policies that I cannot agree with (Internet filtering and other nanny-state type policies) and the Liberals’ candidate for Prime Minister is nothing more than a rabid attack dog who couldn’t write a decent policy to save his life. The popular choice amongst my peers would then be the Greens who, whilst giving their preferences to Labor, don’t support Internet filtering and have favourable policies in many other areas. Unfortunately for someone like me, who sees the benefit in developing nuclear power in a similar fashion to countries like France, the Greens can’t be an alternative as they outright oppose any kind of nuclear development. Other favourites include the newly formed Australian Sex Party, who take similar positions to the Greens on many matters but unfortunately lack clear direction on many other key ones. The same can be said for many of the other minor parties: whilst they have solid positions on their key issues, I can’t really vote for them unless their stance on the critical issues is formalised.

After some research (which was sped up nicely by this spreadsheet) I came to the ultimate conclusion that no party fully supports my political vision. I understand that this is usually the case with any political party, as you can’t satisfy everyone, but in the past I was able to easily reconcile my differences with the major parties because the points of disagreement were usually small. This last term has seen my support for the party I once backed wane, without a strong competitor rising up in its place. In the end it looks to be the Greens who will get my vote: whilst I disagree with some of their policies, I can reconcile that with the fact that many of my ideas won’t take off in Australia for decades to come, so I might as well go for the people who support the largest share of them.

Election time always sees discussions over the dinner table with my family about who we’re going to vote for, and my weekly dinner with the parents was no different. My father was always a staunch Labor supporter whilst my mother flits between parties depending on the political climate of the time. This year’s discussion was quite different from the ones I was used to: whilst my father said he would be supporting Labor (though he wasn’t happy about it), my mother wanted to send a message to the Labor government that she wouldn’t tolerate their actions, and so would be voting Liberal. Since they live in one of the most critical seats in Australia, Eden-Monaro, I took it upon myself to find out why she felt that way, and the results surprised me.

Many of the issues were those you’d find in the popular media. She wasn’t happy with Julia Gillard’s rise to power, felt that the border protection policies were lax and overall didn’t trust the government to bring Australia back into the black over the coming years. I agreed with her on several key points (I wasn’t terribly happy with the way Gillard came into power either) but the fiscal management one caught me off guard. Since my mother had lived through the previous Labor government I thought she would’ve understood why Labor had to spend money during its time in office, but honestly, who really does remember what happened 20 years ago?

I can tell you I certainly don’t remember much. The last time Labor was in power I was still in primary school, blissfully unaware of all the goings on. Still, my perverse interest in all things financially disastrous had taught me quite a lot about the economic climate of the time, and the similarities to the current government were startling. I asked her: “Do you remember what was happening in the early 90s that also happened recently?” She couldn’t answer, and I don’t think many Australians would be able to either.

The answer is: global economic crisis.

Most Australians will remember Paul Keating’s famous line about the “recession we had to have”, which was in fact caused by a wider economic crisis that can be traced back to Black Monday in 1987. Whilst everything appeared to recover during the early nineties it was unfortunately short-lived, and many countries, including Australia, plunged into recession because of it. Since the Great Depression governments have broadly recognised the ideals of Keynesian economic theory, which dictates that during times of recession the government should step in and spend in order to stimulate the economy. Traditionally this is done with deficit spending, i.e. borrowing money, which many people see as being detrimental. However, as history has shown, refusing to go into debt to avoid a recession will only make said recession last that much longer. Indeed it was swift action by our government that saw Australia become the only developed country to avoid a recession, a phenomenal feat, especially when the rest of the world couldn’t manage it.

The past 2 Labor governments have presided over an Australia ravaged by global economic tides, and the notion that all a Labor government does is spend the surplus that the Liberals build up is complete bullshit. Everyone seems to forget that the last Liberal government saw such economic growth and surpluses because it was never hit by a global financial truck that required it to spend its way out. Indeed even the Liberal party forgot that this Labor government delivered a budget surplus in its first year, only to have it dashed by the global financial crisis the year after. To say that a Labor government is fiscally irresponsible because it always runs a deficit shows a complete disregard for the facts and is nothing more than political spin. My mother also brought out the old chestnut of interest rates being higher under a Labor government, conveniently forgetting the last 3 years.

The fact is, if you’re worried about a Labor government staying in power because you don’t trust them to run the economy, think again. They’ve proven that they are completely capable of handling an economy through the toughest of times, where the Liberals have only shown how they fare when the seas are calm. Additionally, if you’re worried about your interest rates, I’d point you to the last 6 years of the Liberal government, which saw a steady rise in interest rates that only came down under Labor. Really though, interest rates have absolutely nothing whatsoever to do with the government of the day, so please ignore any pontificating that ties them to any political party.

Hopefully you’ve learned something from this post, and I urge you to spread this knowledge amongst everyone you know. The misinformation around this subject is abnormally high and the media outlets have no interest in setting the record straight. Whilst such information won’t swing the election one way or the other, it may do the public some good to question what they’re being told and hopefully seek out the truth for themselves.

Apple’s Magic Trackpad: Not a Mouse Killer.

There’s something to be said for the longevity of the mouse and keyboard as the primary input devices for computers. Although we’ve come a long way in terms of alternatives, you’d still be hard pressed to use those alternatives as full-on replacements, save for a few niche applications such as graphic design. Still, that hasn’t stopped the input innovators from trying, and we’ve had many different devices and schemes thrown at us. In the end however they all meet the same fate: the mouse and keyboard just plain work for their purpose, and none of the alternatives have really managed to take over.

Now I’m a bit of an input fanatic, having churned through nearly every imaginable input device over the past few years and even built a couple of my own. Still, on my desk at home you’ll find a mouse and keyboard just like everybody else’s. The reason? Simplicity. Nothing else comes close to making me as functional as my keyboard and mouse do. I might include the microphone on my headset as well, but apart from chatting on Ventrilo I couldn’t really say I use it as a primary interface for my computer. No, out of all the alternatives I’ve tried, nothing fits as well as the old K and M, but it seems some companies think otherwise.

A couple of days ago Apple revamped some old products whilst launching a couple of new ones. Amongst the refresh of their iMac and Mac Pro lines there was a curiosity that caught everyone’s eye, and I’m not talking about the Apple Battery Charger. No, the apple of everyone’s eye was the Magic Trackpad, a large multitouch Bluetooth device that some have been calling a mouse killer. Whilst I can appreciate that any kind of device Apple releases will garner this sort of attention, I can’t help but think that calling it a mouse killer is premature at the very least and, more likely, completely wrong. Sure, it’s a nice looking piece of kit and I’m sure it will find a home with many people, but if you think the computer of the future will come equipped with something similar, you should probably rethink your position.

You see, about 15 years ago I actually had something quite similar to this: a trackpad that plugged into the PS/2 port and emulated a mouse. It wasn’t as large, nor did it have multi-touch, but then again this was quite a while ago. It was functional enough that I could have used it as a direct replacement for my mouse at the time. You might be wondering what a 10 year old was doing with something like that; well, it was given to me by my dad. He’d bought it thinking it would be a good replacement for his mouse, but after a day or so of trying to get used to it he didn’t like it and gave it to me to tinker with. I put up with it for about as long as he did, the pad finally ending up in the computer parts pile.

For us mere mortals the mouse is actually quite a refined and elegant input device. Its shape conforms to the natural at-rest shape of our hands and minimal effort is required to move the cursor on the screen. The trackpad however, with its much more limited tracking area and lack of sensitivity, made both normal and precision work quite tedious. Additionally, and I know this will be squarely in the “don’t care” area for most people, gamers will tell you that anything bar a mouse and keyboard will instantly put you at a disadvantage, save for something like a flight or racing sim. If you get one of these trackpads you’ll find yourself trying to pick it up every so often as you scroll across the screen, unless it happens to be the first input device you’ve ever used.

Taking a step back from Apple’s offering for a second, you’d notice they aren’t the first to release a multi-touch trackpad either. Whilst most of those devices aren’t as slick looking as Apple’s, they’ve certainly been around for quite a while, yet you don’t see them on every computing device, not even all laptops. There’s also the integration to consider: whilst Apple might have integrated the gestures into their OS X line of products, that unfortunately won’t translate across to their competitors. You could argue that’s not Apple’s intent, but if you’re going to call this thing a mouse killer you’d better start thinking about how it’s going to work cross platform, or it isn’t killing anything (apart from your wallet).

All this being said, I still think it’s a rather cool looking piece of kit and I can see it as a viable alternative to a mouse for those who want to shell out for it. However, if your needs extend past the usual email/web requirements, or if you don’t run OS X, there’s no real need to rush out and buy this latest “innovation” from Apple, unless you simply can’t contain your rabid fanboyism for all things Apple. In my eyes this should have gotten about as much press as Apple’s new battery charger (which is actually not bad), but of course anything they do in the multi-touch space will ultimately be trumped up as the next revolution in computing. Of course no one will remember this when the mouse isn’t killed and is still here decades from now, but I’ll let that one go.

A Tale of Two Servers.

I’ve spent the last 4 days lost in a technical limbo. On Saturday I was making good progress with Geon and left it to spend the night out with a bunch of my great friends and to see Inception (well worth seeing, by the way). However, Sunday morning saw me stuck on what seemed to be the most simple of problems, wasting at least 5 hours trying to get past it. To make matters worse my web server decided to give up the ghost and just stop serving PHP pages, and neither upgrades, reboots nor hours of troubleshooting would bring it back to life. Since you’re reading this now you know that I’ve managed to fix it, but it wasn’t an easy journey.

You see, I was running this blog on IIS 7.0, Microsoft’s web server. For the most part it’s pretty good: the administration tools are top notch and it only took me a couple of weeks to get my head around the whole idea of hosting a web site. Before starting this site I’d never really tried any web stuff, preferring to stick to my cosy little offline world. Still, I knew when starting this blog that I was using the least preferred web server on the market, and the tutorials I found for getting it all started seemed more complicated than they needed to be. But I had been given a free copy of Windows Server 2008 and I didn’t really feel like re-educating myself in Linux, which had burned me in the past.

For a long time everything was good: the site was always up (barring my Internet connection, of course) and seemed quite responsive. Over the course of almost 2 years though I added quite a lot of things to the site, like photo albums, sexy comments and all manner of behind-the-scenes stuff. About 6 weeks ago I started getting PHP timeouts on this site while all the others were running fine. The weekend just past saw all of the sites die a similar death, and nothing I did seemed to revive them. I was then faced with a choice: either move this site to one of my external providers or rebuild the server completely. After considering my options I went for the rebuild, and 2 days later I’m back online, serving this page to you from an Apache web server running on a freshly minted Ubuntu server.
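In hindsight, that slow slide into PHP timeouts is exactly the kind of thing a dumb external probe would have caught weeks earlier. Here’s a minimal sketch of such a probe in Python; the URL and the injectable fetch hook are my own hypothetical additions, not anything from my actual setup:

```python
import socket
import urllib.request
from urllib.error import URLError

def check_site(url, timeout=10, fetch=urllib.request.urlopen):
    """Probe `url` once and classify the result: 'ok' for an HTTP 200,
    'timeout' for a hang past `timeout` seconds, and 'error' for
    anything else. The `fetch` hook is injectable purely so the probe
    can be exercised without a network."""
    try:
        with fetch(url, timeout=timeout) as resp:
            return "ok" if resp.status == 200 else "error"
    except URLError as exc:
        # urllib wraps socket timeouts inside URLError on most versions
        if isinstance(exc.reason, socket.timeout):
            return "timeout"
        return "error"
    except socket.timeout:
        return "timeout"
```

Run something like this from cron on another machine and alert on anything but “ok”, and a dying web server gets noticed on day one rather than week six.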

Now I’m no stranger to the world of Linux. I spent a good chunk of my University years programming on Linux, and my time at the National Archives of Australia had me administering a couple of Linux servers that were powering the Digital Preservation Project there. Still, I shied away from it, mostly because it was always a pain to administer compared to its Windows cousins, which I was always able to get functioning after 1 or 2 Google searches. To its credit, Ubuntu made this process far less painful than it had been in the past, and the tools have matured to a point where even a Windows administrator shouldn’t feel out of place. Last night saw this website resurrected in all its glory and hopefully I’ll figure out how to get my other hosted sites up and running sometime soon.

So you might be wondering: what happened to the old server? Well it’s still there, sitting alongside its new Ubuntu brother on my ESXi box, and it won’t be going anywhere for quite a long time. You see, as great as Apache is, it’s still not IIS, and with my code relying on many things that IIS provides I’ll still need it in some form for quite a while to come. Additionally I had it set up as my Microsoft SQL box, again for development, and whilst I could use MySQL or PostgreSQL to achieve the same end, the integration with the Microsoft tools just isn’t there yet. Plus I get it for free through my TechNet subscription, so the cost factor that usually accompanies it is a non-issue.

Hopefully the move to the new server will mean the site loads faster for you, causes me no more administrative headaches and reacquaints me with the Linux world I once knew so well. It’s a load off my mind to finally see this site back up and running again, especially after 4 days of feeling like I couldn’t make progress on anything technical. We’ll be back to our regularly scheduled programming tomorrow and I hope to see you guys back here then 🙂

How Augmented Are You?

I guess you could call me a transhumanist, as I’ve got a keen interest in any technology that has the ability to augment us humans in some way. For a long time much of the stuff I dreamed or postulated about was firmly fixed in the world of sci-fi and fantasy. However, the past couple of decades have seen technological change happen at such a fast pace that we now have, at least in some form, technologies that boost our attributes beyond what they were naturally capable of. I’d never really thought about it until I started considering how I use technology in my everyday life and just how far technology had advanced some of my abilities.

The best example I can think of is probably my career. You see, whilst I owe much of my knowledge in the area of IT to the fact that I’ve been exposed to it for so long, the vast majority of that knowledge doesn’t reside in my head; it lies out there on the Internet waiting to be called upon. For many of us who’ve reached the upper echelons of the IT world our real ability isn’t the rote memorisation of solutions, rather it is our ability to search the Internet and the heuristic approach we take to tackling new problems. In that way the Internet, and really all forms of information storage that preceded it, acts as a kind of external memory that we can call upon to augment our own when required. In that sense we have already taken the first steps into the world of transhumanism, and you’d be surprised at just how far along we are today.

The everyday person in the developed world is, I’d say, already augmented in several ways. With Internet penetration exceeding 60% in the developed world there’s a sizable number of people who have at their beck and call untold seas of information. Additionally, many of those same people would have cellphones which, in addition to their capability as a memory enhancement device, also vastly increase their ability to communicate with other people. Whilst this isn’t your traditional sci-fi type of transhumanist idea, it is in fact the beginnings of such a movement. This feeds into the fact that many technologies now seek to integrate more personally with our lives, with some reaching the point of being a necessity.

For all this wishy-washy transhumanist talk, there are in fact some recent developments that, until quite recently, were completely sci-fi. Take for instance robotic exoskeletons, something everyone has been familiar with since the release of the movie Aliens. At the time it was pure fantasy, as such a suit would require an energy source that just couldn’t exist then. Whilst we don’t have power loaders today, we do in fact have two devices that are quite closely related. The first is the HULC exoskeleton, which is capable of carrying itself and an additional 90kg of equipment, placing no burden on its wearer. The second is the REX robotic exoskeleton, which gives the ability to walk back to those who have lost it. Humanity, it seems, is on the cusp of overcoming nearly all of our limitations, even those that were once in the realm of fantasy.

Take a step back and look at how augmented your life is today. Since you’re reading this blog you’ve already got more information at your fingertips than any of your ancestors had in their entire world. There are more subtle things, like say your dishwasher, that enable you to complete many tasks at once. Each one of those is an augmentation, allowing you to achieve much more than you would’ve been able to in the past. The effects of such augmentations are widespread and all of them have an accelerating effect. The next decade looks to be bright with innovations that bring human capability to places that are still in the realms of sci-fi, and I for one can’t wait to see what it brings.

Shoehorning Old Business Models Into The Digital World.

As the saying goes, information wants to be free. In this day and age the limitations of traditional media no longer apply, and the dissemination of information across the globe can be achieved for a cost that is continually approaching zero. We can unequivocally trace this back to the permeation of Internet access across the developed world, as it tore down the physical barriers that used to be in place. For the information gatekeepers of the past the Internet has been the most disruptive technology they’ve had to deal with, and even today, after nearly 2 decades of cheap, consumer-level Internet access, they’re still trying to erect their gates just as they did with all of the Internet’s predecessors.

Unfortunately for them, it’s not really working.

However, taking a closer look at the traditional media outlet business model, there really aren’t too many differences between the new and old worlds. For starters, both have advertising as their primary source of revenue, with the cost of a newspaper or magazine usually only covering the costs of printing and publication. The key differences that break this model appear to be a combination of expectations around online news sources (no one expects to pay to see something on the web) and a trend away from brand loyalty (you would rarely read all the newspapers in one day, but you can conceivably do that now with online feeds). There’s still a market for print media, as many prefer that form to its online counterpart, much like how people still prefer books to eBooks (although even that is changing), but when it comes to online, if you’re not doing it for free you’re likely to alienate those who were attracted to your online presence in the first place.

Of course this is all just postulation on my part, but it’s not without its roots in the real world. I’ve often debated with my close friends and fellow bloggers that the subscription model for news is not well suited to the digital world, and that any attempt to monetise in that fashion would be met with staunch resistance. Unfortunately for me the anecdotal evidence was on their side, since there were quite a few people (themselves included) who would pay for quality journalism in an online format, so on an anecdote vs anecdote level they had the upper hand. Recently however there have been more examples showing that my side of the fence might ultimately win out:

My sources say that not only is nobody subscribing to the website, but subscribers to the paper itself—who have free access to the site—are not going beyond the registration page. It’s an empty world.

The wider implications of this emptiness are only just starting to become clear. A Murdoch and Fleet Street veteran with whom I’ve been corresponding about the paywall reported to me on his recent conversation with an A-list entertainment publicist: “What was really interesting to me was that this person volunteered a blinding realization. ‘Why would I get any of my clients to talk to the Times or the Sunday Times if they are behind a paywall? Who can see it? I can’t even share a link and they aren’t on search. It’s as though their writers don’t exist anymore.’”

Or, to be more scientific about it: The Times lost over 90% of its online readership after putting up a paywall:

These figures can then be used to model how this may impact on the number of users hitting the new Times site. Based on the last available ABCe data for Times Online readership (from February 2010), which showed that it had 1.2 million daily unique users, and Hitwise’s figures showing it had 15% of UK online newspaper traffic, that means a total of 332,800 daily users trying to visit the Times site.

If none of the people visiting the site have already registered, the one-in-four dropout rate means that traffic actually going from the registration site to the Times site is just 84,800, or 1.06% of total UK newspaper traffic – a 93% fall compared with May
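As a sanity check, the quoted figures do hang together arithmetically. Here’s the calculation spelled out, using only the numbers from the quote above:

```python
# All inputs are the figures quoted in the ABCe/Hitwise analysis above.
times_daily_uniques = 1_200_000   # Times Online daily uniques, Feb 2010
times_uk_share = 0.15             # Hitwise: 15% of UK online newspaper traffic
past_registration = 84_800        # daily users getting past the registration page

# Total UK online newspaper traffic implied by that 15% share.
total_uk_traffic = times_daily_uniques / times_uk_share   # 8,000,000

share_after = past_registration / total_uk_traffic              # 0.0106, i.e. 1.06%
readership_fall = 1 - past_registration / times_daily_uniques   # ~0.93, i.e. a 93% fall
```

Note the 93% fall is measured against the original 1.2 million daily uniques, not against the 332,800 still trying to visit the site.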

The evidence thus far suggests that taking a free service and making it pay-only is a surefire way to lose your online readership. Any Internet startup will tell you that getting people to pay for your product up front makes driving adoption an uphill battle, which is why many of them have chosen a freemium model instead. It makes a lot of sense when you consider that the cost of changing news providers online is essentially zero and the value derived is almost identical. So why are some media outlets attempting to shoehorn their old business models into the new world? From my point of view there seem to be a few reasons.

The first is that despite trashing their online presence by limiting it to paying users, many of the online media outlets will still be able to generate a decent amount of revenue from doing so. Since a media site can arguably be run quite cheaply, it doesn’t take a whole lot of paying users to cover the costs and make a healthy bit of profit on top. Additionally, if you provide a service whose free alternatives just aren’t up to scratch, like say the Wall Street Journal and Financial Times with their stock information, you’re likely to attract quite a following, especially if your users perceive that they’re getting a good deal.

For many of the larger media outlets it’s also about trying to regain some of the control they lost when the world started looking elsewhere for its information fix. Erecting a paywall takes the medium of the Internet and attempts to make it behave much like a traditional media source, with the gatekeeper firmly in place. Whilst I won’t go so far as to suggest that blogs/Twitter/YouTube/citizen media will ever replace traditional journalism (although I’ll argue they’re a good enough alternative for many), re-establishing themselves as gatekeepers of the new information conduit ensures that their traditional business models retain at least some relevance in the new media marketplace. That would be rather reassuring for any media outlet struggling to come to terms with the digital revolution.

The digital world has been a revolution in so many respects that it’s hard to find parallels in the history books. Whilst there have been many changes of similar magnitude, none have been this disruptive on so many levels over such a short period of time. It is then logical that traditional businesses, not just media outlets, would struggle to come to terms with it. With my generation beginning to take over these traditional media streams I’m sure the trend of embracing the new world will continue, and I’m sure that in a couple of decades there will be another disruptive technology that changes the game again.

And then someone else will write a post like this about how my generation can’t get it right in their new world. Ad infinitum.

It’s Been a Great Week For Space.

I won’t lie to you: it’s been hard to be motivated about much with Canberra’s climate the way it is at the moment. Waking up to a backyard covered in frost, whilst beautiful in its own way, is a sure way to make me yearn for the comforts of my warm bed, forsaking any work commitments. Despite that, I’ve had quite a few productive weekends huddled away from the icy bite of the outdoors, and I’ve come to notice a lovely trend in the headlines gracing my feed reader: there’s been tangible progress in almost all areas of space exploration, and that never fails to make me extremely happy.

The first bit of news comes from Virgin Galactic. It’s been a while since we last heard from them after the maiden flight of SpaceShipTwo, almost 4 months to the day. Still, that doesn’t mean progress hasn’t been made, and the announcement came out just recently that they had performed their first fully crewed flight:

A private suborbital spaceship built for the space tourism firm Virgin Galactic made its first flight with a crew onboard Thursday as it soared over California’s Mojave Desert beneath its enormous mothership.

The commercial spaceliner – called VSS Enterprise, one of the company’s fleet of SpaceShipTwo spacecraft – did not try to reach space in the test flight. Instead, it stayed firmly attached to its WhiteKnightTwo VMS Eve mothership.

The two crewmembers riding onboard VSS Enterprise evaluated all of the spacecraft’s systems and functions during the 6-hour, 12-minute flight, Virgin Galactic officials said in a statement. In addition, automated sensors and ground crews conducted thorough vehicle systems tests.

Now that might not seem like much on the surface, but it is in fact quite a giant step forward for Virgin Galactic and the Scaled Composites guys. The two craft soared to over 15km, well above the roughly 11km at which most passenger jets cruise. To put that in perspective, it means that many of the life support components of the craft have been verified: at that altitude you wouldn’t last long without functioning life support, and definitely not the 6 hours they were up there for. Completing these tests brings the SpaceShipTwo dream that much closer to reality, and with commercial flights scheduled for 2011 I’m sure we’ll see a powered test flight before the year is out.

The second came in the form of my current space crush, SpaceX. It’s been a little over a month since their Falcon 9 rocket soared into the history books and gave us Australians a lightshow to rival those our Nordic cousins have experienced. This week brings news that, so soon after their last launch, they’re already gearing up for the next one, with the parts for a new Falcon 9 arriving at Cape Canaveral:

Six weeks after the first Falcon 9 rocketed into orbit, pieces of the second launcher have begun arriving at Cape Canaveral for a shakedown flight of SpaceX’s Dragon capsule in September, according to the company’s top executive.

The Falcon 9 first stage pulled into Cape Canaveral Thursday after a truck ride from SpaceX’s test site in central Texas.

The stage was placed inside the company’s rocket assembly hangar at launch pad 40. Officials said they untarped the rocket and completed initial inspections Thursday night.

Engineers plan more testing over the next several weeks to make sure the stage and its nine Merlin engines are ready for flight.

Again it might not seem like a lot, but it’s a testament to the fact that SpaceX is serious about being a fully fledged orbital launch company, competing with giants like Boeing and Lockheed who’ve dominated this sector for decades. Additionally it shows that many of the processes required for them to churn out a respectable number of rockets are in place and working beautifully, rather than the recent launch being nothing more than a one-off prototype à la Ares I-X. The next flight, which looks to be on track for a launch towards the end of this year, will fly the first fully functional Dragon capsule, complete with full avionics, life support and, most importantly, the heat shield for re-entry. The current specs of the Dragon capsule have it rated to return to Earth from missions to the Moon and Mars, something that surprised the entire space community. I have no doubt it’s capable of this, and it gives me the feeling that Elon Musk might have dreams of going far beyond LEO with SpaceX. I’m getting all giddy just thinking about it.

The last, and most impressive, is something any science fiction fan will tell you is possible, but until just recently had never actually been used as the primary means of propelling a spacecraft. IKAROS, a craft I wrote about 2 months ago, unfurled its sail and successfully used the sun’s radiation pressure to propel itself through space:

We’ve been following the progress of the Japanese spaceship IKAROS — the first to unfurl a solar sail in deep space. Today, the ship made the only first that really matters: it caught the sun’s rays with its 3,000 square-foot sail and successfully used the energy to speed its way through space.

Each photon of light exerts 0.0002 pounds of pressure on the 3,000-square-foot sail, and one after another they succeeded in propelling the nearly 700-pound drone. Japanese scientists expect to be able to control IKAROS’s velocity by adjusting the angle at which incoming radiation strikes the sails. For a full technical explanation of how the drone is moving, check out the Japanese space agency JAXA’s press release.

Solar sail technology is important because it allows spacecraft to travel without fuel, which could allow them to penetrate ever deeper into space.

This is probably one of the biggest advances in space technology we’ve seen in quite a long time. Solar sails have the potential to propel craft to speeds far beyond any of our current craft and rivalling even some of the theoretical nuclear craft. Of course there is still a long way to go until this can be used for larger craft (IKAROS is ~300kg) but the demonstration verifies that several key technologies function as expected and produce the required results. This success means there’s a good chance that the proposed larger solar sail craft will get the funding it needs to bring it into reality. I can’t wait to see what kinds of interesting missions solar sails will make possible.
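To give a feel for how gentle yet useful that push is, here’s a rough back-of-the-envelope calculation. It assumes an ideal, perfectly reflecting sail facing the sun head-on at Earth’s distance, so it’s an upper bound rather than IKAROS’s actual measured thrust (real sails absorb some of the light):

```python
# Back-of-the-envelope solar sail thrust. Idealised: a perfectly
# reflecting, sun-facing sail at 1 AU. Real sails do worse.
SOLAR_CONSTANT = 1361.0   # W/m^2, solar irradiance at 1 AU
C = 299_792_458.0         # m/s, speed of light
SAIL_AREA = 14.0 * 14.0   # m^2, IKAROS's roughly 14 m x 14 m sail
MASS = 307.0              # kg, approximate mass of IKAROS

# A perfect reflector feels twice the incoming photon momentum flux.
force = 2 * SOLAR_CONSTANT * SAIL_AREA / C   # newtons
accel = force / MASS                         # m/s^2
delta_v_per_day = accel * 86_400             # m/s gained per day

print(f"thrust  ~ {force * 1000:.2f} mN")        # ~1.78 mN
print(f"delta-v ~ {delta_v_per_day:.2f} m/s per day")
```

A couple of millinewtons sounds like nothing, yet because it never runs out of fuel the craft picks up roughly half a metre per second of velocity every single day, and that compounds for as long as the sun shines on the sail.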

It’s been a while since I’ve been able to write one of these starry-eyed posts about space and, I’ll be honest, it feels good to do it. Space is one of those things I always find myself losing hours on, and being able to share some of that wonder with an audience always gives me a great feeling of accomplishment. One day, thanks to achievements like those outlined here, I know I’ll be able to venture into space myself and share in that most impressive of human endeavours.

Interestingly, if you turn the clock back a year it seems I wrote a very similar post to this one. Coincidence? Most likely 😉

Linkbaiting, Trolls and Other Secrets to Blogging Success.

I try to keep some semblance of journalistic integrity on this blog. I usually only write about things I believe I have something worth saying on, and I think it shows when I’ve forced out a post just to satisfy my obsessive-compulsive side. Still, the temptation is always there to take the latest hot headline in one of my areas of interest and just parrot the popular sentiment, as it’s an almost guaranteed way to drive people to this site. Sometimes I’m lucky enough that these two worlds collide and I get to write about something I like that also brings people to my blog. One example was my reaction to the iPad which, whilst I knew it was going to be all over the press, was an honest reaction to the product’s announcement, and it saw quite a few people coming here to get whatever details they could on Apple’s latest toy.

In the professional blogging world things aren’t quite so freeform.

You see, despite efforts to the contrary, the best way to make money off your online content is advertising. Depending on who you’re dealing with, ads can be sold as cost per thousand impressions (CPM), cost per click (CPC) or some other variety. No matter what kind of advertising you end up slathering all over your content, the amount you make will be directly proportional to the number of users who visit your site. The best ways to boost that number usually involve breaking a story (although exclusives don’t last long in our Internet world), writing on the topic du jour, or playing on people’s loyalties by taking a controversial stance on a subject. Take a look at any blogging site and you’ll see a combination of all of these, usually right there on the front page, all done in aid of driving users and their respective advertising revenue to the site.
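The traffic-equals-revenue point is easy to see with some illustrative arithmetic. The rates below are made up purely for the example; real CPM, click-through and per-click figures vary wildly by niche and network:

```python
# Illustrative only: rough revenue under two common ad models,
# with made-up rates. Both scale linearly with traffic.
def cpm_revenue(pageviews: int, rate_per_thousand: float) -> float:
    """Cost-per-thousand-impressions: paid per 1,000 ad views."""
    return pageviews / 1000 * rate_per_thousand

def cpc_revenue(pageviews: int, click_rate: float, per_click: float) -> float:
    """Cost-per-click: paid only when a visitor clicks an ad."""
    return pageviews * click_rate * per_click

views = 100_000
print(cpm_revenue(views, 2.00))        # $200 at a hypothetical $2 CPM
print(cpc_revenue(views, 0.01, 0.50))  # $500 at a hypothetical 1% CTR, $0.50/click
```

Either way, doubling your visitors doubles your payout, which is exactly why the linkbait incentive exists.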

As always this post was inspired by an example of such behaviour that I saw on the Internet. Currently one of the hot topics amongst the tech crowd is the issue of the iPhone 4’s antenna, which can be shorted out if the phone is held in a certain way. I’ve steered clear of the topic, mostly because I don’t have anything useful to say on the matter and it’s already been beaten to death in the headlines over the past couple of weeks. To give you an idea of just how absurd this whole situation is getting, take a gander at this post over at TechCrunch:

But the thing is, that trust that my mom gives to Consumer Reports was hard earned over decades of obsessive use. She trusts Consumer Reports. And if I read it I might trust it too. If they rated stuff on shininess I’d definitely subscribe. Or if they rated robots.

But suddenly Consumer Reports is crazy for the link bait. This iPhone 4 antenna problem has them going absolutely batshit crazy, and nearly every day they’re firing off a new set of recommendations, or demands, that conflict with the old recommendations and demands.

Ironically¹, Arrington is guilty of the same things he criticises Consumer Reports for. The post is a classic traffic-driver attempt: he’s taken a rather controversial stance (no one else has criticised Consumer Reports to my knowledge), he’s talking about one of the hottest topics of the day and, for what it’s worth, he’s breaking the story. The post is just aching for Consumer Reports to respond to his claims, and should they actually do that he’s got an opening to write yet another trolltastic article.

Since my blog is primarily personal and nets me zero revenue, I don’t usually have any desire to write those kinds of articles. That’s not to say I haven’t; in fact I’ve done quite a few of them. However I never felt that good about them afterwards, and talking it over with my fellow bloggers they agreed those posts weren’t really of the standard they’d come to expect from me. I am human, however, so there are times when my stance on something will go against the grain of what’s currently socially acceptable, but those posts will (hopefully) contain reasoned, logically constructed arguments, so at least if you don’t agree with me you’ll understand how I came to my conclusions.

You could write this whole tirade off as someone languishing in the dark recesses of the Internet, casting an evil eye at anyone with a whiff of success. The Australian blood that runs through me will always want to cut the tall poppies down, but realistically it all comes back to my desire to give a little something to those who read my writings. Whilst I know not everyone cares about why people write things for all to see, I feel that knowing someone’s motivations helps me greatly in understanding their content and, should they attempt to convince me of their viewpoint, in acknowledging any biases they have lest I take them on as my own.

¹It gets even more ironic if you consider that this post could be construed as falling prey to the same ideas I’m criticising. I knew that when writing this, just so you know 😉

Are Device Hackers Worth This Much Effort?

I readily admit that I’m a bit of a tinkerer. There’s something really enjoyable about taking something you bought and squeezing extra functionality out of it, especially if it unlocks something that no existing product provides. I remember hearing, after having my PlayStation Portable for a while, of the many great things that could be done with it, so I set out to mod it. A couple of days later I had it streaming live video from my PC over our wireless network, which was quite an impressive feat back in those days. Today the device hacker scene is alive and well on almost any platform that can be exploited, leading to a game of cat and mouse between the creators of said devices and those who would seek to exploit them.

Now I’m not going to be naive and pretend there aren’t nefarious motives behind parts of the hacking scene. Indeed the main motivation for a lot of hacks that unlock certain bits of functionality is piracy of legitimate software. In fact for the Xbox 360 the only hack available is arguably only good for pirating software, as Microsoft’s hard line on banning users who use it shows. Still, the never-ending game of cat and mouse that companies play with the recreational hacking crowd doesn’t appear to make much fiscal sense on the surface, as the man hours spent protecting such systems always seem to be undone within little more than a couple of weeks by a few skilled individuals.

Probably one of the platforms where this kind of behaviour is almost encouraged is Android. For starters the entire system is open source, so if you were so inclined you could write custom packages for it to unlock almost any functionality you wanted. It also seems that the vast majority of Android handset manufacturers only put mild roadblocks in the way of those seeking to gain root-level privileges on their devices, akin to the CD-in-the-drive checks of games of yesteryear. Still, the trend may be shifting somewhat, with the recent Droid X, touted as the best Android phone to date, employing some rather drastic measures to prevent end users from tampering with it:

Motorola has apparently locked down the phone to the point where any modification attempts — including “rooting” the phone to install unauthorized apps, or changing its firmware — could render it completely inoperable (or “bricked”). The only way to fix it is to return the phone to Motorola, reports the Android fansite MyDroidWorld.

The company is using a technology called eFuse to secure the device. It runs when the phone boots up, and it checks to make sure that the phone’s firmware, kernel information, and bootloader are legit before it actually lets you use the device. Here’s MyDroidWorld’s explanation:

If the eFuse failes to verify this information then the eFuse receives a command to “blow the fuse” or “trip the fuse”. This results in the booting process becoming corrupted and resulting in a permanent bricking of the Phone. This FailSafe is activated anytime the bootloader is tampered with or any of the above three parts of the phone has been tampered with.
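In essence the check MyDroidWorld describes is a boot-time integrity verification. Here’s a hypothetical sketch of that logic; the names, image blobs and hashing scheme are all my own invention for illustration, since Motorola’s actual implementation lives in hardware and firmware and isn’t public:

```python
# Hypothetical sketch of an eFuse-style boot integrity check.
# All names and blobs are made up; the real check is in the
# phone's boot chain, not application code.
import hashlib

def digest(image: bytes) -> str:
    return hashlib.sha256(image).hexdigest()

# "Factory" step: record digests of the images the device shipped with.
GOOD_IMAGES = {"bootloader": b"blob1", "kernel": b"blob2", "firmware": b"blob3"}
EXPECTED = {name: digest(blob) for name, blob in GOOD_IMAGES.items()}

def boot(images: dict) -> str:
    """Verify every image against its factory digest before booting."""
    for name, expected in EXPECTED.items():
        if digest(images.get(name, b"")) != expected:
            # On the real phone this step is irreversible: the fuse
            # is "blown" and the device is permanently bricked.
            return f"fuse blown: {name} tampered"
    return "booted"

print(boot(GOOD_IMAGES))                               # booted
print(boot({**GOOD_IMAGES, "kernel": b"custom-rom"}))  # fuse blown: kernel tampered
```

The nasty part isn’t the verification itself, which is standard secure-boot fare, but the punishment: instead of merely refusing to boot a modified image, the reported behaviour destroys the device outright.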

We device hackers know the risks when we take these things on; it’s part of the fun! I remember when I was hacking my PSP for the first time I had to find files from a not-so-trustworthy source, a random I met on an IRC channel. Knowing full well I could end up with a $400 paperweight I went ahead anyway and, luckily enough for me, it worked. However the trend towards vendors actively seeking to brick phones should the user try to tamper with them feels like a kick in the teeth. Realistically it’s my hardware, what I do with it is my business, and putting barriers in place just seems like a waste of both our time.

The argument can be made that they don’t want the average user attempting these kinds of things with their devices. There’s some logic to that, as stopping the casual hacking crowd means a good majority of the other nefarious activities will be thwarted as well. Additionally, in this day and age the originators of a hack usually make it exceptionally easy to use, like the Twilight Hack for the Nintendo Wii, which merely requires loading a save game, something everyone is capable of. Still, most users are bright enough to know that what they’re doing is akin to taking a chainsaw to their device, something the manufacturer will likely neither appreciate nor cover under warranty.

Coming back to the piracy issue, I still feel this comes down to the perceived¹ value that customers place in the products on offer. The customers who are pirating your product aren’t the kind who will just up and pay for it if they can’t get it for free. Really you should be looking at yourself to see why they’re pirating it: if it’s wildly successful with the pirates but not with legitimate customers, it’s quite possible your product is priced too high or the channels you’re offering it through are too restrictive. I’ve been researching these markets for months now, and it seems that no matter how hard you try to ensure no one pirates your product, you only end up hurting your paying customers, driving even more of them to those dastardly corners of the Internet where they pilfer your product for free.

In my mind there’s no question that the steps taken to thwart these would-be hackers aren’t worth the time put into them. For a platform like Android I believe these people actually help a great deal with the whole ecosystem, ensuring that power users get what they want whilst everyday users get dedicated experts to call upon at no cost to the original company. Who knows, maybe I’ll change my tune when I start trying to extract money from the markets based on these platforms, but if I do, feel free to point at this post and lambast me for being an idiot, as I’ll be far too detached from reality at that point 😉

¹I have a habit of re-reading my old posts when I link to them and just noticed that I praised Ubisoft for taking the right direction when trying to combat pirates. After their last DRM farce I can’t really support them anymore, but the ideas in that post remain solid (i.e. increasing value with things that can’t be pirated).

It’s Time, Old Friend (or Windows 2000, We Hardly Knew Ye).

I’ve been in the world of IT professionally for quite some time, but I’ve been an enthusiast for much, much longer. I can remember the days of doing everything through the command line in DOS, eagerly hunting down the games my father had installed. My first taste of a GUI wasn’t Windows; it was a rather esoteric program called XTreeGold, which provided many of the base functions later found in Windows 3.1. In my time with these wonderful beasts we call PCs I’ve used every iteration of Windows that’s been available, and I’ve never seen such fervent devotion to any version of Windows as that shown to Windows XP.

From a technical standpoint XP wasn’t really anything new. It was the first consumer version of Windows to share the vast majority of its core technology, the NT kernel, with its server counterpart. It was a good move, and all subsequent versions of Windows have continued to share a common base with their server cousins. Still, at the time many users were tightly wedded to their Windows 98/SE installations, and the early adopters who tried Windows ME weren’t in any mood to trust Microsoft again. XP managed to overcome this hurdle, and for the past 8 years or so it’s been the de facto OS on the vast majority of computers around the world.

However it’s an aging beast in the fast-paced world of IT. A computer considered top of the line 10 years ago is less powerful than your run-of-the-mill smartphone today. Windows 7 is truly an OS worth upgrading to, with the vast number of improvements it makes in performance, usability and functionality. Microsoft has tried hard to get people to move across to the new system, finally disowning Windows 2000 and XP SP2 (more on that in a second) by killing support for them:

Today is the last day that Windows 2000 and Windows XP Service Pack 2 will receive support and patches from Microsoft. Starting tomorrow, Service Pack 3 will be required to receive support and hotfixes for Windows XP.

In the past, the end of support for a service pack would mean that Microsoft would refuse to offer any kind of telephone support or troubleshooting assistance. This policy was relaxed a little in April; limited support will remain available for those organizations sticking with Service Pack 2. However, any hotfixes or security updates will be restricted to Service Pack 3.

Customers on Windows 2000 will not even have this option. The operating system is now out of its extended support phase. This brings an end to any and all hotfixes, security updates, or even paid support options. Fewer than half a percent of Internet-connected machines appear to use Windows 2000, and with the end of support, it is now open season on that minority: Microsoft will take no action to provide fixes for any security issues, regardless of their severity.

The fervent dedication to XP is wholly due to the failed product refresh cycle that was Windows Vista, but with the release of Windows 7 no one really has any excuse not to upgrade anymore. Still, the corporate world is a slow-moving beast, and skipping the last product cycle has meant many organisations have relied on Windows XP’s backwards compatibility to keep older applications functioning. Thus the cost of transition is far higher than if they had made the switch to Vista back when it was first released, as the differences between Vista and 7, at least in terms of application-breaking changes, are minimal. Thankfully most organisations recognised the need to move away from Windows 2000 a long time ago, and Windows Server 2008 enjoys quite wide adoption. I credit that mostly to Windows server editions being reserved for us caretakers of IT infrastructure, since we’re usually more inclined to try out the latest tech.

The day will soon come when Windows XP will no longer be a viable option for anyone, and whilst a small part of me will be sad to see it go I hope it will break the kind of mindless dedication that kept organisations technologically stuck in the same world for a decade. I made my career in a world that didn’t want to hear about the latest offerings from Microsoft because they knew it wasn’t worth their time. Windows 7 is making headway in that regard and is also breaking through the stigma of switching to 64-bit, something which used to be compared to running Windows ME (think of the crashes, driver incompatibility and general “WTF are you doing?” looks you’d get from us IT folks for doing it). It might not mean a heck of a lot to non-IT folks, but it’s definitely something to guys like me 🙂