Monthly Archives: April 2012

Strange Object Motion in Space.

Even with the proof behind it I still have no idea how the motion in this video works:

There are 3 distinct angular motions going on, but the seemingly discrete switch from one particular state to another (and the transition between them) really baffles me. It’s incredibly cool but thoroughly puzzling.

HP Cloud Tech Day.

So as you’re probably painfully aware (thanks to my torrent of tweets today) I spent all of today sitting down with a bunch of like-minded bloggers for HP’s Cloud Tech Day, which primarily focused on their recent announcement that they’d be getting into the cloud business. They were keen to get our input on the current state of cloud services adoption in the real world and what customers were looking for, with some surprising results. If I’m completely honest it was aimed more at the strategic level than the nuts-and-bolts kind of tech day I’m used to, but I still got some pretty good insights out of it.

For starters HP is taking a rather unusual approach to the cloud. Whilst it will be offering something along the lines of the traditional public cloud like all other providers, they’re also going to attempt to make inroads into the private cloud market whilst creating a new kind of cloud offering they’re dubbing “managed cloud”. The kicker is that should you implement an application on any of those cloud platforms you’ll be able to move it seamlessly between them, effectively granting you the elusive cloud bursting ability that everyone wants but no one really has. The tools across all 3 platforms are the same too, enabling you to have a clear idea of how your application is behaving no matter where it’s hosted.

The Managed Cloud idea is an interesting one. It takes the idea of a private cloud, i.e. one you host yourself, except that instead of you hosting it HP will host it for you. That takes away the infrastructure management worry that a private cloud still presents whilst allowing you to keep most of its benefits. They mentioned that they already have a customer using this kind of deployment for their email infrastructure, which had the significant challenge of keeping all data on Australian shores whilst the IT department still wanted some level of control over it.

How they’re going to go about this is still something of a mystery but there are some little tidbits that give us insight into their larger strategy. HP isn’t going to offer a new virtualization platform to underpin this technology; it will in fact utilize whatever current virtual infrastructure you have. What HP’s solution does is abstract that platform away so you’re given a consistent environment to implement against, which is what enables HP cloud-enabled apps to work across the varying cloud platforms.
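To make that abstraction idea a little more concrete, here’s a minimal sketch of what such a layer might look like. HP hasn’t published any API details, so every name below (ICloudPlatform, PrivateCloud, ManagedCloud) is hypothetical and purely illustrative; the point is simply that an application written against a common interface doesn’t care which cloud sits underneath it.

```csharp
using System;

// Hypothetical sketch only: HP hasn't released an API, so these types
// (ICloudPlatform, PrivateCloud, ManagedCloud) are invented for illustration.
public interface ICloudPlatform
{
    void Deploy(string packagePath);
    void Scale(int instanceCount);
}

// Each implementation would translate the same calls into whatever the
// underlying virtual infrastructure (vSphere, Hyper-V, etc.) understands.
public class PrivateCloud : ICloudPlatform
{
    public void Deploy(string packagePath)
    {
        Console.WriteLine("Deploying " + packagePath + " to on-premises hosts");
    }

    public void Scale(int instanceCount)
    {
        Console.WriteLine("Provisioning " + instanceCount + " local VMs");
    }
}

public class ManagedCloud : ICloudPlatform
{
    public void Deploy(string packagePath)
    {
        Console.WriteLine("Deploying " + packagePath + " to HP-hosted infrastructure");
    }

    public void Scale(int instanceCount)
    {
        Console.WriteLine("Requesting " + instanceCount + " managed instances");
    }
}

public static class Program
{
    public static void Main()
    {
        // "Moving" the application between clouds is just a matter of
        // swapping the implementation; the application code never changes.
        ICloudPlatform platform = new ManagedCloud();
        platform.Deploy("MyApp.pkg");
        platform.Scale(4);
    }
}
```

Swap ManagedCloud for PrivateCloud (or a public cloud implementation) and the application code doesn’t change, which is essentially the portability promise HP appears to be making.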

Keen readers will know that this is the kind of cloud platform I’ve been predicting (and pining for) for some time. Whilst I’m still really keen to get under the hood of this solution to see what makes it tick and how applicable it will be, I have to say that HP has done their research before jumping into this. Many see cloud computing as some kind of panacea for all their IT ills when in reality it’s just another solution for a specific set of IT problems. Right now that’s centred around commodity services like email, documents, ERP and CRM, and of course that umbrella will continue to expand into the future, but there will always be those niche apps which won’t fit well into the cloud paradigm. Well, not at a price point customers would be comfortable with, anyway.

What really interested me were the parallels that can easily be drawn between the virtualization revolution and the burgeoning cloud industry. Back in the day there was really only one player in each field (VMware and Amazon respectively) but as time went on many other players came online. Initially those competitors had to play feature catch-up with the number 1. The biggest player, noticing they were catching up quickly (through a combination of agility, business savvy and usually snapping up a couple of disgruntled employees), reacted by providing value-add services above the base functionality level. The big players in virtualization (Microsoft, VMware and Citrix) are just about at feature parity for base hypervisor capabilities, but VMware has stayed ahead by creating a multitude of added services. Their lead is starting to shrink though, which I’m hoping will push a fresh wave of innovation.

Applying this to the cloud world it’s clear that HP has seen that there’s no point in competing at a base level with the established cloud providers; it’s a fool’s gambit. Amazon has the cheap bulk computing services thing nailed and if all you’re doing is offering the same services then the only differentiator you’ll have is price. That’s not exactly a weapon against Amazon, who could easily absorb losses for a quarter whilst it watches you squirm as your margins plunge into the red. No, instead HP is positioning themselves as a value-add cloud provider, with a cloud offering that works at multiple levels. The fact that you can move seamlessly between them is probably all the motivation most companies will need to give them a shot.

Of course I’m still a bit trepidatious about the idea because I haven’t seen much past the marketing blurb. As with all technology products there will be limitations and until I can get my hands on the software (hint hint) I can’t get too excited about it. It’s great to see HP doing so much research and engaging with the public in this way, but the final proof will be in the pudding, something I’m dying to see.

Instagram: The Photos Are Bad and You Should Feel Bad.

I’m a kinda-sorta photography buff, one whose passion is only restrained by his rigid fiscal sensibility and lack of time to dedicate to taking pictures. Still I find the time to keep up with the technology and I usually find myself getting lost in a sea of lenses, bodies and various lighting rigs at least once a month, simply because the tech behind getting good photographs is deeply intriguing. Indeed whenever I see a good photograph on the Internet I’m always fascinated by the process the photographer went through to create it, almost as much as I am by the tech itself.

Such a passion is at odds with the recently Facebook-acquired app Instagram (or any of those filter picture apps).

To clear the air first: yes I have an account on there and yes there are photos on it. To get all meta-hipster on your asses for a second, I was totally into Instagram before it was even known as that, back when it was still called Burbn and a potential competitor to my fledgling app. Owing to my “better get on this bandwagon early” mentality I created an account on Instagram and used the service as it was intended: to create faux-artistic photos by taking bad cell phone pictures and then applying a filter to them. My usage of it stopped when I made the switch to Android last year and for a time I was wondering when it would come to my new platform of choice.

It did recently, but in the time between then and now I came to realise that there really is nothing in the service that I can identify with. For the vast majority of users it serves as yet another social media platform, one where they can showcase their “talent” to a bunch of like-minded people (or simply followers from another social media platform following them to the platform du jour). In reality all that Instagram does is auto-tune bad cell phone pictures, meaning that whilst they might be visually appealing (as auto-tuned songs generally are) they lack any substance thanks to their stock method of creation. The one thing they have going for them is convenience, since you always have your phone with you, but at the same time that’s why most of the photos on there are of mundane shit that no one really cares about (mine included).

To be fair I guess the issue I have isn’t so much with the Instagram service per se, it’s probably more with the people who use it. When I see things like this guide as to which filter to use (which I’m having a hard time figuring out whether it’s an elaborate troll or not) I can’t help but feel that the users somehow think they’ve suddenly become wonderful photographers by virtue of their phone and some filters. Were the prevailing attitude not the kind of snobbish, hipster-esque douchery that currently rules the Instagram crowd I might have just ignored the service rather than ranting about it.

From a business point of view the Instagram acquisition by Facebook doesn’t seem to make a whole lot of sense. It’s the epitome of the business style that fuelled the dot com bust back in the early 2000s: a company with a hell of a lot of social proof but no actual revenue model (apart from getting more investors) gets snapped up by a bigger company looking either to show that it’s still trying to expand (Facebook in this case) or, in the case of a dying company, hoping it can revive itself through acquisitions. Sure, for a potential $100 billion company lavishing 1% of your worth on a hot new startup will seem like peanuts, but all they’ve done is buy a cost centre, one that Facebook has said they have no intention of mucking with (good for the users, potentially bad for Facebook’s investors).

Instagram produces nothing of merit and using it does not turn a regular person into some kind of artist. Seriously, if you want to produce those kinds of pictures and not be a total dick about it, go and grab the actual cameras and try to recreate the pictures. If that sounds like too much effort then don’t consider yourself a photographer or an artist; you’re just a kid playing with a photography set and I shall treat the pictures you create as such.

Fire Sales: A Reflection From a Former Dick Smith Employee.

Long time readers will know that a good chunk of my working teenage years, in fact the majority of them now that I think about it, was spent at the Dick Smith Electronics store in Fyshwick here in the ACT. For what it’s worth, out of all the jobs I had (I was working up to 4 at times) it was the best of the lot, though that could have easily been due to my seniority at the place. In my time there I was privy to many of the behind-the-scenes activities that the vast majority of the public are unaware of, and it seems some of those activities have recently been called into question again.

The event I’m referring to is the massive game clearance sale that took place just last week. Due to consumer regulations the sale could not be made public, thanks to the stampede it would have created, but a price list inevitably made its way online and the calamity that ensued had people wondering just what the hell was going on. A major part of this was the apparent disdain for the employees who managed to snag some of the bargains before the public were allowed to have a go at them, something which one current employee has gone a long way to address.

I figured that I might just throw my hat in the ring here as well.

Way back in the day you might remember the little bastard child of a gaming handheld crossed with a mobile phone that was the Nokia N-Gage. Dick Smith stocked them and I had the mixed pleasure of trying to sell them to potential customers. It was an incredibly hard sell, one that only worked on the few uber nerds who would seek it out and the cashed-up parents who bought their kids the phones they wanted (rather than the ones they needed). One day Dick Smith decided to have a sale on the handsets via a coupon for a modest discount. The wording of the coupon was kind of lax and the system allowed multiple coupons to be used against a single handset, dropping the price to a tantalizing $100.

It was bedlam; the instant word got around our store was swamped with people looking to pick up the bargain. We kept the store open well past closing time in order to service the last few orders and nearly every customer walked away with the device they wanted. The benefit for us was that a few of us got one of these handsets as well, many of whom then used it as their primary handset afterwards. Indeed the vast majority of the ones we saw up on eBay came from customers who had purchased them from us not hours before, not from the staff who had put them aside.

I’d have more sympathy for the greater consumer market if a couple of things weren’t the way they currently are. When I started, Dick Smith employees got an amazing staff discount: cost price + 10%. Whilst it didn’t make everything cheaper (games and computers chief amongst them) it was amazing for many of the things that a budding geek like myself lusted after. About 3 years after I started working there the discount was scrapped, changed to be in line with the greater Woolworths employee discount scheme of a flat 5% off (which we got as well, but could not use in conjunction with our other discount).

There’s also the fact that for the most part this was a clearance sale, i.e. a run-out of current stock lines that have either failed to move during their regular sales run or are the last of a dwindling few remaining. The fact that the price list was leaked online prior to the event meant that everyone got the impression there would be enough stock to satisfy everyone, when that was clearly not the case. Had this been a regular run-of-the-mill sale I might have sung a different tune, but it wasn’t and, I shudder to say this, it sounds like a lot of people acting like entitled little bitches.

Honestly I believe the staff are entitled to have first stab at these things (much like they were for the recent WoW Sight and Sound fire sales; hear anyone whining about that?) because they work there, plain and simple. Retail employees aren’t that well compensated and it’s little perks like these, which are few and far between, that keep them working there. If you’ve got a problem with staff having first go at sales like these then you’re more than welcome to take up a casual role in order to get the same level of “privilege” that they do. That, or you could admit your value proposition for the items in question is so far below the regular retail price that you probably neither need nor want said item.

This has gotten a lot more ranty than I intended, so if you’d like a more level-headed opinion from a consumer perspective then this piece from Matt Williams is probably more up your alley.

Batman Arkham City: The Bat is Back.

Arkham Asylum was one of the sleeper hits of 2009. It definitely wasn’t your traditional AAA title, combining elements of several different genres into one well-thought-out experience. I have to admit I was sceptical of it at first, as games based on comic or movie IP are traditionally quite bad, but it pleasantly surprised me. I was then quite excited when I heard about the sequel, Arkham City, which apparently had been hinted at in Arkham Asylum. Unfortunately I was torn between getting the collector’s edition on console or playing it on the PC, a decision that took me far too long to make. In the end I decided to play it on PC again and I’m glad I did.

Arkham City starts out with you as Bruce Wayne, who’s campaigning for Arkham City, in essence a prison camp, to be shut down. Things take a turn for the worse when Hugo Strange’s mercenaries show up and throw him into the city, where Strange reveals that he knows Bruce is Batman and that should he try to stop his “Protocol 10” solution he will reveal that to the world. After a short altercation with the Penguin and some of his goons Bruce calls in a drop for his batsuit and begins his journey to stop Strange’s plan.

Both the visuals and the art direction of Arkham City are vastly improved from its predecessor. To Rocksteady’s credit they’ve done a pretty good job with the optimization too, as even at the highest settings I was able to run the game at high frame rates. There were occasions where it would slow down inexplicably, though this wasn’t consistent with being inside/outside nor with heavy action. Still the graphics are great, the interactions between characters are no longer stilted affairs and the overall ambition for Arkham City is much greater than it was for Arkham Asylum, an ambition they’ve managed to achieve well.

The core mechanics of Arkham City haven’t changed that much from Arkham Asylum but there have been some notable additions. Due to the sheer scale of Arkham City the glide mechanic has been reworked considerably, now enabling Batman to, in essence, fly around the entire city almost unaided. This mechanic is made good use of by many of the quests and mini-games, with things like flying to a certain point within a time limit or augmented reality challenges that unlock additional equipment and upgrades. Flying around like this was probably one of my favourite things to do in Arkham City, considering you couldn’t do anything like it in its predecessor.

Combat has stayed relatively the same, with most of the kinks I complained about in Arkham Asylum worked out. There are numerous additional gadgets, different enemy types and new take-down manoeuvres that serve to make the combat experience much more varied, but at its heart it’s still very much the same as its predecessor. This isn’t a bad thing though, as the combat in Arkham Asylum was done very well and the added variation in Arkham City keeps it faithful whilst letting it stand on its own.

Whilst the combat is good it does tend to get a little samey as the game progresses, but this is thankfully broken up well by the unique boss encounters. Each of them makes use of Batman’s array of gadgets in a particular way, forcing you out of the regular hack ‘n’ slashy combat and into a real tactical challenge. Don’t get me wrong, it’s a pretty awesome feeling when you pull off a 70+ hit combo on legions of foes, but nothing got my adrenaline going as much as the boss fights did. None of them felt like a complete cock block either, something which can be hard to avoid if you’re trying to hit that fine line between satisfying challenge and impassable obstacle.

The Riddler puzzles were usually interesting but I didn’t really feel the compulsion to seek them out. Whilst it’s pretty easy to come across them as you’re flying around Arkham City I only ever really went after one if my health was low. Talking this over with my brother, he said that the challenges felt somewhat dumbed down from the predecessor, and this is probably why most people (outside those hunting for achievements) don’t really bother with them. I can’t for the life of me remember what the challenges were like back in Arkham Asylum, but the vast majority of the puzzles in Arkham City did feel quite easy.

Just like Arkham Asylum, Arkham City sets out an environment where almost the entire back catalogue of Batman super villains can make an appearance without having to have a back story to explain why the hell they’re there. It’s a kind of cheap way of getting them all together in the same area, but it works well as it leads you to many unique encounters based around those particular villains’ modus operandi. The screenshot above, from the Mad Hatter encounter, was a great example of this, putting you in a surreal world which you have to fight your way through to get back to reality. I liken it to the Scarecrow encounters of Arkham Asylum: unique encounters that break away from the main game in order to mix things up a bit.

The way in which you come across these kinds of unique encounters though is one of the more common complaints I’ve heard about Arkham City. Arkham Asylum was far more linear in its game play owing to its comparatively closed environment. Arkham City on the other hand is a true sandbox style game, pushing you to follow the main plot line whilst also throwing up dozens of side quests that can be done at your leisure. Truthfully this can get a little overwhelming at times, as you can’t go too far without triggering one of these quests, and after you’ve done a few of them you don’t feel the compulsion to seek them out as often. It is definitely one of the weaker aspects of Arkham City.

The sections where you play as Catwoman are interesting, although I must admit they weren’t my favourite part of Arkham City. The separate Riddler trophies for her, for example, seem to be a cheap way to reuse the same assets, forcing you to go back to somewhere you’ve already explored in order to collect them. Since the differences between Catwoman and Batman are limited to the lack of gadgets, lack of detective mode and no glide ability, she’s not different enough to make for a break from the core Batman play. I like that Rocksteady are experimenting with things like this, it shows they have confidence in their ability to make AAA titles like Arkham City, but they’d need to differentiate the playable characters a bit more in order for them to really shine.

Overall Arkham City improves greatly on its predecessor in technical terms, with the graphics improved, the glitches ironed out and the ambition of the game amped up significantly. It’s not without its faults however, owing to the transition to true sandbox style play and some compromises made to appeal to a wider audience. Still, unlike many sequels, Arkham City stands very well on its own as a unique game that draws well on its rich IP heritage. I wouldn’t hesitate to recommend it to both fans and newcomers to the Arkham series.

Rating: 8.75/10

Batman: Arkham City is available on PC, Xbox360 and PlayStation 3 right now for $89.99, $78 and $78 respectively. Game was played on the PC on Normal difficulty with 11 hours of total play time and 33% of the achievements unlocked.

Goodbye Orion.

Our family dog, Orion, had to be put down today.

He was a beautiful dog, always loving and never complaining. He was so docile and would quietly follow you anywhere, just because he wanted to be at your side. I’m going to miss him terribly and I know how cliché this sounds but I don’t know if I could love another dog as much as I loved him. He’ll be in my heart forever, right alongside Tibby, the cat Orion and I both grew up with.

I’ve been agonizing over whether I should write this ever since we laid him to rest but I feel I should. He was a beautiful dog and I don’t ever want to forget him. This, in a sense, is my way of making sure that he will be remembered in some way by virtue of the Internet never forgetting anything.

I love you boy, rest well now.

Transitioning From an IT Admin to a Cloud Admin.

I’ve gone on record saying that whilst the cloud won’t kill the IT admin there is a very real (and highly likely) possibility that the skills required to be a general IT administrator will change significantly over the next decade. Realistically this is no different from any other 10 year span in technology, as you’d struggle to find many skills that are as relevant today as they were 10 years ago. Still the cloud does represent some fairly unique paradigm shifts and challenges to regular IT admins, some of which will require significant investment in re-skilling in order to stay relevant in a cloud augmented future.

The most important skill for IT admins to develop is programming. Most IT admins have some level of experience with this already, usually automation scripts written in VBScript, PowerShell or even (shudder) batch. Whilst these provide some of the necessary foundations for working in a cloud future, they’re not the greatest for developing (or customizing) production-level programs that will be used on a daily basis. The best option then is to learn a formal programming language, preferably one that has reference libraries for all the major cloud platforms. My personal bias is towards C# (and yours should be too if your platform is Microsoft) as it’s a great language and you get the world’s best development environment to work in: Visual Studio.
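As a trivial but concrete example of graduating from scripting to code, here’s a short sketch using .NET’s ServiceController class to check on a Windows service and restart it if it has stopped, the kind of job that usually lives in a batch file or a PowerShell one-liner. The service name is just an example; substitute whatever you’d actually monitor.

```csharp
using System;
using System.ServiceProcess; // requires a reference to System.ServiceProcess.dll

class ServiceWatcher
{
    static void Main()
    {
        // "Spooler" is only an example service name; use your own here.
        var service = new ServiceController("Spooler");

        Console.WriteLine("{0} is currently {1}", service.ServiceName, service.Status);

        if (service.Status == ServiceControllerStatus.Stopped)
        {
            service.Start();
            // Block until the service reports Running, or throw after 30 seconds.
            service.WaitForStatus(ServiceControllerStatus.Running,
                                  TimeSpan.FromSeconds(30));
            Console.WriteLine("Service restarted.");
        }
    }
}
```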

IT admins should also look to gaining a deep understanding of virtualization concepts, principles and implementations, as these are what underpin nearly all cloud services today. Failing to understand them means you won’t be able to take advantage of many of the benefits a cloud platform can provide, since cloud applications function very differently to the traditional 3 tier application model.

The best way to explain this is to use Microsoft’s Azure platform as an example. Whilst you can still get the 3 tier paradigm working in the Azure environment (using a Web Role, Worker Role and SQL Azure), doing so negates the benefits of using things like Azure Table Storage, Blob Storage and Azure Cache. The difference comes down to having to manually scale an application like you would normally, instead of enabling the application to scale itself in response to demand. In essence there’s another level of autonomy you can take advantage of, one that makes capacity planning a thing of the past¹.
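As a rough illustration of that difference, consider a worker pool that sizes itself from the depth of a queue rather than from an up-front capacity plan. This is only a sketch: the two interfaces below are hypothetical stand-ins for whatever your platform actually exposes (a queue length counter and an instance-count API), not real Azure types, and the throughput figure is an assumed number.

```csharp
using System;

// Hypothetical stand-ins for platform APIs; not real Azure types.
public interface IQueueMetrics
{
    int PendingMessages { get; }
}

public interface IRoleScaler
{
    void SetInstanceCount(int count);
}

public class AutoScaler
{
    // Assumed figure: how many queued messages one worker can chew through
    // between scaling checks. You'd measure this for your own workload.
    private const int MessagesPerWorker = 100;

    private readonly IQueueMetrics _metrics;
    private readonly IRoleScaler _scaler;

    public AutoScaler(IQueueMetrics metrics, IRoleScaler scaler)
    {
        _metrics = metrics;
        _scaler = scaler;
    }

    // Called periodically: size the worker pool from observed demand
    // instead of a capacity plan drawn up months in advance.
    public void Adjust()
    {
        int backlog = _metrics.PendingMessages;
        int workers = Math.Max(1, (backlog + MessagesPerWorker - 1) / MessagesPerWorker);
        _scaler.SetInstanceCount(workers);
    }
}
```

The point isn’t the arithmetic, it’s that the scaling decision lives in code and reacts to demand, which is exactly the behaviour you give up if you just lift a 3 tier app into the cloud unchanged.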

It’s also worth your time to develop a lot of product knowledge in the area of cloud services. As I mentioned in my previous blog, cloud services are extremely good at some things and wildly inappropriate for others. In my experience most cloud initiatives attempt to be too ambitious, looking to migrate as many services into the cloud as possible whether there are benefits to be had or not. It’s your job then to advise management on where cloud services will be most appropriate, and you can’t do this without a deep knowledge of the products on offer. A good rule of thumb is that cloud services are great at replacing commodity services (email, ERP, CRM etc.) but aren’t so great at replacing custom systems, or commodity systems that have been heavily modified. Still it’s worth researching the options out there to ensure you know how a cloud provider’s capabilities match up with your requirements, hopefully prior to attempting to implement them.

This is by no means an exhaustive list and realistically your strategy will have to be custom made for your company and your potential career path. However I do believe that investing in the skills mentioned above will give you a good footing for transitioning from a regular IT admin to a cloud admin. For me it’s an exciting prospect: whilst I don’t believe the cloud will overtake anything and everything in the corporate IT environment, it will provide us with some amazing new capabilities.

¹Well, technically it just moves the problem from you to the cloud service provider. There’s still some capacity planning to be done on your end, although it comes down to financial rather than computational planning, so that’s usually left to the finance department of your organisation. They’re traditionally much better at financial planning than IT admins are at capacity planning.

Many thanks to Derek Singleton of Software Advice for inspiring this post with his blog on Cloud Career Plans.

Google’s Project Glass: Augmented Reality Takes a Big Leap Forward.

It was just over a decade ago now but I can still vividly remember walking around the streets of Akihabara in Tokyo. It’s a technical wonderland and back then, when Internet shopping was something only crazy people did (for fear of losing your credit card details), it was filled with the kind of technology you couldn’t find anywhere else. I was there on a mission, looking for a pocket translator similar to the one my Japanese teacher had lent me. While my quest went unfulfilled I did manage to see all sorts of technology there that wouldn’t make it to Australian shores for years to come, and one piece in particular stuck in my mind.

There was a row of these chunky looking headsets, each hooked up to what looked like a portable CD player. Picking one up and looking inside I saw two tiny displays, one for each eye. Putting the headset on I was greeted by a picture that seemed massive in comparison to the actual size of the device, playing some kind of demo on a loop. It wasn’t perfect but it was enough to make me fascinated with the concept, and I thought it wouldn’t be long before everyone had some kind of wearable display. Here we are just over a decade later and the future I envisioned hasn’t yet come to pass, but it seems we’re not far off.

Today Google announced Project Glass, a brainchild of the secretive Google[x] lab. There have been rumours floating around for quite a while now that they were working on something of this nature, but no one could give much beyond the general idea that it would be a head mounted display and that Android would be powering it. Looking over what Google released today, as well as the comments from other news outlets, makes it clear that Google is quite serious about this idea and it could be something quite revolutionary.

The initial headset designs I saw back when I heard the original rumours were the kind of clunky, overly large glasses we’ve come to expect when anyone mentions a wearable display. Google’s current design (pictured above) seems rather elegant in comparison. It’ll still draw a lot of attention thanks to the chunky white bar at the side, but it’s a far cry from what we’ve come to expect from wearable displays. What’s even more impressive is the concept demo they included alongside it, showcasing what the headset is capable of:

The possibilities for something like this are huge. Just imagine extending the capabilities to recognise the faces of people you’ve met before, neatly sidestepping that awkward moment when you forget someone’s name. You could even work a barcode scanner into it, allowing you to scan food to see its nutritional value (and whether it fits in with your diet) before you purchase it. I could go on forever about the possibilities of a device like Project Glass but suffice to say it’s quite an exciting prospect.

What will be really interesting to see is how these kinds of devices blend into everyday social interactions. The smart phone and tablet managed to work their way into social norms rather quickly, but a device like this is a whole other ball game. The sleek and unobtrusive design will help ease its transition somewhat, but I can still see a long adaptation period where people will wonder why the heck you’re wearing it. That won’t deter me from doing so though, as it’s this kind of device that makes me feel like I’m living in the future. That’s all it takes for me to overcome any social anxiety I might have about wearing one of these 😉

PAL-V and Terrafugia: The Only Way You’ll Be Getting a Flying Car.

Ah, the flying car: the milestone that needs to be hit before the general public believes we’re living in the future. I guess it’s because it’s so elusive; every time someone has predicted that we’d have one in X years (much like the jet pack) we’d inevitably reach that date without a hint of it becoming reality. The reasons behind this are fairly simple: flying isn’t exactly easy, and it’s not clear what the benefits of a flying car would be even if it could be made for mass use. Realistically it’s a solution in search of a problem, and that’s the main reason why there’s been little serious development of the idea.

Of course there is a cross-over niche where some kind of flying car would have a potential market. There are many people who have their own aircraft, typically small 2 to 8 seater types, who use them to travel distances we’d usually take a commercial flight for. When they get to their destination though they’re in the same boat as the rest of us, needing to find some kind of last mile transportation. The current solutions to this problem are the same for all travellers (hire cars, public transport, friends, etc.) but a few companies have been looking to solve it by making the aircraft they fly there road legal.

These are called, funnily enough, roadable aircraft.

The idea isn’t exactly new, with examples of such craft dating all the way back to the 1930s. Most of those designs weren’t terribly practical however, usually requiring heavy amounts of work to transition between car and plane modes. There are several working modern designs that use parachutes to generate lift but they again suffer from practicality problems, usually being limited to joy rides rather than an actually useful means of transportation. There are 2 companies that have caught my eye in this space however, and both of them have just recently made their maiden flights.

Terrafugia is a company that will be familiar to a lot of people since they’ve been attempting to make a roadable aircraft for just on 6 years. Their design, called the Transition, is an interesting one as it’s clearly primarily an aircraft that’s been modified to work on the road. To switch between plane and car modes the wings fold up alongside it, allowing it to fit into standard size car spaces. Whilst its performance is nothing spectacular it does sport a rather incredible range for a vehicle, able to fly up to 787 km or drive up to 1,296 km. As they develop it further I’m sure they’ll make improvements, as I can recall those specs being a lot worse in the past.

The one that really caught my (and everyone else’s, it seems) attention recently was the PAL-V One, which takes yet another intriguing stab at the roadable aircraft idea. Instead of using wings to generate lift it relies on a set of helicopter blades that provide lift through autorotation, with the thrust provided by a pusher propeller. Aviation nuts will recognise that system as an autogyro, a curious combination of helicopter and plane components. The transition between autogyro and enclosed motorcycle takes about 10 minutes but can be done by a single pilot without any additional tools. Whilst I can’t see much of a use for it now (the runway requirement kind of puts it out of reach for any domestic use) I really do think the design is quite cool.

Whilst both these craft are amazing in their own right they do highlight the issue with combining driving and flying: they really are 2 completely different methods of travel, and neither craft will let a regular person with a driver’s license pilot them. Indeed both will require a private pilot’s license if you want to fly them. Getting one isn’t exactly out of reach for everyone but it’s still quite a hurdle, requiring 50+ hours of flight time (10 solo), several written exams and a final test with an experienced pilot. The reasoning behind this is that flying isn’t as easy as driving, and that’s the main reason why you’ll probably never see skies full of flying cars, at least not for several decades.

Durango, Orbis and What’s Influencing the Next Generation of Consoles.

The current generation of consoles is the longest lived of any generation of the past 2 decades. There are many reasons for this but primarily it comes down to the fact that the consoles of this generation, bar the Nintendo Wii, were light years ahead of their time at release. In a theoretical sense both the Xbox360 and the PlayStation 3 had 10 times the computing power of their PC contemporaries at release, and PCs took several years to catch up. Of course now the amount of computing power available, especially in graphics cards, far surpasses what is available in console form and the gaming community is starting to look towards the next generation of consoles.

The last couple of weeks have seen quite a lot of rumour and speculation going around as to what the next generation of consoles might bring us. Just last week some very detailed specifications for the PlayStation 4, codenamed Orbis, were made public and the month before it was revealed that the new Xbox is codenamed Durango. As far as solid information goes however there’s been little to go by, and neither Sony nor Microsoft have been keen to comment on any of the speculation. Humour me then as I dive into some of the rumours and try to make sense of everything that’s flying around.

I’ll focus on Durango for the moment as I believe it will play a critical part in Microsoft’s current platform unification crusade. Long time readers will know how much I’ve harped on about Microsoft’s Three Screens idea in the past and how Windows 8 is poised to make that a reality. What I haven’t mentioned up until now is that Microsoft didn’t appear to have a solution for the TV screen, as the Xbox didn’t appear to be compatible with the WinRT framework that would underpin their platform unification. Rumours then began swirling that the next Xbox could be sporting an x86-compatible CPU, something which would make Metro apps possible. However SemiAccurate reports that it’s highly unlikely that the Durango CPU will be anything other than another PowerPC chip, effectively putting the kibosh on a Three Screens idea that involves the Xbox.

Now I don’t believe Microsoft is completely unaware of the foothold the Xbox gives them in the living room, so it follows that either Durango will have an x86/ARM architecture (the 2 currently confirmed WinRT-compatible architectures) or WinRT will in fact work on the new Xbox. The latter is the interesting point to consider and there’s definitely some meat in that idea. Recall that in the middle of last year there was strong evidence to suggest that Windows 8 would be able to play Xbox360 games, suggesting some level of interoperability between the two platforms (and by extension the Windows Phone 7 platform as well). Funnily enough, if this is the case then it’s possible that Metro apps could run on the Wii U, but I doubt we’ll ever see that happen.

Coincidentally Orbis, the PlayStation 3 successor, is said to be sporting an x64 CPU, in essence eliminating most of the differences between it and conventional PCs. Whilst the advantages of doing this are obvious (cross platform releases with only slight UI and controller modifications, for starters) the interesting point is that it almost guarantees there will be no backwards compatibility for PlayStation 3 games. Whilst the original PlayStation 3s contained an actual PS2 inside them, the vast majority of them simply emulated the PS2 in software, something they were quite capable of doing thanks to the immense power under the hood of the PlayStation 3. Using a more traditional x64 CPU makes this kind of software emulation nigh on impossible, so backwards compatibility could only be achieved with either high end components or an actual Cell processor. As Ars Technica points out it’s very likely that the next generation of consoles will be more in line with current hardware than the computational beasts their predecessors were, mostly because neither Microsoft nor Sony wants to sell consoles at a loss again.

The aversion to this way of doing business, something both Microsoft and Sony did for all their past console releases, is an interesting one. Undoubtedly they’ve seen the success of Nintendo and Apple, who never sell hardware at a loss, and wish to emulate that, but I think it has far more to do with the evolution of how a console gets used. Indeed on the Xbox360 more people use it for entertainment purposes than for gaming, and there are similar numbers for the PlayStation 3. Sony and Microsoft both recognise this and will want to capitalize on it with the next generation. It also means they can’t support their traditional business model of selling at a loss and making it up on the games, since a lot of consoles won’t see many games purchased for them. There are other ways to make up this revenue shortfall, but that doesn’t necessarily mean they can keep using the console as a loss leader for their other products.

All this speculation also makes the idea of the SteamBox that much more interesting, as it no longer seems like such an outlier when lumped in with the next generation of consoles. There’s also strong potential that, should a console have an x86/x64 architecture, the Steam catalogue could come to the platform. Indeed the ground work has already been done, with titles like Portal 2 offering a rudimentary level of Steam integration on the PlayStation 3, so it’s not much of a stretch to think it will make a reappearance on the next generation.

It will be interesting to see how these rumours develop over the next year or so as we get closer to the speculated announcements. Suffice to say that the next generation of consoles will be very different beasts to their predecessors, with a much heavier focus on traditional entertainment. Whether this is a positive thing for the gaming world at large remains to be seen, but there’s no mistaking that some radical change is on the horizon.