Monthly Archives: June 2011

When Realism Has No Place In Games.

So I’ve decided to try my hand at being a game developer after spending way too many hours thinking about it and wanting to do something more exciting than developing yet another web application. This isn’t the first time I’ve tried my hand at developing games either; I did a semester-long course in games development back when I was in university. That course still rates as one of the most fun and interesting semesters I ever spent there, especially since our games were put up against the harshest critics I’ve ever met: the lecturer’s two kids. After finishing that course, however, I never really continued trying to make games until a mate of mine introduced me to Unity.

For a straight-up programmer like myself Unity is a bit of an odd beast. The Unity editor reminded me of my brief foray into 3D Studio Max back in college, as it sports many of the same features, like the split-screen viewport and the right-hand column listing the selected object’s properties. It’s very easy to navigate around and it didn’t take me long to whip up a simple little solar system simulator, albeit one that lacks any form of gameplay or semblance of realism. Still, being able to go from never having used the product before to making something that would’ve taken me weeks, all in the space of a single weekend, was pretty exciting, so I set about working on my game idea.

So of course the first game I want to make is based in space, and the demo I’ve linked to before was the first step towards realizing the idea. It was however very unrealistic, as the motion of the planet is governed by simply tracing out a circle, with no hint of gravity to influence things. Additionally the relative sizes and distances were completely ludicrous, so I first set about making things a little more realistic to satisfy the scientist in me. Doing some rough calculations and devising my own in-game scale (1 unit = 1,000 km), I made everything realistic and the result was pretty tragic.

The sun took up the vast majority of the screen until you zoomed out to crazy levels, at which point I couldn’t find where the hell my little planet had gotten off to. After panning around for a bit I spotted it hiding about 4 meters above the top of my monitor, indistinguishable from the grey background it sat on. Considering this game will hopefully be played on mobile phones and tablets, the thought of having to scroll like a madman constantly didn’t seem like a fantastic idea, so I resigned myself to ditching realism in favor of better gameplay. My artistic friend said we should go for something like “stylized physics”, which seems quite apt for the idea we’re going after.
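The scale problem is easy to see with a bit of arithmetic. Here’s a quick Python sketch of that 1 unit = 1,000 km scale using textbook astronomical figures (the numbers are standard values, not anything pulled from the original demo):

```python
# Sanity-checking the "1 unit = 1,000 km" scale with textbook values.
KM_PER_UNIT = 1_000

SUN_RADIUS_KM = 696_000        # solar radius
EARTH_RADIUS_KM = 6_371        # mean Earth radius
EARTH_ORBIT_KM = 149_600_000   # ~1 AU

sun_radius_units = SUN_RADIUS_KM / KM_PER_UNIT      # 696 units
earth_radius_units = EARTH_RADIUS_KM / KM_PER_UNIT  # ~6.4 units
earth_orbit_units = EARTH_ORBIT_KM / KM_PER_UNIT    # 149,600 units

# The planet sits ~215 sun-radii out and is ~109 times smaller than
# the sun, so any view that frames the sun leaves the planet both
# off-screen and sub-pixel sized.
print(earth_orbit_units / sun_radius_units)   # ~215
print(sun_radius_units / earth_radius_units)  # ~109
```

No amount of clever camera work saves a scene where the two interesting objects differ by two orders of magnitude in size and sit hundreds of screen-widths apart, which is exactly where “stylized physics” comes in.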

It might seem obvious, but the idea of suspending parts of reality for the sake of gameplay is what makes so many of the games we play today fun. The Call of Duty series would be nowhere near as fun if you got shot once in the arm and then proceeded to spend the next hour screaming for a medic, only to be barred from the mission for another couple of weeks while your avatar recuperated. The onus is on the developer, however, to find the right balance of realism and fantasy so that the unrealistic parts flow naturally into the realistic, creating a game experience that’s both fun and doesn’t leave the player thinking it’s an unfathomable mess.

I’m sure my walk down the game developer road will be littered with many obvious-yet-unrealized revelations like these; even my last two weeks with Unity have been a bit of an eye opener. Like with any of my endeavors I’ll be posting up our progress for everyone to have a fiddle with over in The Lab, and I’ll routinely be pestering everyone for feedback. Since I’m not going at this solo anymore, hopefully progress will be a little speedier than with my previous projects and I’ll spend a lot less time talking about it :)

 

LulzSec: Easy to Hate, Easy To Love.

It’s nigh on impossible to make a system completely secure from outside threats, especially if it’s going to be available to the general public. Still, there are certain measures you can take that will make it a lot harder for a would-be attacker to get at your users’ private data, which is usually enough for them to give up and move on to another, more vulnerable target. However, as my previous posts on security have shown, many companies (especially startups) eschew security in favor of working on new features or improving the user experience. This might help in the short term to get users in the door, but you run the very real risk of being compromised by a malicious attacker.

The attacker might not even be entirely malicious, as appears to be the case with one of the newest hacker groups, calling themselves LulzSec. There’s a lot of speculation as to who they actually are, but their Twitter alludes to them having originally been part of Anonymous before leaving because they disagreed with the targets being chosen and were more in it for the lulz than anything else. Their targets range drastically from banks to game companies and even the US Senate, with the causes changing just as wildly, from simple fun to retaliation for wrongdoings by corporations and politicians. It would be easy to brand them as anarchists just out to cause trouble for the reaction, but some of their handiwork has exposed serious vulnerabilities in what should have been very secure web services.

One of their recent attacks compromised more than 200,000 Citibank accounts via the online banking system. The attack was nothing sophisticated (although authorities seem to be spinning it as such), with the attackers gaining access by simply changing the identifying URL and then automating the process of downloading all the information they could. In essence Citibank’s system wasn’t verifying that the user accessing a particular URL was authorized to do so; it would be like logging onto Twitter, typing, say, Ashton Kutcher’s account name into the URL bar and then being able to send tweets on his behalf. It’s authorization at its most fundamental level, and LulzSec shouldn’t have been able to exploit such a rudimentary security hole.
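To illustrate the class of bug (what’s usually called an insecure direct object reference), here’s a minimal, entirely hypothetical Python sketch — this is not Citibank’s actual code — contrasting a handler that trusts the identifier in the URL with one that checks ownership:

```python
# Hypothetical account store keyed by the ID that appears in the URL.
ACCOUNTS = {
    101: {"owner": "alice", "balance": 5000},
    102: {"owner": "bob", "balance": 120},
}

def get_account_vulnerable(account_id, logged_in_user):
    # Trusts the URL outright: any logged-in user can walk through
    # account_id=101, 102, ... and download everyone's data.
    return ACCOUNTS.get(account_id)

def get_account_fixed(account_id, logged_in_user):
    # Verifies the requester actually owns the resource before
    # returning it; anything else gets nothing (HTTP 403/404 in a
    # real web app).
    account = ACCOUNTS.get(account_id)
    if account is None or account["owner"] != logged_in_user:
        return None
    return account
```

The fix is a single ownership check per request, which is why skipping it on a banking system is so hard to excuse.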

There are many other examples of LulzSec hacking various other organisations, with the latest round of them all being games development companies. This has drawn the ire of many gamers, which just spurred them on to attack even more gaming and related media outlets just so they could watch the reaction. Whilst it’s kind of hard to take the line of “if you ignore them they’ll go away” when they’re unleashing a DDoS or downloading your users’ data, the attention that’s been lavished on them by the press and butthurt gamers alike is exactly what they’re after, and yes, I do get the irony of mentioning that :P. Still, had they not been catapulted to Internet stardom so quickly I can’t imagine they would have continued being as brash as they are now, although it’s possible they might have started out doing even more malicious attacks in order to get attention.

Realistically though, the companies getting compromised by rudimentary URL manipulation and SQL injection attacks only have themselves to blame, since these are the most basic security issues with well known solutions and shouldn’t pose a risk to them. Nintendo showed that they could withstand an attack without any disruption or loss of sensitive data, and LulzSec was quick to post the security hole and then move on to more lulzy pastures. The DDoSing of others is a bit more troublesome to deal with; however, there are many services (some of them even free) designed to mitigate the impact of such an incident. So whilst LulzSec might be a right pain in the backside for many companies and consumers alike, their impact would be greatly softened by a strengthening of security at the most rudimentary level, and perhaps by giving them just a little less attention when they do manage to break through.

 

Dawn, Vesta and The Dwarf Planet Ceres.

I can remember being completely unaware of the asteroid belt between Mars and Jupiter for the longest time. After learning about it, however, I never paid much more thought to it, although I was curious about how there seemed to be a line separating the smaller, rocky planets from the large gas giants of our solar system (negating Pluto, of course). As my interest in space grew I began to wonder how the spacecraft that have ventured past Mars (there have been 9 of them) hadn’t managed to have a run-in with a stray asteroid. As it turns out there are a few reasons for that, and I find them quite fascinating.

The first is that the average density of the asteroid belt is extremely low, with the total mass contained within the entire system being less than 4% of that of our Moon. Our best calculations put the odds of a satellite coming into (unintended) contact with an asteroid in this region at about a billion to one, so unlikely that you wouldn’t even consider it a risk. The images of the asteroid belt that many are familiar with make it look far more densely packed than it really is, much like this series of pictures showing all the artificial satellites of Earth. That’s not to say the amount of junk we’ve sent up around ourselves isn’t an issue, but an accurate scale representation of each satellite wouldn’t look anywhere near as densely packed.

What fascinates me the most about the asteroid belt, however, is how the majority of its mass is concentrated in 4 objects, the two largest of these being 4 Vesta and an object big enough to be classed as a dwarf planet, Ceres. In astronomical terms they’re right in our backyard, but even with our most powerful space-based telescope we’re still only able to capture relatively blurry representations of them, shrouding these little heavenly bodies in mystery. Ceres especially so, with a series of images showing a massive bright spot moving across its surface whose nature is still unknown.

So fascinating are these objects that NASA launched a mission to both of them, named Dawn, back in 2007. This particular spacecraft is something of a novelty in and of itself, as it is the first purely exploratory mission to use only ion thrusters for propulsion. It needs these highly efficient engines as it will be the first spacecraft to launch to a target, orbit it for a set amount of time and then set off again to approach yet another target. To do this it is carrying over 400 kg of propellant, enough for it to change its velocity by over 10 km/s, a figure well above that of any spacecraft that has come before it. It may take its time in doing so, but it’s still an incredible achievement nonetheless.
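That delta-v figure falls out of the Tsiolkovsky rocket equation. The sketch below plugs in approximate public figures for Dawn (roughly 1,218 kg wet mass at launch, about 425 kg of xenon, and an effective exhaust velocity near 30 km/s at the ion engine’s peak specific impulse); these are back-of-the-envelope assumptions for illustration, not numbers from the mission itself:

```python
import math

# Tsiolkovsky rocket equation: delta_v = v_e * ln(m0 / mf)
v_e = 30.0          # km/s, effective exhaust velocity (Isp ~3,100 s)
m0 = 1218.0         # kg, approximate wet mass at launch
propellant = 425.0  # kg, approximate xenon load
mf = m0 - propellant

delta_v = v_e * math.log(m0 / mf)
print(f"{delta_v:.1f} km/s")  # ~12.9 km/s, comfortably over 10 km/s
```

The logarithm is what makes ion propulsion so attractive here: with an exhaust velocity roughly ten times that of a chemical engine, a modest propellant fraction buys an enormous velocity budget.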

Dawn is scheduled to arrive at its first target, 4 Vesta, in just over a month, and it has already begun sending back pictures and video of this strange mega-asteroid. They’re not much to look at right now, but once it’s closer the imagery will become much clearer, revealing the nature of all the blurry spots we’ve so far only been able to speculate about. Dawn will spend a year surveying 4 Vesta before it sets off on its long journey to Ceres, which it is not expected to reach until February 2015. It’s a long wait to get a better look at something that’s so small compared to nearly everything else in our solar system, but the prospect still excites me immensely.

Perhaps it’s the combination of their close proximity and the relative lack of information about these two little bodies that makes them so interesting; they’re just sitting there begging to be investigated. The next year will reveal all sorts of insights into the asteroid belt and its second largest contributor, which will in turn tell us something about Ceres itself. We’re still a long, long way from seeing Ceres in the flesh (or rock, as it were), but any information that Dawn sends back is valuable and I can’t wait to see what it brings us.

Adapt or Die: Why I’m Keen on the Cloud.

Anyone who works in IT or a closely related field will tell you that you’ve got to be constantly up to date with the latest technology lest you quickly find yourself obsolete. Depending on your technology platform of choice the time frame you have to work in can vary pretty wildly, but you’d be doing yourself (and your career) a favour by skilling up in a new or different technology every 2 years or so. Due to the nature of my contracts I’ve found myself learning completely new technologies at least every year, and it’s only in this past contract that I’ve come full circle back to the technology I initially made my career on, but that doesn’t mean the others I learnt in the interim haven’t helped immensely.

If I’m honest though, I can’t say that in the past I actively sought out new technologies to become familiar with. Usually I would start a new job based on the skills I had from a previous engagement, only to find that they really required something different. Being the adaptable sort I’d go ahead and skill myself up in that area, quickly becoming proficient enough to do the work they required. Since most of the places I worked in were smaller shops this worked quite well, since you’re always required to be a generalist in those situations. It’s only recently that I’ve turned my eyes towards the future to figure out where I should place my next career bet.

It came up in a conversation between me and a colleague of mine whilst I was on a business trip overseas. He asked me what IT trends I thought were going to take off in the coming years, and I told him that cloud based technologies were the way to go. At first he didn’t believe me, which was understandable since we work for a government agency and they don’t typically put any of their data in infrastructure they don’t own. I did manage to bring him around to the idea eventually though, thanks in part to my half decade of constant reskilling.

Way back when I was just starting out as a system administrator I was fortunate enough to work with VMware’s technology stack, albeit in the strange incarnation of running their workstation product on a server. At the time I didn’t think it was anything revolutionary, but as time went on I saw how much money was going to waste as many servers sat idle for the majority of their lives, burning power and providing little in return. Virtualization, then, was a fundamental change to the way that back end infrastructure would be designed, built and maintained, and I haven’t encountered any mid to large sized organisation that isn’t using it in some form.

Cloud technologies then represent the evolution of this idea. I say cloud technologies and not “the cloud” deliberately, as whilst the idea of relying on external providers to do all the heavy lifting for you is extremely attractive, it unfortunately doesn’t work for everyone, especially those who simply cannot outsource. Cloud technologies and principles, however, like the idea of having massive pools of compute and storage resources that can be carved up dynamically, have the potential to change the way back end services are designed and provisioned. Most importantly, they would decouple the solution design from the underlying infrastructure, meaning that neither would dictate the other. That in itself is enough to make most IT shops want to jump on the cloud bandwagon, and some are even doing so already.

It’s for that exact reason that I started developing on the Windows Azure platform and researching VMware’s vCloud solution. Whilst the consumer space is very much in love with the cloud and the benefits it provides, large scale IT is a much slower moving beast and it’s only just now coming around to the idea. With the next version of Windows shaping up to be far more cloud focused than any of its predecessors, it seems quite prudent for us IT administrators to start becoming familiar with the benefits cloud technology provides, lest we be left behind by the up and comers who are betting on this burgeoning platform.

Windows 8: The Death of the Silverlight Ecosystem?

It’s been just over a week since Microsoft demoed the latest iteration of the Windows platform, but in that short amount of time it’s already managed to stir up quite a bit of discussion from friends and foes alike. The foes were quick to call out the new OS’s tablet envy, conveniently forgetting Microsoft’s rhetoric that the next version of Windows after 7 was going to have a much more web centric focus, with the possibility of it being entirely cloud based. More interesting, however, is the discussion arising from long term developers on the Microsoft platform, and it’s not the kind of adulation and praise you’d normally expect.

During the D9 conference Microsoft said that the new tile mode in Windows 8 was based around HTML5 and JavaScript applications. Whilst they did mention that all current apps built on the .NET platform should run as intended in the familiar desktop mode, they made no mention of whether the .NET and Silverlight platforms could be used to create applications in the new style of interface. With Microsoft traditionally being quite favorable to developers, the notion of having to re-skill in HTML5 and JavaScript (not to mention reworking existing codebases) came as quite a shock, and the reaction was akin to an open revolt on the forums.

Rampant speculation soon followed, not helped by the fact that Microsoft has asked everyone to remain calm until their BUILD developer conference in September. It’s not the first time this sort of thing has happened either; a similar level of hubbub was roused when Microsoft was coy about Silverlight’s future whilst talking up Internet Explorer 9 and its dedication to web standards. They soon came out saying that they still saw a future in Silverlight, especially on the Windows Phone 7 platform, but many developers were left unconvinced. It’s quite likely then that this second round of doubt Microsoft has cast over their third party developers’ futures was the straw that broke the camel’s back, and all the blame is being leveled squarely at Microsoft.

For what it’s worth I feel their concerns are valid, even if the reaction to them is somewhat overblown. Microsoft has a long history of eating its own dog food, and many of their client facing applications are built upon the very technologies so many are worried will disappear in the near future. The best example of this is the Windows Azure management console, which is built entirely on Silverlight. Couple that with the fact that Microsoft has many partners with a very heavy investment in the platform and I find it hard to jump on the “Silverlight is dead” bandwagon, but that doesn’t necessarily mean Microsoft is committed to bringing Silverlight into the Windows 8 tablet world.

Sure, it would be great to be able to create Silverlight applications on the new Windows 8 tile system, and Microsoft would be leveraging a lot of preexisting talent to help drive adoption of the platform. However, it would also hinder Microsoft’s adoption of web standards, as many developers would favor proprietary Microsoft technologies over reskilling. Microsoft would then be the slave of two masters: on the one hand the Silverlight crowd demanding ever more features and tools constrained to that platform, and on the other the web standards crowd that has been Microsoft’s bugbear ever since alternative browsers started to gain real market traction. It’s not like Microsoft doesn’t have the resources to deal with this, but I can understand their motivations should they want to eschew Silverlight in favour of a more standard environment.

So is this the end of the line for the Silverlight ecosystem and the developers who built their skills around it? Hard to say; with Microsoft staying mum for the next few months we’ll just have to play it by ear until we get more information. In all honesty, even if they do end up dropping Silverlight for HTML5 and JavaScript, I’d expect the next release of Visual Studio to bring enough tools and resources to make the transition much easier than everyone is making it out to be. Hell, if Adobe can build a Flash to HTML5 converter then it’s quite possible for Microsoft to do the same for Silverlight, even if that’s just a band-aid solution to satisfy developers who refuse to reskill.

 


Nintendo’s Wii U: Coming Full Circle.

I’ve been a Nintendo fan for well over 2 decades now, my first experiences with them dating all the way back to the original Nintendo Entertainment System, which I believe is still in a functioning state in a closet out at my parents’ place. I have to admit though, they kind of lost me when they released the GameCube, as by then I was hooked on my shiny new PlayStation and there weren’t any GameCube games that appealed to me as a burgeoning hardcore gamer. That trend continued for a long time; my then housemate bought a Wii on release day, but even then I didn’t really play it much, instead favoring my PS3 and Xbox 360. Indeed the Wii I got using some credit card reward points has been mostly unused since we got it, even though I thought there were a couple of games on it I was “dying” to try.

For what it’s worth, it’s not really Nintendo’s fault that I haven’t been a big user of their last 2 generations of platforms; they made it clear they were hunting for a different market and I wasn’t in it. Sure, there were some nostalgia titles that tugged on my heart and wallet (Zelda and Mario, of course) but they weren’t enough for me to make the leap, and I’ve stuck to my other staples ever since. Nintendo had firmly cemented themselves as the game console for people who don’t identify as gamers, broadening their market to unprecedented levels but also alienating the crowd who grew up with them to become today’s grown-up gamers. At the time it was a trade-off Nintendo appeared happy to make, but recent announcements show they may be thinking otherwise.

Nintendo recently announced the successor to the Wii, which has been worked on under the title Project Cafe and will be officially known as the Wii U. The console itself looks very similar to its predecessor, sporting the same overall layout whilst being a little bigger and preferring a rounder shape to the Wii’s highly angular design. Nintendo is also pairing the new console with another new accessory: a controller with an embedded touch screen. At first it looks completely ludicrous, especially if you take into consideration that the Wii’s trademark was motion controlled games. After reading a bit more about it, however, it appears this tablet-esque controller will function more as an augmentation to games than as the primary method of control, with the Wii Remote and Nunchuk still being used for games that rely on motion control.

The console itself is shaping up to be no slouch either, eschewing Nintendo’s trend of making underpowered consoles in favor of one capable of producing full 1080p HD content. Whilst the official specifications for the Wii U haven’t been released yet, the demonstrations of the console’s launch titles don’t suffer from the low polygon counts of previous Wii titles, with the demos looking quite stunning. With enough grunt under the hood Nintendo could also be making a play for the media extender market, something Microsoft and Sony have covered well in the past. Couple that with a controller that would make one nice HTPC remote and I’m almost sold on the idea, but that’s not the reason I’m tentatively excited about what the Wii U signals for Nintendo.

Nintendo said during the E3 conference that they believe their new console will target a much broader audience than the Xbox or PlayStation, which taken at face value doesn’t mean a whole lot. The Wii sales numbers speak for themselves, as gamers and non-gamers alike bought the Wii and it outsold its competitors by a large margin, so if Nintendo can continue the trend with the Wii U it’s obvious they’ll hit a broader market. However, the announcement of the Wii U also came with a video showing launch titles, many of which would never previously have made it to a Nintendo console. It looks like Nintendo is trying to lure back the hardcore gaming crowd it shunned when it re-imagined itself, and that makes a long time fan like myself very happy indeed.

Of course the proof will be in the pudding for the Nintendo Wii U, and with the console not scheduled for release until sometime in 2012 we’ll be waiting a while before we can judge their attempt to claw back the niche that has slipped away from them. Whilst my Wii may sit next to my TV feeling woefully underused, I get the feeling its successor might not suffer the same fate, and I’m excited at the possibility of Nintendo coming full circle and embracing those gamers who grew up with them. The possibility of it being a little media powerhouse is just the icing on the cake, even if I might only end up using the controller over Bluetooth on my media PC.

The Good, The Bad and The Ugly of Apple’s WWDC.

Every year around this time the world seems to collectively wet its pants over the announcements Apple makes at its World Wide Developers Conference, usually because Apple announces its new iPhone model. This time around there was no new iPhone to speak of, but there was still a whole bunch of news that’s sure to delight Apple fans and haters alike. As always I was impressed by some of the innovations and then thoroughly annoyed by the fans’ reactions, especially those who extrapolated wildly based on ideas and technology that aren’t even out in the wild yet. I really should have expected as much, but the optimist in me doesn’t seem to want to keel over just yet.

Arguably the biggest announcement of the conference was iCloud, Apple’s new cloud service. With this service 9 of the built-in applications will become cloud enabled, storing all their data in the cloud so that it’s accessible from almost anywhere. The majority of them are rudimentary cloud implementations (contacts, pictures, files, etc.), but the most notable of the newly cloud enabled services is iTunes. Apart from doing the normal cloud thing of backing up your music and letting you play it anywhere, a la Google and Amazon, Apple has decided to go for a completely different angle, and it’s quite an intriguing one.

iTunes will not only allow you to download your purchases an unlimited number of times (finally!) but, for the low low price of $24.99/year, it will also scan your current music folder and give you access to the same tracks in 256Kbps AAC directly from iTunes. Keen readers will recognize this feature as coming from Lala, a company that Apple acquired and seemingly shut down just over a year ago. It would appear that the technology behind Lala is what powers the new iCloud enabled iTunes, and that the licensing deals the company had struck with the music labels before its acquisition have been transferred to Apple. I really like the idea behind this, and I’m sure it won’t take long for someone to come up with an entire back catalog of what’s available through iTunes, letting everyone on the service get whatever music they want for the nominal yearly fee. It’s probably a lot better than the alternative for the music companies, who up until now were getting $0 from those with, how do you say, questionably acquired music libraries.

Apple also announced the next version of their mobile operating system, iOS 5. There are numerous improvements to the platform, but a few features stand out. The first is iMessage, Apple’s replacement for SMS. The interface is identical to the current SMS application on the iPhone; however, if both parties are on iOS devices it will send the message over the Internet rather than via SMS. Many are quick to call this the death of SMS and predict mobile phone companies teetering on bankruptcy due to the loss of revenue, but realistically it’s just another messaging app, and many carriers have been providing unlimited SMS plans for months now, so I doubt it will be anywhere near as revolutionary as people are making it out to be.

The next biggest feature is arguably the deep integration Twitter is getting in iOS. Many of the built-in apps now have Twitter in their option menus, allowing you to more easily tweet things like your location or pictures from your photo library. It’s one of the better improvements Apple has made to iOS in this revision, as it was always something I felt was lacking, especially compared to how long Android has had such features. I’m interested to see if this increases adoption rates for Twitter at all, because I find it hard to imagine that everyone who has an iPhone is using Twitter already (anecdotally about 50% of the people I know do; the others couldn’t care less).

There’s also the release of OS X Lion, which honestly is barely worth mentioning. The list of “features” in the new operating system is a mix of improvements to things currently available in Snow Leopard, a couple of app reworks and maybe a few genuinely new additions. I can see why Apple will only be charging $29.99 for it, since there’s really not much to it, and as a current owner of Snow Leopard I can’t see any reason to upgrade unless I’m absolutely forced to. The only reason I would (and it would be a rather dickish move by Apple to require this) would be to download incremental updates to programs like Xcode, which they’ve finally figured out how to do deltas on so I don’t have to get the whole bloody IDE every time they make a minor change.

Overall this WWDC was your typical Apple affair: nothing revolutionary, but refined technology products for the masses. iCloud is definitely the stand out announcement of the conference and will be a great hook to get people onto the Apple platform for a long time to come. Whilst there might be some disappointment around the lack of a new iPhone this time around, it seems to have been more than made up for by the wide swath of changes iOS 5 will be bringing to the table. With all this under consideration it’s becoming obvious that Apple is shifting away from the traditional PC platform, with Lion getting far less attention than any of Apple’s other products. Whether this is because they want to stay true to their “post-PC era” vision or simply because they believe the cash is elsewhere is left as an exercise to the reader, but it’s clear that Apple views the traditional desktop as an antiquated technology.

The Bullshit Behind “If It Failed, You Did It Wrong”.

I often find myself deconstructing stories and ideas to find out what the key factors were in their success or failure. It’s the engineer training in me trying to find the key elements that swing something one way or the other, hoping to apply (or remove) those traits from my own endeavors and emulate the success stories. It follows then that I spend a fair amount of my time looking introspectively, analyzing my own ideas and experiences to see how future plans line up against my criteria for possible success. One of the patterns I’ve noticed from all this analysis is the prevalence of the idea that should you fail at something, you’re automatically the one who did something wrong and it wasn’t the idea that was at fault.

Take, for instance, Tim Ferriss, author of two self-help books, The 4 Hour Work Week and The 4 Hour Body, who has undoubtedly helped thousands of people achieve goals they had never dreamed of attempting in the past. I’ve read both his books and whilst I believe there’s a lot of good stuff in there it’s also 50% horse shit, but that rule applies to any motivator or self-help proprietor. One of the underpinnings of his latest book is the slow carb diet, aimed at shedding layers of fat and oodles of weight in extremely short periods of time. I haven’t tried it since it doesn’t line up with my current goals (i.e. gaining weight), but those who have and didn’t experience the results were hit with this reply from the man himself:

The following will address 99%+ of confusion:

- If you have to ask, don’t eat it.
- If you haven’t had blood tests done, I don’t want to hear that the diet doesn’t work.
- If you aren’t measuring inches or haven’t measured bodyfat % with an accurate tool (BodPod, etc. and NOT bodyfat scales), I don’t want to hear that the diet doesn’t work.
- If you’re a woman and taking measurements within 10 days prior to menstruation (which I advise against in the book), I don’t want to hear about the lack of progress.

Whilst being a classic example of Wally Blocking¹, this also places all the blame for failure on the end user, negating any possibility that the diet doesn’t work for everyone (and it really can’t, but that’s another story). However, admitting that the diet isn’t for everyone would undermine its credibility, and those who experienced failure would, sometimes rightly, put the failure on the process rather than themselves.

Motivators aren’t the only ones who outright deny that there’s any failure in their process; the attitude is also rife among proponents of Agile development techniques. Whilst I might be coming around to some of its ideas, since I found I was already using them, it’s not uncommon to hear about those who’ve experimented with Agile and haven’t had a great deal of success. The response from Agile experts is usually that you’re doing it wrong and that your inability to adhere strictly to the Agile process is what led to your failure, not that Agile might not be appropriate for your particular product or team. Of course this is a logical fallacy, akin to the No True Scotsman argument, and doing the research would show you that Agile isn’t appropriate everywhere, with other methods producing great results.

In the end it all boils down to the fact that no process is perfect, nor can any process be appropriate for every situation. Blaming the end user may maintain the illusion that your process is beyond reproach, but realistically you will eventually have to face the hard evidence that you can’t design a one-size-fits-all solution, especially for anything that will be used by a large number of people. For those of you who have tried a “guaranteed to succeed” process like those I’ve described above and failed, it would be worth your effort to see whether the fault truly lies within you or whether the process simply wasn’t appropriate for what you were using it for, even if it was marketed to you as such.

¹I tried to find an online reference to this saying but can’t seem to find it anywhere. In essence, Wally Blocking someone stems from the Wally character in Dilbert, who actively avoids doing any work possible. One of his tactics, when asked to do some piece of work, is to place an unnecessarily large prerequisite on getting it done, usually on the person requesting it. This will usually result in either the person doing the work themselves or getting someone else to do it; thus Wally has blocked any potential work from coming his way.

Copenhagen Suborbitals: Changing The Space Access Game.

Getting off the rock we’re gravitationally bound to is an expensive endeavor, so much so that doing it has been well out of the reach of anyone but the super-governments of the world for almost half a century. We’re in the middle of a space revolution, with private companies popping up everywhere promising to reduce the cost of access to space and many of them delivering on their promises. Still, even with so much happening in the private space industry, the cost remains well out of the reach of the vast majority of people in the world, even though it has come down by an order of magnitude in the past decade.

Still, there are people working on extremely novel solutions to this problem and they’re starting to show some very promising results. Late last year I wrote about Copenhagen Suborbitals, a volunteer team that is working on a single person rocket using only donated funds. Back then they were gearing up to launch their first test rocket, called HEAT, from a sea launch platform propelled by a submarine that one of its creators built. Unfortunately they did not manage to launch as the cryogenic valve for the liquid oxygen had frozen shut (thanks to the hair dryer they used as a heater draining the batteries on the sub), preventing the rocket from igniting. They were determined to launch it, however, and just recently they gave it another attempt.

The upgraded rocket, dubbed HEAT-1X, has a few improvements over its previous incarnation. The sea launch platform is now a fully enclosed unit, no longer requiring external propulsion from a submarine to get it into position. HEAT-1X also uses a polyurethane rubber mix in place of the previously used paraffin wax, which was found not to vaporize completely, reducing the resulting amount of thrust. With these improvements in place they attempted the launch again on the 3rd of June, and the results speak for themselves:

(Video: the HEAT-1X launch.)

The launch, whilst undoubtedly a success for all involved, wasn’t without its share of problems. HEAT-1X did manage to achieve supersonic speed, however it deviated considerably from its intended vertical flight path. Even though they were out in the ocean, mission control decided to shut down the engine after 21 seconds of flight. The craft still managed to reach a height of approximately 2.8KM in that time and covered over 8KM in ground distance. The booster and craft stages separated successfully, however the parachute on the booster was torn free due to the high drag it experienced. The spacecraft’s parachutes didn’t unfurl properly either, causing it to receive significant damage upon landing. Unfortunately the booster was lost to the Baltic Sea but the capsule was recovered successfully.
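Those figures hang together on a quick back-of-envelope check. This is just my own rough sketch, not official telemetry analysis: it assumes a straight-line path (the real trajectory was curved, so this is a lower bound on distance flown) and pairs the 2.8KM altitude and 8KM downrange figures over the 21 second powered phase.

```python
import math

# Reported HEAT-1X figures (assumed to cover the same 21 s powered phase)
altitude_km = 2.8   # peak height reached
ground_km = 8.0     # downrange distance covered
burn_s = 21.0       # engine run time before shutdown

# Straight-line distance flown (lower bound on the real curved path)
path_km = math.hypot(altitude_km, ground_km)

# Average speed over the powered phase, in metres per second
avg_speed_ms = path_km * 1000 / burn_s

# Angle of the flight path measured away from vertical
deviation_deg = math.degrees(math.atan2(ground_km, altitude_km))

print(f"path ~{path_km:.1f} km, avg speed ~{avg_speed_ms:.0f} m/s, "
      f"~{deviation_deg:.0f} degrees off vertical")
```

The result, an average of roughly 400 m/s, comfortably exceeds the ~340 m/s speed of sound at sea level, consistent with the supersonic claim, while a flight path around 70 degrees off vertical shows just how far the rocket strayed from a straight-up ascent.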

Despite those problems the HEAT-1X flight represents a tremendous step forward for the Copenhagen Suborbitals team and shows that they are quite capable of building a craft that can deliver people into suborbital space. They’re still a long way from putting a person in one of their craft (3~5 years is their estimate) but this launch validates much of the work they have done to this point. I really can’t wait to see them achieve their vision of getting someone into space on a shoestring budget, and should they succeed they will make Denmark the fourth nation ever to launch a person into space (Russia, the USA and China got there ahead of them, if you were wondering). Considering that it will all be done with volunteer time and donations makes the achievement even more incredible, and I’m sure they’re inspiring many young Danes to pursue a life in science and engineering.

Windows 8: First Step to the Realization of Three Screens.

The last two years have seen a major shake up in the personal computing industry. Whilst I’m loath to admit it, Apple was the one leading the charge here, redefining the smart phone space and changing the way many people do the majority of their computing by creating the wildly successful niche of curated computing (read: tablets). It is then inevitable that many subsequent innovations from rival companies are seen as reactions to Apple’s advances, even if the steps that company is taking are towards a much larger and broader goal than competing in the same market.

I am, of course, referring to Microsoft’s Windows 8 which was just demoed recently.

There’s been quite a bit of news about the upcoming release of Windows 8, with many leaked screenshots and even leaked builds that gave us a lot of insight into what we can expect of the next version of Windows. For the most part the updates didn’t seem like anything revolutionary, although things like portable desktops and a more integrated web experience were looking pretty slick. Still, Windows 7 was far from revolutionary either, but the evolution from Vista was more than enough to convince people that Microsoft was back on the right track, and the adoption rates reflect that.

However the biggest shift that is coming with Windows 8 was known long before it was demoed: Windows 8 will run on ARM and other System on a Chip (SOC) devices. It’s a massive deviation from Microsoft’s current platform which is wholly x86/x86-64 based and this confirms Microsoft’s intentions to bring their full Windows experience to tablet and other low power/portable devices. The recent demo of the new operating system confirmed this with Windows 8 having both a traditional desktop interface that we’re all familiar with and also a more finger friendly version that takes all of its design cues from the Metro interface seen on all Windows Phone 7 devices.

The differences between these two interfaces don’t just stop at the input device they were optimized for, either. Whilst all Windows 8 devices will be capable of running the huge back catalog of software developed for Windows over the past few decades in the traditional desktop interface mode, the new tablet optimized interface relies on applications built using HTML5 and JavaScript. This is arguably done so that they are much more platform independent than their traditional Windows application cousins which, whilst most likely able to run since .NET will be ported to the ARM and SOC architectures, won’t have been designed for the tablet environment. They’ll still be usable in a pinch of course, but you’d still want to rewrite them if a large number of your users were moving to the tablet/smartphone platform.

Looking at all these changes you can’t help but think that they were all done in reaction to Apple’s dominance of the tablet space with the iPad. It’s true that a lot of the innovations in Windows 8 mirror what Apple has achieved in the past year or so; however, since Windows 8 has been in development for much longer than that, not all of them can be credited to Microsoft playing the me-too game. Realistically it’s far more likely that many of these innovations are Microsoft’s first serious attempts at realizing their three screens vision, and many of the changes in Windows 8 support this idea.

A lot of critics think the idea of bringing a desktop OS to a tablet form factor is doomed to failure. The evidence to support that view is strong too, since Windows 7 (and any other desktop OS for that matter) tablets haven’t enjoyed even a fraction of the success that the dedicated tablet OSes have. However I don’t believe that Microsoft is simply making a play for the tablet market with Windows 8; what they’re really doing is providing a framework for building user experiences that remain consistent across platforms. The idea of being able to complete any task whether you’re on your phone, TV or dedicated computing device (which can be a tablet) is what is driving Microsoft to develop Windows 8 the way they are. Windows Phone 7 was their first step into this arena, its UI has been widely praised for its usability and design, and Microsoft’s commitment to using it on Windows 8 shows that they are trying to blur the lines that currently exist between the three screens. The potential for .NET applications to run on x86, ARM and other SOC platforms seals the deal; there is little doubt that Microsoft is working towards a ubiquitous computing platform.

Microsoft’s execution of this plan is going to be vital to their continued success. Whilst they still dominate the desktop market, that dominance is being ever so slowly eroded by the bevy of curated computing platforms that do everything users need them to do and nothing more. We’re still a long time away from everyone outright replacing their PCs with tablets and smart phones, but the writing is on the wall for a sea change in the way we all do our computing. Windows 8 is shaping up to be Microsoft’s way of re-establishing themselves as the tech giant to beat, and I’m sure the next year is going to be extremely interesting for fans and foes alike.