Monthly Archives: January 2011

Playstations, Piracy and Puerility.

Sony really has no tolerance when it comes to piracy on their systems. Whilst in the past they were mostly uninterested (since there was little they could do about it), their reaction to the current state of piracy on the Playstation 3 has been nothing short of a full-fledged war on those who'd seek to get something for nothing. Still, it seems their efforts might be misplaced: the damage has already been done, and any attempt to contain it merely serves as a Streisand Effect, further publicising the efforts of those they'd seek to silence. For all the hubbub that's going on I personally believe it's a storm in a teacup, with both sides making a bigger deal of this than it really is.

The roots of this entire debacle can be traced back to one curious hacker, Geohot. Just over a year ago he released details of a hack that gave him full control over the PS3 when it was in OtherOS mode, opening the door for much better homebrew applications that could take full advantage of the PS3's power. Sony, to their discredit, overreacted by removing OtherOS as a feature in the next update. In all honesty Geohot's initial hack was barely a threat to anyone as it required a very high level of knowledge and the guts to crack open your PS3 and solder switches across vital components. Removing the feature then spurred many other hacker groups to have a shot at breaking open the PS3, and eight months later we saw the rise of the PS3 jailbreaks.

Most recently, however, the whole scene went into overdrive after the hacker team fail0verflow released details on how to recover many of the private keys that Sony uses to verify game discs and other critical GameOS functions. It didn't take long after that for Geohot to release the root key which, in essence, cracked the entire system wide open. Whilst I've yet to dive into the nitty-gritty myself, it would seem this round of hacks requires no crazy dongles or anything above the skill level of the average Windows user. A quick look over some of my old hacking haunts shows there's quite a spread of tools available, even a nifty little program that can point your PS3 to a network share where you can store all your games. Neat. Sony has been quick to come down on these hacks and the hackers have been even quicker in response, showing that the arms race Sony is playing against the masses will never be won.

The thing is, though, that whilst this enables piracy on a console that has been immune to it for the majority of its life, it's far from being the catastrophe that Sony seems to think it will be. The PC and the Xbox have both suffered from rampant piracy since their earliest days and the industry continues to flourish in spite of it. The fact is that anyone who would only ever pirate games isn't a lost customer in the first place, and many of them would've steered clear of the PS3 for that very reason. Heck, even after I modded my Xbox so I could play some "backed up" games I ended up reverting it simply because I wanted to play online, and I didn't play any of those games for longer than an hour. The simple fact is that a game I'm not willing to part with the money for is a game I wouldn't play anyway, and I'm sure that's common across most console owners.

Piracy is often the excuse for all sorts of draconian measures that publishers use to try and protect their investments. Time and time again, however, it has been shown that users who can't pirate aren't instantly converted into paying customers; they simply do without and move on to another source of free entertainment. Piracy, on the surface at least, appears to be a much worse problem than it actually is, and whilst the PS3 may now be wide open for all those who want to exploit it I doubt we'll see publishers pulling releases for the platform any time soon. Personally I'd love to be able to rip my library of games to a hard drive so I could have them all on tap whenever I wanted them, but with Sony's rampant anti-piracy stance it looks like I'll have to forgo that dream until I no longer want to use my PS3 online.

And I don’t think that’s going to be any time soon, either.


Next Generation Portable (AKA: The PSP2), Sony’s Answer to the 3DS.

My history with Sony can really only be described as that of a fanboy. It all started well over a decade ago when I picked up my first Playstation, several years after it had been released. I loved that console dearly, and when the Playstation 2 was announced I threw myself into wild amounts of debt with my parents so I could pick one up on launch day. This extended to the time when they released their first portable gaming system, the Playstation Portable, as I convinced my then boss to let me take one home before the official release date. I've spent a good chunk of time with my PSP over the past few years and still use it today for the odd game of Lumines or Guilty Gear. Ever since some teaser images of its successor were released I've been eagerly awaiting its debut, and yesterday afternoon finally saw an official announcement from Sony.

That there is the next generation of Sony's portable gaming systems. On the surface it doesn't look to be much more than an overgrown PSP with an additional analog stick, but the real meat of this device is what's under the hood, as shown by its impressive specifications:

CPU: ARM® Cortex™-A9 core (4 core)
GPU: SGX543MP4+
External dimensions: Approx. 182.0 x 18.6 x 83.5mm (width x height x depth) (tentative, excludes largest projection)
Screen (touch screen): 5 inches (16:9), 960 x 544, approx. 16 million colors, OLED, multi-touch screen (capacitive type)
Rear touch pad: Multi-touch pad (capacitive type)
Cameras: Front camera, rear camera
Sound: Built-in stereo speakers, built-in microphone
Sensors: Six-axis motion sensing system (three-axis gyroscope, three-axis accelerometer), three-axis electronic compass
Location: Built-in GPS, Wi-Fi location service support
Keys / switches: PS button, power button, directional buttons (up/down/left/right), action buttons (Triangle, Circle, Cross, Square), shoulder buttons (right/left), right stick, left stick, START button, SELECT button, volume buttons (+/-)
Wireless communications: Mobile network connectivity (3G), IEEE 802.11b/g/n (n = 1×1) (Wi-Fi, infrastructure mode/ad-hoc mode), Bluetooth® 2.1+EDR (A2DP/AVRCP/HSP)

Such specifications are becoming something of a trademark for Sony, who opt for the most powerful system they can deliver on a chosen platform. It's been a double-edged sword for them: whilst they can always claim the specifications crown, their products are then hampered by their high cost, as illustrated by every console they've released. Still, this thing is mightily impressive, with connectivity rivalling that of today's smartphones and processing power that hasn't been seen before in a device of its size.

There are a few notable things to mention about Sony's next handheld and one of them is shown in the picture above. That's a capacitive touch panel on the rear that allows you to interact with the NGP, much like the touchscreen on any modern phone. Many companies have experimented with these in the past as a way to forgo a touchscreen, eliminating the need to touch the screen and leave fingerprints all over it. Interestingly enough, Sony decided to include a touchscreen on the front as well, meaning you can interact with it both ways. How this will be used remains to be seen, but its addition does make for some interesting possibilities.

Also notably absent is any form of media drive, à la the PSP's UMD. Whilst the format seemed like a good idea initially, it was plagued by problems, like cutting battery life in half and the lack of blank media that other formats enjoyed. The former was an unfortunate problem that could never be worked around, and the latter an attempt to stop piracy which failed miserably. Sony then attempted to revamp the PSP brand with the PSP Go, which did away with the UMD in favour of digital distribution. However the PSP Go had abysmal adoption rates, with many users outraged that their UMD collections were now completely useless. Still, the PSP Go has paved the way for the NGP much like Windows Vista did for Windows 7, and the lack of any kind of media drive on the NGP shows that Sony is committed to a fully digital distribution network going forward.

Sony's had a hard time in the portable gaming world, but the fact remains they're the only other company still trying to take on the king of the market, Nintendo. Whilst the 3DS does look good on the surface, its high price and the public's general lack of interest in 3D mean that Sony has a real chance to make a grab for the handheld crown with the NGP. However they have a real uphill battle ahead of them, especially when you consider that their new handheld will probably be more expensive than the 3DS. For a rabid Sony fan like myself it's a no-brainer: I'll definitely be grabbing one of these on launch day just because it looks like such a versatile piece of kit. We'll have to see if it's worth buying as a game console when the time comes, but it's shaping up to be an interesting year for the handheld space.

 

This Isn’t The Microsoft I Know…

You'd be forgiven for thinking that Microsoft was never a major player in the smartphone space. Most people had never really heard of or seen a smartphone until Apple released the iPhone, and the market really didn't heat up until a couple of years after that. However if you were to go all the way back to 2004 you'd find they were extremely well positioned, capturing 23% of the total market share, with many analysts saying they would be the leader in smartphone software by the end of the decade. Today, however, they're the next-to-last option for anyone looking for a smartphone, thanks wholly to their inertia in responding to the incoming threats from Apple and Google.

Microsoft wasn't oblivious to this fact, but their response took too long to come to market to save any of the market share they had previously gained. Their new product, Windows Phone 7, is quite good if you consider it on the same level as Android 1.0 and the first iPhone. Strangely enough it also suffers some of the problems that plagued the earlier revisions of its competitors' products (like the lack of copy and paste), but to Microsoft's credit their PR and response time on the issue is an order of magnitude better. They might have come too late into the game to make a significant grab with their first new offering, but as history has shown us Microsoft can build a successful business even if it takes them half a decade of losses to catch up to the competition (read: the Xbox).

More recently though I've noticed a shift in the way Microsoft is operating in the mobile space. Traditionally, whilst they've been keen to push adoption of their platform through almost any means necessary, they've been quick to stand against any unsanctioned uses of their products. You can see this mentality in action with their Xbox department, which has fervently fought any and all means of running homebrew applications on their consoles. Granted, the vast majority of users modding their consoles do so for piracy reasons, so their stance is understandable, but recent developments are starting to show that they might not be averse to users running homebrew applications on their devices.

ChevronWP7 was the first (and as far as I know, only) application to allow users to jailbreak their WP7 devices in order to load arbitrary applications onto them. Microsoft wasn't entirely happy with its release but didn't do anything drastic to stop its development. They did however announce that the next update to WP7 would see it disabled, much like Apple does with their iOS updates, but then they did something the others have never done: they met with the ChevronWP7 team:

After two full days of meetings with various members of the Windows Phone 7 team, we couldn’t wait to share with everyone some results from these discussions.

To address our goals of homebrew support on Windows Phone 7, we discussed why we think it’s important, the groups of people it affects, its direct and indirect benefits and how to manage any risks.

With that in mind, we will work with Microsoft towards long-term solutions that support mutual goals of broadening access to the platform while protecting intellectual property and ensuring platform security.

Wait, what? In days gone by it wouldn't have been out of place for Microsoft to send out a cease-and-desist letter before unleashing a horde of lawyers to destroy such a project in its infancy. Inviting the developers to your headquarters, showing them the roadmap for future technologies and then allying with them is downright shocking, but it shows how Microsoft has come to recognise the power of the communities that form around the platforms they develop. In all likelihood the users of ChevronWP7 make up a minority of WP7 owners, but they're definitely amongst the most vocal users and potentially future revenue generators should they end up distributing their homebrew into the real world. Heck, they're even reaching out to avid device hacker Geohot since he mentioned his interest in the WP7 platform, offering him a free phone to get him started.

The last few years haven’t been kind to Microsoft in the mobile space and it appears that they’re finally ready to take their medicine so that they might have a shot at recapturing some of their former glory. They’ve got an extremely long and hard fight ahead of them should they want to take back any significant market share from Apple or Google, but the last couple months have shown that they’re willing to work with their users and enthusiasts to deliver products that they and hopefully the world at large will want to have. My next phone is shaping up to be a WP7 device simply because the offering is just that good (and development will be 1000x easier) and should Microsoft continue their recent stint of good behaviour I can only see it getting better and better.


Minecraft: Addictive Simplicity.

Sandbox games and I have a sordid history. Whilst I often enjoy them it's not usually because of the engrossing story or intriguing game mechanics; it's more that after I've finished the mission at hand and saved my game, I promptly engage Jerk Mode and go on whatever kind of rampage the game allows. Long-time readers will remember this being the case in my Just Cause 2 review, where I grew tired of having to do everything within the rules of the game and modded my way to Jerk nirvana. Still, there have been some notable exceptions, like Red Dead Redemption, where the combination of certain elements came together in just the right way to get me completely drawn in and engrossed in the story.

Minecraft, whilst sharing the sandbox title, has almost no elements of a traditional game in this genre. Having more in common with game mods like Garry's Mod, Minecraft throws you into a world where the possibilities really are only limited by your imagination. Over the past few months I have watched the news around it go from a single story to a media storm, and I was always fascinated by the way it managed to draw people in. Up until a couple of weeks ago, however, I hadn't bothered to try it for myself, not even the free version. After watching a few videos of some of the more rudimentary aspects of the game I decided to give it a go, and shelled out the requisite $20 for the full (beta) version.

That’s a deep mine…

The premise of the game is extremely simple. You're thrust into a world where everything is made of blocks, and at night hordes of zombies and other nefarious creatures emerge from the wilderness, baying for your blood. The only tools you have at your disposal are your blocky hands, but the world of blocks around you can be used to your advantage. By cutting down trees you can gather wood, which can then be converted into a whole range of tools. The race is then on to create some kind of shelter before nightfall, so that you might have a place to hide when the horde arrives. As you progress, however, you'll begin to discover other rare and wonderful materials that make even better tools and weapons, leading you to delve even deeper underground in order to find those precious resources.

However, whilst the basic idea extends only to surviving through the night, there's the entire metagame of creating almost anything you can think of within the Minecraft world. The world's resources are pretty much at your disposal and their block-like nature means you can build almost anything out of them. This has led to many people building extremely ornate structures within Minecraft, ranging from simple things like houses right up to the Starship Enterprise. As with any sandbox game I took the opportunity for absurdity as far as I could imagine it at the time, building a one-block-wide spire high up into the clouds where I mounted my fortress of evil.

All that’s missing is an Eye of Sauron.

The basic game mechanic of Minecraft has a distinctly MMORPG feel to it. You start out by cutting down trees for wood so you can make a pickaxe to mine cobblestone. You then use the cobblestone to make better tools in order to mine iron. You then use the iron to mine other resources like gold, diamond and redstone. Much like the gear grind that all MMORPGs take you through before you're able to do the end-game content, Minecraft gets its hooks in early, with the first few resource tiers passing quickly. Afterwards it's a much longer slog to get the minerals you need to advance, usually requiring you to dig extremely deep to find them. Like any MMORPG though this mechanic is highly addictive, leading me to lose many hours searching for the next mineral vein so that I could craft that next item.

After the first week, however, I started to grow tired of the endless mining that didn't seem to be going anywhere. I had dug all the way down to bedrock and had found numerous rare resources but seemed to be lacking the one mineral I needed to harvest them: iron. Googling around for a while led me to figure out that I was digging far too deep to find much iron and that the best place to find resources was in randomly generated dungeons or caves, basically pre-hollowed-out sections of the map that were always teeming with resources (and zombies). After randomly digging for a while I started hearing the distinctive zombie groan, and I followed it to the ultimate prize.

Oh yeah, that’s the good stuff!

Exploring this find led me to a string of caves, all containing the resources I needed to progress further, and I was hooked again. Whilst the last few hours I've spent with Minecraft have focused more on extending my fortress of evil and the surrounding area, I still find myself taking frequent trips down into the mines in the hope of coming across another cave or mineral vein, as the excitement of finding one is on par with getting some epic loot in an MMORPG. I also started setting up a Minecraft server so that I could play along with some of my more dedicated Minecraft friends, although with a server fan dying I've had to put that on hold until I can ensure it won't overheat with more than one person playing on it.

Would I recommend this game? Most definitely, especially if you're the type that enjoys sandbox-style games that allow almost unlimited creativity. I was the kind of person who lost hours in Garry's Mod, making wacky contraptions and using them to unleash untold torment onto hordes of Half-Life's NPCs. The tables are very much turned in Minecraft's world, but it's just as enjoyable and I have no doubt that anyone can lose a few good hours in it just exploring the retro world that Minecraft generates for you. The game is still technically in beta, but it's well worth the price of admission.

Minecraft is available for PC and web browser right now for a free trial or AU$20. Game was played on a local single player instance for the majority of game time with an hour or so spent on a multiplayer server. No rating is being assigned to this game as it’s still in beta.

Hard to Argue With Numbers Like That (or the iPad Conundrum).

It's no secret that I'm amongst the iPad's harshest critics. My initial reaction was one of frustration and disappointment, with my following posts continuing the trend, launching volley after volley about how the iPad had failed to meet the goals that some of its largest supporters had laid out before it. After that I avoided commenting on it except for one point where I dispelled some of the rumours that the iPad was killing the netbook market, since there was more evidence that the netbook market was approaching saturation than that the iPad was stealing sales. Still, I hadn't heard any reports of the product failing miserably, so I had assumed it was going along well; I just didn't know how well.

To be honest I was intrigued to see how the iPad was doing almost a year later, as whilst the initial sales were pretty amazing I hadn't really heard anything since then. Usually when a company is doing well they like to trumpet that success openly (hello, Android), but Apple's silence felt like it said a lot about how the iPad was performing. As it turns out it was doing really well, so well in fact that even the wildest predictions of its success were way off:

Apple sold almost 15 million iPads last year.  It is outselling Macs in units, and closing in on revenues.  The 7.3 million iPads sold just in the December quarter represented a 75 percent increase from the September quarter, and the $4.6 billion in revenue represented a 65 percent sequential jump. (The iPad launched in April).  By any measure, this is an incredible ramp for an entirely new computing product.  It is so startling that nobody predicted it—not bullish Wall Street analysts, or even wild-eyed bloggers.

A post on Asymco tallies all the early predictions of iPad unit sales from both Wall Street analysts and tech bloggers. The iPad ended up selling 14.8 million units in 2010.  The highest Wall Street estimate from April was 7 million (Brian Marshall of Broadpoint AmTech).  David Bailey at Goldman Sachs predicted 6.2 million.  Even Apple table-pounder Gene Munster initially thought they would sell only 3.5 million iPads. The average prediction among the 14 analysts listed was 3.3 million.

Even I'd find it hard to keep a straight face and say that almost 15 million sold in under a year isn't a sign of success. Since Jobs' return to the Cupertino company they've made a name for themselves in bringing technology to the masses in a way that just seems to command people to buy it, and the iPad is just another example of how good they are at doing this. The iPad also fuelled demand for other Apple products, leading to Apple having its best financial quarter ever. Even the industry analysts had a hard time predicting that one. There's no denying, then, that the iPad is a force to be reckoned with. Whilst much of the groundwork was laid by the several generations of iPhones before it, the iPad is quite a viable platform for developers to work on and for companies to promote their brand with.

However I still can't help but feel that some of the hype surrounding it was a little too far-reaching. Initially many people saw something like the iPad as the death knell for traditional print media, killing off all those who dared defy the trend and refused to publish through the digital medium. In the beginning there were signs of a media revolution in the works, with many big media companies signing on to create iPad versions of their more traditional publications. The results were good too, with many of the digital magazines and newspapers selling hundreds of thousands of copies in their first runs. However the shine soon faded, with those publications failing to capture a new digital market and not even managing to cannibalise sales from their traditional outlets. The media revolution that so many expected the iPad to herald has unfortunately fallen by the wayside, and I take a rather sadistic pleasure in saying "I told you so".

By all other accounts, though, the iPad is a resounding success. Whilst I hate the fact that Apple managed to popularise the tablet format, I can't honestly say they haven't created a market that barely existed before their product arrived. As always the hype may have run away from them a little in terms of what people thought the device symbolised but, let's be honest here, that should be expected of any new device that Apple releases. I'm still waiting to see if any of the tablets will take my fancy enough to override the fiscal conservative in me, but it would seem that Apple has managed to do that for enough people to make the iPad the most successful tablet ever released, and that's something.

Becoming a Temporary Expert.

I don’t know if it’s an engineer thing or just me personally but I find I work best when I’m thrown into the deep end of something. I usually end up in this situation when someone asks me if something can be done and I can’t think of any reason why it can’t, and thus end up being the one developing the solution. More often than not this puts me far out of my current realm of knowledge so I end up doing extensive amounts of research in order to be able to achieve what I set about doing. I’ve come to call this process one of becoming a temporary expert as at either end of it I’m probably no more knowledgeable than anyone else on the subject, but for that brief period in the middle I’d definitely consider myself an expert.

Take for instance my on-again, off-again hobby of photography. About 4 years ago I was planning a trip to New Zealand with my then girlfriend (and, after the trip, fiancée) and I thought I should get myself a decent camera for the trip. Before this I hadn't really done any kind of real photography, but I knew the best kind of camera to get would be a digital SLR. The next month was filled with hundreds of online articles, reviews and guides from others who all had varying opinions on what I should be buying. In the end, after digesting the massive amount of information I had shoved into my brain, I chose a Canon 400D, and less than a week later I had it in my possession.

However as time went by I found myself no longer keeping up to speed with developments in the photography world. Sure, the odd article or blog would cross my path, but apart from buying another lens a couple of months down the line my knowledge in this area began to fade, as would a foreign language once learnt but seldom used. Most of the fundamentals stuck with me, but those extra bits of knowledge that made me feel like I knew the subject inside and out slipped away into the vast depths of my mind. It was probably for the best, since photography is a rather expensive hobby anyway.

Most often I find myself going through this temporary expert process in the course of my everyday work, usually when I'm working in smaller IT departments. You see, whilst bigger IT departments have the luxury of hiring many people with specific, specialist knowledge, smaller shops have to make do with generalists who know a bit about everything. The most effective generalists are also quite adept at this temporary expert process, able to dive head-first back into a technology they were once familiar with in order to become a specialist when required. This process isn't cheap, however, since the amount of time and effort needed to attain the required level of expertise can be quite large, especially if they've never had any experience with the technology before.

I remember doing this quite extensively back when I was working for the Australian Maritime Safety Authority. I had been hired on to revamp their virtualisation environment and a large part of that was working with their SAN. Whilst I had had experience in the past with their particular brand of SAN my last job hadn’t let me anywhere near any of their kit since they had a specialist who took care of it all. I spent the better part of a month diving through technical documents and resources to make sure what I was planning to do was feasible and would deliver as expected. The process worked and I was able to accomplish what they hired me for, unfortunately working myself out of a job. If you came to me today and asked me to accomplish the same thing I would probably have to do the same process all over again as my expertise in that field was only temporary.

I guess this rule applies to any skill you develop but fail to use over a long period of time. As someone who gets deeply interested in anything with even a slight technological bent I’ve found myself lost in countless topics over my lifetime, fully immersing myself in them for as long as my interest lasts. Then as my interest wanes so does my knowledge until the passion is rekindled again and the process starts anew.

Necessity is the Mother of Invention.

I've been developing computer programs on and off for a good 7 years and in that time I've come across my share of challenges. The last year or so has probably been the most challenging of my entire development career as I've struggled to come to grips with the Internet's way of doing things and how to enable disparate systems to talk to each other. Along the way I've often hit problems that on the surface appear next to impossible to solve, or I come to a point where a new requirement makes an old solution no longer viable. Time has shown, however, that whilst I might not be able to find an applicable solution through hours of intense Googling or RTFM, there are always clues that lead to an eventual solution. I've found, though, that such solutions have to be necessary parts of the larger whole, otherwise I'll simply ignore them.

Take for instance the past weekend's work on Lobaco. Things had been going well: the previous week's work had seen me enable user sign-ups in the iPhone application and build the beginnings of an enhanced post screen that allowed users to attach pictures to their posts. Initial testing of the features seemed to go well and I started testing the build on my iPhone. Quickly, however, I discovered that both of the new features struggled to upload images to my web server, crashing whenever a picture was over 800 by 600 in size. Since my web client seemed to be able to handle this without an issue I wondered what the problem could be, so I started digging deeper.

You see, way back when, I had resigned myself to doing everything in JavaScript Object Notation, or JSON for short. The reason behind this was that, thanks to it being an open standard, nearly every platform out there has a native or third-party library for serialising and deserialising objects, making my life a whole lot easier when it comes to cross-platform communication (i.e. my server talking to an iPhone). The trouble with this format is that whilst it's quite portable, everything done in it must be text. This causes a problem for large files like images, as they have to be turned into text before they can be sent over the Internet. The process I used for this is called Base64 and it has the unfortunate side effect of increasing the size of the file to be transferred by roughly 37%. It also generates an absolutely massive string that brings any debugger to its knees if you try to display it, making troubleshooting hard.
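
To put a rough number on that overhead, here's a quick Python sketch (my own illustration, not code from Lobaco): standard Base64 inflates a payload by about a third, and once you add MIME line breaks and the escaping needed to stuff the string into a JSON document it creeps towards the figure quoted above.

```python
import base64

raw = bytes(300 * 1024)            # stand-in for a ~300 KB image payload
plain = base64.b64encode(raw)      # standard Base64, no line breaks
mime = base64.encodebytes(raw)     # MIME-style Base64, a newline every 76 characters

print(f"raw:   {len(raw):>7} bytes")
print(f"plain: {len(plain):>7} bytes (+{len(plain) / len(raw) - 1:.0%})")  # roughly +33%
print(f"mime:  {len(mime):>7} bytes (+{len(mime) / len(raw) - 1:.0%})")    # roughly +35%
```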

The image uploading I had designed and successfully used up until this point was now untenable, as the iPhone client simply refused to play nice with ~300KB strings. I set about searching for a fix, hoping a quick one existed. Whilst I didn't find a good drag-and-drop solution, I did come across this post which detailed a way to program a Windows web service that could receive arbitrary data. Implementing their solution as detailed still didn't work as advertised, but after wrangling the code and overcoming the inbuilt message size limits in WCF I was able to upload images without having to muck around with enormous strings. This did of course mean changing a great deal of how the API and clients worked, but in the end it was worth it for something that solved so many problems.
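
For comparison, this is roughly the shape of the change on the client side, sketched here in Python with a hypothetical endpoint URL rather than the actual Objective-C Lobaco client: the raw bytes go straight into the request body instead of being wrapped as a Base64 string inside a JSON document.

```python
import requests  # third-party HTTP library, used purely for illustration

with open("photo.jpg", "rb") as f:  # an example local image file
    image_bytes = f.read()

# Send the raw bytes as the request body; any metadata that used to live in the
# JSON document can travel in headers or the query string instead.
response = requests.post(
    "https://api.example.com/posts/image",  # hypothetical upload endpoint
    data=image_bytes,
    headers={"Content-Type": "application/octet-stream"},
)
print(response.status_code)
```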

The thing is, before I went on this whole adventure, had you asked me if such a thing was possible I probably would've told you no, at least not within the WCF world. In fact much of my early research into this problem was centred around implementing a small PHP script to accomplish the same thing (as there are numerous examples of that already); however the lack of integration with my all-Microsoft solution meant I'd be left with a standalone piece of code that I wouldn't have much interest in improving or maintaining. The simple fact that I had to come up with a solution to this problem meant I tried my darnedest to find one, and lo, I ended up creating something I couldn't find anywhere else.

It's like that old saying that necessity is the mother of invention, and that's true both for this problem and for Lobaco as an idea in itself. Indeed many of the current Internet giants and start-ups were founded on the idea of solving a problem that the founders themselves were experiencing and felt could be done better. I guess I just find it fascinating how granular a saying like that can be, with necessity driving me to invent solutions at all levels. It goes to show that embarking into the unknown is a great learning experience and there's really no substitute for diving in head first and trying your hardest to solve an as-yet unsolved problem.

HTML5, Video and The War For Web Standards.

Whenever I find myself in the depths of coding for a mobile handset or browser, HTML5 always seems to be the panacea for all my problems. Its promises of cross-platform compatibility and the ability to leverage the vast amount of work already done with Javascript have seen me lose several weeks of productivity in the hope of forgoing much more work later on. It always ends the same way: I get some rudimentary functionality working, try it in a different browser, and watch all my work fall in a screaming heap, forcing me to do the work I had so deftly delayed. The source of this problem is that whilst HTML5 may one day be the norm for how web pages are done on the Internet, it is still a work in progress and the implementation of the standards varies from browser to browser.

This lack of consistent implementation across the browser market is just another form of format war. Whilst they might all appear to be collaborating on the future of the web, realistically they're all fighting for their version of the web to become the standard. No longer can a company release a product like IE6 that plays fast and loose with the standards in favour of delivering more functionality, as it will more than likely end up being ignored by the web development community. Now the war is mostly being waged through standards committees, but that doesn't mean the same old strong-arm tactics aren't being used.

Last week Google announced that it would no longer be supporting the H.264 codec for the HTML5 <video> tag. The post triggered widespread discussion about the future of the HTML5 standard and Google felt the need to clarify its position:

Why is Google supporting WebM for the HTML <video> tag?

This week's announcement was solely related to the HTML <video> tag, which is part of the emerging set of standards commonly referred to as "HTML5." We believe there is great promise in the <video> tag and want to see it succeed. As it stands, the organizations involved in defining the HTML video standard are at an impasse. There is no agreement on which video codec should be the baseline standard. Firefox and Opera support the open WebM and Ogg Theora codecs and will not support H.264 due to its licensing requirements; Safari and IE9 support H.264. With this status quo, all publishers and developers using the <video> tag will be forced to support multiple formats.

On the surface it would appear that Google is attempting to use its share of the browser market to put pressure on the HTML5 standards committee to make WebM or Theora the default codec for the <video> tag. For the most part that's true, and should they get their way Google will have control over yet another aspect of the web (in contrast to now, when they're just the dominant player thanks to YouTube). However whilst such a move might at first appear to only benefit Google, Mozilla and Opera, I believe that a push away from H.264 is beneficial for everyone on the web, except for Microsoft and Apple.

You see, whilst there's no official agreement on what the default codec for HTML5 should be, there are in fact two camps within the standards committee that each agree wholeheartedly on which one it ought to be. Google, Mozilla and Opera all believe that WebM or Ogg Theora (or both!) should be the default, whilst Apple and Microsoft both want H.264. The reason behind that is quite obvious when you look at the patent body responsible for licensing the H.264 technology, the MPEG-LA. Both Apple and Microsoft have patents in the MPEG-LA patent pool, meaning they have a vested interest in making it the default standard. This is the main reason why having H.264 as the default is bad for Internet users and web standards, as it would force anyone who develops HTML5 products using video to license the H.264 codec, something which could be quite devastating to early-stage start-ups. Additionally it encumbers what should be a completely open and free standard with licensing requirements, something that hasn't been present in any web standard to date.

Whilst the decision doesn't affect me directly, no matter which way it goes, I can't support something that has the potential to stifle innovation the way a licensing requirement does. Google throwing its weight behind its own and other open codecs has highlighted the issue succinctly, and hopefully this will lead to more productive discussion around which (if any) codec will become the standard for HTML5. We're still a long way from having a fully formalised version of HTML5 that anyone can implement, but it's good to see some movement on this front, even if it's just one web giant poking the trolls.

Why Mobile Gaming Isn’t Killing Other Platforms.

I remember my first mobile phone well: it was a Nokia 8210 that I got myself locked into a two-year contract for, mostly because I wanted to play Snake on it. After having the phone a month (and subsequently having it stolen) I grew tired of the game and resigned myself to just using it as it was intended, as a phone. This continued with all my following phones for the next few years as I favoured function and form over features, even forgoing the opportunity to play old classics like Doom on my Atom Exec. However after picking myself up an iPhone early last year I started looking into the world of mobile gaming, and I was surprised to see such a healthy games community, grabbing a few free titles for my shiny new gadget.

Primarily though I noticed that the vast majority of games available on the App Store were from small development houses, usually ones I'd never heard of before. Whilst there were a few familiar titles there (like Plants vs Zombies), for the most part any game that I got for my iPhone wasn't from any of the big publishers. Indeed the most popular game for the iPhone, Angry Birds, comes from a company that counts a mere 17 people as its employees, and I'm sure at least a few of them only came on board after their flagship title's release. Still, the power of the platform is indisputable, with over 50 million potential users and a barrier to entry of just one Apple computer and a $99 per year fee. It had me wondering though: with all this potential in the mobile platform (including Android, which has sold just as many handsets as Apple has), why aren't more of the big names targeting these platforms with more than token efforts?

The answer, as always, is in the money.

Whilst the potential revenue from 50 million people is something to make even the most hardened CEO weak at the knees, the fact remains that not all of them are gamers. Heck, just going by the most successful games on this platform the vast majority of Android and iPhone owners aren't gamers, with more than 80% of them not bothering to buy the best game available. Additionally, games released on the mobile platform are traditionally considered time-wasters, something you play when you don't have anything better to do. Rarely do you find a game with any real sense of depth to it, let alone one that strikes it big on the platform's application store. Couple that with the fact that no mobile game has gotten away with charging the same amount as its predecessors on other platforms and you can start to see why the big publishers don't spend too much time on mobile: it's just not fiscally viable.

For the small and independent developers, however, the mobile scene presents an opportunity unlike any they've seen before. Whilst there is much greater potential on other platforms (the Xbox 360 and Playstation 3 both have user numbers rivalling those of the iPhone and Android platforms), the barriers to entry for them are quite high in comparison. Microsoft, to their credit, has reduced the barrier to the same level as the iPhone ($99/year and you bring your own console) but thus far it has failed to attract as much attention as the mobile platform has. Other platforms, such as any Sony or Nintendo product, are plagued by high up-front development costs, requiring expensive development consoles and licenses to be purchased before any code can be written for them. Thus the mobile platform fits the smaller developers well, as it gives them the opportunity to release something, have it noticed and then use that as leverage into other, more profitable platforms.

I guess this post came about from the anger I feel when people start talking about the iPhone or Android becoming a dominant player in the games market. The fact is that whilst they're a boon for smaller developers, they have nothing on any of the other platforms. Sure, the revenue numbers from the App Store might be impressive, but when you compare the biggest numbers from there (Angry Birds, circa $10 million) to the biggest on one of the others (Call of Duty: Black Ops, $1 billion total) you can see why the big guys stick to the more traditional platforms. There's definitely something to the world of mobile gaming, but it will always be a footnote compared to its bigger brothers, even the somewhat beleaguered handheld, the PSP.

Worlds Not of Our Own: The Hunt For Exoplanets.

Humanity, for the longest time, has been aware of planets outside the one we reside on. Ask anyone today about the planets in our solar system and they're sure to be able to name at least one other planet, but ask them about any outside our solar system and you're sure to draw a blank look. That's not their fault, however, as the discovery of planets outside our solar system (which, by definition, makes them not planets but exoplanets) is only recent, dating back just over 20 years to when the first was discovered in 1988. Since then we've discovered well over 500 more planets outside our immediate vicinity, and whilst their discovery is great, none of them have yet been much like the one we currently call home.

In fact the vast majority of the exoplanets discovered so far have been massive gas giants orbiting their parent stars at around the distance Mercury orbits our sun. This threw scientists initially, as the theories of solar system formation at the time didn't support the notion of large planets forming that close to their parent star. However as time went on we found more and more examples of such planets: hot gas giants orbiting at velocities the likes of which we'd never seen before. The reason behind this is simple: the methods we use to find exoplanets are quite adept at finding these kinds of planets, and not so much the ones we'd consider potential homes.

The method by which the vast majority of exoplanets have been discovered is called the Radial Velocity method. As a planet orbits its parent star, the star also moves in tandem, tracing out an elliptical path pinned around the common centre of mass of the two heavenly bodies. As the star does this we can observe changes in its radial velocity, the speed at which the star is moving towards or away from us. Using this data we can then infer the minimum mass, orbital distance and speed of the body required to induce such changes in the star's radial velocity: the exoplanet itself. This method is prone to finding large planets orbiting close to their parent stars because they cause larger perturbations in the star's radial velocity, and more frequently, allowing us to detect them far more easily.
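
To give a rough sense of that bias, here's a small back-of-the-envelope sketch in Python (my own illustration, not from the original articles) using the standard radial-velocity semi-amplitude formula: a Jupiter-mass planet in a tight orbit swings its star around at well over a hundred metres per second, while an Earth analogue at 1 AU manages only about nine centimetres per second.

```python
import math

G = 6.674e-11        # gravitational constant (m^3 kg^-1 s^-2)
M_SUN = 1.989e30     # kg
M_JUP = 1.898e27     # kg
M_EARTH = 5.972e24   # kg
AU = 1.496e11        # m

def rv_semi_amplitude(m_planet, a, m_star=M_SUN, e=0.0, sin_i=1.0):
    """Velocity wobble K (m/s) that a planet of mass m_planet at semi-major axis a induces on its star."""
    period = 2 * math.pi * math.sqrt(a**3 / (G * (m_star + m_planet)))  # Kepler's third law
    return ((2 * math.pi * G / period) ** (1 / 3) * m_planet * sin_i
            / ((m_star + m_planet) ** (2 / 3) * math.sqrt(1 - e**2)))

print(f"Hot Jupiter (1 M_J at 0.05 AU): {rv_semi_amplitude(M_JUP, 0.05 * AU):.0f} m/s")   # ~130 m/s
print(f"Earth analogue (1 AU):          {rv_semi_amplitude(M_EARTH, 1.0 * AU):.2f} m/s")  # ~0.09 m/s
```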

More recently, one of the most productive methods of detecting an exoplanet has been the Transit method. This method works by continuously measuring a star's brightness over a long period of time. When an exoplanet crosses in front of its parent star relative to us, the star's apparent brightness drops for the time it is in transit. This of course means the method is limited to detecting planets whose orbits line up in such a way as to cause a transit like this. For Earth-like exoplanets there's only a 0.47% chance that such planets will line up just right for us to observe them, but thankfully this method can be applied to tens of thousands of stars at once, ensuring that we discover at least a few in our search. Exoplanets discovered this way usually require verification by another method before they're confirmed, since there are many things that can cause a dip in a star's apparent brightness.
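
The numbers behind that 0.47% figure are easy to check with another small sketch of my own: the geometric probability of a transit is roughly the star's radius divided by the orbital distance, and the depth of the dip is the ratio of the areas of the two discs.

```python
R_SUN = 6.957e8    # m
R_EARTH = 6.371e6  # m
AU = 1.496e11      # m

transit_probability = R_SUN / AU         # chance a randomly oriented Earth-Sun orbit lines up: ~0.47%
transit_depth = (R_EARTH / R_SUN) ** 2   # fractional dip in brightness during transit: ~0.008%

print(f"Probability of an Earth-Sun transit: {transit_probability:.2%}")
print(f"Depth of the dip during transit:     {transit_depth:.3%}")
```

A Jupiter-sized planet, by contrast, blocks about 1% of its star's light and, when it sits in a days-long orbit, transits far more often, which is why those worlds dominated the early catalogues.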

There are of course numerous other methods of discovering planets outside our solar system, but the vast majority of them have been found by one of the two methods mentioned above. Both are heavily skewed towards discovering big planets in short orbits, as these produce the most observable effects on their parent stars. Still, this does not preclude them from finding exoplanets like Earth, as shown by the recent discovery of Kepler-10b, a small rocky world in torturous conditions:

The planet, called Kepler-10b, is also the first rocky alien planet to be confirmed by NASA’s Kepler mission using data collected between May 2009 and early January 2010. But, while Kepler-10b is a rocky world, it is not located in the so-called habitable zone – a region in a planetary system where liquid water can potentially exist on the planet’s surface.

“Kepler-10b is the smallest exoplanet discovered to date, and the first unquestionably rocky planet orbiting a star outside our solar system,” said Natalie Batalha, Kepler’s deputy science team leader at NASA’s Ames Research Center in Moffett Field, Calif., at a press conference here at the 217th American Astronomy Society meeting.

Kepler-10b is the smallest transiting planet to be confirmed to date and shows that it's possible to discover worlds like our own using current technology. As time goes on and the amount of data increases I'm certain we'll eventually find more planets like these, hopefully a bit further out so they'll be in the habitable zone. The Kepler mission is just a few months shy of its two-year anniversary with at least another 1.5 years to go, and if all goes well it should be returning swathes of data for the entire time to come.

I'm always fascinated by the latest discoveries in space, even when they're something like a molten Mercury 564 light years away. Our technology is becoming more advanced with every passing day and I know that future missions will end up discovering millions of planets at a time, with thousands of potentially life-supporting worlds among them. It's amazing to think that just three decades ago we couldn't be sure that planets existed outside our solar system, and today we know for sure there are more than 500 of them out there.

Ain’t science grand?