Monthly Archives: October 2011

[Image: Hard Reset screenshot wallpaper]

Hard Reset: A Futuristic Neon Hell.

These days you’d be hard pressed to find a first person shooter that doesn’t resort to the current norm of cover based combat and infinitely regenerating health. It seems the days of searching out med kits and carrying ridiculous numbers of weapons are a thing of the past, a part of the first person shooter heritage that will be left behind in favour of current trends. Still there are some who dare to flirt with the old ways, and the developers behind Hard Reset, namely Flying Wild Hog (consisting of many people who made Painkiller), are just those people. Whilst Hard Reset isn’t strictly an old fashioned shooter there are some throwbacks to the old ways with some of the new mixed in for good measure.

Hard Reset throws you into a post-apocalyptic world where humanity has been driven to the brink of extinction, pushed back into a single city called The Sanctuary by an enemy of its own creation: the machines. Inside the Sanctuary is a repository of billions of human identities, ostensibly those who were killed in the war that left humanity in its current state. The machines want to assimilate those memories into their core matrix and as such have been assaulting the Sanctuary relentlessly. You play as Fletcher, a member of a team called CLN whose job it is to protect humanity from the machines. Things start to get hairy when the machines break through the barrier and begin assaulting the Sanctuary directly.

The setting in Hard Reset is most aptly described as a cyberpunk’s wet dream: post-apocalyptic drab combined with dazzling neon colours and Japanese characters littering the landscape. It’s definitely not the prettiest of games, especially when compared to other recent releases like Rage and Battlefield 3, but it’s far from being visually boring like many other generic shooters tend to be. Trouble is that many of the enemies are visually similar to the world that surrounds them, which makes picking them out of the background somewhat frustrating at times.

Combat in Hard Reset is a mixed affair, swinging between the dizzying highs of laying waste to hordes of enemies and the frustrating lows of replaying a section over and over again because of some surprise tactic that will one shot you. You’re given two weapons to start off with: the CLG, a typical machine gun weapon, and the NRG, a futuristic energy weapon that streams out balls of plasma. Both of these weapons can be upgraded to become another type of weapon (which you can change on demand), with the CLG being projectile based (shotgun, rocket launcher, mines, etc) and the NRG being energy based (shock field, railgun and a “smart weapon” which I’ll touch on shortly). You can also upgrade your combat armour, giving you other abilities like a radar or additional damage resistance.

Now here’s where I’ll admit to finding Hard Reset an absolute chore to play until I got the smart gun upgrade. You see the initial incarnations of your weapons are ridiculously weak, with even the weakest of enemies needing a thorough thrashing before they’ll keel over. The NRG upgrade that creates an electric shock field mitigated this somewhat, but it was still extremely tedious to set up the field, wait for it to kill off all the enemies inside it and then wait for the next wave to arrive. The smart weapon, an upgrade that shoots projectiles that home in on your enemies and can shoot through walls, took much of this tedium away as I could simply scan around for incoming hostiles and launch volleys at them before they could get to me. This became very helpful later in the game during boss fights (like the one pictured above), when it would lock onto the places I needed to shoot at. Granted they were pulsating orange so I wouldn’t have had trouble finding them otherwise, but the knowledge that I was guaranteed to hit the right spot made those somewhat tiresome boss fights a lot easier.

The story itself is rather thin on the ground, with the majority of it being told in slides between levels while the game is loading. There’s a little interaction between your character and some others in the game, but it’s all through poorly animated avatars in the corner of your HUD. As a medium to carry the game along it does the job adequately, but it’s rather loosely strung together and the game cuts off abruptly with the trademark “oh there could be a sequel!” cliffhanger ending that I always groan about. Then again if you’re expecting Mass Effect levels of interaction and immersion from a $30 shooter then I’d be questioning your sanity.


Hard Reset is a bit of an oddity, showing many signs of the polish I’ve come to expect from much bigger budget games but also dragging with it some of the troubles of being an independently developed game. At just on 5 hours of straight up game play (with no multi-player) it was a somewhat enjoyable diversion whilst I was waiting for the Christmas glut of AAA titles to start dribbling in. If you’re into the cyberpunk genre and love your action over the top then Hard Reset will be right up your alley.

Rating: 7.0/10

Hard Reset is available right now on PC for $29.99 on Steam. Game was played on Hard with a grand total of around 5 hours play time.

[Image: BitCoin price chart, 28-10-2010 to 2011]

With the Bubble Burst Has BitCoin Become the Currency it Strived to be?

At the time that I wrote that the BitCoin bubble was bursting I wasn’t really sure just how far the digital currency’s value would decline. Well here we are 3 months on and the value of a BitCoin has slumped to approximately US$3, an order of magnitude less than the dizzying highs it reached all those months ago. I made the prediction back then that once everyone stopped treating BitCoins as an investment vehicle the nascent currency could actually become what it strived to be rather than a speculator’s wet dream. So since one half of my prediction came true (the arguably easy to predict part) one has to wonder: how is BitCoin doing as a currency now?

Image used under a Creative Commons license from BitCoinCharts

The chart above details the dramatic rise and fall of the BitCoin price over the past year. As you can see, whilst the value (the line graph) of a BitCoin may have tanked significantly it is still well above what it was a year ago, by a large factor. What’s interesting to note though is the trade volume (the bar graph), which in the months preceding the speculative bubble was quite low, almost non-existent for some months. The trading volume after the peak however has been far more active than it was previously, and from that we can draw some conclusions about the BitCoin market.

Now the first conclusion I drew from this graph was that the market is becoming far more liquid, with more buyers and sellers entering it. Of course this high level of activity could also be people attempting to sell down their BitCoin holdings, but that just favours the buyer side of the equation, which is what is driving the price down. The volatility in the price is still very much at odds with its aspirations to become a real currency however, so until the price hits a floor and stays there for a couple months BitCoin will struggle to be more widely adopted as a transaction medium.

The biggest impact of the drop in price though is the loss of the free infrastructure BitCoin was getting from people mining for coins. Whilst GPU mining was very profitable in the $15+ range, down around these price levels it’s really not economically viable to mine coins. Thus the only people who will still do it are the ones who believe in the idea and want to help out or those who are running BitCoin services like Mt.Gox. Whilst that’s far from the BitCoin infrastructure just up and disappearing it does mean that many people who flocked to the BitCoin idea because of the financial feasibility of it will drop it in favour of greener pastures, whatever they might be.
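To put some rough numbers on that, here’s a back-of-the-envelope sketch of where the break-even point sits. Every figure in it (the coins mined per day, the card’s power draw and the electricity price) is a hypothetical assumption for a 2011-era single GPU rig, not a measured one:

```python
# Back-of-the-envelope GPU mining economics. All numbers are hypothetical
# assumptions for a single 2011-era GPU, not measured figures.

def daily_profit(coins_per_day, coin_price_usd, power_watts, usd_per_kwh):
    """Mining revenue minus electricity cost for one day."""
    revenue = coins_per_day * coin_price_usd
    power_cost = (power_watts / 1000.0) * 24 * usd_per_kwh
    return revenue - power_cost

# Hypothetical rig: 0.3 BTC/day at current difficulty, 250 W draw, $0.15/kWh.
for price in (15.0, 3.0):
    print(f"BTC at ${price:>4.1f}: {daily_profit(0.3, price, 250, 0.15):+.2f} USD/day")
# At $15 the rig clears a few dollars a day; at $3 it barely covers the power
# bill, before you even account for hardware wear and tear.
```

The exact numbers will vary wildly with difficulty and hardware, but the shape of the problem is the same: once the coin price drops below your power bill there’s no financial reason to keep the GPUs spinning.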

Thus the burst BitCoin bubble is something of a mixed bag. Whilst the increased liquidity and speculator-free market are definitely a great help to BitCoin becoming a serious currency, the continued price instability and loss of supporters negate those benefits completely. The price crash also hasn’t addressed the early adopter problem, leaving swathes of easily acquired BitCoins in the hands of a small collective of users.

Summing this all up it seems that, as a currency at least, BitCoin is still just another alternative currency that’s struggling to achieve the goals it set out to accomplish. Technically it’s a masterful system that’s remained resistant to nearly all attempts to break it, with all the problems coming from external parties and not the BitCoin system itself. However the economics of BitCoin are the real issue here and those can’t be overcome with technical genius alone. BitCoin still has a long, long way to go before anyone can seriously consider it as a currency and there’s no telling if it’ll last long enough for its teething problems to be overcome.

Google+ and The Future of Google Services.

In the mere months since it was released Google+ has managed to accumulate quite the following, grabbing 40 million users. It’s still quite small compared to the current incumbent Facebook (whose users outnumber Google+’s 20 to 1) but that’s an incredible amount of growth, more than any other social network has been able to achieve before. Google has finally got it right with this attempt to break into the social networking world and it’s paying off for them in spades. What’s got everyone talking now is where Google is heading, not just with Google+ but also with the rest of their vast service catalogue.

Over the past 6 months or so, ever since co-founder Larry Page took over as CEO of Google, a rather interesting (if worrying) trend has been developing at Google. For as long as I can remember Google has had a habit of experimenting openly with their users, cheerfully opening up access to beta products in order to get the wider public interested in them. Most recently however they’ve begun to shutter these types of projects, the first signal being the closing down of Google Labs. In the months that followed many of Google’s other ancillary services, like Google Health and Google PowerMeter, have been shut down with many more facing the chopping block.

For anyone following Google the writing had been on the wall ever since Page announced back in July that they were going to be focusing more closely on their core services. What’s really interesting however is that the direction Google’s now heading in is not Page’s thinking alone, but one that was heavily influenced by the late, great Steve Jobs. Just before Page took the top job at Google he met up with Jobs to get some advice on what he should be doing, and it’s easy to see where Page’s motivation for cutting the fat from Google came from:

Jobs didn’t mince words when Page arrived at Jobs’ Palo Alto home. He told Page to build a good team of lieutenants. In his first week as Google’s CEO, Page reshuffled his management team to eliminate bureaucracy. Jobs also warned Page not to let Google get lazy or flabby.

“The main thing I stressed was to focus,” Jobs told Isaacson about his conversation with Page. “Figure out what Google wants to be when it grows up. It’s now all over the map. What are the five products you want to focus on? Get rid of the rest because they’re dragging you down. They’re turning you into Microsoft. They’re causing you to turn out products that are adequate but not great.”

Just over a week ago Google announced that another 5 services (Buzz, Code Search, University Research, iGoogle Social and Jaiku) would be shut down, with their features being taken over by Google+. Indeed any Google service that has some sort of social bent is getting integrated under the Google+ umbrella, with many of the sharing features in things like Google Reader being moved out to Google+. For Google this is done both to encourage people to use their still nascent social network and to reduce their large application portfolio. Integrating everything they can into Google+ may seem like a desperate move to try and grab market share away from Facebook but Google is betting a lot on the Google+ platform, and I believe it will pay off for them.

The momentum that Google+ has gained over the past few months has shown that Google can do social and do it well. After nailing that down it makes a lot of sense to combine services, especially those considered core to a social network, under the Google+ umbrella as that builds out the product and makes it far more enticing to end users. It’s sad to see some other services get completely shut down but that does open up the market to start-ups who can take up the slack that Google leaves behind as they increase their focus on their core products.

A Tale of Woe and Eco-Friendly Hard Drives.

Up until recently most of my data at home hadn’t been living in the safest environment. You see like many people I kept all my data on single hard drives, their only real protection being that most of them spent their lives unplugged, sitting next to my hard drive docking bay. Of course tragedy struck one day when my playful feline companion decided that the power cord for one of the portable hard drives looked like something to play with and promptly pulled it onto the floor. Luckily nothing of real importance was on there (apart from my music collection that had some of the oldest files I had ever managed to keep) but it did get me thinking about making my data a little more secure.

The easiest way to provide some level of protection was to get my data onto a RAID set so that a single disk failure wouldn’t take out my data again. I figured that if I put one large RAID in my media box and a second in my main PC (which I was planning to do anyway) then I could keep copies of the data on each of them, as RAID on its own is not a backup solution. A couple thousand dollars and a weekend later I was in possession of a new main PC and all the fixings of a new RAID set on my media PC ready to hold my data. Everything was looking pretty rosy for a while, but then the problems started.

Now the media PC that I had built was something of a beast, sporting enough RAM and a good enough graphics card to be able to play most recent games at high settings. Soon after I had completed building it I was going to a LAN with a bunch of mates, one of whom was travelling from Melbourne and wasn’t able to bring his PC with him. Too easy I thought, he can just use this new awesome beast of a box to play games with us and everything shall be good. In all honesty it was, until I saw him reboot it once and the RAID controller flashed up a warning about the RAID being critical, which sent chills down my spine.

Looking at the RAID UI in Windows I found that, yes, one of the disks had indeed dropped out of the RAID set, but there didn’t seem to be anything wrong with it. Confused, I started the rebuild on the RAID set and it managed to complete successfully after a few hours, leaving me to think that I might have bumped a cable or something to trigger the “failure”. When I got it home however the problem kept recurring; it was random and never seemed to follow a distinct pattern, except that it was the same disk every time. Eventually it stabilized and so I figured that it was just a transient problem and left it at that.

Unfortunately for me it happened again last night, but it wasn’t the same disk this time. Figuring it was a bung RAID controller I was preparing to siphon my data off it in order to rebuild it as a software RAID when my wife asked me if I had actually tried Googling around to see if others had had the same issue. I had done so in the past but I hadn’t been very thorough about it, so I decided it was probably worth the effort, especially if it could save me another 4 hours of babying the copy process. What I found has left me deeply frustrated, not just with certain companies but also with myself for not researching this properly.

The drives I bought all those months ago were Seagate ST2000DL003 2TB Green drives, which are cheap, low power drives that seemed perfect for a large amount of RAID storage. However there’s a slight problem with these kinds of drives when they’re put into a RAID set. You see the hard drives have error correction built into them, but thanks to their “green” rating this process can be quite slow, on the order of 10 seconds to minutes if the drive is under heavy load. RAID controllers are programmed to mark disks as failed if they don’t respond within a certain period of time, usually a couple seconds or so. That means should a drive start correcting itself and not respond quickly enough, the RAID controller will mark the disk as failed and remove it, putting the array into a critical state.

Seeing the potential for this to cause issues hard drive manufacturers have developed a feature called Time-Limited Error Recovery (Error Recovery Control in Seagate’s terminology). TLER limits the amount of time the hard drive will spend attempting to recover from an error; if the error can’t be dealt with within that time frame the drive hands it back to the RAID controller, which keeps the disk in the RAID and allows the array to recover. For the drives I had bought this setting is off by default and a quick Google has shown that any attempts to change it are futile. Most other brands allow you to change this value but these particular Seagate drives are unfortunately locked in this state.
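For drives that do honour it, the error recovery timeout can be queried (and, on cooperative drives, set) through smartmontools. A minimal sketch, assuming smartctl is installed and run with sufficient privileges; the /dev/sda path is just a placeholder and, as noted above, these particular Seagate drives reportedly ignore the setting anyway:

```python
# Minimal sketch: read/set a drive's SCT Error Recovery Control timeouts via
# smartctl. Only works on drives that actually support SCT ERC; the device
# path below is a placeholder.
import subprocess

def get_erc(device: str) -> str:
    """Report the current SCT ERC read/write timeouts (in deciseconds)."""
    result = subprocess.run(["smartctl", "-l", "scterc", device],
                            capture_output=True, text=True, check=True)
    return result.stdout

def set_erc(device: str, read_ds: int = 70, write_ds: int = 70) -> None:
    """Ask the drive to give up on internal error recovery after 7 seconds
    (70 deciseconds), comfortably inside a typical RAID controller's
    drop-out window."""
    subprocess.run(["smartctl", "-l", f"scterc,{read_ds},{write_ds}", device],
                   check=True)

if __name__ == "__main__":
    print(get_erc("/dev/sda"))  # placeholder device path
```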

So where does this leave me? Well apart from hoping that Seagate releases a firmware update that allows me to change that particular value I’m up the proverbial creek without a paddle. Replacing these drives with similar drives from another manufacturer will set me back another $400 and a weekend’s worth of work, so it’s not something I’m going to do immediately. I’m going to pester Seagate and hope that they’ll release a fix for this because, other than that one issue, they’ve been fantastic drives and I’d hate to have to get rid of them because of it. Hopefully they’re responsive about it, though judging by what people are saying on the Seagate forums I shouldn’t hold my breath; still, it’s all I’ve got right now.

DARPA’s Phoenix: Making The Most of Space Junk.

Debris in orbit is becoming one of the greatest challenges we face as we become a space-faring species. You see the simple fact that something is in orbit means it has an incredible amount of energy, zipping around the earth at Mach 25 ready to wreck anything that might cross its path. Thankfully there’s quite a lot of empty space up there and we’re really good at tracking the larger bits so it’s usually not much of an issue. However as time goes by and more things are launched into orbit this problem isn’t going to get any better, so we need to start thinking of a solution.

Problem is that recovery of space junk is an inherently costly exercise with little to no benefit to be had. A mission to recover a non-responsive satellite or other spacecraft is almost as complex as the mission that launched said object in the first place, even more so if you include humans in the equation. Additionally you can’t send up a single mission to recover multiple satellites as they’re typically on very different orbits, done so that they won’t collide with each other (although that has happened before). Changing a craft’s orbital plane, known as a plane change, is extremely expensive energy-wise and as such most craft aren’t capable of changing more than a couple degrees before their entire fuel supply is exhausted. The simple solution is to deorbit any spacecraft after its useful life but unfortunately that’s not the current norm and there are no laws governing that practice yet.
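To give a feel for just how expensive, the delta-v for a pure inclination change on a circular orbit is 2·v·sin(Δi/2), so the cost scales with the orbital velocity itself. A quick sketch using a rough low Earth orbit velocity shows why even a few degrees is ruinous (the velocity figure is a ballpark assumption):

```python
import math

def plane_change_dv(orbital_velocity_ms: float, delta_inclination_deg: float) -> float:
    """Delta-v for a pure inclination change on a circular orbit: 2 * v * sin(di / 2)."""
    return 2 * orbital_velocity_ms * math.sin(math.radians(delta_inclination_deg) / 2)

v_leo = 7700.0  # m/s, a rough low Earth orbit velocity
for di in (2, 10, 30):
    print(f"{di:>2} degree plane change: ~{plane_change_dv(v_leo, di):,.0f} m/s")
# Roughly 269, 1,342 and 3,986 m/s respectively -- which is why a couple of
# degrees is about all most craft can afford.
```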

It’s even worse for geostationary satellites as in that particular orbit things don’t tend to naturally deorbit over time. Instead anything in a geostationary orbit is pretty much going to be there forever unless some outside force acts on it. Geostationary orbits are also particularly valuable due to their advantageous properties for things like communication and positioning, so the problem of space debris up there is of much greater concern. Thankfully most geostationary satellites have the decency to move themselves into a graveyard orbit (one a few hundred kilometres above geostationary, safely out of the way of working satellites) but this practice isn’t guaranteed. Mass that’s already in orbit is incredibly valuable however, and DARPA has been working on a potential solution to debris in geostationary orbit.

[Embedded video: DARPA Phoenix program overview]

The DARPA Phoenix program is an interesting idea, in essence an in-orbit salvager that cannibalizes other satellites’ parts in order to create new “satlets”. These new satlets won’t be anywhere near as capable as their now defunct donors were but they do have the potential to breathe a whole lot of life back into hardware that’s otherwise just sitting there idle. Compared to a regular geosynchronous mission something like Phoenix would be quite cheap since a good chunk of the mass is already up in orbit. Such a mission can really only be done in geostationary orbit since all the satellites are in the same plane and the energy required to move between them is minimal. That is our most valuable orbit however, so such a mission could prove to be quite fruitful.

Dealing with the ever growing amount of space debris that we have orbiting us is a challenge that we’ve still yet to answer. Programs like DARPA’s Phoenix though are the kinds of projects we’ll need both to reduce the amount of orbital junk we have and to make the most out of the stuff we’ve already put up there. I’m really keen to see how the Phoenix project goes as it would be quite a step forward for on-orbit maintenance and construction, as well as being just plain awesome.

[Image: Rage screenshot wallpaper, vehicle combat]

Rage: A Beautiful Tech Demo.

It’s hard to believe that it’s been a good 7 years since we saw a release from the famous id Software developers. For a company that had regular releases every 2 to 3 years for almost 2 decades prior the silence from them was rather unusual, sparking rumours that they were in a Duke Nukem Forever situation. Still their tech demos of the new id Tech 5 engine showed that they were making progress, and that has culminated in their latest release: Rage. After a good 12 hours or so with it over the past couple weeks I’m in two minds about id’s latest game, or more aptly their latest engine.

Rage puts you about 130 odd years into the future in a post-apocalyptic world that’s been ravaged by the asteroid 99942 Apophis. Now the space nerds amongst us will recognise that that is a very real asteroid and whilst we’ve since eliminated the prospect of it hitting earth in 2029 as the game predicts, it was nonetheless a gripping hook to get me into the story. You play as Nicholas Raine, one of the chosen few to be buried in a capsule with other survivors in cryogenic suspension, only to wake a century after the impact has occurred in order to rebuild humanity. When you wake however you find that the pod malfunctioned and you’re the only survivor out of your particular ark, and the world that you’ve emerged into is a desolate wasteland.

Now Rage has copped a lot of flak for the absurdly broken release it had on PC, and when I first played it I was no exception in dishing it out. There were massive amounts of tearing, models glitching in and out of sight and textures not rendering properly or at the incorrect level of detail. The first patch plus a new round of ATI drivers fixed most of those problems, making the game playable, but it wasn’t until a friend of mine linked me to this post on the Steam forums that Rage actually began to shine. After applying the new config the game was absolutely beautiful, both visually and performance wise; with my computer running everything at absolute maximum settings I never had any performance problems. Rage still didn’t like being alt-tabbed however, as that would bring back the tearing with a vengeance. Such problems did not plague the console release, so their launch day experience was probably much better.

Rage is very much like Borderlands in that it fuses RPG elements with FPS game play. The main story line is driven via quests given to you by various NPCs and there’s a multitude of side quests that won’t further the plot but will get you things to help you along your journey. There are no skill trees or levels per se but you will acquire various upgrades that will help to make the game easier. Most of the weapons have some form of upgrade but they’re usually not that useful, especially once you pick up certain weapons like the sniper rifle or the Authority Machine Gun. There’s also a crafting system that allows you to concoct all sorts of interesting things and, thankfully, there’s no limit on the amount of stuff you can carry so you can always have what you need when you need it.

The game play in Rage is divided into 2 distinct categories: the vehicle sections and your typical FPS run and gun. The vehicle sections, as pictured above, serve as a break between quests where you’ll be accosted by bandits in the wasteland. There’s also a series of jump challenges scattered around the place for you to attempt, but since they give no reward apart from possibly an achievement there’s no real incentive to go for them. Your vehicle can also be upgraded with “Race Certificates” won from races or given as quest rewards. Some of these races are fun (like the rocket races, where you get to blow your opponents up) whereas others just feel like a chore. I only spent the bare minimum amount of time on the races however, as once you’ve got the few key upgrades there’s no incentive to keep doing them.

The FPS component of Rage is a pretty typical affair, being a somewhat cover based shooter with the added advantage of being able to heal and even revive yourself should you end up being overwhelmed. For the most part it’s quite serviceable as you can choose to either strut out into the open and keep yourself healed with bandages (of which you can make an almost unlimited amount) or pick people off from behind cover. The additional secondary weapons like the wingsticks (basically a bladed boomerang) and sentry bots help to keep the combat interesting and can be the difference between making it through alive or reloading your save for the nth time.

There are however a couple glitches in the combat system that need mentioning. If you’re, say, unloading shell after shell into an enemy whilst they’re doing a particular animation there’s no indication as to whether you’ve killed them or not. This becomes rather irritating when the death animations for some NPCs closely resemble them stumbling after taking a big hit, leading you to waste countless rounds just to make sure that they’re down. There’s also the fact that headshots, even with the sniper rifle, don’t usually one shot enemies like they do in most other shooters. This isn’t a glitch per se, more of an annoyance, as that one carefully lined up shot has to be two carefully lined up shots, which you don’t usually have the luxury of taking.

The story that had such a gripping hook at the start is unfortunately quite thin on the ground, with your character’s motivations for doing what he’s doing coming from other people constantly telling him what to do. Although the world is meant to feel open ended the story, and all of the missions, are completely linear with no real options for going at something another way. Rage’s storyline also suffers from major pacing problems, especially towards the end when you’re suddenly plonked onto the final mission with little more notice than the title of the mission indicating that it might be a one way trip. The end boss fight, if you could call it that, also pales in comparison to some of the other boss fights in the game, leaving you feeling like you’ve missed something along the way. Ultimately the initial hook that got me in was the pinnacle of the storytelling in Rage and that’s very disappointing.

Rage has its moments as a game but ultimately it feels more like a 12 hour tech demo than a fully fledged game that took 7 years to build. I would usually let id off the hook on this one, since they’d be licensing their engine (which is a technical marvel) and thus the game wasn’t their main focus, but outside of Zenimax (id’s parent company) and its studios the id Tech 5 engine won’t be available for licensing. Thus for the foreseeable future the only 2 games that will use this engine will be Rage and Doom 4, which is a shame because once it’s set up right it’s quite spectacular. Rage then is a FPS/RPG hybrid that manages to deliver sometimes but suffers from multiple problems that detract from the technical beauty it contains.

Rating: 7.0/10

Rage is available on PC, Xbox 360 and PlayStation 3 right now for $108, $108 and $88 respectively. Game was played entirely on the PC with around 12 hours of total play time and 46% of the achievements unlocked.

[Image: Lytro camera]

Lytro: Light Field Technology Becomes a Reality.

One of my not-so-secret passions is photography. I got into it about 5 years ago when I was heading over to New Zealand with my then girlfriend (now wife) as I wanted a proper camera, something that could capture some decent shots. Of course I got caught up in the technology of it all and for the next year or so I spent many waking hours putting together my dream kit of camera bodies, lenses and various accessories that I wanted to buy. My fiscal prudence stopped me short of splurging much, I only lashed out once for a new lens, but the passion has remained even if it’s taken a back seat to my other ambitions.

For photographers one of the greatest challenges is getting the focus just right so that your subject is clear and the other details fade into the background, preferably with a nice bokeh. I struggled with this very problem recently when we threw a surprise birthday party for my wife and one of her dearest friends. Try as I might to get the depth of field right on some of the preparations we were doing (like the Super Mario styled cupcakes) I just couldn’t get it 100% right, at least not without the help of some post production. You can imagine then how excited I was when I heard about light field technology and what it could mean for photography.

In essence a light field camera gives you the ability to change the focus, almost arbitrarily, after the picture has been taken. It can do this because it doesn’t capture light in the same way that most cameras do. Instead of taking one picture through one lens, light field cameras capture thousands of individual rays of light and the direction from which they were coming. Afterwards you can use this data to focus the picture wherever you want and even produce 3D images. Even though auto-focus has done a pretty good job of eliminating the need to hand focus shots, the ability to refocus after the fact is a far more powerful advancement, one that could revolutionize the photography industry.
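For the curious, the refocusing trick is conceptually quite simple: shift each sub-aperture view in proportion to its position on the lens aperture, then average the whole stack, which brings one depth plane into sharp focus and blurs everything else. Below is a minimal sketch of that shift-and-add idea; the array layout and shift scale are illustrative assumptions on my part, not Lytro’s actual pipeline:

```python
# Shift-and-add refocusing of a 4D light field. Array shapes and the shift
# scale are illustrative assumptions, not any vendor's real processing chain.
import numpy as np

def refocus(light_field: np.ndarray, shift_per_view: float) -> np.ndarray:
    """light_field has shape (U, V, H, W): a U x V grid of sub-aperture images.
    shift_per_view picks which depth plane ends up in focus."""
    U, V, H, W = light_field.shape
    cu, cv = (U - 1) / 2, (V - 1) / 2
    out = np.zeros((H, W), dtype=np.float64)
    for u in range(U):
        for v in range(V):
            dy = int(round((u - cu) * shift_per_view))
            dx = int(round((v - cv) * shift_per_view))
            out += np.roll(light_field[u, v], shift=(dy, dx), axis=(0, 1))
    return out / (U * V)

# Usage: lf = np.random.rand(9, 9, 256, 256); refocused = refocus(lf, 1.5)
```

Sweep shift_per_view across a range of values and you get the “focus anywhere after the fact” effect that makes these cameras so interesting.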

I first heard about it when Lytro, a light field based startup, mentioned that they were developing the technology back in June. At the time I was thinking that they’d end up being a manufacturer or licensor of their technology, selling their sensors to the likes of Canon and Nikon. However they’d stated that they were going to make a camera first before pursuing that route and I figured that meant we wouldn’t see anything from them for at least another year or two. I was quite surprised to learn that they have their cameras up for pre-order and delivery is expected early next year.

As a camera it defies current norms almost completely. It’s a long, square-sided tube with an LCD screen on the back, and the capture button is a capacitive notch on the top. From that design I’d assume you take pictures with it by holding it up like ye olde telescope, which will be rather comical to watch. There are 2 models available, an 8GB and a 16GB one, that can hold 350 and 750 pictures respectively. The effective resolution that you get out of the Lytro camera seems to be about 1MP but the images are roughly 20MB each. The models come in at $399 and $499 respectively which, on the surface, seems a bit rich for something that does nothing but take really small photos.

However I think Lytro is going the right way with this technology, much like Tesla did when they first released the Roadster. In essence the Lytro camera is a market test, as $400 is almost nothing compared to the amount of money a photography enthusiast will spend on a piece of kit (heck I spent about that much on the single lens I bought). Many then will be bought as a curiosity and that will give Lytro enough traction to continue developing their light field technology, hopefully one day releasing a sensor for the DSLR market. From the amount of buzz I’ve read about them over the past few days it seems like that is a very real possibility and I’d be one of the teeming masses lining up to get a DSLR with that kind of capability.

They’re not the only light field camera maker out there either, heck they’re not even the first. Raytrix, a 3D camera manufacturing company, was actually the first to market with a camera that incorporated light field technology. Looking over their product range they’ve got quite the selection of cameras available for purchase, although they seem to be aimed more at the professional rather than the consumer market. They even offer to convert your favourite camera into a light field one and give you some rough specs of what your camera will be post conversion. Lytro certainly has its work cut out for it with a company like Raytrix competing against them and it’ll be interesting to see how that develops.

On a personal level this kind of technology gets me all kinds of excited. I think that’s because it’s so unexpected; I mean once auto-focus made it easy for anyone to take a picture you’d think that it was a solved problem space. But no, people find ingenious ways of using good old fashioned science to come up with solutions to problems we thought were already solved. The light field space is really going to heat up over the next couple years and it’s got my inner photographer rattling his cage, eager to play with the latest and greatest. I’m damned tempted to give in to him as well as this tech is just so freakin’ cool.

[Image: Galaxy Nexus]

Samsung’s Galaxy Nexus: An Evolutionary Behemoth.

It’s no secret that I’m a big fan of my Samsung Galaxy S2, mostly because the specifications are enough to make any geek weak at the knees. It’s not just geeks that are obsessed with the phone either, as Samsung has moved an impressive 10 million of them in the 5 months that it’s been available. Samsung has made something of a name for itself as the manufacturer to have if you’re looking for an Android handset, especially when you consider Google used their original Galaxy S as the basis for their flagship phone, the Nexus S. Rumours have been circulating for a while that Samsung would once again be the manufacturer of choice, a surprising rumour considering Google had just sunk billions into acquiring Motorola.

Yesterday however saw the announcement of Google’s new flagship phone, the Galaxy Nexus, and sure enough it’s Samsung hardware that’s under the hood.

The standout feature of the Galaxy Nexus is the gigantic screen, coming in at an incredible 4.65 inches with a resolution of 1280 x 720 (the industry standard for 720p). That gives it a PPI of 315, slightly below the iPhone 4/4S’s retina screen at 326 PPI, which is amazing when you consider the Nexus’s screen is well over an inch bigger. As far as I can tell it’s the highest resolution on a smart phone currently on the market and there’s only a handful of handsets that boast a similar sized screen. Whether this monster of a screen will be a draw card though is up for debate, as not all of us are blessed with giant hands to take full advantage of it.
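That PPI figure falls straight out of the screen geometry, the diagonal pixel count divided by the diagonal size; a quick sanity check:

```python
import math

def ppi(width_px: int, height_px: int, diagonal_inches: float) -> float:
    """Pixels per inch: diagonal resolution in pixels over the diagonal size."""
    return math.hypot(width_px, height_px) / diagonal_inches

print(f"Galaxy Nexus: {ppi(1280, 720, 4.65):.0f} PPI")  # ~316, i.e. the ~315 quoted above
```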

Under the hood it’s a bit of a strange beast, especially when compared to its predecessors. It uses a Texas Instruments OMAP 4460 processor (dual core, 1.2GHz, itself built on ARM’s Cortex-A9) instead of Samsung’s own Exynos SoC, coupled with a whopping 1GB of RAM. The accompanying hardware includes a 5MP camera capable of 1080p video, all the usual connectivity options with the addition of NFC and wireless N and, strangely enough, a barometer. The Galaxy Nexus does not feature expandable storage like most of its predecessors did, instead coming in 16GB and 32GB variants. All up it makes for a phone that’s definitely a step up from the Galaxy S2, but not in every regard, with some features on par with or below those of the S2.

Looking at the design of the Galaxy Nexus I couldn’t help but notice that it had sort of regressed to the previous design style, being more like the Galaxy S than the S2. As it turns out this is quite deliberate, as Samsung designed the Galaxy Nexus in such a way as to avoid more lawsuits from Apple. It’s rather unfortunate as the design of the Galaxy S2 is really quite nice and I’m not particularly partial to the rounded look at all. Still I can understand why they want to avoid more problems with Apple; it’s a costly exercise and neither of them is going to come out the other side smelling of roses.

Hand in hand with the Galaxy Nexus announcement Google has also debuted Ice Cream Sandwich, the latest version of the Android OS. There’s a myriad of improvements that I won’t go through here (follow the link for a full run down) but notable features are the ability to unlock your phone by having it recognize your face, integrated screen capture (yes, it’s taken this long for that to become a default feature), an NFC sharing app called Android Beam and a better interface for seeing how much data you’re using that includes the ability to kill data hogging apps. Like the Galaxy Nexus itself Ice Cream Sandwich is more of an evolutionary step than a revolutionary one, but it looks like a worthy complement to Google’s new flagship phone.

The Galaxy Nexus shows that Samsung is very capable of delivering impressive smart phones over and over again. The hardware, for the most part, is quite incredible, bringing features to the table that haven’t been seen before. Ice Cream Sandwich looks to be a good upgrade to the Android operating system and coupled with the Galaxy Nexus the pair will make one very desirable smart phone. Will I be getting one of them? Probably not, as my S2 is more than enough to last me until next year when I’ll be looking to upgrade again, but I can’t say I’m not tempted ;)

[Image: HEXAGON spy satellite description diagram]

The Spy Satellite HEXAGON: Ah, Now The Shuttle’s Design Makes Sense.

Whilst the Space Shuttle will always be one of the most iconic spacecraft that humanity has created, its design was one of compromises and competing objectives. One of the design drivers, which influenced nearly every characteristic of the Shuttle, was the Department of Defense requirement that the Shuttle be able to launch into a polar orbit and return after a single trip around the earth. This is the primary reason for the Shuttle being so aeroplane like in its design, requiring those large wings to give it the cross-range capability to return to its launch site after that single orbit. The Shuttle never flew such a mission, but now I know why the DoD required this capability.

It was speculated that that particular requirement was spawned out of a need to capture spy satellites, both their own and possibly enemy reconnaissance craft. At the time digital photography was still very much in its infancy and high resolution imagery was still film based so any satellite based spying would be carrying film on board. The Shuttle then could easily serve as the retrieval vehicle for the spy craft as well as functioning as a counter intelligence device. It never flew a mission like this for a couple reasons, mostly that a Shuttle launch was far more expensive than simply deorbiting a satellite and sending another one up there. There was also the rumour that Russia had started arming its spacecraft and sending humans up there to retrieve them would be an unnecessary risk.

The Shuttle’s payload bay was also quite massive in comparison to the spy satellites of the time, which called the DoD’s requirements further into question. It seems however that a recently declassified spy satellite, called HEXAGON, was actually the perfect fit and could have influenced the Shuttle’s design:

CHANTILLY, Va. – Twenty-five years after their top-secret, Cold War-era missions ended, two clandestine American satellite programs were declassified Saturday (Sept. 17) with the unveiling of three of the United States’ most closely guarded assets: the KH-7 GAMBIT, the KH-8 GAMBIT 3 and the KH-9 HEXAGON spy satellites.

“I see a lot of Hubble heritage in this spacecraft, most notably in terms of spacecraft size,” Landis said. “Once the space shuttle design was settled upon, the design of Hubble — at the time it was called the Large Space Telescope — was set upon. I can imagine that there may have been a convergence or confluence of the designs. The Hubble’s primary mirror is 2.4 meters [7.9 feet] in diameter and the spacecraft is 14 feet in diameter. Both vehicles (KH-9 and Hubble) would fit into the shuttle’s cargo bay lengthwise, the KH-9 being longer than Hubble [60 feet]; both would also fit on a Titan-class launch vehicle.”

HEXAGON is an amazing piece of cold war era technology. It was equipped with two medium format cameras that would sweep back and forth, imaging a swath 370 nautical miles wide. Each HEXAGON satellite carried with it some 60 miles worth of film in 4 separate film buckets, which would detach from the craft when used and return to earth where they would be snagged mid-air by a recovery aircraft. They were hardy little canisters too, with one of them ending up on the bottom of the ocean before being retrieved by one of the navy’s Deep Submergence Vehicles. There were around 20 launches of the HEXAGON series of craft with only a single failure towards the end of the program.

What really surprised me about HEXAGON though was the resolution it was able to achieve some 30+ years ago. HEXAGON’s resolution was improved throughout its lifetime, with later missions achieving around 60cm, more than enough to make out people and capture very detailed images of, say, cars and other craft. For comparison GeoEye-1, which had the highest resolution camera on an earth imaging craft at the time of its launch, is only just capable of 40cm per pixel (and that imagery is reserved for the USA government). I’m left wondering what kind of imaging satellite the USA is using now, given that the DoD appears to be a couple decades ahead of the commercial curve.

It’s always interesting when pieces of a larger puzzle like the Shuttle’s design start falling into place. Whilst it’s debatable whether or not HEXAGON (and its sister craft) were a direct influence on the Shuttle, there are enough coincidences to give the theory a bit of credence. I can see why the USA kept HEXAGON a secret for so long; that kind of capability would’ve been downright scary back in the 80s and its reveal makes you wonder what they’re flying now. It’s stuff like this that keeps me obsessed about space and what we, as a species, are capable of.

[Image: Siri responding to “Tea. Earl Grey. Hot.”]

Siri: Merely a Curiosity or an Interface Revolution?

Voice controlled computers and electronics have always been a staple of science fiction, flirting with the idea that we could simply issue commands to our silicon based underlings and have them do our bidding. Even though technology has come an incredibly long way in the past couple decades, understanding natural language is still a challenge that remains unconquered. Modern day speech recognition systems often rely on key words in order to perform the required commands, usually forcing the user to use unnatural language in order to get what they want. Apple’s latest innovation, Siri, seems to be a step forward in this regard and could potentially signal a shift in the way people use their smartphones and other devices.

On the surface Siri appears to understand quite a bit of natural language, being able to recognise that a single task can be phrased in several different ways. Siri also appears to have a basic conversational engine in it, so that it can interpret commands in the context of what you’ve said to it before. The scope of what Siri can do however is quite limited, but that’s not necessarily a bad thing as being able to nail a handful of actions from natural language is still leaps and bounds above what other voice recognition systems are currently capable of.

Siri also has a sense of humour, often replying to out of left field questions with little quips or amusing shut downs. I was however disappointed with Siri’s take on the classic nerd line “Tea. Earl Grey. Hot.”, which received the following response:

[Image: screenshot of Siri’s response]

This screenshot also shows that Siri’s speech recognition isn’t always 100% accurate, especially when it’s trying to guess what you were saying.

Many are quick to draw the comparison between Android’s voice command system and apps available on the platform like Vlingo. The big difference there though is that these services are much more like search engines than Siri, performing the required actions only if you utter the commands and key words in the right order. That’s the way nearly all voice operated systems have worked in the past (like those automated call centres that everyone hates) and are usually the reason why most people are disappointed in them. Siri has the one up here as people are being encouraged to speak to it in a natural way, rather than changing the way they speak in order to be able to use it.

For all the good that Siri is capable of accomplishing it’s still, at its heart, a voice recognition system and with that come some severe limitations. Ambient noise, including others talking around you, will confuse Siri completely, making it unusable unless you’re in a relatively quiet area. I’m not just saying this as a general thing either; friends with Siri have mentioned this as one of its shortcomings. Of course this isn’t unique to Siri and is unlikely to be a problem that can be overcome by technology alone (unless you could speak to Siri via a brain implant, say).

Like many other voice recognition systems Siri is geared more toward the accent of the country it was developed in, i.e. American. This isn’t just limited to the different spellings between, say, the Queen’s English and American English, but extends to the inflections and nuances that different accents introduce. Siri will also fall in a crying heap if the pronunciation and spelling of a word differ, again limiting its usefulness. This is a problem that can be and has been overcome by other speech recognition systems, and I would expect that with additional languages for Siri already on the way these kinds of problems will eventually be solved.

A fun little fact that I came across in my research for this post was that Apple still considers Siri to be a beta product (right at the bottom, in small text that’s easy to miss). That’s unusual for Apple as they’re not one to release a product unfinished, even if that comes at the cost of features not making it in. In a global sense Siri really is still beta, with some of her services, like Yelp and location based stuff, not being available to people outside of the USA (as the above screenshot shows). Apple is of course working to make them all available but it’s quite unusual for them to do something in this fashion.

So is Siri the next step in user interfaces? I don’t think so. It’s a great step forward for sure and there will be people who make heavy use of it in their daily activities. However once the novelty wears off and the witty responses run out I don’t see a compelling reason for people to continue using Siri. The lack of a developer API (and no mention of whether one will be available) means that the services that can be hooked into Siri are limited to those that Apple will develop, meaning some really useful services might never be integrated, forcing users to go back to native apps. Depending on how many services are excluded people may just find it easier to not use Siri at all, opting for the native app experience, which is usually quite good anyway. I could be proven wrong on this, especially with technology like Watson on the horizon, but for now Siri’s more of a curiosity than anything else.