Monthly Archives: November 2012

Mark of the Ninja Screenshot Wallpaper Upgrades and Archetype choice

Mark of the Ninja: For The Good of the Clan.

The simplicity of 2D platformer games must be really liberating for developers, especially small-time independent ones. I say this because it seems that I’ve played a lot of games this year that fit into that genre and the number of innovative game ideas I’ve seen has really surprised me. These were the titles I grew up on and they were, for the most part, small variations on the original Duke Nukem idea. One thing I didn’t expect was the introduction of stealth-based game play, something which has traditionally been confined to 3D games. Mark of the Ninja blends stealth with puzzle solving and platforming to form a pretty unique game experience, one that doesn’t really have anything I can directly compare it to.

Unlike most ninja games, which take place in feudal Japan, Mark of the Ninja is set in the present day. You, an unnamed ninja, pass out while receiving your first tattoo, one that grants you special powers. A short while later a fellow ninja named Ora wakes you up as the ninja stronghold is under attack by a security agency headed by a man named Karajan. After rescuing your fellow ninjas as well as your master, Azai, you’re sent on a mission of vengeance against Karajan for the atrocities he committed against your clan.

Mark of the Ninja has a style to it that’s reminiscent of all those flash animations of yesteryear but with a level of refinement that many of those lacked. The cut scenes, for example, feel like they came straight out of a professional animation house and wouldn’t be out of place in any cartoon you’d see on a Saturday morning. There are also incredible amounts of detail everywhere, from the interactive area, which is littered with all sorts of things, to the backgrounds, which are done exceptionally well. All of this blends seamlessly with the music and foley, which provide a very detailed soundscape to complement the impressive art work.

Mark of the Ninja is primarily a stealth game and its implementation in the 2D platformer world is quite an interesting one. For starters, unlike most 2D games, Mark of the Ninja includes a line of sight mechanic, which forms a big part of any stealth game. This means that you’ll spend the vast majority of your time moving between shadows, dodging guards where you can, so you can either sneak up behind them and dispatch them quickly or just move on leaving them none the wiser. If it so pleases you, though, you can go toe to toe with every guard you meet, and there are some sections which will be far easier (and quicker) should you choose to do that.

Initially you start off with only a few tools at your disposal, namely your sword and bamboo darts that can be used to take out lights and other fixtures. As the game progresses you unlock additional abilities and equipment that allow for a much wider range of actions, enabling you to do things like terrify your enemies by laying spike traps or dangling corpses from the ceiling for all to see. All these options mean that your play through is almost guaranteed not to be the same as anyone else’s as there are just so many ways to go about doing the same thing.

Indeed that seems to be the whole point of Mark of the Ninja. Whilst it is primarily a 2D stealth platformer it also has many elements of a puzzler/exploration game, as there are many rewards to be found by simply taking the least obvious path. I can’t tell you how many times I found artefacts and scrolls by going in the wrong direction or moving blocks in seemingly random ways. If you’re persistent enough, even the most laborious of challenges can usually be circumvented by finding a path that leads around them or simply puts you behind the guards that were blocking your way. Mark of the Ninja, then, is a game that rewards the player for being curious but thankfully forgoes punishing you severely if you aren’t.

The upgrade system bears mentioning as how many upgrades you can afford depends directly on three things: how many challenges you complete, your overall score and how many of the hidden scrolls you uncover. For each of these there are three tokens up for grabs, giving you a potential total of nine per level. These can then be spent on various upgrades that either give you new abilities and equipment or improve the ones you currently have. Depending on what you get this can completely change the way you play the game, especially if you combine these upgrades with one of the costumes, which will grant you several benefits (usually at the cost of one particular trait).

This is usually the point where I mention any bugs or glitches that detracted from my game play experience but I’m pleased to report that there don’t seem to be any. Sure there were times when my character acted in a way I didn’t expect but it’s hard for me to blame the game for that as I get the feeling it was more me fat fingering the keys rather than the game engine wigging out on me. I did have some rather awkward checkpoint moments where, upon reloading, it’d place me into locations that I hadn’t yet explored (which was actually great sometimes), putting me in rather precarious situations, but it was nothing I couldn’t handle.

The story of Mark of the Ninja is also quite well done, especially considering it forgoes the usual ninja setting and instead brings the whole ninja idea into the modern day. Whilst I didn’t really feel the levels of emotion that I did for something like To The Moon it certainly didn’t suffer from the poor voice acting, irrational characters or glaring plot holes that plagued other titles I’ve played recently. I will admit that I’m yet to finish it (I believe I’m on the second last mission) so I’m not sure about the ultimate conclusion, but from what I’ve heard from my friends they weren’t disappointed with it, so it has that going for it at least.

Mark of the Ninja effortlessly combines all the best aspects of 2D platformers with stealth game play to form a game that makes you feel like the ultimate ninja whilst still providing an incredibly satisfying challenge. The graphics are superbly done, the sound track is excellent and, above all, the core game play is immensely satisfying. I could go on but for a game whose asking price is so low compared to its quality I’d rather just recommend you go out and play it; it’s really worth a play through.

Rating: 9.0/10

Mark of the Ninja is available on PC and Xbox 360 right now for $14.99 and an equivalent amount of Xbox points. Game was played on the PC with around 6 hours of total game time and 43% of the achievements unlocked.


Curiosity Self Portrait Mars

The Inevitable Disappointment of Curiosity’s Mars Discovery.

New scientific discoveries get me excited, they really do. After discovering the awesome Science Daily I found myself losing hours in research papers that showcased everything from new discoveries with great potential to good old fashioned applications of science that were already producing benefits for everyone involved. Of course it gets a whole lot more exciting when that science is being conducted on an entirely different planet, so you can imagine my excitement when I heard that Curiosity had discovered something amazing, something that could have been “history in the making”.

It’s one thing for space and science nuts like me to get excited about these kinds of things, we usually know what to expect and the confirmation of it is what gets us all giddy, but it’s another thing entirely for the rest of the world to start getting excited about it. You see what started out as a couple of posts on my feed reader quoting a couple of scientists on the Curiosity team eventually mutated into dozens, and when I saw that Australian TV programs were covering it I knew that it had gotten out of hand. It’s not that this was wholly unexpected, the public interest in Curiosity has been the highest I’ve seen since Spirit and Opportunity first touched down on Mars, but I knew that this fever pitch over the potentially ground breaking news would inevitably lead to public disappointment no matter how significant the find was.

To put it in perspective, Curiosity has a very distinct set of capabilities, most of them targeted towards imaging and studying the composition of the things it comes across. Much of the speculation I read about Curiosity’s find centred around the idea that it had detected life in some form or another, which would truly be earth shattering news. However Curiosity just isn’t set up to do that in the way most people think it is as its microscopes are simply not capable of imaging microbes directly. The only way it could detect signs of life would be through the on-board laboratory using its mass spectrometer, gas chromatograph and laser spectrometer, and even then it would only detect organic compounds (like methane) which are a good, but not certain, indication of life.

Unfortunately, whilst the scientists had done their best to downplay what the result might actually be, the damage has been done as the public’s expectations are wildly out of alignment with what it could actually be. It’s annoying as it doesn’t help the image of the greater scientific community when things like this happen and it’s unfortunately become a semi-regular occurrence. I can’t really blame the scientists for this one, they really are working on a historic mission that will further our understanding of Mars and many other things, but care has to be taken to avoid these kinds of situations in the future. Hopefully the media will also refrain from sensationalising science to the point where the story no longer matches the reality, but I’m not holding my breath on that one.

For what it’s worth though I’m still looking forward to whatever it is they found. We’re still only in the beginning of Curiosity’s mission, meaning there’s plenty more science to be done and many more discoveries to be had. Whilst they might not be the amazing things that the media might have speculated them to be they will still be exciting for the scientific community and will undoubtedly further our understanding in many different areas. Hopefully this will be the only PR debacle of Curiosity’s mission as I’d hate to have to write a follow up post.

DWFTTW vehicle

Travelling Faster Than The Wind Using Only…The Wind?

It sounds ludicrous, right? Being able to travel faster than the wind using only the wind sounds like an incredibly crazy idea, as surely there has to be some other external force acting on the vehicle for that to work. Indeed the idea perplexed me for quite a while, in much the same way as the airplane on a treadmill problem did, but once you get your head around the idea of apparent wind it starts to get a bit easier. Of course nothing beats a good example and it just so happens that a cracker of one crossed my desk recently.

YouTube video: Sailrocket 2

The video above shows an intriguing vehicle called Sailrocket 2, a sail boat with a rather unusual design that allows it to travel at almost 3 times the speed of the wind it’s in. The simplest way to explain this is that, as the design kind of suggests, it’s not travelling directly down the wind. It’s in fact travelling across the wind, which means the boat’s own motion generates an additional apparent wind that it can keep extracting speed from. Although this sounds a bit perpetual-motiony it’s kept in check by things like hull resistance, the efficiency of the sail and how close the boat can sail to the apparent wind it generates. Done right, however, you can get up to 6 times the speed of the prevailing winds, which can be pretty damn fast as Sailrocket 2 demonstrates.
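To make the apparent wind idea a bit more concrete, here’s a quick back of the envelope sketch with my own illustrative numbers (nothing from the video, just the vector arithmetic):

```csharp
// Rough sketch with made-up numbers: apparent wind = true wind - boat velocity.
using System;

class ApparentWindDemo
{
    static void Main()
    {
        // True wind blowing along the Y axis at 10 m/s
        double windX = 0, windY = 10;
        // Boat crossing the wind along the X axis at 25 m/s (2.5x the true wind speed)
        double boatX = 25, boatY = 0;

        // The wind the boat actually "feels" is the vector difference
        double apparentX = windX - boatX;   // -25 m/s, a strong component from ahead
        double apparentY = windY - boatY;   //  10 m/s
        double apparentSpeed = Math.Sqrt(apparentX * apparentX + apparentY * apparentY);

        // ~26.9 m/s of apparent wind left to drive the sail, even though the boat
        // is already going much faster than the 10 m/s true wind.
        Console.WriteLine("Apparent wind: {0:F1} m/s", apparentSpeed);
    }
}
```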

But what if I told you that, through some engineering trickery, similar things can happen travelling directly down the wind?

That, my friends, is a vehicle capable of just such a feat. The concept had been making waves for quite some time: whilst the idea of going faster than the wind when travelling across it is well known and proven, doing the same thing travelling with the wind was seen as impossible. Two years ago, however, a team headed by Rick Cavallaro built just such a vehicle and proceeded to set records with it not long after. It works in two phases: initially the cart is simply pushed along by the wind but, once it’s rolling, the wheels drive a propeller that keeps pushing it forward, letting it accelerate past the speed of the wind itself (at least that’s my understanding anyway). This is what allows it to travel faster than the wind that’s driving it and makes for a pretty neat piece of engineering.
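The part that finally made it click for me is the power balance, which some rough, idealised numbers of my own (not anything from the record runs) can show: the wheels harvest power at ground speed while the propeller only has to push against the much slower apparent headwind, so the same power buys more thrust than the braking force it cost.

```csharp
// Idealised back-of-the-envelope numbers showing why a wheel-driven propeller can
// keep accelerating a cart past wind speed. All values are assumptions for illustration.
using System;

class DownwindCart
{
    static void Main()
    {
        double windSpeed = 10;       // true wind, m/s
        double cartSpeed = 25;       // cart speed over the ground, m/s (already > wind)
        double efficiency = 0.7;     // assumed combined drivetrain + propeller efficiency

        double wheelForce = 100;     // braking force the wheels exert against the ground, N
        double powerHarvested = wheelForce * cartSpeed;                  // 2500 W taken from the wheels
        double apparentHeadwind = cartSpeed - windSpeed;                 // 15 m/s of air over the prop
        double thrust = efficiency * powerHarvested / apparentHeadwind; // ~117 N of thrust

        // Thrust exceeds the force the wheels gave up, so there's net force left
        // over to overcome drag even though the cart is outrunning the wind.
        Console.WriteLine("Wheel drag: {0:F0} N, prop thrust: {1:F0} N", wheelForce, thrust);
    }
}
```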

It’s this kind of non-intuitive science and engineering that really gets me going. I spent hours trying to understand all the principles behind this when I first heard of them and even now I’m still not 100% on them. That’s part of the fun though as the more I read about it the more I understand and the more interesting projects based on those ideas I uncover. It’s a rather deep rabbit hole to fall into however and I wouldn’t recommend it unless you’re as fascinated with science as I am.

Haswell Chip Wafer

Intel’s Next Generation CPU To Be Non-Removable, Drawing Enthusiasts’ Ire.

The ability to swap components around has been an expected feature for PC enthusiasts for as long as I can remember. Indeed the use of integrated components was traditionally frowned upon as they were typically of lower quality and, should they fail, you were simply left without that functionality with no recourse but to buy a new motherboard. Over time however the quality of integrated components has increased significantly and many PC builders, myself included, now forego the cost of additional add-in cards in favour of their integrated brethren. There are still some notable exceptions to this rule, like graphics cards, and there were certain components that most of us never thought would end up being integrated, like the CPU.

Turns out we could be dead wrong about that.

Now it’s not like fully integrated computers are a new thing, in fact this blog post is coming to you via a PC that has essentially zero replaceable/upgradable parts, commonly referred to as a laptop. Apple has famously taken this level of integration to its logical extreme in order to create its relatively high powered line of slim form factor laptops, and many other companies have since followed suit due to the success Apple’s laptop line has had. Still, laptops are a relatively small market compared to the other big CPU consumers of the world (namely desktops and servers), both of which have resisted the integrated approach, mostly because it didn’t provide the direct benefits it did for laptops. That may change if the rumours about Intel’s next generation chip, Haswell, turn out to be true.

Reports are emerging that Haswell won’t be available in a Land Grid Array (LGA) package and will only be sold in the Ball Grid Array (BGA) form factor. For the uninitiated, the main difference between the two is that the former is the current standard, which allows processors to be swapped out on a whim. BGA, on the other hand, is the package used when an integrated circuit is to be permanently attached to its circuit board as the “ball grid” is in fact blobs of solder that are used to attach it. Not providing an LGA package essentially means the end of any kind of user-replaceable CPU, something which has been a staple of the enthusiast PC community ever since its inception. It also means a big shake up of the OEM industry, which now has to make decisions about what kinds of motherboards to make, as the current wide range of choice can’t really be supported with the CPUs being integrated.

My initial reaction to this was one of confusion as this would signify a really big change from how the PC business has been running for the past 3 decades. This isn’t to say that change isn’t welcome, indeed the integration of rudimentary components like the sound card and NIC were very much welcome additions (after their quality improved), however making the CPU integrated essentially puts the kibosh on the high level of configurability that we PC builders have enjoyed for such a long time. This might not sound like a big deal but for things like servers and fleet desktop PCs that customizability also means that the components are interchangeable, making maintenance far easier and cheaper. Upgradeability is another reason, however I don’t believe that’s as big a factor as some would make it out to be, especially with how often socket sizes have changed over the past 5 years or so.

What’s got most enthusiasts worried about this move is the siloing of particular feature sets to certain CPU designations. To put it in perspective there are typically 3 product ranges for any CPU family: the budget range (typically lower power and less performance but dirt cheap), the mid range (aimed at budget conscious enthusiasts and fleet units) and the high end performance tier (almost exclusively for enthusiasts and high performance computing situations). If these CPUs are tied to the motherboard it’s highly likely that some feature sets will be reserved for certain ranges of CPUs. Since there are many applications where a low power PC can take advantage of high end features (like oodles of SATA ports for instance), and vice versa, this is a valid concern and one that I haven’t been able to find any good answers to. There is the possibility of OEMs producing CPU daughter boards like the slotkets of old, however without an agreed upon standard you’d effectively be locking yourself into that vendor, something which not everyone is comfortable doing.

Still, until I see more information it’s hard for me to make up my mind on where I stand on this. There’s a lot of potential for it to go very, very wrong, which could see Intel on the wrong side of a community that’s been dedicated to it for the better part of 30 years. They’re arguably in the minority however and it’s very possible that Intel is getting increasing numbers of orders that require BGA style chips, especially where their Atoms can’t cut it. I’m not sure what they could do to win me over but I get the feeling that, just like the other integrated components I used to despise, there may come a time when I become indifferent to it and those zero insertion force sockets of old will be a distant memory, a relic of PC computing’s past.

Dealing With The “Skills Shortage”.

Canberra is a weird little microcosm, existing purely because the 2 largest cities in Australia couldn’t agree on which of them should be the capital of the country and instead decided to meet, almost literally, in the middle. Much like Washington DC this means that all of the national level government agencies are concentrated in this area, meaning that the vast majority of the 360,000 or so population work either directly or indirectly for the government. This concentration of services in a small area has distorted many of the markets that exist in your typical city centres and probably the most notable of them all is the jobs market.

To put it in perspective there are a few figures that will help me illustrate my point more clearly. For starters the average salary of a Canberran worker is much higher than the Australian average, even beating out the commodity rich states which are still reaping the benefits of the mining boom. Additionally Canberra’s unemployment rate is among the lowest in Australia, hovering around a staggeringly low 3.7%. This means that the labour market here is somewhat distorted and that’s especially true for the IT industry. However, like the manufacturing industry in the USA, there are still many who will bellyache endlessly about the lack of qualified people available to fill the needs of even this small city.

The problem is, as it always has been, simple economics.

I spent a good chunk of my career working directly for the public service, jumping straight out of university into a decent paying job that I figured I’d be in for quite a while. However it didn’t take long for me to realise that there was another market out there for people with my exact skills, one that was offering a substantial amount more to do the same work. Like any rational person I jumped at this opportunity and have been continuing to do so for the past 6 years. However I still see positions similar to mine advertised with salaries attached to them that are, frankly, embarrassing for anyone with those kinds of skills to take when they can get so much more for doing the same amount of work. This has led to a certain amount of tension between Canberra’s IT workers and the government that wishes to employ them, with many agencies referring to this as a skills shortage.

The schism is partly due to the two-sided nature of the Canberran IT market. On the one hand the government will pay you a certain amount if you’re permanently employed with them, and on the other a different amount if you’re hired as an outside contractor. These positions are, for the most part, identical except that one pays an extraordinary amount more at the cost of some of the benefits (flex time, sick/annual leave, etc.). It follows that many IT workers are savvy enough to take advantage of this, planning their lives around the lack of those benefits accordingly, and thus will never even consider the lower paid option because it just doesn’t make sense for them.

This hasn’t stopped the government from trying however. The Gershon report had been the main driver behind this, although its effects have been waning for the past 2 years, but now it’s the much more general cost reductions that are coming in as part of the overall budget goal of delivering a surplus. The problem here, as I mentioned in the post I just linked, is that once you’re above a certain pay grade in the public service you’re expected to take on some kind of management function, which doesn’t really align with the requirements of IT specialists. Considering that even outside of Canberra’s arguably inflated jobs market such specialists are able to make far more than they could in the highest non-managerial role in the government, it comes as no surprise that the contractor market has flourished the way it did and why the implementation of the Gershon report did nothing but decimate the government’s IT capability.

Simply put, the skills/labour shortage that’s been experienced in many places, not just Canberra, is primarily due to a disconnect between the skills required and the amount organisations are willing to pay for said skills. The motivation behind lowering wage costs is obvious but the outcome should not be unexpected when you try to drive the price down while the supply remains the same. Indeed many of the complaints about a labour shortage are quickly followed by calls for incentives and education in the areas where there’s a skills shortage, rather than looking at the possibility that people are simply becoming more market savvy and are not willing to put up with lower wages when they know they can do better elsewhere.

I had personally believed that this only applied to the Canberra IT industry but in doing the research for this post it seems it applies far more broadly than I had first anticipated. In all honesty this does nothing but hurt the industry as it only helps to increase tensions between employers and employees when there’s a known disconnect between the employee’s market value and their compensation. I’d put the challenge to most employers to see how many good, skilled applicants they get if they start paying better rates, as I’d hazard a guess their hit rate would vastly improve.

dolla dolla bill y'all

Why There’s No Silicon Valley Equivalent in Australia.

If you follow the start up scene, care of industry blogs like TechCrunch/GigaOM/VentureBeat/etc, the lack of Australian companies making waves is glaringly obvious. It’s not like we haven’t had successes here, indeed you don’t have to look far to find quite a few notables, but there’s no question that we don’t have a technology Mecca that all aspiring entrepreneurs look towards when trying to realise their vision. You could argue that Sydney already fits this bill, since that’s where most of the money is, but it’s not the place where the innovation is most concentrated as Melbourne has arguably given rise to just as many success stories. This decentralized nature of Australia’s start-up industry presents a significant barrier to many potential businesses and, whilst I don’t have a good solution to it, the reasons behind it are quite simple.

Reserve Bank governor Glenn Stevens gave a speech at the CEDA annual dinner a couple of nights ago and hit the nail on the head as to why Australia doesn’t appear to have the same vibrant start-up ecosystem that can be found overseas:

Only 4.8 per cent of start-ups in Sydney and Melbourne successfully become “scaled” (large enough to be sustainable) which is another way of saying that 95.2 per cent fail. In Silicon Valley, the success rate is 8 per cent.

The difference is capital: start-ups in California raise 100 times as much money as Sydney ones in the scale stage, and they raise 4.8 times as much in the earlier stages of discovery, validation and efficiency.

Yet as everyone knows, Australia punches well above its weight in capital formation, thanks to compulsory superannuation and the $1.4 trillion super pool. Why doesn’t any of that money find its way to supporting…

Current fiscal policies are quite conducive to long term, low risk, moderate return investments (such as property and bank stocks) and the investment practices of our superannuation funds reflect this. Indeed even at a personal level Australian investors are risk averse, with the majority preferring things like property, extra super contributions or term deposits. Partly you could also put some of the blame on Australia’s culture, which is more inclined towards property ownership as the ultimate achievement a regular Australian can aspire to, whereas the USA’s is far more entrenched in the entrepreneurial ideal.

We then have to ask ourselves: if we’re aspiring to create a Silicon Beach here in Australia, what do we need to do in order to make that happen?

The report itself details a couple of ideas that could be implemented from a policy perspective, namely making certain company structures and incentive schemes cheaper and easier, however that’s only part of the issue. Ideally you’d also want some policies that make investing in risky start-up companies more attractive than the current alternatives. I don’t think abolishing current legislation like Negative Gearing would help much in this regard but it could potentially be extended to cover losses made on start-up investments. There are many other options of course (and I’m not saying mine is the perfect one) and I’d definitely be supportive of some investigation into policy frameworks that have been used overseas and their applicability here in Australia.

There’s also the possibility of the government intervening with additional funding in order to get start-ups past the validation phase, increasing the hit rate for the venture capital industry. I’ve talked a bit about this previously, focusing on using the NBN as a launchpad for Australia’s Silicon Beach, and really the NBN should be the catalyst which drives Australia’s start up industry forward. There are already specific industry funds being set up, like the one that just came through for Australian game developers, but the creation of a more general fund to help start-ups validate their ideas would be far more effective in boosting the high tech innovation industry. It would be much harder to design and manage for sure, but no one ever said trying to replicate Silicon Valley’s success would be easy.

For what it’s worth I believe the government is working hard towards realising this lofty goal (thanks to some conversations I’ve had with people in the know on these kinds of things) and as long as they draw heavily on the current start-up and innovation industry in Australia I believe we will be able to achieve it. It’s going to be very hard to break the risk averse mindset of the Australian public but that’s something that time and gentle pushes in the right direction can fix, something perfectly suited to legislative changes. How that should all be done is left as an exercise to the reader (who I hope is someone in parliament).

An Inexpensive, Ingenious Way To Clear Land Mines.

Clearing land mines isn’t an easy task, usually requiring heavy equipment that costs several million dollars in order to do it safely. It’s for this unfortunate reason that many places around the world are still littered with ordnance left over from conflicts that have long since passed. They’re such a large problem that they account for more than 15,000 deaths each year and for every 5000 mines removed it’s likely that one removal worker will die. The following clip demonstrates what can be done when a little ingenuity is applied to the problem and could very well be the solution that sees widespread use in places that simply can’t afford to remove mines the traditional way.

It’s an amazing piece of engineering as it’s remarkably simple, cheap to produce and solves an incredibly complicated problem that’s traditionally been out of reach of the people it will help. Although the device might have limited application currently I can easily see the design being adapted and improved for use in other areas of the world without too much trouble. It might not be the most time efficient solution but it’s a lot better than the alternative.

I just love ideas like this as they remind me that even the most perplexing of problems can have a simple, elegant solution. When that solution will go on to save thousands of lives per year that’s even better, especially when it’s in impoverished countries still dealing with the remnants of wars that ended long ago.

Call of Duty Black Ops II Screenshot Wallpaper Wingsuits

Call of Duty: Black Ops II: Suffer With Me.

One of the things I really like about reviewing games is going back over my reviews when a sequel or another instalment in a franchise comes out. The Call of Duty series takes the top prize for being my most reviewed franchise with not 1, not 2 but 3 previous reviews which I can draw on directly for comparisons. For someone who used to avoid any game that was based around one war or another it’s interesting to see how quickly I came around once I started playing the Call of Duty series, being hooked after a single game. Call of Duty: Black Ops II is the latest instalment in the franchise from Treyarch and I must say that they’ve really outdone themselves this time, firmly placing themselves on the same level as Infinity Ward.

Call of Duty: Black Ops II takes place in the not too distant future, in the USA of 2025. The story centres around David Mason, son of Alex Mason, the main protagonist from the original Black Ops, who’s tracking down a known terrorist called Raul Menendez. Much of the story is recounted in flashbacks from an ageing Frank Woods, whom David Mason consults to try and find out where Menendez is and what he might be up to. It’s through these flashbacks that you start to make sense of some of the events of the past and understand why certain things have happened and why you’re still alive to see them.

For a primarily console game I wasn’t expecting a major update in graphics from any of its predecessors as I believe they were tapping out the capabilities of the Xbox 360 some time ago. Compared to Modern Warfare 3 this seems to be largely true, with them both having similar levels of graphical detail. However if you compare it to Treyarch’s previous release there’s most definitely a step up, which they’re to be commended for. If I’m honest, whilst the graphics aren’t a massive improvement over Modern Warfare 3 they are a hell of a lot smoother, especially when there’s a lot of action going on. For a game that is almost entirely fast paced action this is a very welcome improvement, especially when it comes to multiplayer (which I’ll touch on later).

If you’ve played any of the Call of Duty series you’ll know the basic breakdown of the game play that I’m about to give you. It’s a First Person Shooter and so you’ll spend the vast majority of your time running around, letting bullets loose at varying arrays of enemies and utilizing your additional equipment (like grenades, flash bangs and remote C4) to tip the scales in your favour. Thanks to the ability to customize your load out before starting a mission you can also tailor your experience somewhat by, say, favouring sniper rifles over close range spray ‘n’ pray type weapons. For what it’s worth I usually played with assault rifles and SMGs, preferring to run carelessly into battle while unleashing torrents of bullets at my foes.

Black Ops II, like nearly all other titles in the Call of Duty franchise, has that trademark FPS experience that’s so well polished it just flows with an effortless grace. All the actions (running, jumping, shooting) just plain work like you expect them to. Whilst many other FPS type games will draw my ire for one core game play issue or another I really do find it hard to find fault with the fundamentals of any Call of Duty game. Arguably this is due to the ongoing success of the series, which has allowed every element to be refined over the course of so many games, but it still doesn’t fail to impress me, even after seeing it for the 4th time in as many years.

Treyarch has recognized that simply running from point A to point B and shooting everything along the way does get a little boring after a while and has included many different distractions to break up the repetition. Shown above is just one of the many little set pieces they include (this one actually comes fairly early on in the game) and it was an extremely fun way to start the mission off. They have also included a second mission type called Strike Force which is very different from the usual missions and is more akin to a game like Natural Selection, blending RTS elements with FPS game play.

The Strike Force missions put you in control of a squad of marines, robots and flying drones that you use to accomplish a mission. They’re all different, ranging from defending an objective to rescuing and escorting someone out, and whilst you can treat them like a regular mission by taking control of one of the units directly you’ll need to issue orders to the other AIs constantly if you want to finish successfully. If I’m honest I didn’t enjoy them that much at the start but after a while I really started to get into them, employing varying tactics and just loving being able to play with reckless abandon.

After all this praise I feel it’s appropriate to mention the few minor issues with Black Ops II that can lead to you having a bad time. Like nearly all FPS games that lump you with AI friends to help you out they are, for the most part, completely useless and will likely cause your death more often than they’ll prevent it. For instance I’ve seen my AI buddies run around a corner and thought it was completely safe to follow, however since most of the enemy AIs won’t target them, only you, this often means that there’s someone hiding around the corner who won’t trigger until you run into their line of sight. This is in addition to them getting in your way every so often, which can cause your death when you’re trying to take cover or, more comically, fail a mission when they put their head in front of your sniper rifle (“Friendly fire will not be tolerated!” apparently).

I also had an issue with some of the triggers not going off, causing the game to get stuck at a particular point. The one I can remember clearly was when I was in the bunker just before the Celerium device. I walked in and reprogrammed an ASD to fight for me but after doing so my crew just sort of stood around, not doing anything. Try as I might to get them to move I simply couldn’t and, since there’s no “restart from last checkpoint” option in the menu, I opted for the tried and true method of jumping on my own grenade to get back to my last checkpoint. After that everything worked as expected but it wasn’t an isolated incident and it’s something that’s been present in previous Call of Duty titles.

In a very surprising change to the Call of Duty formula you actually have quite a bit of agency in Black Ops II, with the game playing out very differently should you make different choices at different times. They are, for the most part, unfortunately binary but there are other, softer choices, like completing the Strike Force missions, which will have an influence on how the last hours of the game play out. The Black Ops II wiki page (SPOILER WARNING on that link) informs me that there are no fewer than 5 separate endings available to you, which is far more than your average FPS. That, combined with the fact that they’re not presented to you in Endotron 3000 style, means that Black Ops II is quite a step up in terms of story.

The story in and of itself is quite enthralling too, even if the beginning confused me somewhat (although that’s somewhat typical for me in Call of Duty games, if I’m honest). I was pleasantly surprised by how progressive it seemed as well, with many characters being female, including the President, and subtle references to current social ideals like the 99% vs the 1% and so on. After my good mate’s takedown of the last Call of Duty’s story and lack of agency I had a much more critical eye on Black Ops II’s story than I have had for any other game in the series and it makes me very happy to say that they’ve stepped up their game and my expectations were more than met.

The multi-player is pretty much what I’ve come to expect from Call of Duty games, bringing back all the classic match up modes alongside newer ideas like Kill Confirmed. Unlike the original Black Ops, which allowed you to choose a server, Black Ops II instead uses the same match making system that Modern Warfare 3 did. Usually I’d make a note here about how this sucks (and there are still reasons why it does) but since it works and can usually find me a game in under a minute it’s hard to complain about it. Treyarch has also brought back the much loved Nuketown map, which has been revamped for the modern era. They also took it away, which led to quite the uproar from the community (many of whom preordered just to get said map), but they’ve since brought it back so kudos to them for listening.

There’s really not a lot that’s new or inventive about the multi-player in Black Ops II that I’ve seen yet, with the experience system, upgrades and challenges all being very reminiscent of both Modern Warfare 3 and the original Black Ops. It’s kind of hard to improve on that formula since it works so well but those who are looking for a wholly new multiplayer experience a la Battlefield 3 will find themselves disappointed. However for those like me who love the fast paced, spammy action that maps like Nuketown bring it’s more of the same thing we’ve come to love and I still can’t get enough of it.

Call of Duty: Black Ops II catapults Treyarch up from the doldrums of being Infinity Ward’s poor cousin and firmly places them right at their side, showing that they’re quite capable of delivering a game that’s every bit as epic and enjoyable. The graphics are a great step up, the game play is smooth and polished and the story is very fulfilling, a rarity in the FPS genre. The multiplayer might not be much different from its predecessors but it works well and is just as addictive, which will see me spending many more hours on it. I thoroughly enjoyed my time in both the single and multi player parts of this game and should you be in the market for some top notch, AAA FPS action then you really can’t go past Black Ops II.

Rating: 9.5/10

Call of Duty: Black Ops II is available on PC, PlayStation 3 and Xbox 360 right now for $89.99, $78 and $78 respectively. Game was played entirely on the PC on Veteran difficulty with 7.3 hours in single player, unlocking 71% of the achievements, and 2 hours in multiplayer. A review copy of the game was provided to The Refined Geek by Activision for the purposes of reviewing.

Windows Azure

Azure Tables: Watch Out For Closed Connections.

Windows Azure Tables are one of those newfangled NoSQL type databases that excels at storing giant swaths of structured data. For what they are they’re quite good as you can store very large amounts of data in there without having to pay through the nose like you would for a traditional SQL server or an Azure instance of SQL. However that advantage comes at a cost: querying the data on anything but the partition key (think of it as a grouping of the data within a table) and the row key (the unique identifier within that partition) results in queries that take quite a while to run, especially when compared to their SQL counterparts. There are ways to get around this, however no matter how well you structure your data eventually you’ll run up against this limitation and that’s where things start to get interesting.
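For those who haven’t touched Table storage before, here’s a minimal sketch of what an entity looks like under the 1.x .NET storage client; the entity type, property names and choice of keys are purely illustrative.

```csharp
// Minimal sketch of a table entity (1.x StorageClient library). The type and
// property names here are made up for illustration; only PartitionKey and RowKey
// are indexed, which is why queries on anything else end up scanning.
using System;
using Microsoft.WindowsAzure.StorageClient;

public class PostEntity : TableServiceEntity
{
    public PostEntity() { }

    public PostEntity(string blogName, string postId)
    {
        PartitionKey = blogName; // groups related rows together on one partition
        RowKey = postId;         // unique identifier within that partition
    }

    public string Title { get; set; }
    public DateTime Published { get; set; }
}
```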

By default, whenever you do a large query against an Azure Table you’ll only get back 1000 records, even if the query would return more. However, if your query did have more results than that, you’ll be able to access them via a continuation token that you can add to your original query, telling Azure that you want the records past that point. For those of us coding on the native .NET platform we get the lovely benefit of having all of this handled for us by simply adding .AsTableServiceQuery() to the end of our LINQ statements (if that’s what you’re using), which takes care of the continuation tokens transparently. For most applications this is great as it means you don’t have to fiddle around with the rather annoying way of extracting those tokens out of the response headers.
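In practice it ends up looking something like the following sketch, reusing the made-up PostEntity from above; the table name, filter and connection string are all just illustrative.

```csharp
// Sketch of the "lazy" approach: AsTableServiceQuery() wraps the LINQ query in a
// CloudTableQuery<T> and enumerating Execute() follows continuation tokens for you.
using System;
using System.Linq;
using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.StorageClient;

class TableQueryExample
{
    static void Main()
    {
        // Development storage emulator; swap in a real connection string as needed
        var account = CloudStorageAccount.Parse("UseDevelopmentStorage=true");
        var context = account.CreateCloudTableClient().GetDataServiceContext();

        var query = (from post in context.CreateQuery<PostEntity>("Posts")
                     where post.PartitionKey == "TheRefinedGeek"
                     select post)
                    .AsTableServiceQuery();

        // Pulls every page of results, 1000 at a time, until the service runs out
        foreach (var post in query.Execute())
        {
            Console.WriteLine(post.Title);
        }
    }
}
```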

Of course that leads you down the somewhat lazy path of not thinking about the kinds of queries you’re running against your Tables, and this can lead to problems down the line. Since Azure is a shared service there are upper limits on how long queries can run and how much data they can return to you. These limits aren’t exactly set in stone: depending on how busy the particular server you’re querying is, or the network utilization at the time, your query could either take an incredibly long time to return or simply end up getting closed off. Anyone who’s developed for Azure in the past will know that this is pretty common, even for the more robust things like Azure SQL, but there’s one thing I’ve noticed over the past couple of weeks that I haven’t seen mentioned anywhere else.

As the above paragraphs might indicate I have a lot of queries that try to grab big chunks of data from Azure Tables and I have, of course, coded in RetryPolicies so they’ll keep at it should they fail. There’s one thing that all the policies in the world won’t protect you from however and that’s connections that are forcibly closed. I’ve had quite a few of these recently and I noticed that they appear to come in waves, rippling through all my threads, causing unhandled exceptions and forcing them to restart themselves. I’ve done my best to optimize the queries since then and the errors have mostly subsided, but it appears that should one long running query trigger Azure to force the connection closed, all connections from that instance to the same Table storage will also be closed.
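For reference, the pattern looks roughly like the sketch below rather than my exact production code; the retry values are illustrative and the deliberately broad catch is just there to show guarding a worker thread against an enumeration that dies partway through.

```csharp
// Sketch (carrying on from the earlier snippet): a retry policy handles transient
// failures, but a forcibly closed connection can still surface mid-enumeration,
// so the enumeration itself is guarded as well.
using System;
using System.Diagnostics;
using System.Linq;
using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.StorageClient;

class GuardedTableQuery
{
    static void Run(CloudStorageAccount account)
    {
        var context = account.CreateCloudTableClient().GetDataServiceContext();

        var query = context.CreateQuery<PostEntity>("Posts")
                           .Where(p => p.PartitionKey == "TheRefinedGeek")
                           .AsTableServiceQuery();

        // Exponential back-off on transient errors (values are illustrative)
        query.RetryPolicy = RetryPolicies.RetryExponential(5, TimeSpan.FromSeconds(2));

        try
        {
            foreach (var post in query.Execute())
            {
                // do the actual work on each entity here
            }
        }
        catch (Exception ex)
        {
            // A forcibly closed connection surfaces as an exception partway through
            // enumeration; log it and let the worker thread restart cleanly rather
            // than bringing the whole role down.
            Trace.TraceWarning("Table query aborted: {0}", ex.Message);
        }
    }
}
```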

Depending on how your application is coded this might not be an issue, however for mine, where the worker role has about 8 concurrent threads running at any one time, all attempting to access the same Table Storage account, it means one long running query that gets terminated triggers a cascade of failures across the rest of the threads. For the most part this was avoided by querying directly on row and partition keys, however the larger queries had to be broken up using the continuation tokens and then the results concatenated in memory. This introduces another limit on particular queries (as storing large lists in memory isn’t particularly great) which you’ll have to architect your code around. It’s by no means an unsolvable problem however it is one that has forced me to rethink certain parts of my application, which will probably need to be on Azure SQL rather than Azure Tables.
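Breaking a big query up by hand looks something like the sketch below, assuming the segmented Begin/EndExecuteSegmented calls on CloudTableQuery from the 1.x client behave the way I remember; the page-by-page loop is also where you’d decide how much you’re actually willing to hold in memory.

```csharp
// Sketch: paging through a large query manually and concatenating the results.
// Assumes the Begin/EndExecuteSegmented pair on CloudTableQuery<T> from the 1.x
// storage client; in a real worker you'd cap how much ends up in the list.
using System.Collections.Generic;
using Microsoft.WindowsAzure.StorageClient;

class SegmentedQueryExample
{
    static List<PostEntity> FetchAll(CloudTableQuery<PostEntity> query)
    {
        var results = new List<PostEntity>();
        ResultContinuation token = null;

        do
        {
            var asyncResult = query.BeginExecuteSegmented(token, null, null);
            ResultSegment<PostEntity> segment = query.EndExecuteSegmented(asyncResult);

            results.AddRange(segment.Results);   // concatenate this page in memory
            token = segment.ContinuationToken;   // null once the final page arrives
        } while (token != null);

        return results;
    }
}
```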

Like any cloud platform Azure is a great service, but one which requires you to understand what its various services are good for and what they’re not. I initially set out to use Azure Tables for everything and have since found that it’s simply not appropriate for that, especially if you need to query on parameters that aren’t the row or partition keys. If you have connections being closed on you inexplicably be sure to check for any potentially long running queries on the same role; as this post can attest, they could very well be the source of what ails you.

The Dip Results

My Challenge Response Curve.

I’ve come to terms with the fact that I’m a challenge addict, always seeking out new technologies or platforms that have new problems I can solve. I’d be lying if I said I didn’t enjoy it, as diving deep into an unknown area is something that always gives me a thrill and is arguably what keeps me coming back. This addiction to challenge, however, is its own worst enemy: whilst I might have dabbled in nearly every piece of technology imaginable, I really only know them to a certain point before they bore me, after which I’ll dump them for the next intriguing challenge. For someone who’s spent the better part of 2 years dreaming about starting his own technology based company this addiction to challenge is highly counter productive, something which I need to work on.

Like many of my ilk I’ve been trained in the art of pattern recognition, mostly for identifying when something can be automated or a process solidified in order to make it more efficient or reliable. My addiction to challenge hadn’t managed to slip past this process and after thinking about it for a while I realised that I had a kind of response curve to challenges. Initially there’s an overwhelming sense of progress as problems are overcome at a rapid pace; you’ve got momentum and you feel like the idea you’re working on has a lot of merit. Then, after a while, the challenges start to become routine and you start to question your motives. It’s at this point that I find myself looking for something new and exciting, usually finding it without too much hassle.

I’ve come to learn that I’m not alone in this kind of response, it’s called the dip:

The idea comes from Seth Godin, a serial entrepreneur and author who penned a whole book about it 5 years ago. I’d love to say that I’ve read it but I haven’t, and all the credit goes to Matt Aimonetti’s post about how we engineers typically suck at choosing jobs (which I totally agree with, even if I don’t wholly agree with the reasons why we suck) for introducing me to the concept from that book. I’d been thinking about writing a post about my challenge response curve for a while now but I hadn’t really figured out how to visualize the idea, and the graph above is pretty much exactly what I was picturing in my head, even if I didn’t have the axes labelled (I had no idea what they were, honestly).

This is not to say that putting endless amounts of effort behind something will always yield results though. One of the tricks I’ve learnt since discovering my addiction to challenge is that once you’re in that dip area it’s all too easy to be doing “work” on something and really not get anywhere, which adds to the frustration. Typically I found this was when I would just stare at bits of code for ages, thinking about how best to optimize them. Routinely this ended with me being stuck in a loop, mulling over the same ideas again and again without taking the dive and trying them out for fear of wasting the effort. At that point you either need to break away from that task or simply slog through and try it out. Sure, you might waste some time or effort by doing something that wasn’t worth doing, or lose some momentum by stepping away from the project, but that’s far better than spending time that ultimately results in nothing.

I’m currently working on yet another idea (yeah I know, I’m terrible) where I was implementing a search function so that users could discover information without having to trudge through pages of stuff they weren’t interested in. Now for small scale data sets, like the one I have for development, this is pretty easy, however for larger sets, like the one I have in production, the searches simply take too long to run. I had mulled over how to solve this for quite a while and implemented a solution over the weekend. This solution, whilst better than the original, was still performing unacceptably and forced me to rethink my approach to the issue. The time I spent on that solution is now technically wasted, but had I not spent that effort I would still be sitting here thinking it was the best course of action. I guess the realisation that even “wasted” effort has value was something I hadn’t really come to grips with and I don’t think I’m alone in that.

Thankfully this is one of those things where, once you’re aware of the issue, there are many things you can do to overcome it. I’m not saying that my particular coping strategy will work for everyone, I know it won’t, but I do feel that the dip applies almost universally even if the curve varies from person to person. How you recognize that you’re in the dip and how you get out of it is something I’m not sure I can help with, but I know that simply being aware of it has helped me immensely and it’s for that exact reason that I’m pretty excited about my most recent projects.