Striking the right balance between a game's enjoyability and its difficulty can be a rather tricky task. Back in the dawn of gaming, developers would often shoot for a high difficulty level simply because that meant people would play their game longer, even if it came at the cost of some enjoyment. It worked, for the most part: I can remember becoming infuriated with many games yet still being unable to put them down, losing many hours to challenges that had long lost all meaning to me, with nothing remaining but a desire to see the thing finished. They Bleed Pixels, the latest game from Spooky Squid Games Inc, feels like a homage to those days and, if my reaction to it is anything to go by, it's a pretty authentic experience.
They Bleed Pixels drops you into a Lovecraftian world where you take control of an anonymous girl (I couldn't find her name, at least) who's left at a home for troubled girls. It's at this place she discovers a book, dripping with blood and pulsating with a decidedly evil red glow, that invades her dreams and twists her physical form into a purple-skinned version of herself with claws for hands. She then has to battle her way past countless enemies and obstacles in order to reach the end and wake from the terrible dream holding her captive.
As my long-time readers will know I've got a bit of a thing for pixel art games, probably due to the nostalgia factor, and They Bleed Pixels delivers quite well in this regard. The art direction is great as everything has an eerie vibe to it, even when the music playing behind it is quite upbeat. This is only made better by the very satisfying explosions of pixels when you dispatch enemies (or yourself, if you find the wrong edge of a saw), which fly across the screen and coat every surface they touch. Combined with the meaty foley that accompanies it, They Bleed Pixels is quite a visceral experience for the eyes, ears and mind.
The core game of They Bleed Pixels is the tried and true platformer, which seems to take quite a lot of inspiration from the Super Meat Boy style of games. The mechanics are quite similar: you can grab onto walls and slide down them at varying paces, you have a double jump so that areas that seem inaccessible actually aren't, and as you progress nearly every wall has something on it that will kill you. Like all games in its genre the platforming sections start off simple and then ramp up the difficulty slowly, which I believe is the key to cementing you in your seat as you die repeatedly to the same obstacle.
Unlike other platformer-only titles, They Bleed Pixels includes a combat system that uses only a single attack button, which can then be modified by the movement and jump keys. For a game that obviously prefers a controller-based scheme (most of the menus reference button A, for instance) I can see this working quite well — indeed Super Meat Boy's developers recommend a controller as the control scheme of choice — but I ignored their advice and used my keyboard. Using these techniques, which are laid out for you in a tutorial, you can rack up big combos on your enemies, which leads into one of the other game mechanics.
Unlike other platformers, which have set checkpoints or only save at the end of the level, They Bleed Pixels has a meter at the top of the screen that fills up as you dispatch your enemies. The higher your combos the faster it fills, so the game encourages you to blast through as fast as you can. Once it's filled you can stand still to create a checkpoint, however you can also risk it and keep going in order to push your score even higher. It's a dangerous mechanism and more than once I found myself sent back much farther than I would have liked just because I wanted to amp my score up.
For probably the first 2 dreams I was really enjoying the play style of They Bleed Pixels, mostly because it felt like Super Meat Boy without the tendency to induce RSI. Sure there were a lot of tense moments, but I never finished a level with more than a few dozen deaths, something which in Super Meat Boy just counted as the warm up. However as the game went on I found myself stuck on levels for up to an hour or more, throwing myself repeatedly at the same obstacles and seeming to get nowhere. Whilst I'd like to blame game bugs for it (and did for most of the time), after carefully watching what was happening I could only blame myself, but that didn't stop me from feeling frustrated.
I think primarily my gripe with the later levels comes from the repetition of challenges the player has already beaten previously. If you look at the screenshot below and compare it to the second one in this post you'll note how similar these challenges are (jumping from one side of a block to the other), and that particular challenge is present in nearly every level. There are also long sections where you're basically doing the same thing over and over until you get to the end, which doesn't feel like a good challenge. Indeed it feels more like a punishment for not being able to execute the moves correctly, which can happen quite easily when you panic and hit the attack button rather than jump.
Now don’t get me wrong, the game stands well enough on its own, but there was definitely a point where it transitioned from being a fun level of challenge to being just straight up insane and that’s where the fun started to rapidly drain out of it. I got to the last level, heck I was only about 3 screens away from finishing the game, but after spending a good 20 minutes or so on a puzzle and seemingly getting no where I just couldn’t bring myself to go back to it. I will take the criticism that I just wasn’t good enough to complete it (my performance in Super Meat Boy is a testament to how mediocre I am with these kinds of games) but even that knowledge won’t change the fact that I stopped having fun in the last couple hours.
For its genre They Bleed Pixels is an incredibly well polished title that will provide hours of frustrating enjoyment. Whilst I'm not into the whole achievement scene, there are enough challenges listed to keep even the most dedicated achievement hunter mashing buttons for double, maybe even triple, my play time. I might have lost interest right towards the end, but I can't deny the overall quality of They Bleed Pixels, especially when compared to others in its genre.
They Bleed Pixels is available right now on PC and Xbox 360 for $9.99 and an equivalent amount of Xbox points. The game was played entirely on the PC with around 7 hours played and 19% of the achievements unlocked.
Like all industry terms, the definition of what constitutes a cloud service has become somewhat loose as every vendor puts their own particular spin on it. Whilst many cloud products share a baseline of features (i.e. high automation, abstraction from the underlying hardware, availability as far as your credit card will go), what's available after that point becomes rather fluid, which leads to PR departments making claims that don't necessarily line up with reality, or at least with what I believe the terms actually mean. For Microsoft's cloud offering in Azure this became quite clear during the opening keynotes of TechEd 2012, and the subsequent sessions I attended made it clear that the current industry definitions need some work in order to ensure there's no confusion around what the capabilities of each of these cloud services actually are.
If this opening paragraph sounds familiar then I'm flattered: you've read one of my LifeHacker posts. But there was something I didn't dive into in that post that I want to explore here.
It’s clear that there’s actually 3 different clouds in Microsoft’s arsenal: the private cloud that’s a combination of System Centre Configuration Manager and Windows Server, the what I’m calling Hosted Private Cloud (referred to as Public by Microsoft) which is basically the same as the previous definition except its running on Microsoft’s hardware and lastly Windows Azure which is the true public cloud. All of these have their own set of pros and cons and I still stand by my statements that the dominant cloud structure in the future will be some kind of hybrid version of all of these but right now the reality is that not a single provider manages to bridge all these gaps, and this is where Microsoft could step in.
The future might be looking more and more cloudy by the day, however there's still a major feature gap between what's available in Windows Azure and the traditional Microsoft offerings. I can understand that some features might not be entirely feasible at a small scale (indeed many will ask what the point of running something like Azure Table Storage on a single server would be, but hear me out) but Microsoft could make major inroads into Azure adoption by making many of these features installable on Windows Server 2012. They don't have to come all at once — indeed many of the features in Azure became available in a piecemeal fashion — but there are some key features that I believe could provide tremendous value for the enterprise and ease them into adopting Microsoft's public cloud offerings.
SQL Azure Federations, for instance, could provide database sharding to standalone MSSQL servers, giving a much easier route to scaling out SQL than the current clustering solution. Sure, there would probably need to be some level of complexity added for it to function in smaller environments, but the principles behind it could easily translate down to the enterprise level. If Microsoft were feeling particularly smart they could even bundle in the option to scale records out onto SQL Azure databases, giving enterprises that coveted cloud burst capability that everyone talks about but no one seems to be able to do.
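To illustrate the underlying idea — a hedged sketch only, not the actual Federations API, with hypothetical shard names and connection strings — sharding ultimately boils down to deterministically routing each federation key to one of several databases:

```python
# Minimal sketch of hash-based shard routing, the concept underpinning
# database sharding schemes like SQL Azure Federations. All connection
# strings and names below are hypothetical.

import hashlib


class ShardRouter:
    """Maps a federation key (e.g. a customer ID) to one of N shards."""

    def __init__(self, shard_connections):
        self.shards = shard_connections

    def shard_for(self, federation_key):
        # A stable hash means the same key always lands on the same shard.
        digest = hashlib.sha1(str(federation_key).encode("utf-8")).hexdigest()
        index = int(digest, 16) % len(self.shards)
        return self.shards[index]


# Hypothetical layout: two on-premise servers plus one SQL Azure database
# acting as overflow capacity -- the "cloud burst" scenario.
router = ShardRouter([
    "Server=onprem-sql-01;Database=CustomersShard0",
    "Server=onprem-sql-02;Database=CustomersShard1",
    "Server=myapp.database.windows.net;Database=CustomersShard2",
])

conn = router.shard_for(42)  # all queries for customer 42 target this shard
```

The appeal for smaller environments is that the routing logic is identical whether the shard list contains one local server or a dozen cloud databases; only the connection strings change.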
In fact I believe that pretty much every service provided by Azure, from Table storage all the way down to the CDN interface, could be made available as a feature in Windows Server 2012. They wouldn't be exact replicas of their cloudified brethren but you could offer API consistency between private and public clouds. This, I feel, is the ultimate cloud service as it would allow companies to start out with cheap on-premise infrastructure (or, more likely, leverage current investments) and then build out from there. Peaky demands could then be easily scaled out to the public cloud and, if the cost is low enough, the whole service could simply transition there.
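As a sketch of what that API consistency might look like — all class and method names here are hypothetical, not any real Azure SDK — application code would target a common interface, and which backend sits behind it becomes a deployment choice rather than a rewrite:

```python
# Illustrative sketch: one table-storage interface with interchangeable
# backends, so code written against an on-premise store would run
# unchanged against the public cloud. Names are hypothetical.

from abc import ABC, abstractmethod


class TableStore(ABC):
    @abstractmethod
    def put(self, partition_key, row_key, entity): ...

    @abstractmethod
    def get(self, partition_key, row_key): ...


class LocalTableStore(TableStore):
    """In-memory stand-in for an on-premise table service."""

    def __init__(self):
        self._rows = {}

    def put(self, partition_key, row_key, entity):
        self._rows[(partition_key, row_key)] = entity

    def get(self, partition_key, row_key):
        return self._rows.get((partition_key, row_key))


class CloudTableStore(TableStore):
    """Would wrap the public cloud service behind the same interface."""

    def put(self, partition_key, row_key, entity):
        raise NotImplementedError("cloud backend elided in this sketch")

    def get(self, partition_key, row_key):
        raise NotImplementedError("cloud backend elided in this sketch")


# Application code depends only on TableStore; swapping LocalTableStore
# for CloudTableStore is configuration, not a code change.
store: TableStore = LocalTableStore()
store.put("orders", "1001", {"total": 59.90})
```

That swap-the-backend property is exactly what would let a service start on cheap local hardware and transition to the public cloud when the economics make sense.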
These features aren’t something that will readily port overnight but if Microsoft truly is serious about bringing cloud capabilities to the masses (and not just hosted virtual machine solutions) then they’ll have to seriously look at providing them. Heck just taking some of the ideals and integrating them into their enterprise products would be a step in the right direction, one that I feel would win them almost universal praise from their consumers.
There’s been something of a goal shift within the space industry recently. For quite a long time the focus was on returning to the moon and establishing a presence there which was born out of George W. Bush’s Vision for Space Exploration. However since then the goals of NASA, and indeed the goals of the most promising private space company, have shifted from going back to where we once visited to charting a course to virgin territory. Whilst its entirely possible that both NASA and SpaceX are just looking to capitalize on the attention that’s been focused on the Mars Curiosity Rover by announcing plans to send humans to our red sister there’s no denying that both of them are seriously considering the idea and it seems NASA might be looking at some rather radical ideas.
There’s been quite a lot of talk about what the best way to get to Mars would be and most of them involve a way station of some sort, something close to Earth that we can use as a staging ground whilst we prepare for the actual mission. The ideas have ranged from simply using the International Space Station to establishing a base on the moon. NASA has recently started investigating the idea of putting a base out at L-2 (Lagrange Point 2), beyond the orbit of the moon. Such a base would provide quite a few advantages and not just to potential manned missions to Mars.
You see, the Lagrange points are special places where the gravitational effects of all the nearby bodies balance out, so that you don't really need to do a heck of a lot to remain there indefinitely. That's quite desirable because it means you have to take less station keeping equipment and fuel with you, making room for bigger and better payloads. It's for this (and numerous other reasons) that Hubble's successor, the James Webb Space Telescope, will be placed at L-2. There's one other advantage to L-2 too: you don't need very much energy to get anywhere in our solar system once you're there, especially if you time it right and get some lovely gravitational boosts along the way.
Putting a station there and maintaining it would be no small feat however. At L-2 you're well outside the protective magnetic field of Earth, which means any potential space station has to be heavily shielded against the solar wind and cosmic radiation that will bombard it relentlessly. This either means a much smaller single-launch station (à la Salyut and Skylab) or multiple successive launches. It's not an insurmountable task but it's definitely a step up from the ISS in terms of complexity and investment required. The L-2 location also makes getting to and from the station much more complicated than getting to the ISS or even the moon, and that raises questions about how to handle things like emergency situations and resupply flights. Again there's no technical limitation here, but you're well into envelope pushing territory when you're working out at L-2.
At the same time though I do believe that if you're considering a base at L-2 you'd also better consider doing something similar on the moon, especially if landing on other planets is your end goal. You see, we do have quite a bit of experience in building space stations, and a base at L-2 would be an organic progression of that. What we don't have is any experience in building habitats on the surface of other worlds, and the moon, with its lack of atmosphere and harsh environment, would be an amazing test bed for potential habitats on other planets. This is not to say that a moon base is better than something at L-2 — they both have their pros and cons — just that if L-2 is a consideration then the place over a million kilometres before it might not be a bad idea either.
I think the most exciting thing to come out of all of this is the fact that NASA is investigating some things which really are pushing the limits of our capabilities in space. I’ve long said that this is where they need to be focused as the private space industry has shown that they’re quite capable of doing the day to day stuff which should leave NASA’s budget free to do some really incredible stuff. With that finally happening I could not be happier as it means that we’re not that far off from becoming an interplanetary species.
In principle, at least.
Kickstarter was one of those services that faced the typical chicken and egg problem of Internet start-ups. As a crowdfunding platform its success was born out of the exposure it could bring to potential projects, and in the beginning that was essentially nothing. As time went on and crowdfunding became more mainstream, Kickstarter became the portal to get projects funded online, and since then we've seen projects transform from being mostly single guys in garages to multi-disciplinary teams looking to launch disruptive technology. Whilst I still believe that Kickstarter doesn't fundamentally change the rules of the funding game, the shift of the value judgement from a single entity to the wider world is a big one, and one that has seen many products come to life that might not have done otherwise.
Of course as the service and the number of projects grew over the years it was statistically inevitable that things would start to go wrong. Thankfully the majority of the problems faced by Kickstarter campaigns come from overly ambitious product designers who underestimate the time it will take to get their product to market, leading to delays to their initial time frames. There haven't been that many outright problems either: failed projects never receive any money (and remain publicly accessible after the fact), and only a handful of projects have vanished into the ether, all apparently due to copyright claims.
Still, there were a couple of high profile cases of projects being showcased that were little more than a concept someone wanted to create. Now this is the reason Kickstarter exists, to get projects like that the funding they need to get over that initial hump, however for physical goods having nothing but a couple of product renderings can lead to some serious problems down the road, and there were numerous projects that suffered major delays because of this. There were even notable projects that had a prototype but struggled to scale to meet the demand created by their Kickstarter campaign.
Kickstarter, to its credit, has recognised this problem and recently changed the rules, putting it rather bluntly that Kickstarter is not a store.
Looking at the changes, the first thing you'd notice is the number of previously funded projects that would no longer fly under the new rules. Personally I think it's a good thing, as requiring an actual prototype means a project creator will have had to clear many of the initial hurdles of bringing the product to reality and thus won't be using the Kickstarter funds to do so. It does mean the barrier to entry for the product and hardware categories just went up a few notches, but it also means there's a much higher likelihood that such products will actually come into existence. The change that puts an end to multiple items is there to ensure another Pen Type-A/Pebble situation doesn't occur, although there's still the potential for that to happen.
I think the changes are overwhelmingly positive, and whilst some projects might now be excluded from using Kickstarter as a funding platform there are many other crowdfunding alternatives that still support projects of that nature. It also helps to make sure people understand the (usually low) risks of using Kickstarter, as there's every chance in the world that the product or service will not be viable, and neither Kickstarter nor the project founders are under any obligation to issue refunds for projects that fail after funding. This might be spelt out in no uncertain terms in the fine print when you sign up, but anything that makes people more aware of what they're getting themselves into is a good thing and does wonders for Kickstarter's reputation.
It hasn’t turned me off the idea, that’s for sure.
Things like this never fail to bring me to tears:
It’s not the most original video on the planet (or off, as the case might be) but it’s probably one of the most memorable ones of these edge of space type deals. The train’s face is CGI but the rest of it is completely real, done in a process that can be replicated on the cheap if you know what you’re doing. There are however a couple nits that I like to pick about videos like these mostly around what people tend to classify as “space”.
The internationally defined boundary between Earth's atmosphere and space is 100 km above sea level, referred to as the Kármán line. Even the most exotic of helium balloons will only manage to make it about halfway to that point before bursting and falling back to earth. Whilst the atmosphere at those heights wouldn't support life for any length of time, and you can clearly see the curvature of the Earth, you're not in space unless you're past that point. Even saying you're at the edge of space is a bit of a stretch, but I'll usually let that slide.
Despite all that I still love videos like this as they really put the whole world in perspective. That feeling has a name too, the overview effect, which many astronauts have reported feeling upon seeing the Earth from space or on the lunar surface. It’s my hope (and running bet with a friend) that I’ll one day see the earth from that perspective too.
I’ve long been of the opinion that many of my fellow Generation Ys are suffering from a crisis of desire in regards to the Australian property market. It’s an understandable phenomenon as most of us grew up in what are now quite nice suburbs, central to a lot of services and now considered to be an extremely desirable place to live. It then comes as no surprise that our generation would want to replicate this with their first home purchase and regrettably this leads many to believe that the property market is unaffordable, which at that level it most certainly is. Buying out in the mortgage belt, like most of their parents did back when the time came for them to do so, has been my solution to the issue for quite some time now but some recent reading has pointed me towards http://fhareversemortgagecalculator.com/ which in turn pointed me in another direction, one that I hadn’t considered previously.
To give you some background on where this thought came from I'll point you in the direction of a really solid article from The Atlantic on the drastic change in spending habits between Gen Ys and their predecessors. In it Thompson lays out the idea that perhaps Generation Y has replaced the home and car as the most desirable objects with modern technology like smartphones. This is coupled with an increasing tendency towards sharing those same high-capital-cost goods (called collaborative consumption), which means total ownership plummets whilst use skyrockets. It's an interesting idea and I was wondering if the trend translated across to Australia.
Turns out part of it does.
Whilst I couldn’t find any good information around car ownership with Australia being a country that’s heavily focused on property ownership there was a lot to dig through in regards to Gen Ys attitude towards property. Shockingly, at least for me, the vast majority of Generation Ys do intend to buy, somewhere on the order of 77% which is actually above previously generations. Faced with the decision of not being able to get the home they own many will consider a cheaper investment property initially in order to be able to leverage it later into the property they actually want. That’s not the interesting part though, what I found out is that 72% of Australian Gen Ys would buy a house with a friend or family member. Whilst I’ve known people who’ve done this I had no idea that it would be so common and that’s an intriguing insight.
I’ve long held the position that the median house price on a single income is unaffordable in Australia and it appears that Gen Y is aware of that, at least on some level. Collaborative consumption of the housing resource then is our way of reacting to this, in effect shrinking the affordability gap by spreading the pain around a bit. Indeed I did something very similar to this when we bought our first house in Canberra by renting out two of the rooms to friends for the first year. The experiences from others are similar as well with the sharing arrangement usually only being temporary (on the order of years, not decades) before they’re able to part ways into a home of their own.
This means my hammering away at the point that Gen Y is suffering under a crisis of desire (they still are, at least in my opinion) probably isn’t going to help them change their minds. What I should probably be focusing on instead is the ways in which to structure these kinds of sharing arrangements in order to make the desired property more affordable or what strategies they can use in order to get themselves into a position to make it affordable. As you can probably tell I’m still wrestling with the best way to approach this and the ultimate idea will have to be a post for another day.
3 years. That’s how long I’ve been writing about the R18+ rating in Australia. I had thought that I was pretty much done with it when the rating sailed through the lower house 6 months ago but a week ago the guidelines for the new rating were released by the Australian Classification Board and the gaming community collectively sighed in dismay at what was presented. Taking a look over the guidelines it’s clear that the idea of a unified classification scheme for all forms of media will never come into reality in Australia as apparently games must be treated differently to all other mediums of expression. Their reasoning for this might look sound on the surface (games are interactive and thus more impactful) but their thinking isn’t based on any science I can find and we all know how angry that makes me.
The guidelines themselves are short and concise, which makes them rather easy to compare to their previous iterations. Whilst the R18+ rating does open the doors to games that are adult in nature, there are some pretty severe restrictions when compared to its sister medium of film. Indeed if you compare the guidelines for film's version of R18+ with the one for games, the number of justifications, limits and "in context" qualifiers in the latter makes the contrast quite stark, showing that the classification board believes games are more impactful due to their interactive nature. I've heard this line before but never actually did any research into whether it was true or not.
Today I found out that it’s not.
Whilst it’s hard to find causative links between video games and any sort of trend in behaviour due to the impossibility of doing proper control testing there is some decent data out there. However meta-analysis of previous studies can show data trends that we can get correlations from. Before you repeat the “correlation is not causation” mantra at me don’t forget that correlation is required for causation¹ so any time you see it pop up the relationship almost always warrants further investigation. In this case whilst the research suggests that violent media may lead to increased aggression that does not directly translate to increased violence and violent media is never the sole factor responsible.
What the research does show, however, is that the tendency towards aggressive behaviours is no more influenced by interactive games than it is by passive consumption of other forms of media. Indeed further research shows that contextual justification of violence is far more influential than the interactivity or quantity of violence present. Thus the idea that games have to be held to a different standard than other mediums due to their interactivity is at best an emotional argument, and not one we should be basing laws on.
Of course, since these are a set of guidelines it ultimately comes down to the reviewers to enforce them, and there's a chance they won't do so literally. Indeed many games that got slapped with R18+ ratings in other countries were previously waved through under MA15+ here in Australia, and it's quite possible that with the introduction of the R18+ rating many of the games that previously fell under the RC (Refused Classification) banner will get waved through in much the same way. This is pure speculation on my part, however, and we shall have to wait for the first lot of R18+ games to come through the ACB before we'll know if there's any credence to that theory.
It makes me incredibly angry to see policy based around emotional arguments rather than solid research. If I can find the right articles in the couple hours I spend on researching these things then I’d expect nothing less from public servants who are paid to do the same in order to advise their politicians. I can only hope that the government takes the advice of the ALRC seriously and looks towards unifying the classification scheme so we can abandon these silly schemes of differing levels of classification for different types of media. It’s another long shot for sure but after 3 years of shouting to get to this point I’m not about to give up now.
¹And for those smart asses out there who will then tell me that you can have causation without correlation I’ll tell you to go back to your data and have a good hard look at it. If SPSS tells you that there’s no correlation in the data when you somehow know there is then there’s a problem with your data or hypothesis.
There’s only one thing that I don’t like about my little 60D and that’s the fact that it’s not a full frame camera. For the uninitiated this means that the sensor contained within the camera, the thing that actually records the image, is smaller than the standard 35mm size which was prevalent during the film days. This means that in comparison to its bigger brothers in more serious cameras there are some trade offs made, most done in the name of reducing cost. Indeed for comparison a full framed camera would be over double the price I paid for my 60D and would actually lack some of the features that I considered useful (like the screen that swings out). The rumour mill has been churning for quite a while that Canon would eventually release an affordable full frame DSLR at this year’s Photokina and the prospect really excited me, even if my 60D is still only months old at this point.
News broke late yesterday that yes, the rumours were true, and Canon was releasing a new camera called the EOS 6D, in essence a full frame camera for the masses. The nomenclature would have you believe it was in fact a full frame upgrade for the 60D, something that was widely rumoured to be the case, but diving into the specifications reveals that it shares a lot more with the 5D lineage than it does with its prosumer cousin. This doesn't mean the camera is more focused on the professional field — indeed the inclusion of things like WiFi and GPS, usually considered consumer features (I've had them in my Sony pocket cam for years, for example) — but if I'm honest the picture I built up of the new camera in my head doesn't exactly align with what Canon has revealed, and that's left me somewhat disappointed.
Before I get into that though let me list off the things that are really quite awesome about the 6D. A full frame sensor in a camera that will cost $2099 is pretty damn phenomenal, even if that's still well out of reach of the people buying in the 60D bracket. It's actually the cheapest full frame DSLR available (even the Sony fixed lens full frame is $700 more) and that in itself is an achievement worth celebrating. All the benefits of the bigger sensor are a given (better low light performance, crazy ISOs and better resolution) and the addition of WiFi and GPS means the 6D is definitely one of the most feature packed cameras Canon has ever released. Still, it's the omission of certain features and reduction in others that's left me wondering whether it's worth upgrading to.
For starters there’s the lack of an articulated screen. It sounds like a small thing as there are external monitor solutions that would get me similar functionality but I’ve found that little flip out screen on my 60D so damn useful that it pains me to give it up. The reasons behind its absence are sound though as they want to make the 6D one of their more sturdier cameras (it’s fully weather sealed as well) and an articulated screen is arguably working against them in that regard.
There’s also the auto-focus system which only comes with 11 focus points of which only 1 is cross type. This is a pretty significant step down from the 60D and coming from someone who struggled with their 400D’s lackluster autofocus system I can’t really see myself wanting to go back to that. It could very well be fine but on paper it doesn’t make me want to throw my money recklessly in Canon’s direction like I did with all the rumours leading up to this point.
One thing could sway me and that would be if Magic Lantern made its way onto the 6D platform. The amount of features you unlock by running this software is simply incredible and, whilst it won't fix the two things that have failed to impress me, it would make the 6D much more palatable. Considering that the team behind it just managed to get their software working on the ever elusive 7D there's a good chance of it happening, and I'll have to see how I feel about the 6D after it does.
Realistically the disappointment I’m feeling is my fault. I broke my rule about avoiding the hype and built up an image of the product that had no basis in reality. When it didn’t match those expectations exactly I was, of course, let down and there’s really nothing Canon could have done to prevent that. Maybe as time goes on the idea of the 6D will grow on me a bit more and then after another red wine filled night you might see another vague tweet that indicates I’ve changed my mind.
Time to restock the wine rack, methinks.
Space sims are one of my favourite game genres. Indeed my go-to title whenever I find myself without an Internet connection is Microsoft's Freelancer, a game released way back in 2003 that still manages to be home to a lively modding community who've extended the game's life considerably. It's been a while since I've seen something of that calibre though, with many recent titles like DarkStar One and Sol Exodus not managing to capture me in the same way. Eve Online got close, though I'm hesitant to make comparisons between a MMORPG and a single player game as the experience is wildly different. On the surface Galaxy on Fire 2 would seem to be in another world yet again, however it really did feel like Freelancer all over again, and that's a good thing.
Galaxy on Fire 2 isn't a PC platform native. Whilst its original versions were released for Java (I've had trouble confirming whether that meant it actually supported keyboard/mouse controls), I was first introduced to it as a title I could use to stress my then shiny new Samsung Galaxy S2. After fiddling around with ChainFire3D for a while and eventually getting the Tegra emulation to work properly I was able to play Galaxy on Fire 2 without a problem and really quite enjoyed it. However, holding an ever warming handset for more than 20 minutes was a tiresome experience, so I never got around to finishing it. You can then imagine my excitement when I saw the title was coming to Steam in all its spacey glory.
You play as space fighter pilot Keith T. Maxwell, out on a routine mission to hunt down pirates before heading back to collect your reward. Unfortunately, during the firefight your hyperdrive is damaged and begins malfunctioning, propelling Keith 35 years forward in time and far across the galaxy, where he finds himself among a newly formed confederation that's severed all contact with the rest of the galaxy. At the same time a new threat, in the form of a wormhole capable species, begins attacking at random, and Keith, of course, gets roped into helping out.
I feel like commenting only on how the graphics look on PC would be doing Galaxy on Fire 2 something of an injustice. The fact that the pictures above are of a pretty similar quality to what you see on a phone says something about how powerful today's smartphones really are, and just how good Galaxy on Fire 2 looks on them. For a PC, sure, they're not that fantastic (the screenshots were taken with every setting at absolute maximum), but compared to other recent titles in the same genre they're actually not that bad, and in light of their origins they're quite impressive.
Just like any true space simulator there are a few core components that make up Galaxy on Fire 2's gameplay. There's the full 3D space combat where you'll battle other enemies in space ships, a commodity trading market (including everyone's favourite mini-game: mining!) and a set of storyline missions that function both as a tutorial in the beginning and as a way to hand you game changing pieces of technology in an organic fashion. This level of detail is undoubtedly why I feel Galaxy on Fire 2 is well above its recent competitors, as the others do away with one or more of these aspects, meaning a good chunk of the gameplay you'd expect from the genre is gone with nothing left to fill the void.
The combat in Galaxy on Fire 2 is pretty decent, although its mobile roots do show in its simplicity. In essence most dogfights are the exact same encounter: you'll get shot at from a distance, find the enemy that was shooting at you, then proceed to chase them as you wear them down. The AI isn't particularly smart and will react in pretty much the same way every time, so the only real increase in challenge comes from either tougher enemies or large numbers of them being thrown at you. Put simply, it's challenging right up until you figure out how to cheat the AI (hint: they only seem able to predict motion in one plane of movement) and after that you're pretty much just burning time until they all go boom.
There were two issues with the combat that I need to mention. The first is the lack of any trajectory compensating reticle, i.e. a little targeting marker that shows you where to aim in order to hit a target moving in front of you. It's pretty much a given in any space sim (and pretty much anything with a flying component these days), so its absence feels more like laziness than something that adds challenge. Indeed it initially forced me to choose a different weapon type in order to make aiming easier (read: rapid fire), which I felt was extremely limiting. The second issue is the motion of enemies when they're close to static obstacles. Instead of flying around them, enemies will instead hit them, stick to them, and then track along them; that is if they don't just fly directly through them first. Collision avoidance in space sims isn't particularly difficult, so I can only hope its absence is deliberate for one reason or another.
The trading section is pretty interesting; reading some of the Galaxy on Fire 2 forums reveals that it has supply and demand curves, so you can create demand in an area and then fill it later on for a huge profit. I personally didn't bother much with it until I got the blueprint for the Khador drive, which requires about $40K worth of materials but retails for about six times that, sending me on a trading rampage to find the cheapest places to buy so I could start churning these things out. I only ended up building two of them in the end, and that was enough to get me a ship (a Groza, if you're interested) that was more than capable of handling pretty much everything thrown at me, despite what the forums said to the contrary.
The missions, both the storyline ones and the (I assume) procedurally generated space lounge ones, are pretty simplistic in nature, with most of them being little more than a variation on the "go here, kill that, repeat" kind of deal. They do mix it up a bit, with some being disable, capture or raid-the-pirate-base type affairs, which helps keep things interesting for a while, but I'd be lying if I said it didn't get repetitive eventually. Indeed I think this is a problem with most space simulators, as the grind of working up to better ships and weapons often sees you repeating the same missions unless you find a shortcut of some description (like selling Khador drives).
The story of Galaxy on Fire 2 certainly isn't bad, but if you're looking for something akin to a space opera like Battlestar Galactica you're going to be left wanting. It's fully voice acted, with the actors doing a good job of making the dialogue lively, but there really isn't much to it apart from the wry humour and half-assed romance plot. It's enough to carry the game along, and I did genuinely want to see Keith and his love interest get together, but there was no lasting emotional impact, which is usually how I judge a story's quality.
After saying all that you'd get the impression that I didn't really enjoy Galaxy on Fire 2, but actually I quite did. Sure, the graphics and gameplay are somewhat simplistic and the combat gets repetitive, but Galaxy on Fire 2 is the closest thing I've had to Freelancer in a long, long time. That's saying a lot, as Freelancer was a game made with (I assume) a much bigger budget and was built for the PC from the ground up rather than coming to it after finding wild success on the mobile market. As a mobile game Galaxy on Fire 2 is an incredible demonstration of what the smartphone platform is capable of. On the PC it's a great experience for those of us who cut our teeth on other space sims, and hopefully Fishlabs will continue to release their titles (and expansion packs) for the platform.
Galaxy on Fire 2 is available on Android, iOS and PC right now for $5.49, $4.99 and $19.99 respectively. Game was played entirely on the PC on Hard difficulty with 7 hours of total play time and 24% of the achievements unlocked.
I usually reserve these kinds of things for a quick tweet or Facebook post but I figured it was time I actually explained the creation of these particular videos. Shown below for your viewing pleasure is yet another Curiosity descent video that makes for some incredible watching:
For starters, the first thing I'll let you in on is that all the sound you hear in this video is 100% fake, as Curiosity does not have a microphone on board. That may seem strange, I mean what camera that can shoot video doesn't have one, but craft have been sent to Mars with microphones before (the Mars Polar Lander had one, although it was tragically lost, while the Phoenix Lander's actually made it) and the recordings made back then weren't particularly interesting. Most of the noise they captured was akin to static and really didn't have much use scientifically, so more recent Mars craft like Curiosity don't carry them, using the payload space for more experiments instead. Additionally, the actual sound would probably be a lot harsher (ever heard a microphone in high wind?) as at this stage Curiosity was rocketing towards Mars at a pretty decent rate.
The original video, shown here, is based on the images from the MARDI camera that's on the bottom of the rover specifically for this purpose. Now I've heard differing reports as to what the actual frame rate was: the original video says it's somewhere on the order of 2 FPS (297 images over 150 seconds), but most are quoted as saying it's 4 FPS. The imager itself is capable of doing up to 10 FPS, but I don't believe it was for this particular video. How then, you might be wondering, do they manage to get something like 20 FPS like the video above does? Well, the original video has most likely been run through something called video interpolation (or inbetweening, as it's usually referred to).
In essence the additional frames are generated from the frames on either side of them, with the algorithms essentially guessing what's going to come next. For the MARDI images this works quite well, as the amount of change between frames is quite low and thus the interpolated frames look quite good. Most of the better versions also involve a lot of hand work to smooth certain things out (like the heat shield's falling motion). If there's a lot of action between frames you tend to get smudging, which you can actually see hints of in the video (look at the landscape shifting about as it gets closer). It works on any kind of video too, and a lot of enterprising YouTubers use it to get that slow motion effect without having to spend untold thousands on high speed video cameras.
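To make the idea concrete, here's a minimal sketch of inbetweening using a hypothetical `interpolate_frames` helper. It generates new frames by simply cross-fading between two captured frames; real interpolation software uses motion estimation (optical flow) rather than a plain blend, so treat this purely as an illustration of the principle that synthetic frames are derived from the real ones around them.

```python
import numpy as np

def interpolate_frames(frame_a, frame_b, n_between):
    """Generate n_between in-between frames by linear cross-fading.

    Proper inbetweening tools estimate motion between the two frames;
    this blend is the simplest possible stand-in for that process.
    """
    frames = []
    for i in range(1, n_between + 1):
        t = i / (n_between + 1)  # interpolation weight, 0 < t < 1
        blended = (1 - t) * frame_a.astype(float) + t * frame_b.astype(float)
        frames.append(blended.astype(frame_a.dtype))
    return frames

# Two tiny dummy greyscale "frames": going from ~2 FPS to ~20 FPS means
# synthesising roughly 9 new frames between each captured pair.
a = np.zeros((4, 4), dtype=np.uint8)
b = np.full((4, 4), 100, dtype=np.uint8)
mid = interpolate_frames(a, b, 9)
print(len(mid), mid[4][0, 0])  # middle frame sits halfway between the two
```

The smudging mentioned above falls out of exactly this kind of scheme: when objects move a lot between captures, blended or motion-guessed pixels end up somewhere neither frame actually put them.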
I find the videos interesting both because of what they are (technical achievements in both their creation and interpolation) and what they represent to us as a species. The response to the Curiosity videos has been nothing short of amazing, and it makes me so happy to see so many people being inspired by it. It's things like this that spur on the next generation to become the kinds of people capable of making things like this, and it never fails to impress me.