I’ve often wondered what the world of experimental indie games looks like to someone who doesn’t have a long history with games. Whilst the average age of a gamer is pushing past 35, there’s still got to be a good chunk of people who didn’t grow up in the golden age of gaming, which means that many of the conventions these games rely upon would simply be unfamiliar to them. Usually that reliance on convention, rather than explicit instruction, is done in aid of getting out of the way of the user’s experience (tutorials are by far the worst immersion breakers, bulldozing through the 4th wall) and it’s something I appreciate, although I recognise how it might be of limited appeal to others. MirrorMoon EP is one such game, relying on your sense of curiosity and exploration to uncover the vast world that it encompasses.
I am slowly learning to travel in space.
Time is a meaningless variable that slips through my fingers.
Stopping requires a lot of energy while moving feels almost like staying still.
Breathing is hard inside this machine.
I need to stay calm.
And with only that to go on you’re dropped on a mysterious moon, one that has a strange relationship with another nearby celestial body.
MirrorMoon EP feels visually similar to other minimalistic exploration games like Kairo, favouring texture-less environments with solid colours covering every surface. As I alluded to earlier I believe that this is done in order to focus you on the gameplay above everything else and indeed, since the control mechanism doesn’t allow you to sightsee particularly well (more on that later), it does feel like MirrorMoon EP is doing its best to get out of your way. As a fan of minimalism this works quite well for me, especially when I find myself agape at some of the scenery which is nothing more than a couple of light shafts arranged in a particular manner.
MirrorMoon EP is an exploration puzzler and the first world you find yourself upon serves as an introduction to the numerous mechanics that are built into it. Your viewpoint is locked, however, so whilst you’re in first person mode you can’t look up or down, nor even to your left or right. Instead you have to move yourself around like your entire body is encased in concrete, fixing your vision firmly forward. This, coupled with the incredibly small size of the worlds, means that your sense of location and direction is severely limited; however, you’re able to unlock a large set of tools that will help you find your way around, and some of them are quite novel.
As the name of MirrorMoon EP alludes, you’ll quickly find out that there’s another “moon” nearby that, once you’ve discovered the right tools, you’re able to interact with. Initially it doesn’t make a whole lot of sense, as the first tool only allows you to rotate it around to see different features, however it soon becomes apparent that the moon you’re manipulating is in fact a duplicate of your own. Using the tools you’ve discovered by randomly bumbling about, you can then use that duplicate to guide yourself, allowing you to unlock more and more secrets. Eventually you’ll solve the puzzle and be treated to some more vague on screen text, but after that you’re given access to the real game and it’s quite something.
This is your console for exploring the vast space that is contained within MirrorMoon EP. Like pretty much everything else in the game, details on how it operates are scant, but after clicking around you’ll eventually figure out what everything does. The screen with numerous dots all over it is a map of all the other moons you can visit and each of them contains a unique puzzle for you to solve which, once completed, will allow you access to a glowing orb. Should you be the first person to find that orb then you’re granted a special privilege.
You get to name that moon.
Now with the massive number of planets available I get the feeling that they’re all procedurally generated, so some of them are going to be amazing and others are going to be quite dull (I believe I visited one that was completely dark with the orb right in front of me). The names are also persistent and you’ll be able to tell if someone’s named a planet because the name will be something other than THX/89 or something of a similar format. I managed to haphazardly visit another person’s planet without realising it but soon after found myself seeking out all the planets I could in order to solve them before anyone else did.
Whilst it’s a pretty novel and interesting mechanic, it unfortunately gets boring quite quickly as, whilst the planets are usually different in some way, a lot of the time it’s just a jumble of various mechanics mashed together procedurally. Once you’ve seen a dozen or so planets you’ve likely seen them all, and so what initially seems like something with infinite replay value quickly fades into repetition. I do like the idea though and for some people I’m sure this would be infinitely interesting (kind of like Kerbal Space Program in a way) but for me I just couldn’t be bothered after a while.
Now I’ll have to admit some fault here as whilst I managed to complete the first “side” easily (and got the achievement to that effect) I haven’t yet been able to figure out how to finish side B in order to get a complete understanding of the story. It does seem quite interesting, especially with the references to the “anomaly” and how the machine interacts with space and time, however the intentional vagueness of both the game and the story have curtailed my efforts to dig up any substantial meaning from it. I could just Google it, like I did for Kairo, and I probably will if I don’t find out anything more soon.
MirrorMoon EP is an interesting game, one which is heavily shaped by your own experiences with it. The unapologetic, minimalistic nature of it will definitely be a turn off for some, however the heavy focus on the game play to the exclusion of nearly everything else is something that MirrorMoon EP pulls off exceptionally. Unfortunately I feel like its replay value is somewhat limited due to its procedural nature, and the intentional vagueness of both story and gameplay may have led to me giving up on it prematurely. Still MirrorMoon EP stands out as yet another shining example of the indie exploration/puzzler genre and is definitely worth looking into.
MirrorMoon EP is available on Steam and OUYA right now for $9.99. Total game time was around 2 hours with 40% of the achievements unlocked.
The OUYA and I have a complicated relationship. When I first saw it I loved the idea of a console that was free from any restrictions, one that would inevitably become a playground for the independent developers that I had come to love so much. However the reality fell short of my (and many others’) lofty expectations, but deep down I still really wanted it to take off. Whilst I’m not cheering for the downfall of the three kings of consoles, having a viable alternative for developers who can’t afford to develop for traditional platforms is something that the industry needs and, before you ask: no, smartphones don’t count (at least not yet).
OUYA’s latest move has done nothing to improve this situation, however.
OUYA recently announced the Free the Games fund, an initiative whereby a game that’s funded through Kickstarter can have its contributions doubled, to the tune of $50,000. On the surface that sounds like a great thing, as that kind of cash is kind of unheard of for many independent developers and studios, however this isn’t exactly free money. First off your game must be an OUYA exclusive for the first 6 months of its life, after which you’re free to do whatever you want with it. Secondly your Kickstarter goal must be at least $50,000 and you have to reach it to be eligible to get your funds doubled. These two aspects combined have seen the Free the Games fund met with some harsh criticism and, frankly, I’m inclined to agree with the critics.
For starters being an OUYA exclusive drastically limits your market potential as even the most successful game on that platform has only managed to sell around 2,000 copies. Considering that cross platform development is now easier than ever thanks to tools like Unity indie developers are quite capable of releasing for multiple platforms even with the limited resources that they have to work with. Thus it makes sense to release on as many platforms as is feasible to maximise your market exposure unless you’ve got a compelling reason to go exclusive. $50,000 might be compelling enough for some, especially if that will allow you to develop a cross platform release during the exclusivity period, but the second caveat on that funding is what makes that particular scenario unlikely.
The average game project on Kickstarter gets nowhere near the amount of funding that OUYA is asking for in order to receive the grant. According to Kickstarter’s own numbers the average funding level of a games project is on the order of $22,000, and that includes outliers which have nabbed millions of dollars worth of funding. In truth the average indie studio would probably be lucky to get anywhere near that average, with 63% of them raising less than $20,000. OUYA’s logic is likely that any game below that amount would be too risky for them to invest in, but it’s far more likely that they’re pricing out the vast majority of the indies they were hoping to attract, and those who meet the requirements will likely not want to trade exclusivity for the additional funding.
In theory I think it’s a great idea however its implementation is sorely lacking. I think a lot more people would be on their side if they reduced the amount of funding required by a factor of ten and changed the exclusivity deal to guaranteeing that the game would be available on the OUYA platform. That way the developers aren’t constrained to the OUYA platform, allowing them to develop the game however they want, and the OUYA would get an order of magnitude more titles developed for the platform. Of course that also means the risk of getting shovelware increases somewhat, however after my decidedly average experience with OUYA exclusive titles I can’t say that they’d be diluting the pool too much.
I’m still hoping that OUYA manages to turn this around as their core idea of an unchained console is still something I think should be applauded, but the realisation of a viable, alternative console platform seems to keep drifting further away. Their latest move has only served to alienate much of the community it set out to serve; however, with a few tweaks I think it could be quite workable, allowing OUYA to achieve its goals whilst furthering the indie game dev scene. It doesn’t look like they’re intent on doing that, however, so this will likely end up being yet another mark against them.
I haven’t been an iPhone user for many years now, my iPhone 3GS sitting disused in the drawer beside me ever since it was replaced, mostly because the alternatives presented by other companies have, in my opinion, outclassed them for a long time. This is not to say that I think everyone else should replace their phone with an Xperia Z, that particular phone is definitely not for everyone, as I realise that the iPhone fills a need for many people. Indeed it’s the phone I usually recommend to my less technically inclined friends and family members because I know that they have a support system tailored towards them (meaning they’ll bug me less). So whilst today’s announcement of the new models won’t have me opening up my wallet anytime soon it is something I feel I need to be aware of, if only for the small thrill I get for being critical of an Apple product.
So as many had speculated Apple announced two new iPhones today: the iPhone 5C, which is essentially the entry level model, and the iPhone 5S, which is the top of the line one with all the latest and greatest features. The most interesting difference between the two is the radical divergence in design, with the 5C looking more like a kid’s toy with its pastel-style colours and the 5S looking distinctly more adult with its muted tones of silver, grey and gold. As expected the 5C is the cheaper of the two with the base model starting from AUD$739 and the 5S from AUD$869, with the prices ramping up steadily depending on how much storage you want.
The 5C is interesting because everyone was expecting a budget iPhone to come out and Apple’s response is clearly not what most people had in mind. Sure it’s the cheapest model of the lot (bar the iPhone 4S) but should you want to upgrade the storage you’re already paying the same amount as the entry level 5S. The differences in features are also pretty minimal, the exceptions being an A6 vs A7 processor, slightly bulkier dimensions, a newfangled fingerprint home button and a slightly better camera. Of course those slight differences are usually enough to push any potential iPhone buyer to the higher end model so the question then becomes: who is the 5C marketed towards?
It’s certainly not at the low end of the market, as most people were expecting, even though it looks the part with its all plastic finish (which we haven’t seen since I last used an iPhone). It might appeal to those who like those particular colours although realistically I can’t see that being much of a draw card considering you can buy any colour case for $10 these days. Indeed even when you factor in the typical on contract price for a new iPhone (~$200) the difference between an entry level 5C and 5S is so small that most would likely dole out the extra cash just to have the better version, especially considering how visually different they are.
Another thing running against the 5C is that the 5S shares the same dimensions as the original iPhone 5, allowing you to use all your old cases and accessories with it. I know this won’t be a dealbreaker for many but it seems obvious that the 5S is aimed at people coming from the iPhone 5 whereas the 5C doesn’t appear to have any particular market in mind that necessitates its differences. If this was Apple’s attempt to try and claw back some of the market that Android has been happily dominating then I can’t help but feel it’s completely misguided. Then again I lost my desire for Apple products years ago so I might be missing out on what the appeal of a gimped, not-really-budget Apple handset might be.
The iPhone 5S does look like a decent phone, sporting most of the features you’d expect from a current generation smartphone. NFC is still missing which, if I’m honest, isn’t as big of a deal as I used to make it out to be as I’ve now got a NFC phone and I can’t use it for jack, so I don’t count it as a downer anymore. As always though the price of a comparable Android handset to what you get from Apple is a big sore point, with the top of the line model topping out at an incredible AUD$1129. I know Apple is a premium brand but when the price difference between the high and low end is $260 and the only difference is storage you really have to ask if it’s worth it, especially when comparable Android phones will have the same level of features and will be cheaper (my 16GB Xperia Z was $768 for reference).
I will be really interested to see how the 5C pans out as many are billing it as the “budget” iPhone that everyone was after when in truth it’s anything but that. The 5S is your typical product refresh cycle from Apple, bringing in a few new cool things but nothing particularly revolutionary. Of course you should consider everything I’ve said through the eyes of a long time Android user and lover as whilst I’ve owned an iPhone before it’s been so long between drinks that I can barely remember the experience anymore. Still I’m sure at least the 5S will do well in the marketplace as all the flagship Apple phones do.
This blog has had a pretty good run as far as data retention goes. I’ve been through probably a dozen different servers over its life and every time I’ve managed to maintain continuity of pretty much everything. It’s not because I kept rigorous backups or anything like that, no, I was just good at making sure I had all my data moved over and working before I deleted the old one. Sure there’s various bits of data scattered among my hard drives but none of it is readily usable, so should the unthinkable happen I’d be up the proverbial creek without a paddle.
And, of course, late on Saturday night, the unthinkable happened.
Like a good little admin I thought it would be good to do a cleanup of the directory before I embarked on this as I was going to have to move the backup file to my desktop, no small feat considering it was some 1.9GB in size and I’m on Australian Internet (thanks Abbott!). I had a previous backup file there which I moved to my /var/www directory to make sure I could download it (I could) and so I looked to cleaning everything else up. I’ve had a couple of legacy directories in there for a while and so I decided to remove them. This would have been fine except I fat-fingered the command and typed rm -r, which happily went about its business deleting the entire folder contents. The next ls I ran sent me into a fit of rage as I struggled to figure out what to do next.
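In hindsight the cheapest insurance against that kind of fat-fingered rm -r is to tar the directory up immediately before deleting it. A minimal sketch of the habit I should have had (the paths here are mock stand-ins for the real web directory):

```shell
# Mock stand-in for the directory being cleaned up
workdir=$(mktemp -d)
mkdir -p "$workdir/legacy"
echo "old site data" > "$workdir/legacy/index.html"

# 1. Tarball first: a few seconds of tar beats hours of extundelete
tar -czf "$workdir/legacy.tar.gz" -C "$workdir" legacy

# 2. Only then delete (aliasing rm to 'rm -I' in ~/.bashrc also helps,
#    as it prompts once before any recursive or bulk removal)
rm -r "$workdir/legacy"
```

A couple of seconds of disk churn and the "entire folder contents" problem becomes an untar away from fixed.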
If this was a Windows box it would’ve been a minor inconvenience as I’d just fire up Recuva (if CTRL + Z didn’t work) and get all the files restored; however, in Linux restoring deleted files seems to be a right pain in the ass. Try as I might extundelete couldn’t restore squat and every other application looked like it required a PhD to operate. The other option was to contact my VPS provider’s support to see if they could help out, however since I’m not paying a terrible amount for the service I doubt it would have been very expedient, nor would I have expected them to be able to recover anything.
In desperation I reached out to my old VPS provider to see if they still had a copy of my virtual machine. The service had only been cancelled a week ago and I know a lot of them keep copies for a little while just in case something like this happens, mostly because it’s a good source of revenue (I would’ve gladly paid $200 for it). However this morning the email came from them stating unequivocally that the files are gone and there’s no way to get them back, so I was left with very few options to get everything working again.
Thankfully I still had the database which contains much of the configuration information required to get this site back up and running, so all that was required was to get the base WordPress install working and then reinstall all the necessary plugins. It was during this exercise that I stumbled across the potential attack vector that let whoever it was ruin my site in the first place: my permissions were all kinds of fucked, essentially allowing open slather to anyone who wanted it. Whilst I’ve since struggled to get everything working like it was before, my permissions are now far better than they were and should hopefully keep it from happening again.
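For reference, the scheme I ended up near (and which most WordPress hardening guides suggest) is directories at 755, files at 644 and wp-config.php locked down further since it holds the database credentials. A sketch against a mock install directory standing in for the real web root:

```shell
# Mock WordPress root standing in for the real /var/www install
wp_root=$(mktemp -d)
mkdir -p "$wp_root/wp-content/uploads"
touch "$wp_root/index.php" "$wp_root/wp-config.php"

find "$wp_root" -type d -exec chmod 755 {} +  # dirs: owner rwx, world rx
find "$wp_root" -type f -exec chmod 644 {} +  # files: owner rw, world r
chmod 640 "$wp_root/wp-config.php"            # DB credentials stay private
```

The two find commands are the important bit: a blanket chmod -R 755 leaves every file executable, which is exactly the sort of open slather that invites trouble.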
As for the rest of the content I have about half of the images I’ve uploaded over the past 5 years in a source folder and, if I was so inclined, could reupload them. However I’ve decided to leave that for the moment as the free CDN that WordPress gives you as part of Jetpack has most of those images in it anyway which is why everything on the front page is working as it should. I may end up doing it anyway just as an exercise to flex my PowerShell skills but it’s no longer a critical issue.
So what has this whole experience taught me? Well mostly that I should practice what I preach as if a customer came running to me in this situation I’d have little sympathy for them and would likely spend maybe 20% of the total effort I’ve spent on this site to try and restore theirs. The unintentional purge has been somewhat good as I’ve dropped many of the plugins I no longer used which has made the site substantially leaner and I’ve moved from having my pants around my ankles, begging for attackers to take advantage of me, to at least holding them around my waist. I’ll also be implementing some kind of rudimentary backup solution so that if this happens again I at least have a point in time to restore to as this whole experience has been far too stressful for my liking and I’d rather not repeat it again.
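That rudimentary backup solution doesn’t need to be anything fancier than a nightly cron job that tars the web root and dumps the database with a dated filename. A sketch, with mock temp directories in place of the real paths and the database dump line left as an illustrative comment:

```shell
backup_dir=$(mktemp -d)   # would be something like /backups in reality
site_root=$(mktemp -d)    # stand-in for the WordPress web root
echo "<?php // wp ?>" > "$site_root/index.php"

# Dated tarball of the site files
stamp=$(date +%Y-%m-%d)
tar -czf "$backup_dir/site-$stamp.tar.gz" -C "$site_root" .
# mysqldump --single-transaction wordpress | gzip > "$backup_dir/db-$stamp.sql.gz"

# Keep two weeks of restore points, then prune so the disk doesn't fill up
find "$backup_dir" -name 'site-*.tar.gz' -mtime +14 -delete
```

Dropped into /etc/cron.daily (or a crontab entry), that’s enough to guarantee there’s always a point in time to restore to.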
Universal praise for a game is always something that will draw skepticism from me as it’s rare that a game will please everyone that plays it. Indeed this is the reason why I try to avoid the hype for any game now as I’ve had far too many receive wide critical acclaim (Bayonetta being the greatest example of this) only to find out that they just didn’t merit the high scores that were granted to them. Universal derision on the other hand is far more reliable with games that get hit with bad review after bad review usually being quite deserving of the title. Thus when I read about so many people taking Ride to Hell: Retribution to task I couldn’t help but witness this trainwreck for myself.
The year is 1969 and Jake Conway is a Vietnam veteran, returning home for the first time. He’s part of a biker gang, one that still has a lot of rivals, but he’s not interested in that, he just wants to live a quiet life with his brother. Past rivalries quickly catch up with him however and his brother is brutally murdered in front of him and Jake is mortally wounded. He survives, somehow, and swears revenge upon those who did this to him. So begins your ride to retribution, one that’s filled with poor game design choices and ludicrously bad implementation.
If Ride to Hell was an iOS/Android game I’d give it a pass for graphics as they’re at the level I’ve come to expect from a mobile platform. However this game saw a release on both major consoles and PC which means they knew they had a decent amount of grunt to work with and simply didn’t make use of it. Now this is usually done for a reason, like when you’re expecting a lot of action on screen and don’t want the FPS to drop, but Ride to Hell has none of that and so the only conclusion you can come up with is that either the developers ran out of time or they were simply not capable of producing something that was better. I’m tending towards the former however as the rest of the game smacks of something that was rushed to release.
The models and animations are either weird or just plain terrible. For starters, look at the hands above: for Jake on the left they look normal-ish, but on his brother they’re freakishly oversized. Not only that, his jacket is fully rigid, hovering a good half a foot off his back at all times. It gets worse when every character flaps their mouth in a wide gape every time they talk, which just draws attention to the stiff animation of nearly everything else within Ride to Hell. Indeed you get the feeling that some of this stuff was just placeholder animations whilst they worked on getting better ones in, but they just never got the time to do so.
This rushed feeling permeates throughout Ride to Hell as nearly every aspect of the game feels like there was so much more planned for it but it never saw the light of day. Even in its decidedly half-assed state the game still takes up a whopping 10GB worth of space which, when compared to something like Tomb Raider which is about the same size, shows that their ambitions far exceeded their grasp. All this is likely a product of its tumultuous origin story which has seen this turd of a game be in development for 5 years prior to its release.
The port to PC hasn’t done it any favours either as they’ve literally just made sure it works on the platform and then done nothing to improve the experience. Like many gamers I have a native resolution for my monitor and if I don’t play games at that res then they tend to look like crap. Well Ride to Hell doesn’t even have an option to change the resolution, nor any other graphics options that have been standard for years. Worse still all the menus and interfaces show their console-first nature with the mouse being unusable in any of them. They also break several gaming conventions for typical bindings for command keys, a sin few can get away with.
Combat in Ride to Hell is a mixture of third person cover based shooting and “freeflow” beat ’em up combat. It’s obvious that different sections of the game were designed for different types of combat, however you’re free to choose whatever method you see fit. So this means if the developers wanted you to melee the next section and you whip out your gun it’s quite likely you can take out the whole room as they run blindly at you. Similarly if an enemy was programmed to use his guns then you going in fists first usually means they won’t block at all and you can take them out rather quickly.
The AIs also seem to have no idea about line of sight as there were many times I could hear gunshots but not see any bullets flying, nor the enemy that was shooting them. Eventually I’d find one of them hiding behind a pillar or something similar, randomly firing rounds in my direction but hitting the giant obstacle in their way. You could also do the old hide-just-around-the-corner trick where they can see you, but not hit you, and then just line up the perfect head shot to take them out in one go. Even the melee guys, who in most games will charge directly at you in order to get you to engage, just stand there doing nothing if you’re around a corner. Needless to say the AI needs a lot of work if it even wants to match 2008 standards.
It’s obvious that Ride to Hell: Retribution was designed to be some kind of open world game, ala Grand Theft Auto. The first indication I got of this was a lot of the dialogue made reference to locations with directions, as if you were going to be taking yourself there. Indeed the amount of assets used between sections for the various races/quick time event combat encounters would lead you to believe that it’s one big continuous world. It’s pretty much confirmed when you get given your home base which allows you to choose missions, buy upgrades and customize your ride which are all features you’d expect in a sandbox style game.
The amount of effort put into these side features shows that the ambitions of this game were much higher than what they managed to achieve. The bike customization for instance is pretty detailed with nearly every part of the bike customizable. However the second you get to it 90% of the parts are unlocked with only a few requiring you to do something to be able to use them. Not only does this remove much of the incentive to keep on playing it also signals that they likely had many more collectibles/achievements planned that would unlock additional bike customizations.
The skill/weapon upgrade system is incredibly basic, to the point where it looks like it was slapped on at the last minute to give the player some sense of progression. However since all the weapons are available to you it doesn’t make sense to buy anything but the best in its category and after the first mission you have enough cash to buy the best one from at least one of the weapon classes. The melee combat skills are simply not worth your time as they don’t fundamentally change the way combat flows nor do they make it particularly easier.
Ride to Hell: Retribution is terrible, suffering from development woes that should have seen it dead and buried, not released to the public in its god-awful state. Every aspect of it is unfinished and the band-aids put on top to try and patch it up only make it worse, highlighting every undeveloped aspect. There’s really nothing redeemable about Ride to Hell at all except for maybe it serving as yet another textbook case of why some games should just be allowed to die rather than be released to the public. I honestly feel for the devs as it looks like this game was meant for so much more but its development story has instead resulted in this turd whose only appeal is how terrible it is.
Ride to Hell: Retribution is available on PC, Xbox360 and PlayStation3 right now for $14, $68 and $68 respectively. Game was played on the PC with around 2 hours of total play time and 21% of the achievements unlocked.
If you’re old enough to remember a time when mobile phones weren’t commonplace you also likely remember the time when Nokia was the brand to have, much like Apple is today. I myself owned quite a few of them with my very first phone ever being the (then) ridiculously small Nokia 8210. I soon gravitated towards other, more shiny devices as my disposable income allowed but I did find myself in possession of an N95 because, at the time, it was probably one of the best handsets around for techno-enthusiasts like myself. However it’s hard to deny that they’ve struggled to compete in today’s smartphone market and, unfortunately, their previous domination in the feature phone market has also slipped away from them.
Their saving grace was meant to come from partnering with Microsoft and indeed I attested to as much at the time. Casting my mind back to when I wrote that post I was actually of the mind that Nokia was going to be the driving force for Microsoft however in retrospect it seems the partnership was done in the hopes that both of their flagging attempts in the smartphone market could be combined into one, potentially viable, product. Whilst I’ve praised the design and quality of Windows Phone based Nokias in the past it’s clear that the amalgamation of 2 small players hasn’t resulted in a viable strategy to accumulate a decent amount of market share.
You can then imagine my surprise when Microsoft up and bought Nokia’s Devices and Services business as it doesn’t appear to be a great move for them.
So Nokia as a company isn’t going anywhere as they still retain control of a couple of key businesses (Solutions and Networks, HERE/Navteq and Advanced Technologies, which I’ll talk about in a bit), however they’re not going to be making phones anymore as that entire capability has been transferred to Microsoft. That’s got a decent amount of value in itself, mostly in the manufacturing and supply chains, and Microsoft’s numbers will swell by 32,000 when the deal is finished. However whether that’s going to result in any large benefits for Microsoft is debatable, as they arguably got most of this in their 2011 strategic partnership; the difference is that they can now do it all without the Nokia branding on the final product.
If this type of deal is sounding familiar then you’re probably remembering the nearly identical acquisition that Google made of Motorola back in 2011. Google’s reasons and subsequent use of the company were quite different however and, strangely enough, they have yet to use them to make one of their Nexus phones. Probably the biggest difference, and this is key to why this deal is great for Nokia and terrible for Microsoft, is the fact that Google got all of Motorola’s patents; Microsoft hasn’t got squat.
As part of the merger a new section is being created in Nokia called Advanced Technologies which, as far as I can tell, is going to be the repository for all of Nokia’s technology patents. Microsoft has been granted a 10 year license to all of these, and when that’s expired they’ll get a perpetual one, however Nokia gets to keep ownership of all of them and the license they gave Microsoft is non-exclusive. So since Nokia is really no longer a phone company they’re now free to start litigating against anyone they choose without much fear of counter-suits harming any of their products. Indeed they’ve stated that the patent suits will likely continue post acquisition signalling that Nokia is likely going to look a lot more like a patent troll than a technology company in the near future.
Meanwhile Microsoft has been left with a flagging handset business, one that’s failed to reach the kind of growth that would be required to make it sustainable long term. Now there’s something to be said about Microsoft being able to release Lumia branded handsets (they get the branding in this deal) but honestly their other forays into the consumer electronics space haven’t gone so well, so I’m not sure what they’re going to accomplish here. They’ve already got the capability and distribution channels to get products out there (go into any PC store and you’ll find Microsoft branded peripherals there, guaranteed) so whilst it might be nice to get Nokia’s version of that all built and ready I’m sure they could have built one themselves for a similar amount of cash. Of course the Lumia tablet might be able to change consumers’ minds on that one but most of the user complaints around Windows RT weren’t about the hardware (as evidenced in my review).
In all honesty I have no idea why Microsoft would think this would be a good move, let alone a move that would let them do anything more than they’re currently doing. If they had acquired Nokia’s vast portfolio of patents in the process I’d be singing a different tune as Microsoft has shown how good they are at wringing license fees out of people (so much so that the revenue they get from Android licensing exceeds that of their Windows Phone division). However that hasn’t happened and instead we’ve got Nokia lining up to become a patent troll of epic proportions and Microsoft left with a $7 billion patent licensing deal that comes with its own failing handset business. I’m not alone in this sentiment either as Microsoft’s shares dropped 5% on this announcement, which isn’t great news for this deal.
I really want to know where they’re going with this because I can’t for the life of me figure it out.
One of the biggest arguments I’ve heard against developing anything for the Android platform is the problem of fragmentation. Now it’s no secret that Android is the promiscuous smartphone operating system, letting anyone and everyone have their way with it, but that has led to an ecosystem made up of numerous devices that all have varying capabilities. Worse still the features of the Android OS itself aren’t very standard either, with only a minority of users running the latest software at any point in time and no single version ever making a true majority. Google has been doing a lot to combat this but unfortunately the unified nature of the iOS platform is hard to deny, especially when you look at the raw numbers from Google themselves.
Android developers’ lives have been made somewhat easier by the fact that they can declare lists of required features and lock out devices that don’t have them, however that also limits your potential market so many developers aren’t too stringent with their requirements. Indeed those requirements can be circumvented on the user’s side as well, allowing devices you explicitly wanted to lock out to access your application anyway (ala ChainFire3D being used to emulate NVIDIA Tegra devices). This might not be an issue for most of the basic apps out there but for things like games and applications that require certain performance characteristics it can be a real headache for developers to work with, let alone the sub-par user experience that comes as a result of it.
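For those unfamiliar with how this works in practice, those feature requirements are declared in an app’s AndroidManifest.xml via `<uses-feature>` elements. A minimal sketch (the feature names are standard Android ones; the package name is a made-up placeholder):

```xml
<!-- Fragment of an AndroidManifest.xml declaring hardware requirements.
     The Play store filters out devices lacking any feature marked
     required="true", which is the market-limiting trade-off described above. -->
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
          package="com.example.mygame"> <!-- placeholder package name -->

    <!-- Hard requirement: devices without multitouch never see the app listed -->
    <uses-feature android:name="android.hardware.touchscreen.multitouch"
                  android:required="true" />

    <!-- Soft requirement: the app prefers a camera but must degrade
         gracefully at runtime if one isn't present -->
    <uses-feature android:name="android.hardware.camera"
                  android:required="false" />

</manifest>
```

Marking a feature as `required="false"` keeps the potential market wider at the cost of having to detect and handle the missing hardware in code, which is exactly the trade-off that leads many developers to be lax with their requirements.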
This isn’t made any easier by handset manufacturers and telecommunications providers dragging their feet every time an upgrade comes along. Even though I’ve always bought unlocked and unbranded phones the time between Google releasing an update and me receiving it has been on the order of months, sometimes coming so late that I’ve upgraded to a new phone before it arrived. This is why the Nexus range of phones directly from Google is so appealing: you’re guaranteed those updates immediately and without any of the cruft that your manufacturer of choice might cram in. Of course then there was that whole issue with supply but that’s another story.
For what it’s worth Google does seem to be aware of this and has tried to make inroads towards solving it in the past. None of these attempts have been particularly successful but their latest one, called Google Play Services, might just be the first step in the right direction to eliminating at least one aspect of Android fragmentation. Essentially instead of most new features coming through Android updates like they have in the past Google will instead deliver them via the new service. It’s done completely outside the Play store, heck it even has its own update mechanism (which isn’t visible to the end user), and is essentially Google’s solution to eliminate the feet dragging that carriers and handset manufacturers are renowned for.
On the surface it sounds pretty great as nearly every Android device is capable of running this, which means that many features that just aren’t available on older versions can be delivered via Google Play Services. This will also help developers immensely as they’ll be able to code against those APIs knowing that they’ll be widely available. I’m a little worried about its clandestine nature however, with its silent, non-interactive updating process seeming like a potential attack vector, but smarter people than me are working on it so I’ll hold off on bashing them until there’s a proven exploit.
Of course the one fragmentation problem this doesn’t solve is the one that comes from the varying hardware that the Android operating system runs on. Feature levels, performance characteristics and even screen resolution and aspect ratio are things that can’t be solved in software and will still pose a challenge to developers looking to create a consistent experience. It’s the lesser of the two problems, granted, but this is the price that Android has to pay for its wide market domination. Short of pulling a Microsoft and imposing design restrictions on manufacturers I don’t think there’s much that Google can do about this and, honestly, I don’t think they have any intentions to.
How this will translate into the real world remains to be seen however as whilst the idea is good the implementation will determine just how far this goes to solving Android’s fragmentation issue. Personally I think it will work well although not nearly as well as controlling the entire ecosystem, but that freedom is exactly what allowed Android to get to where it is today. Google isn’t showing any signs of losing that crown yet either so this really is all about improving the end user experience.
My first overseas trip is probably the best example I can give of my over-packer mentality. It was the middle of 2001, I’d only just become comfortable with the college lifestyle (for the Americans I’m referring to the 2 years prior to university) and my parents agreed to send me on a school trip to Japan, something I was incredibly eager to do. Of course this being the first time they’d sent one of their children overseas my parents ensured I’d have everything I’d need whilst over there, and I really do mean everything. I managed to lug 2 giant sports bags around with me for the entire trip which contained nearly every article of clothing I owned. Whilst it was nice to not have to do laundry I think the ridicule I received for my rather ludicrous amount of baggage was well deserved.
The habit didn’t die there unfortunately, managing to cement itself as something that I’d do instinctively throughout all my travels over the years. Indeed this became something of a running joke whenever I’d go to visit friends as they’d often wonder why I was waiting for checked baggage, only to break into hysterics when they sighted my brimming luggage trundling past on the carousel. When travelling overseas it was a little more defensible, although my recent visit to the USA did have me questioning why I needed to bring along as much as I did. This, combined with my casual interest in minimization (I love things like those tiny, fully featured houses people build), has led me to shed much of the cruft that I used to lug with me and I’m quite happy with the results.
I’ll admit that the catalyst for it was my Sydney trip last week where I was only going to be staying a single night. It really didn’t make sense to check in baggage for that, even if my backpack felt a little swollen with clothes plus laptop, and the experience of getting off the plane and being able to head straight for a cab was something I wanted to repeat. Thus I set about seeking out the biggest carry-on I could find and was surprised at just how much I could get away with.
I settled on an Antler Cyberlite International Cabin Suitcase and to my surprise it’s plenty big enough for me to fit up to a week’s worth of clothes and other supplies, more than enough for any business trip I’ll find myself on. I was a little worried that it might be too big but I had no complaints from the cabin crew this morning and indeed the amount of space in the overhead locker seemed to dwarf my supposedly huge carry-on. In theory it’d be enough for pretty much anything then, although I don’t think I’ve shaken the over-packer bug entirely just yet, but at least I won’t be lugging around a massive bag for short domestic trips anymore.
I’m sure there are those out there that can take it further than I have, indeed many of the people who scoffed at my over-packing previously would routinely show up in Canberra carrying nothing more than a single backpack, however this feels like a happy medium. It still allows me to have a little fat in my packing (I’ve brought along an extra day’s worth of clothes this round) whilst also giving me the advantages that I didn’t know I was missing out on before. Going any further would be quickly met with diminishing returns and I’m sure I’d end up sweating over what I forgot.
But I’m weird like that 😉