With indie games there’s usually a tradeoff to be made between game mechanics and storytelling. This is not to say you can’t have both, and indeed many good indie games have demonstrated exactly that, more that a game with a limited amount of resources needs to pick one to focus on lest both aspects suffer from a lack of attention. I tend to prefer those that focus on story first, mostly because developing new game mechanics is fraught with risk, but I’ve begun to enjoy those that bring new and innovative mechanics to their chosen genre, and The Swapper does just that.
The Swapper takes place in the furthest reaches of space aboard the space station Theseus. It begins oddly for you, the unnamed protagonist, seemingly trapped inside an escape pod that’s then launched to the surface of the planet you were orbiting. Thankfully you land at one of the ground bases that were established earlier and are able to make your way down to one of the teleporters to get yourself back up there. Before you do that, however, you discover an incredible device, one capable of producing clones of yourself and, should you desire, swapping your consciousness into them. Why such a device exists and why you were sent to the surface still remain a mystery to you, but upon returning to Theseus it becomes clear that there’s a lot more to this story than you might first think.
In terms of graphics and art style The Swapper is unique in that many of the assets and textures are actually hand crafted from clay rather than created digitally. This gives everything a rather strange level of stylized realism that’s not disconcerting but definitely gives it a unique vibe. Some of the areas have a distinct Trine feel to them (most notably the garden areas) although the whimsy has been replaced with a sense of foreboding loneliness, even after I had made contact with others. The soundtrack is also quite amazing, with sounds ranging from a quiet space ambiance to sorrowful tracks that seem to rise and fall at the perfect moments. They also get bonus points for muting all the sounds when you’re in the vacuum of space, something far too many game developers forget to do.
The Swapper is a puzzler at heart, one that introduces a unique (as far as I can tell) mechanic to make some decidedly challenging puzzles. They all center around “The Swapper”, a device that allows you to create up to 4 clones of yourself and, should they be in line of sight, swap to them. The clones mirror your movements perfectly unless they’re blocked by terrain or other obstacles, which means that much of your time will be spent figuring out where to place them so that when you move towards your final goal everything falls into place. That’s how it starts initially at least, and it doesn’t take long for the game to start throwing more obstacles in your way that make the challenges so much more intriguing.
The first spanner in the works is the addition of blue and red lights. Blue lights block your ability to create clones in the region that they illuminate, but you can still swap to clones there should you be in line of sight. Red light stops you from swapping with clones you’ve created, but you can still create them. Initially they’re just put in locations that mean you’ll have to maneuver yourself around in order to do the swaps you need, but eventually they’re positioned so that even just getting all your clones in the right spot requires some lateral thinking. They even start overlapping the lights, creating lovely purple areas where you can’t do anything, which makes for some really annoying puzzles.
Things start to get all sorts of wacky when they introduce gravity reversal into the mix. This comes in the form of little floor panels that allow you to swap the direction of gravity for your character or the clone that’s standing on them. Since it’s not global this means you can have clones in differing gravity orientations, and should they run over another gravity-changing panel it will swap for them as well. When I first encountered this I didn’t think much of it, the earlier puzzles certainly weren’t very taxing, but the last few were really quite challenging. Of course once you get into the right mindset they start to become a bit easier as you recognise the tricks they want you to use, but the first one of them had me second guessing myself for a good 20 minutes.
Facepalm Games also gets a lot of brownie points for including a map and teleport system that allows you to traverse the ship quickly. This is good because The Swapper is one of those games where progress is determined by how many puzzles you’ve solved and, should you miss a few, you could find yourself needing to backtrack to find them so you can get enough orbs to continue. They also cheerfully highlight any areas with unsolved puzzles, something which is an absolute godsend that I hope I see replicated in other indie puzzlers. Needless to say they’ve put a lot of effort into taking out the crap in order to focus better on the story and mechanics, which I’m sure we can all appreciate.
Surprisingly there were no game breaking bugs to report, nor any minor quibbles that I took issue with. Due to the style of game there is a bit of room for emergent game play to occur, which can lead to puzzle solutions that probably weren’t intended and that could lead to some frustration for players. I know that when I found a couple of ways of doing things my thought process was locked into solving them in that way, and the actual solution didn’t rely on those quirks at all. This becomes less of an issue as the game progresses as the later puzzles really only have one way of doing them, but should you find yourself caught in a thought loop, remembering this fact could save you a lot of time.
The story of The Swapper is incredibly engaging being told through snippets that you find on consoles, via communicating with the Watchers and through the crazed ramblings of the only other person you come into contact with. Usually this kind of fragmented telling gets to me but this had a definite structure to it, guiding you through the various pieces of information to form a story that’s really quite incredible. The fact that it all ties together neatly at the end also deserves praise as it would be far too easy to leave this open ended, begging for a sequel.
The Swapper is an intriguingly curious game, one that combines a unique puzzle mechanic with an engrossing story to form an experience that is truly like no other. Everything about it, from the art to the soundtrack to the gameplay, has its own distinct feel, and the fact that they all merge together into one seamless package makes the result so much greater than the sum of its individual components. It’s one of those games that really demands to be played rather than explained, even if you’re not a fan of the puzzler/indie genre.
The Swapper is available right now on PC for $14.99. Total game time was 4.5 hours with 0% of the achievements unlocked.
The public cloud is a great solution to a wide selection of problems, however there are times when its use is simply not appropriate. This is typical of organisations that have specific requirements around how their data is handled, usually due to data sovereignty or regulatory compliance. Additionally, whilst the public cloud is a great way to bolster your infrastructure on the cheap (although that’s debatable once you start ramping up your VM sizes), it doesn’t take advantage of the investments in infrastructure that you’ve already made. For large, established organisations those investments are not insignificant, which is why many of them have been reluctant to transition fully to public cloud based services. This is why I believe the future of the cloud will be paved with hybrid solutions, something I’ve been saying for years now.
Microsoft has finally shown that they’ve understood this with the release of the Windows Azure Pack for Server 2012 R2. Sure, there were the beginnings of it with SCVMM 2012 allowing you to add in your Azure account and move VMs up there, but that kind of thing has been available for ages through hosting partners. The Azure Pack on the other hand brings features that were hidden behind the public cloud wall down to the private level, allowing you to make full use of them without having to rely on Azure. If I’m honest I thought that Microsoft would probably be the only ones to try this given their presence in both the cloud and enterprise space, but it seems other companies have begun to notice the hybrid trend.
Google has been working with the engineers at Red Hat to produce the Test Compatibility Kit for Google App Engine. Essentially this kit provides the framework for verifying the API level functionality of a private Google App Engine implementation, something which is achievable through an application called CapeDwarf. The vast majority of the App Engine functionality is contained within that application, enough so that current developers on the platform could conceivably run their code on on-premises infrastructure if they so wished. There doesn’t appear to be a bridge between the two currently, like there is with Azure, as CapeDwarf utilizes its own administrative console.
They’ve done the right thing by partnering with Red Hat as otherwise they’d lack the penetration in the enterprise market to make this a worthwhile endeavour. I don’t know how much presence JBoss/OpenShift has though, so it might be less about using current infrastructure and more about getting Google’s platform into more places than it currently is. I can’t seem to find any solid¹ market share figures to see how Google currently rates compared to the other primary providers, but I’d hazard a guess they’re similar to Azure, i.e. far behind Rackspace and Amazon. The argument could be made that such software would hurt their public cloud product, but I feel these kinds of solutions are the foot in the door needed to get organisations thinking about using these services.
Whilst my preferred cloud is still Azure I’m also a firm believer that the more options we have to realise the hybrid dream the better. We’re still a long way from having truly portable applications that can move freely between private and public platforms, but the roots are starting to take hold. Given the rapid pace of IT innovation I’m confident that the next couple of years will see the hybrid dream fully realised and then I’ll finally be able to stop pining for it.
¹This article suggests that Microsoft has 20% of the market which, given Microsoft has raked in $1 billion, would peg the total market at some $5 billion; that’s way out of line with what Gartner says. If you know of some cloud platform figures I’d like to see them as, apart from AWS being number 1, I can’t find much else.
The idea of planets orbiting other stars doesn’t seem like a particularly novel idea today, but it’s only recently that we’ve been able to definitively prove that there are planets outside our own solar system. Whilst there were the beginnings of evidence surfacing back in 1988, the first definitive proof we had of an extrasolar planet came in 1992, a mere 2 decades ago. As our technology has increased in capability the number of planets we discover year by year has increased dramatically and, even cooler still, the variety of planets we’re discovering is also increasing. Heck, we’ve even found planets that don’t have a parent star, something which was almost a fantasy as they were thought to be nearly impossible to detect.
What the last decade has revealed is that planets are not only a common occurrence in the universe but that systems like our own, ones with multiple planets in them, are also commonplace. Initially most of the exoplanet discoveries were limited to certain types of planets, namely large gas giants with short orbital periods, but as our technology has improved we’ve been able to discover smaller bodies that orbit further out. Depending on the size of the star and the planet they could end up in what we refer to as the habitable (or Goldilocks) zone, the area where liquid water could exist on the surface. Finding one of these is cause for celebration as it closely matches our own solar system, so you can imagine the excitement when we found 3 potentials orbiting Gliese 667C.
Gliese 667C is actually part of a ternary star system, which means that each of these planets technically has 3 suns, although the other 2 appear to be more like bright stars with about the same illumination as the full moon has here on earth. The diagram above makes it look like there are potentially 5 planets in the habitable zone (just barely for H and D) but those ones are far more likely to be closer to Venus and Mars respectively. C, F and E on the other hand are what we call super-earths, rocky planets that have a mass around 2 to 10 times that of earth. Typically they’re also quite a bit larger than earth as well, which means the surface gravity on these kinds of planets can actually be quite comparable to our own. Out of all of them Gliese 667Cf is the best candidate for habitability and thus extraterrestrial life.
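To put some rough numbers on that last point (the figures here are my own illustrative ones, not values from any survey): surface gravity scales as mass over radius squared, so a super-earth’s extra mass is partly cancelled out by its larger radius.

```python
# Back-of-the-envelope sketch: in Earth units, surface gravity is
# g = M / R^2, with mass M and radius R measured relative to Earth.
def surface_gravity(mass_earths: float, radius_earths: float) -> float:
    """Surface gravity relative to Earth's."""
    return mass_earths / radius_earths ** 2

# A hypothetical 5-Earth-mass super-earth with 1.8x Earth's radius
# pulls only about 1.5x as hard at the surface, despite all that mass.
print(round(surface_gravity(5, 1.8), 2))  # -> 1.54
```

That’s why a planet several times heavier than ours wouldn’t necessarily crush anything standing on it.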
What’s particularly exciting for me is that this provides more evidence for the idea that other stars are typically swamped in planets, making the configuration of our solar system quite common. This adds fuel to the already intense discussion that surrounds the Drake Equation, which I’d argue has now been tipped towards increasing the left hand side dramatically. Of course you can’t consider that equation without also considering the Fermi Paradox since, as far as we can tell, we’re still all alone out here. The only real solution is for us to visit these planets and see if there is anything there, although doing so in an acceptable time frame is still beyond the current limits of our technical ability (though not our theoretical capacity).
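For the curious, the Drake Equation is just a product of factors, and exoplanet surveys chiefly inform the planetary ones (fraction of stars with planets, habitable planets per system). The sketch below plugs in one set of entirely made-up placeholder values to show the shape of the calculation, not to make a real estimate.

```python
# The Drake Equation: N = R* x fp x ne x fl x fi x fc x L.
# Every parameter value below is a placeholder for illustration only.
def drake(r_star, f_p, n_e, f_l, f_i, f_c, lifetime):
    """Estimated number of detectable civilisations in the galaxy."""
    return r_star * f_p * n_e * f_l * f_i * f_c * lifetime

n = drake(r_star=7,        # average star formation rate (stars/year)
          f_p=0.5,         # fraction of stars with planets
          n_e=2,           # potentially habitable planets per such star
          f_l=0.1,         # fraction of those that develop life
          f_i=0.01,        # fraction of those that develop intelligence
          f_c=0.1,         # fraction that become detectable
          lifetime=10_000) # years a civilisation stays detectable
print(n)  # ~7 civilisations with these placeholder inputs
```

Bumping up the planetary factors, which is what discoveries like Gliese 667C effectively do, is what drags the left hand side of the equation upwards.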
It’s really quite amusing to see the stuff of science fiction rapidly turn into science fact. As time goes on it seems that the wildest things we could dream of, like planets with multiple suns, are not only real but may not be that unusual either. Hell, it’s almost an inevitability that we’ll one day go to places like this just because they’re there. It might not be this century or heck, even this millennium, but we’ve shown in the past that we’re a stubborn race when it comes to things like this and we’ll be damned if anything will stop us from achieving it. I can only hope medical science advances enough for me to be able to see that and, hopefully, experience such planets for myself.
Most people have a rough idea about what plasma is, usually thanks to the plasma TV craze that hit many years ago and has since been supplanted by LCDs, but few will know that plasma is actually one of the 4 fundamental states of matter, right alongside solid, liquid and gas. A gas transitions to a plasma through a process called ionization, which converts it into an electrically conductive cloud; this can be done by either inducing a large voltage difference or by subjecting the gas to extremely high temperatures. The following video shows the latter and is a rather cool demonstration of the transition process.
The reason for the short run time in sustaining the plasma cloud is simple: given enough time that superheated cloud of carbon atoms would start to melt the pyrex container, which would free the plasma to wreak all sorts of havoc on the microwave itself. I’m not sure how long it’d last though, as it looks like the atomised carbon atoms need to be clustered together for it to work, hence the spool up time required to set up the initial plasma reaction. Indeed if my experiments with bananas are anything to go by (it’s relatively safe but still, I’m not going to recommend you do it) you’d instead get little flashes rather than the sustained cloud.
What really interested me was the hum that was generated as it was pretty regular and I couldn’t really figure out what would be causing it. As it turns out there’s actually a couple things that could be responsible and, interestingly enough, the frequency could change depending on the input frequency of the power source going to the microwave. That link also suggests another, similar experiment with cut in half grapes that’s supposedly a lot safer (although this site argues otherwise) and the results look very similar to my results with bananas. It seems there’s all manner of things you can use to create plasma in the microwave, something I didn’t expect.
This is one of those experiments that I reckon would be really great for class demonstrations (this is probably also the reason why I shouldn’t be allowed to teach science in schools but come on, fire and explosions are awesome!).
My stance on phone based photography is pretty well known (some would go as far as to say infamous) and is probably one of the only issues that causes me significant cognitive dissonance on a regular basis. You see I’m not in the hardline camp where anything below a pro-level DSLR doesn’t count, but nor am I fully invested in the idea that the simple act of taking pictures makes you a photographer. It’s a matter of personal opinion, of course, and I’m not going to make myself out to be the arbiter of what is and isn’t photography, especially when I firmly believe in the “photography is 50% photographer, 40% light and 10% equipment” rule of thumb.
Indeed I thought I had gotten over all my angst about phone based photography after my last post about it all. Heck, I even spent an inordinate amount of time trying to learn my current phone’s camera, using it almost exclusively whilst I was in New Orleans in order to source some eye candy for my daily travel posts. I’ll be honest when I say the experience was a little frustrating, but there were more than a few pics I was actually proud of, the above being one of them. My chosen toolset was not Instagram or any of its better known competitors however, as I prefer to use Snapseed due to the flexibility it grants me (and the fact that they make some amazing Lightroom plugins as well), and I haven’t uploaded them to any of my regular sharing sites. Still, for someone who had essentially written this whole area off I felt I was making progress, until I read this article:
Since the launch of the original iPhone and the arrival of the App Store, the differences between those photographs taken on a smartphone and those taken on regular digital cameras have become far less apparent. Not because the phone cameras are getting better (despite the ever-improving optics, sensors, and software on smartphones, there’s still a huge difference in quality between an iPhone camera and a Canon 5D Mark III), but because of where photographs are being viewed. The vast majority of imagery is now seen in the exact same places: on smartphones and tablets, via apps such as Pinterest, Facebook, Google+, Flipboard and most importantly, Instagram. At 1024 x 1024 pixels, who can really tell whether a photo was taken on an iPhone or a Canon 5D? More to the point, who cares?
There’s a lot in Bareham’s post that I agree with, especially when it comes to the way most photographs are consumed these days. It’s rare now to see pictures materialize in a physical medium or even at a scale where the differences between photographic platforms start to become apparent. Indeed even I, the unabashed Canon DSLR fanboy, still have none of my work on display in my own house, preferring to show people my pictures on their laptop or other Internet connected device. Many pictures I love on my phone often fail to impress me later when I view them on a larger screen, although that’s probably due to my perfectionist ways more than anything else.
Still, I’m not convinced that the introduction of the iPhone, or any camera phone for that matter (I had a camera phone for a good 4 years by that point), changed everything about photography. Sure it made photography more accessible thanks to its integration into a platform that nearly everyone has, but it hadn’t really been out of reach for quite some time. Indeed many people said similar things about consumer level 35mm cameras back when they were first introduced, and whilst camera phones provided an added level of immediacy it’s not like that wasn’t available with the cheap digital point and shoots before them. The act simply became more public once the apps on our phones allowed us to share those photos much quicker than we could before.
Thinking it over a bit more, it’s actually quite shocking to see how my journey into photography is the inverse of Bareham’s. I had had these easy to use and share cameras for ages thanks to my love of all things technological, but that creative spark simply never took hold. That all changed when I got my first DSLR and began to learn about the technical aspects of photography; suddenly a whole new world had opened up to me that I hadn’t known about. I felt compelled to share my images with everyone and I started seeking out photographic subjects that weren’t my friends at parties or the sunset from my front porch. It has since graduated into what I do today, something that’s woven its way into all aspects of my life regardless of what I’m doing.
Perhaps then the technology is simply a catalyst for the realisation of a subconscious desire, something that we want to achieve but have no idea how to accomplish in our current mindset. We all have our favourite platforms on which we create, ones that we’ll always gravitate back to over time, and for many people that has become their phones. I no longer begrudge them, indeed I’ve come to realise that nearly every criticism I’ve levelled at them can be just as easily aimed at any other creative endeavour, but nor do I believe they’re the revolution that some claim them to be. We’re simply in the latest cycle of technologically fueled progress that’s been a key part of photography for the past century, one that I’m very glad to be a part of.
My previous post on games and female protagonists sparked an interesting conversation among my friends as we tried to recall all the games we’d played that had either a female lead character or at least one that played a major role in the game’s story. Even though we play a fairly broad range of titles the number of strong female characters we could name was dwarfed by their male counterparts, something that seems particularly odd now that 45% of all gamers are women. Thankfully that seems to be changing (albeit slowly) as games like Remember Me are becoming more frequent, even if they have to fight for their very existence.
You awake in an all white cell, your memory being wiped clean as part of the intake process for the prison you’re being kept in. A doctor approaches you and starts asking you rudimentary questions, trying to figure out just how much of yourself remains after your treatment. It seems that you’re somewhat resistant to the Sensen’s memory wiping ability and need to be sent elsewhere for further treatments. However, whilst you’re on your way to what appears to be your final doom, you’re contacted by a man called Edge who helps you escape. The world you’re then thrust into is a dark and terrifying one that’s under the control of the Memorize corporation. Not directly, however, but simply because their technology allows anyone to forget the most painful moments of their life, turning them into memory junkies. Edge wants you to fight them and you can’t resist the compulsion to do so.
Remember Me is pretty much what I’ve come to expect from current generation console titles as it’s able to make full use of all the hardware power that’s available to it. The game incorporates all the modern effects: high amounts of motion blur, high resolution textures and its own glitchy overlay, whilst also keeping its frame rate at a solid 60 fps. I will take slight issue with the lip synching as, outside the cutscenes, it’s either done extremely poorly or just not at all. It’s really the only let down in the whole audio/visual experience as pretty much everything else is spot on.
The game play of Remember Me is a mix of beat em up style combat, logic puzzles and a unique mechanic whereby you remix someone’s memories in order to get them to do what you want. Whilst the fundamentals of each of these core mechanics will be familiar to most long time gamers, they all have their own twist that makes them unique to the Remember Me world. By far the most intricate of them all is the combat system, which you can heavily customize to suit your style of play. The logic puzzles and memory remixing are somewhat simplistic by comparison but are still an enjoyable part of the overall game play.
Combat follows the Arkham Asylum/Arkham City model of beat em up where you spend the majority of your time attempting to land combos whilst enemies throw themselves at you. It’s a little more nuanced and is reminiscent of fighter game combos where you must hit every button at the right time and in the right order to pull it off. However the combo aid at the bottom of the screen helps a lot and it’s also far more forgiving than any fighter game I’ve ever played. The really cool thing about the combat system though is the customization allowing you to change how the combo works and what benefits landing it will give you.
You have 4 types of “pressens” which are mapped to the buttons on the controller. The first is the damage one which, as its name implies, will increase the damage dealt by that particular strike. Regeneration ones will give you health upon landing a hit and cooldown pressens reduce the time between the use of your special abilities (more on those later). The final one is the chain pressen which inherits all the pressens that came before it making it a powerful tool for creating combos that are truly crazy. There’s also the twist of pressens having more effect the further along in the combo they are which, when you’re dealing with an 8 hit combo, can make a pressen that felt useless suddenly become really viable. You can also chop and change between the pressens during combat, allowing you to adjust your fighting style to the challenges at hand.
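To illustrate how I understand those rules interacting (the names, numbers and scaling below are entirely my own guesses for the sake of the example, not anything from the game’s actual code), a combo could be modelled something like this:

```python
# Hypothetical model of a pressen combo: each slot's effect grows with
# its position in the combo, and a chain pressen re-applies everything
# that came before it. All multipliers here are invented.
def combo_effect(pressens, base=10.0):
    """Total effect of a combo, given a list of pressen kinds."""
    total = 0.0
    inherited = []  # effects a later chain pressen will inherit
    for position, kind in enumerate(pressens, start=1):
        scale = 1 + 0.25 * (position - 1)  # later slots count for more
        if kind == "chain":
            # a chain pressen inherits all preceding pressens' effects
            total += sum(inherited) * scale
        else:
            effect = base * scale
            inherited.append(effect)
            total += effect
    return total

# Ending a 4-hit combo with a chain pressen roughly doubles its value
# compared to the first three hits alone:
print(combo_effect(["damage", "regen", "cooldown", "chain"]))
```

Under this toy model a pressen that seems weak in slot one suddenly pulls its weight in slot eight, which matches how the combos felt to play.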
You’ll be doing this more often than you think as, whilst towards the end you’ll have enough pressens and combos available to cover any situation, initially you’ll be short of one or the other at any given time. My original 8 hit combo felt like the perfect fit for pretty much any situation, but when I was surrounded by 8 enemies at a time it became incredibly hard to land and thus needed to be reworked into a 5 and a 3 hit combo. There are also certain types of enemies that will require you to build a combo just to take them down, especially if their death relies on using one of your special abilities.
Augmenting your regular punches and kicks are s-pressens, special abilities that allow you to deal with the varying challenges much easier and quicker than you could do otherwise. They’re unlocked gradually, always as part of the game throwing a new type of enemy at you that basically requires that s-pressen to take them down, and how you use them is really up to you. They also rely on focus, shown as the white/blue bar above, which is generated whenever you hit or are hit by someone. In the beginning they’re quite cool and feel like the ultimate get out of jail free card but eventually their effectiveness starts to drop off and their use becomes something of a necessity.
This is probably where Remember Me starts to struggle, as ramping up the difficulty involves nullifying the abilities that have been granted to you whilst throwing ever increasing numbers of enemies at you. It’s something the whole games industry is struggling with at the moment, the idea of providing challenge whilst keeping the player engaged, but simply throwing more bodies at you or removing player options is most certainly towards the anti-fun part of the spectrum and should honestly be avoided. Of course you could argue that Remember Me’s hack ’n’ slash nature implies that this is how the challenge will be ramped up, but I find that a poor excuse for a game that incorporates such a nuanced combat system in the first place. I don’t pretend to have a solution to this, indeed even the game designers I know say that this is something the best struggle to achieve, but it’s definitely one of those things that will count against a game in my view.
The memory remixing puzzles are quite awesome as they play on the idea of small changes having big impacts on how something would play out. Whilst the outcomes are relatively fixed, i.e. there’s no emergent behaviour possible in any of them, the different outcomes are quite varied and the difference between a successful remix and a failure can be something as simple as doing something too early, or too late. There’s also a ton of red herrings in all of them, things that when modified won’t do anything at all, which keeps you second guessing your decisions right up until everything falls into place. I can’t really talk about it much more without spoiling the crap out of some of the puzzles, but suffice to say it’s really good despite the fact it didn’t feature as prominently as I thought it would.
Outside of the memory remixing there’s a bunch of puzzles that make use of Remembranes, fragments of memory that you purloin from other people in order to move forward. They start off as being easy timing puzzles, usually involving you avoiding detection from robots that move in a predictable pattern, but they eventually graduate into riddles that unlock codes forcing you to decipher the ramblings of a man who was driven insane. They’re a small part of the game however and you could usually stumble through them without thinking about it too hard although I will admit I got caught on the second to last puzzle involving the hominus/m3morize/evolutio words.
One point that bears mentioning is the strange, strange world that Remember Me exists in. Now I’m not talking about the major plot points that revolve around the Memorize memory technology, more that whilst the developers have strived to create a world that feels alive they’ve in fact created one that’s just simply weird. There are robots everywhere, and I mean everywhere, but apart from the patrol robots not a single one will react to you, not even ones that are in places where you’re not supposed to be (despite you being a wanted criminal). They’ve obviously been put there to make the city feel alive in some way without having to code in a lot of people (who do exist, but are few and far between), but instead it creates this weird atmosphere where you’d expect them to react to you and they don’t. It would probably have been better to leave them out entirely.
Remember Me’s story is quite gripping once you get over the stumbling block of Nilin implicitly trusting Edge and doing everything he asks. They touch on this very point with the inter-chapter monologues that help to bridge over some of the more glaring plot issues, but it essentially leaves Nilin without any particular motivation for a good chunk of the game. It does morph into a much richer and more detailed story towards the end however, even though quite a lot of things are still left unclear, and the last couple of hours were intense enough for everyone in my house to stop what they were doing in order to watch everything to the end. It’s definitely far above what I’ve come to expect from these kinds of games and Dontnod Entertainment should be commended for creating a strong female lead, even if there are a few rough edges.
For a new IP Remember Me does incredibly well, showcasing some highly refined game mechanics and a top notch story that combine to produce a well rounded and polished game experience. It still has some teething issues, something which is not uncommon for games trying out new ideas, but it manages to pull the majority of them off without sacrificing other aspects of the game. A strong female lead is also a welcome addition, something which hopefully won’t be considered a controversial choice for too much longer. I thoroughly enjoyed my time with Remember Me and would recommend it for anyone seeking out a fresh experience that’s unlike anything else that’s come before it.
Remember Me is available on PlayStation 3, Xbox 360 and PC right now for $79, $79 and $49.99 respectively. Game was played on the PlayStation 3 on the Errorist Agent difficulty with around 8 hours of total play time and 39% of the achievements unlocked.
Whilst it's easy to argue to the contrary, Microsoft really is a company that listens to its customers. Many of the improvements I wrote about during my time at TechEd North America were the direct result of them consulting with their users and integrating those requests into their updated product lines. Of course this doesn't make them immune to blundering down the wrong path, as they have done with the Xbox One (and, a lot would argue, Windows 8 as well, something I'm finding hard to ignore these days), a misstep which Sony gleefully capitalized on. Their initial attempts at damage control did little to help their image and it was looking like they were just going to wear it until launch day.
And then they did this:
Essentially it's a backtrack to the way things are done today, removing the need for the console to check in every day in order for you to play installed or disc based games. This comes hand in hand with Microsoft now allowing you to trade, sell or gift your disc based games to anyone, just like you can now. They're keeping the ability to download games directly from Xbox Live, although it seems the somewhat convoluted sharing program has been nixed, meaning you can no longer share games with your family members nor share downloaded titles with friends. Considering that not many people found that particular feature attractive I'm not sure it will be missed, but it does look like Microsoft wanted to put the boot in a little to show us what we could have had.
I'll be honest and say I didn't expect this as Microsoft had been pretty adamant that the policy was going to stick around regardless of what consumers thought. Indeed actions taken by other companies like EA seemed to indicate that this move was going to be permanent, hence them abandoning things that would now be handled by the platform. There's been a bit of speculation that this was somehow planned all along; that Microsoft was gauging the market's reaction and would respond accordingly, but if that were the case this policy would have been reversed a lot sooner, long before the backlash reached its crescendo during E3. The fact that they've made these changes shows that they're listening now, but there's nothing to suggest it was their plan all along.
Of course this doesn't address some of the other issues gamers have taken with the Xbox One, most notably the higher cost (even if it's semi-justified by the included Kinect) and the rather US centric nature of many of the media features. Personally the higher price doesn't factor into my decision too much, although I know it's a big deal for some, but since the Xbox One's big selling point was its media features it feels like a lot of the value I could derive from it is simply unavailable to me. Even those in the USA get a bit of a rough ride with Netflix sitting behind the Xbox Live Gold wall (when it's freely available on the PS4), but since both consoles require a subscription for online play it's not really something I can fault or praise either of them for.
For what it's worth this move might be enough to bring those who were on the fence back into the fold, but as the polls and preorders showed plenty of consumers have already voted with their wallets. If this console generation has the same longevity as the current one then there's every chance for Microsoft to make up the gap over the next 8 years, and considering that the majority of console sales happen after the launch year it's quite possible that all this outrage turns out to be nothing more than a bump in the road. Still, the first battle in this generation of console wars has been unequivocally won by Sony and it's Microsoft's job to make up that lost ground.
Just outside the Googleplex in Mountain View, California there's a small facility that was the birthplace of many of the revolutionary technologies Google is known for today. It's called Google [x] and is akin to the giant research and development labs of corporations in ages past, where no idea is off limits. It's spawned some of the most amazing projects Google has made public, including the Driverless Car and Project Glass. These are only a handful of the projects currently under development at the lab however, with the vast majority remaining secret until they're ready for release into the world. One more of their projects has just reached that milestone and it's called Project Loon.
The idea is incredibly simple: provide Internet access to everyone regardless of their location. How they're going about it however is the genius part: they're using a system of high altitude balloons and base relay stations, with each balloon able to cover a ground area around 40 km in diameter. For countries that don't have the resources to lay the cabling required to provide Internet access this is a comparatively easy way to cover large areas, and it even makes providing Internet possible in regions that would otherwise be inaccessible.
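As a rough back-of-the-envelope exercise (mine, not anything Google has published), that coverage figure lets you estimate how many balloons it would take to blanket a region. The region size below is just New Zealand's approximate land area, used purely as an illustration:

```python
import math

# Sketch: each balloon is assumed to cover a circle ~40 km in diameter
# (the figure mentioned above). Real deployments need overlapping
# coverage, so treat this as a lower bound.
def balloons_needed(region_km2, coverage_diameter_km=40):
    coverage_km2 = math.pi * (coverage_diameter_km / 2) ** 2  # ~1257 km² each
    return math.ceil(region_km2 / coverage_km2)

# New Zealand's land area is roughly 268,000 km²
print(balloons_needed(268_000))  # → 214
```

Even as a lower bound it shows why balloons are attractive: a few hundred of them is a lot cheaper than trenching cable across an entire country.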
What’s really amazing however is how they’re going about solving some of the issues you run into when you’re using balloons as your transportation system:
The height they fly at is around the bottom end of the range for your typical weather balloon (they can be found from 18 km all the way up to 38 km) and is about half the altitude from which Felix Baumgartner made his high altitude jump last year. I wasn't aware that different layers of the stratosphere have winds blowing in different directions, and making use of them to keep the balloons in position is just an awesome piece of engineering. Of course this would all be for naught if the Internet service they delivered wasn't any better than what's available now with satellite broadband, but it seems they've got that covered too.
The Loon stations use the 2.4 GHz and 5.8 GHz frequencies for communication with ground receivers and base stations and are capable of delivering speeds comparable to 3G (~2 Mbps or so). Now if I'm honest the choice to use these public signal spaces seems like a bit of a gamble as, whilst they're free to use, they're also already quite congested. I guess this is less of a problem in the places Loon is primarily aimed at, namely regional and remote areas, but even those places have microwaves and personal wifi networks. It's not an insurmountable problem of course, and I'm sure the way-smarter-than-me people at Google [x] have already thought of it; it's just an issue for anything that tries to use that same frequency space.
I might never end up being a user of this particular project but as someone who lived on the end of a 56K line for the majority of his life I can tell you how exciting this is for people living outside broadband enabled areas. According to Google it's launching this month in New Zealand with a group of pilot users, so it won't be long before we see how the technology works in the real world. From there I'm keen to see where they take it next as there are a lot of developing countries where this technology could make some really big waves.
The Internet situation I have at home is what I'd call workable but far from ideal. I'm an ADSL2+ subscriber, a technology that will give you speeds up to 25 Mbps should you be really close to the exchange, have good copper and (this is key) make the appropriate sacrifices to your last mile providers. Whilst my line of sight distance to the exchange promises speeds in the 15 Mbps range I'm lucky to see about 40% of that, with my sync speed usually hovering around 4–5 Mbps. For a lot of things this is quite usable, indeed as someone who had dial-up for most of his life these speeds are still something I'm thankful for, but it's becoming increasingly obvious that my reach far exceeds my grasp, something which, as a technology centric person, is fast becoming an untenable position.
Honestly I don't think about it too much as it's not like it's a recent realisation and, since the difference between the best and worst speeds I've had wasn't that great in retrospect, I've developed a lot of habits to cope with it. Most of these involve running things over longer periods when I wouldn't be using the Internet anyway, but not all tasks fit nicely into that solution. Indeed last night, when I wanted to add a video I'd recorded to my post, one that was only ~180 MB in size, I knew there was going to be a pretty long delay in getting the post online. The total upload time was around 30 minutes in the end, which is just enough time for me to get distracted with other things and completely forget about what I was doing until later that night.
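For what it's worth the numbers check out: working backwards from those two figures gives an effective throughput right around the roughly 1 Mbps upstream that ADSL2+ connections typically manage. A quick sanity check, nothing more:

```python
# Effective upload throughput for a ~180 MB file taking ~30 minutes,
# the figures from the upload described above.
file_size_mb = 180      # megabytes
upload_minutes = 30

# 1 megabyte = 8 megabits; convert minutes to seconds
throughput_mbps = (file_size_mb * 8) / (upload_minutes * 60)
print(f"{throughput_mbps:.2f} Mbps")  # → 0.80 Mbps
```

So the connection was more or less flat out the whole time, which is exactly the problem: the upstream side of ADSL2+ is an order of magnitude slower than the downstream side people usually quote.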
Sure, it's not an amazing example of why I need faster Internet but it does highlight the issue. The video wasn't particularly large nor super high resolution (720p, 60 fps), it was produced on technology that's over 2 years old and uploaded to a service that's been around for 7 years. The bottleneck in that equation is the connection they all share out of my home network, something which hasn't changed that much in the decade I've been a broadband Internet user.
For me it's even worse when I run up against the limitations of my paltry connection for things like services I'd like to host myself. In its infancy this blog was hosted from my little server at home, but it quickly became apparent that little things like pictures were simply untenable because they'd take forever to load even when I shrunk them down to near unusable sizes. It got even worse when I started looking into using the point to point VPN feature in Azure to connect a small home environment to the virtual machines I'm running in the cloud, as my tiny connection was simply not enough to handle the kind of traffic it would produce. That might not sound like a big deal but for any startup in Australia thinking about doing something similar it kills the idea of using the service in that fashion, which puts a lot of pressure on their remaining runway.
It's reasons like this which keep me highly skeptical of the Liberals' plan for the NBN as the speeds they're aspiring towards aren't that dissimilar to what I'm supposed to be getting now. Indeed they can't even really guarantee those speeds thanks to their reliance on the woefully inadequate copper network for the last run in their FTTN plan. Canberra residents will be able to tell you how much of a folly that idea is after the debacle that was TransACT (recently bought for $60 million and then its infrastructure sold for $9 million), which utterly failed to deliver on its promises even after deploying its own copper infrastructure.
It also doesn't help that their leader thinks 25 Mbps is more than enough for Australian residents which, if true, would mean ADSL2+ would be enough for everyone, including businesses. We IT admins have known that this isn't the case for a while, especially considering how rare it is to actually get those speeds, and the reliance on the primary limiting factor (Telstra's copper network) in the Liberals' NBN plan effectively ensures this will continue for the foreseeable future.
All those points pale in comparison to the one key factor: we will need to go full fibre eventually.
The copper we have deployed in Australia has a hard upper limit on the amount of bandwidth it can carry, one we're already running up against today. It can be improved through remediation, installing thicker cables, but that's a pretty expensive endeavour, especially when you take into account the additional infrastructure required to support the faster speeds. Since there's no plan to do such remediation at the scale required (either by Telstra or as part of the Liberals' NBN plan) these limitations will remain in place. Fibre on the other hand doesn't suffer from the same issues, with the new cables able to carry several orders of magnitude more bandwidth using just today's technology. Deploying it isn't cheap, as we already know, but it will pay for itself well before it reaches the end of its useful life.
My whinging is slightly moot because I'll probably be one of the lucky ones to have fibre rolled out to my neighbourhood before the election, but I do feel the NBN's effectiveness will be drastically reduced if it's not ubiquitous. It's one of the few multi-term policies that will have real, tangible benefits for all Australians and messing with it will turn it from a grand project into a pointless exercise. I hope the Liberals' policy really is just hot air to placate their base because otherwise Australia's Internet future will be incredibly dim, and that's not something I, or any user of technology, want for this country.
Mars doesn't have much of an atmosphere and the little it does have is rather hostile to life, being composed almost entirely of carbon dioxide with only small percentages of other gases detectable. Due to the freezing temperatures that grip it constantly (-60°C in summer, -125°C in winter) a lot of this carbon dioxide ends up in its solid form, usually buried in the permafrost. Last year NASA even confirmed that Mars experiences dry snow, a phenomenon where frozen carbon dioxide falls to the surface not unlike the water based snow we have on Earth. These are all mightily cool in their own regard but there was one particular interaction that came to my attention recently that's just so much cooler because I realized I had first seen it for myself in my backyard.
I had heard about these gullies before and had always wondered how the heck they formed. It's not like Mars is a completely dead planet, we've caught crazy things like avalanches happening on it, but features that look like they'd require surface water (or some other liquid) to carve them are usually out of the question (at least for new features, anyway). They're even reminiscent of the sailing stones in Death Valley, although we've probably solved that mystery, but the lack of anything at the end of them was the thing that was really puzzling.
Where I saw this in my backyard was a chance encounter with a couple of blocks of dry ice that came with a delivery of frozen meals. They weren't as big as the blocks in the movie above, although you can get those pretty easily if you know where to look, but of course the science nerds in my wife and me couldn't resist playing with them in the kids' pool we had set up. The result wasn't exactly surprising since we've all seen this kind of stuff before, but it was rather interesting to see the same principles at work on Earth just as they are on Mars.
The effect isn't nearly as dramatic but you can definitely see the same carbon dioxide cushion at work, which makes the block appear to glide on the surface rather than bob in it like water ice does. Another cool thing (which I didn't show in the video) is that when it's pushed just below the surface that same cushion will actually propel it straight to the bottom, where it pins itself and bubbles like crazy until it's all sublimated away.
I'd recommend trying this for yourself as it's one thing to see it in a video and a completely different thing altogether to play around with it. Of course there's a whole host of other things you can do, some of which I'd probably not recommend (anything involving a pressure vessel carries a certain amount of danger), but just watching it interact with other things is pretty satisfying.