If you cast your mind back to your high school science days you’ll likely remember being taught certain things about atoms and what they’re made up of. The theories you were taught, things like the strong and weak forces and electromagnetism, form part of what’s called the Standard Model of particle physics. This model was born out of an international collaboration of many scientists looking to unify the world of subatomic physics and, for the most part, it has proved extremely useful in guiding research. However it has its limitations, and the Large Hadron Collider was built in order to test them. Whilst the current results have largely supported the Standard Model there is a growing cache of evidence that runs contrary to it, and the latest findings are quite interesting.
The data comes out of the LHCb detector from the previous run, conducted from 2011 to 2012. The process they were looking into is called B meson decay, notable for the fact that it creates a whole host of lighter particles including two leptons (the tau lepton and the muon). These particles are of interest to researchers as the Standard Model makes a prediction about them called lepton universality. Essentially this principle states that, once you’ve corrected for mass, all leptons are treated equally by all the fundamental forces. This means they should all decay at the same rate, however the team investigating this principle found a small but significant difference in the rate at which these leptons decayed. Put simply, should this phenomenon be confirmed with further data it would point towards physics beyond the Standard Model.
The reason why scientists aren’t decrying the Standard Model’s death just yet is due to the confidence level at which this discovery has been made. Right now the data can only point to a 2σ (roughly 95%) confidence that it isn’t a statistical aberration. Whilst that sounds like a pretty sure bet the standard required for a discovery is the much more stringent 5σ level (the level CERN attained before announcing the Higgs boson discovery). The current higher luminosity run that the LHC is conducting should hopefully provide the level of data required, although I did read that it still might not be sufficient.
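For the curious, the relationship between those σ levels and confidence percentages falls out of the normal distribution. A minimal sketch in Python (assuming a simple Gaussian model of the measurement error, which is a simplification of how particle physicists actually compute significance):

```python
from math import erf, sqrt

def sigma_to_confidence(k: float) -> float:
    """Two-sided probability that a normally distributed measurement
    falls within k standard deviations of its mean."""
    return erf(k / sqrt(2))

# 2 sigma -- roughly the level of the current LHCb result (~95.45%)
print(f"{sigma_to_confidence(2):.4%}")
# 5 sigma -- the particle physics discovery threshold (~99.99994%)
print(f"{sigma_to_confidence(5):.5%}")
```

This also shows why 5σ is so much harder to reach: the chance of a statistical fluke shrinks from roughly 1 in 22 to about 1 in 1.7 million.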
The results have gotten increased attention because they’re not the first to bring the lepton universality principle into question. Indeed previous research out of the Stanford Linear Accelerator Center’s (SLAC) BaBar experiment produced similar results when investigating lepton decay. What’s quite interesting about that experiment though is that it found the same discrepancy through electron collisions, whilst the LHC uses higher energy protons. The difference in method with similar results means that this discrepancy is likely universal, requiring either a new model or a reworking of the current one.
Whilst it’s still far too early to start ringing the death bell for the Standard Model there’s a growing mountain of evidence that suggests it’s not the universal theory of everything it was once hoped to be. That might sound like a bad thing however it’s anything but, as it would open up numerous new avenues for scientific research. Indeed this is what science is built on, forming hypotheses and then testing them in the real world so we can better understand the mechanics of the universe we live in. The day when everything matches our models will be a boring day indeed as it will mean there’s nothing left to research.
Although I honestly cannot fathom that ever occurring.
In the 3 or so years since I reviewed Dear Esther, a game which in my opinion was an incoherent mess, I’ve come to appreciate the walking simulator genre. They’re definitely not for everyone, what with the achingly slow pace and reliance on a strong story to really make them, however they can shine beautifully when done right. If I’m honest though, had I known that Everybody’s Gone to the Rapture was done by The Chinese Room (the guys behind Dear Esther) I probably wouldn’t have played it. Thankfully I didn’t find that fact out until I was a fair way in, as Everybody’s Gone to the Rapture deserves to be judged on its own merits and not its heritage.
It’s a beautiful day in the quiet small town of Yaughton in Britain. It’s one of those places where you feel like you could hear a pin drop a mile away, the only sounds being the rustling of the leaves with the occasional bird chirp or quiet rumbling of a car off in the distance. That stillness belies something far more sinister however as you quickly discover that this town is bereft of people and the only thing that remains is an eerie ball of light that dances through the streets. As you walk through the town it begins to reveal its story to you and that of the people of Yaughton.
Everybody’s Gone to the Rapture utilizes CryEngine 3 and definitely makes good use of the capabilities it provides. Whilst it’s not the pinnacle of graphical mastery that the engine’s flagship game was it’s still a decidedly pretty game. Indeed the sweeping views of an idyllic English countryside, backdropped by columns of light, are some of the most enjoyable and serene set pieces I’ve seen in a long time. However what really sets Everybody’s Gone to the Rapture apart from all others in its genre is the absolutely stunning soundtrack, one that wouldn’t be out of place as a movie score. It definitely pleased me to find out that Jessica Curry, the composer, has received a BAFTA for her efforts as her work here shows just how capable she is.
As the genre would suggest Everybody’s Gone to the Rapture is essentially a sightseeing tour, one that will walk you through the town of Yaughton and gradually reveal the story to you. Unlike most walking sims though there’s a guide to show you the way, a small ball of light that will dance and flit around from point to point, urging you to follow it. Then, when you reach certain trigger points, you’ll see events of the past rendered in a shower of light, the voices clear but the people seeming like ghosts playing out their past lives. The only real game mechanic to speak of is tilting your controller one way or the other to sync up with the light but beyond that it’s a lot of holding the left stick forward.
Walking sims generally encourage you to explore the environment, usually with the promise of revealing more of the story to you or opening up a shortcut. The addition of a guide, in this case the ball of light that races around from place to place, would seem to be contrary to that but it’s something that I actually came to enjoy later on. You see, whilst it’s easy enough to figure out what general direction you should be heading in, there’s a lot of places you can get yourself into which don’t lead anywhere. Following the ball, and straying from the path where it seems obvious to do so, seems to be the best way to play Everybody’s Gone to the Rapture.
One mistake that was unfortunately repeated by The Chinese Room was providing avenues of exploration, ones that seemed wholly intentional, that lead to absolutely nothing. The best example of this was the church early on in the game which, when you first visit it, you can’t access the second half of. However if you look around it’s clear there’s another path available to you, you’ll just have to go all the way around to get to it. Naturally I did that only to be greeted by sweet fuck all when I arrived. In any other game this would be a minor annoyance but in a walking simulator it was a 15 minute ordeal, even with the sprint button pegged down. This was the same issue I found so much frustration with in Dear Esther and it pains me to see them making the same mistake again.
Thankfully the one mistake they didn’t repeat was delivering the story to you in randomized, disjointed sections. Whilst the story is still far from linear, delivered in vignettes as you stumble across key locations, it at least has a sense of flow and timing to it. Each section follows a particular individual’s story over the course of the events that preceded your arrival, revealing more and more about the particular part they played. There are also optional bits of dialogue that you can trigger by picking up phones or turning on radios which are key to understanding the central characters’ motivations.
For me the way the story was delivered was the key difference between Dear Esther and Everybody’s Gone to the Rapture. In Dear Esther I struggled to have any empathy for any of the characters as it was hard to tell where I was in the story and how that section fit into it. With Everybody’s Gone to the Rapture, on the other hand, whilst the vignettes might be told in any order they’re at least internally consistent and often reference a point in time in the larger story. This means there’s a flow to the larger story that its predecessor lacked, giving you a much better sense of how the events that led up to your arrival unfolded.
The story itself does meander a bit but it’s interleaved with enough background and character development that you feel drawn into their lives and the minutiae of this small town. It grips you early on, especially with one scene (pictured in the second screenshot) where a desperate mother struggles to understand what’s going on while being comforted by the local priest. The slightly disjointed nature means you know the ending long before it happens, however the final few reveals were still an emotional journey. It may not have left me an emotional wreck like other similar games have done but it was definitely one of the more memorable stories I’ve experienced in quite some time.
Everybody’s Gone to the Rapture aptly demonstrates the talent that The Chinese Room team has. Everything about this game, from the graphics to the story to the soundtrack, is well above par. They may have made the same mistake of opening up paths of exploration without reward, however many of the issues that plagued Dear Esther are simply not present in their latest title. Indeed Everybody’s Gone to the Rapture is one of the few games in this genre that I feel would have appeal beyond genre fans as it truly is a great experience.
Everybody’s Gone to the Rapture is available on PlayStation 4 right now for $29.99. Total play time was approximately 5 hours.
Professional eSports teams are almost entirely made up of young individuals. It’s an interesting phenomenon to observe as it’s quite contrary to many other sports: the age drop off for eSports players is far earlier and more drastic, with long term players like Evil Geniuses’ Fear, who’s the ripe old age of 27, often referred to as The Old Man. The commonly held belief is that, past your mid twenties, your reaction times and motor skills are in decline and you’ll be unable to compete with the new upstarts and their razor sharp reflexes. New research in this area may just prove this to be true, although it’s not all over for us oldies who want to compete with our younger compatriots.
The research comes out of the University of California and was based on data gathered from StarCraft 2 replays. The researchers gathered participants aged from 16 to 44 and asked them to submit replays to their website, SkillCraft. These replays then went through standardization and analysis using the wildly popular replay tool SC2Gears. With this data in hand the researchers were then able to test some hypotheses about how age affects cognitive motor functions and whether or not domain experience, i.e. how long someone had been playing a game for, influenced their skill level. Specifically they looked to answer three questions.
In terms of the first question they found, unequivocally, that as we age our motor skills start to decline. Previous studies in cognitive motor decline were focused on older populations, with the data then extrapolated back to estimate when cognitive decline set in. Their data points to onset happening much earlier than previous research suggests, with their estimate pointing to 24 as the age when cognitive motor functions begin to take a hit. What’s really interesting though is the second question: can us oldies overcome the motor skill gap with experience?
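To illustrate the kind of estimate involved (this is my own toy model, not the study’s actual methodology or data): if you model some latency measure as a quadratic function of age, the fitted minimum marks where decline sets in. The coefficients below are made-up placeholders, chosen so the vertex lands on the paper’s reported figure of 24:

```python
# Toy model: latency(age) = a*age**2 + b*age + c, fitted elsewhere.
# Coefficients are illustrative placeholders, not the study's real fit.
a, b, c = 0.5, -24.0, 600.0

# The vertex of the parabola marks where latency stops improving
# and starts to climb, i.e. the estimated onset of decline.
onset_age = -b / (2 * a)
print(onset_age)  # 24.0
```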
Whilst the study didn’t find any evidence to directly support the idea that experience can trump age related cognitive decline it did find that older players were able to hold their own against younger players of similar experience. The compensation mechanisms weren’t directly researched, however they did find evidence of older players using cognitive offloading tricks to keep their edge. Put simply, older players would do things that didn’t require a high cognitive load, like using less complex units or strategies, in order to compete with younger players. This might not support other studies which have shown that age related decline can be combatted with experience but it does provide an interesting avenue for additional research.
As someone who’s well past the point where age related decline has supposedly set in my experience definitely lines up with the research. Whilst younger players might have an edge on me in terms of reaction speed my decades’ worth of gaming experience is more than enough to make up the gap. Indeed I’ve also found that having a breadth of gaming experience, across multiple platforms and genres, often gives me insights that nascent gamers are lacking. Of course the difference between me and the professionals is a gap that I’ll likely never close, but that doesn’t matter when I’m stomping young’uns in pub games.
New Windows releases bring with them a bevy of new features, use cases and controversy. Indeed I can think back to every new Windows release dating back to Windows 95 and there was always something that set off a furor, whether it was UI changes or compatibility issues. For us technical folk though a new version of Windows brings with it opportunity, to experiment with the latest tech and dream about where we’ll take it. For the last month I’ve been using Windows 10 on my home machines and, honestly, whilst it feels much like its Windows 8.1 predecessor I don’t think that’s entirely a bad thing.
Visually Windows 10 is a big departure from its 8 and 8.1 predecessors as, for any non-tablet device, the full screen metro app tray is gone, replaced with a more familiar start menu. The full screen option is still there however, hiding in the notifications area under the guise of Tablet Mode, and for transformer or tablet style devices it will be the default. The flat aesthetic has been taken even further, with all the iconography reworked to iron out almost any 3D element. You’re also not allowed to change the login screen’s laser-lit window background without the aid of a resource hacker, likely due to the extreme amount of effort that went into creating the image.
For most, especially those who didn’t jump on the Windows 8 bandwagon, the navigation of the start menu will be familiar, although I must admit that after the years I’ve spent with its predecessor it’s taken some getting used to. Whilst the charms menu might have disappeared the essence of it appears throughout Windows 10, mostly in the form of settings panels like Network Settings. For the most part they do make routine tasks easier, like selecting a wifi network, however once things get complicated (if you have, say, 2 wireless adapters) then you’re going to have to root around a little to find what you’re looking for. It is a slightly better system than what Windows 8 had, however.
To give myself the full Windows 10 experience I installed it on 2 different machines in 2 different ways. The first was a clean install on the laptop you see above (my trusty ASUS Zenbook UX32V) and that went along without a hitch. For those familiar with the Windows 8 style installer there’s not much to write home about here as it’s near identical to the previous installers. The second install was an upgrade on my main machine as, funnily enough, I had it on good word that the upgrade process was actually quite useable. As it turns out it is, as pretty much everything came across cleanly. The only hiccup came from my audio drivers not working correctly (they seemed to default to digital out and wouldn’t let me change it), however a reinstall of the latest drivers fixed everything.
In terms of features there’s really not much in the way of things I’d consider “must haves”, however that’s likely because I’ve been using many of those features since Windows 8 was first released. There are some interesting little additions, like the game features that allow you to stream, record and capture screenshots for all DirectX games (something which Windows will remind you about when you start them up). Microsoft Edge is also astonishingly fast and quite useable, however since it’s so new the lack of extensions for it has precluded me from using it extensively. Interestingly Internet Explorer still makes an appearance in Windows 10, obviously for those corporate applications that continue to require it.
Under the hood there’s a bevy of changes (which I won’t bore you with here), however the most interesting thing about them is the way Windows 10 is structured for improvements going forward. You see Windows 10 is currently slated to be the last major release of Windows ever but this doesn’t mean that it will remain stagnant. Instead new features will be released incrementally on a much more frequent basis. Indeed the roadmaps I’ve seen show that there are several major releases planned in the not too distant future and, if you want a peek at them, all you need to do is sign up for the Windows Insider program. Such a strategy could reap a lot of benefits, especially for organisations seeking to avoid the heartache of Windows version upgrades in the future.
All in all Windows 10 is pretty much what I expected it to be. It takes the best parts of Windows 7 and 8 and mashes them together into a cohesive whole that should appease the majority of Windows users. Sure there are some things that some won’t like, the privacy settings being chief among them, however they’re at least solvable issues rather than showstoppers like Vista’s compatibility or 8’s metro interface. Whether Microsoft’s strategy of no more major versions ever is tenable is something we’ll have to see over the coming years, but at the very least they’ve got a strong base to build from.
Space history of the past few decades is dominated by the Space Shuttle. Envisioned as a revolution in access to space it was designed to be launched numerous times per year, dramatically reducing the costs of access to space. The reality was unfortunately not in line with the vision as the numerous design concessions made, coupled with the incredibly long average turnaround time for missions, meant that the costs far exceeded that of many other alternative systems. Still it was an iconic craft, one that several generations will point to as the one thing they remember about our trips beyond our atmosphere. What few people realise though is that there was potential for the shuttle to have a Russian sister and her name was Buran.
The Buran project started in 1974, only 5 or so years after the Space Shuttle program was kicked off by NASA. The goals of both projects were quite similar in nature, both aiming to develop a reusable craft that could deliver satellites, cosmonauts and other cargo into orbit. Indeed when you look at the resulting craft, one of which is shown above in its abandoned complex at the Baikonur Cosmodrome, the similarities are striking. It gets even more interesting when you compare their resulting specifications as they’re almost identical, with only a meter or two difference between them. Of course under the hood there are a lot of differences, especially when it comes to the primary purpose of the Buran launch system.
The propulsion system of the Buran differed significantly from the Shuttle’s, with the boosters burning liquid fuel (kerosene and liquid oxygen) rather than solid rocket fuel. There are advantages to this, chief among them being able to shut down the engines once you start them (something solid rocket boosters can’t do), however these boosters were not designed to be reusable, unlike their Shuttle counterparts. This meant that the only reusable part of the Buran launch system was the orbiter itself, which would increase the per-launch cost. Additionally the Buran included a fully autonomous flight control system from the get go, something the Shuttle only received during an upgrade later in its life.
That last part is somewhat telling of Buran’s true purpose as, whilst it could service non-military goals, it was primarily developed to serve the military interests of Russia (then the Soviet Union). Indeed the winged profile of the craft enables many mission profiles that are simply of no interest to non-military agencies, and having it fully autonomous from the get go shows it was meant more for conflict than research. Indeed when commenting on the programme’s cancellation a Russian cosmonaut noted that the Buran didn’t have any civilian tasks planned for it and, with a lack of military requirements to fuel the programme, it was cancelled.
That was not before it saw numerous test flights, including a successful orbital flight. The achievements the Buran made during that single flight are not to be underestimated: it was the first craft of its kind to perform such a flight fully unmanned and to make a fully automated landing. That latter feat is even more impressive when you consider that there was a very strong crosswind, some 60 kilometers per hour, and it managed to land mere meters off its intended mark. Indeed had Russia continued development of the Buran there’s every chance it would have remained a much more advanced craft than its American sister for a very long time.
Today however the Buran shuttles and their various test components lie scattered around the globe in varying states of disrepair and decay. Every so often rumours about a resurrection of the program surface, however it’s been so long since the program was in operation that such a program would only share the name and little more. Russia’s space program has continued on to great success however, their Soyuz craft becoming the backbone of many of humanity’s endeavours in space. Whilst the Buran may never have become the icon for space that its sister Shuttle did it remains the highly advanced concept that could have been, a testament to the ingenuity and capability of the Russian space program.
Superconductors are the ideal electrical conductors, having no electrical resistance and thus allowing 100% efficiency for power transmitted along them. Current applications of superconductors are limited to areas where their operational complexity (most of which comes from the cooling required to keep them in a superconducting state) is outweighed by the benefits they provide. Such complexity is what has driven the search for a superconductor that can operate at normal temperatures, as one would bring about a whole new swath of applications that are currently not feasible. Whilst we’re still a long way from that goal a new temperature record has been set for superconductivity: a positively balmy -70°C.
The record comes out of the Naval Research Laboratory in Washington DC and was accomplished using hydrogen sulfide gas. Compared to other superconductors, which typically take the form of some exotic combination of metals, using a gas sounds odd, however what they did to the gas made it anything but your run of the mill rotten egg gas. You see, to make the hydrogen sulfide superconducting they first subjected the gas to extreme pressures, over 1.5 million times normal atmospheric pressure. This transforms the gas into its metallic form, which they then cooled down to below its critical temperature.
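To put those figures into more familiar units, a quick back-of-the-envelope conversion (assuming a standard atmosphere of 101,325 Pa):

```python
ATM_IN_PA = 101_325  # one standard atmosphere, in pascals

pressure_atm = 1.5e6  # ~1.5 million atmospheres
pressure_gpa = pressure_atm * ATM_IN_PA / 1e9
print(f"{pressure_gpa:.0f} GPa")  # ~152 GPa

temperature_k = -70 + 273.15  # the record temperature in kelvin
print(f"{temperature_k:.2f} K")  # 203.15 K
```

For comparison, that pressure is in the same ballpark as conditions deep inside the Earth, which gives a sense of why this isn’t something you’d bolt onto a power line tomorrow.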
Such a novel discovery has spurred on other researchers to investigate the phenomenon and the preliminary results that are coming out are promising. Most of the other labs which have sought to recreate the effect have confirmed at least one part of superconductivity, the fact that the highly pressurized hydrogen sulfide has no electrical resistance. Currently unconfirmed by other labs however is the other hallmark: the expulsion of all magnetic fields (called the Meissner effect). That’s likely due to this discovery still being relatively new, so I’m sure confirmation of that effect is not far off.
Whilst this is most certainly a great discovery, one that has already spurred on a new wave of research into high temperature superconductors, the practical implications of it are still a little unclear. Whilst the temperature is far more manageable than its traditional counterparts require, the fact that it needs extreme pressures may preclude it from being used. Indeed large pressurized systems present many risks that often require solutions just as complex as those for cryogenic systems. In the end more research is required to ascertain the operating parameters of these superconductors and, should their benefits outweigh their complexity, they will make their way into everyday use.
Despite that though it’s great to see progress being made in this area, especially progress that has the potential to realise the long thought impossible dream of a room temperature superconductor. The benefits of such a technology are so wide reaching that it’s great to see so much focus on it, which gives us hope that achieving that goal is just a matter of time. It might not be tomorrow, or the next decade, but the longest journeys begin with a single step, and what a step this is.
You don’t have to look far to find article after article about how sitting down is bad for your health. Indeed whilst many of these posts boil down to simple parroting of the same line, followed by an appeal to adopt a more active lifestyle, the good news is that science is with them, at least on one point. There’s a veritable cornucopia of studies out there that support the idea that a sedentary lifestyle is bad for you, something which is not just limited to sitting at work. However the flip side to that, the idea that standing is good for you, is not something that’s currently supported by a wide body of scientific evidence. Logically it follows that it would be the case but science isn’t just about logic alone.
The issue at hand here mostly stems from the fact that, whilst we have longitudinal studies on sedentary lifestyles, we don’t have a comparable body of data for your average Joe who’s done nothing but change from mostly sitting to mostly standing. This means we don’t understand the parameters in which standing is beneficial and when it’s not, so a blanket recommendation that “everyone should use a standing desk” isn’t something that can currently be made in good faith. However preliminary studies are showing promise in this area, like new research coming out of our very own University of Queensland.
The study equipped some 780 participants, aged between 36 and 80, with activity monitors that would record their activity over the course of a week. The monitors would allow the researchers to determine when participants were engaging in sedentary activities, such as sleeping or sitting, or something more active like standing or exercising. In addition to this they also took blood samples and a number of other key indicators. They then used this data to glean insights as to whether or not a more active lifestyle was associated with better health indicators.
They found this to be true: the more active participants, the ones who were standing on average more than 2 hours a day longer than their sedentary counterparts, showed better health indicators like lower blood sugar levels (2%) and lower triglycerides (11%). That in and of itself isn’t proof that standing is better for you, indeed the study makes a point of saying it can’t draw that conclusion, however preliminary evidence like this is useful in determining whether or not further research in this field is worthwhile. Based on these results there’s definitely more investigation to be done, mostly focusing on isolating the key factors behind the current thinking.
It might not sound like this kind of research did anything we didn’t already know about (being more active means you’ll be more healthy? Shocking!), however validating base assumptions is always a worthwhile exercise. This research, whilst based on short term data with inferred results, provides solid grounds with which to proceed to a much more controlled and rigorous study. Whilst results from further study might not be available for a while this at least serves as another arrow in the quiver for encouraging everyone to adopt a more active lifestyle.
Artificial neural networks, a computational framework that mimics biological learning processes using statistics and large data sets, are behind many of the technological marvels of today. Google is famous for employing some of the largest neural networks in the world, powering everything from their search recommendations to their machine translation engine. They’re also behind numerous other innovations like predictive text inputs, voice recognition software and recommendation engines that use your previous preferences to suggest new things. However these networks aren’t exactly portable, often requiring vast data centers to produce the kinds of outputs we expect. IBM is set to change that however with their TrueNorth architecture, a truly revolutionary idea in computing.
The chip, 16 of which are shown above welded to a DARPA SyNAPSE board, is most easily thought of as a massively parallel chip comprising some 4096 processing cores. Each of these cores contains 256 programmable synapses, totalling around 1 million per chip. Interestingly, whilst the chip’s transistor count is on the order of 5.4 billion, which for comparison is just over double that of Intel’s current offerings, it uses a fraction of the power you’d expect it to: a mere 70 milliwatts. That kind of power consumption means that chips like these could make their way into portable devices, something that no one would really expect with transistor counts that high.
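Using the figures above, the per-chip totals work out neatly. The board-level power figure is my own extrapolation, assuming all 16 chips draw the quoted 70 milliwatts:

```python
cores_per_chip = 4096
synapses_per_core = 256
chips_on_board = 16

synapses_per_chip = cores_per_chip * synapses_per_core
print(f"{synapses_per_chip:,} synapses per chip")  # 1,048,576

# 70 mW per chip scales to a tiny budget even for the full board
board_power_w = chips_on_board * 70 / 1000
print(f"{board_power_w:.2f} W for the 16-chip board")  # 1.12 W
```

For contrast, a single desktop CPU can easily draw 80 W or more, which is what makes that figure so striking.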
But why, I hear you asking, would you want a computerized brain in your pocket?
IBM’s TrueNorth chip is essentially the second half of the two part system that is a neural network. The first step to creating a functioning neural network is training it on a large dataset; the larger the set the better the network’s capabilities are. This is why large companies like Google and Apple can create useable products out of them: they have huge troves of data to train them on. Then, once the network is trained, you can set it loose upon new data and have it give you insights and predictions, and that’s where a chip like TrueNorth comes in. Essentially you’d use a big network to form the model and then imprint it on a TrueNorth chip, making it portable.
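TrueNorth’s actual programming model is a spiking one, but the general train-then-deploy split can be sketched with a conventional toy network. The weights below are made-up placeholders standing in for a model trained in a data centre; the device only ever reads them:

```python
from math import tanh

# Frozen weights for a tiny 2-2-1 network, "trained" elsewhere.
# These numbers are arbitrary placeholders for illustration only.
W_HIDDEN = [[0.5, -0.6], [0.4, 0.9]]
B_HIDDEN = [0.1, -0.2]
W_OUT = [0.7, -0.3]
B_OUT = 0.05

def infer(x):
    """Inference only: a forward pass over fixed weights, with no
    training step and no network connection required."""
    hidden = [
        tanh(sum(w * xi for w, xi in zip(row, x)) + b)
        for row, b in zip(W_HIDDEN, B_HIDDEN)
    ]
    return sum(w * h for w, h in zip(W_OUT, hidden)) + B_OUT

print(infer([1.0, 0.0]))
```

The point is that once training is done the deployed model is just arithmetic over constants, which is exactly the workload a low-power chip can carry around in your pocket.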
The implications of this probably wouldn’t be immediately apparent for most, as the services would likely retain their same functionality, but it would eliminate the requirement for an always-on Internet connection to support them. This could open up a new class of smart devices with capabilities that far surpass anything we currently have, like a pocket translator that works in real time. The biggest issue I see with its adoption though is cost, as a transistor count that high doesn’t come cheap: you’re either relying on cutting edge lithography or accepting significantly reduced wafer yields. Both of these lead to high priced chips, likely even more expensive than current consumer CPUs.
Like all good technology however this one is a little way off from finding its way into our hands as whilst the chip exists the software stack required to use it is still under active development. It might sound like a small thing however this chip behaves in a way that’s completely different to anything that’s come before it. However once that’s been settled then the floodgates can be opened to the wider world and then, I’m sure, we’ll see a rapid pace of innovation that could spur on some wonderful technological marvels.
When it comes to exoplanets the question I often hear asked is: why are they all largely the same? The answer lies in the methods we use for detecting exoplanets, among the most successful of which is observing the gravitational pull that planets exert on their host stars. This method requires that planets make a full orbit around their parent star in order for us to detect them, which means that many go unnoticed, requiring observation times far beyond what we’re currently capable of. However there are new methods which are beginning to bear fruit, with one of the most recent discoveries being a planet called 51-Eridani-b.
Unlike most other exoplanets, whose presence is inferred from the data we gather on their parent star, 51-Eridani-b is the smallest exoplanet we’ve ever imaged directly. Whilst we didn’t get anything like the artist’s impression above it’s still quite an achievement, as planets are usually many orders of magnitude dimmer than their parent stars. This makes directly imaging them incredibly difficult, however a new method, built into a device called the Gemini Planet Imager, allows us to directly image a certain type of exoplanet. The main advantage of this method is that it does not require a lengthy observation time to produce results, although like other methods it also has some limitations.
The Gemini Planet Imager was built for the Gemini South Telescope in Chile, the sister telescope of the more famous Gemini North Telescope in Hawaii. Essentially it’s an extremely high contrast imager, one that’s able to detect a planet that’s one ten-millionth as bright as its parent star. Whilst that kind of sensitivity is impressive, even it can’t detect Earth-like planets around a star similar to our sun. Instead the planets we’re likely to detect are young Jupiter-like planets which, still hot from their formation, are far more luminous than a planet typically is. This is exactly what 51-Eridani-b is: a fiery young planet that orbits a star about 5 times as bright as our own.
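To put that one-ten-millionth contrast figure into the units astronomers usually quote, it can be converted to a magnitude difference using the standard Pogson relation Δm = −2.5·log₁₀(flux ratio). A quick check (variable names here are just for illustration):

```python
import math

# Convert the quoted flux ratio (planet is one ten-millionth as bright
# as its star) into a magnitude difference via the Pogson relation.
flux_ratio = 1e-7
delta_m = -2.5 * math.log10(flux_ratio)
print(delta_m)  # 17.5 -- the planet is 17.5 magnitudes fainter
```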
Equally impressive is the technology behind the Gemini Planet Imager which enables it to directly image planets like this. The first part is a coronagraph, a specially designed interference device which allows us to block out the majority of a parent star’s light. Behind that is a set of adaptive optics: tiny mirrors, deformable at the micro level using MEMS, which make micro-adjustments with incredible precision in order to counteract atmospheric distortions. This is necessary since, unlike space based telescopes, there’s a lot of turbulent air between us and the things we want to look at.
With the successful discovery of 51-Eridani-b I’m sure further discoveries won’t be far off. Whilst the Gemini Planet Imager might only be able to discover a certain type of planet it does prove that the technology platform works. This then means that improvements can be made, expanding its capabilities further. I have no doubt that future versions of this technology will be able to directly image smaller and smaller planets, one day culminating in a direct image of an Earth-like planet around a sun-like star. That, dear reader, will be a day for the history books and it all began here with 51-Eridani-b.
In my review of Cradle (which I meant to get out last week, apologies!) I noted that I’ve found two distinct types of exploration games. Some are guided, wanting to gently push you towards some goal; others are more free form, wanting you to roam and discover your own story. With Cradle more in the guided camp it was serendipitous that Submerged came right after I finished it, as it takes the opposite approach, plonking you in a wide open area and letting you have at it. Whilst my preference for these types of games still tends towards the guided, Submerged is a decent little exploration game, even if it errs on the simplistic side.
You are Miku, a determined young girl who’s come to this sunken city in the hopes of finding help for your brother, Taku. He is gravely injured, suffering from a mighty slash across his chest that threatens to take his life. You must explore the city, clambering through ruined buildings and scaling crumbling towers, looking for supplies to restore Taku to health. You can’t help but feel you’re being watched, however, as this wild city seems to have eyes on every corner. Still you push forward, your love for your brother driving you on.
Submerged runs on the Unreal 4 engine and, whilst it’s not going to bring your PC to its knees with the graphics, it does have a great style and aesthetic. It’s one of those games where its best visual moments are the ones when you’re in a wide open space, the sprawling ruined city laid out before you. Up close it starts to lose its magic as there’s a lot of repeated asset use without much variety. Still there were numerous times when my wife would peek over my shoulder and exclaim “Pretty!” at my screen, so that has to count for something.
The core gameplay of Submerged is one of exploration as you’re set free in a ruined city to look for supplies, secrets and upgrades for your boat. You could say that there’s a platforming aspect to Submerged as well, since you have to scale buildings and ferret your way through their innards, however it’s quite limited in nature. Thankfully you don’t have to stumble blindly through every building to find what you need as your telescope can highlight things on your map for you. Other than that there’s really not much else to speak of in Submerged as it really is quite a simple game.
Whilst you’ll be scaling great heights there’s no threat of falling off and having to start over as the platforming is strictly controlled. You can’t accidentally let go of a platform, leap to your death or walk off the edge to fall down onto another platform. This does mean that, unlike nearly every other platform game I’ve played, there’s really no tension in any of the climbing sections. At the same time it is kind of nice to switch off and just meander through these sections, and it does give you something of an incentive to explore a little more. Still, if you were looking for a platforming challenge Submerged isn’t the game you’re looking for.
Submerged behaves pretty much as expected however there are a few little quirks that I feel bear mentioning. There’s obviously something a little off about the day/night cycles as, whilst they seem to work fine, the sun and moon don’t move in a smooth motion. Instead they seem to move in fast increments, something which is readily apparent when long shadows are cast across a building. Additionally some of the visual cues for climbing, like the vines and whatnot, aren’t exactly clear at first blush about what you can and can’t climb. A wall covered in small vines? Climbable. A wall covered in large vines? Not climbable, however you can climb pipes which are roughly the same size. Of course once you figure these quirks out it’s easy to spot them but it does make for some frustrating moments.
In terms of story Submerged opts to tell it primarily through hieroglyphics that are revealed to you when you complete an objective. Whilst it’s a novel approach I can’t help but feel that it was done mostly in aid of easing the localization of Submerged more than anything, kind of like why The Sims speak gibberish rather than an actual language. Thus the story, whilst a little touching at some points, lacks any real depth or development that would draw you in. The history of the city is somewhat interesting, however the fact that you only have a few pictures to go on means there’s really not a whole lot to explore, in story terms.
Submerged is a decent experience with a wide open world to explore through stress free platforming. The above average visuals and soundtrack, combined with the relatively low challenge, make Submerged one of the more relaxing experiences I’ve played in recent memory. However that simplicity and lack of challenge means there’s not much to really draw you in, as the story, whilst serviceable, does little to hold your attention. Overall, whilst I’d recommend giving Submerged a go if you’re into exploration type games, there’s just not a lot in there for the general gaming populace.
Submerged is available on PC, Xbox One and PlayStation 4 right now for $19.99 on all platforms. Game was played on the PC with 2 hours of total playtime and 30% of the achievements unlocked.