Magic Leap: Next Level Virtual Reality.

It’s rare that we see a technology come full circle like virtual reality has. Back in the 90s there was a surge of interest in it, with the large, clunky Virtuality machines being found in arcades and pizza joints the world over. Then it fell by the wayside, the expensive machines and the death of the arcades cementing them as a 90s fad. However, the last few years have seen a resurgence of interest in VR, with numerous startups and big brands hoping to bring the technology to the consumer. For the most part they’re all basically the same; however, there’s one that’s getting some attention and, when you see the demo below, you’ll see why.

Taken at face value the above demo doesn’t really look like anything different from what current VR systems are capable of, however there is one key difference: no reference cards or QR codes anywhere to be seen. Most VR works off some form of visual cue so that it can determine things like distance and position, however Magic Leap’s system appears to have no such limitation. What’s interesting about this is that they’ve repurposed another technology in order to gather the required information. In the past I would’ve guessed a scanning IR laser or something similar, but it’s actually a light-field sensor.

Just like the ones that power the Lytro and the Illum.

Light-field sensors differ from traditional camera sensors by being able to capture directional information about the light in addition to its brightness and colour. For the consumer grade cameras we’ve seen based on this technology it meant that pictures could be refocused after the image was taken and even given a subtle 3D effect. For Magic Leap however it appears that they’re using a light-field sensor to map out the environment, providing a 3D picture of whatever it’s looking at. Then, with that information, they can superimpose a 3D model and have it realistically interact with the world (like the robot disappearing behind the table leg and the solar system reflecting off the table).
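To give a rough idea of what that directional information buys you, here’s a minimal sketch of the “shift-and-add” refocusing trick that consumer light-field cameras use. It’s purely illustrative: the `subviews` array layout, the `refocus` function and the focus parameter are my own assumptions, not anything Magic Leap has actually disclosed about their system.

```python
import numpy as np

def refocus(subviews, alpha):
    """Synthetically refocus a light field.

    subviews: array of shape (U, V, H, W) holding the sub-aperture images
    captured behind a microlens array. alpha: relative focal depth, where
    1.0 keeps the original focal plane."""
    U, V, H, W = subviews.shape
    out = np.zeros((H, W))
    for u in range(U):
        for v in range(V):
            # Each view is shifted in proportion to its offset from the
            # optical centre, then all the views are averaged together.
            du = int(round((u - U // 2) * (1 - 1 / alpha)))
            dv = int(round((v - V // 2) * (1 - 1 / alpha)))
            out += np.roll(subviews[u, v], shift=(du, dv), axis=(0, 1))
    return out / (U * V)
```

Sweep the focus parameter and record where each pixel comes out sharpest and you get a rough depth map, which is exactly the kind of 3D information described above.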

Whilst Magic Leap’s plans might be a little more sky-high than a mere entertainment device (it appears they want to be a successful version of Google Glass), entertainment is most certainly where their primary market will be. Whilst we’ve welcomed smartphones into almost every aspect of our lives it seems that an always-on, wearable device like this is still irksome enough that widespread adoption isn’t likely to happen. Still, even in that “niche” there’s a lot of potential for technology like this and I’m sure Magic Leap will have no trouble finding hordes of willing beta testers.


Abandoned Games Can be Legally Resurrected, Free of DRM.

Many older games, like those built before the Internet was as ubiquitous as it is today, are playable so long as you can figure out how to install them. This can be no small feat in some instances, although emulators like DOSBox do a lot of the heavy lifting for you. However for slightly more modern games, especially those that relied on DRM or activation servers in order to work, getting them installed is only half the battle. Quite often those activation servers have long since shut down, leaving you with few options if you want to enjoy an older title. Typically this meant turning to less than legitimate sources for a cracked version of the main executable, free from the checks that would otherwise prevent it from working. This practice however has now been legitimized thanks to a ruling by the Library of Congress, spurred on by the Electronic Frontier Foundation.


The ruling allows gamers to circumvent any protection measures in abandoned games that would prevent “local play” of a copy they legally purchased. Essentially this means that if a central server is shut down (or made inactive without explanation for 6 months) then you’re free to do whatever you need to in order to resurrect the game. Considering so many of us now rely on Steam or other digital distribution platforms this ruling is critical to ensuring that we’ll be able to access our games should the unthinkable happen. It also means that more recent abandonware titles that had central DRM servers can now be legally resurrected. For many of us who still enjoy old games this certainly is a boon, although it does come with a couple of caveats.

Probably the biggest restriction that the Library of Congress placed on this ruling was that multiplayer services were not covered by the exemption. What that means is that, should a game have a multiplayer component, creating the backend services to support it is still not a legal activity. Additionally, should the protection mechanisms be contained within a console, the exemption does not cover modification of said console in order to resurrect the game. Whilst I can understand why circumventing console protections wasn’t included (that’s essentially an open season notice to pirates) the multiplayer one feels like it should have been. Indeed a lot of games thrived on their multiplayer scene and not being able to bring back that component could very well mean the game never gets brought back at all.

The exemptions come as part of the triennial review that the Library of Congress conducts of the Digital Millennium Copyright Act (DMCA). In the past exemptions have also been granted for things such as jailbreaking phones and the fair use of sampled content from protected media. There’s potential in a future review for the exemptions to be extended, which could open up further modification capabilities in order to preserve our access to legally purchased games. However the Entertainment Software Association has fervently opposed both the multiplayer and console modification exemptions, so it will be a tough fight to win any further ground.

These exemptions are good news for all gamers as they mean that many more titles will remain playable long into the future. We might not have the full freedom we need yet but it’s an important first step towards ensuring that the games of our, and future generations’, time remain playable to all.


Bad vs Sad Endings in Games.

In the age of sequels, spin-offs and re-releases that we find ourselves in, true endings to games are becoming increasingly rare. AAA titles will always have an eye towards a sequel or another instalment, often at the cost of a succinct narrative that ends satisfactorily. Story-first games have gone some way to alleviating this problem, focusing on narrative elements rather than gameplay, however they are still very much in the minority. More interesting though are the games which don’t have the Hollywood ending that many have come to expect and are incorrectly labelled as ending poorly. It’s these kinds of games which challenge our preconceived ideas about what it means for something to end well versus it ending nicely.

Bad vs Sad Endings

To illustrate my point I want to show you two examples of games where the ending wasn’t Hollywoodized, but one was well executed whilst the other was not (and there will be MAJOR PLOT SPOILERS for both). The first is probably one of the most lamented endings in recent gaming history: Mass Effect 3. The second is one of the sleeper hits of its time, well known as a standout IP among story-first gamers: Red Dead Redemption. Both of these games share a commonality in that their ending was tragic, leaving you feeling like you were done a great injustice by the eventual outcome; however the former did so in a way that was incongruent with the rest of the story whilst the latter was the bitter conclusion that had been built up over the entire game.

Mass Effect’s story, and the effect you could have on it, was the selling point that attracted many gamers to the franchise. You could sculpt Shepard, both literally and figuratively, into the character that you wanted them to be. Decisions you made echoed throughout the whole storyline and you had to bear the weight of their outcomes whether they were what you intended or not. The ultimate (and original, I won’t talk about the DLC’s efforts to remedy the issues) ending however threw all of this away, that burden you carried through the entire game cast aside in favour of an endotron 3000 deus ex machina that asked you to choose one of three possible outcomes. Fans of the series weren’t upset that the Mass Effect trilogy was coming to an end (we all knew what we were in for from the start); we were upset that so much of what we had built up meant nothing in the end.

Shepard was also not built up to be a tragic hero. Sure, there were many tough decisions to make along the way, many of which resulted in dire consequences, however central to all of them was the fact that Shepard was able to overcome them. His untimely end (or, weirdly, lack thereof in one ending which made little sense) was completely out of line with the character that had been built up to that point. There was every chance to start moulding Shepard for such a fate from the first title, heck even the final instalment had ample opportunity to do so, but the Starchild ending fell flat because it was a round hole solution to the square peg of Shepard.

John Marston, on the other hand, is a tragic hero whose incredibly sad story was built up from the opening scene. From the very beginning you know that Marston has a past that he’s trying to escape from, but it’s catching up to him faster than he can run. There are moments where you think everything is going to work out, small glimmers of hope that this next thing will set him free, but they all come back around eventually. The entire story is one of struggle against himself, his past and the future he’s trying to build for his family, and the sacrifices he needs to make in order for that to happen.

The ultimate ending, one which I replayed several times over in the hope that there was some way I could overcome the odds, is the conclusion that had been built up over the course of the entire game. It’s not the ending I wanted (as the anger I felt at the end will attest) but it was the ending the story needed. Had they strayed away from it, instead allowing Marston to live on with his past no longer hanging over him, it would have completely ruined him as a character. I might not have felt great after it happened but it was one of those endings that stuck with me long after the console was off and made me question how I felt about the whole story and not just its conclusion.

As you’ve likely picked up on, the crux of what makes an ending good or bad, regardless of what emotional state it leaves you in, is whether or not the story has been built up to service its conclusion. This isn’t something that’s unique to video games, but it’s something that’s been given new light by the medium. As the medium matures we will increasingly see titles that buck the Hollywood happy ending trend and we’ll have to continually ask ourselves what it means for a game to end well. One thing will remain certain though: the conclusion to a story must be supported by all that preceded it.


Not Your Typical Fusion Reactor: The Stellarator.

When you read news about fusion it’s likely to be about a tokamak type reactor. These large doughnut-shaped devices have dominated fusion research for the past three decades, mostly because of their relative ease of construction when compared to other designs. That’s not to say they’re without their drawbacks, as the much delayed ITER project can attest, however we owe much of the recent progress in this field to the tokamak design. Still, there are other contenders that, if they manage to perform at similar levels to tokamaks, could take over as the default design for future fusion reactors. One such design is called the stellarator and its latest incarnation could be the first reactor to achieve the dream: steady state fusion.

Wendelstein 7-X

Compared to a tokamak, which has a uniform shape, the stellarator’s containment vessel appears buckled and twisted. This is because of the fundamental design difference between the two reactor types. You see, in order to contain the hot plasma, which reaches temperatures of 100 million degrees Celsius, fusion reactors need to confine it with a magnetic field. Typically there are two types of fields: one that provides the pinch or compressing effect (the poloidal field) and another that keeps the plasma from wobbling about and hitting the containment vessel (the toroidal field). In a tokamak the poloidal field comes from within the plasma itself, by running a large current through it, whilst the toroidal field comes from the large magnets that run the length of the vessel. A stellarator however provides both the toroidal and poloidal fields externally, requiring no plasma current but necessitating a wild magnet and vessel design (pictured above). Those requirements are what have hindered stellarator designs for some time, however with the advent of computer aided design and construction they’re starting to become feasible.
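For those curious about the numbers involved, below is a back-of-the-envelope sketch of the two fields using the standard textbook formulas for a tokamak-style device. The coil counts, currents and radii are illustrative placeholders of my own choosing, not Wendelstein 7-X (or any real reactor’s) parameters.

```python
from math import pi

MU_0 = 4 * pi * 1e-7  # vacuum permeability (T*m/A)

def toroidal_field(n_coils, coil_current, major_radius):
    """Toroidal field on the plasma axis from the external coils: B = mu0*N*I / (2*pi*R)."""
    return MU_0 * n_coils * coil_current / (2 * pi * major_radius)

def poloidal_field(plasma_current, minor_radius):
    """Poloidal 'pinch' field at the plasma edge produced by the plasma
    current itself (the tokamak approach): B = mu0*Ip / (2*pi*a)."""
    return MU_0 * plasma_current / (2 * pi * minor_radius)

# Illustrative values only: 200 coil turns carrying 50 kA around a 5.5 m
# major radius, and a 1 MA plasma current with a 0.5 m minor radius.
print(toroidal_field(200, 50e3, 5.5))  # ~0.36 T from the external magnets
print(poloidal_field(1e6, 0.5))        # ~0.40 T from the plasma current
```

In a stellarator that second contribution has to come from the shaping of the external coils instead, which is precisely why the vessel ends up so contorted.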

The Wendelstein 7-X, the successor to the 7-AS, is a stellarator that’s been a long time in the making, originally scheduled to have been fully constructed by 2006. However, due to the complexity and precision required of the stellarator design, which was only completed with the aid of supercomputer simulations, construction only finished last year. The device itself is a marvel of modern engineering, with the vast majority of the construction being completed by robots, totalling some 1.1 million hours. The last year has seen it pass several critical validation tests, including containment vessel pressure tests and magnetic field verification. Where it really gets interesting though is where the future plans lead: to steady state operation.

The initial experiments will be focused on short duration plasmas, with the current microwave generators able to produce 10 MW in 10 second bursts or 1 MW for 50 seconds. This is dubbed Operational Phase 1 and will serve to validate the stellarator’s design and operating parameters. Then, after the completion of some additional construction work to add a water cooling system, Operational Phase 2 will begin, which will allow the microwave system to operate in a true steady state configuration for up to 30 minutes. Should Wendelstein 7-X be able to accomplish this it will be a tremendous leap forward for fusion research and could very well pave the way for the first generation of commercial reactors based on this design.
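Some quick arithmetic on those heating figures shows just how big the jump from Operational Phase 1 to steady state really is. Note that assuming the full 10 MW for the 30 minute discharge is my own simplification, as the power level for Operational Phase 2 isn’t specified here.

```python
# Energy injected per pulse is just power multiplied by duration.
pulses = {
    "OP1 short burst": (10e6, 10),        # 10 MW for 10 s
    "OP1 long burst": (1e6, 50),          # 1 MW for 50 s
    "OP2 steady state": (10e6, 30 * 60),  # assuming 10 MW for 30 minutes
}
for name, (power_w, duration_s) in pulses.items():
    print(f"{name}: {power_w * duration_s / 1e6:,.0f} MJ injected")
# OP1 short burst: 100 MJ, OP1 long burst: 50 MJ, OP2 steady state: 18,000 MJ
```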

Of course we’re still a long way away from reaching that goal, but this, coupled with the work being done at ITER, means that we’re far closer than we’ve ever been to achieving the fusion dream. It might still be another 20 years away, as it always is, but never before have we had so many reactor designs in play at the scales we have today. We’ll soon have two (hopefully) validated designs, done at scale, that can achieve steady state plasma operations. Then it simply becomes a matter of economics and engineering, problems that are far easier to overcome. No matter how you look at it the clean, near limitless energy future we’ve long dreamed of is fast approaching and that should give us all great hope.


Planetbase: They’re All Dead and It’s All My Fault.

For some reason the gaming community has thrived on titles that are, for want of a better word, incredibly brutal. The trend started to take root after the first Dark Souls game, which prided itself on not holding the player’s hand, nor caring if it proved too difficult to be enjoyable. On first look such games are the antithesis of the base ethos of games: that they be fun above all else. However such games, when played well, provide a sense of satisfaction beyond that of titles which are perhaps a little more forgiving. Planetbase is a city building game in this vein, putting you in control of an offworld colony which, if managed incorrectly, will suffer dire consequences.


You are the invisible hand that will guide these colonists to establishing a viable colony. Upon landing on your planet of choice, with your colony ship full of resources and a handful of aspiring colonists, it’s up to you to give them everything they’ll need to survive. In the beginning their needs are simple, oxygen and water being all you’ll need to make it through the first night, but after that you must find a way to provide them everything they need. Like all closed ecosystems though these things need to be created in balance and should that not be done you will quickly find yourself facing catastrophe. Will you be the leader that leads your colony to success? Or will you become the agent of their destruction?

Planetbase has that Unity-esque feeling that most games developed on the engine have. It’s hard to quantify exactly what it is but like Flash games before them they all seem to share a similar visual style that became something of a trademark. This is especially true for Planetbase which feels like the colonist version of Kerbal Space Program. The simple visual style is partly due to performance reasons, something which could become a concern with larger bases. The visual simplicity also helps a lot with making sure you can keep track of your base layout, something which becomes increasingly difficult as your base grows. Overall, whilst Planetbase won’t win any awards for its graphics, they are far more than sufficient and are perfectly suited to the type of game that it is.


Your goal in Planetbase is simple: you have to build a self-sustaining colony on a new world. As you click your way through the tutorial this seems rather easy; there’s a logical progression to the structures you need to build in order to satisfy the growing needs of your colonists. However once you’re in the real game it’s easy to forget a critical step, which leads to the untimely demise of your entire colony (as the above screenshot, taken not 5 minutes into my first game, can attest). Like all city building games there are numerous resources that you need to collect, create and manage in order to ensure that everyone in the colony has everything they need. A lack of resources in one place ultimately leads to issues in other areas of your colony and, without proper treatment, to life-ending consequences. The game may warn you of your impending doom every so often but that can often come too late, the alarm bell serving only to inform you of the inevitable.

Getting through your first night sounds like an easy enough challenge but it’s one that’s incredibly easy to get wrong. Should you build your power array and storage too late you won’t have enough to make it through the night. If you forget to build your water extractor you won’t be able to generate enough oxygen, asphyxiating everyone before they have a chance to build the life-saving solution. Thankfully once you’ve figured out these challenges (which are all addressed well enough in the tutorial) surviving the first night becomes child’s play, but the game past that point still provides a significant challenge.


Past the first night your goals turn towards building all the components you’ll need for self sufficiency and that means generating many of the required resources yourself. The first two major ones you’ll need to create are metal and bioplastic, which allow you to create all the structures you’ll need. For most players metal is the first roadblock they’ll encounter as it’s the first thing you run out of and one of the harder ones to produce. There are several strategies to deal with this (and I’ll talk about my approach a bit later) however it’s likely to be the main resource that holds you back for a long time. Once you’ve got a production line of these two resources going the pace of the game slows down significantly as you look towards planning your future expansions.

Typically the next issue most people run into is food as the colony gains more and more people. What’s interesting about this is how many factors can influence the seemingly simple problem of having enough food for everyone, and every single one of them can mean people start going hungry. Not enough biologists to tend to the plants? They won’t make enough food. Not enough mealmakers in the canteen? People will have to wait for meals and there might not be enough to go around. Didn’t monitor the number of colonists you have? Keeping the landing pad constantly open to new arrivals might not be the greatest idea as your food production might simply be unable to cope. It took me a good 3 hours to get food working sustainably and even then it wasn’t the most efficient process.


Indeed if you really want to succeed at building a colony then you have to start thinking in much broader terms from the very get go. Whilst the smaller structures are far cheaper and quicker to construct they are by and large incredibly inefficient. The greatest example of this is the biodome, with the smallest one only allowing you a third of the number of plants of the largest whilst costing far more in relative terms. This means that, if building a big colony is your goal, you’ll have to judge which buildings to build big right off the bat and which to hold off on. For me it took a good 6 hours of play time before I reached this point, and that’s when I was finally able to build a colony that wasn’t always at the brink of disaster.

Once you’ve got that all sorted the final challenge you’ll face is getting the layout of your base right. Whilst this isn’t as impactful as the other resource challenges I’ve mentioned it is something you’ll need to consider as your base grows in size. Placement of things like oxygen generators, processing plants and high traffic areas like bunks and canteens can radically impact the efficiency of a single colonist. If you get the layout wrong, most of the time it just means progress is a lot slower than it could be, but it can sometimes lead to base-destroying issues. One of the best examples I had of this was having one of my bunk rooms too far away from an oxygen generator which, when it filled up at night as people went to sleep, meant that it ran out of oxygen.


Despite all these challenges though Planetbase managed to grip me in a way that few games have, tapping into that part of my brain that needs to know how this complicated system works so I can exploit it. Indeed whilst it took me 8 hours to reach 100 colonists I barely realised I had spent that much time in it, forgetting myself for hours at a time whilst I watched my little puppets go about their daily lives. There were some frustrating moments of course but they are the kinds of stories these games thrive on, those moments where a lapse in concentration or missing component ends up having unintended consequences. It may not be for everyone (unless a brutal version of Sim City is your cup of tea) but for those of us that thrive on challenges like this it’s definitely worth playing.

Rating: 8.75/10

Planetbase is available on PC right now for $19.99. Total play time was approximately 8 hours with 25% of the achievements unlocked.

The Incredible Complexity of Ants’ Antenna Cleaning.

We’ve all watched ants go about their business. They scurry along the ground or up walls, busying themselves with transporting all sorts of things back to their nest. Every so often though you’ll see them stop and begin cleaning themselves, rubbing their antennae vigorously for quite a while before they continue the task at hand. If you’re like me you thought that was a pretty simple thing, all animals need to keep themselves clean, but that simple process belies some incredible evolutionary adaptations that ants have. Indeed as the video shows these adaptations are so advanced that replicating them could provide some benefits for the semiconductor industry.

This translation of evolutionary adaptations into technical applications is called biomimicry and it has played a pivotal role in technological development for quite a while. One of the most notable examples is the development of velcro, which takes inspiration from the hooks present on burrs that allow them to attach to an animal’s fur in order to spread their seeds over a greater distance. The cleaning mechanism that the ants have could prove useful for semiconductor manufacturing, which is very susceptible to contamination, with other potential applications at the micro scale that require similar filtration and cleaning.

Isn’t it amazing what millions of years of evolution can come up with!


The Kilogram Will Soon be Scientific.

Of all the scientific standards the one that is yet to be defined in purely scientific terms is the kilogram. Whilst the other SI units, like the meter, originally had their basis in real world objects, they have since been redefined in purely scientific terms. The meter, once defined by the length of a pendulum with a half-second period, is now defined as the distance light travels in a vacuum in 1/299,792,458 of a second. The reasoning for redefining these measurements in absolute scientific terms has to do with the reproducibility of standard objects, as it’s almost impossible to create two objects that are exactly identical. Such is the issue that the kilogram has faced for much of its life, but that will soon change.


The picture above depicts a replica of the International Prototype Kilogram, a platinum-iridium cylinder machined to exacting specifications which defines the current day kilogram. Its almost exactly sized brother, Le Grand K, is the standard against which all other kilogram measures are compared. There are numerous cylinders like this all around the world and they’re periodically compared to each other to ensure that they’re roughly in alignment. However over time fluctuations have been noted between the prime cylinder and its siblings, which causes scientists all sorts of grief. Essentially, since the kilogram weights are different, even if only by micrograms, these variations need to be accounted for when using the kilogram as a standard. It would be far better if it was rigidly defined, as then scientists would be able to verify their instruments themselves rather than having to rely on a physical object.

It seems we may have finally reached that point.

The trouble, you see, with defining something as nebulous as the kilogram in pure scientific terms is that the definition needs to be reproducible and verifiable. The International Committee for Weights and Measures (CIPM) agreed to express the kilogram in terms of Planck’s constant (the link between a photon’s energy and its frequency). Essentially, experiments would need to be designed to calculate the Planck value using the standard kilogram weight as a measure, which would then allow scientists to describe the kilogram as a function of a physical constant. There were numerous experiments designed to do this, however the two that have come out on top were: creating a single-crystal silicon sphere and counting the atoms in it, and using a device called a watt balance to measure the standard kilogram against an electromagnetic force. These are both scientifically sound ways of approaching the problem, however the latter method struggled to get the required results.
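To make the watt balance idea a little more concrete, here’s a minimal sketch of its “virtual power” balance under the textbook simplification that mechanical power equals electrical power (m·g·v = U·I). All of the numbers are illustrative placeholders rather than values from any real instrument; in practice the voltage and current are themselves tied back to Planck’s constant via the Josephson and quantum Hall effects, which is how the constant enters the chain.

```python
g = 9.80665   # local gravitational acceleration (m/s^2), assumed value
v = 0.002     # coil velocity during the moving phase (m/s), illustrative
U = 1.0       # voltage induced across the coil while moving (V), illustrative
I = 0.0098    # current needed to balance the weight (A), illustrative

# Rearranging m*g*v = U*I for the mass being weighed.
mass = U * I / (g * v)
print(f"Inferred mass: {mass:.4f} kg")  # ~0.4997 kg for these made-up numbers
```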

Essentially, whilst the experiment was capable of producing usable results, they couldn’t get to the level of tolerance that would be required for verification of Planck’s constant. It took several rounds of experiments, and several different research teams, to close in on the issues, however in August this year they managed to home in on Planck’s constant with an uncertainty of 12 parts per billion, enough for the CIPM to accept the results for use in verifying a standard kilogram. This means that these results will likely now form the basis for scientists the world over to validate and calibrate devices that reference the kilogram without having to get their hands on one of the platinum-iridium cylinders.

The change of definition isn’t slated to come into effect until July 2017 and there’s further experimentation to be done between now and then. There is potential for one of the experiments to cause an upset with the other, as any deviation from the currently accepted results will require confirmation from both. Currently the silicon sphere experimenters are in the process of procuring some additional test items for investigation, which could potentially cause this whole thing to start over again. However with the watt balance experiment now having most of the major kinks worked out it’s unlikely this will occur, and the further experimentation will ensure that the error rate is reduced even further.

It won’t mean much of a change to our everyday lives, we’ll continue weighing things with the same scales as we did before, but it will mean a monumental change in the way we conduct scientific research. Finally ridding ourselves of the last physical object that defines one of our measurements will free us from its variability, making our measurements accurate in the truest sense. It’s been a long time coming but there’s light at the end of the tunnel and we’ll soon have no need for those platinum-iridium cylinders. Well, not unless you fancy yourself a really expensive paperweight.


Aliens and Exoplanets.

As far as we know right now we’re alone in the universe. However the staggering size of the universe suggests that life should be prevalent elsewhere and we (or they) have the unenviable task of tracking it (or us) down. We’re also not quite sure what to look for, as whilst we have solid ideas about our kind of life there are no guarantees that they hold true across the galaxy. So when it comes to observing phenomena the last explanation researchers should resort to is “aliens did it”, as we simply have no way of verifying that was the case. It does make for some interesting speculation however, like the current wave of media hysteria surrounding KIC 8462852, or Tabby’s Star as it’s more informally called.


KIC 8462852 was one of the 145,000 stars that were constantly monitored by the Kepler spacecraft, a space telescope designed to detect exoplanets. Kepler’s planet detection method relies on a planet transiting (i.e. passing in front of) its parent star during the observation period. When the planet does this it ever so slightly drops the brightness of the star, and this can give us insights into the planet’s size, orbit and composition. This method has proven to be wildly successful, with the number of identified exoplanets increasing significantly thanks to Kepler’s data. KIC 8462852 has proved particularly interesting however, as its variation in brightness is way beyond anything we’ve witnessed elsewhere.

Indeed, instead of the tiny dips we’re accustomed to seeing (an Earth-like planet around a main sequence star like ours produces a dip of about 84 parts per million), KIC 8462852 has dipped by a whopping 15% and 22% on separate occasions. Typically this wouldn’t be particularly interesting, there are many stars with varying output for numerous reasons, however KIC 8462852 is an F-type main sequence star, which is very similar to ours (a G-type, if you’re wondering). These don’t vary wildly in output, and the scientists have ruled out issues with equipment and other potential phenomena, so what we’re left with is a star with varying output and no great explanation. Whatever is blocking that light has to be huge, at least half the width of the star itself.
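The back-of-the-envelope arithmetic behind those figures is straightforward if you use the usual approximation that a transit’s depth is just the ratio of the disc areas, depth ≈ (R_planet / R_star)². The radii below are the standard values for the Earth and the Sun; everything else follows from the dips quoted above.

```python
from math import sqrt

R_EARTH_KM = 6_371
R_SUN_KM = 696_000

# An Earth-sized planet transiting a Sun-like star:
earth_dip = (R_EARTH_KM / R_SUN_KM) ** 2
print(f"Earth-like transit depth: {earth_dip * 1e6:.0f} ppm")  # ~84 ppm

# Working backwards from KIC 8462852's observed dips:
for dip in (0.15, 0.22):
    print(f"A {dip:.0%} dip implies an occulter about {sqrt(dip):.2f}x the star's radius")
# ~0.39x and ~0.47x, which is where "at least half the width of the star" comes from
```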

There are a few potential candidates to explain this, most notably a cloud of comets on an elliptical orbit that happens to transit our observation path. How exactly that came to be is anyone’s guess, and indeed it would be a rare phenomenon, but it’s looking to be the best explanation we currently have. A massive debris field has been ruled out due to a lack of infrared radiation, something which would be present due to the star heating the debris. This has led to some speculation as to what could cause something like this to happen and some have looked towards intelligent life as the cause.

How could an alien race make a star’s output dip that significantly you ask? Well the theory goes that any sufficiently advanced civilization will eventually require the entire energy output of their star in order to fuel their activities. The only way to do that is to encase the star in a sphere (called a Dyson Sphere) in order to capture all of the energy that it releases. Such a megastructure couldn’t be built instantly however and so to an outside observer the star’s output would likely look weird as the structure was built around it. Thus KIC 8462852, with its wild fluctuations of output, could be in the process of being encased in one such structure for use by another civilization.

Of course such a hypothesis makes numerous leaps that are not supported by any evidence we currently have at our disposal. The research is thankfully focused on finding a more plausible explanation, something which we are capable of doing by engaging in further observations of this particular star. Should all these attempts fail to explain the phenomenon, something which I highly doubt will happen, only then should we start toying with the idea that this is the work of some hyper-advanced alien civilization. Whilst the sci-fi nerd in me wants to leap at the possibility of a Dyson sphere being built in our backyard, I honestly can’t entertain the idea when I know there are so many other plausible options out there.

It is fun to dream, though.


The Beginner’s Guide: Wherefore Art Thou, Coda?

We gamers sometimes forget how personal games are for their creators. Often they’re a reflection of both the creator’s intent and the creators themselves, especially for games that are made by one person or a small independent studio. I think this is partly due to the arms-length relationship most of us have with games thanks to the developer/publisher ecosystem, something which removes much of the potential for a personal connection. The Beginner’s Guide however is a game that attempts to connect with the player on a very personal level and, I feel, is the developer’s way of working through some of the issues he endured after the success of a previous title.

The Beginners Guide Review Screenshot Wallpaper Title Screen

The Beginner’s Guide is a narrated collection of games made by the developer’s friend, Coda. They’re a loose set of quirky titles, many of which defy conventional gaming standards by having things like unsolvable puzzles, areas of grand detail that are completely inaccessible and mechanics that are actively hostile towards the player. The narrator wants to show you these titles because he wants to encourage Coda to start making games again and feels like the only way to do so is to show his craft to the wider world. Whether that will be effective or not is something we might never know, but that might not be the most interesting thing about The Beginner’s Guide.

Graphically The Beginner’s Guide certainly feels like a group of cobbled together games, with varying art styles permeating throughout the course of the game. Knowing that it’s built on the Source engine gives you some insight into where the aesthetic is coming from, as it does feel like an overgrown set of mods for Half-Life. Apart from that there’s not much to speak of in terms of visual aesthetic as the game is much more about the levels themselves, rather than how they look.

The Beginners Guide Review Screenshot Wallpaper Prison


Now this is usually the point in the review where I give you an overview of the mechanics and gameplay before I delve into each of them to give you a feel for what you can expect. However with The Beginner’s Guide, whilst there are mechanics which I could discuss, I don’t feel that’s the real point of the game at all. Instead The Beginner’s Guide is a well crafted narrative, told through the medium of games, about how the game’s developer (Davey Wreden of The Stanley Parable fame) struggled with the burden of success. Indeed it becomes very clear towards the end that Coda is a fictional character and these creations that we’re playing through are actually the product of the narrator who is dealing with his issues through the creation of this game.

I’ll admit that for the vast majority of the game I played along, figuring that this was just a quirky set of games that had been cobbled together for the fun of it. Indeed there was a part of me that was annoyed at Wreden for doing so, charging me $10 for the privilege of playing games he himself did not create. However towards the end, where it’s revealed that Coda had abandoned Wreden because he simply couldn’t be around him any more, it becomes clear that this is a work of fiction. At that point the game changed for me; instead of wondering who Coda was and why he left, I now wanted to know why Wreden would create something like this. It didn’t take long to find out.

The Beginners Guide Review Screenshot Wallpaper The Machine

After rifling through numerous discussion threads I eventually landed on his blog, specifically the most recent post which is about The Stanley Parable’s widespread acclaim. In it he details what the success of that game has meant to him and the burden which he feels he carries for everyone who’s played it. Whilst I might not have reached the level of fame and acclaim that he has I can very much relate to the burden that success can bring to you; how success is supposed to negate all feelings of doubt or worry and erase all problems in your life. Indeed success can do quite the opposite, often dredging up issues or exacerbating current ones.

The Beginner’s Guide then serves as a catharsis for all these feelings, an expression of the mixed emotions a creator feels when their work is recognised and praised widely. The not-so-subtle hints towards Coda’s creative machine no longer working, the fear of being a public figure, the desire to seclude himself away from society, all of these take on new meaning when you realise they’re actually about the developer himself and not the fictional being of Coda. In that regard The Beginner’s Guide is one of the most personal games I’ve ever played and I’m very glad I did.

The Beginners Guide Review Screenshot Wallpaper Final Maze

The Beginner’s Guide is a personal journey, both for the player and the developer. It’s Davey Wreden working through the trials and tribulations that the success of The Stanley Parable brought him, and you’re along for the ride. Indeed The Beginner’s Guide shows how games can be used as a medium to work through things like this, just as more traditional mediums have been in the past. It might not be a game for everyone, especially for those expecting something more along the lines of The Stanley Parable, but it’s a wonderful experience all the same. One that had me thinking long after I closed the game down.

Rating: 8.5/10

The Beginner’s Guide is available on PC right now for $9.99. Total play time was 1.5 hours.

3D Printing With Rocks and String.

Ever since my own failed attempt to build a 3D printer I’ve been fascinated by the rapid progress that has been made in this field. In under a decade 3D printing has gone from a niche hobby, one that required numerous hours to get working, to a commodity service. That engineering work has since been translated to different fields and to numerous materials beyond simple plastic. However every so often someone manages to do 3D printing in a way that I had honestly never thought of, like this project where they 3D print a sculpture using rocks and string:

Whilst it might not be the most automated or practical way to create sculptures it is by far one of the most novel. Like a traditional selective laser sintering printer, each new layer is formed by piling material over the previous one. This is then secured by placing string on top of it, gradually forming the eventual shape of the sculpture. They call this material reversible concrete, which is partly true: the aggregate they appear to be using looks like the stuff you’d use in concrete, however I doubt the structural properties match those of its more permanent brethren. Still, it’s an interesting idea that could have some wider applications outside the arts space.