Computers are better than humans at a lot of things, but there are numerous problem spaces where they struggle. Anything with complex branching or a large number of possibilities forces them into costly searches, negating the benefit of their ability to think in microsecond increments. This is why it took computers so long to go from beating humans at something like tic-tac-toe, a computationally simple game, to beating them at chess. However one game has proven elusive to even the most cutting-edge AI developers: the seemingly simple game of Go. Unlike chess engines, which often rely on brute-forcing out many possible moves and calculating the best one, a Go engine faces an incomprehensibly large number of possible moves, making such an approach near impossible. However Google’s DeepMind, using its AlphaGo system, has successfully defeated the top European player and will soon face its toughest challenge yet.
Unlike previous game-playing AIs, which often relied on calculating board scores for potential moves, AlphaGo is a neural network that’s undergone what’s called supervised learning. Essentially DeepMind took professional-level Go games and fed their moves into a neural network. The network is then told which outcomes led to success and which didn’t, allowing it to develop its own pattern recognition for winning moves. This isn’t what let them beat a top Go player however, as supervised learning is a well-established principle in the development of neural networks. Their secret sauce appears to be the combination of an algorithm called Monte Carlo Tree Search (MCTS) and the fact that they pitted the AI against itself in order for it to get better.
MCTS is a very interesting idea, one that’s broadly applicable to games with a finite set of moves or those with set limits on play. Essentially what an MCTS will do is select moves at random and play them out until the game is finished. Then, once the outcome of that playout is determined, the moves made are used to adjust the weightings of how promising those moves are. This, in essence, allows you to determine which moves are most optimal by progressively narrowing the problem space down to the most promising candidates. Of course the tradeoff here is between how deep you want the search to go and how long you have to make a move.
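To make the idea concrete, here’s a minimal sketch of the random-playout evaluation at the heart of MCTS, using tic-tac-toe as a stand-in since Go is far too large for a toy example. This is “flat” Monte Carlo evaluation only; a full MCTS additionally builds a search tree and balances exploration against exploitation (e.g. via UCT), which is omitted here for brevity.

```python
# Flat Monte Carlo move evaluation for tic-tac-toe: rate each legal move
# by the win rate of purely random playouts that start from it.
import random

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8), (0, 3, 6),
         (1, 4, 7), (2, 5, 8), (0, 4, 8), (2, 4, 6)]

def winner(board):
    """Return 'X' or 'O' if a line is complete, else None."""
    for a, b, c in LINES:
        if board[a] and board[a] == board[b] == board[c]:
            return board[a]
    return None

def random_playout(board, player):
    """Play random moves to the end; return the winner or None for a draw."""
    board = board[:]
    while True:
        w = winner(board)
        empty = [i for i, v in enumerate(board) if v is None]
        if w or not empty:
            return w
        board[random.choice(empty)] = player
        player = 'O' if player == 'X' else 'X'

def best_move(board, player, playouts=200):
    """Pick the legal move with the highest random-playout win rate."""
    scores = {}
    for move in (i for i, v in enumerate(board) if v is None):
        trial = board[:]
        trial[move] = player
        opponent = 'O' if player == 'X' else 'X'
        wins = sum(random_playout(trial, opponent) == player
                   for _ in range(playouts))
        scores[move] = wins / playouts
    return max(scores, key=scores.get)

# X to move with two in a row: the playouts strongly favour completing it.
board = ['X', 'X', None, 'O', 'O', None, None, None, None]
print(best_move(board, 'X'))  # prints 2: finishing the row wins every playout
```

The deeper the playouts and the more of them you run, the better the estimates get, which is exactly the time-versus-quality tradeoff described above.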
This is where the millions of games that AlphaGo played against itself come into play, as they allowed both the neural networks and the MCTS algorithm to be greatly refined. In their single-machine tests it lost to other Go programs only once in almost 500 games. In the match against Fan Hui, however, he faced a veritable army of hardware: some 170 GPUs and 1,200 CPUs. That should give you some indication of just how complex Go is and what it’s taken to get to this point.
AlphaGo’s biggest challenge is still ahead of it though, as it prepares to face down the top Go player of the last decade, Lee Sedol. As an opponent Lee is in a different league entirely, being a 9th dan to Fan’s 2nd dan. How they structure the matches and the infrastructure to support AlphaGo will be incredibly interesting, but whether or not it will come out victorious is anyone’s guess.
I always have a slight feeling of cognitive dissonance when it comes to narratives that are player controlled. On the one hand I love that it allows me to imprint myself upon the character, crafting them into the person I want them to be in the game’s world. On the other hand however I sometimes feel like doing that runs contrary to what the true nature of the character might be, especially when I’m operating on imperfect information about said character. Oxenfree, the first title from Night School Studios (who count former Telltale Games and Disney staff among them), falls somewhere in the middle but still provides a great player driven narrative experience.
Oxenfree puts you in control of Alex, a teenager on the cusp of adulthood who’s heading out to an end-of-year rager with a bunch of her friends. Among them are your best friend Ren, his current crush Nona, Clarissa, a girl who used to date your brother, and your newly minted step-brother Jonas. The night starts off normally enough with everyone engaging in a rousing game of “Truth or Slap”, however things quickly start to unravel as Ren beguiles you into investigating some of the island’s more paranormal features. From then on the night changes from one of drunken revelry into a fight against a paranormal force.
The visual style of Oxenfree harks back to a time of pre-rendered backgrounds with simple 3D visuals layered on top of them. The backgrounds have a kind of textured paper look about them, as if they’re part of an art project. The character models are quite simplistic, obviously done that way to blend in more seamlessly with the backgrounds. However, unlike the games this art style pays homage to, Oxenfree makes heavy use of lighting and visual effects, both for aesthetics and as part of the plot mechanics. Overall, from a visual perspective, Oxenfree is very well crafted and done in a way that amplifies the story rather than distracting from it.
In terms of gameplay Oxenfree is primarily focused on the narrative and the dialogue choices you make as a player. You’re usually given 3 different options when responding, each of which can direct the story in a certain way. The main puzzle mechanic comes in the form of a radio which you tune to different stations, either to listen in for clues or to resonate with objects, which will cause something to happen. There are also other puzzles which range from simple to nigh on impossible, although thankfully the latter, even if failed completely, will not stop you from progressing the narrative.
Oxenfree gets credit for keeping the story linear in nature whilst giving you the freedom to explore should you choose to do so. Too often I’ve played similarly styled games which lock core story elements behind inordinate numbers of puzzles, detracting from the narrative. The puzzle mechanics might be simple but they’re enough to keep you engaged through the times when there’s less dialogue about. One criticism I will level at them, however, is the “improved” radio, which just doubles the number of frequencies you have to cycle through. Honestly that just adds tedium, as you have to scroll through far more stations in order to find the right one.
Oxenfree’s narrative deals with a lot of heavy subjects and does so through the lens of a teenage coming of age story. The paranormal aspects, whilst being downright scary in their own way, are used more as a mechanic to explore these issues rather than just being a license to do whacky things. You, as Alex, have quite a lot of control over how the story develops and this can radically change how you feel about the characters and, most interestingly, how they feel about each other. I really can’t say much more without wading into spoiler territory but suffice to say that Oxenfree delivers a solid narrative that deals well with issues that the video game medium is still coming to grips with.
Oxenfree is a powerful narrative-driven game, one that shows how simplicity in all things but story can still add up to a great experience. The visual style pays homage to simpler times when pre-rendered backgrounds were a tool to get around the limitations of the day. The mechanics are simple and do their best to get out of the way of the story. The story is what makes Oxenfree worth playing, both for the core narrative and for the level of control the player is given over shaping it. For those who love a good story, or just a decent thriller, Oxenfree is definitely worth a play through.
Oxenfree is available on PC and Xbox One right now for $19.99. The game was played on the PC with around 3 hours of total play time and 38% of the achievements unlocked.
Our cosmic backyard is still a mostly undiscovered place. Sure, we know of all the major planets that share the same orbital plane as us, but discoveries like the dwarf planets in the asteroid and Kuiper belts are still recent events. Indeed the more we look at the things that are right next door to us the more it leads us to question just how some of these things came to be. It was the strange orbits of a few Kuiper belt objects that led to the most recent discovery: the potential existence of a 9th planet orbiting our sun.
Why, I hear you ask, if we have a 9th planet have we not come across it before? Well, if confirmed, the reason for us not seeing this planet before is simple: it’s just too damn far away. Pluto, which was discovered in 1930, orbits some 5.9 billion kilometers from the sun on average, whilst Planet 9 (as it is being called) would be roughly 5 times that distance at the closest point of its orbit. Since planets don’t produce their own light we can only see them when they reflect the light of their parent star and, that far out, our sun is a dim speck that barely illuminates anything. That, coupled with the fact that its orbit is steeply inclined relative to ours, makes detection rather difficult, and we’ve only found it now due to the effect it’s having on other Kuiper belt objects.
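The faintness compounds quickly because reflected light gets hit by the inverse-square law twice: once on the way out from the sun and again on the trip back, so apparent brightness falls off roughly with the fourth power of distance. A quick back-of-the-envelope sketch, under the simplifying (and purely illustrative) assumption of a body with Pluto’s size and reflectivity:

```python
# Rough sketch of why a distant planet is so faint: reflected brightness
# scales as ~1/d^4 (inverse square out to the planet, inverse square back).
# Distances in AU; the 5x figure is from the article, the rest is an
# illustrative assumption (same size and reflectivity as Pluto).

pluto_d = 39.5            # Pluto's average distance from the sun, in AU
planet9_d = 5 * pluto_d   # roughly 5 times further out

dimming = (planet9_d / pluto_d) ** 4
print(f"~{dimming:.0f}x fainter than Pluto for the same size")  # 5^4 = 625
```

Hundreds of times fainter than Pluto, an object which itself took until 1930 to find, which goes a long way towards explaining the lack of a direct sighting.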
The researchers who made the discovery, Konstantin Batygin and Mike Brown (the latter previously credited with the discovery of the dwarf planet candidate Sedna), were first intrigued by a group of Kuiper belt objects that all shared relatively similar orbital properties. Now, due to the sheer number of objects that happen to be in the area, some of this clustering is likely to occur by chance, however such chance alignments usually result in unstable orbits. These objects seemed to be quite happy in their strange orbits, so either there had to be a large body, likely a planet, keeping them in line or some other force was at play. In order to verify this one way or the other a planetary model was developed and then simulated to see what other effects a planet might have.
Their simulations predicted that there should also be other Kuiper belt objects with orbits perpendicular to Planet 9’s orbit. Looking at the data gathered on the numerous objects that exist within the Kuiper belt, the researchers found 5 objects that matched the simulation’s predictions, a good indicator that a planet is responsible for both them and the other peculiar orbits. This also helped to constrain some attributes of the planet, like its potential mass (10 times that of Earth) and its likely orbital period (10,000+ years). Interestingly enough this helps to fill in a gap in our solar system’s construction, as current models predict that the most common type of planet is one of around Planet 9’s mass.
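That orbital period can be sanity-checked with Kepler’s third law, which for bodies orbiting the sun reduces to P² = a³ with P in years and a in AU. The ~460 AU semi-major axis below is an illustrative assumption chosen to reproduce the quoted period, not a measured value:

```python
# Kepler's third law sanity check: P^2 = a^3 for sun-orbiting bodies,
# with the period P in years and the semi-major axis a in AU.
# a = 460 AU is a hypothetical value, picked to match the quoted period.

def orbital_period_years(a_au: float) -> float:
    return a_au ** 1.5

print(f"{orbital_period_years(460):,.0f} years")  # roughly 10,000 years
```

An orbit of that scale is why the search is so hard: the planet spends millennia crawling through a tiny patch of sky.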
The researchers are now looking to directly image the planet in order to confirm that it exists. There’s potential for it to show up in data already collected, however that will only work if it’s currently near the closest point of its orbit. If it’s further out then time will be required on some of the larger ground-based telescopes, or potentially one of the space-based ones, in order to observe it. Either way direct confirmation is some way off, but it is surely forthcoming.
We humans were born in stars. Our elements were forged in the crucible of exploding stars, ones that had come to the end of their lives and then erupted in a single cataclysmic event. This process has been going on for billions of years, which is why we find our universe full of many of the elements that make up the periodic table and not just a melange of hydrogen. Like stars, supernovae come in a variety of shapes and sizes, and a recently observed one, dubbed ASASSN-15lh, sets the record for the brightest ever observed. In fact it was so bright that we’re only just barely able to explain how it might have happened.
ASASSN-15lh was first observed just over a year ago, initially showing up as a transient spot in observations conducted by the All Sky Automated Survey for SuperNovae. Further observations, conducted by the du Pont Telescope in Chile and the Southern African Large Telescope, confirmed that it was a noteworthy event that required further investigation. A final observation was then conducted by the Swift space telescope, after which the Central Bureau of Astronomical Telegrams designated it SN 2015L. The observations confirmed that this was the most luminous supernova ever recorded, something which pushes the boundaries of our understanding of how big events like this can get.
Now most blips don’t warrant the level of scrutiny that ASASSN-15lh received, however the spectrum of the supernova, provided by the du Pont Telescope, was incredibly unusual. The spectrum would match that of a previously seen superluminous supernova, but only if the light had been significantly red-shifted (i.e. it happened so far from Earth that the wavelengths of light had been stretched by the expansion of space to look more red). This is where the observation from the Southern African Large Telescope comes into play, as it confirmed that the light had undergone significant redshifting. This meant that they were looking at an incredibly bright supernova, 3 times brighter than the previous record holder.
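Reading a redshift off a spectrum boils down to comparing where a known spectral feature sits against where it should sit at rest: the stretch factor is (1 + z). A small sketch with hypothetical wavelengths, chosen so the result lands near ASASSN-15lh’s reported redshift of roughly 0.23:

```python
# How a redshift is read off a spectrum. A feature with a known rest
# wavelength appears stretched by a factor of (1 + z) when observed.
# The 400 nm / 493 nm pair below is hypothetical, picked to land near
# ASASSN-15lh's reported z of ~0.23.

def redshift(observed_nm: float, rest_nm: float) -> float:
    """z = (lambda_observed - lambda_rest) / lambda_rest."""
    return observed_nm / rest_nm - 1.0

z = redshift(493.0, 400.0)
print(f"z = {z:.3f}")  # prints z = 0.233
```

Once the redshift is pinned down, the distance follows, and with distance plus apparent brightness you get the true luminosity, which is how the record-setting figure was established.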
How a supernova can get this bright is an incredibly interesting process. Essentially it relies on the star shedding its outer layers first and then forming what’s called a magnetar core. These neutron star variants are shrouded in a magnetic field so powerful that it’s lethal to life even at distances of up to 1,000 km. This magnetar would then have to spin incredibly fast, completing a full revolution every millisecond (the theoretical maximum for these kinds of stars). Then, as the star began to slow, giant magnetic winds would billow forth, slamming into the outer hydrogen layers and producing a shockwave of incredible luminance.
To put in perspective just how bright ASASSN-15lh is: if it had happened anywhere in our galaxy it would have been visible to the naked eye during the day. If it happened in our cosmic backyard it would be as luminous as the moon. At its peak ASASSN-15lh shone 20 times brighter than all the stars in the Milky Way combined.
This explanation, however, relies on everything happening at a perfect maximum in order to produce something as bright as this. Whilst it’s quite possible that the magnetar explanation is sufficient, it’s right on the edge of our understanding, and so it’s very possible that there are other mechanisms at work here that influenced the final outcome. It’s taken a year of observations and research to get to this point, so it’s likely that the data gathered on ASASSN-15lh has numerous more insights to give us on how such incredible events occur.
For me the incredible scale of things like this fills me with a sense of wonder and amazement. To think a single entity could dwarf an entire galaxy like that, even if only for a brief moment, gives you an incredible amount of perspective on all things. Indeed the fact that the atoms and molecules that constitute me were born in such places gives me a sense of connectedness to the universe and all the wonders that dwell within it.
If there’s one thing that SpaceX has shown us, it’s that landing a rocket from space onto a barge in the middle of the ocean is, well, hard. Whilst they’ve successfully landed one of their Falcon 9 first stages on land, not all of their launches will match that profile, hence the requirement for their drone barge. That barge presents its own set of challenges, although the last 2 failed attempts were due to a lack of hydraulic fluid and slower-than-expected throttle response respectively. Their recent launch, which delivered the Jason-3 earth observation satellite into orbit, managed to touch down successfully again, however the rocket failed to stay upright at the last minute.
Elon stated that the failure was due to one of the lockout collets (basically a clamp) not latching properly on one of the legs. Looking at the video above you can see which leg is the culprit, sliding forward and ultimately collapsing underneath the rocket. The current thinking is that the failure was due to icing caused by heavy fog at liftoff, although a detailed analysis has not yet been conducted. Thankfully this time around the pieces they have to look at are a little bigger than last time’s rather catastrophic explosion.
Whilst it might seem like landing on a drone ship is always doomed to failure, we have to remember that this is what the early stages of NASA and other space programmes looked like. Keeping a rocket like that upright under its own power, on a moving barge no less, is a difficult endeavour, and the fact that they’ve managed to land twice (but failed to remain upright) shows that they’re most of the way there. I’m definitely looking forward to their next attempt as there’s a very high likelihood of that one finally succeeding.
The payload it launched is part of the Ocean Surface Topography from Space mission, which aims to map the height of the earth’s oceans over time. It joins one of its predecessors (Jason-2) and combined they will be able to map approximately 95% of the world’s ice-free oceans every 10 days. This allows researchers to study climate effects, provide forecasting for cyclones and even track animals. Jason-3 will enable much higher resolution data to be captured and paves the way for a future single mission planned to replace both of the current Jason series satellites.
SpaceX is rapidly decreasing the cost of access to space, and once they perfect the first stage landing on both sea and land they’ll be able to push it down even further. Hopefully they’ll extend this technology to their larger family of boosters, one of which is scheduled to be test flown later this year. That particular rocket would reduce launch costs by a factor of 4, getting us tantalizingly close to the $1,000/kg threshold that, when achieved, will be the start of a new era of space access for all.
Despite what others seem to think I’ve always liked the idea behind cryptocurrencies. A decentralized method of transferring wealth between parties, free from outside influence, has an enormous amount of value as a service. Bitcoin was the first incarnation of this idea to actually work, creating the proof-of-work system and the decentralized nature that was critical to its success. However the Bitcoin community and I soon parted ways, as my writings on its use as a speculative investment vehicle rubbed numerous people the wrong way. It seems that the tendency to run against the groupthink runs all the way to the top of the Bitcoin community and may ultimately spell its demise.
Bitcoin, for those who haven’t been following it, has recently faced a dilemma. The payment network is currently limited by the size of each “block”, basically the size of an entry in the decentralized ledger, which puts an upper limit on the number of transactions that can be processed per second. The theoretical upper limit was approximately 7 per second (~600,000/day), however further development on the blockchain meant that the practical limit was less than half that. Whilst that still sounds like a lot of transactions it’s a far cry from what regular payment institutions handle. This limitation needs to be addressed, as the Bitcoin network already experiences severe delays in confirming transactions and it won’t get any better as time goes on.
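Where that ~7 per second ceiling comes from is simple arithmetic on the block size and the 10-minute block interval. A back-of-the-envelope sketch, where the ~250-byte average transaction size is a rough assumption rather than a protocol constant:

```python
# Back-of-the-envelope sketch of why the 1 MB block size caps throughput.
# The ~250 bytes per transaction figure is a rough assumption; blocks are
# mined on a ~10 minute target interval.

BLOCK_SIZE_BYTES = 1_000_000   # 1 MB block size limit
AVG_TX_BYTES = 250             # assumed average transaction size
BLOCK_INTERVAL_SEC = 600       # target: one block every 10 minutes

tx_per_block = BLOCK_SIZE_BYTES // AVG_TX_BYTES
tx_per_second = tx_per_block / BLOCK_INTERVAL_SEC
tx_per_day = tx_per_second * 86_400

print(f"{tx_per_block} tx/block ≈ {tx_per_second:.1f} tx/s ≈ {tx_per_day:,.0f} tx/day")
```

Plugging in larger average transaction sizes is exactly what drags the practical figure below the theoretical one, and the same arithmetic shows why an 8 MB block would lift the ceiling roughly eightfold.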
Some of the core Bitcoin developers proposed an extension to the core Bitcoin framework called Bitcoin XT. The fork of the original client increased the block size to 8MB and proposed to double the size every 2 years up to 10 times, making the final block size somewhere around 8GB. This would’ve helped Bitcoin overcome some of the fundamental issues it is currently facing but it wasn’t met with universal approval. The developers decided to leave it up to the community to decide as the Bitcoin XT client was still compatible with the current network. The community would vote with its hashing power and the change could happen without much further interaction.
However the idea of a split between the core developers sent ripples through the community. This has since culminated in one of the lead developers leaving the project, declaring that it has failed.
His resignation sparked a quick downturn in the Bitcoin market, seeing the price immediately shed about 20% of its value. Whilst this isn’t the death knell of Bitcoin (it soon regained some of the lost ground) it does show why the Bitcoin XT idea was so divisive. Bitcoin, whilst structured in a decentralised manner, has become anything but that with the development of large mining pools which control the lion’s share of the Bitcoin processing market. The resistance to change has largely come from them and those with a monetary interest in Bitcoin remaining the way it is: under their control. Whilst many will still uphold it as a currency of the people, the unfortunate fact is that Bitcoin is far from that now, and it is in need of change.
It is here that Bitcoin finds itself at a crossroads. There’s no doubt that it will soon run up hard against its own limitations and change will have to come eventually. The question is what kind of change, and whether it will be to the benefit of all or just the few. The core tenets which first endeared me to cryptocurrencies still hold true within Bitcoin, however its current implementation and those who control its ultimate destiny seem to be at odds with them. Suffice to say Bitcoin’s future is looking just as tumultuous as its past, and that’s never been one of its more admirable qualities.
Reaction based games have never really been my strong suit. Ever since the brutality that is the Battletoads bike level I think I’ve been left scarred, the flashbacks of the nigh impossible stretch haunting my thoughts whenever I face a similar challenge. There is something to be said for those kinds of challenges though as, given enough tries, you will eventually succeed. However there are other reaction based games which have no such safety net, forcing you to develop strategies to cope with the unknowable path that lies before you. Linea is an example of these kinds of games, one where a random set of obstacles must be overcome in order for you to be victorious.
The idea behind Linea is simple: avoid all the obstacles for 60 seconds. This is, of course, much harder than it sounds, as whilst the objects you need to avoid are telegraphed before they reach you it’s not like you have forever to make up your mind. Hesitate, or make a wrong move that you try to correct, and you’ll likely collide with something, sending you all the way back to the start. However once you start again the pattern of objects will change, which means no amount of memorization will help you succeed. Instead you have to learn to understand the visual cues being given to you and, critically, translate them into the right kind of movement.
This is where Linea’s minimalistic aesthetic is both a blessing and a curse. Whilst there’s little extraneous stuff to distract you, there’s just enough to trigger you into reacting the wrong way. The level below, for instance, has obstacles that you will avoid automatically if you do nothing. However should you only look at the top or bottom half of them chances are you’ll think you need to do something to avoid them and, unfortunately, hit them. Whilst there are some repeating patterns within the randomness (or that could just be me noticing patterns in RNG, I’m not 100% sure) you’ll need to hone your reflexes in order to beat Linea, something which I simply haven’t had the patience for over the past week.
Although this doesn’t count either way in terms of my review, one thing I did think would be cool would be to code an AI to play the game for me. The game’s simplicity lends itself well to a first-time project and would cover all the basics of visual processing, input management and look-ahead algorithms. This could also just be me thinking it’d be easier to code something than to actually, you know, beat the game myself, but it’s one of the few games in a long time where I thought that would be an interesting thing to do. (For the record the last one I can remember wanting to do that for was Super Meat Boy.)
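The decision-making half of such a bot could be surprisingly small. Here’s a toy sketch of the look-ahead idea against a made-up game model (Linea exposes no such API; the lane abstraction and every name here are hypothetical): the player sits in one of several lanes, each frame brings a newly telegraphed row of obstacles, and the bot greedily picks a safe lane, preferring to stay put so it never over-corrects.

```python
# Toy look-ahead dodger for a hypothetical lane-based obstacle game.
# The real work in a Linea bot would be the visual processing that
# produces next_row; this is only the decision step.

LANES = 5  # assumed playfield width

def choose_lane(current: int, next_row: set[int]) -> int:
    """Pick a lane for the next frame. next_row holds the blocked lanes."""
    # Doing nothing is best when the current lane stays clear,
    # mirroring the "you avoid them automatically if you do nothing" levels.
    if current not in next_row:
        return current
    # Otherwise move to the nearest clear adjacent lane.
    for step in (1, -1):
        candidate = current + step
        if 0 <= candidate < LANES and candidate not in next_row:
            return candidate
    return current  # boxed in; a collision is unavoidable

print(choose_lane(2, {0, 4}))  # prints 2: stays put
print(choose_lane(2, {2}))     # prints 3: dodges sideways
```

A fuller version would look several rows ahead rather than one, which is where the look-ahead algorithms mentioned above would actually earn their keep.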
Linea is a challenging reaction-based game that’s sure to delight fans of the genre. The core game is fast paced with minimal downtime, ensuring that you’re not doing much else but bashing your head against the game’s primary challenge. The minimal visual aesthetic is great to look at but also a punishing part of the core game, making it that much more difficult to visually distinguish everything on screen. Whilst it might not be my usual cup of tea I was surprised I played it for as long as I did, especially when I started chasing a few achievements.
Linea is available on PC right now for $1.99. Total play time was 1 hour with 62% of the achievements unlocked.
Dry ice is a very interesting substance, both from a scientific and an “it’s just plain cool” point of view. Many are familiar with the billowing clouds of smoke it can produce when placed in water, seemingly a staple of anything that needs to be made to look spooky. Others will know it for its culinary applications, able to cool things down far more rapidly than any fridge or freezer. However what’s less understood is the mechanism by which dry ice actually works, one that can produce some rather interesting effects like those shown below.
Dry ice is the solid form of carbon dioxide which, thanks to its useful properties, has found many everyday applications. It’s also quite easy to manufacture, as carbon dioxide is a byproduct of many other industrial processes. This gas is trapped and pressurized, changing it into a liquid. The pressure is then released, causing some of the liquid to boil off, which rapidly cools the remainder. This forms a kind of carbon dioxide snow which can then be compressed into blocks or small pellets. Industrial users often take the large blocks whilst the pellets find their way into more everyday uses.
The video above demonstrates a property of dry ice that’s not completely obvious if you don’t know what to look for. Carbon dioxide doesn’t have a liquid state at atmospheric pressure, which means it transitions directly from a solid to a gas, bypassing the liquid phase. This process is called sublimation and means that the entire surface of dry ice is constantly emitting carbon dioxide gas. When you put something on top of it, like the large metal part shown above, the gas has to squeeze past the surface in order to escape. This is akin to stretching the neck of a balloon to make that loud screeching noise, which is why this part appears to “scream”.
There are many other videos of people producing similar effects with dry ice and other metal objects like spoons and pennies. One interesting thing I noted from some of them was that the screaming effect would often stop after a short period of time. I believe this is due to the metal’s temperature approaching that of the dry ice, meaning it no longer sublimates fast enough at the contact surface to sustain the effect. The part in the video above is likely carrying quite a bit of heat, which is why the screaming continues for so long.
Quite fascinating, if I do say so myself.
The Oculus Rift Kickstarter campaign showed that there was an appetite for virtual reality to make a comeback. However the other side of that equation, the ones who’d be delivering experiences through the VR platform, weren’t really prepared to capitalize on it. There are numerous reasons for this but mostly it comes down to consumer VR still being a nascent industry, with the proper tooling still not there to make the experience seamless. Unfortunately it’s something of a chicken-and-egg problem: standards and tooling won’t fully emerge until there’s a critical mass of users, and those users won’t appear until those standards are in place. This is why the Oculus Rift consumer model costs far more than its sticker price suggests.
Many looked towards the Oculus Rift as the definitive VR headset, something which Oculus has obviously taken into account when designing it. Whilst I, as an early adopter of many pieces of technology, may appreciate the no-holds-barred approach for devices like this, I know it limits broader appeal. Whilst this is sometimes a good strategy in order to get your production line stood up (ala Tesla producing the Roadster and then the Model S), Oculus already had that with the previous two iterations of the dev kit. I think what many were expecting was the Model T of VR headsets, and what they got instead was a Rolls-Royce Phantom.
However Oculus is no longer the only name in the game, with both the HTC Vive Pre and PlayStation VR headsets scheduled to come out in the first half of this year. Both of these are targeting a much more reasonable price point, although admittedly their headsets are not as premium as the Oculus Rift. Whilst Oculus’ preorders may have surpassed their expectations I still feel that they alienated a good chunk of their market with the price point they went for. For those who balked at the Oculus’ price the other two headsets could prove to be a viable alternative, and that could spell trouble for Oculus.
Whilst Oculus won’t be going anywhere soon as a company (thanks entirely to the Facebook acquisition), they will likely struggle to cement their position as the market leader in the VR headset space. Indeed the higher price point, which according to Oculus is the bare minimum they can charge, won’t come down significantly until economies of scale kick in. Lower sales volumes mean that will take much longer to happen and, potentially, HTC and Sony could be well on their way to mass-produced headsets that cost a fraction of what the Oculus does.
In the end it comes down to which of the headsets provides a “good enough” experience at the most attractive price. There will always be a market for a premium version of a product, however it’s rare that those models are the ones most frequently purchased. Oculus’ current price point puts it out of the reach of many, a gap which HTC and Sony will rush in to fill in short order. The next year will then become a heated battle for the VR crown, showing which product strategy was the right one. For now my money is on the cheaper end of the spectrum, and I’m waiting to be proved wrong.
Announced back in 2007, Google’s Lunar X-Prize was an incredibly ambitious idea. Originally the aim was to spur the then nascent private space industry to look beyond low earth orbit, hoping to see a new lunar rover land on the moon by 2012. As with all things space though these things take time, and as the deadline approached not one of the registered teams had made enough meaningful progress towards even launching a craft. That deadline now extends to the end of this year and many of the teams are much closer to actually launching something. One of them has been backed by Audi and has its sights set on more than just the basic requirements.
The team, called Part-Time Scientists (PTS), has designed a rover that’s being called the Audi Lunar Quattro. Whilst details of the specifications are scant, the rover recently made a debut at the Detroit Auto Show where a working prototype was showcased. In terms of capabilities it looks to be focused primarily on the X-Prize objectives, sporting just a single instrument pod which contains the requisite cameras. One notable feature is the ability to tilt its solar panels in either direction, allowing it to charge more efficiently during the lunar day. As to what else is under the hood we don’t yet know, but there are a few things we can infer from the goals they have for the Audi Lunar Quattro’s mission.
The Google Lunar X-Prize’s main objective is for a private company (with no more than 10% government funding) to land a rover on the moon, drive it 500m and stream the whole thing back to earth in high definition. It’s likely that the large camera on the front is used for the video stream whilst the two smaller ones either side are stereoscopic imagers to help with driving on the lunar surface. PTS have also stated that they want to travel to the resting site of the Lunar Roving Vehicle left behind by Apollo 17. This likely means that much of the main body of the rover is dedicated to batteries, as they’ll need to cover some 2.3 km in order to tick off that objective.
There are a couple of other objectives they could potentially be shooting for, although the relative simplicity of the rover rules out a few of them. PTS have already said they want to go for the Apollo Heritage Prize, so it wouldn’t be a surprise if they went for the broader Heritage Prize as well. There’s also the possibility they could be going for the range prize, as if their rover is capable of covering half the distance then I don’t see any reason why it couldn’t do it again. The rover likely can’t get the Survival Prize, as surviving a lunar night is a tough challenge for a solar-powered craft. I’d also doubt its ability to detect water, as that single instrument stalk doesn’t look like it could house the appropriate instrumentation.
One thing that PTS haven’t yet completed though, and this will be crucial to them succeeding, is locking in a launch contract. They’ve stated that they want to launch a pair of rovers in the 3rd quarter of 2017 however without a launch deal signed now I’m skeptical about whether or not this can take place. Only 2 teams competing for the Lunar X-Prize have locked in launch contracts to date and with the deadline fast approaching it’s going to get harder to find a rocket that has the required capabilities.
Still, it’s exciting to see the Lunar X-Prize begin to bear fruit. The initial 5-year timeline was certainly aggressive but it appears to have spurred numerous companies on towards achieving the lofty goal. Whilst it might take another 5 years past that original deadline to fulfill it, the lessons learned and technology developed along the way will prove invaluable both on the moon and back here on earth. Whilst we’re not likely to see a landing inside of this year I’m sure we’ll see something the year after. That’s practically tomorrow, when you’re talking in space time.