We’ve known for some time that water exists in some form on Mars. The Viking program, which consisted of both orbiter and lander craft, showed that Mars’ surface has many characteristics that must have been shaped by water. Further probes such as Mars Odyssey and the Phoenix lander showed that much of the water Mars holds today is trapped at the poles in the vast frozen tundra. There’s been a lot of speculation about how liquid water could exist on present-day Mars, however no conclusive proof had been found. That changed today when NASA announced it has proof that liquid water flows on Mars, albeit in a very salty form.
The report comes out of the Georgia Institute of Technology with collaborators from NASA’s Ames Research Center, Johns Hopkins University, University of Arizona and the Laboratoire de Planétologie et Géodynamique. Using data gathered by the Mars Reconnaissance Orbiter the researchers identified seasonal geologic features on Mars’ surface. These dark streaks (pictured above), dubbed recurring slope lineae (RSL), change over time, darkening and appearing to flow during the warmer months and then fading during the colder months. It has been thought for some time that these slopes were indicative of liquid water flows, however there wasn’t any evidence to support that theory.
This is where the MRO’s Compact Reconnaissance Imaging Spectrometer for Mars (CRISM) comes into play. This instrument was specifically designed to detect water on Mars by looking at varying wavelengths of light coming from the planet’s surface. Once the target sites were identified CRISM was pointed at them and their surface composition analysed. What was found at the RSL sites were minerals called hydrated salts which, when mixed with water, lower its freezing point significantly. Interestingly these hydrated salts were only detected in places where the RSL features were particularly wide; other places, where the RSLs were slimmer, did not show any signs of hydrated salts.
These salts, called perchlorates, have been seen before by several other Mars missions, although never in hydrated form. These perchlorates can potentially keep water from freezing at temperatures down to -70°C. Additionally some of them can be used in the manufacture of rocket fuel, something which could prove quite valuable for future missions to Mars. Of course they’re likely not in a readily usable form, requiring some processing on site before they can be utilized.
Data like this presents many new opportunities for further research on Mars. It’s currently postulated that these RSLs are likely the result of a shallow subsurface flow which wicks up to the surface when conditions are warmer. If this is the case then these sites would be the perfect place for a rover to investigate as there’s every chance it could directly sample Martian water there. Considering that wherever we find liquid water on Earth we find life, there’s great potential for the same thing to happen on Mars. If there isn’t any life then that will also tell us a lot, which means it’s very much worth investigating.
Scale is something that’s hard to comprehend when it comes to celestial-sized objects. The sheer vastness of space is so far beyond anything that we see in our everyday lives that it becomes incomprehensible. Yet in such scale I find perspective and understanding, knowing that the universe is far greater than anything going on in just one of its countless planets. To really grasp that scale though you have to experience it; to understand that even in our cosmic backyard the breadth of space is astounding. That’s just what the following video does:
You’d be forgiven for not knowing that Amazon founder Jeff Bezos had founded a private space company. Blue Origin, as it’s known, isn’t one for the spotlight: whilst it was founded in 2000 (2 years before SpaceX) it wasn’t revealed publicly until some years later. The company has had a handful of successful test launches, focusing primarily on suborbital space with Vertical Takeoff/Vertical Landing (VTVL) capable rockets. Indeed their latest test vehicle, the New Shepard, was successfully launched at the beginning of this year. Outside of that though you’d be hard pressed to find out much more about Blue Origin, however today they announced that they will be launching from Cape Canaveral, using the SLC-36 complex previously used by the Atlas launch system.
It might not sound like the biggest deal, however the press conference held for the announcement provided some insight into the typically secretive company. For starters Blue Origin’s efforts have thus far been focused on space tourism, much like Virgin Galactic’s. Indeed all their previous craft, including the latest New Shepard design, were suborbital craft designed to take people to the edge of space and back. This new launch site however is designed with much larger rockets in mind, ones that will be able to carry both humans and robotic craft alike into Earth’s orbit, putting them in direct competition with SpaceX and other private launch companies.
The new rocket, called Very Big Brother (pictured above), is slated to be Blue Origin’s first entry into the market. Whilst raw specifications aren’t yet forthcoming we do know that it will be based on Blue Origin’s BE-4 engine, which is being co-developed with United Launch Alliance. This engine is slated to replace the RD-180 currently used in the Atlas-V launch vehicle. The BE-4 is about half as powerful as the RD-180, meaning that if the craft is designed similarly to the Atlas-V its payload will be somewhere in the 4.5 to 9 tonne range to LEO. Of course this could be wildly different to what they’re planning and we likely won’t know much more until the first craft launches.
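For the curious, that 4.5 to 9 tonne figure falls out of a simple back-of-envelope calculation: take the Atlas-V’s rough LEO payload range and halve it in line with the engine’s thrust. This is a crude sketch that assumes payload scales linearly with first-stage thrust, which real rocket design certainly doesn’t guarantee; the reference payload figures are illustrative round numbers, not official specifications.

```python
# Back-of-envelope payload estimate, assuming (crudely) that payload
# to LEO scales linearly with first-stage engine thrust.

def scaled_payload(reference_payload_t: float, thrust_ratio: float) -> float:
    """Estimate payload (tonnes to LEO) for a craft with a scaled-down engine."""
    return reference_payload_t * thrust_ratio

# Illustrative Atlas-V LEO payload range (tonnes), varying by configuration.
atlas_v_payloads = (9.0, 18.0)
# The BE-4 is reported as roughly half as powerful as the RD-180.
thrust_ratio = 0.5

low, high = (scaled_payload(p, thrust_ratio) for p in atlas_v_payloads)
print(f"Estimated payload to LEO: {low:.1f} to {high:.1f} tonnes")
```

Halving 9 to 18 tonnes gives the 4.5 to 9 tonne range quoted above, so the numbers are at least internally consistent, even if the real craft ends up looking nothing like a half-scale Atlas-V.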
Interestingly the craft is going to retain the VTVL capability that its predecessors had. This is interesting because no sizeable craft has that capability yet. SpaceX has been trying very hard to get it to work with the first stage of their Falcon-9, however they have yet to achieve a successful landing. Blue Origin likely won’t beat SpaceX to the punch on this, but it’s still interesting to see other companies adopting similar strategies in order to make their rockets reusable.
Also of note is the propellant that the rocket will use for the BE-4 engine. Unlike most rockets, which run on either liquid hydrogen/liquid oxygen or RP-1 (kerosene)/liquid oxygen, the BE-4 will use natural gas and liquid oxygen. Indeed it is only recently that methane has been considered a viable propellant; I could not find an example of a mission that has flown using the fuel. However there must be something to it as SpaceX is going to use it for their forthcoming Raptor engines.
I’m starting to get the feeling that Blue Origin and SpaceX are sharing a coffee shop.
It’s good to finally get some more information out of Blue Origin, especially since we now know their ambitions are far beyond that of suborbital pleasure junkets. They’re entering a market that’s now swarming with competition however they’ve got both the capital and strategic relationships to at least have a good go at it. I’m very interested to see what they do at SLC-36 as more competition in this space is a good thing for all concerned.
The way we get most of the scientific data back from the rovers we currently have on Mars is through an indirect method. Currently there are four probes orbiting Mars (Mars Odyssey, Mars Express, Mars Reconnaissance Orbiter and MAVEN), all of which contain communications relays able to receive data from the rovers and then retransmit it back to Earth. This has significant advantages, chief among them that the orbiters have longer windows in which to communicate with Earth. Whilst all the rovers have their own direct connections back to Earth they’re quite limited, usually several orders of magnitude slower. Whilst current rovers won’t have their communication links improved, for future missions a better direct-to-Earth link could prove valuable, something which researchers at the University of California, Los Angeles (UCLA) have started to develop.
The design is an interesting one, essentially a flat panel of phased antenna array elements using a novel construction. The reasoning behind the design was that future Mars rover missions, specifically the Mars 2020 mission, would have constraints around how big an antenna they could carry. Taking this into account, along with the other constraint that NASA typically uses X-band for deep space communications like this, the researchers came up with a design to maximise the gain of the antenna. The result is this flat, phased array design which, when tested in a prototype 4 x 4 array, closely matched their simulated performance metrics.
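The appeal of a phased array is that gain grows with element count: an ideal array of N elements adds roughly 10·log₁₀(N) dB over a single element, so growing the panel from 4 x 4 to 16 x 16 buys about 12 dB. A minimal sketch of that scaling, with an illustrative (assumed, not published) per-element gain:

```python
import math

def array_gain_db(element_gain_db: float, n_elements: int) -> float:
    """Ideal phased-array gain: element gain plus 10*log10(N) dB."""
    return element_gain_db + 10 * math.log10(n_elements)

element_gain = 5.0  # dBi, an illustrative single-element gain

g_proto = array_gain_db(element_gain, 4 * 4)     # prototype: 16 elements
g_full = array_gain_db(element_gain, 16 * 16)    # target panel: 256 elements
print(f"4 x 4 array:   {g_proto:.1f} dBi")
print(f"16 x 16 array: {g_full:.1f} dBi (+{g_full - g_proto:.1f} dB)")
```

Real arrays fall short of this ideal figure due to element coupling and feed losses, which is exactly why matching the simulated performance with the 4 x 4 prototype matters before scaling up.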
With so many orbiters around Mars it might seem like a better direct-to-Earth communications link wouldn’t be useful, however there’s no guarantee those relays will always be available. Currently mission support for most of those orbiters is slated to end in the near future, with the furthest one out (MAVEN) slated for decommissioning in 2024. Since there’s a potential new rover slated to land sometime in 2020, and since we know how long these things can last once they’ve landed, having better on-board communications might become crucial to the ongoing success of the mission. Indeed should any of the other rovers still be functioning at that time the new rover may have to take on the relay responsibilities, and that would demand a much better antenna design.
There’s still more research to be done with this particular prototype, namely scaling it up from its current 4 x 4 design to the ultimate 16 x 16 panel. Should the design prove to scale as expected then there’s every chance that you might see an antenna based on this design flying with an orbiter in the near future. I’m definitely keen to see how this progresses as, whilst it might have the singular goal of improving direct to Earth communications currently, the insights gleaned from this design could lead to better designs for all future deep space craft.
Black holes are a never ending source of scientific intrigue. They form when a star of appropriate mass, approximately 5 to 10 times the mass of our own star, reaches the end of its life and begins to fuse heavier and heavier elements. At this stage the outward pressure exerted by those fusion reactions can no longer overcome the gravity of its mass and it begins to collapse inwards. Eventually, in a calamitous event known as a supernova, the core shrinks down to a point mass of infinite density and nothing, not even light, can escape its gravitational bounds. Properties like that mean black holes do very strange things, most of which aren’t explained adequately by current models of our universe. One such thing is called the Information Paradox, which has perplexed scientists for as long as the idea of black holes has been around.
The paradox stems from the interaction between general relativity (Einstein’s description of gravity as a property of spacetime) and quantum mechanics (the processes that affect atoms, photons and other particles). Their interaction suggests that physical information about anything that crosses the black hole’s event horizon could be destroyed. The problem with this is that it violates the generally held idea that if we have information about a system at one point in time we should be able to determine its state at any point in time. Put simply it means that, when you’re looking at a black hole, if something falls into it you have no way of determining when that happened because the information is destroyed.
However renowned physicist Stephen Hawking, whose work on black holes is so notable that one feature of them (Hawking Radiation) is named after him, has theorized that the information might not be lost at all. Instead of the information being lost or stored within the black hole itself, Hawking states that the information is stored as a super-translation (or a hologram, a 2D representation of 3D data) in the event horizon. Whilst for all practical purposes this means that the information is lost, i.e. you likely wouldn’t be able to reconstruct the system state prior to the particles crossing the event horizon, it would solve the paradox.
The idea might not be as radical as you first think as other researchers in the area, like Gerard ’t Hooft (who was present at the conference where Hawking presented this idea), have been exploring similar ideas in the same vein. There’s definitely a lot of research to be done in this area, mostly to see whether or not the idea can be supported by current models or whether it warrants fundamental changes. If the idea holds up to further scrutiny then it’ll solve one of the most perplexing properties of black holes, but there are still many more that await.
Space history of the past few decades is dominated by the Space Shuttle. Envisioned as a revolution in access to space it was designed to be launched numerous times per year, dramatically reducing the costs of access to space. The reality was unfortunately not in line with the vision as the numerous design concessions made, coupled with the incredibly long average turnaround time for missions, meant that the costs far exceeded that of many other alternative systems. Still it was an iconic craft, one that several generations will point to as the one thing they remember about our trips beyond our atmosphere. What few people realise though is that there was potential for the shuttle to have a Russian sister and her name was Buran.
The Buran project started in 1974, only 5 or so years after the Space Shuttle program was kicked off by NASA. The goals of both projects were quite similar in nature, both aiming to develop a reusable craft that could deliver satellites, cosmonauts and other cargo into orbit. Indeed when you look at the resulting craft, one of which is shown above in its abandoned complex at the Baikonur Cosmodrome, the similarities are striking. It gets even more interesting when you compare their resulting specifications, which are almost identical with only a meter or two of difference between them. Of course under the hood there are a lot of differences, especially when it comes to the primary purpose of the Buran launch system.
The propulsion system of the Buran differed significantly from the Shuttle’s, with liquid-fuelled (kerosene/liquid oxygen) boosters rather than solid rocket boosters. There are advantages to this, chief among them being able to shut down the engines once you start them (something solid rocket boosters can’t do), however these boosters were not designed to be reusable, unlike their Shuttle counterparts. This meant that the only reusable part of the Buran launch system was the orbiter itself, which would increase the per-launch cost. Additionally the Buran included a fully autonomous flight control system from the get go, something the Shuttle only received during an upgrade later in its life.
That last part is somewhat telling of Buran’s true purpose as, whilst it could service non-military goals, it was primarily developed to assist Russia’s (then the Soviet Union’s) military interests. Indeed the winged profile of the craft enables many mission profiles that are simply of no interest to non-military agencies, and having it fully autonomous from the get go shows it was meant more for conflict than research. Indeed, commenting on the programme’s cancellation, a Russian cosmonaut noted that the Buran didn’t have any civilian tasks planned for it and, lacking military requirements to fuel the programme, it was cancelled.
That was not before it saw numerous test flights, including a successful orbital test flight. The achievements that the Buran made during its single orbital flight are not to be underestimated: it was the first craft to perform such a flight fully unmanned and to make a fully automated landing. That latter feat is even more impressive when you consider that there was a very strong crosswind, some 60 kilometers per hour, and it still managed to land mere meters off its intended mark. Indeed had Russia continued development of the Buran shuttle there’s every chance it would have remained a much more advanced craft than its American sister for a very long time.
Today however the Buran shuttles and their various test components lie scattered around the globe in varying states of disrepair and decay. Every so often rumours about a resurrection of the program surface, however it’s been so long since the program was in operation that such a program would only share the name and little more. Russia’s space program has continued on to great success however, their Soyuz craft becoming the backbone of many of humanity’s endeavours in space. Whilst the Buran may never have become the icon for space that its sister Shuttle did it remains the highly advanced concept that could have been, a testament to the ingenuity and capability of the Russian space program.
When it comes to exoplanets the question that I often hear asked is: why do they all seem largely the same? The answer lies in the methods that we use for detecting exoplanets, the most successful of which involve observing the gravitational pull that planets have on their host stars. This method requires that planets make a full orbit around their parent star in order for us to detect them, which means that many go unnoticed, requiring observation times far beyond what we’re currently capable of. However there are new methods which are beginning to bear fruit, with one of the most recent discoveries being a planet called 51-Eridani-b.
Unlike most other exoplanets, whose presence is inferred from the data we gather on their parent star, 51-Eridani-b is the smallest exoplanet that we’ve ever imaged directly. Whilst we didn’t get anything like the artist’s impression above it’s still quite an achievement, as planets are usually many orders of magnitude dimmer than their parent stars. This makes directly imaging them incredibly difficult, however this new method, which has been built into a device called the Gemini Planet Imager, allows us to directly image a certain type of exoplanet. The main advantage of this method is that it does not require a lengthy observation time to produce results, although like other methods it also has some limitations.
The Gemini Planet Imager was built for the Gemini South Telescope in Chile, the sister telescope of the more famous Gemini North Telescope in Hawaii. Essentially it’s an extremely high contrast imager, one that’s able to detect a planet that’s one ten-millionth as bright as its parent star. Whilst this kind of sensitivity is impressive even it can’t detect Earth-like planets around a star similar to our sun. Instead the planets that we’re likely to detect are young Jupiter-like planets which are still hot from their formation, making them far more luminous than a planet typically is. This is exactly what 51-Eridani-b is, a fiery young planet orbiting a star that’s about 5 times as bright as our own.
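To put that one-ten-millionth contrast figure into astronomers’ terms, a brightness ratio converts to a magnitude difference via the standard relation Δm = 2.5·log₁₀(ratio). A quick sketch of that conversion (the 10⁷ contrast is from the text above; the relation itself is standard astronomy, not specific to GPI):

```python
import math

def magnitude_difference(contrast_ratio: float) -> float:
    """Astronomical magnitude difference for a given brightness ratio."""
    return 2.5 * math.log10(contrast_ratio)

# GPI can detect a planet one ten-millionth as bright as its star.
dm = magnitude_difference(1e7)
print(f"A contrast of 1e7 is about {dm:.1f} magnitudes")  # 17.5 magnitudes
```

Seventeen and a half magnitudes is an enormous gap, which is why the coronagraph and adaptive optics described below are needed to pull the planet’s light out from under the star’s glare.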
Equally impressive is the technology behind the Gemini Planet Imager which enables it to directly image planets like this. The first part is a coronagraph, a specially designed interference device which blocks out the majority of a parent star’s light. Behind that is a set of adaptive optics, essentially a set of tiny mirrors that make micro-adjustments in order to counteract atmospheric distortions. It has to do this since, unlike space-based telescopes, there’s a lot of turbulent air between us and the things we want to look at. These mirrors, deformable at the micro level using MEMS, are able to do this with incredible precision.
With the successful discovery of 51-Eridani-b I’m sure further discoveries won’t be far off. Whilst the Gemini Planet Imager might only be able to discover a certain type of planet it does prove that the technology platform works. This then means that improvements can be made, expanding its capabilities further. I have no doubt that future versions of this technology will be able to directly image smaller and smaller planets, one day culminating in a direct image of an Earth-like planet around a sun-like star. That, dear reader, will be a day for the history books, and it all began here with 51-Eridani-b.
You’d think that long duration space travel was something of a solved problem, given the numerous astronauts who’ve spent multiple months aboard the International Space Station. For some aspects of space travel this is correct, but there are still many challenges facing astronauts who’d venture deeper into space. One of the biggest challenges is radiation shielding: whilst we’ve been able to keep people alive in orbit, they’re still under the protective shield of the Earth’s magnetic field. For those who go outside that realm the dangers of radiation are very real and currently we don’t have a good solution for dealing with them. The solution to this problem could come out of research being done at CERN using a new type of superconducting material.
The material is called magnesium diboride (MgB₂) and is currently being used as part of the LHC High Luminosity Cold Powering project. MgB₂ has the desirable property of having the highest critical temperature (the point at which it becomes superconducting) of any conventional superconducting material, some −234°C, about 39 K above absolute zero. Compared to other conventional superconductors this is a much easier temperature to work with, as others usually only become superconducting at around 10 K above absolute zero. At the same time creating the material is relatively easy and inexpensive, making it an ideal substance to investigate for use in other applications. In terms of applications in space the Superconductors team at CERN are working with the European Space Radiation Superconducting Shield (SR2S) project, which is looking at MgB₂ as a potential basis for a superconducting magnetic shield.
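The temperature figures above are easy to sanity-check: absolute zero is −273.15°C, so a critical temperature of −234°C sits about 39 K above it. A trivial conversion sketch (the NbTi comparison value is my own illustrative addition, not from the SR2S material):

```python
def celsius_to_kelvin(t_c: float) -> float:
    """Convert degrees Celsius to kelvin (absolute zero is -273.15 degrees C)."""
    return t_c + 273.15

mgb2_tc_k = celsius_to_kelvin(-234.0)  # MgB2 critical temperature
nbti_tc_k = 9.2                        # NbTi, a common conventional superconductor
print(f"MgB2 superconducts below about {mgb2_tc_k:.0f} K, "
      f"versus roughly {nbti_tc_k} K for NbTi")
```

A 39 K target can be reached with comparatively simple cryocoolers, whereas getting below 10 K generally means liquid helium, which is the practical reason MgB₂ is attractive for applications outside the lab.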
Of the numerous solutions that have been proposed to protect astronauts from cosmic radiation during long duration space flight, a magnetic shield is one of the few that has shown promise. Essentially it would recreate the kind of magnetic field that’s present on Earth, deflecting harmful cosmic rays away from the spacecraft. In order to generate a field large and strong enough to do this however we’d have to rely on superconductors, which introduces a lot of complexity. A MgB₂-based shield, with its higher critical temperature, could achieve the required field with far fewer demands on cooling and power, both of which are at a premium on spacecraft.
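To see why a magnetic field deflects cosmic rays at all, consider the gyroradius r = p/(qB) of a charged particle in a uniform field. This is an illustrative physics calculation of my own, not a figure from the SR2S design; it shows that a field of order one tesla bends a typical ~1 GeV cosmic-ray proton on a radius of just a few metres, i.e. a spacecraft-sized scale:

```python
import math

# Gyroradius r = p / (qB) of a cosmic-ray proton in a uniform magnetic field.
E_CHARGE = 1.602176634e-19   # elementary charge, C
C = 2.99792458e8             # speed of light, m/s
PROTON_MASS_GEV = 0.938272   # proton rest mass, GeV/c^2

def proton_gyroradius_m(kinetic_energy_gev: float, field_tesla: float) -> float:
    """Gyroradius of a proton with the given kinetic energy (relativistic)."""
    total_e = kinetic_energy_gev + PROTON_MASS_GEV
    momentum_gev = math.sqrt(total_e**2 - PROTON_MASS_GEV**2)  # GeV/c
    momentum_si = momentum_gev * 1e9 * E_CHARGE / C            # kg m/s
    return momentum_si / (E_CHARGE * field_tesla)

# A 1 GeV proton in a 1 T field curves on a radius of a few metres.
r = proton_gyroradius_m(1.0, 1.0)
print(f"Gyroradius: {r:.1f} m")
```

In other words, a superconducting coil producing a tesla-scale field over a few metres can turn the bulk of galactic cosmic-ray protons away before they reach the crew, which is the core idea behind the shield.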
There’s still a lot of research to go between now and a working prototype, however the research team at SR2S have a good roadmap for taking the technology from the lab to the real world. The coming months will focus on quantifying what kind of field they can produce with a prototype coil, demonstrating the kinds of results they can expect. From there it will be a matter of scaling it up and working out all the parameters required for operation in space, like power draw and cooling requirements.
It’s looking good for a first-generation shield of this nature to be ready in time for the first long duration flights, something which will be a necessity for those kinds of missions. Indeed I believe this research is certain to pave the way for the numerous private space companies and space-faring nations who have set their sights beyond Earth orbit.
Since its inception back in 1960 the Search for Extraterrestrial Intelligence (SETI) has scanned our skies looking for clues of intelligent life elsewhere in our universe. As you might have already guessed the search has yet to bear any fruit since, as far as we’re concerned, no one has been sending signals to us, at least not in the way we’re listening for them. The various programs that make up the greater SETI aren’t particularly well funded however, often only getting a couple of hours at a time on any one radio telescope on which to make their observations. That’s all set to change however as Russian business magnate Yuri Milner is going to inject an incredible $100 million into the program over 10 years.
SETI, for the unaware, is a number of different projects and experiments all designed to seek out extraterrestrial life through various means. Traditionally this has been done by scanning the sky for radio waves, looking for signals that are artificial in nature. Whilst the search has yet to find anything that would point towards a signal of intelligent origin there have been numerous other signals found which, upon further investigation, have turned out to have natural sources. Other SETI programs have utilized optical telescopes to search for laser-based communications, something which we have actually begun investigating here on Earth recently. There are also numerous other, more niche programs under the SETI umbrella (like those looking for Dyson Spheres or other mega-engineering projects) but they all share the common goal of answering the same question: are we alone?
Since these programs don’t strictly advance science in any particular field they’re not well funded at all, often only getting a handful of hours on telescopes per year. This means that, even though such a search is likely to prove difficult and fruitless for quite a long time, we’re really only looking for a small fraction of the year. The new funds from Yuri Milner will bolster the observation time substantially, allowing for continuous observations over extended periods. This will increase the chances of finding something whilst also providing troves of data that will be useful for other scientific research.
As Yuri says, whilst we’re not expecting this increased funding to instantly result in a detection event, the processes we’ll develop along the way, as well as the data we gather, will teach us a lot about the search itself. The more we try the more we’ll understand which methods haven’t proved fruitful, narrowing down the possible search areas for us to investigate. The science fiction fan in me still hopes that we’ll find something, just a skerrick, that shows there’s some other life out there. I know we won’t likely find anything for decades, maybe centuries, but that hope of finding something out there is what’s driving this program forward.
It’s been a decade in the making but today, after such a long wait, we can now see Pluto and Charon for what they are.
And they’re absolutely stunning.
The image on the left is the high resolution image taken by the LORRI camera a few days before its closest approach (which you’ve undoubtedly seen already) with the one on the right being a recently released image of Charon. Neither of these images are the sharpest available, indeed for both Pluto and Charon we have images with up to 10 times the resolution streaming back to us right now, but they are already proving to be fruitful grounds for science. Indeed these two images have already given us insights into other celestial bodies within our solar system. Of course the most interesting thing about these pictures is what they reveal about Pluto and Charon themselves and the insights are many.
The biggest surprise is just how “young” the surfaces of both Pluto and Charon are, devoid of the impact craters that are commonplace on celestial bodies that lack an atmosphere. What this means is that both Pluto’s and Charon’s surfaces have been geologically active in the recent past, on the order of some 100 million years ago or less. There’s even a chance that their surfaces are geologically active today. If they are then our current theories about the mechanism behind this activity aren’t complete, and there’s another way for a planet’s surface to refresh itself.
You see current thinking is that for an icy moon or planet to be able to churn its surface over on a regular basis an outside force has to be acting on it. This is based on the current set of icy moons that orbit around our two gas giants, their giant gravitational fields bending and warping the moons’ surfaces as they orbit. However neither Charon nor Pluto has the required mass to induce stresses of that magnitude, yet their surfaces are still as geologically young as any of the other icy moons. So there must be another mechanism in action here, one that allows even small icy planets and moons to refresh their surfaces on a continual basis. As to what this mechanism is we are not sure, but in the coming months I’m sure the scientists at NASA will have some amazing theories about how it works.
The most striking feature of Pluto is the heart, which has been tentatively dubbed Tombaugh Regio for Pluto’s discoverer. It consists of 2 different lobes, with the one on the left being noticeably smoother than the one on the right. It is currently theorized that the left lobe is a giant impact crater that was then filled up with nitrogen snow (Pluto’s surface is 98% frozen nitrogen). Considering the resolution of the images we’ll have access to soon I’m sure there will be more than enough info to figure out the heart’s origin and any other surprising things about Pluto’s surface.
Charon on the other hand appears to be littered with giant canyons, many of them several miles deep. It’s possible that whatever is responsible for the young surface of Charon is also responsible for these giant canyons, something we’ll have to wait for the high resolution images to figure out. Also of note is the giant dark patch on Charon’s polar region, which is thought to be a thin deposit of dark material with a sharp geological feature underneath it. As to what that is exactly we’re not sure, but the next few months will likely reveal its secrets to us.
These two images alone are incredible, showing us worlds that were simply blurs of differently coloured light for almost a century. We most certainly don’t have the full picture yet, as the data New Horizons holds will take months to get back to us, but they’ve already provided valuable insight into Pluto, Charon and the solar system in which we live. I can’t wait to see what else we discover as it’s bound to shake up our understanding of the universe once again.