Mars is the most studied planet other than our own, currently playing host to no fewer than 7 different craft operating both in orbit and on its surface. It’s of interest to us due to its similarity to Earth, giving us insight into how the same processes can affect planets differently. Mars is also the easiest of our sister planets to explore, being relatively close and having an atmosphere that won’t outright destroy craft that dare land on it. Yet for all that research it still manages to surprise us, most recently by revealing that liquid water still flows on its surface. We’re far from done with it, however, and the MAVEN craft has just revealed some key insights into Mars’ atmosphere and the history behind its current state.
Mars’ atmosphere is extremely thin, over 100 times less dense than the atmosphere here on Earth. To put that in perspective, that’s about the same density as Earth’s air at an altitude of around 30 km, or roughly 3 times as high as your typical jet airliner flies. It’s also almost all carbon dioxide, with a small smattering of nitrogen and other trace gases. However it wasn’t always this way, as numerous studies have revealed that Mars must have held a much thicker atmosphere in the past. What has remained something of a mystery is just how Mars came to lose its atmosphere and whether those same processes are still in effect today. MAVEN, a craft specifically designed to figure this out, has made some key discoveries, and it seems the long held belief that the sun is to blame is true.
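That altitude comparison is easy to sanity check with the isothermal barometric formula ρ(h) = ρ₀·e^(−h/H). Here’s a minimal sketch in Python, using textbook round numbers for sea-level density, scale height and Mars’ surface density (all approximations, not mission data):

```python
import math

# Rough sanity check of the 30 km comparison, using the isothermal
# barometric formula rho(h) = rho0 * exp(-h / H). The scale height
# and Mars surface density below are approximate textbook values.
RHO0_EARTH = 1.225   # kg/m^3, sea-level air density
H_EARTH = 8.5e3      # m, approximate atmospheric scale height

def earth_density(h_m: float) -> float:
    """Approximate Earth air density (kg/m^3) at altitude h_m metres."""
    return RHO0_EARTH * math.exp(-h_m / H_EARTH)

rho_30km = earth_density(30e3)   # ~0.04 kg/m^3
rho_mars_surface = 0.020         # kg/m^3, typical quoted surface value

print(f"Earth air at 30 km: {rho_30km:.3f} kg/m^3")
print(f"Mars surface air:   {rho_mars_surface:.3f} kg/m^3")
```

Both come out at a few hundredths of a kilogram per cubic metre, i.e. the same order of magnitude, which is all the comparison in the text is claiming.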
For a planet to lose its atmosphere there are really only two places it can go. In some cases the planet itself can absorb the atmosphere, driving chemical reactions that pull the gases down into more solid forms. This scenario was investigated on Mars, however the lack of the kinds of minerals we’d expect to see, mostly carbonates given Mars’ largely carbon dioxide atmosphere, means that this was unlikely to be the case. The second way is to lose the atmosphere to the vacuum of space, which can happen in a number of ways, usually through the planet being unable to hold onto its atmosphere. This latter theory has proved to be correct, although the story is far more interesting than Mars simply being too small.
In the past Mars would have looked a lot like Earth, a small blue marble wrapped in protective gases. Back then the core of Mars was still active, generating a magnetic field much like that of Earth. However, after a time, the core began to cool and the engine behind the giant magnetic field began to fade. As this field weakened the solar wind began to erode the atmosphere, slowly stripping it away. Today Mars’ magnetic field is around 40 times weaker than Earth’s, nowhere near enough to stop this process, which continues to this day. For Mars it seems that its diminutive core was what sealed its fate, unable to sustain the protective magnetic shield against the relentless torment of our sun.
Whilst this has been the prevailing theory for some time it’s good to have hard data confirming it. Our two closest planetary neighbours, Venus and Mars, provide insights into how planets can develop and which changes produce which outcomes. Knowing things like this helps us to understand our own Earth and what impacts our behaviour might have on it. Mars might not ever see its atmosphere again but at least we now know what it might once have looked like, and where it has gone.
The last decade and a half has seen an explosion in the private space industry. We’ve seen multiple new companies founded, many of which have now flown successful missions to the International Space Station. This is partly due to the regulatory framework that the USA adopted to spur on the private space industry, which was previously impenetrable for all but a few giant multinationals. Today Congress passed a bill that ensures this regulatory framework can continue as is for some time whilst also providing a few provisions that will see some major space projects continue for a while longer. In short it means that the amazing progress we’ve seen from the private space industry is likely to continue for at least the next decade.
Up until 2004 building and flying your own spacecraft (within the USA) was effectively illegal. Provisions were then made to allow commercial space flights by adopting a “learning period”, essentially preventing the FAA from enforcing flight regulations on private space companies. Whilst this doesn’t make them exempt from any law (ostensibly it transfers the responsibility onto the participants in private space flights), it does give private space companies the room they need to develop their technologies. That period was set to end next year; however, the recently passed bill extends it for another 7 years before the Department of Transportation takes over and begins to fully regulate the industry.
There are also further provisions for ensuring that private space companies can compete and innovate without unnecessary burdens. The first is the extension of the indemnification of commercial launches, essentially a risk sharing framework that ensures US based private space companies can compete with overseas launch providers. There’s also a directive to several government agencies to develop a proper oversight framework for commercial space activities. This will mean a formalization of the many ad-hoc processes currently in use and should hopefully mean a reduction in some of the headaches that private space companies currently face.
Probably the biggest bit of news out of this bill however was the provision for extending the USA’s involvement in the International Space Station to 2024, a 4 year extension over the current mission time frame. The last time the deadline was extended was 6 years ago and nearly everyone thought that would be the end of it since that matched the originally intended lifespan of the station. Without a replacement forthcoming (Tiangong doesn’t count) this gives us a little more breathing room to come up with a replacement or better plan for the future of our only manned space station.
One interesting provision, and one I’m sure Planetary Resources is excited about, is the establishment of legal rights to resources recovered from space by a private entity. Essentially this means that if you were to, say, mine an asteroid and send its resources down to Earth, you would have the same legal rights over them as if you had mined them here. There’s also a directive in there for the president to pursue off-world resource exploration and recovery, which will likely mean increased focus in this space. It’s still something of a nascent industry so it’s good to see it getting recognition at this level.
Of course all of this comes without additional budgetary measures for NASA et al. to meet these goals; however, it does lay firm groundwork for more funding to be put aside. Hopefully when the next budget rolls around these additional objectives will be taken into consideration, as otherwise it could just end up putting more strain on NASA’s current projects. For the private space industry, however, it means a long extension of the conditions they’ve enjoyed over the past decade, conditions which have seen amazing progress. Hopefully the next decade is just as good as the first.
Our sun is an incredibly violent thing, smashing atoms together at an incredible rate that results in the outpouring of vast torrents of energy into our solar system. Yet from certain perspectives it takes on a serene appearance, its surface ebbing and flowing as particles trace out some of its vast magnetic field. Indeed that’s exactly what the following video shows: a gorgeous composition of imagery taken from NASA’s Solar Dynamics Observatory. Whilst not all of us have the luxury of a 4K screen it’s still quite breathtaking to behold and definitely worth at least a few minutes of your time.
SDO has been in orbit for 5 years now, keeping an almost unbroken eye on our parent star. Its primary mission is to better understand the relationship between our Earth and the sun, especially the interactions that have a direct impact on daily life. To achieve this SDO observes the sun in multiple wavelengths all at once (shown as different colours in this video) and on a much smaller timescale than previous craft have attempted. This has led to insights into how the sun generates its magnetic field, what that field looks like and how the complex fusion processes influence the sun’s varying outputs like solar wind, energetic particles and changes in luminosity. Those images aren’t just rich with scientific data, however, as they showcase the sun’s incredible beauty.
So, how’s the serenity? 😉
Space history of the past few decades is dominated by the Space Shuttle. Envisioned as a revolution in access to space, it was designed to be launched numerous times per year, dramatically reducing the cost of getting to orbit. The reality was unfortunately not in line with the vision, as the numerous design concessions made, coupled with the incredibly long average turnaround time between missions, meant that the costs far exceeded those of many alternative systems. Still it was an iconic craft, one that several generations will point to as the one thing they remember about our trips beyond our atmosphere. What few people realise though is that the Shuttle had a Russian sister, and her name was Buran.
The Buran project started in 1974, only 5 or so years after the Space Shuttle program was kicked off by NASA. The goals of both projects were quite similar in nature, both aiming to develop a reusable craft that could deliver satellites, cosmonauts and other cargo into orbit. Indeed when you look at the resulting craft, one of which is shown above in its abandoned complex at the Baikonur Cosmodrome, the similarities are striking. It gets even more interesting when you compare their specifications, as they’re almost identical with only a meter or two of difference between them. Of course under the hood there are a lot of differences, especially when it comes to the primary purpose of the Buran launch system.
The propulsion system of the Buran differed significantly from the Shuttle’s, with the strap-on boosters burning liquid kerosene and oxygen (and the core stage liquid hydrogen and oxygen) rather than solid rocket fuel. There are advantages to this, chief among them being able to shut the engines down once you start them (something solid rocket boosters can’t do), however these boosters were not designed to be reusable, unlike their Shuttle counterparts. This meant that the only reusable part of the Buran launch system was the orbiter itself, which would increase the per-launch cost. Additionally the Buran included a fully autonomous flight control system from the get go, something the Shuttle only received during an upgrade later in its life.
That last part is somewhat telling of Buran’s true purpose as, whilst it could service non-military goals, it was primarily developed to further the Soviet Union’s military interests. Indeed the winged profile of the craft enables many mission profiles that are simply of no interest to non-military agencies, and having it fully autonomous from the get go shows it was meant more for conflict than research. Commenting on the programme’s cancellation, one Russian cosmonaut noted that the Buran had no civilian tasks planned for it and, with a lack of military requirements to sustain the programme, it was cancelled.
That was not before it saw numerous test flights, including a successful orbital one. The achievements that the Buran made during its single orbital flight are not to be underestimated, as it was the first craft of its kind to fly fully unmanned and to make a fully automated landing. That latter feat is even more impressive when you consider that there was a very strong crosswind, some 60 kilometers per hour, and it managed to land mere meters off its originally intended mark. Indeed had Russia continued development of the Buran shuttle there’s every chance it would have remained a far more advanced craft than its American sister for a very long time.
Today however the Buran shuttles and their various test components lie scattered around the globe in varying states of disrepair and decay. Every so often rumours about a resurrection of the program surface, however it’s been so long since the program was in operation that such a program would only share the name and little more. Russia’s space program has continued on to great success however, their Soyuz craft becoming the backbone of many of humanity’s endeavours in space. Whilst the Buran may never have become the icon for space that its sister Shuttle did it remains the highly advanced concept that could have been, a testament to the ingenuity and capability of the Russian space program.
It’s been a decade in the making but today, after such a long wait, we can now see Pluto and Charon for what they are.
And they’re absolutely stunning.
The image on the left is the high resolution image taken by the LORRI camera a few days before the closest approach (which you’ve undoubtedly seen already), with the one on the right being a recently released image of Charon. Neither of these images is the sharpest available, indeed for both Pluto and Charon we have images with up to 10 times the resolution streaming back to us right now, but they are already proving to be fruitful grounds for science. Indeed these two images have already given us insights into other celestial bodies within our solar system. Of course the most interesting thing about these pictures is what they reveal about Pluto and Charon themselves, and the insights are many.
The biggest surprise is just how “young” the surfaces of both Pluto and Charon are, devoid of the impact craters that are commonplace on celestial bodies that lack an atmosphere. What this means is that both surfaces have been geologically active in the recent past, on the order of some 100 million years ago or less. There’s even a chance that they are geologically active today. If so, it means that our current theories about the mechanisms behind such activity aren’t complete and there’s another way for a planet’s surface to refresh itself.
You see, current thinking is that for an icy moon or planet to be able to churn its surface over on a regular basis an outside force has to be acting on it. This is based on the current set of icy moons orbiting our two gas giants, whose enormous gravitational fields bend and warp the moons’ surfaces as they orbit. However neither Charon nor Pluto has the mass required to induce stresses of that magnitude, yet their surfaces are still as geologically young as those of the other icy moons. So there must be another mechanism in action here, one that allows even small icy planets and moons to refresh their surfaces on a continual basis. As to what this mechanism is we are not sure, but in the coming months I’m sure the scientists at NASA will have some amazing theories about how it works.
The most striking feature of Pluto is the heart, which has been tentatively dubbed Tombaugh Regio for Pluto’s discoverer. It consists of 2 different lobes, with the one on the left being noticeably smoother than the one on the right. It is currently theorized that the left lobe is a giant impact crater that was then filled with nitrogen snow (Pluto’s surface is 98% frozen nitrogen). Considering the resolution of the images we’ll have access to soon, I’m sure there will be more than enough information to figure out the heart’s origin and any other surprising things about Pluto’s surface.
Charon on the other hand appears to be littered with giant canyons, many of them several miles deep. It’s possible that whatever is responsible for the young surface of Charon is also responsible for these giant canyons, something we’ll have to wait for the high resolution images to figure out. Also of note is the giant dark patch over Charon’s polar region, which is thought to be a thin deposit of dark material with a sharp geological feature underneath it. As to what that is exactly we’re not sure, but the next few months will likely reveal its secrets to us.
These two images alone are incredible, showing us worlds that were simply blurs of different coloured light for almost a century. We most certainly don’t have the full picture yet, the data that New Horizons has will take months to get back to us, but they’ve already provided valuable insight into Pluto, Charon and the solar system in which we live. I can’t wait to see what else we discover as it’s bound to shake up our understanding of the universe once again.
There are numerous risks that spacecraft face when traversing the deep black of space. Since we’ve sent many probes to many locations most of these risks are well known and thus we’ve built systems to accommodate them. Most craft carry fully redundant main systems, ensuring that if the main one fails the backup can carry on the task the probe was designed to do. The systems themselves are also built to withstand the torturous conditions that space throws at them, ensuring that even a single piece of hardware has a pretty good chance of surviving its journey. However sometimes even all that engineering can’t account for what happens out there, and yesterday that happened to New Horizons.
New Horizons is a NASA-led mission that will be the first robotic probe to make a close approach to Pluto. Its primary mission is to capture the most detailed view of Pluto yet, generating vast amounts of data about the dwarf planet. Unlike many similar missions though New Horizons won’t be entering Pluto’s orbit, instead it will capture as much data as it can as it whips past Pluto at a blistering 14 km/s. Then it will set its sights on one of the numerous Kuiper Belt objects where it will do the same. This mission has been a long time in the making, launching in early 2006, and is scheduled to “arrive” at Pluto in the next 10 days.
However, just yesterday, the craft entered safe mode.
What caused this to happen is not yet known, however one good piece of news is that the craft is still contactable and operating within expected parameters for an event of this nature. Essentially the primary computer sensed a fault and, as it is programmed to do in this situation, switched over to the backup system and put the probe into safe mode. Whilst NASA engineers have received some information as to what the fault might be they have opted to do further diagnostics before switching the probe back onto its primary systems. This means that science activities scheduled for the next few days will likely be delayed whilst this troubleshooting process occurs. Thankfully there were only a few images scheduled to be taken and there should be ample time to get the probe running before its closest approach to Pluto.
The potential causes behind an event of this nature are numerous, but since the probe is acting as expected in such a situation it is most likely recoverable. My gut feeling is that it might have been a cosmic ray flipping a bit, something which the processors on probes like New Horizons are designed to detect. As more data trickles back down (a round-trip signal to New Horizons and back takes around 9 hours) we’ll know for sure what caused the problem and what the time frame will be to recover.
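That delay is simple geometry: at the encounter Pluto sits roughly 33 AU from Earth, so even at light speed a command-and-response cycle takes the better part of a working day. A quick sketch (the distance is an approximation):

```python
# One-way light travel time to New Horizons at the Pluto encounter.
# 32.9 AU is an approximate Earth-Pluto distance for July 2015.
AU_KM = 1.496e8          # kilometres per astronomical unit
C_KM_S = 299_792.458     # speed of light, km/s

distance_km = 32.9 * AU_KM
one_way_hours = distance_km / C_KM_S / 3600

print(f"one-way delay: {one_way_hours:.1f} h")        # ~4.6 h
print(f"round trip:    {2 * one_way_hours:.1f} h")    # ~9.1 h
```

So every diagnostic question the engineers ask costs about nine hours before the answer comes back, which is why recovery from safe mode this far out is measured in days rather than hours.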
Events like this aren’t uncommon, nor are they unexpected, but having one this close to the mission’s ultimate goal, especially after the long wait to get there, is sure to be causing some heartache for the engineers at NASA. New Horizons will only have a very limited opportunity to do the high resolution mapping that it was built to do and events like these just up the pressure on everyone to make sure that the craft delivers as expected. I have every confidence that the team at NASA will get everything in order in no time at all however I’m sure there’s going to be some late nights for them in the next few days.
Godspeed, New Horizons.
It seems somewhat trite to say it but rocket science is hard. Ask anyone who lived near a NASA testing site back in the heyday of the space program and they’ll regale you with stories of numerous rockets thundering skyward only to meet their fate shortly after. There is no universal reason behind rockets exploding, as there are so many things in which a single failure leads to a rapid, unplanned deconstruction event. The only universal truth behind sending things into orbit atop a giant continuous explosion is that one day one of your rockets will end up blowing itself to bits. Today that happened to SpaceX.
The CRS-7 mission was SpaceX’s 7th commercial resupply mission to the International Space Station, with its primary payload consisting of around 1,800 kg of supplies and equipment. The most important piece of cargo it was carrying was the International Docking Adapter (IDA-1), which would have been used to convert one of the current Pressurized Mating Adapters to the new NASA Docking System. This would have allowed resupply craft such as the Dragon capsule to dock directly with the ISS rather than being grappled and berthed, a method that’s far from ideal for coupling craft (especially for crew egress in an emergency). Other payloads included things like the Meteor Shower Camera, which was actually a backup as the primary was lost in the Antares rocket explosion of last year.
Elon Musk tweeted shortly after the incident that the cause appears to be an overpressure event in the upper stage LOX tank. Watching the video you can see what he’s alluding to here as shortly after take off there appears to be a rupture in the upper tank which leads to the massive cloud of gas enveloping the rocket. The event happened shortly after the rocket reached max-q, the point at which the aerodynamic stresses on the craft have reached their maximum. It’s possible that the combination of a high pressure event coinciding with max-q was enough to rupture the tank which then led to its demise. SpaceX is still continuing its investigation however and we’ll have a full picture once they conduct a full fault analysis.
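Max-q itself is just the peak of the dynamic pressure q = ½ρv², which grows with the square of speed but shrinks as the air thins out. A rough back-of-the-envelope sketch, where the altitude, speed and atmosphere model are illustrative assumptions rather than SpaceX figures:

```python
import math

# Dynamic pressure q = 0.5 * rho * v^2, using an isothermal density
# model. All numbers here are illustrative assumptions for a Falcon
# 9-class launch, not official flight data.
RHO0 = 1.225   # kg/m^3, sea-level air density
H = 8.5e3      # m, approximate atmospheric scale height

def dynamic_pressure(alt_m: float, speed_ms: float) -> float:
    """Dynamic pressure in pascals at a given altitude and speed."""
    rho = RHO0 * math.exp(-alt_m / H)
    return 0.5 * rho * speed_ms ** 2

# Around max-q a launcher of this class is typically ~12 km up
# travelling at ~450 m/s.
q = dynamic_pressure(12e3, 450)
print(f"q ≈ {q / 1e3:.0f} kPa")   # a few tens of kilopascals
```

A few tens of kilopascals doesn’t sound like much, but spread across the whole airframe it’s the peak structural load of the flight, which is why a tank already at high internal pressure rupturing right at this point is so plausible a failure mode.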
A few keen observers have noted that unlike other rocket failures, which usually end in a rather spectacular fireball, it appears that the payload capsule may have survived. The press conference held shortly after made mention of telemetry data being received for some time after the explosion had occurred which would indicate that the capsule did manage to survive. However it’s unlikely that the payload would be retrievable as no one has mentioned seeing parachutes after the explosion happened. It would be a great boon to the few secondary payloads if they were able to be recovered but I’m certain none of them are holding their breath.
This marks the first failure in 18 launches of SpaceX’s Falcon 9 program, a milestone I’m sure none were hoping to mark. Putting that in perspective though, this is a 13 year old space company that has managed to do things that took its competitors decades. I’m sure the investigations currently underway will identify the cause in short order and future flights will not suffer the same fate. My heart goes out to all the engineers at SpaceX during this time as it cannot be easy picking through the debris of your flagship rocket.
Outside of Earth, Europa is probably the best place for life as we know it to develop. Beneath the radiation soaked exterior, an ice layer that could be up to 20 km thick, lies a vast ocean that stretches deep into Europa’s interior. This internal ocean, though bereft of any light, could very well harbor the right conditions to support the development of complex life. However if we’re ever going to entertain the idea of exploring the depths of that vast and dark place we’ll first need a lot more data on Europa itself. Last week NASA greenlit the Europa Clipper mission, which will do just that, slated to fly sometime in the 2020s.
Exploration of Europa has been relatively sparse, with the most recent mission being the New Horizons probe, which imaged Europa during its Jupiter flyby on the way to Pluto. Indeed the majority of missions that have imaged Europa have been flybys, the only long duration mission being the Galileo probe, which orbited Jupiter for 8 years and made numerous flybys of Europa. The Europa Clipper mission would be quite similar in nature, with the craft conducting multiple flybys rather than staying in orbit around Europa. The mission would include the multi-year journey to our jovian brother and no fewer than 45 flybys of Europa once it arrived.
It might seem odd that an observation mission would opt for numerous flybys rather than a continuous orbit, however there are multiple reasons for this. For starters Jupiter has a powerful radiation belt that stretches some 700,000 km out from the planet, enveloping Europa. This means the lifetime of any craft that dares enter that region is usually somewhat limited; had NASA opted for an orbital mission rather than a flyby one, the craft’s expected lifetime wouldn’t have been much more than a month or so. Strictly speaking this might not be too much of an issue, as you can make a lot of observations in a month, however the real challenge comes from getting that data back down to Earth.
Deep space robotic probes are often capable of capturing far more information than they’re able to send back in real time, leading to them storing data locally and transmitting it back over a longer period. If the Europa Clipper were orbital it would only have those 30 days in which to send back information, not nearly enough for the volumes of data that modern probes can generate. The flybys, though, give the probe more than enough time to dump all of its data back down to Earth whilst it’s coasting outside of Jupiter’s harsh radiation belts, ensuring that all data gathered is returned safely.
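To see why downlink time dominates the design, a toy data-budget calculation helps. Every number below is a made-up illustrative figure, not an actual Europa Clipper spec, but the shape of the arithmetic is the point:

```python
# Toy deep-space downlink budget. All figures are illustrative
# assumptions, not actual Europa Clipper mission parameters.
flyby_data_bits = 4e9        # ~4 Gb of instrument data per flyby (assumed)
downlink_bps = 50e3          # ~50 kbit/s average downlink rate (assumed)
hours_per_day_on_comms = 12  # antenna time shared with other tasks (assumed)

seconds_needed = flyby_data_bits / downlink_bps
days_needed = seconds_needed / (hours_per_day_on_comms * 3600)

print(f"~{days_needed:.1f} days to return one flyby's data")
```

Even with these generous numbers a single flyby’s haul takes days to trickle home, so an orbiter with a month to live would be destroyed long before it finished transmitting what it had gathered, whereas a flyby craft can spend weeks of quiet coasting doing nothing but downlinking.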
Hopefully the data this craft brings back will pave the way for a potential mission to the surface sometime in the future. Europa has so much potential for harboring life that we simply must investigate it, and the data gleaned from the Europa Clipper mission will provide the basis for a future landing mission. Of course such a mission is likely decades away; however I, and many others, believe that a mission to poke beneath the surface of Europa is the best chance we have of finding alien life. Even if we don’t find any, the mission will provide valuable insight into the conditions that give rise to life and help point our future searches.
Your garden variety telescope is usually what’s called a refracting telescope, one that uses a series of lenses to enlarge far away objects for your viewing pleasure. For backyard astronomy they work quite well, often providing a great view of our nearby celestial objects, however for scientific observations they’re usually not as desirable. Instead most large scientific telescopes use what’s called a reflecting telescope which utilizes a large mirror which then reflects the image onto a sensor for capture. The larger the mirror the bigger and more detailed picture you can capture, however bigger mirrors come with their own challenges especially when you want to launch them into space. Thus researchers are always looking for novel ways to create a mirror and one potential avenue that NASA is pursuing is, put simply, a little fabulous.
One method that many large telescopes use to get around the problem of creating huge mirrors is to use numerous smaller ones. This does introduce some additional complexity, like needing to make sure all the mirrors align properly to produce a coherent image on the sensor, however it also comes with added benefits, like being able to eliminate distortions created by the atmosphere. NASA’s new idea takes this to an extreme, replacing the mirror with a cloud of glitter-like particles held in place with lasers. Each of those particles then acts like a tiny mirror, much like their larger counterparts. Then, on the sensor side, software is being developed to turn the resulting kaleidoscope of colours back into a coherent image.
Compared to traditional mirrors on telescopes, especially space based ones like Hubble’s, this has the potential to significantly reduce weight whilst at the same time dramatically increasing the size of the mirror we can use. The bigger the mirror the more light that can be captured and analysed, and a mirror built from such a cloud of particles could be many times larger than its current counterparts. The current test apparatus (shown above) uses a traditional lens covered in glitter, which was used to validate the concept with 2 simulated “stars” shining through it. Whilst the current incarnation used multiple exposures and a lot of image processing to create the final image, it does show the concept could work; however it requires much more investigation before it can be used for real observations.
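The payoff is easy to quantify, because light-gathering power scales with aperture area, i.e. with the square of diameter. Hubble’s 2.4 m primary mirror is real; the 30 m glitter-cloud aperture below is a purely hypothetical comparison point:

```python
def light_gain(d_new_m: float, d_ref_m: float) -> float:
    """Ratio of light-gathering power between two circular apertures.

    Area scales with diameter squared, so the pi/4 factors cancel.
    """
    return (d_new_m / d_ref_m) ** 2

# Hubble's primary mirror is 2.4 m across; a 30 m particle-cloud
# aperture is a hypothetical figure used only for illustration.
print(f"{light_gain(30, 2.4):.0f}x more light than Hubble")
```

That quadratic scaling is why trading a precisely figured glass mirror for a diffuse cloud is worth the image-processing headache: even a modestly larger aperture collects vastly more light.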
A potential mission to verify the technology in space would use a small satellite with a prototype cloud, no bigger than a bottle cap in size. This would be primarily aimed at verifying that the cloud could be deployed and manipulated in space as designed and, if that proved successful then they could move on to capturing images. Whilst there doesn’t appear to be a strict timeline for that yet this concept, called Orbiting Rainbows, is part of the NASA Innovative Advanced Concepts program and so research on the idea will likely continue for some time to come. Whether it will result in an actual telescope however is anyone’s guess but such technology does show incredible promise.
Human spaceflight is, to be blunt, an unnecessarily complicated affair. We humans require a whole host of things to make sure we can survive the trip through the harsh conditions of space, much more than our robotic companions do. Of course whilst robotic missions may be far more efficient at performing the missions we set them, that doesn’t further our desire to become a multi-planetary species, and thus the quest to find better ways to preserve our fragile bodies in the harsh realms of space continues. One of the biggest issues we face when travelling to other worlds is how we’ll build our homes there, as traditional means simply won’t work anywhere else that we currently know of. This is where novel techniques, such as 3D printing, come into play.
Much of the construction we engage in today relies on numerous supporting industries in order to function. Transplanting these to other worlds is simply not feasible, and taking prefabricated buildings along requires a bigger launch vehicle (or numerous smaller ones) to get the required payload into orbit. If we were able to build habitats in situ, however, we could cut out the need to re-establish the supporting infrastructure or bring prefabricated buildings along with us, something which would go a long way towards making an off-world colony sustainable. To that end NASA has started the 3D Printed Habitat Challenge, with $2.25 million in prizes to jump start innovation in this area.
The first stage of the competition is for architects and design students to design habitats that maximise the benefits that 3D printing can provide. These will then likely be used to fuel further designs of habitats that could be constructed off-world. The second part of the competition, broken into 2 stages, is centered on the technology that will be used to create those kinds of structures. The first focuses on the technology required to use materials available on site as feedstock for 3D printing, something currently achieved only with very specific feedstocks. The second, and ultimately the most exciting, challenge is to actually build a device capable of using onsite materials (as well as recyclables) to create a habitable structure, with a cool $1.1 million going to those who satisfy the challenge. Doing that would be no easy feat of course, but the technology created along the way will prove invaluable to future manned missions in our solar system.
We’re still likely many years away from having robots on the moon that can print us endless 3D habitats, but the fact that NASA wants to spur innovation in this area means they’re serious about pursuing a sustainable human presence offworld. There are likely numerous engineering challenges we’ll need to overcome, especially between different planets, but it’s far easier to adapt a current technology than it is to build one from scratch. I’m very keen to see the entries to this competition as they could very well end up visiting other planets to build us homes there.