There are numerous risks that spacecraft face when traversing the deep black of space. Since we’ve sent many probes to many locations, most of these risks are well known and thus we’ve built systems to accommodate them. Most craft carry fully redundant main systems, ensuring that if the primary one fails the backup can carry on the task the probe was designed to do. The systems themselves are also built to withstand the torturous conditions that space throws at them, ensuring that even a single piece of hardware has a pretty good chance of surviving its journey. However sometimes even all that engineering can’t account for what happens out there, and yesterday that happened to New Horizons.
New Horizons is a NASA-led mission whose probe will be the first to make a close approach to Pluto. Its primary mission is to capture the most detailed view of Pluto yet, generating vast amounts of data about the distant dwarf planet. Unlike many similar missions, though, New Horizons won’t be entering Pluto’s orbit; instead it will capture as much data as it can as it whips past Pluto at a blistering 14 km/s. Then it will set its sights on one of the numerous Kuiper Belt objects, where it will do the same. This mission has been a long time in the making, having launched in early 2006, and is scheduled to “arrive” at Pluto in the next 10 days.
However, just yesterday, the craft entered safe mode.
What caused this to happen is not yet known, however one good piece of news is that the craft is still contactable and operating within expected parameters for an event of this nature. Essentially the primary computer sensed a fault and, as it is programmed to do in this situation, switched over to the backup system and put the probe into safe mode. Whilst NASA engineers have received some information as to what the fault might be they have opted to do further diagnostics before switching the probe back onto its primary systems. This means that science activities scheduled for the next few days will likely be delayed whilst the troubleshooting process occurs. Thankfully there were only a few images scheduled to be taken and there should be ample time to get the probe running before its closest approach to Pluto.
The potential causes behind an event of this nature are numerous but, since the probe is acting as expected in such a situation, it is most likely recoverable. My gut feeling is that it might have been a cosmic ray flipping a bit, something which the radiation-hardened processors in probes like New Horizons are designed to detect. As more data trickles back down (a round-trip signal to New Horizons takes about 9 hours) we’ll know for sure what caused the problem and what the time frame will be to recover.
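For a sense of the distances involved, that round-trip figure falls straight out of the speed of light; here’s a quick back-of-the-envelope sketch, assuming New Horizons sits roughly 32 AU from Earth (an approximation of its range around the Pluto encounter):

```python
# Light-delay calculation; the ~32 AU distance is an approximation
# of New Horizons' range from Earth at the time.
AU_KM = 1.496e8          # kilometres per astronomical unit
C_KM_S = 299_792.458     # speed of light, km/s

distance_au = 32
one_way_s = distance_au * AU_KM / C_KM_S   # one-way light travel time, seconds
round_trip_h = 2 * one_way_s / 3600        # round trip, hours

print(f"One-way delay: {one_way_s / 3600:.1f} hours")
print(f"Round trip:    {round_trip_h:.1f} hours")
```

So a command sent from Earth takes about four and a half hours to arrive, and the earliest acknowledgement lands roughly nine hours after you hit send, which is why diagnosing a fault like this is such a slow process.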
Events like this aren’t uncommon, nor are they unexpected, but having one this close to the mission’s ultimate goal, especially after the long wait to get there, is sure to be causing some heartache for the engineers at NASA. New Horizons will only have a very limited opportunity to do the high resolution mapping that it was built to do and events like these just up the pressure on everyone to make sure that the craft delivers as expected. I have every confidence that the team at NASA will get everything in order in no time at all however I’m sure there’s going to be some late nights for them in the next few days.
Godspeed, New Horizons.
It seems somewhat trite to say it but rocket science is hard. Ask anyone who lived near a NASA testing site back in the heyday of the space program and they’ll regale you with stories of numerous rockets thundering skyward only to meet their fate shortly after. There is no universal reason behind rockets exploding as there are so many things in which a failure can lead to a rapid, unplanned deconstruction event. The only universal truth behind sending things into orbit atop a giant continuous explosion is that one day one of your rockets will end up blowing itself to bits. Today that has happened to SpaceX.
The CRS-7 mission was SpaceX’s 7th commercial resupply mission to the International Space Station, with its primary payload consisting of around 1,800 kg of supplies and equipment. The most important piece of cargo it was carrying was the International Docking Adapter (IDA-1), which would have been used to convert one of the current Pressurized Mating Adapters to the new NASA Docking System. This would have allowed resupply craft such as the Dragon capsule to dock directly with the ISS rather than being grappled and berthed, which is not the preferred method for coupling craft (especially for crew egress in an emergency). Other payloads included things like the Meteor Shower Camera, which was actually a backup unit as the primary was lost in last year’s Antares rocket explosion.
Elon Musk tweeted shortly after the incident that the cause appears to be an overpressure event in the upper stage LOX tank. Watching the video you can see what he’s alluding to here as, shortly after take off, there appears to be a rupture in the upper tank which leads to the massive cloud of gas enveloping the rocket. The event happened shortly after the rocket reached max-q, the point at which the aerodynamic stresses on the craft are at their maximum. It’s possible that the combination of a high pressure event coinciding with max-q was enough to rupture the tank, which then led to the rocket’s demise. SpaceX’s investigation is still ongoing, however, and we’ll have a full picture once they complete a full fault analysis.
A few keen observers have noted that unlike other rocket failures, which usually end in a rather spectacular fireball, it appears that the payload capsule may have survived. The press conference held shortly after made mention of telemetry data being received for some time after the explosion had occurred which would indicate that the capsule did manage to survive. However it’s unlikely that the payload would be retrievable as no one has mentioned seeing parachutes after the explosion happened. It would be a great boon to the few secondary payloads if they were able to be recovered but I’m certain none of them are holding their breath.
This marks the first failed launch out of 18 for SpaceX’s Falcon 9 program, a milestone I’m sure none were hoping they’d mark. Putting that in perspective though, this is a 13 year old space company that has managed to do things that took its competitors decades. I’m sure the investigations currently underway will identify the cause in short order and future flights will not suffer the same fate. My heart goes out to all the engineers at SpaceX during this time as it cannot be easy picking through the debris of your flagship rocket.
Outside of Earth, Europa is probably the best place for life as we know it to develop. Beneath the radiation-soaked exterior, an ice layer that could be up to 20 km thick, lies a vast ocean that stretches deep into Europa’s interior. This internal ocean, though bereft of any light, could very well harbor the right conditions to support the development of complex life. However if we’re ever going to entertain the idea of exploring the depths of that vast and dark place we’ll first need a lot more data on Europa itself. Last week NASA greenlit the Europa Clipper mission, which will do just that, slated for some time in the 2020s.
Exploration of Europa has been relatively sparse, with the most recent mission being the New Horizons probe, which imaged Europa during its Jupiter flyby on its path to Pluto. Indeed the majority of missions that have imaged Europa have been flybys, with the only long-duration mission being the Galileo probe, which spent 8 years in orbit around Jupiter and made numerous flybys of Europa. The Europa Clipper mission would be quite similar in nature, with the craft conducting multiple flybys rather than staying in orbit. The mission would include the multi-year journey to our jovian brother and no less than 45 flybys of Europa once it arrived.
It might seem odd that an observation mission would opt to do numerous flybys rather than a continuous orbit, however there are multiple reasons for this. For starters Jupiter has a powerful radiation belt that stretches some 700,000 km out from the planet, enveloping Europa. This means that any craft that dares to linger within Jupiter’s radiation belts has a somewhat limited lifetime and, had NASA opted for an orbital mission rather than a flyby one, the craft’s expected lifetime wouldn’t be much more than a month or so. Strictly speaking this might not be too much of an issue, as you can make a lot of observations in a month, however the real challenge comes from getting that data back down to Earth.
Deep space robotic probes are often capable of capturing a lot more information than they’re able to send back in real time, leading them to store a lot of information locally and transmit it back over a longer period. If the Europa Clipper were orbital this would mean it would only have 30 days or so in which to send back information, not nearly enough for the volumes of data that modern probes can generate. The flybys, though, give the probe more than enough time to dump all of its data back down to Earth whilst it’s coasting outside of Jupiter’s harsh radiation belts, ensuring that all data gathered is returned safely.
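To get a feel for why a 30 day window is so tight, a rough sketch of the downlink arithmetic helps; the data volume and link rate below are made-up, illustrative round numbers, not actual Europa Clipper figures:

```python
# Illustrative deep-space downlink budget; both figures here are
# hypothetical round numbers chosen purely for demonstration.
data_gigabits = 50        # data stored during one flyby (assumed)
link_rate_kbps = 150      # sustained downlink rate to Earth (assumed)

seconds = data_gigabits * 1e9 / (link_rate_kbps * 1e3)
days = seconds / 86_400   # convert seconds to days

print(f"Downlink time: {days:.1f} days")
```

Even with these generous made-up numbers a single flyby’s worth of data takes days to trickle home, so an orbiter with a month-long lifespan would be racing its own demise to get its observations back.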
Hopefully the data that this craft brings back will pave the way for a potential mission to the surface sometime in the future. Europa has so much potential for harboring life that we simply must investigate it, and the data gleaned from the Europa Clipper mission will provide the basis for a future landing mission. Of course such a mission is likely decades away, however I, and many others, believe that a mission to poke beneath the surface of Europa is the best chance we have of finding alien life. Even if we don’t find it, the mission will provide valuable insight into the conditions required to form life and will help point our future searches.
Your garden variety telescope is usually what’s called a refracting telescope, one that uses a series of lenses to enlarge far away objects for your viewing pleasure. For backyard astronomy they work quite well, often providing a great view of our nearby celestial objects, however for scientific observations they’re usually not as desirable. Instead most large scientific telescopes use what’s called a reflecting telescope, which utilizes a large mirror to reflect the image onto a sensor for capture. The larger the mirror the bigger and more detailed a picture you can capture, however bigger mirrors come with their own challenges, especially when you want to launch them into space. Thus researchers are always looking for novel ways to create a mirror and one potential avenue that NASA is pursuing is, put simply, a little fabulous.
One method that many large telescopes use to get around the problem of creating huge mirrors is to use numerous smaller ones. This does introduce some additional complexity, like needing to make sure all the mirrors align properly to produce a coherent image on the sensor, however it also comes with some added benefits like being able to eliminate distortions created by the atmosphere. NASA’s new idea takes this to an extreme, replacing the mirror with a cloud of glitter-like particles held in place with lasers. Each of those particles then acts like a tiny mirror, much like their larger counterparts. Then, on the sensor side, software is being developed to turn the resulting kaleidoscope of colours back into a coherent image.
Compared to the traditional mirrors on telescopes, especially space-based ones like the Hubble, this has the potential to significantly reduce weight whilst at the same time dramatically increasing the size of the mirror we can use. The bigger the mirror the more light that can be captured and analysed, and a mirror made from this cloud of particles could be many times larger than its current counterparts. The current test apparatus (shown above) uses a traditional lens covered in glitter, which was used to validate the concept with two simulated “stars” shining through it. Whilst the current incarnation used multiple exposures and a lot of image processing to create the final image it does show that the concept could work, however it requires much more investigation before it can be used for observations.
A potential mission to verify the technology in space would use a small satellite with a prototype cloud, no bigger than a bottle cap in size. This would be primarily aimed at verifying that the cloud could be deployed and manipulated in space as designed and, if that proved successful, they could move on to capturing images. Whilst there doesn’t appear to be a strict timeline for that yet this concept, called Orbiting Rainbows, is part of the NASA Innovative Advanced Concepts program and so research on the idea will likely continue for some time to come. Whether it will result in an actual telescope however is anyone’s guess but such technology does show incredible promise.
The Rosetta mission’s journey to comet 67P/Churyumov–Gerasimenko spanned some 10 years, nearly all of that spent idling through space as it performed the numerous gravity assists required to get up to the required speed. By comparison the mere 60 hours that the Philae lander, the near cubic-meter-sized daughter craft of the parent Rosetta satellite, spent operating seemed almost insignificant, but thankfully it was able to return some data before it went dead. There was some speculation that, maybe, once the comet got close enough to the sun the lander would have enough power to come back online and resume its activities. Chances were slim though, as it had landed in a high-walled crater that blocked much of the sunlight from hitting its panels.
However, just under 12 hours ago, Philae made contact with Rosetta.
To say that the chances of Philae waking up were slim was putting it lightly given the trials and tribulations it went through during its landing attempt. In the extremely weak gravity field of its parent comet the 100 kg lander weighs a mere gram or so, meaning the slightest push could send it tumbling across the surface or, even worse, out into space. This wouldn’t have been an issue if Philae’s landing harpoons had fired but they unfortunately failed, meaning it had no way to hang onto the surface. Thankfully it seems that an outgassing event hasn’t blown our little lander away and, after the Rosetta craft turned on its receiver to listen for it, we’ve finally made contact with Philae.
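That tiny weight figure falls out of Newton’s law of gravitation; a rough sketch using approximate published values for 67P’s mass and radius (so treat the result as order-of-magnitude only):

```python
# Rough weight of Philae on comet 67P; the comet's mass and radius
# are approximate published values, so this is order-of-magnitude only.
G = 6.674e-11            # gravitational constant, m^3 kg^-1 s^-2
comet_mass = 1.0e13      # kg (approximate)
comet_radius = 2_000.0   # m (approximate mean radius)
lander_mass = 100.0      # kg

g_comet = G * comet_mass / comet_radius**2   # surface gravity, m/s^2
weight_n = lander_mass * g_comet             # weight in newtons
earth_equiv_g = weight_n / 9.81 * 1000       # grams that weigh the same on Earth

print(f"Surface gravity: {g_comet:.1e} m/s^2")
print(f"Philae weighs about as much as {earth_equiv_g:.1f} g does on Earth")
```

With a surface gravity tens of thousands of times weaker than Earth’s, even the recoil from a failed harpoon firing is enough to send a lander bouncing kilometres across the surface.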
Reestablishing contact with Philae is a boon to the Rosetta mission as the lander contains a wealth of data that we could not retrieve when it was last active due to time constraints. After the initial burst of 300 packets that the ESA was able to retrieve during this first contact there are still some 8,000 packets left to collect. These will provide some great insight into what happened to the lander during the dark period and what it’s been up to since it finally woke up. Early indications are that Philae has actually been awake for some time; it was just unable to make contact with the Rosetta probe for whatever reason. We’ll likely know a lot more as the ESA team gets more time to analyze the data.
This also doesn’t appear to be a spurious occurrence either, as the telemetry data indicates that Philae is operating at a balmy -35°C and is generating some 24 watts of power from its solar panels. Considering that its panels were rated for 32 watts at 3 AU from the sun (the comet is currently at 1.4 AU as of writing) that’s not bad, especially given that it’s sitting in something of a crater which limits its sun exposure dramatically. This figure can only be expected to increase as time goes on, meaning that Philae will likely be able to keep transmitting data and continue the experiments that it was unable to do previously. One such example is drilling into the surface of its parent comet, something which was attempted previously but didn’t prove successful.
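The 24 watt figure is easier to appreciate with a quick inverse-square scaling of the panel rating; a minimal sketch that ignores panel temperature, degradation and pointing, so it only bounds the unshaded potential:

```python
# Inverse-square scaling of solar panel output with distance from the sun.
# Ignores panel temperature, degradation and pointing, so this is a rough bound.
rated_watts = 32.0       # rated output at 3 AU (from the mission figures above)
rated_au = 3.0
current_au = 1.4

unshaded_watts = rated_watts * (rated_au / current_au) ** 2
fraction = 24.0 / unshaded_watts   # observed 24 W vs the theoretical maximum

print(f"Unshaded potential at {current_au} AU: {unshaded_watts:.0f} W")
print(f"Observed 24 W is {fraction:.0%} of that")
```

In other words Philae is only catching a small fraction of the light it theoretically could at this distance, which squares nicely with it sitting in the shadow of a crater wall, and explains why its power budget should keep improving as the comet closes on perihelion.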
Spacecraft coming back from the dead like this are a rare occurrence and it’s an absolute joy to hear that Philae has awoken from its 7 month slumber. Its brief 60 hour mission will hopefully now be extended several times over, allowing us to conduct the full array of experiments and gather valuable data. What insights it will dredge up is anyone’s guess but suffice to say that Philae’s reawakening is a boon to both the ESA and the greater science community at large.
Solar sails sound like something that’s strictly science fiction but they’ve had a surprising amount of real-world success over the past 5 years. Back in 2010 Japan launched its IKAROS craft, an ambitious project that had its sights set on a fully solar-sail-powered mission to Venus, which it successfully completed in December of the same year. NanoSail-D2 (D1 was lost when the Falcon 1 rocket carrying it failed to reach orbit) followed shortly afterwards and, whilst it had some issues deploying from its parent satellite, eventually managed to deploy and stay in orbit for some time. The most recent mission, LightSail-A, headed up by the Planetary Society (who took over the NanoSail project from NASA), has announced the successful deployment of its sail, which bodes well for their future missions.
Whilst this isn’t exactly new territory for solar sails as a technology it is a rather important validation of the platform that the Planetary Society wants to use going forward. LightSail-A was built on a three-unit CubeSat platform, with one unit dedicated to the core electronics and the other two holding the solar sail. It’s essentially another version of the NanoSail-D type craft that NASA launched when they were in charge of the program, although I’m sure there are some fundamental differences under the hood. What’s really interesting about LightSail-A though is that it’s entirely funded by the Planetary Society through their member dues and a wildly successful Kickstarter campaign, raising the requisite $1.8 million to get their craft into orbit.
It hasn’t been smooth sailing for this little craft however, something which seems to be par for the course with solar sail projects. Two days after launch LightSail-A fell out of contact with Earth, rendering it unable to deploy its sail. All hopes were then pinned on LightSail-A rebooting itself, which it did just over a week later. Just a few days after that, however, an issue with the battery system, which had failed to charge after the solar panels had deployed, knocked the craft out of communication again. Four days ago contact was reestablished and, just one short day afterwards, LightSail-A confirmed that it had deployed its sails. Today the Planetary Society released the first image captured from LightSail-A (shown above), confirming that the sails had been deployed.
The amount of time that the craft has left up in orbit is anyone’s guess as the original mission duration was planned for two to ten days after the sail had been deployed. The altitude of LightSail-A’s orbit means that it can’t be used to test the propulsion capabilities as the atmospheric drag is far greater than any thrust that the sail can generate. The next week or so will give the Planetary Society enough time to shake down the rest of the systems, hopefully working out any further kinks before they attempt their next mission, currently planned for sometime next year.
It might not be the most revolutionary nor the sexiest of space missions, however the fact that this happened on the back of support from the public is what makes LightSail-A’s accomplishments significant. Solar sails have the potential to revolutionize the way our spacecraft access deep space, enabling faster and more efficient missions to other celestial bodies within our solar system. We may be a decade or so away from seeing them adopted in earnest but without missions like LightSail-A we’d be waiting for much, much longer.
Human spaceflight is, to be blunt, an unnecessarily complicated affair. We humans require a whole host of things to make sure we can survive the trip through the harsh conditions of space, much more than our robotic companions require. Of course whilst robotic missions may be far more efficient at performing the missions we send them on, that doesn’t further our desire to become a multi-planetary species, and thus the quest to find better ways to preserve our fragile bodies in the harsh realms of space continues. One of the biggest issues we face when travelling to other worlds is how we’ll build our homes there, as traditional means simply will not work anywhere else that we currently know of. This is where novel techniques, such as 3D printing, come into play.
Much of the construction we engage in today relies on numerous supporting industries in order to function. Transplanting these to other worlds is simply not feasible, and taking prefabricated buildings along requires a bigger (or numerous smaller) launch vehicle to get the required payload into orbit. If we were able to build habitats in situ, however, then we could cut out the need to re-establish the supporting infrastructure or bring prefabricated buildings along with us, something which would go a long way to making an off-world colony sustainable. To that end NASA has started the 3D Printed Habitat Challenge, with $2.25 million in prizes to jump-start innovation in this area.
The first stage of the competition is for architects and design students to design habitats that maximise the benefits that 3D printing can provide. These will then likely be used to fuel further designs of habitats that could be constructed off-world. The second part of the competition, broken into two stages, is centered on the technology that will be used to create those kinds of structures. The first stage focuses on the technology required to use materials available on site as feedstock for 3D printing, something which is currently only achievable with very specific feedstocks. The second, and ultimately the most exciting, challenge is to actually build a device capable of using on-site materials (as well as recyclables) to create a habitable structure, with a cool $1.1 million for those who satisfy the challenge. Doing that would be no easy feat of course but the technology created along the way will prove invaluable to future manned missions in our solar system.
We’re still likely many years away from having robots on the moon that can print us endless 3D habitats but the fact that NASA wants to spur innovation in this area means they’re serious about pursuing a sustainable human presence offworld. There are likely numerous engineering challenges that we’ll need to overcome, especially between different planets, but it’s far easier to adapt a current technology than it is to build one from scratch. I’m very keen to see the entries to this competition as they could very well end up visiting other planets to build us homes there.
Whilst the mainstream media would have you believe that the bright spots on Ceres were a surprise to everyone they’ve actually been something we’ve known about for quite some time. However in the past they seemed to come and go, making consistent observations of them rather difficult. With the Dawn craft now in a stable orbit around Ceres we are finally in a position to observe them much more closely, bringing us ever closer to understanding what the heck they are. There’s still a lot more for us to understand but the first round of preliminary observations has provided some very good insight into the bright spots’ composition and their likely origin.
The first revelation to come out of Dawn’s observations was that the bright spot is in fact not a singular entity but is made up of several spots. There are two large primary bright spots accompanied by a bunch of smaller ones, which indicates that, as we make better observations, those larger spots will most likely turn out to be made up of multiple smaller spots as well. As the above ground map indicates there are actually a bunch of other bright spots dotted over Ceres’ landscape, however none of them were close enough together to be observable before Dawn began making its closer approaches. The origins of these spots remain something of a mystery, however there are several prevailing theories about how they could have been created.
Ceres has been observed to have a very tenuous atmosphere, which could only have arisen from outgassing or sublimation from its interior. In early 2014 observations of Ceres detected what appeared to be localized cryovolcanoes dumping some 3 kg of water out into space every second, supporting the theory that there’s some form of water hidden within Ceres. That lends weight to the idea that these bright spots are most likely water ice (which would have the required reflectivity), but at the same time water in a vacuum tends to sublimate very quickly, which raises the question of how long these bright spots have been around and how long they’ll last.
It’s quite possible that the ice in the crater was revealed by a recent impact and thus we’re just lucky that the bright spot is there for us to observe. Considering that Ceres sits within the asteroid belt between Mars and Jupiter this is a very real possibility, although it does then raise the question of why we’re not seeing more bright spots than we currently are. This is what fuels other, more exotic, theories about what’s at the base of that crater, such as a large metallic deposit. Evidence to support those theories isn’t yet forthcoming, however once Dawn starts making closer approaches there is potential for some to come to light.
Needless to say the next few months of observations will prove extremely valuable in determining the bright spots’ elusive nature. Whilst the reality is likely to be far more dull and boring than any of the exotic theories make it out to be it’s still an exciting prospect, one that will give us insight into how solar systems like ours form.
MESSENGER was a great example of how NASA’s reputation for solid engineering can extend the life of their spacecraft far beyond anyone’s expectations. Originally slated for a one year mission once it reached its destination (a 7 year long journey in itself) MESSENGER continued to operate around Mercury for another 3 years past its original mission date, providing all sorts of great data on the diminutive planet that hugs our sun. However after being in orbit for so long its fuel reserves ran empty, leaving it unable to maintain its orbit. Then last week MESSENGER crash-landed on Mercury’s surface, putting an end to the 10 year long mission. However before that happened MESSENGER sent back some interesting data about Mercury’s past.
As MESSENGER’s orbit deteriorated it crept ever closer to the surface of Mercury, allowing it to take measurements that it couldn’t take previously due to concerns about the spacecraft not being able to recover from such a close approach. During this time, when MESSENGER was orbiting at a mere 15 km (just a hair above the maximum flight ceiling of a modern jetliner), it was able to use its magnetometer to detect the magnetic field emanating from the rocks on Mercury’s surface. These fields showed that the magnetic field that surrounds Mercury is incredibly ancient, dating back almost 4 billion years (not long after the formation of our solar system). This is interesting for a variety of reasons but most of all because of how similar Mercury’s magnetic field is to ours.
Of all the planets in our solar system only Earth and Mercury have a sustained magnetic field that comes from an internal dynamo of undulating molten metals. Whilst the gas giants also generate magnetic fields they come from a far more exotic form of matter (metallic hydrogen), and our other rocky planets, Venus and Mars, have cores that have long since solidified, killing any significant field that might have once been present. Mercury’s field is much weaker than Earth’s, on the order of only 1% or so, but it’s still enough to produce a magnetosphere that deflects the solar wind. Knowing how Mercury’s field evolved and changed over time will give us insights not only into our own magnetic field but into those of planets in our solar system that have long since lost theirs.
There are likely a bunch more revelations to come from the data that MESSENGER gathered over all those years it spent orbiting our tiny celestial sister, but discoveries like this, ones that could only be made in the mission’s death throes, feel like they have a special kind of significance. Whilst it might not be the stuff that makes headlines around the world it’s the kind of incremental discovery that gives us insight into the inner workings of planets and their creation, something we will most definitely need to understand as we venture further into space.
Science reporting and science have something of a strained relationship. Whilst most scientists are modest and humble about the results they produce, the journalists who report on them often take the opposite approach, something which I feel drives the public’s disillusionment when it comes to announcements of scientific progress. This rift is most visible when it comes to research that challenges current scientific thinking, something which, whilst it needs to be done on a regular basis to strengthen the validity of our current thinking, also needs to be approached with the same trepidation as any other research. However from time to time things still slip through the cracks, like the latest news that the EmDrive may, potentially, be creating warp bubbles.
Initially the EmDrive, something which I blogged about late last year when the first results became public, was a curiosity with an unknown mechanism of action necessitating further study. The recent results, the ones responsible for all the hubbub, were conducted within a vacuum chamber, which nullified the criticism that the previous results were due to something like convection currents rather than another mechanism. That by itself is noteworthy, signalling that the EmDrive is something worth investigating further to see what’s causing the force, however things got a little crazy when they started shining lasers through it. They found that the time of flight of the light going through the EmDrive’s chamber was somehow being slowed down, which, potentially, could be caused by distortions in space-time.
The thing to note here though is that the laser test was conducted in atmosphere, not in a vacuum like the thrust test. This introduces another variable which, honestly, should have been controlled for, as it’s entirely possible that the effect is caused by something as innocuous as atmospheric distortion. There’s even real potential for this to go the same way as the faster-than-light neutrinos, with astoundingly repeatable results being created completely out of nothing thanks to equipment that wasn’t calibrated properly. Whilst I’m all for challenging the fundamental principles of science routinely and vigorously we must remember that extraordinary claims require extraordinary evidence, and right now there’s not enough of that to support many of the conclusions the wider press has been reaching.
What we mustn’t lose sight of here though is that the EmDrive, in its current form, points at a new mechanism for generating thrust that could potentially revolutionize our access to the deeper reaches of space. All the other spurious stuff around it is largely irrelevant as the core kernel of science that we discovered last year, that a resonant cavity pumped with microwaves can seemingly produce thrust in the absence of any reaction mass, appears to be solid. What’s required now is to dive further into this and figure out just how the heck it’s generating that force, because once we understand that we can further exploit it, potentially opening up the path to even better propulsion technology. If it turns out that it does create warp bubbles then all the better, but until we get definitive proof of that, speculating in that direction really doesn’t help us or the researchers behind it.