Space history of the past few decades is dominated by the Space Shuttle. Envisioned as a revolution in access to space, it was designed to be launched numerous times per year, dramatically reducing the cost of reaching orbit. The reality unfortunately fell short of the vision: the numerous design concessions made, coupled with the incredibly long average turnaround time between missions, meant that its costs far exceeded those of many alternative systems. Still it was an iconic craft, one that several generations will point to as the thing they remember about our trips beyond the atmosphere. What few people realise, though, is that the Shuttle could have had a Russian sister, and her name was Buran.
The Buran project started in 1974, only 5 or so years after NASA kicked off the Space Shuttle program. The goals of the two projects were quite similar, both aiming to develop a reusable craft that could deliver satellites, cosmonauts and other cargo into orbit. Indeed when you look at the resulting craft, one of which is shown above in its abandoned complex at the Baikonur Cosmodrome, the similarities are striking. It gets even more interesting when you compare their specifications, which are almost identical with only a meter or two of difference between them. Of course under the hood there are a lot of differences, especially when it comes to the primary purpose of the Buran launch system.
The propulsion system of the Buran differed significantly from the Shuttle’s: its Energia launcher was entirely liquid-fuelled, with a liquid oxygen/hydrogen core and liquid oxygen/kerosene strap-on boosters, rather than relying on solid rocket fuel. There are advantages to this, chief among them being able to shut the engines down after ignition (something solid rocket boosters can’t do), however these stages were not designed to be reusable, unlike their Shuttle counterparts. This meant that the only reusable part of the Buran launch system was the orbiter itself, which would increase the per-launch cost. Additionally the Buran included a fully autonomous flight control system from the get go, something the Shuttle only received during an upgrade later in its life.
That last part is somewhat telling of Buran’s true purpose as, whilst it could service non-military goals, it was primarily developed to serve the Soviet Union’s military interests. The winged profile of the craft enables many mission profiles that are simply of no interest to non-military agencies, and having it fully autonomous from the get go shows it was meant more for conflict than research. Indeed a Russian cosmonaut, commenting on the programme’s cancellation, noted that the Buran never had any civilian tasks planned for it; with no military requirements left to sustain it, the programme was shut down.
That was not before it saw numerous test flights, including a successful orbital one. The achievements of that single orbital flight should not be underestimated: Buran was the first craft of its kind to fly fully unmanned and to make a fully automated landing. The latter feat is even more impressive when you consider that it landed in a very strong crosswind, some 60 kilometers per hour, and still touched down mere meters from its intended mark. Had Russia continued development of the Buran shuttle there’s every chance it would have remained a more advanced craft than its American sister for a very long time.
Today however the Buran shuttles and their various test components lie scattered around the globe in varying states of disrepair and decay. Every so often rumours of a resurrection surface, however it’s been so long since the program was in operation that any revival would share little more than the name. Russia’s space program has continued on to great success however, with their Soyuz craft becoming the backbone of many of humanity’s endeavours in space. Whilst the Buran may never have become the icon for space that its sister Shuttle did, it remains the highly advanced concept that could have been, a testament to the ingenuity and capability of the Russian space program.
It’s been a decade in the making but today, after such a long wait, we can now see Pluto and Charon for what they are.
And they’re absolutely stunning.
The image on the left is the high resolution image taken by the LORRI camera a few days before its closest approach (which you’ve undoubtedly seen already), with the one on the right being a recently released image of Charon. Neither of these images is the sharpest available; for both Pluto and Charon we have images with up to 10 times the resolution streaming back to us right now, but they are already proving to be fruitful grounds for science. Indeed these two images have already given us insights into other celestial bodies within our solar system. Of course the most interesting thing about these pictures is what they reveal about Pluto and Charon themselves, and the insights are many.
The biggest surprise is just how “young” the surfaces of both Pluto and Charon are, devoid of the impact craters that are commonplace on celestial bodies that lack an atmosphere. What this means is that both Pluto’s and Charon’s surfaces have been geologically active in the recent past, on the order of some 100 million years ago or less. There’s even a chance that their surfaces are geologically active today. If they are, it means that our current theories about the mechanism behind such activity aren’t complete and there’s another way for a planet’s surface to refresh itself.
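The crater-counting logic above boils down to a one-line estimate: if impacts accumulate at a roughly steady rate, a surface’s age is its crater density divided by that rate, so a sparsely cratered surface must be young. The numbers in this sketch are made-up illustrative values, not actual Pluto measurements.

```python
# Toy crater-count dating: age ~ observed crater density / impact flux.
# Both figures below are invented for illustration only.
craters_per_million_km2 = 5.0        # observed density on a "young" surface
flux_per_million_km2_per_myr = 0.05  # assumed steady impact rate

age_myr = craters_per_million_km2 / flux_per_million_km2_per_myr
print(f"implied surface age: {age_myr:.0f} million years")
```

With those assumed numbers the surface works out to around 100 million years old, the same order as the estimates for Pluto and Charon.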
You see, current thinking is that for an icy moon or planet to churn its surface over on a regular basis an outside force has to be acting on it. This is based on the current set of icy moons that orbit our two gas giants, whose giant gravitational fields bend and warp the moons’ surfaces as they orbit. However neither Charon nor Pluto has the mass required to induce stresses of that magnitude, yet their surfaces are as geologically young as those of any of the icy moons. So there must be another mechanism in action here, one that allows even small icy planets and moons to refresh their surfaces on a continual basis. As to what this mechanism is we are not yet sure, but in the coming months I’m sure the scientists at NASA will have some amazing theories about how it works.
The most striking feature of Pluto is the heart, which has been tentatively dubbed Tombaugh Regio after Pluto’s discoverer. It consists of 2 different lobes, with the one on the left being noticeably smoother than the one on the right. It is currently theorized that the left lobe is a giant impact crater that was then filled with nitrogen snow (Pluto’s surface is 98% frozen nitrogen). Considering the resolution of the images we’ll have access to soon, I’m sure there will be more than enough information to figure out the heart’s origin and any other surprising things about Pluto’s surface.
Charon on the other hand appears to be littered with giant canyons, many of them several miles deep. It’s possible that whatever is responsible for the young surface of Charon is also responsible for these giant canyons, something we’ll have to wait for the high resolution images to figure out. Also of note is the giant dark patch on Charon’s polar region, which is thought to be a thin deposit of dark material with a sharp geological feature underneath it. As to what that is exactly we’re not sure, but the next few months will likely reveal its secrets to us.
These two images alone are incredible, showing us worlds that were simply blurs of differently coloured light for almost a century. We most certainly don’t have the full picture yet; the data New Horizons holds will take months to get back to us. But these images have already provided valuable insight into Pluto, Charon and the solar system in which we live. I can’t wait to see what else we discover as it’s bound to shake up our understanding of the universe once again.
There are numerous risks that spacecraft face when traversing the deep black of space. Since we’ve sent many probes to many locations most of these risks are well known and we’ve built systems to accommodate them. Most craft carry fully redundant main systems, ensuring that if the primary fails the backup can carry on the task the probe was designed to do. The systems themselves are also built to withstand the torturous conditions that space throws at them, ensuring that even a single piece of hardware has a pretty good chance of surviving its journey. However sometimes even all that engineering can’t account for what happens out there, and yesterday that happened to New Horizons.
New Horizons is a mission led by NASA which will be the first robotic probe to make a close approach to Pluto. Its primary mission is to capture the most detailed view of Pluto yet, generating vast amounts of data about the distant dwarf planet. Unlike many similar missions New Horizons won’t be entering Pluto’s orbit; instead it will capture as much data as it can as it whips past Pluto at a blistering 14 km/s or so. Then it will set its sights on one of the numerous Kuiper Belt objects, where it will do the same. This mission has been a long time in the making, launching back in early 2006, and is scheduled to “arrive” at Pluto in the next 10 days.
However, just yesterday, the craft entered safe mode.
What caused this to happen is not yet known, however one good piece of news is that the craft is still contactable and operating within expected parameters for an event of this nature. Essentially the primary computer sensed a fault and, as it is programmed to do in this situation, switched over to the backup system and put the probe into safe mode. Whilst NASA engineers have received some information as to what the fault might be, they have opted to do further diagnostics before switching the probe back onto its primary systems. This means that science activities that were scheduled for the next few days will likely be delayed whilst this troubleshooting process occurs. Thankfully there were only a few images scheduled to be taken and there should be ample time to get the probe running before its closest approach to Pluto.
The potential causes behind an event of this nature are numerous, but since the probe is acting as expected in such a situation it is most likely recoverable. My gut feeling is that it might have been a cosmic ray flipping a bit, something which the processors in probes like New Horizons are designed to detect. As more data trickles back down (a signal takes around 4.5 hours each way at that distance, 9 hours for a round trip) we’ll know for sure what caused the problem and what the time frame to recover will be.
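That 9 hour figure is the round trip. The sketch below, using an assumed Earth-Pluto distance of roughly 4.8 billion km (the true distance varies with both orbits), shows how the one-way light time works out to about four and a half hours.

```python
# One-way and round-trip light time to New Horizons near Pluto.
# The distance is an assumed round number for illustration.
DISTANCE_KM = 4.8e9             # approximate Earth-Pluto distance at flyby
SPEED_OF_LIGHT_KM_S = 299_792.458

one_way_s = DISTANCE_KM / SPEED_OF_LIGHT_KM_S
one_way_hours = one_way_s / 3600
round_trip_hours = 2 * one_way_hours

print(f"one-way:    {one_way_hours:.1f} hours")
print(f"round trip: {round_trip_hours:.1f} hours")
```

So a command sent from Earth can’t receive an acknowledgement for the better part of a working day, which is why troubleshooting a probe at that distance is such a slow process.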
Events like this aren’t uncommon, nor are they unexpected, but having one this close to the mission’s ultimate goal, especially after the long wait to get there, is sure to be causing some heartache for the engineers at NASA. New Horizons will only have a very limited opportunity to do the high resolution mapping that it was built to do and events like these just up the pressure on everyone to make sure the craft delivers as expected. I have every confidence that the team at NASA will get everything in order in no time at all; however, I’m sure there are going to be some late nights for them in the next few days.
Godspeed, New Horizons.
It seems somewhat trite to say it but rocket science is hard. Ask anyone who lived near a NASA testing site back in the heyday of the space program and they’ll regale you with stories of numerous rockets thundering skyward only to meet their fate shortly after. There is no universal reason behind rockets exploding as there are so many subsystems in which a failure can lead to a rapid unscheduled disassembly. The only universal truth about sending things into orbit atop a giant continuous explosion is that one day one of your rockets will end up blowing itself to bits. Today that happened to SpaceX.
The CRS-7 mission was SpaceX’s 7th commercial resupply mission to the International Space Station, with its primary payload consisting of around 1,800 kg of supplies and equipment. The most important piece of cargo it was carrying was the International Docking Adapter (IDA-1), which would have been used to convert one of the current Pressurized Mating Adapters to the new NASA Docking System. This would have allowed resupply craft such as the Dragon capsule to dock directly with the ISS rather than being grappled and berthed, a method that is not preferred for coupling crewed craft (especially for crew egress in an emergency). Other payloads included things like the Meteor Shower Camera, which was actually a backup as the primary was lost in the Antares rocket explosion of last year.
Elon Musk tweeted shortly after the incident that the cause appears to be an overpressure event in the upper stage LOX tank. Watching the video you can see what he’s alluding to: shortly after take off there appears to be a rupture in the upper tank, which leads to the massive cloud of gas enveloping the rocket. The event happened shortly after the rocket passed max-q, the point at which the aerodynamic stresses on the craft reach their maximum. It’s possible that the combination of a high pressure event coinciding with max-q was enough to rupture the tank, leading to the rocket’s demise. SpaceX is still continuing its investigation, however, and we’ll have a complete picture once the fault analysis is done.
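Max-q falls out of the dynamic pressure formula q = ½ρv²: velocity keeps climbing while air density keeps falling, so their product peaks a minute or so into flight. The density and velocity below are illustrative round numbers for a launcher around that point, not actual Falcon 9 telemetry.

```python
# Dynamic pressure q = 0.5 * rho * v^2, the quantity that peaks at max-q.
def dynamic_pressure(rho_kg_m3: float, v_m_s: float) -> float:
    """Return dynamic pressure in pascals."""
    return 0.5 * rho_kg_m3 * v_m_s ** 2

# Roughly representative values around max-q (~12 km altitude):
rho = 0.3   # kg/m^3, thin air at altitude (assumed)
v = 450.0   # m/s, around Mach 1.5 (assumed)
q = dynamic_pressure(rho, v)
print(f"q = {q / 1000:.1f} kPa")
```

With those numbers q comes out at roughly 30 kPa, which is the right order of magnitude for the structural loads a launcher is designed around at that point in flight.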
A few keen observers have noted that, unlike other rocket failures which usually end in a rather spectacular fireball, it appears that the payload capsule may have survived. The press conference held shortly after made mention of telemetry data being received for some time after the explosion occurred, which would indicate that the capsule did manage to survive it. However it’s unlikely that the payload will be retrievable as no one has reported seeing parachutes after the explosion. It would be a great boon to the few secondary payloads if they could be recovered, but I’m certain no one is holding their breath.
This marks the first failure in 19 flights of SpaceX’s Falcon 9, a milestone I’m sure none were hoping they’d mark. Putting that in perspective though, this is a 13 year old space company that has managed to do things that took its competitors decades. I’m sure the investigations currently underway will identify the cause in short order and future flights will not suffer the same fate. My heart goes out to all the engineers at SpaceX during this time as it cannot be easy picking through the debris of your flagship rocket.
Outside of Earth, Europa is probably the best place for life as we know it to develop. Beneath the radiation-soaked exterior, an ice layer that could be up to 20 km thick, lies a vast ocean that stretches deep into Europa’s interior. This internal ocean, though bereft of any light, could very well harbor the right conditions to support the development of complex life. However if we’re ever going to entertain the idea of exploring the depths of that vast and dark place we’ll first need a lot more data on Europa itself. Last week NASA greenlit the Europa Clipper mission, which will do just that, slated to fly some time in the 2020s.
Exploration of Europa has been relatively sparse, the most recent visit coming from the New Horizons probe, which imaged Europa during its Jupiter flyby on the way to Pluto. Indeed the majority of missions that have imaged Europa have been flybys, with the only long-duration mission being the Galileo probe, which spent 8 years in orbit around Jupiter and made numerous flybys of Europa. The Europa Clipper mission would be quite similar in nature, with the craft conducting multiple flybys rather than staying in orbit: a multi-year journey to our jovian brother followed by no less than 45 flybys of Europa once it arrived.
It might seem odd that an observation mission would opt for numerous flybys rather than a continuous orbit, however there are multiple reasons for this. For starters Jupiter has a powerful radiation belt that stretches some 700,000 km out from the planet, enveloping Europa. This means that any craft that dares to enter orbit around Jupiter has a rather limited lifetime; had NASA opted for an orbital mission rather than flybys, the craft’s expected lifetime wouldn’t be much more than a month or so. Strictly speaking this might not be too much of an issue, as you can make a lot of observations in a month; the real challenge comes from getting that data back down to Earth.
Deep space robotic probes are often capable of capturing far more information than they’re able to send back in real time, so they store a lot of it locally and transmit it back over a longer period. If the Europa Clipper were orbital it would only have around 30 days in which to send back information, not nearly enough for the volumes of data that modern probes generate. The flybys, though, give the probe more than enough time to dump all of its data back to Earth whilst it’s coasting outside of Jupiter’s harsh radiation belts, ensuring that everything gathered is returned safely.
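A back-of-envelope budget shows why that lifetime matters. Both the data volume and the downlink rate below are assumed round numbers rather than actual Europa Clipper figures, but they illustrate how easily the transmission time can exceed a month-long orbital lifetime.

```python
# Rough downlink budget: time needed to return a science data set
# over a slow deep-space link. Both inputs are assumptions.
data_gbit = 100.0   # assumed science data gathered, in gigabits
rate_kbps = 20.0    # assumed average downlink rate from Jupiter distance

seconds_needed = (data_gbit * 1e9) / (rate_kbps * 1e3)  # bits / (bits per second)
days_needed = seconds_needed / 86_400
print(f"{days_needed:.0f} days to downlink")
```

Under those assumptions the downlink alone takes nearly two months, roughly double the month or so an orbiter could expect to survive inside the radiation belts, whereas a flyby craft can spend that time transmitting safely from further out.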
Hopefully the data that this craft brings back will pave the way for a potential mission to the surface sometime in the future. Europa has so much potential for harboring life that we simply must investigate it, and the data gleaned from the Europa Clipper mission will provide the basis for a future landing mission. Of course such a mission is likely decades away, however I, and many others, believe that a mission to poke beneath the surface of Europa is the best chance we have of finding alien life. Even if we don’t find it, such a mission will provide valuable insight into the conditions that give rise to life and help direct our future searches.
Your garden variety telescope is usually what’s called a refracting telescope, one that uses a series of lenses to enlarge far away objects for your viewing pleasure. For backyard astronomy they work quite well, often providing a great view of our nearby celestial objects, however for scientific observations they’re usually less desirable. Instead most large scientific telescopes are reflecting telescopes, which utilize a large mirror to reflect the image onto a sensor for capture. The larger the mirror, the bigger and more detailed a picture you can capture; bigger mirrors, however, come with their own challenges, especially when you want to launch them into space. Thus researchers are always looking for novel ways to create a mirror, and one potential avenue that NASA is pursuing is, put simply, a little fabulous.
One method that many large telescopes use to get around the problem of creating huge mirrors is to use numerous smaller ones. This does introduce some additional complexity, like needing to make sure all the mirrors align properly to produce a coherent image on the sensor, however it also brings added benefits, like being able to eliminate distortions created by the atmosphere. NASA’s new idea takes this to an extreme, replacing the mirror with a cloud of glitter-like particles held in place with lasers. Each of those particles then acts like a tiny mirror, much like its larger counterparts. Then, on the sensor side, software is being developed to turn the resulting kaleidoscope of colours back into a coherent image.
Compared to traditional mirrors on telescopes, especially space based ones like the Hubble, this has the potential to significantly reduce weight whilst dramatically increasing the size of the mirror we can use. The bigger the mirror the more light that can be captured and analysed, and a mirror built from this cloud of particles could be many times larger than its current counterparts. The current test apparatus (shown above) uses a traditional lens covered in glitter, which validated the concept using 2 simulated “stars” that shone through it. Whilst the current incarnation needed multiple exposures and a lot of image processing to create the final image, it does show that the concept could work; much more investigation is required, however, before it can be used for real observations.
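The light-gathering argument comes down to collecting area, which grows with the square of the aperture diameter. The 30 m cloud aperture in this sketch is a purely hypothetical figure chosen for comparison against Hubble’s actual 2.4 m mirror.

```python
import math

# Collecting area of a circular aperture of a given diameter.
def collecting_area_m2(diameter_m: float) -> float:
    return math.pi * (diameter_m / 2) ** 2

hubble = collecting_area_m2(2.4)    # Hubble's primary mirror
cloud = collecting_area_m2(30.0)    # hypothetical particle-cloud aperture
print(f"light-gathering advantage: {cloud / hubble:.0f}x")
```

Because the advantage scales with diameter squared, even a modestly sized cloud aperture would gather over a hundred times the light of Hubble’s mirror, which is exactly why a near-weightless large aperture is such an attractive idea.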
A potential mission to verify the technology in space would use a small satellite with a prototype cloud no bigger than a bottle cap. This would primarily be aimed at verifying that the cloud could be deployed and manipulated in space as designed and, if that proved successful, they could then move on to capturing images. Whilst there doesn’t appear to be a strict timeline yet, this concept, called Orbiting Rainbows, is part of the NASA Innovative Advanced Concepts program, so research on the idea will likely continue for some time to come. Whether it will result in an actual telescope is anyone’s guess, but the technology does show incredible promise.
Human spaceflight is, to be blunt, an unnecessarily complicated affair. We humans require a whole host of things to survive the trip through the harsh conditions of space, much more than our robotic companions do. Of course whilst robotic missions may be far more efficient at performing the missions we send them on, that doesn’t further our desire to become a multi-planetary species, and thus the quest to find better ways to preserve our fragile bodies in the harsh realms of space continues. One of the biggest issues we face when travelling to other worlds is how we’ll build our homes there, as traditional means simply will not work anywhere else that we currently know of. This is where novel techniques, such as 3D printing, come into play.
Much of the construction we engage in today relies on numerous supporting industries in order to function. Transplanting these to other worlds is simply not feasible, and taking prefabricated buildings along requires a bigger (or numerous smaller) launch vehicle to get the payload into orbit. If we were able to build habitats in situ, however, we could cut out the need to re-establish supporting infrastructure or bring prefabricated buildings with us, something which would go a long way towards making an off-world colony sustainable. To that end NASA has started the 3D Printed Habitat Challenge, with $2.25 million in prizes to jump start innovation in this area.
The first stage of the competition is for architects and design students to design habitats that maximise the benefits that 3D printing can provide. These will then likely be used to fuel further designs of habitats that could be constructed off-world. The second part of the competition, broken into 2 stages, is centered on the technology that will be used to create those kinds of structures. The first stage focuses on the technology required to use materials available on site as feedstock for 3D printing, something currently achieved only with very specific feedstocks. The second, and ultimately the most exciting, challenge is to actually build a device capable of using onsite materials (as well as recyclables) to create a habitable structure, with a cool $1.1 million for those who satisfy the challenge. Doing that would be no easy feat, of course, but the technology created along the way will prove invaluable to future manned missions in our solar system.
We’re still likely many years away from having robots on the moon that can print us endless 3D habitats, but the fact that NASA wants to spur innovation in this area means they’re serious about pursuing a sustainable human presence offworld. There are likely numerous engineering challenges we’ll need to overcome, especially between different planets, but it’s far easier to adapt a current technology than it is to build one from scratch. I’m very keen to see the entries to this competition as they could very well end up visiting other planets to build us homes there.
MESSENGER was a great example of how NASA’s reputation for solid engineering can extend the life of their spacecraft far beyond anyone’s expectations. Originally slated for a one year mission once it reached its destination (after a 7 year journey), MESSENGER continued to operate around Mercury for another 3 years past its original mission date, providing all sorts of great data on the diminutive planet that hugs our sun. However after being in orbit for so long its fuel reserves ran empty, leaving it unable to maintain its orbit. Then last week MESSENGER crashed into Mercury’s surface, putting an end to the 10 year mission. Before that happened, however, MESSENGER sent back some interesting data about Mercury’s past.
As MESSENGER’s orbit deteriorated it crept ever closer to the surface of Mercury, allowing it to take measurements that it couldn’t previously due to concerns about the spacecraft not being able to recover from such a close approach. During this time, when MESSENGER was orbiting at a mere 15 km (just a hair above the maximum flight ceiling of a modern jetliner), it was able to use its magnetometer to detect the magnetic field emanating from the rocks on Mercury’s surface. These measurements showed that the magnetic field surrounding Mercury is incredibly ancient, dating back almost 4 billion years (right around the formation of our solar system). This is interesting for a variety of reasons, but most of all because of how similar Mercury’s magnetic field is to ours.
Of all the planets in our solar system only Earth and Mercury have a sustained magnetic field that comes from an internal dynamo of churning molten metal. Whilst the gas giants also generate magnetic fields, theirs come from a far more exotic form of matter (metallic hydrogen), and our other rocky planets, Venus and Mars, have cores that have long since solidified, killing any significant field that might once have been present. Mercury’s field is much weaker than Earth’s, on the order of only 1% or so, but it’s still enough to produce a magnetosphere that deflects the solar wind. Knowing how Mercury’s field evolved and changed over time will give us insights not only into our own magnetic field but also into those planets in our solar system that have long since lost theirs.
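That 1% figure falls straight out of the commonly quoted equatorial surface field strengths: roughly 31,000 nT for Earth versus about 300 nT for Mercury.

```python
# Rough ratio of Mercury's surface magnetic field to Earth's,
# using commonly quoted equatorial values in nanotesla.
earth_surface_nt = 31_000
mercury_surface_nt = 300

ratio = mercury_surface_nt / earth_surface_nt
print(f"Mercury's field is about {ratio:.0%} of Earth's")
```

Even at a hundredth of Earth’s strength, that field is enough to stand off the solar wind and carve out a small magnetosphere around the planet.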
There are likely a bunch more revelations to come from the data that MESSENGER gathered over all those years spent orbiting our tiny celestial sister, but discoveries like this, ones that could only be made in the mission’s death throes, feel like they have a special kind of significance. Whilst it might not be the stuff that makes headlines around the world, it’s the kind of incremental discovery that gives us insight into the inner workings of planets and their creation, something we will most definitely need to understand as we venture further into space.
Science reporting and science have something of a strained relationship. Whilst most scientists are modest and humble about the results they produce, the journalists who report on them often take the opposite approach, something which I feel drives the public’s disillusionment with announcements of scientific progress. This rift is most visible when it comes to research that challenges current scientific thinking, something which, whilst it needs to be done regularly to strengthen the validity of our current theories, also needs to be approached with the same trepidation as any other research. However from time to time things still slip through the cracks, like the latest news that the EmDrive may, potentially, be creating warp bubbles.
Initially the EmDrive, something I blogged about late last year when the first results became public, was a curiosity with an unknown mechanism of action that necessitated further study. The recent tests, the ones responsible for all the hubbub, were conducted within a vacuum chamber, which nullified the criticism that the previous results were due to something like convection currents rather than another mechanism. That by itself is noteworthy, signalling that the EmDrive is worth investigating further to see what’s causing the force; however things got a little crazy when they started shining lasers through it. They found that the time of flight of light passing through the EmDrive’s chamber was somehow being slowed down, which, potentially, could be caused by distortions in spacetime.
The thing to note here, though, is that the laser test was conducted in atmosphere, not in a vacuum like the thrust test. This introduces another variable which, honestly, should have been controlled for, as it’s entirely possible that the effect is caused by something as innocuous as atmospheric distortion. There’s even real potential for this to go the same way as the faster than light neutrinos, with astoundingly repeatable results created completely out of nothing thanks to equipment that wasn’t calibrated properly. Whilst I’m all for challenging the fundamental principles of science routinely and vigorously, we must remember that extraordinary claims require extraordinary evidence, and right now there’s not enough of it to support many of the conclusions the wider press has been reaching.
What we mustn’t lose sight of here though is that the EmDrive, in its current form, points at a new mechanism of generating thrust that could potentially revolutionize our access to the deeper reaches of space. All the other spurious stuff around it is largely irrelevant as the core kernel of science discovered last year, that a resonant cavity pumped with microwaves can produce thrust in the absence of any reaction mass, seems to be solid. What’s required now is that we dive further in and figure out just how the heck it’s generating that force, because once we understand that we can exploit it further, potentially opening up the path to even better propulsion technology. If it turns out that it does create warp bubbles then all the better, but until we get definitive proof, speculating in that direction really doesn’t help us or the researchers behind it.
It’s been 17 years since the first part of the International Space Station was launched into orbit and since then it’s become a symbol of humanity’s ability and desire to go further in space. The fact that NASA and Roscosmos have remained cooperative throughout all the tumultuous times that their parent countries have endured speaks to the greater goal that they, and all of the other participating nations, seek. However, just like any other piece of equipment, the ISS will eventually wear out, requiring replacement or significant revamping in order to keep going. The current plan is to keep it going through to 2024; past that date it’s likely the ISS will meet its fiery end, burning up in a controlled re-entry back to Earth.
Russia had made its intent clear for when that fateful time arrived: it would detach all of its modules and form its own space station in orbit to continue operations. Such an exercise, whilst possible, would be non-trivial, and by Russia’s own account would likely only give those modules another 4 years of life before the maintenance costs of the aging hardware outstripped any potential benefits. Thus the pressure has been on to start designing a replacement orbital space station, one that can support humanity’s activities in space for the next few decades.
Roscosmos recently announced that it had committed to building the ISS’s replacement with NASA, with details to be forthcoming. NASA, whilst praising Russia’s commitment to continuing ISS operations to 2024, didn’t speak to a potential future space station. Whilst they didn’t outright deny that NASA and Roscosmos are, or will be, working on a future space station together, they have said in the past that they hope the private space industry will be able to provide such capability soon. That’s looking likely too, given that Bigelow is hoping to ship their BEAM module to the ISS by the end of this year.
There’s every chance that NASA and Roscosmos have been in talks behind the scenes to work on the next generation space station and Russia simply jumped the gun on announcing the collaboration. It does seem a little odd, however, as Russia’s previous announcement of breaking away from the ISS when the deorbit date came was rather… hostile, and most expected NASA and Roscosmos to simply part ways at that point. Doing an about-face and announcing a collaboration is great news; it just seems odd that NASA wouldn’t say something similar if it were actually happening. So either Russia’s just really excited to make an announcement or there’s a larger play happening here, but I can’t imagine NASA being guilted into committing to building another ISS.
I’m hopeful that it’s not a lot of hot air, as the ISS has proven to be both a valuable science experiment and an inspirational icon spurring the next generation to pursue careers beyond the Earth’s surface. We’ve learnt many lessons from building the now football-field-sized station in orbit and the next one we build can be that much better because of them. That, combined with the numerous benefits that come from international collaboration on a project of this scale, means there’s still an incredible amount of value to derive from something like the ISS, and I hope Roscosmos’ ambition is grounded in reality.