If there’s one thing SpaceX has shown us, it’s that landing a rocket from space onto a barge in the middle of the ocean is, well, hard. Whilst they’ve successfully landed one of their Falcon-9 first stages on land, not all of their launches will match that profile, hence the requirement for their drone barge. That barge presents its own set of challenges, although the last 2 failed attempts were down to a lack of hydraulic fluid and slower than expected throttle response respectively. Their most recent launch, which delivered the Jason-3 earth observation satellite into orbit, managed to land successfully again but failed to stay upright at the last minute.
Elon stated that the failure was due to one of the lockout collets (basically a clamp) not locking properly on one of the legs. In the video above you can see which leg is the culprit, sliding forward and ultimately collapsing underneath the booster. The current thinking is that the failure was due to icing caused by heavy fog at liftoff, although a detailed analysis has not yet been conducted. Thankfully this time around the pieces they have to look at are a little bigger than those left behind by last time’s rather catastrophic explosion.
Whilst it might seem like landing on a drone ship is always doomed to failure we have to remember that this is what the early stages of NASA and other space programmes looked like. Keeping a rocket like that upright under its own strength, on a moving barge no less, is a difficult endeavour and the fact that they’ve managed to successfully land twice (but fail to remain upright) shows that they’re most of the way there. I’m definitely looking forward to their next attempt as there’s a very high likelihood of that one finally succeeding.
The payload it launched is part of the Ocean Surface Topography from Space mission, which aims to map the height of the earth’s oceans over time. It joins one of its predecessors (Jason-2) and combined they will be able to map approximately 95% of the ice-free oceans in the world every 10 days. This allows researchers to study climate effects, provide forecasting for cyclones and even track animals. Jason-3 will enable much higher resolution data to be captured and paves the way for a single future mission planned to replace both of the current Jason series satellites.
SpaceX is rapidly decreasing the cost of access to space and once they perfect first stage landings on both sea and land they’ll be able to push it down even further. Hopefully they’ll extend this technology to their larger family of boosters, one of which is scheduled to be test flown later this year. That particular rocket is slated to reduce launch costs by a factor of 4, getting us dangerously close to the $1,000/kg limit that, when achieved, will be the start of a new era of space access for all.
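To put that factor-of-4 figure in context, here’s a quick back-of-the-envelope sketch; the launch price and payload mass used here are rough illustrative assumptions, not official SpaceX figures:

```python
# Back-of-the-envelope launch cost arithmetic.
# Both figures below are rough assumptions for illustration only.
launch_price_usd = 61_000_000  # assumed Falcon-9 launch price
payload_to_leo_kg = 13_000     # assumed payload mass to low Earth orbit

cost_per_kg = launch_price_usd / payload_to_leo_kg
print(f"Current cost: ~${cost_per_kg:,.0f}/kg")

# The factor-of-4 reduction mentioned above
print(f"Reduced cost: ~${cost_per_kg / 4:,.0f}/kg")
```

With those assumed numbers the current cost works out to roughly $4,700/kg, and a factor-of-4 reduction lands just under $1,200/kg, which is where the $1,000/kg mark starts to look reachable.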
Second chances in space missions are exceedingly rare. When something goes wrong it often means either a total loss of the mission or a significantly reduced outlook for what the mission can accomplish. Primarily this comes down to the tight engineering constraints that space missions face, with even multiple redundant systems only able to cope with so much. Still, every so often second chances do happen and sometimes a new mission is born out of one that might otherwise have been a failure. Such is the story of JAXA’s Akatsuki craft, which has been lying in wait for the last 5 years for its chance to fulfill its mission.
Akatsuki launched 5 years ago aboard JAXA’s H-IIA rocket. It was to be JAXA’s second interplanetary probe after their first, Nozomi, failed to reach its intended destination over a decade prior. The insertion burn was confirmed to have started on schedule, however after the communications blackout period where the probe was behind Venus it failed to reestablish communications at the expected time. It was found drifting away from Venus in safe mode, indicating that it had undergone some form of failure. In this state it was operating in a very low bandwidth mode and so it took some time to diagnose what had happened. As it turned out the main engine had fired for only 3 minutes before failing, not enough to put the craft in the required orbit.
However it was enough to put Akatsuki on a leading orbit with Venus, one that would eventually bring it back around to meet the planet some time in the future. It was then decided that JAXA would attempt recovery of the craft into a new orbit around Venus, a highly elliptical one with a period of almost 2 weeks (the originally intended orbital period was approximately 30 hours). Investigation into the damage the craft sustained during the initial burn showed that the main engine was unusable and so the insertion burn would have to be performed by its attitude thrusters. JAXA had a lot of time to plan this as the next rendezvous would not happen for another 5 years.
Following some initial maneuvers back in July and September, Akatsuki began its orbital insertion burn on the 7th of December. The small attitude thrusters, designed only to keep the spacecraft oriented, fired for 20 straight minutes, far beyond what they were originally designed for. They did their job however and 2 days later JAXA announced that the craft had successfully entered orbit around Venus, albeit in a far more elliptical orbit than originally planned.
The extended duration in space has likely taken its toll on Akatsuki and so JAXA is currently undertaking a detailed investigation of its current status. 3 of its 6 cameras have been shown to be fully functional with the remainder scheduled to be brought online very soon. Scientific experiments using Akatsuki’s instruments won’t begin until sometime next year however, as an orbital correction maneuver is planned to tighten Akatsuki’s orbit slightly. Still, JAXA is confident that the majority of their science objectives can be met, an amazing boon to both their team and the wider scientific community.
It’s incredibly heartening to see JAXA successfully recover the Akatsuki craft after such a monumental setback. The research conducted using data from the Akatsuki craft will give us insights into why Venus is such a strange beast, rotating slowly in the opposite direction to nearly every other planet in our solar system. Whilst I’d never wish failure upon anyone, I know the lessons learnt from this experience will bolster JAXA’s future missions and, hopefully, their next one won’t suffer a similar fate.
It seems somewhat trite to say it but rocket science is hard. Ask anyone who lived near a NASA testing site back in the heydays of the space program and they’ll regale you with stories of numerous rockets thundering skyward only to meet their fate shortly after. There is no universal reason behind rockets exploding as there are so many things whose failure can lead to a rapid, unplanned deconstruction event. The only universal truth behind sending things into orbit atop a giant continuous explosion is that one day one of your rockets will end up blowing itself to bits. Today that happened to SpaceX.
The CRS-7 mission was SpaceX’s 7th commercial resupply mission to the International Space Station, with its primary payload consisting of around 1,800kg of supplies and equipment. The most important piece of cargo it was carrying was the International Docking Adapter (IDA-1), which would have been used to convert one of the current Pressurized Mating Adapters to the new NASA Docking System. This would have allowed resupply craft such as the Dragon capsule to dock directly with the ISS rather than being grappled and attached, the latter being far from the preferred method for coupling craft (especially for crew egress in an emergency). Other payloads included things like the Meteor Shower Camera, which was actually a backup as the primary was lost in the Antares rocket explosion of last year.
Elon Musk tweeted shortly after the incident that the cause appears to be an overpressure event in the upper stage LOX tank. Watching the video you can see what he’s alluding to here: shortly after takeoff there appears to be a rupture in the upper tank which leads to the massive cloud of gas enveloping the rocket. The event happened shortly after the rocket reached max-q, the point at which the aerodynamic stresses on the craft reach their maximum. It’s possible that the combination of a high pressure event coinciding with max-q was enough to rupture the tank, which then led to the rocket’s demise. SpaceX is still continuing its investigation however and we’ll have a full picture once they conduct a full fault analysis.
A few keen observers have noted that unlike other rocket failures, which usually end in a rather spectacular fireball, it appears that the payload capsule may have survived. The press conference held shortly after made mention of telemetry data being received for some time after the explosion had occurred which would indicate that the capsule did manage to survive. However it’s unlikely that the payload would be retrievable as no one has mentioned seeing parachutes after the explosion happened. It would be a great boon to the few secondary payloads if they were able to be recovered but I’m certain none of them are holding their breath.
This marks the first failed launch out of 18 for SpaceX’s Falcon-9 program, a milestone I’m sure none of them were hoping to mark. Putting that in perspective though, this is a 13-year-old space company that’s managed to do things that took its competitors decades. I’m sure the investigations that are currently underway will identify the cause in short order and future flights will not suffer the same fate. My heart goes out to all the engineers at SpaceX during this time as it cannot be easy picking through the debris of your flagship rocket.
There are numerous stories about the heydays of rocket engineering, when humanity was toying with a newfound power that we had little understanding of. People who lived near NASA’s test rocket ranges reported that they’d often wait for a launch and the inevitable fireball that would soon follow. Today launching things into space is well-understood territory and catastrophic failures are few and far between. Still, when you’re putting several thousand tons’ worth of kerosene and oxygen together and then putting a match to them there’s always the possibility that things will go wrong and, unfortunately for a lot of people, something did with the latest launch of the Orbital Sciences Antares rocket.
The mission it was flying was CRS Orb-3, the third resupply mission to the International Space Station using Orbital Sciences’ Cygnus craft. The main payload consisted mostly of supplies for the ISS including food, water, spare parts and science experiments. Ancillary payloads included a test version of the Arkyd satellites that Planetary Resources are planning to use to scout near-Earth asteroids for mining and a bunch of nano Earth observation satellites from Planet Labs. The loss of this craft, whilst likely insured against, means that all of these projects will have their timelines set back significantly as the next Antares launch isn’t planned until sometime next year.
NASA and Orbital Sciences haven’t released any information yet about what caused the crash, however from the video footage it appears that the malfunction started in the engines. The Antares rocket uses a modified version of the Russian AJ-26 engine, whose base design dates back to the 1960s when it was slated for use in the Soviet Moon shot program. The age of the design isn’t an inherently bad thing, as Orbital Sciences have shown the rockets were quite capable of putting things into orbit 4 times in the past, however the fact that Antares is the only rocket to use them does pose some concerns. The manufacturer of the engines has denied that they were to blame, citing the heavy modifications made by Aerojet prior to use, however it’s still probably too early to rule anything in or out.
One thing I’ve seen some people pick up on is the “Engines at 108%” callout as an indication of their impending doom. Ratings above 100% typically come from the initial design specifications, which aim to meet a certain power threshold. Many engines exceed this when they’re finally constructed and thus any power generated above the designed maximum is designated in this fashion. For most engines this isn’t a problem, the Shuttle routinely ran its engines at around 104% of rated thrust during ascent, so them being throttled over 100% during the ascent stage likely wasn’t an issue for the engines. We’ll know more when NASA and Orbital Sciences release the telemetry however.
Hopefully both Orbital Sciences and NASA can narrow down the cause of this crash quickly so it doesn’t affect any of the future CRS launches. Things like this are never good for the companies involved, especially when the launch system only has a handful of launches under its belt. The next few weeks will be telling for all involved as failures of this nature are rarely due to a single thing and are typically a culmination of a multitude of different factors leading up to the unfortunate, explosive demise of the craft.
It did make for a pretty decent light show, though.
The Kepler Mission is by far one of the most exciting things NASA has done in recent memory. Its goal was simple: observe a patch of stars continuously for a long period of time in order to detect the planets that orbit them. Its lone instrument for doing so is a highly sensitive photometer designed to detect the ever so subtle changes in brightness of a parent star when one of its planets transits in front of it. Whilst the chances of everything lining up just right so that we can witness such an event are low, the fact that Kepler could monitor some 145,000 stars at once meant that we were almost guaranteed to see a great deal of success.
Indeed we got just that.
The first six weeks of Kepler’s operation proved to be highly successful with 5 planets discovered, albeit ones that would likely be inhospitable due to their close proximity to their parent stars. The years since then have proved to be equally fruitful with Kepler identifying thousands of potential exoplanet candidates with hundreds of them since being confirmed via other methods. These discoveries have reshaped our idea of what our universe looks like with a planetary system like our own now thought to be a relatively common occurrence. Whilst we’re still a long way from finding our home away from home there’s a ton of tantalizing evidence suggesting that such places are numerous with untold numbers of them right in our own galaxy.
However earlier this year Kepler was struck with an insurmountable problem. You see, in order to monitor that field of stars precisely Kepler relied on a set of reaction wheels to ensure it was pointed in the right direction at all times. There are a total of 4 of them on board and Kepler only needs 3 in order to keep the pointing precision at the required level. Unfortunately one wheel had already failed, forcing the backup into service. Whilst that arrangement had been running fine for a while, on May 15th this year another reaction wheel failed and Kepler was unable to maintain its fix on the star field. At the time this was thought to be the end of the mission and, due to the specialized nature of the hardware, likely the end of Kepler’s useful life.
However, thanks to some incredibly clever mechanics, Kepler may rise again.
Whilst there are only 2 functioning reaction wheels, NASA scientists have determined that there’s another source of force for them to use: the pressure of sunlight on the craft. If they orient Kepler so that its solar panels are all evenly lit by the sun (the panels wrap around the outer shell of the craft) there’s a constant and reliable force applied to them. In conjunction with the 2 remaining reaction wheels this is enough to aim it, albeit at a different patch of the sky than originally intended. Additionally it won’t be able to keep itself on point consistently like it did previously, needing to reorient itself every 3 months or so, which means it will end up studying a different part of the sky each time.
Whilst this is a massive deviation from its original intended purpose it could potentially breathe a whole new life into the craft, prolonging its life significantly. Considering the numerous discoveries it has already helped us achieve continuing its mission in any way possible is a huge boon to the science community and a testament to NASA’s engineering prowess. We’re still at the initial stages of verifying whether or not this will work as intended but I’m very confident it will, meaning we’ll be enjoying Kepler aided discoveries for a long time to come.
The Proton series of rockets is one of the longest-running in the history of spaceflight. They made their debut back in 1965 when the first of them was used to launch the Proton series of scientific satellites, which were super high energy cosmic particle detectors. Since then they’ve become a mainstay of the Russian space program, being used for pretty much everything from communication satellites to launching the modules that make up much of the International Space Station. In that time they’ve seen some 384 launches in total, making the Proton one of the most successful launch platforms to date. However that number also includes 44 full and partial failures, including a few high profile ones that I blogged about a couple of years back.
Unfortunately it appears that history has repeated itself today with another Proton crashing in a rather spectacular fashion:
To put this in perspective, there have been about 37 launches of the Proton rocket since 2010, with 5 of them being either partial or full failures. This isn’t far out of line with the overall failure rate of the program, which hovers around 11%, but 4 of those failures have happened in the last 2 years, which is cause for concern. The primary problem seems to be related to the upper stage, as 3 of the recent 4 were due to it failing, something that can be attributed to it being a revised component that only came into service recently. This particular crash however was not an upper stage failure as it happened long before that component could come online, indicating the problem lies with the first stage.
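Those rates are easy to sanity-check from the launch counts quoted in these posts (a minimal sketch using the figures above):

```python
# Sanity-checking the Proton failure rates quoted above.
total_launches, total_failures = 384, 44   # programme history since 1965
recent_launches, recent_failures = 37, 5   # launches since 2010

overall_rate = total_failures / total_launches
recent_rate = recent_failures / recent_launches
print(f"Overall failure rate: {overall_rate:.1%}")    # ~11.5%
print(f"Failure rate since 2010: {recent_rate:.1%}")  # ~13.5%
```

The recent rate of roughly 13.5% sits a couple of points above the long-run average, which is noticeable but, given the small sample of 37 launches, not wildly outside it.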
The reason this crash ended so spectacularly is pretty interesting as it highlights some of the differences between American and Russian rocket designs. Most American launchers have a launch termination system built into them for situations like this, allowing the ground crew to destruct the rocket mid-air should anything go wrong. Russian rockets don’t have such systems and instead simply shut down the engines when failures like this happen. However for the safety of the ground crew the engines won’t shut off prior to 42 seconds after launch, which is why you see this particular rocket continuously firing right up until it tears itself apart.
Additionally the Russian rockets use a rocket fuel mixture that consists of Unsymmetrical Dimethylhydrazine and Nitrogen Tetroxide. When these two compounds mix together they react in a highly energetic hypergolic reaction, meaning they burn without requiring any ignition source. This is where the giant orange fireball comes from as the aerodynamic stresses on the craft ruptured the fuel and oxidizer tanks, causing them to come into contact and ignite. Other rocket designs usually use liquid oxygen and kerosene which don’t automatically ignite and thus wouldn’t typically produce a fireball like that but the launch termination systems usually ensure that all the remaining fuel is consumed anyway.
Needless to say this doesn’t reflect well on Russia’s launch capabilities but it should be taken in perspective. Whilst the recent failure rate is a cause for concern it has to be noted that the R-7, the rocket that launches both the Progress and Soyuz craft to the ISS, has experienced 0 failures in the same time frame with a very comparable number of launches. It’s quite likely that the failure isn’t part of a larger systemic issue since we’ve had multiple successful launches recently and I’m sure we’ll know the cause sooner rather than later. Hopefully Russia can get the issue resolved before too long and avoid such dramatic incidents in the future.
With my daily helping of all things TechCrunch, GigaOM, VentureBeat and what have you I pretty much can’t go a day without hearing about yet another up and coming start-up that’s poised to take the world by storm. Whilst I was developing Lobaco these kinds of stories were the inspirational fuel that kept me going as it seemed like even the most wacky ideas were securing funding and it was my fervent belief that, should I follow in their footsteps, I’d also reach some level of success. Of course 1 year and 1 failed Y Combinator application later I learnt that the road to success isn’t always paved in the same way for you as it is for others.
Indeed I vented my frustrations with all these positive stories, likening it to inspiration fatigue.
After coming to that realization I started trying to seek out the stories of failure, stories of people who were in situations like mine and what caused their ideas to fail. Such stories would provide me with a framework of what to avoid and what I should be doing that I’m not doing now, giving me a much better shot at achieving success. Trying to find such information amongst my feed reader proved to be quite fruitless, except for the tales of large companies in a long downward spiral of decline. This is to be expected however, as a failing start-up that’s only received seed or series A level funding doesn’t seem like much of a story since 90% of them fail anyway.
The Startup Genome project then was exactly what I was looking for as, when I first read about them, they were looking to gather information from both sides of the table. I’ll be honest though, I was sceptical that they’d ever come up with anything, figuring they were just another think tank that would use metrics that no one could reasonably be expected to apply in the real world. That all changed when I read their first report, especially their insights on premature scaling:
Since February we’ve amassed a dataset of over 3200 high growth technology startups. Our latest research found that the primary cause of failure is premature scaling, an affliction that 70% of startups in our dataset possess. The difference in performance between startups that scale prematurely and startups that scale properly is pretty striking. We found that:
– No startup that scaled prematurely passed the 100,000 user mark.
– 93% of startups that scale prematurely never break the $100k revenue per month threshold.
– Startups that scale properly grow about 20 times faster than startups that scale prematurely.
Russia’s space program has a reputation for sticking to ideas once they’ve got them right. Their Soyuz (pronounced sah-yooz) craft are a testament to this, having undergone 4 iterations since their initial inception but still sharing many of the base characteristics that were developed decades ago. The Soyuz family are also the longest serving series of spacecraft in history and, with only 2 fatal accidents in that time, they are widely regarded as the safest spacecraft around. It’s no wonder then that 2 of the Soyuz capsules remain permanently docked to the International Space Station to serve as escape pods in the event of a catastrophe, a testament to the confidence the space industry has in them.
Recent news however has brought other parts of the Russian space program into question. Last week saw a Proton-launched communications satellite end up in the wrong orbit when the upper orbital insertion module failed to guide it to the proper geostationary orbit. Then just this week another launch failed, this time a Progress craft bound for the ISS atop a Soyuz-U rocket, which crashed shortly after launch:
The robotic Progress 44 cargo ship blasted off atop a Soyuz U rocket at 9 a.m. EDT (1300 GMT) from the central Asian spaceport of Baikonur Cosmodrome in Kazakhstan and was due to arrive at the space station on Friday.
“Unfortunately, about 325 seconds into flight, shortly after the third stage was ignited, the vehicle commanded an engine shutdown due to an engine anomaly,” NASA station program manager Mike Suffredini told reporters today. “The vehicle impacted in the Altai region of the Russian Federation.”
Now an unmanned spacecraft failing after launch wouldn’t usually be so much of a problem (apart from investigating why it happened) but the reason this particular failure has everyone worried is the similarity between the human-carrying Soyuz capsule and the Progress cargo craft that was on top of the rocket. In essence they’re near identical craft, with the Progress having a fuel pod instead of a crew capsule, allowing it to refuel the ISS on orbit. A failure with the Progress craft therefore calls the Soyuz into question as well, especially when there have been 2 launches so close to each other that have experienced problems.
From a crew safety perspective however the Soyuz should still be considered a safe craft. If an event such as the one that happened this week had occurred with a Soyuz rather than a Progress on top of the stack, the crew would have been kept safe by the launch escape system that flies atop all manned Soyuz capsules. When a launch abort event occurs these rockets fire and pull the capsule safely away from the rest of the launch stack and, thanks to the Soyuz’s design, it can then descend back to earth on its usual ballistic trajectory. It’s not the softest of landings, but it’s easily survivable.
The loss of cargo bound for the ISS does mean that some difficult decisions have to be made. Whilst they’re not exactly strapped for supplies at the moment (current estimates give them a year of breathing room) the time required to do a full investigation into the failure does push other resupply and crew replacement missions back significantly. Russia currently has the only launch system capable of getting humans to and from the ISS and, since the Soyuz is only a 3 person craft, this presents the very real possibility that the ISS crew will be scaled back. Whilst I’m all aflutter for SpaceX, their manned flights aren’t expected to come online until the middle of the decade and they’re the most advanced option at this point. If the problems with the launch stack can be sorted expediently then the ISS may remain fully crewed, but only time will tell if this is the case.
The Soyuz and Progress series have proven to be some of the most reliable spacecraft developed to date and I have every confidence that Russia will be able to overcome these problems as they have done so in the past. Incidents like this demonstrate how badly commercialization of rudimentary space activities is required, especially when one of the former space powers doesn’t seem that interested in space anymore. Thankfully the developing private space industry is more than up to the challenge and we’re only a few short years away from these sorts of problems boiling down to switching manufacturers, rather than curtailing our efforts in space completely.
I often find myself deconstructing stories and ideas to find out what the key factors were in their success or failure. It’s the engineering training in me trying to identify which elements swing something one way or the other, hoping to apply (or remove) those traits from my own endeavors and so emulate the success stories. It follows then that I spend a fair amount of my time looking introspectively, analyzing my own ideas and experiences to see how future plans line up against my set of criteria for possible success. One of the patterns I’ve noticed from doing all this analysis is the prevalence of the idea that, should you fail at something, you’re automatically the one who did something wrong and it wasn’t the idea that was at fault.
Take for instance Tim Ferriss, author of the two self-help books The 4 Hour Work Week and The 4 Hour Body, who has undoubtedly helped thousands of people achieve goals they had never dreamed of attempting in the past. I’ve read both his books and whilst I believe there’s a lot of good stuff in there it’s also 50% horse shit, but that rule applies to any motivator or self-help purveyor. One of the underpinnings of his latest book is the slow carb diet, aimed at shedding layers of fat and oodles of weight in extremely short periods of time. I haven’t tried it since it doesn’t line up with my current goals (i.e. gaining weight) but those who have and didn’t experience the results got hit with this reply from the man himself:
The following will address 99%+ of confusion:
– If you have to ask, don’t eat it.
– If you haven’t had blood tests done, I don’t want to hear that the diet doesn’t work.
– If you aren’t measuring inches or haven’t measured bodyfat % with an accurate tool (BodPod, etc. and NOT bodyfat scales), I don’t want to hear that the diet doesn’t work.
– If you’re a woman and taking measurements within 10 days prior to menstruation (which I advise against in the book), I don’t want to hear about the lack of progress.
Whilst being a classic example of Wally Blocking¹ this also places all blame for failure on the end user, negating any possibility that the diet doesn’t work for everyone (and it really can’t, but that’s another story). However admitting that this diet isn’t for everyone would undermine its credibility, and those who experienced failure would, sometimes rightly, put the failure on the process rather than themselves.
Motivators aren’t the only ones who outright deny that there’s a failure in their process; the attitude is also rife among proponents of Agile development techniques. Whilst I might be coming around to some of the ideas, since I found I was already using them, it’s not uncommon to hear about those who’ve experimented with Agile and haven’t had a great deal of success. The response from Agile experts is usually that you’re doing it wrong and that your inability to adhere strictly to the Agile process is what led to your failure, not that Agile might not be appropriate for your particular product or team. Of course this is a logical fallacy, akin to the no true Scotsman idea, and doing the research would show you that Agile isn’t appropriate everywhere, with other methods producing great results.
In the end it all boils down to the fact that no process is perfect or appropriate for every situation. Blaming the end user may maintain the illusion that your process is beyond reproach but realistically you will eventually have to face hard evidence that you can’t design a one size fits all solution, especially for anything that will be used by a large number of people. For those of you who have tried a “guaranteed to succeed” process like those I’ve described above and failed, it would be worth your effort to see if the fault truly lies with you or whether the process simply wasn’t appropriate for what you were using it for, even if it was marketed to you as such.
¹I tried to find an online reference to this saying but can’t seem to find it anywhere. In essence Wally Blocking someone stems from the Wally character in Dilbert, who actively avoids doing any work possible. One of his tactics, when asked to do some piece of work, is to place an unnecessarily large prerequisite on getting the work done, usually on the person requesting it. This will usually result in either that person doing the work themselves or getting someone else to do it; thus Wally has blocked any potential work from coming his way.