Space history of the past few decades is dominated by the Space Shuttle. Envisioned as a revolution in spaceflight, it was designed to launch numerous times per year, dramatically reducing the cost of access to space. The reality unfortunately fell short of the vision: the numerous design concessions made, coupled with the incredibly long average turnaround time between missions, meant that its costs far exceeded those of many alternative systems. Still it was an iconic craft, one that several generations will point to as the thing they remember about our trips beyond our atmosphere. What few people realise, though, is that the Shuttle could have had a Russian sister, and her name was Buran.
The Buran project started in 1974, only 5 or so years after NASA kicked off the Space Shuttle program. The goals of both projects were quite similar in nature, both aiming to develop a reusable craft that could deliver satellites, cosmonauts and other cargo into orbit. Indeed when you look at the resulting craft, one of which is shown above in its abandoned complex at the Baikonur Cosmodrome, the similarities are striking. It gets even more interesting when you compare their specifications, which are almost identical with only a meter or two of difference between them. Of course under the hood there are a lot of differences, especially when it comes to the primary purpose of the Buran launch system.
The propulsion system of the Buran differed significantly from the Shuttle's, with its Energia launcher using liquid-fuelled boosters (burning kerosene and liquid oxygen, with a liquid hydrogen/oxygen core stage) rather than solid rocket boosters. There are advantages to this, chief among them being able to shut the engines down once you start them (something solid rocket boosters can't do), however these boosters were not designed to be reusable, unlike their Shuttle counterparts. This meant that the only reusable part of the Buran launch system was the orbiter itself, which would increase the per-launch cost. Additionally the Buran included a fully autonomous flight control system from the get go, something the Shuttle only received during an upgrade later in its life.
That last part is somewhat telling of Buran's true purpose: whilst it could service non-military goals, it was primarily developed to serve the military interests of what was then the Soviet Union. Indeed the winged profile of the craft enables many mission profiles that are simply of no interest to non-military agencies, and having it fully autonomous from the get go shows it was meant more for conflict than research. Indeed, commenting on the programme's cancellation, a Russian cosmonaut noted that the Buran had no civilian tasks planned for it and, lacking military requirements to fuel the programme, it was cancelled.
That was not before it saw numerous test flights, including a successful orbital test flight. The achievements that Buran made during that single flight are not to be underestimated: it was the first craft of its kind to perform such a flight fully unmanned and to make a fully automated landing. That latter feat is even more impressive when you consider that there was a very strong crosswind, some 60 kilometers per hour, and it still managed to land mere meters off its intended mark. Had Russia continued development of the Buran shuttle there's every chance it would have remained a much more advanced craft than its American sister for a very long time.
Today however the Buran shuttles and their various test components lie scattered around the globe in varying states of disrepair and decay. Every so often rumours of a resurrection of the program surface, however it's been so long since the program was in operation that any revival would share only the name and little more. Russia's space program has, however, continued on to great success, their Soyuz craft becoming the backbone of many of humanity's endeavours in space. Whilst the Buran may never have become the icon for space that its sister Shuttle did, it remains the highly advanced concept that could have been, a testament to the ingenuity and capability of the Russian space program.
When it comes to exoplanets, the question I often hear asked is: why are they all largely the same? The answer lies in the methods we use for detecting exoplanets, one of the most successful of which is observing the gravitational pull that planets exert on their host stars. This method requires that planets make a full orbit around their parent star in order for us to detect them, which means that many go unnoticed, requiring observation times far beyond what we're currently capable of. However there are new methods which are beginning to bear fruit, with one of the most recent discoveries being a planet called 51 Eridani b.
Unlike most other exoplanets, whose presence is inferred from the data we gather on their parent star, 51 Eridani b is the smallest exoplanet we've ever imaged directly. Whilst we didn't get anything like the artist's impression above, it's still quite an achievement as planets are usually many orders of magnitude dimmer than their parent stars. This makes directly imaging them incredibly difficult, however this new method, which has been built into a device called the Gemini Planet Imager, allows us to directly image a certain type of exoplanet. The main advantage of this method is that it does not require a lengthy observation time to produce results, although like other methods it has its limitations.
The Gemini Planet Imager was built for the Gemini South telescope in Chile, the sister of the more famous Gemini North telescope in Hawaii. Essentially it's an extremely high contrast imager, one that's able to detect a planet that's one ten-millionth as bright as its parent star. Whilst this kind of sensitivity is impressive, even it can't detect Earth-like planets around a star similar to our sun. Instead the planets we're likely to detect are young Jupiter-like planets, still hot from their formation and thus far more luminous than a planet typically is. This is exactly what 51 Eridani b is: a fiery young planet orbiting a star that's about 5 times as bright as our own.
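For a sense of scale, astronomers usually express brightness ratios like this as magnitude differences, via Δm = −2.5·log₁₀(ratio). A quick sketch using the one ten-millionth figure from above (the ~10⁻¹⁰ Earth-like contrast is a rough ballpark assumption of mine, not a figure from the instrument team):

```python
import math

def contrast_to_delta_mag(contrast):
    """Convert a brightness ratio (planet/star) to a magnitude difference."""
    return -2.5 * math.log10(contrast)

# GPI's quoted sensitivity: a planet one ten-millionth as bright as its star
gpi_limit = contrast_to_delta_mag(1e-7)    # ~17.5 magnitudes fainter

# Rough contrast of an Earth around a sun-like star (assumed ballpark ~1e-10)
earth_like = contrast_to_delta_mag(1e-10)  # ~25 magnitudes fainter

print(f"GPI limit: {gpi_limit:.1f} mag; Earth-like target: {earth_like:.1f} mag")
```

That roughly 7.5-magnitude gap is why young, self-luminous giants are the realistic targets for now.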
Equally impressive is the technology behind the Gemini Planet Imager which enables it to directly image planets like this. The first part is a coronagraph, a specially designed device that blocks out the majority of the parent star's light. Behind that is a set of adaptive optics, essentially a set of tiny mirrors that make micro-adjustments to counteract atmospheric distortions. It has to do this since, unlike space based telescopes, there's a lot of turbulent air between us and the things we want to look at. These mirrors, deformable at the micro level using MEMS, are able to do this with incredible precision.
With the successful discovery of 51 Eridani b I'm sure further discoveries won't be far off. Whilst the Gemini Planet Imager might only be able to discover a certain type of planet, it does prove that the technology platform works. This means that improvements can be made, expanding its capabilities further. I have no doubt that future versions of this technology will be able to directly image smaller and smaller planets, one day culminating in a direct image of an Earth-like planet around a sun-like star. That, dear reader, will be a day for the history books, and it all began here with 51 Eridani b.
You'd think that long duration space travel was something of a solved problem, given the numerous astronauts who've spent multiple months aboard the International Space Station. For some aspects of space travel this is correct, but there are still many challenges facing astronauts who'd venture deeper into space. One of the biggest is radiation shielding: whilst we've been able to keep people alive in orbit, they're still under the protective shield of Earth's magnetic field. For those who go beyond that realm the dangers of radiation are very real, and currently we don't have a good solution for dealing with them. The solution to this problem could come out of research being done at CERN using a new type of superconducting material.
The material is magnesium diboride (MgB₂), which is currently being used as part of the LHC High Luminosity Cold Powering project. MgB₂ has the desirable property of having the highest critical temperature (the point below which it becomes superconducting) of any conventional superconducting material: some −234°C, or about 39 kelvin above absolute zero. Compared to other conventional superconductors this is a much easier temperature to work with, as others usually only become superconducting at around 11 kelvin above absolute zero. At the same time the material is relatively easy and inexpensive to produce, making it an ideal substance to investigate for use in other applications. In terms of applications in space, the superconductors team at CERN is working with the European Space Radiation Superconducting Shield (SR2S) project, which is looking at MgB₂ as a potential basis for a superconducting magnetic shield.
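Those temperatures are easier to compare on the absolute (kelvin) scale; a trivial conversion sketch using the figures above:

```python
def celsius_to_kelvin(t_c):
    """Convert degrees Celsius to kelvin (absolute zero = -273.15 degrees C)."""
    return t_c + 273.15

# MgB2's critical temperature, from the text
mgb2_tc = celsius_to_kelvin(-234)   # about 39 K above absolute zero

print(f"MgB2 becomes superconducting below roughly {mgb2_tc:.0f} K")
```

Nearly four times the headroom of a conventional superconductor at ~11 K makes a real difference to the cryogenics.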
Of the numerous solutions that have been proposed to protect astronauts from cosmic radiation during long duration space flight, a magnetic shield is one of the few that has shown promise. Essentially it would recreate the kind of magnetic field present on Earth, deflecting harmful cosmic rays away from the spacecraft. To generate a field large and strong enough to do this, however, we'd have to rely on superconductors, which introduces a lot of complexity. A MgB₂ based shield, with its higher critical temperature, could achieve the required field with far fewer demands on cooling and power, both of which are at a premium on spacecraft.
There's still a lot of research to go between now and a working prototype, however the research team at SR2S has a good roadmap for taking the technology from the lab to the real world. The coming months will focus on quantifying what kind of field they can produce with a prototype coil, demonstrating the kinds of results they can expect. From there it will be a matter of scaling it up and working out the parameters required for operation in space, like power draw and cooling requirements.
It's looking good for a first generation shield of this nature to be ready in time for the first long duration flights, something which is a necessity for those kinds of missions. Indeed I believe this research will pave the way for the numerous private space companies and spacefaring nations that have set their sights beyond Earth orbit.
Since its inception back in 1960 the Search for Extraterrestrial Intelligence (SETI) has scanned our skies looking for clues of intelligent life elsewhere in our universe. As you might have already guessed the search has yet to bear any fruit since, as far as we can tell, no one has been sending signals to us, at least not in the way we're listening for them. The various programs that make up the greater SETI effort aren't particularly well funded, often only getting a couple of hours at a time on any one radio telescope on which to make their observations. That's all set to change, however, as Russian business magnate Yuri Milner is going to inject an incredible $100 million into the program over 10 years.
SETI, for the unaware, is a number of different projects and experiments all designed to seek out extraterrestrial life through various means. Traditionally this has been done by scanning the sky for radio waves, looking for signals that are artificial in nature. Whilst the search has yet to find anything that would point towards a signal of intelligent origin, there have been numerous other signals found which, upon further investigation, have turned out to have natural sources. Other SETI programs have utilized optical telescopes to search for laser based communications, something we have actually begun investigating here on Earth recently. There are also numerous other, more niche programs under the SETI umbrella (like those looking for Dyson Spheres and other mega engineering projects), but they all share the common goal of answering the same question: are we alone?
Since these programs don't strictly advance science in any particular field they're not well funded at all, often only getting a handful of hours on telescopes per year. This means that, even though such a search is likely to prove difficult and fruitless for quite a long time, we're only looking for a small fraction of each year. The new funds from Yuri Milner will bolster the observation time substantially, allowing continuous observations for extended periods. This will increase the chances of finding something whilst also providing troves of data that will be useful for other scientific research.
As Yuri says, whilst we're not expecting this increased funding to instantly result in a detection event, the processes we'll develop along the way, as well as the data we gather, will teach us a lot about the search itself. The more we try, the more we'll understand which methods haven't proved fruitful, narrowing down the search areas left to investigate. The science fiction fan in me still hopes that we'll find something, just a skerrick, that shows there's some other life out there. I know we likely won't find anything for decades, maybe centuries, but that hope of finding something out there is what's driving this program forward.
It’s been a decade in the making but today, after such a long wait, we can now see Pluto and Charon for what they are.
And they’re absolutely stunning.
The image on the left is the high resolution image taken by the LORRI camera a few days before New Horizons' closest approach (which you've undoubtedly seen already), with the one on the right being a recently released image of Charon. Neither of these images is the sharpest available; for both Pluto and Charon we have images with up to 10 times the resolution streaming back to us right now, but they are already proving to be fruitful grounds for science. Indeed these two images have already given us insights into other celestial bodies within our solar system. Of course the most interesting thing about these pictures is what they reveal about Pluto and Charon themselves, and the insights are many.
The biggest surprise is just how “young” the surfaces of both Pluto and Charon are, devoid of the impact craters that are commonplace on celestial bodies that lack an atmosphere. What this means is that both surfaces have been geologically active in the recent past, on the order of some 100 million years ago or less. There's even a chance that they're geologically active today, and if they are, it means our current theories about the mechanism behind such activity aren't complete: there must be another way for a planet's surface to refresh itself.
You see, current thinking is that for an icy moon or planet to churn its surface over on a regular basis, an outside force has to be acting on it. This is based on the current set of icy moons orbiting our two gas giants, whose giant gravitational fields bend and warp those moons' surfaces as they orbit. Neither Charon nor Pluto has the mass required to induce stresses of that magnitude, however their surfaces are still as geologically young as any of the other ice moons'. So there must be another mechanism in action here, one that allows even small icy planets and moons to refresh their surfaces on a continual basis. As to what this mechanism is we're not yet sure, but in the coming months I'm sure the scientists at NASA will have some amazing theories about how it works.
The most striking feature of Pluto is the heart, which has been tentatively dubbed Tombaugh Regio after Pluto's discoverer. It consists of 2 different lobes, the one on the left noticeably smoother than the one on the right. It is currently theorized that the left lobe is a giant impact crater that was then filled with nitrogen snow (Pluto's surface is 98% frozen nitrogen). Considering the resolution of the images we'll have access to soon, I'm sure there will be more than enough information to figure out the heart's origin and any other surprising things about Pluto's surface.
Charon on the other hand appears to be littered with giant canyons, many of them several miles deep. It's possible that whatever is responsible for the young surface of Charon is also responsible for these giant canyons, something we'll have to wait for the high resolution images to figure out. Also of note is the giant dark patch on Charon's polar region, which is thought to be a thin deposit of dark material with a sharp geological feature underneath it. As to what that is exactly we're not sure, but the next few months will likely reveal its secrets to us.
These two images alone are incredible, showing us worlds that were simply blurs of different coloured light for almost a century. We most certainly don’t have the full picture yet, the data that New Horizons has will take months to get back to us, but they’ve already provided valuable insight into Pluto, Charon and the solar system in which we live. I can’t wait to see what else we discover as it’s bound to shake up our understanding of the universe once again.
There are numerous risks that spacecraft face when traversing the deep black of space. Since we've sent many probes to many locations most of these risks are well known, and thus we've built systems to accommodate them. Most craft carry fully redundant main systems, ensuring that if the primary fails the backup can carry on the task the probe was designed to do. The systems themselves are also built to withstand the torturous conditions that space throws at them, ensuring that even a single piece of hardware has a pretty good chance of surviving its journey. However sometimes even all that engineering can't account for what happens out there, and yesterday that happened to New Horizons.
New Horizons is a NASA-led mission that will be the first robotic probe to make a close approach to Pluto. Its primary mission is to capture the most detailed view of Pluto yet, generating vast amounts of data about the dwarf planet. Unlike many similar missions, New Horizons won't be entering Pluto's orbit; instead it will capture as much data as it can as it whips past Pluto at a blistering 14 km/s. It will then set its sights on one of the numerous Kuiper Belt objects, where it will do the same. This mission has been a long time in the making, having launched in early 2006, and is scheduled to “arrive” at Pluto in the next 10 days.
However, just yesterday, the craft entered safe mode.
What caused this to happen is not yet known, however one good piece of news is that the craft is still contactable and operating within expected parameters for an event of this nature. Essentially the primary computer sensed a fault and, as it is programmed to do in this situation, switched over to the backup system and put the probe into safe mode. Whilst NASA engineers have received some information as to what the fault might be, they have opted to do further diagnostics before switching the probe back onto its primary systems. This means that science activities scheduled for the next few days will likely be delayed whilst this troubleshooting process occurs. Thankfully there were only a few images scheduled to be taken and there should be ample time to get the probe running before its closest approach to Pluto.
The potential causes behind an event of this nature are numerous, but since the probe is acting as expected in such a situation it is most likely recoverable. My gut feeling is that it might have been a cosmic ray flipping a bit, something which the processors in probes like New Horizons are designed to detect. As more data trickles back down (a signal takes around 4.5 hours to reach New Horizons, so a round trip takes 9 hours) we'll know for sure what caused the problem and what the time frame will be to recover.
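That delay is simply the distance divided by the speed of light; a quick sketch (the ~32 AU figure for Pluto's distance at the time is my own ballpark, not from mission telemetry):

```python
AU_M = 1.495978707e11   # metres per astronomical unit
C_MS = 2.99792458e8     # speed of light in m/s

def one_way_light_time_hours(distance_au):
    """One-way signal travel time to a probe at the given distance."""
    return distance_au * AU_M / C_MS / 3600

t = one_way_light_time_hours(32)   # New Horizons was roughly 32 AU away
print(f"One-way: {t:.1f} h, round trip: {2 * t:.1f} h")
```

Every command-and-response cycle with the probe therefore eats the better part of a working day, which is why diagnosing a fault like this takes so long.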
Events like this aren’t uncommon, nor are they unexpected, but having one this close to the mission’s ultimate goal, especially after the long wait to get there, is sure to be causing some heartache for the engineers at NASA. New Horizons will only have a very limited opportunity to do the high resolution mapping that it was built to do and events like these just up the pressure on everyone to make sure that the craft delivers as expected. I have every confidence that the team at NASA will get everything in order in no time at all however I’m sure there’s going to be some late nights for them in the next few days.
Godspeed, New Horizons.
It seems somewhat trite to say it but rocket science is hard. Ask anyone who lived near a NASA testing site back in the heyday of the space program and they'll regale you with stories of numerous rockets thundering skyward only to meet their fate shortly after. There is no universal reason behind rockets exploding, as there are so many components whose failure leads to a rapid, unplanned deconstruction event. The only universal truth behind sending things into orbit atop a giant continuous explosion is that one day one of your rockets will blow itself to bits. Today that happened to SpaceX.
The CRS-7 mission was SpaceX's 7th commercial resupply mission to the International Space Station, its primary payload consisting of around 1,800 kg of supplies and equipment. The most important piece of cargo was the International Docking Adapter (IDA-1), which would have been used to convert one of the current Pressurized Mating Adapters to the new NASA Docking System. This would have allowed resupply craft such as the Dragon capsule to dock directly with the ISS rather than being grappled and berthed, a method that is far from ideal, especially for crew egress in an emergency. Other payloads included the Meteor Shower Camera, which was actually a backup unit as the primary was lost in the Antares rocket explosion of last year.
Elon Musk tweeted shortly after the incident that the cause appears to be an overpressure event in the upper stage LOX tank. Watching the video you can see what he's alluding to: shortly after take off there appears to be a rupture in the upper tank which leads to the massive cloud of gas enveloping the rocket. The event happened shortly after the rocket passed max-q, the point at which the aerodynamic stresses on the craft reach their maximum. It's possible that the combination of a high pressure event coinciding with max-q was enough to rupture the tank, leading to the rocket's demise. SpaceX is still continuing its investigation, however, and we'll have a full picture once they conduct a full fault analysis.
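Max-q exists because dynamic pressure, q = ½ρv², is a product of two competing trends: air density falls as the rocket climbs while velocity keeps building. A toy sketch showing where q peaks (the exponential atmosphere and the linear velocity profile are illustrative assumptions of mine, not Falcon 9 telemetry):

```python
import math

def dynamic_pressure(altitude_m, velocity_ms, rho0=1.225, scale_height=8500.0):
    """q = 0.5 * rho * v^2, with a simple exponential atmosphere model."""
    rho = rho0 * math.exp(-altitude_m / scale_height)
    return 0.5 * rho * velocity_ms ** 2

# Toy ascent profile: velocity assumed to grow linearly with altitude
best = max(range(0, 30000, 250),
           key=lambda h: dynamic_pressure(h, velocity_ms=0.04 * h + 50))
print(f"q peaks around {best / 1000:.1f} km in this toy model")
```

Real vehicles throttle down around this point precisely because structural loads are at their worst there, which is why a tank rupture near max-q is so plausible a failure path.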
A few keen observers have noted that unlike other rocket failures, which usually end in a rather spectacular fireball, it appears that the payload capsule may have survived. The press conference held shortly after made mention of telemetry data being received for some time after the explosion had occurred which would indicate that the capsule did manage to survive. However it’s unlikely that the payload would be retrievable as no one has mentioned seeing parachutes after the explosion happened. It would be a great boon to the few secondary payloads if they were able to be recovered but I’m certain none of them are holding their breath.
This marks the first failed launch in 19 flights for SpaceX's Falcon 9 program, a milestone I'm sure none were hoping they'd mark. Putting that in perspective though, this is a 13 year old space company that's managed to do things that took its competitors decades. I'm sure the investigations currently underway will identify the cause in short order and future flights will not suffer the same fate. My heart goes out to all the engineers at SpaceX during this time, as it cannot be easy picking through the debris of your flagship rocket.
Outside of Earth, Europa is probably the best place for life as we know it to develop. Beneath the radiation soaked exterior, an ice layer that could be up to 20 km thick, lies a vast ocean that stretches deep into Europa's interior. This internal ocean, though bereft of any light, could very well harbor the right conditions to support the development of complex life. However if we're ever going to entertain the idea of exploring the depths of that vast, dark place, we'll first need a lot more data on Europa itself. Last week NASA greenlit the Europa Clipper mission, which will do just that, slated for some time in the 2020s.
Exploration of Europa has been relatively sparse; the most recent mission to image it was the New Horizons probe during its Jupiter flyby on its path to Pluto. Indeed the majority of missions that have imaged Europa have been flybys, the only long duration mission being the Galileo probe, which orbited Jupiter for 8 years and made numerous flybys of Europa. The Europa Clipper mission would be quite similar in nature, with the craft conducting multiple flybys rather than staying in orbit. The mission would involve a multi-year journey to our Jovian brother and no fewer than 45 flybys of Europa once it arrived.
It might seem odd that an observation mission would opt for numerous flybys rather than a continuous orbit, however there are multiple reasons for this. For starters, Jupiter has a powerful radiation belt that stretches some 700,000 km out from the planet, enveloping Europa. This means the lifetime of any craft that dares to orbit within it is usually somewhat limited; had NASA opted for an orbital mission rather than flybys, the craft's expected lifetime wouldn't have been much more than a month or so. Strictly speaking this might not be too much of an issue, as you can make a lot of observations in a month, however the real challenge comes from getting that data back down to Earth.
Deep space robotic probes can often capture far more information than they're able to send back in real time, so they store data locally and transmit it over a longer period. If the Europa Clipper were orbital it would only have around 30 days in which to send back information, not nearly enough for the volumes of data that modern probes generate. The flybys, though, give the probe more than enough time to dump all of its data back to Earth whilst it's coasting outside Jupiter's harsh radiation belts, ensuring that all the data gathered is returned safely.
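The trade-off is easy to see with a back-of-the-envelope downlink budget (the data volume, bit rate and pass length below are illustrative assumptions of mine, not Europa Clipper specifications):

```python
def downlink_days(data_gigabits, rate_kbps, hours_per_day=8):
    """Days needed to return a given data volume at a given downlink rate."""
    seconds = data_gigabits * 1e9 / (rate_kbps * 1e3)
    return seconds / (hours_per_day * 3600)

# Suppose a single flyby captures 100 Gb and the deep-space link runs at 100 kb/s
days = downlink_days(100, 100)
print(f"Roughly {days:.0f} days of 8-hour passes to return one flyby's data")
```

At numbers anywhere near these, a craft with only a month to live inside the radiation belts would die long before its observations made it home; flybys with long coast phases sidestep the problem entirely.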
Hopefully the data this craft brings back will pave the way for a potential mission to the surface sometime in the future. Europa has so much potential for harboring life that we simply must investigate it, and the data gleaned from the Europa Clipper mission will provide the basis for a future landing mission. Of course such a mission is likely decades away, however I, and many others, believe that a mission to poke beneath the surface of Europa is the best chance we have of finding alien life. Even if we don't find it, the search will provide valuable insight into the conditions needed to form life and will help point our future searches.
Your garden variety telescope is usually what’s called a refracting telescope, one that uses a series of lenses to enlarge far away objects for your viewing pleasure. For backyard astronomy they work quite well, often providing a great view of our nearby celestial objects, however for scientific observations they’re usually not as desirable. Instead most large scientific telescopes use what’s called a reflecting telescope which utilizes a large mirror which then reflects the image onto a sensor for capture. The larger the mirror the bigger and more detailed picture you can capture, however bigger mirrors come with their own challenges especially when you want to launch them into space. Thus researchers are always looking for novel ways to create a mirror and one potential avenue that NASA is pursuing is, put simply, a little fabulous.
One method that many large telescopes use to get around the problem of creating huge mirrors is to use numerous smaller ones. This does introduce some additional complexity, like needing to make sure all the mirrors align properly to produce a coherent image on the sensor, however it also comes with added benefits, like being able to eliminate distortions created by the atmosphere. NASA's new idea takes this to an extreme, replacing the mirror with a cloud of glitter-like particles held in place with lasers. Each of those particles then acts like a tiny mirror, much like their larger counterparts. Then, on the sensor side, software is being developed to turn the resulting kaleidoscope of colours back into a coherent image.
Compared to traditional telescope mirrors, especially space based ones like Hubble's, this has the potential to significantly reduce weight whilst dramatically increasing the size of the mirror we can use. The bigger the mirror, the more light can be captured and analysed, and a mirror made from this cloud of particles could be many times larger than its current counterparts. The current test apparatus (shown above) uses a traditional lens covered in glitter, which was used to validate the concept with 2 simulated “stars” shining through it. Whilst the current incarnation needed multiple exposures and a lot of image processing to create the final image, it does show that the concept could work; however it requires much more investigation before it can be used for real observations.
A potential mission to verify the technology in space would use a small satellite with a prototype cloud no bigger than a bottle cap. This would be primarily aimed at verifying that the cloud could be deployed and manipulated in space as designed and, if that proved successful, they could then move on to capturing images. Whilst there doesn't appear to be a strict timeline for that yet, this concept, called Orbiting Rainbows, is part of the NASA Innovative Advanced Concepts program, so research on the idea will likely continue for some time to come. Whether it will result in an actual telescope is anyone's guess, but the technology does show incredible promise.
The Rosetta mission's journey to comet 67P/Churyumov–Gerasimenko spanned some 10 years, nearly all of it spent idling through space as the craft performed the numerous gravity assists required to get up the required speed. The mere 60 hours that the Philae lander, the roughly cubic-meter-sized daughter craft of the Rosetta orbiter, operated on the surface seemed almost insignificant by comparison, but thankfully it was able to return some data before it went dead. There was some speculation that, maybe, once the comet got close enough to the sun the lander would have enough power to come back online and resume its activities. Chances were slim though, as it had landed in a high walled crater that blocked much of the sunlight from reaching it.
However, just under 12 hours ago, Philae made contact with Rosetta.
To say that the chances of Philae waking up were slim is putting it lightly, given the trials and tribulations it went through during its landing attempt. In the extremely weak gravity field of its parent comet the 100 kg lander weighs the equivalent of a mere gram, meaning the slightest push could send it tumbling across the surface or, even worse, out into space. This wouldn't have been an issue had Philae's anchoring harpoons fired, but they unfortunately failed, leaving it no way to hang onto the surface. Thankfully it seems that an outgassing event hasn't blown our little lander away and, after the Rosetta craft turned on its receiver to listen for it, we've finally made contact with Philae.
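That "weighs a gram" figure falls straight out of Newton's second law; a quick sketch (the ~10⁻⁴ m/s² surface gravity for 67P is a commonly quoted ballpark that I'm assuming here, not a number from the mission team):

```python
G_EARTH = 9.81    # Earth surface gravity, m/s^2
G_COMET = 1e-4    # rough surface gravity of comet 67P, m/s^2 (assumed)

def equivalent_earth_mass_grams(mass_kg, local_g):
    """Mass whose weight on Earth equals this object's weight locally."""
    weight_newtons = mass_kg * local_g
    return weight_newtons / G_EARTH * 1000

philae = equivalent_earth_mass_grams(100, G_COMET)
print(f"Philae presses on the comet with the Earth-weight of ~{philae:.1f} g")
```

With so little holding it down, even the recoil from its own instruments was a real hazard, hence the (failed) harpoons.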
Reestablishing contact with Philae is a boon to the Rosetta mission, as the lander contains a wealth of data that we could not retrieve when it was last active due to time constraints. After the initial burst of 300 packets that the ESA was able to retrieve during this first contact, there are still some 8,000 packets left to collect. These will provide some great insight into what happened to the lander during its dark period and what it's been up to since it woke up. Early indications are that Philae has actually been awake for some time; it was just unable to make contact with the Rosetta probe for whatever reason. We'll likely know a lot more as the ESA team gets more time to analyze the data.
This also doesn't appear to be a spurious occurrence, as the telemetry data indicates that Philae is operating at a balmy −35°C and generating some 24 watts of power from its solar panels. Considering that its panels were rated for 32 watts at 3 AU from the sun (the comet is at 1.4 AU as of writing), that's not bad given that it's sitting in something of a crater which limits its sun exposure dramatically. This figure can only be expected to increase as time goes on, meaning Philae will likely be able to keep transmitting data and continue the experiments it was unable to do previously. One such example is drilling into the surface of its parent comet, something which was attempted previously but didn't prove successful.
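Solar flux falls off with the square of distance from the sun, so those two figures from the text (a 32 W rating at 3 AU, with the comet now at 1.4 AU) can be compared directly; a sketch of the inverse-square scaling:

```python
def scale_solar_power(rated_watts, rated_au, current_au):
    """Inverse-square scaling of available solar power with distance."""
    return rated_watts * (rated_au / current_au) ** 2

# Panels rated at 32 W at 3 AU, comet currently at 1.4 AU (figures from the text)
available = scale_solar_power(32, 3.0, 1.4)
print(f"Full-sun potential at 1.4 AU: ~{available:.0f} W")
```

In full sun the panels could in principle see something like 147 W at this distance, so generating only 24 W is consistent with the lander being heavily shadowed in its crater, and with output climbing as the comet nears perihelion.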
Spacecraft coming back from the dead like this is a rare occurrence, and it's an absolute joy to hear that Philae has awoken from its 7 month slumber. Its brief 60 hour mission will hopefully now be extended several times over, allowing us to conduct the full array of experiments and gather valuable data. What insights it will dredge up is anyone's guess, but suffice it to say that Philae's reawakening is a boon to both the ESA and the greater science community at large.