The world of fusion is currently dominated by a single project: the International Thermonuclear Experimental Reactor (ITER). It is by far the biggest project ever undertaken in the field, aiming to create a plant capable of producing sustained bursts of 500 MW. Unfortunately, due to the nature of fusion and the numerous nations involved, the project is already a decade behind schedule, with conservative estimates having it come online sometime in 2027. Now this isn’t an area I’d usually consider ripe for private industry investment (it’s extremely risky and capital intensive) but it appears that a few start-ups are actually working in this area and the designs they’re coming up with are quite incredible.
There are two main schools of thinking in the world of fusion today: inertial confinement and magnetic confinement. The former attempts to achieve fusion using incredible amounts of pressure, enough that the resulting plasma is 100 times denser than lead. It was this type of fusion that reached a critical milestone late last year, with the NIF producing more energy from the reaction than was put into it. The latter is what will eventually power ITER which, whilst it has yet to demonstrate a real (non-extrapolated) Q value greater than 1, has had much of the basic science validated on it, providing the best basis from which to proceed. What these start-ups are working on is something in between these two schools of thinking which, potentially, could see fusion become commercially viable sooner rather than later.
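As an aside, the Q value mentioned above is just the ratio of fusion power out to heating power in. A quick Python sketch, using ITER’s publicly stated target of 500 MW of fusion power from 50 MW of plasma heating, makes the idea concrete:

```python
# Illustrative sketch only: the fusion gain factor Q is simply the ratio of
# fusion power released to the external heating power put into the plasma.
def fusion_gain(fusion_power_mw: float, heating_power_mw: float) -> float:
    """Q > 1 means the reaction released more energy than was injected."""
    return fusion_power_mw / heating_power_mw

# ITER's stated design target: 500 MW of fusion power from 50 MW of heating,
# i.e. a Q of 10.
print(fusion_gain(500, 50))  # -> 10.0
```

Anything below 1 means the reaction consumes more energy than it releases, which is why crossing that threshold was such a milestone for the NIF.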
The picture above is General Fusion’s Magnetized Target Fusion reactor, a new prototype that combines magnetic confinement with aspects of its inertial brethren. In the middle is a giant core of molten lead spinning fast enough to produce a hollowed out center (imagine it like an apple with the core removed). The initial plasma is generated outside this sphere and contained using a magnetic field, after which it’s injected into the core of the molten lead sphere. Then pistons on the outside of the sphere compress it rapidly, within a few millionths of a second, causing the plasma inside to undergo fusion reactions. The resulting heat can then be used in traditional power generators, much as it would be in other nuclear reactors.
The design has a lot of benefits, like the fact that the molten lead ball being used for containment doesn’t suffer from the neutron degradation that other designs typically do. From what I can tell though the design does have some rather hefty precision requirements, as the compression of the molten lead sphere needs to happen quickly and symmetrically. The previous prototypes I read about used explosives to do this, something which isn’t exactly sustainable (well, at least from my point of view anyway). Still, the experiments thus far haven’t disproved the theory, so it’s definitely a good area for research to continue in.
Whether these plucky upstarts in fusion will be able to deliver the dream faster than ITER is something I’m not entirely sure about. Fusion has been just decades away for the better part of a century now and, whilst there’s always the possibility these designs solve all the issues the others have, it could just as easily go the other way. Still it’s really exciting to see innovation in this space as I honestly thought the two leading schools of thought were basically it. So this is one of those occasions when I’m extraordinarily happy to be proven wrong and I hope they can dash my current skepticism again in the not too distant future.
The representation of climate change science in the media has, up until recently, been rather poor. Far too many engaged in debates and articles that gave the impression there were still two sides to the argument when in fact the overwhelming majority of evidence favours only one. The last few years have seen numerous campaigns to rectify this situation and, whilst we still haven’t convinced everyone of the real facts, it’s been great to see a reduction in the number of supposed “fair” debates on the topic. However if a recent study of the general population’s knowledge on this topic is anything to go by, lack of knowledge might not be the problem at all; it might just be the culture surrounding it.
A recent study by Professor Dan Kahan of Yale University sought to understand just how literate people were on general science as well as climate change science. The results are rather surprising (and ultimately disturbing): whilst you’d tend to think that a better general understanding of science would lead to a better understanding of the risks associated with climate change, the study shows that it isn’t a predictor at all. Indeed the strongest predictor was actually left-right political affiliation, with greater scientific knowledge actually increasing the divide between the two camps. This leads us to a rather ugly conclusion: educating people about the facts behind climate change is most likely not going to change their opinion of it.
Whilst the divide along party lines isn’t going to shock anyone, the fact that both sides of the political landscape are about as educated as each other on the topic was a big surprise to me. I had always thought it was more ignorance than anything else, as a lot of arguments I had had around climate change usually centered on the lack of scientific consensus. Had I dug further into their actual knowledge, though, it seems they may have been more knowledgeable than I’d first thought, even if the conclusions they drew from the evidence were out of touch with reality. This signals that we, as those interested in spreading the facts and evidence accepted by the wider scientific community, need to rephrase the debate from one of education to something else that transcends party lines.
What that solution would be though is something I just don’t have a good answer to. At an individual level I know I can usually convince most people of the facts if I’m given enough time with someone (heck up until 5 years ago I was on the other side of the debate myself) but the strategies I use there simply don’t scale to the broader population. Taking the politics out of an issue is no simple task, and one I’d wager has never been done successfully before, but until we find a way to break down the party lines on the issue of climate change I feel that meaningful progress will always be a goal that’s never met.
Venus is probably the most peculiar planet in our solar system. If you were observing it from far away you’d probably think it was a twin of Earth, and for the most part you’d be right, but we know that it’s nothing like the place we call home. Its atmosphere is a testament to the devastation that can be wrought by global warming, with the surface temperature exceeding 400 degrees. Venus is also the only planet that spins in the opposite (retrograde) direction to every other planet, a mystery that remains unsolved. Still, for all we know about our celestial sister there’s always more to be learned, and that’s where Venus Express comes in.
Launched back in 2005, the Venus Express mission took the platform developed for the Mars Express mission and tweaked it for observational use around Venus. Its primary mission was the long term observation of Venus’ atmosphere as well as some limited study of its surface (a rather difficult task considering Venus’ dense atmosphere). It arrived at Venus in early 2006 and has been sending data back ever since, with its primary mission extended several times. However the on board fuel reserves are beginning to run low, so the scientists controlling the craft proposed a daring idea: perform a controlled deep dive into the atmosphere to gather even more detailed information about Venus’ atmosphere.
Typically the Venus Express orbits around 250 km above Venus’ surface, a standard altitude for observational activities. The proposed dive however had the craft descending to below 150 km, an incredibly low altitude for any craft to attempt. To put it in perspective, the “boundary of space” (referred to as the Kármán line) is about 100 km above Earth’s surface, putting this craft not too far off that boundary. Considering that Venus’ atmosphere is far denser than Earth’s, the risks of diving that low increase dramatically, as the drag experienced at that altitude is far greater. Still, even with all those risks, the proposed dive went ahead last week.
The amazing thing about it? The craft survived.
The dive brought the craft down to a staggering 130 km above Venus’ surface, during which it saw some drastic changes in its operating environment. The atmospheric density increased a thousandfold between 160 km and 130 km, significantly increasing the drag on the spacecraft. This in turn led to the solar panels experiencing heating of over 100 degrees, enough to boil water on them. It spent about a month at various low altitudes before the mission team brought it back up out of the cloudy depths; its orbit will now slowly degrade over time before it re-enters the atmosphere one last time.
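To get a feel for why that thousandfold density increase matters: drag scales linearly with atmospheric density (F = ½ρv²C<sub>d</sub>A), so at a fixed speed the drag on the craft goes up a thousandfold too. A minimal sketch, where every number is a made-up placeholder rather than an actual mission figure:

```python
# Back-of-the-envelope look at why the dive was risky: aerodynamic drag
# scales linearly with atmospheric density (F = 1/2 * rho * v^2 * Cd * A),
# so a ~1000x jump in density at a fixed speed means a ~1000x jump in drag.
# Every number below is a hypothetical placeholder, not a mission value.
def drag_force(rho: float, v: float, cd: float, area: float) -> float:
    """Standard drag equation; returns force in newtons for SI inputs."""
    return 0.5 * rho * v ** 2 * cd * area

v = 7000.0            # assumed orbital speed, m/s
cd, area = 2.2, 10.0  # assumed drag coefficient and cross-section, m^2

drag_high = drag_force(1e-12, v, cd, area)  # density near 160 km (made up)
drag_low = drag_force(1e-9, v, cd, area)    # ~1000x denser near 130 km (made up)
print(round(drag_low / drag_high))  # -> 1000
```

Whatever the real densities were, the ratio is the point: a thousandfold denser atmosphere at the same speed drags on the craft a thousand times harder.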
It’s stuff like this that gets me excited about space and the science we can do in it. I mean we’ve got an almost decade old craft orbiting another planet and we purposefully plunged it down, just in the hopes that we’d get some better data. Not only did it manage to do that but it came back out the other side, still ready and raring to go. If that isn’t a testament to our engineering talents and orbital mechanics prowess then I don’t know what is.
Ever since I can remember my joints have always been prone to popping and cracking. It was the worst when I was a child as I couldn’t really sneak around anywhere without my ankles loudly announcing my presence, thwarting my attempt at whatever shenanigans I was up to. Soon after I discovered the joy of cracking my knuckles and most other joints in my body, much to the chagrin of those around me. However even though I was warned of health effects (which I’m pretty sure is bunk) I never looked up the actual mechanism behind the signature sound and honestly it’s actually quite interesting:
Interestingly though, whilst cavitation in the synovial fluid is one of the better explanations for where the sound originates, there are still other mechanisms which can cause similar audible effects. Rapid stretching of ligaments can also result in similar noises, usually due to tendons snapping from one position to another. Some sounds are also the result of less benign events, like the tearing of intra-articular adhesions, although that usually goes hand in hand with a not-so-minor injury to the joint.
There’s also been a little more investigation into the health effects of cracking your knuckles than the video alludes to. A recent study of 215 patients aged 50 to 89 showed that, regardless of how long a person had been cracking their knuckles, there was no relationship between cracking and osteoarthritis in those joints. Now this was a retrospective study (in terms of people telling the researchers how much they cracked their knuckles) so there’s potential for biases to slip in, but the researchers did use radiographs to determine whether patients had arthritis or not. There are no studies around other joints however, although I’d wager that the mechanisms, and thus their effects, are very similar throughout the body.
And now if you’ll excuse me I’ll be off to disgust my wife by cracking every joint in my body.
It’s been almost 6 years since I first began writing this blog. If you dare to trawl through the early archives there’s no doubt that the writing in there is of lower quality, much of it to do with me still trying to find my voice in this medium. Now, some 1,300+ posts later, my writing has improved dramatically and every day I feel far more confident in my ability to churn out a blog post that meets a certain quality threshold. I attribute much of that to my dedication to writing at least once a day, an activity which has seen me invest thousands of hours into improving my craft. Indeed I felt that this was something of an embodiment of the 10,000 hour rule at work, something that newly released research says isn’t the main factor at play.
The study, conducted by researchers at Princeton University (full text available here), attempted to discern just how much of an impact deliberate practice has on performance. They conducted a meta-analysis of 150 studies investigating the relationship between the two variables, classifying them by domain as well as by the methodology used to gather performance data. The results show that whilst deliberate practice can improve your performance within a certain domain (and which domain it’s in has a huge effect on how great the improvement is) it’s not the major contributor in any case. Indeed the vast majority of improvement is due to factors outside deliberate practice, which throws into question the idea that 10,000 hours of practice is the key component to mastering something.
To be clear, the research doesn’t mean that practice is worthless; indeed in pretty much every study conducted there’s a strong correlation between deliberate practice and increased performance. What this study does show is that there are factors outside deliberate practice which have a greater influence on whether or not your performance improves. Unfortunately determining what those factors are was outside the scope of the study (it’s only addressed in passing in the closing statements of the report) but there are still some interesting conclusions to be drawn about how one can go about improving.
Where deliberate practice does seem to help is with activities that have a predictable outcome. Indeed performance on routine activities shows a drastic improvement when deliberate practice is undertaken, whilst unpredictable ones, like aviation emergencies, show less improvement. We also seem to overestimate our own improvement due to practice alone, as studies that relied on people remembering past performances showed a much larger improvement than studies that logged performances over time. Additionally, for the areas which showed the least improvement from deliberate practice, it’s likely that there’s no good definition of “practice” within those domains, making it much harder to quantify what needs to be practiced.
So where does this leave us? Are we all doomed to be good at only the things which our nature defines for us, never able to improve on anything? As far as the research shows, no: deliberate practice might not be the magic cure-all for improvement but it is a great place to start. What we need to know now is what other factors play into improving performance within specific domains. For some areas this is already well defined (I can think of many examples in games) but for other domains that are slightly more nebulous in nature it’s entirely possible that we’ll never figure out the magic formula. Still, at least now you needn’t worry so much about the hours you put in, as long as you still, in fact, put them in.
Liquid nitrogen is a scientific staple that I’m sure we’re pretty much all familiar with. It’s a great demonstration of how wildly melting and boiling points can vary and, of course, everyone loves shattering a frozen banana or two. However seeing the other phases of elemental gases is typically impossible, as reaching the required temperatures is beyond the reach of most high school science labs. There is, though, a trick we can use to, in essence, coax nitrogen into forming a solid: reducing the pressure to a near vacuum. The results of doing so are just incredible, with the nitrogen behaving in some really peculiar ways:
The initial stages of the nitrogen transitioning into a solid are pretty standard, with the reduced pressure causing vigorous boiling that plunges the temperature of the remaining liquid. The initial freezing is also something many will be familiar with as it closely mimics what happens when water freezes (although lacking water’s peculiar property of expanding as it freezes). The sudden, and rather explosive, crystalline formation after that however took me by surprise as I’ve never really seen anything of that nature before. The closest thing I could think of was the fracturing of a Prince Rupert’s Drop, although the propagation of the nitrogen crystalline structure seems to be an order of magnitude or two slower than that.
What really got me about this video is that it wasn’t done by a science channel or vlogger, it’s done by a bunch of chefs. Liquid nitrogen has been used in various culinary activities for over a century, mostly due to its extreme low temperatures which form much smaller ice crystals in the food that it chills. It should come as no surprise really as there’s been a huge surge in the science behind cooking with the field of molecular gastronomy taking off in recent decades. It just goes to show that interesting science can be done almost anywhere you care to look and its applications are likely far more wide reaching than you’d first think.
In the beginning, the one where time itself began, the theory goes that matter and antimatter were created in equal amounts. When matter and antimatter meet they annihilate each other in a perfect transformation of matter into energy, which should have meant that our universe consisted of nothing else. However, for some strange reason, the universe has a small preference for matter over antimatter, to the tune of 1 part in 10 billion. This is why our universe is the way it is, filled with billions of galaxies and planets, with the only remnant of that cataclysmic creation being the cosmic microwave background that permeates our universe with bizarre consistency. The question of why our universe has a slight preference for matter has puzzled scientists for the better part of a century, although we’re honing in on an answer.
If you had the ability to see microwaves then the night sky would have a faint glow about it, one that was the same no matter which direction you looked in. This uniform background radiation is a relic of the early universe where matter and antimatter were continuously annihilating each other, leaving behind innumerable photons that now permeate every corner of the known universe. What’s rather perplexing is that we haven’t observed any primordial antimatter left over from the big bang, only the matter that makes up the observable universe. This lack of antimatter means that, for some reason or another, our universe has an asymmetry in it that has a preference for matter. Where this asymmetry lies though is still unknown but we’re slowly eliminating its hiding spots.
The Antihydrogen Laser Physics Apparatus (ALPHA) team at CERN has been conducting experiments with antimatter for some time now. They have been successfully capturing antiprotons for several years and have recently moved up to capturing antihydrogen atoms. Their approach is quite novel, as traditional means of capturing antimatter usually revolve around strong magnetic fields which limit what kinds of analysis you can do. ALPHA’s detector can transfer the antihydrogen away from the initial capture region to another one with a uniform electric field, allowing measurements to be performed on the atoms. Antihydrogen is electrically neutral, much like its twin hydrogen, so the field shouldn’t deflect it. The results show that antihydrogen has a charge consistent with zero, demonstrating that it shares this property with its regular matter brethren.
This might not sound like much of a revelation, however it was a potential spot for the universe’s asymmetry to pop up in. Had the charge of the antihydrogen atom been significantly different from that of hydrogen it would’ve been a clue as to the source of the universe’s preference for matter. We’ve found that not to be the case, so the asymmetry must exist somewhere else. While this doesn’t exactly tell us where it might be, it does rule out one possibility, which is about as good as it gets in modern science. There are still many more experiments to be done by the ALPHA team and I have no doubt they’ll be significant contributors to modelling just how similar matter and antimatter are.
Whilst I might tend towards nuclear being the best option to satisfy our power needs (fission for now, fusion for the future) I see little reason for us not to pursue renewable technologies. Solar and wind have both proven to be great sources of energy that, even at the micro scale, deliver great returns on investment. Even the more exotic forms of renewable energy, like wave power and biomass, have proven that they’re more than just another green dream. However the renewable technology which I believe has the most potential is concentrated solar thermal which, if engineered right, can produce power consistently over long periods of time.
Solar thermal isn’t a recent technology, with functioning plants operating in Spain since 2007. However compared to most other forms of power generation it’s still in its nascent stages, with numerous different approaches being trialled to figure out how best to set up and maintain a plant of this nature. This hasn’t stopped the plants from generating substantial amounts of power in the interim, with the largest capable of generating 392 MW. That might not sound like a lot when you compare it to some coal fuelled giants, but they do it without consuming any non-renewable fuel. What’s particularly exciting for me is that our own CSIRO is working on developing this technology and just passed a historic milestone.
The CSIRO maintains an Energy Centre up in Newcastle where it develops both energy efficient building designs and renewable energy systems. Among the numerous systems there (including a traditional photovoltaic system, a wind turbine and a gas fired microturbine) are two concentrating solar thermal towers capable of generating 500 kW and 1 MW respectively. The larger array recently generated supercritical steam at temperatures that could melt aluminium, an astonishing achievement. This means their generating turbines can operate far more efficiently than traditional subcritical designs, allowing them to generate more power. Whilst they admit they’re still a ways off a commercial implementation, the fact they achieved this with a small array is newsworthy in itself, as even the larger plants overseas haven’t reached that goal yet.
Looking at the designs on their website it seems their approach is along traditional solar thermal lines, with the steam created feeding directly into the turbine to generate electricity. This, of course, suffers from the age old problem that you only generate power while the sun is shining, limiting its effectiveness to certain parts of the day. The current solution is to use a heat storage medium, molten salts being the preferred option, to capture heat for later use. Thankfully it seems the CSIRO is investigating different heat storage mediums, including molten salts, to augment their solar thermal plant. I’m not sure if that would be directly compatible with their current setup (you usually heat the molten salts directly and then use them to generate steam down the line) but it’s good to see they’re considering all aspects of solar thermal power generation.
Considering just how much of Australia is barren desert bathed in the sun’s radiation, solar thermal seems like the smart choice for generating large amounts of power without the carbon footprint that typically comes along with it. The research work being done at the CSIRO and abroad means that this technology is not just an environmentalist’s dream; it’s a tangible product that is already proving to have solid returns on investment. If all goes well we might be seeing our first solar thermal plant sooner than you’d think, something I think all of us can get excited about.
It may seem like scientists spend an inordinate time studying water but there’s a pretty good reason for that. Water is fundamental to all forms of life on Earth so understanding its origins and what roles it plays is crucial to understanding how life came to be and where we might find it. The vast majority of Earth’s water is contained in its oceans which were thought to have formed when comets bombarded its surface, seeding them across the world. However recent research has shown that the oceans may have formed in a different way and that Earth may have much more water contained in it than previously thought.
A recent study by Steven Jacobsen and his team at Northwestern University has revealed that Earth has a subsurface reservoir that may contain 3 times the volume of the Earth’s surface oceans. They discovered this by using data from a wide variety of seismometers, those instruments that measure the intensity of the pressure waves of earthquakes, and figuring out how the waves travelled through the Earth’s interior. This is nothing new, it’s how we’ve previously figured out the rough composition of the Earth’s inner layers, however Jacobsen postulated that water bound up in ringwoodite would slow the waves. After testing a sample of ringwoodite to confirm this theory (shown above) his team found data supporting the existence of a large layer of ringwoodite in the Earth’s mantle. Whilst this isn’t a subsurface ocean like some heavenly bodies in our solar system have, it is a rather interesting discovery, one that supports an entirely different theory of how our surface oceans formed.
The initial hypothesis (at least the one I’m familiar with) is that the Earth bound itself together out of the varying bits of debris that remained after the sun had formed. At this point Earth was a ball of lava, a fiendishly unfriendly environment devoid of any kind of life. Then, as the planet cooled, comets rained down on its surface, supplying the vast amounts of water we see today. The discovery of this layer of ringwoodite on the other hand suggests that the water may have been present during the initial formation and that, instead of comets providing all the water, it seeped up from below, filling the crevices and crags of the Earth’s surface. It’s interesting because it links Earth more directly to our other celestial neighbours, even those you’d never consider Earth-like at all.
Jupiter’s moons Europa and Ganymede, for instance, are both hypothesized to have vast bodies of water under their surfaces. Up until this discovery you would be forgiven for thinking that their formation was shaped mostly by their immediate environment (I.E. that massive gas giant right next to them) however it’s more likely that all heavenly bodies form along a similar path. Thus oceans like ours are probably more likely than not for planets of similar size to ours. Of course there are also numerous other factors that can push things one way or another (see Mars and Venus for examples of Earth-like planets that are nothing like Earth) but such similarities really can’t be ignored.
In all honesty this discovery surprised me as I had always been a subscriber to the “comet bombardment” theory of Earth’s oceans. This evidence however points towards an origin story where water formed a core part of Earth’s structure, only to worm its way to the surface long after it cooled. Come to think of it this probably also explains (at least partially) how Earth’s atmosphere came into existence, the gases slowly seeping out until the planet was blanketed in carbon dioxide, only to be turned into the atmosphere we know today by plants. I’m keen to see what other insights can be gleaned from this data as I’m sure this isn’t the only thing Jacobsen’s team discovered.
Correction: My good friend Louise correctly pointed out that our atmosphere started off being almost completely carbon dioxide and only got the composition we know today thanks to plants. She also pointed out I used the wrong “it’s” in the title which, if I didn’t know any better, would say to me that she wants to be my copy editor.
The biggest challenge we face when exploring space is the almost incomprehensible distances we have to travel just to reach other heavenly bodies. The fastest craft we’ve ever launched, the New Horizons probe, will take approximately 9 years to reach Pluto and would still take tens of thousands of years to reach another star once that initial mission is complete. There are many ways of tackling this problem but even if we travelled as fast as the fastest thing known (light) there are still parts of our galaxy that would take thousands of years to reach. Thus if we want to expand our reach beyond our cosmic backyard we must find solutions that allow us to travel faster than the speed of light. One such solution that every sci-fi fan will be familiar with is the warp drive.
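The scale of the problem is easy to sketch with rough figures. Using New Horizons’ approximate launch speed and the distance to the nearest star (both rounded public figures, not precise mission data), the arithmetic looks like this:

```python
# Rough arithmetic behind the "tens of thousands of years" claim, using
# approximate public figures rather than precise mission data.
LIGHT_YEAR_KM = 9.461e12      # kilometres in one light year
probe_speed_kms = 16.26       # New Horizons' approximate speed at launch, km/s
proxima_distance_ly = 4.24    # approximate distance to the nearest star

seconds = proxima_distance_ly * LIGHT_YEAR_KM / probe_speed_kms
years = seconds / (365.25 * 24 * 3600)
print(f"{years:,.0f} years")  # roughly 78,000 years
```

Even at record-setting speeds, the nearest star is tens of thousands of years away; light itself would still take over four years.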
Now many will be familiar with the concept, a kind of space engine that allows a craft to travel faster than the speed of light, however fewer will know that it actually has roots in sound science. Essentially, whilst nothing can travel faster than light, space itself can expand at a rate faster than light travels, a property we have already observed. The trick, of course, is being able to manipulate space in such a way that it shrinks in front of you and expands behind you, something which requires a kind of exotic matter that, as of yet, has not been created nor observed. However if you watch the video above (and I highly recommend you do if you can spare the hour) you’ll see that there’s been some amazing progress in validating the science behind the warp drive model and it’s quite incredible.
For me the most amazing thing about the presentation was the use of a toroidal capacitor as a space warping device. The idea of a warp drive has long hinged on the notion that a new type of matter would be required to create the expanding and contracting regions of space. However White’s experiments are instead seeking to validate whether a positive energy density field could create the required negative pressure zone, negating the need to create exotic matter. As he states in the video the results are non-negative but not conclusive, so we don’t know if they’re creating a warp field yet, but further experimentation should show us one way or another. Of course I’m hoping for results in the positive direction as the other improvements White and his team made to the original Alcubierre design (reducing the energy required to sustain the field) mean that this could have many practical applications.
The video also goes on to talk about Q-Thrusters, or Quantum Vacuum Plasma Thrusters, which I’ve written about here previously. What I didn’t know was just how well those thrusters scale up with bigger power sources and, if their models are anything to go by, they could make many missions within our solar system very feasible, even for human exploration. Keen observers will note that a 2 MW power supply that comes in at 20 tons is likely to be some kind of fission reactor, something we’re going to have to adopt if we want to use this technology effectively. Indeed this is something I’ve advocated for in the past (in my armchair mission to Europa) but it’s a hurdle that will have to be overcome politically before the technology sees any further progress.
Still this is all incredibly exciting stuff and I can’t wait to hear further on how these technologies develop.