Beyond the LHC: AWAKE.

The Large Hadron Collider has proven to be the boon to particle physics that everyone imagined it would be, and it's far from done yet. We'll likely be getting great data out of the LHC for a couple of decades to come, especially with the current and future upgrades that are planned. However it has its limits and, considering how long the LHC took to build, many are already looking towards what will replace it when the time comes. The trouble is that current colliders like the LHC can only get more powerful by getting bigger, something the LHC already struggles with at 27 km in circumference. There are alternatives to current particle acceleration technologies though, and one of them is set to be trialled at CERN next year.


The experiment is called AWAKE and was approved by the CERN board back in 2013; recently it was granted additional funding to pursue its goal. At its core AWAKE is a fundamentally different approach to particle acceleration, one that could dramatically reduce the size of accelerators. It won't be the first accelerator of this type ever built, proof-of-concept machines already exist at over a dozen facilities around the world, but it will be the first time CERN has experimented with the technology. All going well, the experiment is slated to see first light with its proof-of-concept device towards the end of next year.

Traditional particle colliders use alternating electric fields to propel particles forward, much like a rail gun does with magnetic fields. Such fields place a lot of engineering constraints on the accelerating structures: stronger fields require more energy and can cause arcing if driven too high. To get around this, particle accelerators typically favour length over field strength, giving the particles much longer to accelerate before collision. AWAKE, however, works on a different principle: plasma wakefield acceleration.

In a wakefield accelerator, instead of being directly accelerated by an applied electric field, particles are injected into a specially prepared plasma. First a drive beam of charged particles, or a laser pulse, is sent through the plasma. This sets off an oscillation within the plasma, creating alternating regions of positive and negative charge. When electrons are then injected into this oscillating plasma they're accelerated, chasing the positive regions which are rapidly collapsing and reforming in front of them. In essence the electrons surf the oscillating wave, allowing them to reach much greater energies over a much shorter distance. The AWAKE project has a great animation of the experiment here.
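
To get a sense of why this allows a much shorter machine, a standard back-of-the-envelope figure for the maximum field a plasma wave can sustain (the cold wave-breaking limit) depends only on the plasma's electron density; the density plugged in below is purely illustrative rather than a figure quoted for AWAKE:

```latex
E_{\mathrm{max}} \approx \frac{m_e c\,\omega_p}{e}
  \simeq 96\,\sqrt{n_e\ [\mathrm{cm^{-3}}]}\ \mathrm{V/m},
\qquad
\omega_p = \sqrt{\frac{n_e e^{2}}{\varepsilon_0 m_e}}
```

For a plasma density of order 10^15 cm^-3 that works out to gradients of a few GV/m, versus the tens of MV/m that conventional RF cavities manage before arcing becomes a problem, which is the whole appeal of the approach.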

The results of this experiment will be key to the construction of future accelerators, as there's only so much further we can go with current technology. Wakefield-based accelerators have the potential to push us beyond current collision energy limits, opening up the possibility of probing physics beyond our current Standard Model. Such information is key to understanding our universe as it stands today; there is so much beauty and knowledge still out there, just waiting for us to discover it.


The Chemistry of the Volkswagen Scandal.

The Volkswagen emissions scandal is one of the most egregious acts of deceptive engineering we've seen in a long time. Whilst the full story of how and why it came about won't be known for some time, the realities of it are already starting to become apparent. What really intrigued me, however, isn't so much the drama that has arisen out of this scandal but the engineering and science I've had to familiarize myself with to understand just what was going on. As it turns out there's some quite interesting chemistry at work here and, potentially, Volkswagen have shot themselves in the foot simply because they didn't want to use too much of a particular additive:

The additive in question is called AdBlue and it's cheap ($1/litre seems pretty common) compared to the other fluids that most modern cars require. The problem Volkswagen appears to have faced was that they didn't have the time or resources required to retrofit certain models with the system once it became apparent that they couldn't meet emissions standards. Why they chose to defeat the emissions testing instead of simply delaying a model release (a bad, but much better, situation than the one they currently find themselves in) is something we probably won't know for a while.
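
For context (the article doesn't spell out the chemistry), AdBlue is an aqueous urea solution injected into the exhaust stream for selective catalytic reduction (SCR) of nitrogen oxides. The core reactions, in simplified form, are:

```latex
\mathrm{(NH_2)_2CO + H_2O \;\longrightarrow\; 2\,NH_3 + CO_2}
\qquad \text{(urea breaks down to ammonia in the hot exhaust)}

\mathrm{4\,NH_3 + 4\,NO + O_2 \;\longrightarrow\; 4\,N_2 + 6\,H_2O}
\qquad \text{(ammonia reduces the NO over the catalyst to nitrogen and water)}
```

Skimp on the dosing, or on the hardware that injects it, and the nitrogen oxides simply pass straight out of the tailpipe, which is essentially the shortfall the defeat device was hiding.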

Regardless, it was an interesting aside to the larger scandal, as I wasn't familiar with this kind of diesel technology previously. Indeed, now that I understand it the scandal seems all the more absurd: the additive is cheap, the technology is well known and it has been implemented successfully in many other vehicles. Still, it introduced me to some interesting engineering and science that I wasn't privy to before, so there is that at least.


Researchers Create Long Term Memory Encoding Prosthesis.

The brain is still largely a mystery to modern science. Whilst we've mapped out the majority of which parts do what, we're still in the dark about how they manage to accomplish their various feats. Primarily this is a function of the brain's inherent complexity: some 100 trillion connections between the billions of neurons that make up its meagre mass. However, like most seemingly insurmountable problems, deciphering the brain's functions is made easier by looking at smaller parts of it. Researchers at the USC Viterbi School of Engineering and the Wake Forest Baptist Medical Center have been doing just that and have been able to recreate a critical part of the brain's functionality in hardware.


The researchers have recreated part of the function of the hippocampus, the region of the brain responsible for translating sensory input into long term memories. In patients who suffer from diseases like Alzheimer's this is usually the first part to be damaged, preventing them from forming new memories (but leaving old ones unaffected). The device they have created can essentially stand in for part of the hippocampus, performing the same encoding functions that an undamaged section would provide. Such a device has the potential to drastically increase the quality of life of many people, enabling them to once again form new memories.

The device comes out of decades of research into how the brain processes sensory input into long term memories. The researchers initially tested it on laboratory animals, implanting the device into healthy subjects. They then recorded the input and output of the hippocampus, showing how signals were translated for long term storage. This data was used to build a model of that section of the hippocampus, allowing the prosthesis to take over the job of encoding those signals. Previous research showed that, even when an animal's long term memory function was impaired with drugs, the prosthesis was able to generate new memories.
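
As a purely illustrative sketch of that record-then-model approach (the real work uses far more sophisticated multi-input multi-output nonlinear models of spiking activity; the data and variable names here are invented for the example), the core idea is to learn a mapping from the hippocampus's input signals to its output signals and then replay that mapping in place of a damaged section:

```python
import numpy as np

# Toy stand-in for recorded data: binned spike counts from the "input" side
# of the hippocampus and the "output" side, recorded simultaneously in a
# healthy subject.
rng = np.random.default_rng(0)
n_bins, n_in, n_out = 5000, 16, 8
input_activity = rng.poisson(2.0, size=(n_bins, n_in)).astype(float)
true_mapping = rng.normal(size=(n_in, n_out))
output_activity = input_activity @ true_mapping + rng.normal(scale=0.5, size=(n_bins, n_out))

# Fit a simple linear encoding model: output ~ input @ W.
# (The actual prosthesis uses nonlinear dynamical models fitted to spike
# trains; least squares is just the simplest analogue of "learn the mapping".)
W, *_ = np.linalg.lstsq(input_activity, output_activity, rcond=None)

# With the downstream section damaged, the prosthesis would read the input
# activity and synthesise the output it predicts that section should produce.
new_input = rng.poisson(2.0, size=(1, n_in)).astype(float)
predicted_output = new_input @ W
print(predicted_output.round(2))
```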

That in and of itself is impressive, however the researchers have since been replicating their work with human patients. Using nine test subjects, all of whom already had electrodes implanted in the relevant regions to treat chronic seizures, the researchers used the same process to develop a human model. Whilst they haven't yet used it to help create new memories in humans, they have shown that their model reproduces the hippocampus's output signals in around 90% of cases. For patients who currently have no ability to form new long term memories this could very well be enough to drastically improve their quality of life.

This research has vast potential, as there are many parts of the brain that could be mapped in the same way. The hippocampus is critical in the formation of non-procedural long term memories, however there are other sections, like the motor and visual cortices, which could benefit from similar mapping. There's every chance those sections can't be mapped directly like this, but it's definitely an area of potentially fruitful research. Indeed, whilst we still don't know how the brain stores information, we might be able to repair the mechanisms that feed it, and that could help a lot of people.


Light Based Memory Paves the Way for Optical Computing.

Computing as we know it today is all thanks to one plucky little component: the transistor. This simple piece of technology, essentially an on/off switch that can be electronically controlled, is what has enabled the computing revolution of the last half century. However it has many well known limitations, most of which stem from the fact that it's an electrical device and is thus constrained by the speed at which electrical signals propagate, roughly a hundredth of the speed of light, so there has been a lot of research into building a computer that uses light instead of electricity. One of the main challenges an optical computer faces is storage: light is a rather tricky thing to pin down, and converting it to electricity (so it can be stored in traditional memory structures) would negate many of the benefits. That might be set to change, as researchers have developed a non-volatile storage platform based on phase-change materials.


The research comes out of the Karlsruhe Institute of Technology, with collaborators from the universities of Münster, Oxford, and Exeter. The memory cell they've developed can be written at speeds of up to 1 GHz, impressive considering most current memory devices are limited to around a fifth of that. The cell itself is made of the phase-change material Ge2Sb2Te5, or GST for short, a material that can shift between crystalline and amorphous states. When this material is exposed to a high-intensity light pulse its state shifts; that state can then be read back later using less intense light, allowing a cell to be written, read and erased entirely optically.

One novel property the researchers discovered is that their cell is capable of storing data in more than just a binary format. The switch between amorphous and crystalline states isn't all-or-nothing like it is with a transistor, which means a single optical cell can hold intermediate states and thus store more data than a single electrical cell. Of course, using such cells with current binary architectures would require a controller to do the translation, but that's hardly a new idea in computing. A completely optical computer might not need that translation at all, although such a machine is still a long way off from a real world implementation.
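
As a rough illustration of what that controller translation involves (this is a generic multi-level-cell packing scheme, not anything specific to the GST device), a cell with L distinguishable states stores log2(L) bits, so binary data gets re-expressed as base-L symbols:

```python
import math

def pack_bits_into_cells(bits: str, levels: int) -> list[int]:
    """Map a binary string onto cells that each hold one of `levels` states."""
    bits_per_cell = int(math.log2(levels))   # e.g. 4 levels -> 2 bits per cell
    assert 2 ** bits_per_cell == levels, "levels must be a power of two for this sketch"
    # Pad so the bit string divides evenly into cell-sized chunks.
    bits = bits.ljust(math.ceil(len(bits) / bits_per_cell) * bits_per_cell, "0")
    return [int(bits[i:i + bits_per_cell], 2)
            for i in range(0, len(bits), bits_per_cell)]

data = "1011001110"
print(pack_bits_into_cells(data, levels=2))   # binary cells: one bit each -> 10 cells
print(pack_bits_into_cells(data, levels=4))   # 4-level cells: two bits each -> 5 cells
print(pack_bits_into_cells(data, levels=8))   # 8-level cells: three bits each -> 4 cells
```

The real device would also need calibrated optical thresholds to tell the levels apart reliably, but the packing arithmetic is the same.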

The only thing that concerns me about this is that it's based on phase-change materials. There have been numerous devices based on them, most often in the realm of storage, which have purported to revolutionize the world of computing. To date, though, not one of them has managed to escape the lab; the technology has always been a couple of years away. It's not that they don't work, they almost always do, more that they either can't scale or producing them at volume proves prohibitively expensive. This light cell faces the additional challenge that a computing platform built for it doesn't exist yet, and I don't think it can compete with traditional memory devices without one.

It is, however, a great step forward for the realm of light based computing. With quantum computing likely decades or centuries away from becoming a reality and traditional computing facing more challenges than it ever has, we must begin investigating alternatives. Light based computing is one of the most promising fields in my mind, and it's great to see progress where it's been so hard to come by in the past.


3D Printed Prosthesis Regenerates Nerves.

Nerve damage has almost always been permanent. For younger patients there's hope of a full recovery after an injury, but as we get older the ability to repair nerve damage decreases significantly. Indeed, by the time we reach our 60s the best we can hope for is what's called "protective sensation", the ability to tell things like hot from cold. The current range of treatments is mostly limited to grafts, often using nerves from the patient's own body to repair the damage, and even those have limited success in practice. That could all be set to change with the development of a process which can produce nerve regeneration conduits using 3D scanning and printing.


The process was developed by a collaboration of scientists from the University of Minnesota, Virginia Tech, the University of Maryland, Princeton University, and Johns Hopkins University. The research builds upon a current cutting edge treatment which uses special structures, called nerve guidance conduits, to trigger regeneration. Traditionally such conduits could only be produced in simple shapes, meaning they were only able to repair nerve damage along straight lines. This new treatment, however, can work on arbitrary nerve structures and has been shown to restore both motor and sensory function in severed nerves, both in vitro (in a petri dish) and in vivo (in a living animal).

How they accomplished this is really quite impressive. First they used a 3D scanner to capture the structure of the nerve they were trying to regenerate, in this case the sciatic nerve (pictured above). They then used the resulting model to 3D print a nerve guidance conduit of the exact size and shape required. This was implanted into a rat with a 10 mm gap in its sciatic nerve (far too long to be sewn back together). The conduit successfully triggered regeneration of the nerve, and after 10 weeks the rat showed a vastly improved ability to walk. Since this approach had previously only been verified on linear nerves, it shows great promise for regenerating much more complicated nerve structures, like those found in us humans.

The great thing about this is that it can be used for any arbitrary nerve structure. Hospitals equipped with such a system would be able to scan the injury, print the appropriate nerve guide and implant it into the patient, all on site. This could have wide reaching ramifications for the treatment of nerve injuries, allowing far more of them to be treated without donor nerves needing to be harvested.

Of course this treatment has not yet been tested in humans, but the FDA has approved similar treatments in years past which have proven successful. With that in mind I'm sure this one will prove successful in a human model too, and from there it's only a matter of time before it finds its way to patients worldwide. Considering how slow progress has been in this area it's quite heartening to see dramatic results like this, and I'm sure further research will prove just as fruitful.


Growing Up on a Farm Prevents Asthma.

I grew up in a rural community just outside of Canberra. Whilst we had a large plot of land out there we didn't have a farm, but many people around us did, including a friend of mine up the road. We'd often get put to work by their parents when we wandered up there, tending to the chickens or helping wrangle the cows when they got unruly. Thinking back to those times it's interesting to note that barely anyone I knew who lived out there suffered from asthma; indeed the only people I knew who did were a couple of kids at school who had grown up elsewhere. As it turns out there's a reason for this: "farm dust" has been shown to have a protective effect when it comes to allergies, meaning farm kids are far less likely to develop conditions like asthma.


There’s been research in the past that identified a correlative relationship between living on a farm and a resistance to developing allergies and asthma. Researchers at the Vlaams Instituut voor Biotechnologie (VIB) in Belgium took this idea one step further and looked for what the cause might be. They did this by exposing young mice to farm dust (a low dose endotoxin called bacterial lipopolysaccharide) and then later checked their rates of allergy and asthma development against a control group. The research showed that mice exposed to the farm dust did not develop the severe allergic reactions at all whilst the control group did at the expected rate.

The researchers also discovered the mechanism by which farm dust provides its benefits. When developing lungs are exposed to farm dust, the mucous membranes in the respiratory tract react much less severely to common allergens than unexposed ones do. This is because when the dust interacts with the mucous membranes the body produces more of a protein called A20, which is responsible for the protective effect. Indeed, when the researchers deactivated the protein the protective effect was lost, demonstrating that it was responsible for the reduction in allergen reactivity.

What was really interesting, however, was the genetic profiling the VIB researchers did of 2,000 farm kids after their initial research with mice. They found that the vast majority of children who grew up on farms had the protection granted by increased A20 production. The children who didn't were found to carry a genetic variant that causes the A20 protein to malfunction. This is great news because it means the mouse model is an accurate one and can be used in the development of treatments for asthma and other A20 related conditions.

Whilst this doesn’t mean a month long holiday at the farm will cure you of your asthma (this only works for developing lungs, unfortunately) it does provide a fertile area for further research. This should hopefully lead to the swift development of a vaccine for asthma, a condition that has increased in prevalence over the past couple decades. Hopefully this will also provide insight into other allergies as whilst they might not have the exact same mechanism for action there’s potential for other treatment avenues to be uncovered by this research.


Latest LHC Results Suggest Non-Standard Model Physics Afoot.

If you cast your mind back to your high school science days you'll likely remember being taught certain things about atoms and what they're made up of. The theories you were taught, things like the strong and weak forces and electromagnetism, form part of what's called the Standard Model of particle physics. This model was born out of an international collaboration of many scientists looking to unify the world of subatomic physics and, for the most part, it has proved extremely useful in guiding research. However it has its limitations, and the Large Hadron Collider was built in order to test them. Whilst the current results have largely supported the Standard Model there is a growing body of evidence that runs contrary to it, and the latest findings are quite interesting.


The data comes out of the LHCb detector, from the previous run conducted from 2011 to 2012. The process they were looking into is B meson decay, notable for producing a host of lighter particles including leptons (in this case the tau lepton and the muon). These particles are of interest to researchers because the Standard Model makes a prediction about them called lepton universality. Essentially this states that, once you've corrected for their different masses, all leptons are treated equally by the fundamental forces. This means B mesons should decay to each type of lepton at the same rate, however the team investigating this principle found a small but significant difference in those decay rates. Put simply, should this phenomenon be confirmed with further data it would point towards physics beyond the Standard Model.
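
The article doesn't name the specific decay channel, but the LHCb lepton universality test from this dataset is usually framed as a ratio of branching fractions for the tau and muon versions of the same semileptonic B decay, along the lines of:

```latex
R(D^{*}) \;=\;
\frac{\mathcal{B}\!\left(\bar{B}^{0} \rightarrow D^{*+}\tau^{-}\bar{\nu}_{\tau}\right)}
     {\mathcal{B}\!\left(\bar{B}^{0} \rightarrow D^{*+}\mu^{-}\bar{\nu}_{\mu}\right)}
```

Lepton universality fixes this ratio to a definite Standard Model value once the tau-muon mass difference is accounted for, so a measurement sitting persistently away from that prediction is exactly the sort of discrepancy being described here.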

The reason scientists aren't declaring the Standard Model dead just yet is the confidence level at which this observation has been made. Right now the data only supports roughly 2σ (about 95%) confidence that the result isn't a statistical aberration. Whilst that sounds like a pretty sure bet, the standard required for a discovery is the much more demanding 5σ level (the level CERN reached before announcing the Higgs boson discovery). The current higher luminosity run the LHC is conducting should hopefully provide the data required, although I did read that it still might not be sufficient.
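
To put those thresholds in perspective, here's a quick sketch (using the usual Gaussian tail probabilities; particle physics quotes the one-sided value) of how often pure statistical noise would produce a fluctuation that large:

```python
from scipy.stats import norm

for sigma in (2, 3, 5):
    one_sided = norm.sf(sigma)        # chance of an upward fluctuation at least this large
    two_sided = 2 * norm.sf(sigma)    # chance of a fluctuation this large in either direction
    print(f"{sigma} sigma: one-sided p = {one_sided:.2e}, two-sided p = {two_sided:.2e}")

# 2 sigma fluctuations happen a few percent of the time, which is why they only
# count as "evidence"; 5 sigma corresponds to odds of roughly one in a few
# million, the bar set for claiming a discovery.
```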

The results have attracted extra attention because this isn't the first experiment to bring the lepton universality principle into question. Previous research out of the Stanford Linear Accelerator Center's (SLAC) BaBar experiment produced similar results when investigating these decays. What's quite interesting is that BaBar found the same discrepancy using electron collisions, whilst the LHC uses higher energy protons. Similar results from such different methods suggest the discrepancy is real rather than an artefact of one machine, which would require either a new model or a reworking of the current one.

Whilst it’s still far too early to start ringing the death bell for the Standard Model there’s a growing mountain of evidence that suggests it’s not the universal theory of everything it was once hoped to be. That might sound like a bad thing however it’s anything but as it would open up numerous new avenues for scientific research. Indeed this is what science is built on, forming hypothesis and then testing them in the real world so we can better understand the mechanics of the universe we live in. The day when everything matches our models will be a boring day indeed as it will mean there’s nothing left to research.

Although I honestly cannot fathom that ever occurring.


Age Related Cognitive Motor Decline Starts at 24, But It’s Not All Bad News.

Professional eSports teams are almost entirely made up of young individuals. It's an interesting phenomenon to observe, as it's quite contrary to many other sports: the age drop-off for eSports players is far earlier and more drastic, with long term players like Evil Geniuses' Fear, at the ripe old age of 27, often referred to as The Old Man. The commonly held belief is that, past your mid twenties, your reaction times and motor skills are in decline and you'll be unable to compete with the new upstarts and their razor sharp reflexes. New research in this area may just prove this to be true, although it's not all over for us oldies who want to compete with our younger compatriots.


The research comes out of Simon Fraser University and was based on data gathered from StarCraft II replays. The researchers recruited participants aged from 16 to 44 and asked them to submit replays to their website, SkillCraft. These replays then went through standardization and analysis using the popular replay tool SC2Gears. With this data in hand the researchers were able to test some hypotheses about how age affects cognitive motor functions and whether domain experience, i.e. how long someone had been playing the game for, influenced their skill level. Specifically they looked to answer 3 questions:

  1. Is there age-related slowing of Looking-Doing Latency (a measure sketched just below this list)?
  2. Can expertise directly ameliorate this decline?
  3. When does this decline begin?
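
Looking-doing latency is, roughly, the delay between a player shifting their view to a new part of the map and first acting there. As a purely illustrative sketch (the event names and replay format below are invented for the example, not SC2Gears' actual schema), it could be computed from a time-stamped event stream like so:

```python
# Toy replay events: (timestamp in ms, event type). "look" marks the camera
# moving to a new screen location, "do" marks the first command issued there.
events = [
    (1000, "look"), (1240, "do"),
    (3000, "look"), (3180, "do"),
    (5000, "look"), (5410, "do"),
]

def looking_doing_latencies(events):
    """Return the delay between each view shift and the next action."""
    latencies, pending_look = [], None
    for timestamp, kind in events:
        if kind == "look":
            pending_look = timestamp
        elif kind == "do" and pending_look is not None:
            latencies.append(timestamp - pending_look)
            pending_look = None
    return latencies

lats = looking_doing_latencies(events)
print(lats)                          # [240, 180, 410]
print(sum(lats) / len(lats), "ms")   # mean latency for this (toy) player
```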

On the first question they found, unequivocally, that as we age our motor skills start to decline. Previous studies of cognitive motor decline focused on older populations, with the data then extrapolated back to estimate when decline set in. This data points to onset happening much earlier than previous research suggests, with their estimate putting 24 as the age when cognitive motor functions begin to take a hit. What's really interesting though is the second question: can us oldies overcome the motor skill gap with experience?

Whilst the study didn’t find any evidence to directly support the idea that experience can trump age related cognitive decline it did find that older players were able to hold their own against younger players of similar experience. Whilst the compensation mechanisms weren’t directly researched they did find evidence of older players using cognitive offloading tricks in order to keep their edge. Put simply older players would do things that didn’t require a high cognitive load, like using less complex units or strategies, in order to compete with younger players. This might not support other studies which have shown that age related decline can be combatted with experience but it does provide an interesting avenue for additional research.

As someone who’s well past the point where age related decline has supposedly set in my experience definitely lines up with the research. Whilst younger players might have an edge on me in terms of reaction speed my decades worth of gaming experience are more than enough to make up the gap. Indeed I’ve also found that having a breadth of gaming experience, across multiple platforms and genres, often gives me insights that nascent gamers are lacking. Of course though the difference between me and the professionals is a gap that I’ll likely never close but that doesn’t matter when I’m stomping young’uns in pub games.


Record Warmest Temp for Superconductor Achieved.

Superconductors are the ideal electrical conductors, having no electrical resistance and therefore transmitting power with essentially 100% efficiency. Current applications of superconductors are limited to areas where their operational complexity (most of which comes from the cooling required to keep them in a superconducting state) is outweighed by the benefits they provide. Such complexity is what has driven the search for a superconductor that can operate at normal temperatures, as it would open up a whole swath of applications that are currently not feasible. Whilst we're still a long way from that goal, a new temperature record has been set for superconductivity: a positively balmy -70°C.

The record comes out of the Max Planck Institute for Chemistry in Mainz and was accomplished using hydrogen sulfide gas. Compared to other superconductors, which typically take the form of some exotic combination of metals, using a gas sounds odd, however what they did to the gas made it anything but your run of the mill rotten egg gas. To make the hydrogen sulfide superconducting they first subjected the gas to extreme pressures, over 1.5 million times normal atmospheric pressure. This transforms the gas into a metallic form, which they then cooled down below its critical temperature.

Such a novel discovery has spurred other researchers to investigate the phenomenon, and the preliminary results coming out are promising. Most of the labs which have sought to recreate the effect have confirmed at least one hallmark of superconductivity: that the highly pressurized hydrogen sulfide has no electrical resistance. Currently unconfirmed by other labs, however, is the other hallmark, the expulsion of magnetic fields (the Meissner effect). That's likely because the discovery is still relatively new, so I'm sure confirmation of that effect is not far off.

Whilst this is most certainly a great discovery, one that has already spurred on a new wave of research into high temperature superconductors, its practical implications are still a little unclear. Whilst the temperature is far more manageable than that of its traditional counterparts, the fact that it requires extreme pressures may preclude it from being used. Indeed, large pressurized systems present many risks that often require solutions just as complex as those for cryogenic systems. In the end more research is required to ascertain the operating parameters of these superconductors and, should their benefits outweigh their complexity, they will make their way into everyday use.

Despite that, it's great to see progress being made in this area, especially progress with the potential to realise the long-thought-impossible dream of a room temperature superconductor. The benefits of such a technology are so wide reaching that it's great to see so much focus on it, which gives us hope that achieving that goal is just a matter of time. It might not be tomorrow, or even next decade, but the longest journeys begin with a single step, and what a step this is.



Standing 2 Hours a Day Shows Potential Benefits.

You don’t have to look far to find article after article about sitting down is bad for your health. Indeed whilst many of these posts boil down to simple parroting of the same line and then appealing to people to adopt a more active lifestyle the good news is that science is with them, at least on one point. There’s a veritable cornucopia of studies out there that support the idea that a sedentary lifestyle is bad for you, something which is not just limited to sitting at work. However the flip side to that, the idea that standing is good for you, is not something that’s currently supported by a wide body of scientific evidence. Logically it follows that it would be the case but science isn’t just about logic alone.


The issue mostly stems from the fact that, whilst we have longitudinal studies on sedentary lifestyles, we don't have a comparable body of data for your average Joe who's done nothing but switch from mostly sitting to mostly standing. This means we don't understand the circumstances in which standing is beneficial and when it's not, so a blanket recommendation that "everyone should use a standing desk" isn't something that can currently be made in good faith. However, preliminary studies are showing promise in this area, like new research coming out of our very own University of Queensland.

The study equipped some 780 participants, aged between 36 and 80, with activity monitors that recorded their activity over the course of a week. The monitors allowed the researchers to determine when participants were engaging in sedentary activities, such as sleeping or sitting, or something more active like standing or exercising. In addition they took blood samples and measured a number of other key indicators. They then used this data to determine whether a more active lifestyle was associated with better health markers.

As it turned out, it was: the more active participants, the ones standing on average more than 2 hours a day more than their sedentary counterparts, showed better health markers like lower blood sugar levels (2%) and lower triglycerides (11%). That in and of itself isn't proof that standing is better for you, indeed the study makes a point of saying it can't draw that conclusion, however preliminary evidence like this is useful in determining whether further research in this field is worthwhile. Based on these results there's definitely more investigation to be done, mostly focused on isolating the key factors needed to support the current thinking.

It might not sound like this kind of research tells us anything we didn't already know (being more active means you'll be healthier? Shocking!), however validating base assumptions is always a worthwhile exercise. This research, whilst based on short term data with inferred results, provides solid grounds on which to proceed with a much more controlled and rigorous study. Whilst results from further study might not be available for a while, this at least serves as another arrow in the quiver for encouraging everyone to adopt a more active lifestyle.