

Latest LHC Results Suggest Non-Standard Model Physics Afoot.

If you cast your mind back to your high school science days you'll likely remember being taught certain things about atoms and what they're made of. The theories you were taught, things like the strong and weak forces and electromagnetism, form part of what's called the Standard Model of particle physics. This model was born out of an international collaboration of many scientists looking to unify the world of subatomic physics and, for the most part, it has proved extremely useful in guiding research. However it has its limitations, and the Large Hadron Collider was built in order to test them. Whilst the current results have largely supported the Standard Model there's a growing cache of evidence that runs contrary to it, and the latest findings are quite interesting.


The data comes out of the LHCb detector from the previous run, conducted from 2011 to 2012. The process they were looking into is called B meson decay, notable for the fact that it creates a whole host of lighter particles, including 2 leptons (the tau lepton and the muon). These particles are of interest to researchers as the Standard Model makes a prediction about them called Lepton Universality. Essentially this principle states that, once you've corrected for mass, all leptons are treated equally by all the fundamental forces. This means they should all decay at the same rate, however the team investigating this principle found a small but significant difference in the rate at which these leptons decayed. Put simply, should this phenomenon be confirmed with further data it would point towards non-Standard Model particle physics.

The reason why scientists aren't decrying the Standard Model's death just yet is the confidence level at which this discovery has been made. Right now the data only supports a 2σ (roughly 95%) confidence that the result isn't a statistical aberration. Whilst that sounds like a pretty sure bet the standard required for a discovery is the much more difficult 5σ level (the level CERN attained before announcing the Higgs boson discovery). The current higher luminosity run that the LHC is conducting should hopefully provide the level of data required, although I did read that it still might not be sufficient.
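If you're curious what those sigma levels actually translate to, here's a quick sketch (the function name is my own; it just evaluates the two-sided confidence for an n-sigma result under a normal distribution):

```python
from math import erf, sqrt

def confidence(sigma: float) -> float:
    """Two-sided confidence level for an n-sigma result
    under a normal distribution."""
    return erf(sigma / sqrt(2))

print(f"2σ → {confidence(2) * 100:.2f}%")   # ≈ 95.45%
print(f"5σ → {confidence(5) * 100:.5f}%")   # ≈ 99.99994%
```

That gap between 95% and 99.99994% is why physicists are so conservative about the word "discovery": at 2σ roughly 1 in 22 results would be a fluke, whilst at 5σ it's about 1 in 3.5 million.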

The results have gotten increased attention because they’re actually not the first experiment to bring the lepton universality principle into question. Indeed previous research out of the Stanford Linear Accelerator Center’s (SLAC) BaBar experiment produced similar results when investigating lepton decay. What’s quite interesting about that experiment though is that it found the same discrepancy through electron collisions whilst the LHC uses higher energy protons. The difference in method with similar results means that this discrepancy is likely universal, requiring either a new model or a reworking of the current one.

Whilst it’s still far too early to start ringing the death knell for the Standard Model there’s a growing mountain of evidence suggesting it’s not the universal theory of everything it was once hoped to be. That might sound like a bad thing however it’s anything but, as it would open up numerous new avenues for scientific research. Indeed this is what science is built on: forming hypotheses and then testing them in the real world so we can better understand the mechanics of the universe we live in. The day when everything matches our models will be a boring day indeed as it will mean there’s nothing left to research.

Although I honestly cannot fathom that ever occurring.


Age Related Cognitive Motor Decline Starts at 24, But It’s Not All Bad News.

Professional eSports teams are almost entirely made up of young individuals. It’s an interesting phenomenon to observe as it’s quite contrary to many other sports. Still, the age drop off for eSports players is far earlier and more drastic, with long term players like Evil Geniuses’ Fear, who’s the ripe old age of 27, often referred to as The Old Man. The commonly held belief is that, past your mid twenties, your reaction times and motor skills are in decline and you’ll be unable to compete with the new upstarts and their razor sharp reflexes. New research in this area may just prove this to be true, although it’s not all over for us oldies who want to compete with our younger compatriots.


The research comes out of the University of California and was based on data gathered from StarCraft 2 replays. The researchers gathered participants aged from 16 to 44 and asked them to submit replays to their website, SkillCraft. These replays then went through some standardization and analysis using the wildly popular replay tool SC2Gears. With this data in hand the researchers were able to test some hypotheses about how age affects cognitive motor functions and whether or not domain experience, i.e. how long someone had been playing a game, influenced their skill level. Specifically they looked to answer 3 questions:

  1. Is there age-related slowing of Looking-Doing Latency?
  2. Can expertise directly ameliorate this decline?
  3. When does this decline begin?

In terms of the first question they found unequivocally that, as we age, our motor skills start to decline. Previous studies in cognitive motor decline focused on more elderly populations, with the data then used to extrapolate back to estimate when cognitive decline set in. Their data points to onset happening much earlier than previous research suggests, with their estimate pointing to 24 as the age when cognitive motor functions begin to take a hit. What’s really interesting though is the second question: can us oldies overcome the motor skill gap with experience?

Whilst the study didn’t find any evidence to directly support the idea that experience can trump age related cognitive decline it did find that older players were able to hold their own against younger players of similar experience. Whilst the compensation mechanisms weren’t directly researched they did find evidence of older players using cognitive offloading tricks in order to keep their edge. Put simply older players would do things that didn’t require a high cognitive load, like using less complex units or strategies, in order to compete with younger players. This might not support other studies which have shown that age related decline can be combatted with experience but it does provide an interesting avenue for additional research.

As someone who’s well past the point where age related decline has supposedly set in, my experience definitely lines up with the research. Whilst younger players might have an edge on me in terms of reaction speed my decades’ worth of gaming experience is more than enough to make up the gap. Indeed I’ve also found that having a breadth of gaming experience, across multiple platforms and genres, often gives me insights that nascent gamers are lacking. Of course the difference between me and the professionals is a gap that I’ll likely never close, but that doesn’t matter when I’m stomping young’uns in pub games.


Record Warmest Temp for Superconductor Achieved.

Superconductors are the ideal electrical conductors, having the desirable attribute of zero electrical resistance, allowing 100% efficiency for power transmitted along them. Current applications of superconductors are limited to areas where their operational complexity (most of which comes from the cooling required to keep them in a superconducting state) is outweighed by the benefits they provide. Such complexity is what has driven the search for a superconductor that can operate at normal temperatures, as one would bring about a whole new swath of applications that are currently not feasible. Whilst we’re still a long way from that goal a new temperature record has been set for superconductivity: a positively balmy -70°C.

The record comes out of the Naval Research Laboratory in Washington DC and was accomplished using hydrogen sulfide gas. Compared to other superconductors, which typically take the form of some exotic combination of metals, using a gas sounds odd, however what they did to the gas made it anything but your run of the mill rotten egg gas. To make the hydrogen sulfide superconducting they first subjected the gas to extreme pressures, over 1.5 million times normal atmospheric pressure. This transforms the gas into its metallic form, which they then cooled down to its critical temperature.

Such a novel discovery has spurred other researchers to investigate the phenomenon, and the preliminary results coming out are promising. Most of the other labs that have sought to recreate the effect have confirmed at least one part of superconductivity: that the highly pressurized hydrogen sulfide has no electrical resistance. Currently unconfirmed by other labs, however, is the other effect: the expulsion of all magnetic fields (called the Meissner effect). That’s likely due to this discovery still being relatively new, so I’m sure confirmation of that effect is not far off.

Whilst this is most certainly a great discovery, one that has already spurred a new wave of research into high temperature superconductors, the practical implications are still a little unclear. Whilst the temperature is far more manageable than that of its traditional counterparts the fact that it requires extreme pressures may preclude it from being used. Indeed large pressurized systems present many risks that often require solutions just as complex to manage as those for cryogenic systems. In the end more research is required to ascertain the operating parameters of these superconductors and, should their benefits outweigh their complexity, they will make their way into everyday use.

Despite that it’s great to see progress being made in this area, especially progress that has the potential to realise the long thought impossible dream of a room temperature superconductor. The benefits of such a technology are so wide reaching that it’s great to see so much focus on it, which gives us hope that achieving that goal is just a matter of time. It might not be tomorrow, or even next decade, but the longest journeys begin with a single step, and what a step this is.

 


Standing 2 Hours a Day Shows Potential Benefits.

You don’t have to look far to find article after article about how sitting down is bad for your health. Indeed whilst many of these posts boil down to simple parroting of the same line followed by an appeal to adopt a more active lifestyle, the good news is that science is with them, at least on one point. There’s a veritable cornucopia of studies out there supporting the idea that a sedentary lifestyle is bad for you, something not just limited to sitting at work. However the flip side, the idea that standing is good for you, is not currently supported by a wide body of scientific evidence. Logically it follows that it would be the case, but science isn’t just about logic alone.


The issue at hand here mostly stems from the fact that, whilst we have longitudinal studies on sedentary lifestyles, we don’t have a comparable body of data for your average Joe who’s done nothing but change from mostly sitting to mostly standing. This means that we don’t understand the parameters in which standing is beneficial and when it’s not so a wide recommendation that “everyone should use a standing desk” isn’t something that can currently be made in good faith. However preliminary studies are showing promise in this area, like new research coming out of our very own University of Queensland.

The study equipped some 780 participants, aged between 36 and 80, with activity monitors that would record their activity over the course of a week. The monitors would allow the researchers to determine when participants were engaging in sedentary activities, such as sleeping or sitting, or something more active like standing or exercising. In addition to this they also took blood samples and a number of other key indicators. They then used this data to glean insights as to whether or not a more active lifestyle was associated with better health indicators.

As it turns out this is true: the more active participants, those who on average stood more than 2 hours a day longer than their sedentary counterparts, were associated with better health indicators like lower blood sugar levels (2%) and lower triglycerides (11%). That in and of itself isn’t proof that standing is better for you, indeed the study makes a point of saying it can’t draw that conclusion, however preliminary evidence like this is useful in determining whether or not further research in this field is worthwhile. Based on these results there’s definitely some more investigation to be done, mostly focused on isolating the key areas required to support the current thinking.

It might not sound like this kind of research did anything we didn’t already know about (being more active means you’ll be more healthy? Shocking!) however validating base assumptions is always a worthwhile exercise. This research, whilst based on short term data with inferred results, provides solid grounds on which to proceed with a much more controlled and rigorous study. Whilst results from further study might not be available for a while this at least serves as another arrow in the quiver for encouraging everyone to adopt a more active lifestyle.

Using Plastic Balls to Cover a Water Reservoir.

There are some things that, at first glance, seem so absurd that you have to wonder why they’re being done. Many are quick to point out that if something looks stupid but it works, then it isn’t stupid. Indeed that’s what I first thought when I heard that the Los Angeles Department of Water and Power was filling up its water reservoirs with millions upon millions of plastic balls, as it sounded like some kind of joke. As it turns out it’s anything but, and compared to other solutions to the problem it’s actually quite an ingenious project (not to mention how soothing dumping that many balls out of a truck sounds):

The first thing that comes to mind is why use millions of plastic balls instead of, say, a giant shade structure to cover the reservoir? As it turns out constructing something like that would be an order of magnitude more expensive, on the order of $300 million compared to the shade balls’ total project cost of approximately $34 million. The balls themselves will last approximately 10 years before they start degrading, at which point they’ll likely start splitting in half. Putting that in perspective, you’d need the shade structure to last close to 90 years before it became a better option than the balls, a pretty staggering statistic.
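A quick back-of-the-envelope check on that comparison (the dollar figures and lifespan are the rough ones quoted above; the breakeven calculation is my own):

```python
shade_structure_cost = 300e6   # rough cost of a shade structure, USD
shade_balls_cost = 34e6        # total project cost of the balls, USD
balls_lifespan_years = 10      # expected life before the balls degrade

# Years the structure would need to last to beat repeatedly
# replacing the balls at the same overall cost
breakeven = shade_structure_cost / shade_balls_cost * balls_lifespan_years
print(f"breakeven ≈ {breakeven:.0f} years")  # ≈ 88 years
```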

The balls provide numerous benefits, the largest of which is the reduction of water lost to evaporation in the reservoirs. The current reservoirs, which stretch over some 175 acres, hold about 3.3 billion gallons of water and about 10% of that is lost every year to evaporation. These little balls will then save some 300 million gallons of water a year from being lost. Additionally chemicals such as chlorine and bromide can combine into bromate (a potential carcinogen) under sunlight, something which these little plastic balls will help prevent.
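Running the quoted figures through (both input numbers come from the article; the implied prevention rate is my own inference):

```python
reservoir_gallons = 3.3e9   # total capacity across the reservoirs
evaporation_rate = 0.10     # fraction lost to evaporation each year
saved_gallons = 300e6       # annual saving quoted for the balls

annual_loss = reservoir_gallons * evaporation_rate  # 330 million gallons
prevented = saved_gallons / annual_loss             # fraction of loss stopped
print(f"annual loss ≈ {annual_loss / 1e6:.0f} million gallons, "
      f"balls prevent ≈ {prevented:.0%} of it")
```

In other words the balls don't stop evaporation entirely, but preventing roughly nine tenths of it is still a remarkable return for floating plastic.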

In all honesty when I first saw this I thought it was a joke, a viral video advertising a plastics company or something equally banal. However digging further into it the science is sound, the cost is far cheaper than the alternatives and the benefits outweigh the costs.

Colour me impressed.


Stanene: Graphene’s Metallic Brother.

Graphene has proven to be a fruitful area of scientific research, showing that atom thick layers of elements exhibit behaviours that are wildly different from their thicker counterparts. This has spurred research into how other elements behave when slimmed down to atom thick layers, producing materials such as silicene (made from silicon) and phosphorene (made from phosphorus). Another material in the same class, stanene, made from an atom thick layer of tin, has been an active area of research due to the properties it might have. Researchers have announced that they have, for the first time, created stanene in the lab and have begun to probe its theoretical properties.

Not all elements have the ability to form these 2D structures, however researchers at Stanford University in California predicted a couple of years ago that tin should be able to form a stable one. This structure lends itself to numerous novel characteristics, chief among them the ability for an electric current to pass through it without producing waste heat. Of course without a real world example to test against such properties aren’t of much use, and so the researchers have spent the last couple of years developing a method to create a stanene sheet. That research has proved fruitful as they managed to create a stanene layer on top of a supporting substrate of bismuth telluride.

The process they used to create the stanene sheet is pretty interesting. First they create a chamber with a base of bismuth telluride. Then they vaporize tin and introduce it into the chamber, allowing it to deposit onto the bismuth telluride base. It’s a similar process to the one some companies use to create synthetic diamonds, called chemical vapor deposition. For something like stanene it ensures the resulting sheet is created uniformly, keeping the underlying structure consistent. The researchers then used this resulting sheet to test the theoretical properties that were modelled previously.

Unfortunately the stanene sheet produced by this method does not appear to have the properties the theoretical models would indicate. The problem seems to stem from the bismuth telluride base used for the vapor deposition process, as it’s not completely inert. This means it interacts with the stanene sheet, contaminating it and potentially disrupting the topological insulator properties it should exhibit. The researchers are investigating different surfaces to mitigate this effect, so it’s likely we’ll have a pure stanene sheet in the not too distant future.

Should this research prove fruitful it could open up many new avenues for materials development. Stanene has properties that would make it ideal for use in electronics, being able to dramatically increase the efficiency of interconnects. Large scale implementations are likely still a while off but if the vapor deposition process can be made to work there are immediate applications in the world of microelectronics. Hopefully the substrate issue is sorted out soon and we’ll see consumerization of the technology begin in earnest.

 


Vaccination So Effective Even Bees Do It.

The unequivocal effectiveness of vaccinations has seen many of the world’s worst and most debilitating diseases relegated to the history books. Gone are the days when millions of people were afflicted with diseases that could leave them permanently disabled, enabling many more to live long and healthy lives. Before their invention, however, developing an immunity to a disease often meant enduring it, something that ranged from a mild inconvenience to a life threatening prospect. Our biology takes care of part of that, with some immunity passing down from mother to child, however we’d never witnessed that outside our branch of the tree of life. New research shows that bees in fact have their own form of natural immunity that queens pass onto their workers.


The research, conducted by scientists at Stanford University and published in PLOS Pathogens a couple days ago, shows that queen bees immunize their worker bees against certain types of pathogens that would otherwise devastate the colony. The mechanism by which this works is actually very similar to the way many vaccines work today. Essentially the queen bee, who rarely leaves the hive, is fed on a combination of pollen and nectar called royal jelly. This food actually contains a variety of pathogens which typically would be deadly to the bees.

However the queen bee has what’s called a fat body, an organ which functions similarly to our liver. Once the pathogen has been broken down in the queen bee’s gut it’s then transferred to the fat body where parts of the pathogen are wrapped up in a protein called vitellogenin. This is then passed onto her offspring who, when they hatch, now have immunity to pathogens that would otherwise kill them. What’s interesting about this process is that it has the potential for aiding current bee populations which have been collapsing around the world over the past decade.

Whilst the root cause of the widespread colony collapse is still under intense debate there are several potential causes that could be mitigated using this mechanism. Essentially we could devise vaccines for some of the problems bee colonies face and introduce them by spraying them on flowers. Then, when the pollen is brought back to the queen, all subsequent bees would gain the immunity, protecting them from the disease. This could also make the end product better for humans, potentially eradicating problems like botulinum toxin, which sometimes makes its way into honey.

It’s always interesting to see common attributes like this pop up across species as it gives us an idea of how much of our evolutionary lineage is shared. Whilst we don’t share a lot in common with bees there are a lot of similar mechanisms at play, suggesting our evolutionary paths deviated at a common ancestor a long time ago. Something like this, whilst not exactly a revolution, does have the potential to benefit both us and our buzzing companions. Hopefully this leads to positive progress in combating colony collapse which is beneficial for far more than just lovers of honey.

Programming Magnetic Fields.

Everyone is familiar with the traditional bar magnet, usually painted in red and blue denoting the north and south poles respectively. You’re also likely familiar with their behaviour: put opposite poles next to each other and they’ll attract, but put the same poles next to each other and they’ll repel. If you’ve taken this one step further and played around with iron filings (or, if you’re really lucky, a ferrofluid) you’ll be familiar with the magnetic field lines that magnets generate, giving you some insight into why magnets function the way they do. What you’re not likely familiar with is magnets that have had their polarity printed onto them, which results in some incredible behaviour.

The demonstrations they have with various programmed magnets are incredibly impressive as they exhibit behaviour you wouldn’t expect from a traditional magnet. Whilst some of the applications they talk about seem a little pie in the sky at their current scale (like the frictionless gears, since the amount of torque they could handle is directly proportional to field strength) a lot of the others would appear to have immediate commercial applications. The locking magnets, for instance, seem like they’d be a great solution for electronic locks, although maybe not for your front door just yet.

What I’d be interested to see is how scalable their process is and whether or not that same programmability could be applied to electromagnets as well. The small demonstrator magnets that they have show what the technology is capable of doing however there are numerous applications that would require much bigger and bulkier versions of them. Similarly electromagnets, which are widely used for all manner of things, could benefit greatly from programmed magnetic fields. With the fundamentals worked out though I’m sure this is just an engineering challenge and that’s the easy part, right?


LHC Starts Back Up, Where to From Here?

It was 3 years ago that particle physicists working with CERN at the Large Hadron Collider announced they had verified the existence of the Higgs boson. It was a pivotal moment in scientific history, demonstrating that the fundamental basis of the Standard Model of particle physics is solid. Following that announcement the LHC was shut down for a planned upgrade, one that would see the energy of the resulting collisions nearly doubled, from 4TeV per beam to 6.5TeV. This upgrade was scheduled to take approximately 2 years and would open up new avenues for particle physics research. Just last week, almost 3 years to the day after the Higgs boson announcement, the LHC began collisions again. The question that’s on my mind, and I’m sure many others’, is just what is the LHC looking for now?


Whilst the verification of the Higgs boson adds a certain level of robustness to the Standard Model many researchers have theorized physics beyond this model at the energies the LHC is currently operating at. Of these models one that will be explored by the LHC in its current data collection run is Supersymmetry, a model which predicts that each particle belonging to one of the two elementary classes (bosons or fermions) has a “superpartner” in the other. An example would be the electron, a fermion, whose superpartner, called a selectron, would be a boson. These particles share all the same properties with the exception of their spin and so should, theoretically, be easy to detect. However no such particles have been detected, not even in the same run in which the Higgs boson was. The new, higher energy level of the LHC has the potential to create some of these particles and could provide evidence to support supersymmetry as a model.

Further to the supersymmetry model is every new particle physicist’s favourite theory: String Theory. Now I’ll have to be honest here, I’m not exactly what you’d call String Theory’s biggest fan since, whilst it makes some amazing predictions, it has yet to be supported by any experimental evidence. At its core String Theory posits that all point-like particles are made up of one-dimensional strings, often requiring the use of multi-dimensional physics (10 or 26 dimensions depending on which model you look at) in order to make them work. However since the models are almost purely mathematical in nature there has yet to be any link made between them and the real world, precluding them from being tested. Whilst the LHC might provide insight into this I’m not exactly holding my breath, but I’ll turn on a dime if they prove me wrong.

Lastly, and probably most excitingly for me, is the prospect of discovering the elusive dark matter particle. Due to its nature, i.e. only interacting with ordinary matter through gravity, we’re unlikely to be able to detect dark matter particles in the LHC directly. Instead, should the LHC generate a dark matter particle, we’ll be able to infer its existence from the energy it takes away from the collision. No such discrepancy was noted at the last run’s energy levels so it will be interesting to see if doubling the collision energy leads to the generation of a dark matter particle.

Suffice to say the LHC has a long life ahead of it with plenty of envelope pushing science to be done. This current upgrade is planned to last them for quite some time with the next one not scheduled to take place until 2022, more than enough time to generate mountains of data to either support or refute our current models for particle physics.

Euler’s Disk: Surprising Complexity.

As long time readers will know I’m a fan of simple experiments or demonstrations that have some underpinning scientific phenomenon. It was things like these that first spurred my interest in science, especially since places like Questacon (a must visit place if you ever find yourself in Canberra) were filled to the brim with experiments like them. Thus whenever I find one I feel compelled to share it, not so much for myself but in the hopes that when someone sees it their curiosity will be piqued and they’ll pass that same passion onto others. In that vein I give you Euler’s Disk, one of the most fascinating science based toys I’ve come across:

The disk gets its name from Leonhard Euler, an eighteenth century physicist and mathematician who was behind such revelations as infinitesimal calculus and many other fundamental things. He studied the disk as part of his other research, however it wasn’t until recently that it found itself back in the limelight. Back in 2000 Cambridge researcher Keith Moffatt analysed the disk’s abrupt stop, arguing that viscous drag in the thin layer of air trapped beneath the disk dominates the rate at which it slows down; later experiments suggested that rolling resistance between the surface and the disk’s edge also plays a major part.

What interests me most about it is the gradual speed up of the revolutions coupled with the increasingly bizarre noise that accompanies it. Then, right at the end, when it appears to be spinning at its fastest, the disk stops, as if some outside force robbed it of all its momentum instantly. This demonstrates conservation of momentum: the rate of precession of the disk increases as it spins down. Explaining the phenomenon, though, is much harder than watching it, which is why it’s such a great scientific toy.
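That abrupt stop is what Moffatt dubbed a finite-time singularity: in his model the precession rate doesn’t wind down gracefully but grows roughly as (t₀ − t)^(−1/6), diverging as the disk approaches its settling time t₀. A tiny sketch of that scaling (the settling time here is an arbitrary made-up value, just to show the blow-up):

```python
# Moffatt's scaling law: precession rate Omega ~ (t0 - t)**(-1/6),
# growing without bound as t approaches the settling time t0.
t0 = 100.0  # hypothetical settling time, seconds
for t in (0.0, 90.0, 99.0, 99.9, 99.99):
    omega = (t0 - t) ** (-1 / 6)
    print(f"t = {t:6.2f} s   relative precession rate = {omega:.3f}")
```

The rate barely changes for most of the spin-down, then shoots up in the final moments, which matches both the rising whirr you hear and the seemingly instant stop.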