As we go further and further down into the world of the infinitesimally small the rules we use at the macro level start to break down. Where once we had well defined rules governing how bodies interact with each other we quickly end up in the realm of possibilities rather than definites, something which causes no end of grief to those seeking to understand it. Indeed whenever I feel like I’m getting close to understanding a fraction of what quantum mechanics is something else comes out of left field that ruins it, leaving me with a bunch of disjointed pieces of information that I try to make sense of yet again. Today I bring you one such piece which both makes complete sense yet is completely nonsensical.
Physicists at our very own Australian National University designed an experiment to test the wave/particle duality that single atoms can exhibit. Their experiment consisted of a stream of single helium atoms fired down an apparatus containing 2 light gates which, if both activated, would cause an interference pattern when measured (indicating a wave). Should only one of the gates be open, however, the atom would travel down a single path (indicating a particle). The secret sauce to their experiment was that the second gate, the one which would essentially force the atom to travel as a wave, was turned on randomly but only after the atom would have already traversed it. This essentially supports the theory that, when we’re operating at the quantum level, nothing is certain until measurements are made.
Extrapolating from this you can make some pretty wild theories about the mechanism of action here although there are only a few that can truly make sense. My favourite (and the one that’s least likely to be real) is that the information about the gate activation travelled back in time and informed the particle of the state before it traversed them, meaning that it was inevitable for it to be measured that way. Of course the idea of information travelling back in time violates a whole slew of other physical laws but if that proved to be correct the kind of science we could pursue from it would be straight out of science fiction. I know that’s not going to happen but there’s a part of me that wants to believe.
The far more mundane (and more likely) explanation for this phenomenon is that the atom exists as both a particle and a wave simultaneously until it is observed, at which point it collapses down into the only possibility that makes sense. Whilst some may then extend this to mean things like “The world doesn’t exist unless you’re looking at it” it’s actually a far more nuanced problem, one that requires us to understand what constitutes measurement at a quantum level. At a fundamental level most of the issues arise out of the measurement altering the thing you’re trying to observe, although I’m sure there’s far more to it than that.
I’m honestly not sure where these results will take us as whilst it provides evidence for one interpretation of quantum mechanics I don’t know where the future research might be focused. Such an effect doesn’t appear to be something we can make use of, given the fact that measurement needs to take place for it to (in essence) actually happen, but I’ll be the first to admit that my knowledge of this area is woefully limited.
Perhaps I should take a wander down to the university, although I fear I’d only walk out of there more confused than ever…
Today’s workplaces value the appearance of being productive rather than actual productivity. This seemingly nonsensical behaviour stems from the inability of many companies to accurately define performance metrics or other assessable criteria on which to judge someone’s productivity, so they rely on the appearance of being busy as a proxy instead. This is what leads many to engage in activities which, on the surface, make them appear busy but are either outright wasteful or horribly inefficient. As someone who has spent the vast majority of his professional career working himself out of a job I’ve found this behaviour particularly abhorrent, especially when it comes back around to bite me.
You see anyone who is highly effective at their job tends to get through their work faster than would usually be expected and, consequently, they will often seek additional tasks to fill the rest of their working week. The trouble is that once their baseline job functions have been satisfied the tasks remaining are usually low priority ones that either don’t really require the attention of a highly effective worker or won’t produce any meaningful outputs. Indeed I found this out the hard way many times, as my investment in automating many of my routine tasks would often see me doing mundane things like updating documentation templates or reorganising file structures. Such tasks are a killer for highly effective workers and new research from Duke University, the University of Georgia and the University of Colorado finally adds some scientific evidence to this.
First the researchers looked at how people would assign tasks to different workers based on a single attribute: self-control. Predictably the participants in the study assigned more work to those with better self-control, the rationale being that they would be more effective at completing it. Whilst that might not be a revolutionary finding it sets the foundation for the next hypothesis: does that additional work burden said efficient worker? In a workplace where everyone is rewarded at the same level doing more work for the same benefit is a burden on efficient workers, and that’s what the second piece of research sought to find out.
In a study of 400 employees it was found that effective employees were not only aware of the additional burden placed on them but also felt that their bosses and fellow employees weren’t aware of it. The conclusion drawn from this study was that efficient workers should not be rewarded with additional work but instead with opportunities or better compensation. Doing otherwise encourages everyone to do the least amount of work required to fulfil their duties as there’s no incentive to be efficient nor productive beyond that. Again this might seem like an obvious conclusion but the current zeitgeist of today’s working environments still runs contrary to it.
I do feel incredibly lucky to be working for a company which adheres to this ethos of rewarding efficiency and actual productivity rather than the appearance of being busy. However it took me 7 years and almost as many jobs to finally come across a company that functions in this regard so the everyman’s workplace still has a long way to go. Whilst research like this might not have much of an effect on changing the general workplace environment hopefully the efficient workers of the world can find solace in the fact that science is on their side.
Or, at the very least, realise that they should work that system to their advantage.
Cancer drugs are, to be honest, a club being used where a scalpel is needed. Most modern chemotherapy treatments hinge on the principle that certain drugs will kill the cancer quicker than the patient as their indiscriminate nature makes no distinction between fast growing cancer cells and regular ones. Thus any form of treatment that can either reduce the amount of drugs used or get them to target cancer cells specifically is keenly researched as they can drastically improve the quality of life of the patient whilst increasing overall effectiveness. Such improvements are few and far between and rarely come hand in hand. A new development, coming off the back of the “unboiled” egg research announced earlier this year, however may improve both fronts for current cancer treatments.
The initial research, which I refrained from writing on at the time, is pretty interesting even if the headlines don’t exactly match the reality. Essentially the researchers, based out of the University of California, Irvine and working with chemists in Australia, have developed a process to take cooked egg protein and revert part of it back to its original form. The process is rather interesting and begins with them liquefying the egg using a urea-based substance. This now liquid cooked egg, which at a protein level is still all tangled up, is then put into a machine called a vortex fluidic device (VFD) which applies an incredible amount of shear force to those proteins. This forces the proteins to untangle themselves and return to their original form. Whilst this might sound like a whole lot of nothing it essentially allows for the mass manufacture of proteins that aren’t jumbled or misfolded, which is invaluable to many areas of research.
More recent research however has employed this device in conjunction with a widely used cancer drug, carboplatin. Carboplatin was introduced some 30 years ago and is favoured due to its reduced and more manageable side effects when compared to drugs that use a similar method of action. However that gentler action comes with reduced potency, meaning a higher dosage is required to achieve the same level of treatment, on the order of 4 times or so. Carboplatin is also a stable drug which doesn’t break down as rapidly as others do, however this also means it can readily pass through the body unused, with up to 90% of the dosage being recoverable from a patient’s urine. Using the VFD however has the potential to change that dramatically.
The same researchers behind the original discovery have used the VFD to embed carboplatin in molecules called lipid mimics, which are powerful antioxidants. This had been done with previous methods, however the use of the VFD increased the rate at which the drug was embedded in the mimics from 17% to 75%. This means the drug will be about 4 times as effective at delivering its payload, allowing doctors to significantly reduce the amount used to achieve the same results. This will dramatically improve patients’ quality of life through better outcomes and significantly reduced side effects. Such a process could also be applied to other treatments as the lipid mimics are capable of storing other water soluble active agents as well.
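The roughly four-fold figure follows directly from the two embedding rates. A quick sketch of the arithmetic (the 17% and 75% figures come from the research above; the dose value is purely illustrative):

```python
# How much less raw drug is needed if the fraction successfully
# embedded in the lipid mimics rises from 17% to 75%?

old_rate = 0.17   # embedding efficiency, previous methods
new_rate = 0.75   # embedding efficiency, VFD-assisted

improvement = new_rate / old_rate
print(f"~{improvement:.1f}x more payload delivered per unit of drug")

# Equivalently: the dose needed to deliver the same embedded payload
# shrinks by the same factor. The 100 mg figure is illustrative only.
illustrative_dose_mg = 100
new_dose_mg = illustrative_dose_mg * old_rate / new_rate
print(f"{illustrative_dose_mg} mg -> ~{new_dose_mg:.0f} mg for the same delivery")
```

Which is where the “about 4 times as effective” figure above comes from: 0.75 / 0.17 ≈ 4.4.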
It might not be the most headline grabbing title however it has the potential to significantly increase the effectiveness of current cancer treatments whilst keeping the patient’s quality of life high. Like all improvements it’s likely going to be specific to certain treatments and types of cancer however it will likely lead onto further research that will hopefully improve all areas of cancer research.
The question of where life on our Earth came from is one that has perplexed scientists and philosophers alike for centuries. Whilst we have really robust models for how life evolved to the point it’s at today, how it first arose is still something of a mystery. Even if you adhere to the idea of panspermia, that the original building blocks of life were seeded on our planet from some other faraway place, that still raises the question of how that seed of life first came to be. The idea of life arising from the chemical soup that bathed the surface of the young Earth is commonly referred to as abiogenesis but before that process took place something else had to occur, and that’s where chemical evolution steps in.
We’ve known for quite a while that, given the right conditions, some of life’s most essential building blocks can arise out of chemical reactions. The young Earth was something of a massive chemical reactor and such reactions were commonplace, flooding the surface with the building blocks that life would use to assemble itself. However the jump from pure chemical reactions to the development of other attributes critical to life, like cell walls, is not yet clear, although the ever closing gap between chemical evolution and regular evolution suggests that some bridging mechanism must exist. It’s likely that there’s no one thing responsible for triggering the explosion of life, which is what makes the search for the secret all the more complicated.
However like all scientific endeavours it’s not something that I believe is beyond our capability to understand. There have been so many mysteries of the universe that were once thought impossible to understand that we have ended up mastering. Understanding the origins of life here on Earth will bolster our searches for it elsewhere in the universe and, maybe one day, lead us to find a civilization that’s not of our own making. To me that’s an incredibly exciting prospect and is one of the reasons why theories like this are so fascinating.
Human spaceflight is, to be blunt, an unnecessarily complicated affair. We humans require a whole host of things to make sure we can survive the trip through the harsh conditions of space, much more than our robotic companions do. Of course whilst robotic missions may be far more efficient at performing the missions we send them on, that doesn’t further our desire to become a multi-planetary species and thus the quest to find better ways to preserve our fragile bodies in the harsh realms of space continues. One of the biggest issues we face when travelling to other worlds is how we’ll build our homes there, as traditional means will simply not work anywhere else that we currently know of. This is where novel techniques, such as 3D printing, come into play.
Much of the construction we engage in today relies on numerous supporting industries in order to function. Transplanting these to other worlds is simply not feasible and taking prefabricated buildings along requires a bigger (or numerous smaller) launch vehicles in order to get the required payload into orbit. If we were able to build habitats in situ however then we could cut out the need for re-establishing the supporting infrastructure or bringing prefabricated buildings along with us, something which would go a long way to making an off-world colony sustainable. To that end NASA has started the 3D Printed Habitat Challenge with $2.25 million in prizes to jump start innovation in this area.
The first stage of the competition is for architects and design students to design habitats that maximise the benefits that 3D printing can provide. These will then likely be used to fuel further designs of habitats that could be constructed off-world. The second part of the competition, broken into 2 stages, is centered on the technology that will be used to create those kinds of structures. The first focuses on technology required to use materials available at site as a feed material for 3D printing, something which is currently only achieved with very specific feedstock. The second, and ultimately the most exciting, challenge is to actually build a device capable of using onsite materials (as well as recyclables) to create a habitable structure with a cool $1.1 million to those who satisfy the challenge. Doing that would be no easy feat of course but the technology created along the way will prove invaluable to future manned missions in our solar system.
We’re still likely many years away from having robots on the Moon that can print us endless 3D habitats but the fact that NASA wants to spur innovation in this area means they’re serious about pursuing a sustainable human presence offworld. There are likely numerous engineering challenges that we’ll need to overcome, especially between different planets, but it’s far easier to adapt a current technology than it is to build one from scratch. I’m very keen to see the entries to this competition as they could very well end up visiting other planets to build us homes there.
I am always amazed when something I think I understand completely turns out to be far more complicated than I first thought. The anodizing process was one of those things as, back in the day, I had investigated anodizing some of my PC components as a way of avoiding the laborious process of painting them. Of course I stopped short after finding out the investments I’d need to make in order to do it properly (something my student budget could not afford) but the amount of time I poured into researching it left me with a good working knowledge of how it worked. What I didn’t know was what anodizing could achieve with titanium, which is able to take on an entire rainbow’s worth of colours.
The wave of colours you see the metal rapidly transition through isn’t some kind of trick; it’s a consequence of how the thickness of the oxide layer grown on the titanium interferes with light passing through it. As the thickness of the layer increases the interference shifts, starting off with a kind of blue colour and then moving through many different wavelengths before finally settling on the regular metallic colour we’re all familiar with. This process can be accurately controlled by varying the voltage applied during the anodizing process, as that determines the resulting thickness of the oxide layer. In the above example they’re going for a full coating, hence why the bar rapidly flashes through different colours before settling down.
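The thin-film effect behind this can be sketched numerically. The snippet below is a deliberately simplified model: it assumes a titanium dioxide refractive index of roughly 2.5, normal incidence, and ignores the phase shifts at the interfaces, so it illustrates the trend (thicker film, different reflected peaks) rather than predicting exact colours.

```python
# Simplified thin-film interference: which visible wavelengths reflect
# constructively from an oxide film of a given thickness?
# Assumptions: n ~= 2.5 (titanium dioxide), normal incidence,
# interface phase shifts ignored for simplicity.

def constructive_wavelengths(thickness_nm, n=2.5, visible=(380, 750)):
    """Wavelengths (nm) satisfying 2*n*t = m*lambda within the visible band."""
    peaks = []
    m = 1
    while True:
        lam = 2 * n * thickness_nm / m
        if lam < visible[0]:
            break  # all further orders are in the ultraviolet
        if lam <= visible[1]:
            peaks.append(round(lam, 1))
        m += 1
    return peaks

# As the oxide thickens, different peaks sweep through the visible band,
# which is why the perceived colour changes with anodizing voltage.
for t in (50, 100, 150):
    print(t, "nm ->", constructive_wavelengths(t))
```

Even in this crude form you can see why a small change in layer thickness (set by the anodizing voltage) produces a noticeably different colour.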
These kinds of reactions always fascinate me as it shows how things can behave in extraordinarily different ways if we just vary a small few parameters in one way or the other. It’s one of those principles that drove us to discover things like graphene which, at its heart, is just another arrangement of carbon but the properties it has are wildly different to the carbon that most of us are familiar with. It just goes to show that when you think you know science is always ready to throw you another curveball and that’s why I find things like this so exciting.
Science reporting and science have something of a strained relationship. Whilst most scientists are modest and humble about the results they produce, the journalists who report on them often take the opposite approach, something which I feel drives the public’s disillusionment when it comes to announcements of scientific progress. This rift is most visible when it comes to research that challenges current scientific thinking which, whilst it needs to be done regularly to strengthen the validity of our current models, also needs to be approached with the same trepidation as any other research. However from time to time things still slip through the cracks, like the latest news that the EmDrive may, potentially, be creating warp bubbles.
Initially the EmDrive, something which I blogged about late last year when the first results became public, was a curiosity that had an unknown mechanism of action necessitating further study. The recent results, the ones which are responsible for all the hubbub, were conducted within a vacuum chamber which nullified the criticism that the previous results were due to something like convection currents rather than another mechanism. That by itself is noteworthy, signalling that the EmDrive is something worth investigating further to see what’s causing the force, however things got a little crazy when they started shining lasers through it. They found that the time of flight of the light going through the EmDrive’s chamber was getting slowed down somehow which, potentially, could be caused by distortions in space time.
The thing to note here though is that the laser test was conducted in atmosphere, not in a vacuum like the thrust test. This introduces another variable which, honestly, should have been controlled for as it’s entirely possible that the effect is caused by something as innocuous as atmospheric distortions. There’s even real potential for this to go the same way as the faster than light neutrinos, with astoundingly repeatable results created completely out of nothing thanks to equipment that wasn’t calibrated properly. Whilst I’m all for challenging the fundamental principles of science routinely and vigorously, we must remember that extraordinary claims require extraordinary evidence and right now there’s not enough of that to support many of the conclusions the wider press has been reaching.
What we mustn’t lose sight of here though is that the EmDrive, in its current form, points at a new mechanism of generating thrust that could potentially revolutionize our access to the deeper reaches of space. All the other spurious stuff around it is largely irrelevant as the core kernel of science that we discovered last year, that a resonant cavity pumped with microwaves can produce thrust in the absence of any reaction mass, seems to be solid. What’s required now is that we dive further into this and figure out just how the heck it’s generating that force because once we understand that we can further exploit it, potentially opening up the path to even better propulsion technology. If it turns out that it does create warp bubbles then all the better but until we get definitive proof of that, speculating in that direction really doesn’t help us or the researchers behind it.
There’s nothing like a healthy dose of snakeoil to remind you that some ideas, whilst sounding amazing in theory, are just not worth pursuing. In this age of 3D renders and photoshop it doesn’t take long for an idea to make its way into what looks like a plausible reality and the unfortunate truth of the Internet holding novelty above all else means such ideas can permeate quickly before they’re given the initial sanity check. Worse still is when well established companies engage in this behaviour, ostensibly to bolster their market presence in one way or another with an idea that may only have a passing relationship with reality. In that vein I present to you the Goodyear BH03, a concept idea that will simply never work:
Sounds cool right? Your tyres can help charge the battery of your shiny new electric car by using the heat generated from the road and even from the sun when it’s parked outside! Indeed it sounds like such a great idea it makes you wonder why it’s taken so long for someone to think of it, as even regular cars too could do with a little extra juice in the battery, potentially avoiding those embarrassing calls to the NRMA to get a jumpstart.
Of course the real reason as to why it hasn’t been done before is because it simply won’t do what they say it will.
You see translating heat into electricity is a notoriously inefficient exercise. Even RTGs, the things we use to power our deep space craft like Voyager, can only achieve a conversion rate of some 10% of the total heat emitted. That means the kilowatts of heat generated by a red hot lump of decaying plutonium end up as maybe a hundred or so watts of usable electricity. Compare that to the surface area of a tyre, at most a square metre, receiving approximately 1 kW worth of solar energy under ideal conditions, and you can maybe get 400 W across all 4 tyres under perfect conditions with ideal conversion rates.
If you say the tyres spend about 8 hours a day under those conditions (again incredibly ideal) you’ll get a grand total of 3.2 kWh into the batteries which, if we use a Tesla as an example, would give you about 15 km worth of range. If you want a more realistic figure, with say only half the tyre exposed and a much shorter ideal duration, then you’re looking at cutting that figure to less than half. It’s the same problem as putting solar panels on the roof of electric cars: they’re simply not going to be worth the investment because the power they generate will, unfortunately, be minimal.
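The back-of-envelope numbers above can be laid out explicitly. A quick sketch (the 10% conversion efficiency, 8 hour exposure and a consumption figure of roughly 0.21 kWh/km for a Tesla-sized EV are the same generous assumptions used above):

```python
# Best-case check of the tyre energy-harvesting numbers.
# Every assumption here is deliberately generous.

SOLAR_FLUX_W = 1000    # W per square metre, ideal conditions
TYRE_AREA_M2 = 1.0     # generous exposed area per tyre
EFFICIENCY = 0.10      # thermal-to-electric conversion, roughly RTG-class
TYRES = 4
HOURS = 8              # hours per day of ideal exposure (very optimistic)
EV_KWH_PER_KM = 0.21   # rough Tesla-sized EV consumption (assumed)

power_w = SOLAR_FLUX_W * TYRE_AREA_M2 * EFFICIENCY * TYRES
energy_kwh = power_w * HOURS / 1000
range_km = energy_kwh / EV_KWH_PER_KM

print(f"{power_w:.0f} W -> {energy_kwh:.1f} kWh/day -> ~{range_km:.0f} km of range")
# Even under these best-case assumptions the gain is marginal.
```

Halve the exposed area and the exposure time, as the more realistic scenario above suggests, and you’re down to a few kilometres a day.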
Still they look cool, I guess.
Back in my school days I thought that skill was an innate thing, a quality you were born with that was basically immutable. Thus things like study and practice always confused me as I felt that I’d either get something or I wouldn’t, which is probably why my academic performance back then was so varied. Today however I don’t believe anyone is incapable of mastering a skill; all that’s required is that you put in the necessary amount of time and (properly focused) practice and you’ll eventually make your way there. Innate ability still counts for something though, as there are things you’re likely to find much easier than others and some people are even just better in general at learning new skills. Funnily enough that latter group of people likely has an attribute you wouldn’t first associate with that skill: lower overall brain activity.
Research out of the University of California, Santa Barbara has shown that people who are most adept at learning new tasks actually show lower overall brain activity than their slower learning counterparts. The study used an fMRI machine to scan the subjects’ brains whilst they were learning a new task over the course of several weeks and, instead of looking at a specific region of the brain, the researchers focused on “community structures”. These are essentially groups of nodes within the brain that are densely interconnected with each other and likely in heavy communication. Over the course of the study the researchers could identify which of these community structures remained in communication and which didn’t whilst measuring each subject’s mastery of the new skill they were learning.
What the researchers found is that people who were more adept at mastering the skill showed a rapid decrease in the overall brain activity used whilst completing the task. For the slower learners many of the regions, namely things like the visual and motor cortices, remained far more active for a longer period, showing that they were more actively engaged in the learning process. As we learn skills much of the process of actually doing that skill gets offloaded, becoming an automatic part of what we do rather than being a conscious effort. So for the slow learners these parts of the brain remained active for far longer which could, in theory, mean that they were getting in the way of making the process automatic.
For me personally I can definitely attest to this being the case, especially with something like learning a second language. Anyone who’s learnt a different language will tell you that you go through a stage of translating things into your native language in your head first before re-translating them back into the target language, something that you simply can’t do if you want to be fluent. Eventually you end up developing your “brain” in that language which doesn’t require you to do that interim translation and everything becomes far more automatic. How long it takes you to get to that stage though varies wildly, although the distance from your native language (in terms of grammatical structure, syntax and script) is usually the primary factor.
It will be interesting to see if this research leads to some developmental techniques that allow us to essentially quieten down parts of our brain in order to aid the learning process. Right now all we know is that some people’s brains begin the switch off period quicker than others and whatever is causing that is the key to accelerating learning. Whether or not that can be triggered by mental exercises or drugs is something we probably won’t know for a while but it’s definitely an area of exciting research possibilities.
There’s an interesting area of research that’s dubbed biomimicry which is dedicated to looking at nature and figuring out how we can use the solutions it has developed in other areas. Evolution, which has been chugging away in the background for millions of years, has come up with some pretty solid solutions and so investigating them for potential uses seems like a great catalyst for innovation. However there are times when we see things in nature that you can’t help but feel like nature was looking at us and replicated something that we had developed. That’s what I felt when I saw this video of an erodium seed drilling itself into the ground:
As you can probably guess the secret to this seed’s ability to work its way into the ground comes from the long tendril at the top (referred to as an awn). This awn coils itself up when conditions are dry, waiting for a change. Then when the humidity begins to increase the awn begins to unfurl, slowly spinning the seed in a drilling motion. The video you see above is a sped up process with water being added at regular intervals to demonstrate how the process works.
The evolutionary advantage this seed has developed allows it to germinate in soils that would otherwise be inhospitable to it. The drilling motion allows the seed head to penetrate the ground with much more ease, breaking through coarse soils that would have otherwise proved impenetrable. How this adaptation developed is beyond me but suffice it to say this is what led to the erodium species of plants dominating otherwise hostile areas like rockeries and alpine regions.
Up until I saw that video I thought things like drilling were a distinctly human invention, something we had discovered through our experimentation with inclined planes. However like many things it turns out these are fundamental principles that aren’t beyond nature’s ability to replicate; it just needs the right situation and a lot of time for them to occur. I’m sure the more I dig (pun intended) the more examples of this I’d find, but I’m sure each one would amaze me just as much as this did.