There’s nothing like a healthy dose of snake oil to remind you that some ideas, whilst sounding amazing in theory, are just not worth pursuing. In this age of 3D renders and Photoshop it doesn’t take long for an idea to make its way into what looks like a plausible reality, and the unfortunate truth of the Internet holding novelty above all else means such ideas can spread quickly before they’re given an initial sanity check. Worse still is when well established companies engage in this behaviour, ostensibly to bolster their market presence with an idea that may only have a passing relationship with reality. In that vein I present to you the Goodyear BH03, a concept that will simply never work:
Sounds cool, right? Your tyres can help charge the battery of your shiny new electric car using the heat they generate from the road, and even from the sun when it’s parked outside! Indeed it sounds like such a great idea it makes you wonder why it’s taken so long for someone to think of it, as even regular cars could do with a little extra juice in the battery, potentially avoiding those embarrassing calls to the NRMA for a jumpstart.
Of course the real reason as to why it hasn’t been done before is because it simply won’t do what they say it will.
You see translating heat into electricity is a notoriously inefficient exercise. Even RTGs, the generators we use to power deep space craft like Voyager, only achieve a conversion rate of around 10% of the total heat emitted. That means the kilowatts of heat generated by a red hot lump of decaying plutonium end up as maybe a hundred or so watts of usable electricity. Compare that to the surface area of a tyre, at most a square metre, receiving approximately 1kW of solar energy under ideal conditions, and you can maybe get 400W across all 4 tyres, assuming perfect conditions and that same ideal conversion rate.
If you say the tyres spend about 8 hours a day under those conditions (again incredibly ideal) you’ll get a grand total of 3.2kWh into the batteries which, if we use a Tesla as an example, would give you about 15kms worth of range. If you want a more realistic figure, with say only half the tyre exposed and the ideal duration much shorter, then you’re looking at cutting that figure to less than half. It’s the same problem with putting solar panels on the roof of electric cars: they’re simply not worth the investment because the power they generate will, unfortunately, be minimal.
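For anyone who wants to check the arithmetic, here’s a quick back-of-envelope sketch. The collector area, irradiance, conversion efficiency and EV consumption figures are the rough assumptions from above, not anything Goodyear has published:

```python
# Back-of-envelope check of the tyre energy harvesting figures.
# All inputs are idealised assumptions: ~1 m^2 of collector area per
# tyre, ~1 kW/m^2 peak solar irradiance, ~10% thermal-to-electric
# conversion (comparable to an RTG) and ~200 Wh/km EV consumption.

TYRES = 4
AREA_M2 = 1.0          # effective collector area per tyre
IRRADIANCE_W = 1000.0  # peak solar power per square metre
EFFICIENCY = 0.10      # thermal-to-electric conversion rate
HOURS = 8              # idealised hours of full sun per day
WH_PER_KM = 200.0      # rough consumption of a Tesla-sized EV

power_w = TYRES * AREA_M2 * IRRADIANCE_W * EFFICIENCY
energy_wh = power_w * HOURS
range_km = energy_wh / WH_PER_KM

print(f"Peak power: {power_w:.0f} W")              # 400 W
print(f"Daily energy: {energy_wh / 1000:.1f} kWh")  # 3.2 kWh
print(f"Added range: {range_km:.0f} km")           # 16 km
```

Even with every assumption cranked to its most generous setting, the result is a rounding error against a battery pack measured in tens of kilowatt-hours.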
Still they look cool, I guess.
Back in my school days I thought that skill was an innate thing, a quality you were born with that was basically immutable. Thus things like study and practice always confused me, as I felt I’d either get something or I wouldn’t, which is probably why my academic performance back then was so varied. Today however I don’t believe anyone is incapable of mastering a skill; put in the required amount of time and (properly focused) practice and you’ll eventually make your way there. Innate ability still counts for something though, as there are things you’re likely to find much easier than others, and some people are simply better at learning new skills in general. Funnily enough that latter group likely has an attribute you wouldn’t first associate with learning: lower overall brain activity.
Research out of the University of California, Santa Barbara has shown that people who are most adept at learning new tasks actually show lower overall brain activity than their slower learning counterparts. The study used an fMRI machine to scan the subjects’ brains whilst they were learning a new task over the course of several weeks, and instead of looking at a specific region of the brain the researchers focused on “community structures”. These are essentially groups of nodes within the brain that are densely interconnected and likely in heavy communication with each other. Over the course of the study the researchers could identify which of these community structures remained in communication and which didn’t, whilst measuring the subjects’ mastery of the new skill they were learning.
What the researchers found is that people who were more adept at mastering the skill showed a rapid decrease in the overall brain activity used whilst completing the task. For the slower learners many of the regions, namely the visual and motor cortices, remained far more active for a longer period, showing they were more actively engaged in the learning process. As we learn skills much of the process of actually performing them gets offloaded, becoming automatic rather than a conscious effort. So for the slow learners these parts of the brain remained active for far longer, which could, in theory, mean they were getting in the way of making the process automatic.
For me personally I can definitely attest to this being the case, especially with something like learning a second language. Anyone who’s learnt a different language will tell you that you go through a stage of translating things into your native language in your head first before re-translating them back into the target language, something that you simply can’t do if you want to be fluent. Eventually you end up developing your “brain” in that language which doesn’t require you to do that interim translation and everything becomes far more automatic. How long it takes you to get to that stage though varies wildly, although the distance from your native language (in terms of grammatical structure, syntax and script) is usually the primary factor.
It will be interesting to see if this research leads to some developmental techniques that allow us to essentially quieten down parts of our brain in order to aid the learning process. Right now all we know is that some people’s brains begin the switch off period quicker than others and whatever is causing that is the key to accelerating learning. Whether or not that can be triggered by mental exercises or drugs is something we probably won’t know for a while but it’s definitely an area of exciting research possibilities.
There’s an interesting area of research dubbed biomimicry which is dedicated to looking at nature and figuring out how we can use the solutions it has developed elsewhere. Evolution, which has been chugging away in the background for millions of years, has come up with some pretty solid solutions, so investigating them for potential uses seems like a great catalyst for innovation. However there are times when you see something in nature and can’t help but feel it was nature looking at us, replicating something we had developed. That’s what I felt when I saw this video of an erodium seed drilling itself into the ground:
As you can probably guess the secret to this seed’s ability to work its way into the ground comes from the long tendril at the top (referred to as an awn). This awn coils itself up when conditions are dry, waiting for a change. Then when the humidity begins to increase the awn begins to unfurl, slowly spinning the seed in a drilling motion. The video you see above is a sped up process with water being added at regular intervals to demonstrate how the process works.
The evolutionary advantage this seed has developed allows it to germinate in soils that would otherwise be inhospitable. The drilling motion allows the seed head to penetrate the ground with much more ease, letting it break through coarse soils that would have otherwise proved impenetrable. How this adaptation came about is beyond me, but suffice to say it is what led to the erodium genus of plants dominating otherwise hostile areas like rocky or alpine terrain.
Up until I saw that video I thought things like drilling were a distinctly human invention, something we had discovered through our experimentation with inclined planes. However like many things it turns out there are fundamental principles which aren’t beyond nature’s ability to replicate, it just needs the right situation and a lot of time for it to occur. I’m sure the more I dig (pun intended) the more examples I could find of this but I’m sure that each example I found would amaze me just as much as this did.
Nearly all of us are born with what we’d consider less than ideal memories. We’ll struggle to remember where our keys are, draw a blank on that new coworker’s name and sometimes pause much longer than we’d like to recall a detail that should be front of mind. The idealised pinnacle, the photographic (or more accurately the eidetic) memory, always seems like an elusive goal, something you have to be born with rather than achieve. However it seems that our ability to forget might actually be an evolutionary adaptation, enabling us to remember the pertinent details that helped us survive whilst suppressing those that might otherwise hinder us.
The idea isn’t a new one, having existed in some form since at least 1997, but it’s only recently that researchers have had the tools to study the mechanism in action. You see it’s rather difficult to figure out which memories are being forgotten for adaptive reasons, i.e. to improve the survival of the organism, and which are simply forgotten due to other factors. The advent of functional Magnetic Resonance Imaging (fMRI) has allowed researchers to get a better idea of what the brain is doing at any one point, letting them set up situations to see what the brain is doing when it’s forgetting something. The results are quite intriguing, demonstrating that at some level forgetting might well be an adaptive mechanism.
Back in 2007 researchers at Stanford University investigated the prospect that adaptive forgetting was a mechanism for reducing the amount of brain power required to select the right memories for a particular situation. The hypothesis goes that remembering is an act of selecting a specific memory for a goal related activity. Forgetting then functions as an optimization mechanism, allowing the brain to more easily select the right memories by suppressing competing memories that might not be optimal. The research supported this notion, showing decreased activity in the anterior cingulate cortex, a region that activates when people are weighing choices (like figuring out which memory is relevant).
More recent research into this phenomenon, conducted by researchers at the University of Birmingham and various institutes in Cambridge, focused on finding out whether the active recollection of a specific memory hindered the remembering of others. Essentially this means that the act of remembering a specific memory would come at the cost of other, competing memories, which in turn would lead to them being forgotten. The researchers had subjects view 144 picture and word associations and then trained them to remember 72 of them (whilst they were inside an fMRI machine). The subjects were then given another set of associations for each word, which would serve as the “competing” memory for the first.
The results showed some interesting findings, some of which may sound obvious at first glance. Attempting to recall the second word association led to a detriment in the subjects’ ability to recall the first. That might not sound groundbreaking to start with, but subsequent testing showed a progressive detriment to the recollection of competing memories, demonstrating they were being actively repressed. Further to this the researchers found that their subjects’ brain activity was lower for trained images than for ones that weren’t part of the initial training set, an indication that these memories were being actively suppressed. There was also evidence to suggest that the trained memories showed the most forgetting on average, as well as increased activity in a region of the brain known to be associated with adaptive forgetting.
Whilst this research might not give you any insight into how to improve your memory, it does give us a rare look into how our brain functions and why it behaves in ways we believe to be sub-optimal. Potentially in the future there could be treatments available to suppress that mechanism, however what ramifications that might have on actual cognition is anyone’s guess. Needless to say it’s incredibly interesting to find out why our brains do the things they do, even if we wish they did the exact opposite most of the time.
Medicine has long known about the potential causes of Alzheimer’s, however finding a safe and reliable treatment has proven far more elusive. Current treatments centre on alleviating the symptoms of the disease, combating things like memory loss and declining cognitive function. However whilst these may provide some relief and quality of life improvement, they do nothing to treat the underlying cause, which is a combination of amyloid plaques and neurofibrillary tangles. Current research has heavily focused on the former, which block communication between neurons in the brain and, so the theory goes, removing them will restore cognitive function. Recently two treatments have shown some incredibly positive results, with one of them not too far off seeing widespread trials.
A drug company called Biogen has developed a drug called Aducanumab which has shown a significant effect in reducing the cognitive decline of Alzheimer’s patients. It’s an antibody that helps trigger an immune system response and was created by investigating the antibodies present in healthy aged donors, with the reasoning going that they had successfully resisted Alzheimer’s related symptoms. The recent large clinical study showed an effect far beyond what the researchers were expecting, including a dose dependent effect. The drug is not yet available for widespread distribution, there’s still one more late stage trial to go, however it could see a wide market release as soon as 2018. It’s still far from a cure but the drug is capable of significantly slowing the progress of the disease, opening up the opportunity for other treatments to be far more effective.
New research from the Queensland Brain Institute at the University of Queensland investigated using focused ultrasound to help break up amyloid plaques. Essentially this treatment temporarily disrupts the blood-brain barrier, allowing microglial cells (which are essentially clean up cells) to enter the particular region of the brain and remove the plaques. After a short period of time, a couple of hours or so according to the research, the blood-brain barrier is fully restored, ensuring there are no ongoing complications. This allows the body to remove the plaques naturally, hopefully facilitating the restoration of cognitive function.
In the mouse model used the researchers found that they could fully restore the memories of 75% of the subjects affected, an incredibly promising result. Of course the limitations of a mouse model mean that further research is required to find out if it would work as well in humans but there’s already precedent for using this kind of technology for treatment of other brain related conditions. Considering that the mechanism of action is similar to that of Aducanumab (removal of amyloid plaques) the side effects and limitations are likely to be similar, so it will be interesting to see how this develops.
It’s great to see conditions and diseases like this, ones that used to be a long and undignified death sentence, slowly meeting their end at the hands of science. Treatments like this have the potential to vastly improve the quality of life of our later years, meaning we can still be active members of society for much longer. I’m confident that one day we’ll have these conditions pinned down to the point where they’re no more of a worry than any other chronic, but controlled condition.
Much to the surprise of many I used to be a childcare worker back in the day. It was a pretty cruisy job for a uni student like myself: show up after classes, take care of kids for a few hours and then head off home to finish my studies (or World of Warcraft, as it mostly was). I consider it a valuable experience for numerous reasons, not least of which was the insight into some of the public health issues that arise from having a bunch of children packed into tight spaces. The school I worked at had its very first case of peanut allergy while I was there, and I watched as the number of children who suffered from it increased rapidly.
Whilst the cause of this increase in allergic reactions is still somewhat unclear, it’s well understood that the incidence rate of food allergies has dramatically increased in developed countries over the last 20 years or so. There are quite a few theories swirling around as to what the cause might be, but suffice to say hard evidence to support any of them hasn’t been readily forthcoming. The problem is the nature of the beast, as studies investigating one cause or another are plagued with variables that researchers are simply unable to control. However researchers at King’s College London have been able to conduct a controlled study with children who were at risk of developing peanut allergies, and have found some really surprising results.
The study involved 640 children, aged between 4 and 11 months, who were all considered to be at high risk of developing a peanut allergy due to other conditions they suffered from (eczema and egg allergies). They were randomly split into 2 groups, one whose parents were advised to feed them peanut products at least 3 times per week and the other told to avoid them entirely. The results are quite staggering, showing that compared to the avoidance group the children who were exposed to peanut products at an early age had an 80% reduced risk of developing the condition. This almost completely rules out early exposure as a risk factor for developing a peanut allergy, a notion that seems to be prevalent among many modern parents.
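To put that 80% figure in context, a relative risk reduction is just the ratio of incidence rates between the two groups. Here’s a minimal sketch using hypothetical incidence figures I’ve chosen to be consistent with the reported reduction, not the study’s actual numbers:

```python
# Illustrative only: hypothetical allergy rates per group, picked to
# roughly match the ~80% relative risk reduction reported.
avoidance_incidence = 0.17    # allergy rate in the avoidance group
consumption_incidence = 0.03  # allergy rate in the exposure group

relative_risk = consumption_incidence / avoidance_incidence
risk_reduction = 1 - relative_risk
print(f"Relative risk reduction: {risk_reduction:.0%}")
```

The key point is that the figure is relative: an 80% reduction from an already-modest baseline rate is still a dramatic result for a simple dietary intervention.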
Indeed this gives credence to the Hygiene Hypothesis which theorizes that the lack of early exposure to pathogens and infections is a likely cause for the increase in allergic responses that children develop. Whilst this doesn’t mean you should let your kids frolic in the sewers it does indicate that keeping them in a bubble likely isn’t protecting them as much as you might think. Indeed the old adage of letting kids be kids in this regard rings true as early exposure to these kinds of things will likely help more than harm. Of course the best course of action is to consult with your doctor and devise a good plan that mitigates overall risk, something which budding parents should be doing anyway.
It’s interesting to see how many of the conditions that plague us today are the result of our affluent status. The trade offs we’ve made have obviously been for the better overall, as our increased lifespans can attest, however there seem to be aspects we need to temper if we want to overcome these once rare conditions. It’s great to see this kind of research bearing fruit, as it means further study into this area will likely become more focused and, hopefully, just as valuable as this study has proven to be.
Whenever I think of a tidally locked planet (Mercury is the usual example, although strictly speaking it sits in a 3:2 spin-orbit resonance rather than being fully locked) the only image that comes to mind is one barren of all life. You see in tidally locked systems the same face of the smaller body always points towards the larger one, like our Moon towards Earth. For a planet and its sun this means the surface of the tidally locked planet would typically turn into an inferno on one side with the other becoming a frigid wasteland, both devoid of any kind of life. However new research shows that these planets might not be the lifeless rocks we once thought them to be and, in fact, they could be far more Earthlike than we previously believed.
Scientists have long theorized that planets of this nature could harbour a habitable band around their terminator, a tenuous strip between the freezing depths of the cold side and the furnace of the hot side. Such a planet wouldn’t have the day/night cycles we’re accustomed to, however, and it’s likely that any life that evolved there would have adapted to the permanent daylight. There’d also be some pretty extreme winds to contend with due to the massive differences in temperature, although how severe they’d be would depend heavily on the thickness of the atmosphere. Still it’s possible that that little band could harbour all sorts of life, despite the conditions that bookend its environment.
However there’s another theory that states these kinds of planets might not be the one sided hotbeds we initially thought them to be. Instead of being fully tidally locked with their parent star, planets like this might actually still rotate thanks to the heavy winds that would whip across their surface. These winds push against the planet’s surface, giving it enough rotation to overcome the tidal locking caused by the parent star’s gravity. There’s actually an example of this within our own solar system: Venus, which by all rights should be tidally locked to our Sun. It’s not, however, and its extremely long days and retrograde rotation (it spins the opposite way to every other planet) hint at the fact that its rotation is driven by forces different to those acting on every other planet.
Counterintuitively it seems that Venus’ extremely thick atmosphere might be working against it in this regard as the modelling done shows that planets with thinner atmospheres would actually experience a higher rotational rate. This means that an Earthlike planet that should be tidally locked would likely not be and the resulting motion would be enough to make the majority of the planet habitable. In turn this would mean that many of the supposedly tidally locked planets we’ve discovered could actually turn out to be habitable candidates.
Whilst these are just beautiful models for now they can hopefully drive the requirements for future craft and observatories here on Earth that will be able to look for the signatures of these kinds of planets. Considering that our detection methods are currently skewed towards detecting planets that are close to their parent stars this will mean a much greater hit rate for habitable candidates, providing a wealth of data to validate against. Whether we’ll be able to get some direct observations of such planets within the next century or more is a question we won’t likely have an answer to soon, but hopefully one day we will.
Vaccines are responsible for preventing millions upon millions of deaths each year through the immunity they grant us to otherwise life threatening diseases. Their efficacy and safety is undisputed (at least from a scientific perspective, which is the only one that matters honestly) and this mostly comes from the fact that they use our own immune system as the mechanism of action. A typical vaccine uses part of the virus to trigger the immune system to produce the right antibodies without having to endure the potentially deadly symptoms the virus can cause. This response is powerful enough to provide immunity from those diseases, so researchers have long looked for ways of harnessing the body’s natural defenses against other, more troubling conditions. A recent development could see vaccines used to treat a whole host of things you wouldn’t think possible.
Conditions that are currently considered terminal, like cancer, often stem from the body lacking the ability to mount a defensive response. For cancer this is because the cells themselves look the same as normal healthy cells, despite their tendency to reproduce in an uncontrolled fashion, which means the immune system ignores them. These cells do have signatures we can detect, however, and we can actually program people’s immune systems to register those cells as foreign, triggering an immune response. Unfortunately this treatment (which relies on extracting the patient’s white blood cells, turning them into dendritic cells and programming them with the tumour’s antigens) is expensive and of limited ongoing effectiveness. The new treatment devised by researchers at the National Institute of Biomedical Imaging and Bioengineering uses a novel method which drastically increases this treatment’s effectiveness and duration.
The vaccine they’ve created uses 3D nano structures which, when injected into a patient, form a sort of microscopic haystack (pictured above). These structures can be loaded with all sorts of compounds however in this particular experiment they loaded them with the antigens found on a specific type of cancer cells. Once these rods have been injected they then capture within them the dendritic cells that are responsible for triggering an immune response. The dendritic cells are then programmed with the cancer antigens and, when released, trigger a body wide immune response. The treatment was highly effective in a mouse model with a 90% survival rate for animals who would have otherwise died at 25 days.
The potential for this is quite staggering, as it provides us another avenue to elicit an immune response, one that appears to be far less invasive and more effective than the current alternatives. Of course such treatments are still likely years away from seeing clinical trials, but with such promising results in the mouse model I’m sure it will happen eventually. What will be interesting to see is whether this method of delivery can be used for traditional vaccines as well, potentially paving the way for more vaccines to be administered in a single dose. I know it seems like every other week we come up with another cure for cancer, but this one seems to have some real promise behind it and I can’t wait to see how it performs in us humans.
Modern in-vitro fertilisation (IVF) treatments are a boon to couples who might otherwise not be able to conceive naturally. They’re also the only reliable method by which couples with inherited conditions or diseases can avoid passing them on to their offspring, through a process called preimplantation genetic diagnosis. However current methods are limited to selection only, differentiating between a set of potential embryos and choosing the most viable ones. New techniques have been developed that go further, replacing damaged genetic material from one parent with that of another individual, creating a child that essentially has three parents but none of the genetic defects. Up until today such a process wasn’t strictly legal, however the UK has now approved the method, opening the treatment up to all those affected.
The process is relatively straightforward involving the standard IVF procedure initially with the more radical steps following later. For this particular condition, where the mitochondria (which are essentially the engines of our cells) are damaged, the nucleus of a fertilized (but non-viable) embryo can be transplanted into a healthy donor egg which can then be implanted. Alternatively the egg itself can be repaired in much the same fashion before fertilization occurs. The resulting embryo then doesn’t suffer from the mitochondrial defect and will be far more likely to result in a successful pregnancy, much to the joy of numerous people seeking such treatment.
Of course when things like this come up the conversation inevitably tends towards designer babies, genetic modification and all the other “playing god” malarkey that seems to plague embryo related treatments. For starters this treatment, whilst it does give the child three parents, doesn’t fool around with the embryo’s core genetic material. Instead it simply replaces the damaged or non-functional mitochondria from one person with those of another individual. This will have no more influence on any of the child’s characteristics than the environment they grow up in. Although, to be perfectly honest, I wouldn’t see any issue with people going down to a deeper level anyway, for multiple reasons.
We’re already playing fast and loose with the natural way of doing things, with numerous treatments at our disposal that have rapidly increased life expectancy across the globe. If you indulge in such treatments then you’re already playing god, as you’re interfering with the natural way the world kills things off. Extending such treatments to our ability to procreate isn’t much of a stretch honestly, and should we be able to create the genetic best of ourselves through science then I really can’t see a problem with it. Sure there need to be some ethical bounds put on it, just like there are for any kind of medical treatment, but I don’t see being able to choose your baby’s hair or eye colour as being that far removed from the treatments we currently use to select the best embryos for IVF.
That’s the transhumanist in me talking, however, and I know not everyone shares my rather liberal views on the subject. Regardless, this treatment is nowhere near that and simply provides an opportunity to those who didn’t have it before. Hopefully the approval of this method will extend to other treatments as well, ensuring that the option to procreate is available to everyone, not just those of us born with the genetic capability to do so.
Abstract mathematical principles are often abstruse ideas that don’t have any direct connection to the real world. Indeed for the majority of the time I spent at university I had no idea how the concepts I was being taught could be applied, that is until the final unit where they showed us just how all those esoteric formulas and algorithms could be put to use. However there are times when the real world and the land of pure mathematics cross paths, and when they do the results can be quite amazing. Thus I present to you the Fibonacci Zoetrope:
The Fibonacci Sequence is one of the more commonly known mathematical concepts, one that can often be seen in nature. It can be used to approximate the Golden Spiral, which everyone will readily recognise as the shape of a common sea shell. It also appears in sunflowers, arising from the fact that the interior of the flower is most efficiently filled in a Fibonacci like arrangement, giving it an evolutionary advantage. The sculptures you see in the video above use these same sequences to produce some rather interesting patterns which, when combined with a video camera, produce the illusion of motion that isn’t there.
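The connection between the sequence and those spiral patterns is easy to play with in code. A minimal sketch (my own illustration, not anything from the video): successive Fibonacci ratios converge on the golden ratio φ, and the related "golden angle" of roughly 137.5° is the rotation between successive seeds in a sunflower head:

```python
# Sketch: Fibonacci ratios converge on the golden ratio, and the
# golden angle (360 / phi^2) governs efficient seed packing.
def fib(n):
    """Return the first n Fibonacci numbers."""
    seq = [1, 1]
    while len(seq) < n:
        seq.append(seq[-1] + seq[-2])
    return seq[:n]

seq = fib(12)
ratios = [b / a for a, b in zip(seq, seq[1:])]
phi = (1 + 5 ** 0.5) / 2
golden_angle = 360 / phi ** 2

print(seq)                      # [1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144]
print(round(ratios[-1], 3))     # 1.618, approaching phi
print(round(golden_angle, 1))   # 137.5 degrees
```

Placing each new seed that 137.5° further around the flower head fills the space with minimal overlap, which is the efficiency advantage mentioned above.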
The trick relies on the way modern cameras work, capturing individual frames at precise intervals. If you were looking at the sculpture in real life it would appear as a blur of motion instead of the strange movement you see in the video. However you would be able to see the effect with your own eyes if you used a strobe that pulsed at regular intervals, much like modern zoetropes do. Depending on the speed of the rotation and the image capture interval you’ll see very different kinds of motion and, if you time it precisely, the sculpture can appear to not move at all.
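You can get a feel for the aliasing with a few lines of code. This is my own sketch with made-up numbers (the video doesn’t publish its rotation speed): at 24 frames per second, a sculpture spinning at 1440rpm completes exactly one revolution per frame and so appears frozen, whilst 550rpm advances it by exactly the golden angle of 137.5° each frame:

```python
# Sketch of camera aliasing: only the leftover rotation per frame is
# visible, so fast spins can look slow, backwards or frozen.
def apparent_step(rpm, fps=24.0):
    """Degrees the sculpture appears to advance per captured frame."""
    deg_per_frame = (rpm / 60.0) * 360.0 / fps
    step = deg_per_frame % 360.0
    # Steps over 180 degrees read as motion in the opposite direction
    return step if step <= 180.0 else step - 360.0

print(apparent_step(550))   # 137.5 degrees per frame (the golden angle)
print(apparent_step(1440))  # 0.0: one full revolution per frame, frozen
```

The same arithmetic applies to a strobe: replace the frame rate with the flash rate and you can tune the apparent motion by hand.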
I really love these crossovers between art and science, as they demonstrate some incredibly complicated ideas without having to dive into reams of proofs and scientific papers. The creation of the sculptures themselves is also a feat of modern engineering, as some of those structures are simply not possible to create without 3D printing. I might lament not being as talented as the people who created this video, but I think it’s for the best, as otherwise my house would be covered in all sorts of weird and wonderful sculptures inspired by random mathematical principles.