Posts Tagged ‘research’


Slow Learner? You Might Be Thinking Too Hard.

Back in my school days I thought that skill was an innate thing, a quality you were born with that was basically immutable. Things like study and practice always confused me as I felt I’d either get something or I wouldn’t, which is probably why my academic performance back then was so varied. Today however I don’t believe anyone is incapable of mastering a skill; put in the required amount of time and (properly focused) practice and you’ll eventually make your way there. Innate ability still counts for something, as there are things you’re likely to find much easier than others, and some people are simply better in general at learning new skills. Funnily enough that latter group likely has an attribute you wouldn’t first associate with that skill: lower overall brain activity.


Research out of the University of California, Santa Barbara has shown that people who are most adept at learning new tasks actually show lower overall brain activity than their slower-learning counterparts. The study used an fMRI machine to scan the subjects’ brains whilst they learnt a new task over the course of several weeks, and instead of looking at a specific region of the brain the researchers focused on “community structures”. These are essentially groups of nodes within the brain that are densely interconnected with each other and are likely in heavy communication. Over the course of the study the researchers could identify which of these community structures remained in communication and which didn’t, whilst measuring the subjects’ mastery of the new skill they were learning.
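To make the idea of a “community structure” a little more concrete, here’s a tiny illustrative sketch, not the researchers’ actual pipeline, that treats brain regions as nodes in a graph and asks a library to find the densely interconnected groups. The region names and connections below are entirely made up:

```python
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

# Toy graph standing in for brain regions and their functional connections.
# Region names and edges are invented purely for illustration.
G = nx.Graph()
G.add_edges_from([
    ("visual_1", "visual_2"), ("visual_2", "visual_3"), ("visual_1", "visual_3"),
    ("motor_1", "motor_2"), ("motor_2", "motor_3"), ("motor_1", "motor_3"),
    ("visual_3", "motor_1"),  # a single weak link between the two groups
])

# Densely interconnected groups of nodes: the "community structures".
for i, community in enumerate(greedy_modularity_communities(G)):
    print(f"Community {i}: {sorted(community)}")
```

Run on the toy graph above, this splits the nodes into a “visual” group and a “motor” group, which is the kind of grouping the researchers tracked over time.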

What the researchers found is that people who were more adept at mastering the skill showed a rapid decrease in the overall brain activity used whilst completing the task. For the slower learners many of the regions, namely things like the visual and motor cortices, remained far more active for a longer period, showing that they were still heavily engaged in the learning process. As we learn skills much of the process of actually doing them gets offloaded, becoming an automatic part of what we do rather than a conscious effort. So for the slow learners these parts of the brain remained active for far longer, which could, in theory, mean they were getting in the way of making the process automatic.

For me personally I can definitely attest to this being the case, especially with something like learning a second language. Anyone who’s learnt another language will tell you that you go through a stage of translating everything into your native language in your head and then translating your response back into the target language, something you simply can’t keep doing if you want to be fluent. Eventually you end up developing your “brain” in that language, which doesn’t require that interim translation, and everything becomes far more automatic. How long it takes you to get to that stage varies wildly, although the distance from your native language (in terms of grammatical structure, syntax and script) is usually the primary factor.

It will be interesting to see if this research leads to developmental techniques that allow us to essentially quieten down parts of our brain in order to aid the learning process. Right now all we know is that some people’s brains begin that switch-off process more quickly than others, and whatever is causing that is the key to accelerating learning. Whether or not it can be triggered by mental exercises or drugs is something we probably won’t know for a while, but it’s definitely an area of exciting research possibilities.


Forgetting Might be an Adaptive Advantage.

Nearly all of us are born with what we’d consider less than ideal memories. We’ll struggle to remember where our keys are, draw a blank on that new coworker’s name and sometimes pause much longer than we’d like to recall a detail that should be front of mind. The idealised pinnacle, the photographic (or more accurately the eidetic) memory, always seems like an elusive goal, something you have to be born with rather than achieve. However it seems that our ability to forget might actually come from an evolutionary adaptation, enabling us to remember the pertinent details that helped us survive whilst suppressing those that might otherwise hinder us.


The idea isn’t a new one, having existed in some form since at least 1997, but it’s only recently that researchers have had the tools to study the mechanism in action. You see it’s rather difficult to figure out which memories are being forgotten for adaptive reasons, i.e. to improve the survival of the organism, and which ones are simply forgotten due to other factors. The advent of functional Magnetic Resonance Imaging (fMRI) has allowed researchers to get a better idea of what the brain is doing at any one point, letting them set up situations that reveal what happens when it’s forgetting something. The results are quite intriguing, demonstrating that at some level forgetting might be an adaptive mechanism.

Back in 2007 researchers at Stanford University investigated the prospect that adaptive forgetting was a mechanism for reducing the amount of brain power required to select the right memories for a particular situation. The hypothesis goes that remembering is an act of selecting a specific memory for a goal-related activity. Forgetting then functions as an optimization mechanism, allowing the brain to more easily select the right memories by suppressing competing ones that might not be optimal. The research supported this notion, showing decreased activity in the anterior cingulate cortex, which is activated when people are weighing choices (like figuring out which memory is relevant).

More recent research into this phenomenon, conducted by researchers at the University of Birmingham and various institutes in Cambridge, focused on finding out whether the active recollection of a specific memory hindered the remembering of others. Essentially this means that the act of remembering a specific memory would come at the cost of other, competing memories, which in turn would lead to them being forgotten. They did this by having subjects view 144 picture and word associations and then training them to remember 72 of those (whilst inside an fMRI machine). The subjects were then given another set of associations for each word which would serve as the “competitive” memory for the first.

The results showed some interesting findings, some of which may sound obvious at first glance. Attempting to recall the second word association led to a detriment in the subjects’ ability to recall the first. That might not sound groundbreaking to start off with, but subsequent testing showed a progressive detriment to the recollection of competing memories, demonstrating they were being actively suppressed. Further to this the researchers found that their subjects’ brain activity was lower for trained images than for ones that weren’t part of the initial training set, another indication that these memories were being suppressed. There was also evidence to suggest that the trained memories showed the most forgetting on average, as well as increased activity in a region of the brain known to be associated with adaptive forgetting.

Whilst this research might not give you any insight into how to improve your memory it does give us a rare look into how our brain functions and why it behaves in ways we believe to be sub-optimal. Potentially in the future there could be treatments available to suppress that mechanism, however what ramifications that might have on actual cognition is anyone’s guess. Needless to say it’s incredibly interesting to find out why our brains do the things they do, even if we wish they did the exact opposite most of the time.


Lava Tubes on the Moon Could House Massive Colonies.

Establishing lunar colonies seems like the next logical step, the Moon being our closest celestial body after all, however it might surprise you to learn that doing so might in fact be a lot harder than establishing a similarly sized colony on Venus or Mars. Without an atmosphere to speak of our Moon’s surface is an incredibly harsh place, with the full brunt of our sun’s radiation bearing down on it. That’s only half the problem too, as the lunar day and night each last around two weeks, meaning you’ll spend half your time in darkness with temperatures plummeting to around -150°C. There are ways around this however, and recent research has led to some rather interesting prospects.

(Image: blair-lavatubes)

Whilst the surface of the Moon might be unforgiving, going just a little bit below it negates many of the more undesirable aspects. Drilling in is one option, however that’s incredibly resource intensive, especially when you consider that all the gear required to do said drilling would need to be sent from Earth. The alternative is to use formations that are already present on the Moon, such as caverns and other natural structures. We know these kinds of formations exist thanks to the high resolution imagery and gravity mapping we’ve done (the Moon’s gravity field is surprisingly non-uniform), but just how big they could be has remained somewhat of a mystery.

Researchers at Purdue University decided to investigate just how big structures like these could be, specifically looking at how big lava tubes could get if they existed on the Moon. During its formation, which would have happened when a large object collided with the then primordial Earth, the surface of the Moon would have been ablaze with volcanic activity. Due to its much smaller size that activity has long since ceased, but it would still have left behind the telltale structures of its more tumultuous history. The researchers modelled how big these tubes could have gotten given the conditions present on the Moon and came up with a rather intriguing discovery: they’d be huge.

When you see the outcome of the research it feels like an obvious conclusion, of course they’d be bigger since there’s less gravity, but the fact that they’re an order of magnitude bigger than what we’d see on Earth is pretty astounding. The picture above gives you some sense of scale for these potential structures, able to fit several entire cities within them with an incredible amount of room to spare. Whilst using such structures as the basis for a future lunar colony presents a whole host of challenges, it does open up the possibility of the Moon having much more usable space than we first thought.


Early Exposure Key to Reducing Allergy Development.

Much to the surprise of many I used to be a childcare worker back in the day. It was a pretty cruisy job for a uni student like myself: show up after classes, take care of kids for a few hours and then head off home to finish off my studies (or World of Warcraft, as it mostly was). I consider it a valuable experience for numerous reasons, not least of which is the insight it gave me into some of the public health issues that arise from having a bunch of children packed into tight spaces. The school I worked at had its very first case of peanut allergy whilst I was there, and I watched as the number of children who suffered from it increased rapidly.


Whilst the cause of this increase in allergic reactions is still somewhat unclear, it’s well understood that the incidence of food allergies has dramatically increased in developed countries in the last 20 years or so. There are quite a few theories swirling around as to what the cause might be, but suffice to say hard evidence to support any of them hasn’t been readily forthcoming. The problem is the nature of the beast, as studies to investigate one cause or another are plagued with variables that researchers are simply unable to control. However researchers at King’s College London have been able to conduct a controlled study with children who were at risk of developing peanut allergies, and they’ve found some really surprising results.

The study involved 640 children aged between 4 and 11 months who were all considered to be at a high risk of developing a peanut allergy due to other conditions they suffered from (eczema and egg allergies). They were randomly split into 2 groups, one whose parents were advised to feed them peanut products at least 3 times per week and the other told to avoid them entirely. The results are quite staggering, showing that when compared to the avoidance group the children who were exposed to peanut products at an early age had an 80% reduction in the risk of developing the condition. This all but rules out the notion, prevalent among many modern parents, that early exposure is a risk factor for developing a peanut allergy.
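To be clear about what that figure means, the 80% is a relative risk reduction. With purely illustrative numbers (not the study’s actual results): if 15% of the avoidance group went on to develop the allergy but only 3% of the exposure group did, then

\[
\mathrm{RRR} = 1 - \frac{\text{risk}_{\text{exposure}}}{\text{risk}_{\text{avoidance}}} = 1 - \frac{0.03}{0.15} = 0.80 = 80\%
\]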

Indeed this gives credence to the Hygiene Hypothesis, which theorizes that the lack of early exposure to pathogens and infections is a likely cause of the increase in allergic responses that children develop. Whilst this doesn’t mean you should let your kids frolic in the sewers, it does indicate that keeping them in a bubble likely isn’t protecting them as much as you might think. The old adage of letting kids be kids rings true in this regard, as early exposure to these kinds of things will likely help more than harm. Of course the best course of action is to consult with your doctor and devise a good plan that mitigates overall risk, something which budding parents should be doing anyway.

It’s interesting to see how many of the conditions that plague us today are the result of our affluent status. The trade-offs we’ve made have obviously been for the better overall, as our increased lifespans can attest, however there seem to be aspects of it we need to temper if we want to overcome these once rare conditions. It’s great to see this kind of research bearing fruit as it means that further study into this area will likely become more focused and, hopefully, just as valuable as this study has proven to be.


3D Vaccines Pave the Way for Supercharged Immune Systems.

Vaccines are responsible for preventing millions upon millions of deaths each year through the immunity they grant us to otherwise life threatening diseases. Their efficacy and safety are undisputed (at least from a scientific perspective, which is the only one that matters honestly) and this mostly comes from the fact that they use our own immune system as the mechanism of action. A typical vaccine uses part of the virus to trigger the immune system into producing the right antibodies without us having to endure the potentially deadly symptoms that the virus can cause. This response is powerful enough to provide immunity from those diseases, so researchers have long looked for ways of harnessing the body’s natural defenses against other, more troubling conditions. A recent development could see vaccines used to treat a whole host of things you wouldn’t think would be possible.

(Image: 3D vaccine structure)

Conditions that are currently considered terminal, like cancer, often stem from the body lacking the ability to mount a defensive response. For cancer this is because the cells themselves look the same as normal healthy cells, despite their tendency to reproduce in an uncontrolled fashion, which means the immune system ignores them. These cells do have signatures that we can detect however, and we can actually program people’s immune systems to register those cells as foreign, triggering an immune response. The current form of this treatment (which relies on extracting the patient’s white blood cells, turning them into dendritic cells and programming them with the tumour’s antigens) is expensive and of limited ongoing effectiveness. The new treatment devised by researchers at the National Institute of Biomedical Imaging and Bioengineering uses a novel method which drastically increases its effectiveness and duration.

The vaccine they’ve created uses 3D nano structures which, when injected into a patient, form a sort of microscopic haystack (pictured above). These structures can be loaded with all sorts of compounds; in this particular experiment they were loaded with the antigens found on a specific type of cancer cell. Once the rods have been injected they capture the dendritic cells that are responsible for triggering an immune response. The dendritic cells are then programmed with the cancer antigens and, when released, trigger a body-wide immune response. The treatment was highly effective in a mouse model, with a 90% survival rate among animals that would otherwise have died by day 25.

The potential for this is quite staggering as it provides us another avenue to elicit an immune response, one that appears to be far less invasive and more effective than current alternatives. Of course such treatments are still likely years away from seeing clinical trials, but with such promising results in the mouse model I’m sure it will happen eventually. What will be interesting to see is whether this method of delivery can be used for traditional vaccines as well, potentially paving the way for more vaccines to be administered in a single dose. I know it seems like every other week we come up with another cure for cancer, but this one seems to have some real promise behind it and I can’t wait to see how it performs in us humans.


Herd Immunity Broken in Parts of Northern California.

Vaccines are incredibly beneficial for two reasons. The first is the obvious one: for the individual receiving them they provide near-immunity to a whole range of horrendous diseases, many of which can prove fatal or have lifelong consequences for those who become infected. The risks associated with them are so small that it’s hard to even connect them with the vaccines themselves; they’re far more likely to simply be background noise than anything else. Secondly, when a majority of the population is vaccinated, individuals who can’t be vaccinated (such as newborns), or those idiots who simply choose not to, gain the benefit of herd immunity. This prevents most diseases from spreading within a community, providing the benefits of vaccination to those who don’t have them. However there’s a critical point below which herd immunity stops working, and that’s exactly what’s starting to happen in northern California.


A recent study conducted by researchers working for Kaiser Permanente analysed the vaccination records of some 154,000 individuals in the Northern California region. The records cover approximately 40% of the total insured individuals in the area, so the sample size is large enough to be representative of the larger whole. The findings are honestly quite shocking, showing multiple pockets of under-immunization (children not receiving the required number of vaccinations) that were significantly above the regional mean, on the order of 18~23% within a cluster. Worse still, the rate of vaccination refusal, where people declined any vaccinations at all, was up to 13.5%. It’s a minority of people but it’s enough to completely break herd immunity for several horrible diseases.

For diseases like pertussis (whooping cough) and measles herd immunity may only start kicking in at a 95% vaccination rate, mostly due to how readily they spread from person to person. That means only 5% of the population has to forgo these vaccinations before herd immunity fails, putting at-risk individuals directly in harm’s way. Other diseases still maintain herd immunity down to 85% vaccination rates, which some of the clusters were getting dangerously close to breaking. It’s clusters like this that are behind the resurgence of diseases which were effectively eradicated decades ago, something which is doing far more harm than any vaccine ever has.
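The usual back-of-envelope for that threshold (a textbook simplification, not a figure taken from the Kaiser Permanente study) comes from a disease’s basic reproduction number \(R_0\), the average number of people one infected person goes on to infect in a fully susceptible population:

\[
p_c = 1 - \frac{1}{R_0}
\]

Measles, with \(R_0\) commonly estimated somewhere around 12 to 18, gives a threshold of roughly 92~94%, which is where the ~95% figure comes from; a disease with an \(R_0\) of around 7 would only need about 85% coverage.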

It all comes down to the misinformation spread by several notable public figures claiming that vaccinations are somehow linked to other conditions. It’s been conclusively proven again and again that vaccines have no link to any of these conditions, and the side effects from a vaccination rarely amount to more than a sore arm or a fever. It’s one thing to make a decision that only affects yourself, but the choice not to vaccinate isn’t one of those; it puts many other individuals at risk, most of whom cannot do anything to change their situation. You can, however, and the choice not to is so incredibly selfish I can’t begin to explain my frustration with it.

Hopefully one day reason will prevail over popularity when it comes to things like this. It’s infuriating to think that people are putting both themselves and others at risk just because some celebrity told them that vaccines were doing them more harm than good when the reality is nothing like that. I know I’ve beaten this horse several times since it died, but it seems the bounds of human stupidity are indeed limitless, and if I can make even just a small difference in those figures then I feel compelled to do so. You should too, as the anti-vaxxers need a good and thorough flogging with the facts, one that shouldn’t stop until they realise the error of their ways.


Life Without Water.

All life as we know it has one basic need: water. The amount of water required to sustain life is highly variable, from creatures that live out their whole lives in our oceans to others that can survive for months at a time without a single drop. However it would be short-sighted of us to think that water is the be all and end all of life in our universe, as such broad assumptions have rarely panned out to be true under sustained scrutiny. That does leave us with the rather puzzling question of what environments and factors are required to give rise to life, something we don’t have a good answer to since we haven’t yet created life ourselves. What we can do is study how some of the known biological processes function in other environments, and whether those might be viable places for life to arise.


Researchers at Washington State University have been investigating fluids that could potentially take the place of water in life on other planets. Water has a lot of properties that make it conducive to producing life (as we know it), like dissolving minerals, forming bonds and so on. The theory goes that should a liquid have similar properties to those of water then, potentially, an environment rich in said substance could give rise to life that uses that liquid as its base rather than water. Of course finding something with those exact properties is a tricky endeavour, but these researchers may have stumbled onto an unlikely candidate.

Most people are familiar with the triple point of substances, the point at which all three states (solid, liquid, gas) can coexist and a slight change in pressure or temperature can flip the substance from one to another. Above that however there’s another transition, the critical point, beyond which the properties of the gaseous and liquid phases converge, producing a supercritical fluid. For carbon dioxide this results in a substance that behaves like a gas with the density of its liquid form, a rather peculiar state of matter. It’s this form of carbon dioxide that the researchers believe could replace water as the fluid of life elsewhere, potentially supporting life that’s even more efficient than what we find here.
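For reference, the critical point of CO2 sits at roughly (standard textbook values, not figures from the paper):

\[
T_c \approx 31\,^{\circ}\mathrm{C}, \qquad P_c \approx 7.4\ \mathrm{MPa} \ (\approx 73\ \mathrm{atm})
\]

Anything hotter and under more pressure than that is supercritical, which is why Venus gets a mention below: its surface, at around 460°C and roughly 90 atmospheres, sits comfortably above both values.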

Specifically they looked at how enzymes behaved in supercritical CO2 and found that they were far more stable than the same enzymes residing in water. Additionally the enzymes became far more selective about the molecules they bound to, making the overall process far more efficient than it otherwise would have been. Perhaps the most interesting finding was how tolerant organisms are of this kind of fluid, with several bacteria and their enzymes found to be present in it. Whilst this isn’t definitive proof that life could use supercritical CO2 as a replacement for water, it does lend credence to the idea that life could arise in places where water is absent.

Of course whether that life would look like anything we’d recognise is something that we won’t really know for a long time to come. An atmosphere of supercritical CO2 would likely be an extremely hostile place for our kind of life, more akin to Venus than our comfortable Earth, making exploration quite difficult. Still, this idea greatly expands our concept of what life might be and what might give rise to it, something which has had an incredibly inward view for far too long. I have little doubt that one day we’ll find life not as we know it, I’m just not sure if we’ll know it when we see it.


Violence in All Media, Including Games, Does Not Lead to Real World Violence.

There’s no doubt that the media we consume has an effect on us, the entire advertising and marketing industry is built upon that premise, however just how big that impact can be has always been a subject of debate. Most notably the last few decades have been littered with debate around how much of an impact violent media has on us and whether it’s responsible for some heinous acts committed by those who have consumed it. In the world of video games there have been dozens of lab-controlled studies showing that consumption of violent games leads towards more aggressive behaviour, but the link to actual acts of violence could not be drawn. Now researchers from Stetson University have delved into the issue and there doesn’t appear to be a relationship between the two at all.


The study, which was a retrospective analysis of reports of violence and the availability of violent media, was broken down into two parts. The first part focused on homicide rates and violence in movies between 1920 and 2005 using independent data sources. The second focused on incidents of violence in video games, using ESRB ratings from 1996 to 2011, and correlated them with rates of youth violence over the same period. Both parts of the study found no strong correlation between violence in media and acts of actual violence, except for a brief period in the early 90s (although the trend quickly swung back the other way, indicating the result was likely unrelated).
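For those curious what a retrospective correlation like this looks like in practice, here’s a minimal sketch in Python. The data is randomly generated placeholder data, not the study’s figures, and the method (a simple Pearson correlation between two yearly series) illustrates the general approach rather than the researchers’ exact analysis:

```python
import numpy as np

# Placeholder yearly series: a violent-content index for games and a
# youth-violence rate for the same years. Both are invented for illustration.
years = np.arange(1996, 2012)
rng = np.random.default_rng(42)
game_violence_index = rng.uniform(0, 10, size=years.size)
youth_violence_rate = rng.uniform(0, 10, size=years.size)

# Pearson correlation between the two series; values near 0 mean no
# linear relationship between media violence and real-world violence.
r = np.corrcoef(game_violence_index, youth_violence_rate)[0, 1]
print(f"Pearson r = {r:.2f}")
```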

Violent video games are often a convenient target for those looking for something to blame, but the relationship between them and acts of violence is often completely backwards. It’s not that violent video games are causing people to commit these acts; rather, those who are likely to commit these acts are also likely to engage with violent media. Had the reverse been true then there would have been a distinct correlation between the availability of violent media and acts of real world violence but, as the study shows, there’s simply no relationship between them at all.

Hopefully now the conversation will shift from video games causing violence (or other antisocial behaviours) to a more nuanced discussion around the influences games can have on our attitudes, behaviours and thought processes. There’s no doubt that we’re shaped by the media we consume however the effects are likely much more subtle than most people would like to think they are. Once these more subtle influences are understood we can then work towards curtailing any negative aspects that they might have rather than using a particular medium as a scapegoat for deplorable behaviour.


Praising Effort, Process is Better Than Praising Ability.

For much of my childhood people told me I was smart. Things that frustrated other kids, like maths, seemed to come easy to me and this led to many people praising my ability. I never felt particularly smart, I mean there were dozens of other kids who were far more talented than I was, but at that age it’s hard to deny the opinions of adults, especially the ones who raised you. This led to an unfortunate misconception that stayed with me until after I left university: the idea that my abilities were fixed and that anything I found hard or difficult was simply beyond my ability. It’s only been since then, some 8 years or so, that I learnt that any skill or problem is within my capability, should I be willing to put the effort in.


It’s a theme that will likely echo among many of my generation, as we grew up with parents who were told that positive reinforcement was the way to make your child succeed in the world. It’s only now, after decades of positive reinforcement failing to produce the outcomes it promised, that we’re beginning to realise the folly of our ways. Much of the criticism of our generation focuses on this aspect, that we’re too spoilt, too demanding when compared to previous generations. If there’s one good thing to come out of this however it’s that research has shown that praising a child’s ability isn’t the way to go; you should praise them for the process they go through.

Indeed once I realised that things like skills, abilities and intelligence were primarily a function of the effort and process you went through to develop them I was suddenly greeted with a world of achievable goals rather than roadblocks. At the same time I grew to appreciate those at the peak of their abilities as I knew the amount of effort they had put in to develop those skills which allowed them to excel. Previously I would have simply dismissed them as being lucky, winning the genetic lottery that gave them all the tools they needed to excel in their field whilst I languished in the background.

It’s not a silver bullet however, as the research shows the same issues with positive reinforcement arise if process praise is given too often. The nuances are also unknown at this point, like how often you should give praise and in what fashion, but this research does show that giving process praise in moderation has long-lasting benefits. I’d be interested to see how well this translates to adults as well, since my experience has been vastly positive once I made the link between effort and results. I can’t see it holding true for everyone, as most things don’t in this regard, but if it generally holds then I can definitely see a ton of benefits from it being implemented.


Uncertainty is More Rewarding Than Certainty.

If you’ve ever spent a decent amount of time playing a MMORPG chances are you’ve come up against the terror that is the Random Number Generator (RNG). No matter how many times you run a dungeon to get that one item to complete your set, or kill that particular mob to get the item you need to finish a quest, it just never seems to happen. Sometimes, however, everything goes your way, all your Christmases come at once, and the game has you in its grasp again. Whilst RNGesus might be a cruel god to many he’s the reason that many of us keep coming back, and now there’s solid science to prove it.


It’s long been known that random rewards are seen as more rewarding than those that are given consistently. Many online games, notably those on social networks, have tapped into that mechanic in order to keep users engaged far longer than they would have been otherwise. Interestingly this seems to run contrary to what many players will tell you, who often say they’d prefer a guaranteed reward after a certain amount of effort or time committed. As someone who’s played through a rather large number of games that utilize both mechanics I can tell you that both types of systems will keep me returning, however nothing beats the rush of receiving a really good item from the hands of RNGesus.

Indeed my experience seems to line up with a recent study from the University of Chicago which shows that people are more motivated by random rewards than they are by consistent ones. It sounds quite counter-intuitive when you think about it, I mean who would take a random reward over a guaranteed one, but the effect remained consistent across the multiple experiments they conducted. Whilst the mechanism behind this isn’t exactly known, it’s speculated that randomness leads to excitement, much like the infinitesimally small odds of winning the lottery are irrelevant to the enjoyment some people derive from playing it.

However the will of RNGesus needs a guiding hand sometimes to ensure he’s not an entirely cruel god. Destiny’s original loot system was a pretty good example of this, as you could be blessed with a great drop only to have the reveal turn it into something much less than what you’d expect it to be. Things like this can easily turn people off games (and indeed I think this is partly responsible for the middling reviews it received at launch), so there needs to be a balance struck so players don’t feel hard done by.

I’d be very interested to see the effect of random rewards that eventually become guaranteed (i.e. pseudo-random rewards). World of Warcraft implemented a system like this for quest drops a couple of years ago and it was received quite well. This went hand in hand with their guaranteed reward systems (tokens/valor/etc.) which have also been met with praise. Indeed I believe the mix of these two systems, random rewards with guaranteed systems on the side, seems to be the best way to keep players coming back. I definitely know I feel more motivated to play when I’m closer to a guaranteed reward which can, in turn, lead to more random ones.
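As a rough illustration of how a pseudo-random (“bad luck protection”) drop could work, here’s a minimal sketch. The numbers and the ramping rule are made up for the example; this isn’t how World of Warcraft actually implements it:

```python
import random

def roll_drop(base_chance: float, failed_attempts: int, ramp: float = 0.05) -> bool:
    """One pseudo-random drop roll with bad luck protection.

    The effective chance starts at base_chance and climbs by `ramp` for every
    failed attempt, so a drop eventually becomes guaranteed. All values here
    are illustrative only.
    """
    effective_chance = min(1.0, base_chance + ramp * failed_attempts)
    return random.random() < effective_chance

# Simulate how many runs it takes to land a drop with a 10% base chance.
failed = 0
while not roll_drop(0.10, failed):
    failed += 1
print(f"Item dropped after {failed + 1} attempts")
```

The appeal of this design is that it keeps the thrill of the random roll while putting a ceiling on how unlucky any one player can get.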

It’s always interesting to investigate these non-intuitive behaviours as it can give us insight into why we humans act in seemingly irrational ways when it comes to certain things. We all know we’re not strict rational actors, nor are we perfect logic machines, but counter-intuitive behaviour is still quite a perplexing field of study. At least we’ve got definitive proof now that random rewards are both more rewarding and more motivating than their consistent brethren although how that knowledge will help the world is an exercise I’ll leave up to the reader.