Posts Tagged 'research'

Age Related Cognitive Motor Decline Starts at 24, But It’s Not All Bad News.

Professional eSports teams are almost entirely made up of young players, an interesting phenomenon to observe as it's quite contrary to most other sports. Still, the age drop-off for eSports players is far earlier and more drastic than in traditional sports, with long-term players like Evil Geniuses' Fear often referred to as The Old Man at the ripe old age of 27. The commonly held belief is that, past your mid twenties, your reaction times and motor skills are in decline and you'll be unable to compete with the new upstarts and their razor-sharp reflexes. New research in this area may just prove this to be true, although it's not all over for us oldies who want to compete with our younger compatriots.

The research comes out of the University of California and was based on data gathered from StarCraft 2 replays. The researchers recruited participants aged 16 to 44 and asked them to submit replays through their website, SkillCraft. These replays then went through some standardization and analysis using the wildly popular replay tool SC2Gears. With this data in hand the researchers were able to test some hypotheses about how age affects cognitive motor functions and whether or not domain experience, i.e. how long someone had been playing a game for, influenced their skill level. Specifically they looked to answer 3 questions:

  1. Is there age-related slowing of Looking-Doing Latency?
  2. Can expertise directly ameliorate this decline?
  3. When does this decline begin?

In terms of the first question they found, unequivocally, that as we age our motor skills start to decline. Previous studies of cognitive motor decline focused on older populations, with the data then extrapolated back to estimate when the decline set in. This study's data puts the onset much earlier than previous research suggests, with 24 being the age at which cognitive motor functions begin to take a hit.
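The study's actual statistical modelling is more involved than this, but the basic idea behind estimating an onset age — fit looking-doing latency against age with a model that stays flat up to a breakpoint and then rises, and read the onset off the fitted breakpoint — can be sketched roughly as follows. The numbers and data here are synthetic and purely illustrative, not the SkillCraft dataset.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hinge ("broken-stick") model: latency is flat until a breakpoint age,
# then rises linearly. The fitted breakpoint estimates when decline begins.
def hinge(age, baseline, breakpoint, slope):
    return baseline + slope * np.maximum(age - breakpoint, 0.0)

# Synthetic looking-doing latencies (ms), for illustration only:
# flat at ~250 ms until about 24, then rising ~3 ms per year, plus noise.
rng = np.random.default_rng(0)
ages = rng.uniform(16, 44, size=500)
latency = 250 + 3.0 * np.maximum(ages - 24, 0) + rng.normal(0, 15, size=500)

# Fit the model; p0 provides rough starting guesses for the optimiser.
(baseline, breakpoint, slope), _ = curve_fit(hinge, ages, latency, p0=[250, 30, 1.0])
print(f"estimated onset of decline: {breakpoint:.1f} years")
print(f"estimated slowdown after onset: {slope:.2f} ms per year")
```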

What's really interesting, though, is the second question: can we oldies overcome the motor skill gap with experience? Whilst the study didn't find any evidence to directly support the idea that experience can trump age-related cognitive decline, it did find that older players were able to hold their own against younger players of similar experience. Whilst the compensation mechanisms weren't directly researched, the researchers did find evidence of older players using cognitive offloading tricks to keep their edge. Put simply, older players would do things that didn't require a high cognitive load, like using less complex units or strategies, in order to compete with younger players. This might not support other studies which have shown that age-related decline can be combatted with experience, but it does provide an interesting avenue for additional research.

As someone who's well past the point where age-related decline has supposedly set in, my experience definitely lines up with the research. Whilst younger players might have an edge on me in terms of reaction speed, my decades' worth of gaming experience is more than enough to make up the gap. Indeed I've also found that having a breadth of gaming experience, across multiple platforms and genres, often gives me insights that newer gamers are lacking. Of course the difference between me and the professionals is a gap that I'll likely never close, but that doesn't matter when I'm stomping young'uns in pub games.

Standing 2 Hours a Day Shows Potential Benefits.

You don't have to look far to find article after article about how sitting down is bad for your health. Whilst many of these posts boil down to parroting the same line and then appealing to people to adopt a more active lifestyle, the good news is that science is with them, at least on one point. There's a veritable cornucopia of studies out there supporting the idea that a sedentary lifestyle is bad for you, something which is not just limited to sitting at work. However the flip side, the idea that standing is good for you, is not something that's currently supported by a wide body of scientific evidence. Logically it follows that it would be the case, but science isn't built on logic alone.

The issue here mostly stems from the fact that, whilst we have longitudinal studies on sedentary lifestyles, we don't have a comparable body of data for your average Joe who's done nothing but change from mostly sitting to mostly standing. This means we don't understand the conditions under which standing is beneficial and when it's not, so a broad recommendation that "everyone should use a standing desk" isn't something that can currently be made in good faith. However preliminary studies are showing promise in this area, like new research coming out of our very own University of Queensland.

The study equipped some 780 participants, aged between 36 and 80, with activity monitors that recorded their activity over the course of a week. The monitors allowed the researchers to determine when participants were engaging in sedentary activities, such as sleeping or sitting, or something more active like standing or exercising. In addition they took blood samples and measured a number of other key health indicators. They then used this data to see whether or not a more active lifestyle was associated with better health outcomes.

As it turns out it was: the more active participants, the ones standing on average more than 2 hours a day more than their sedentary counterparts, showed better health indicators like lower blood sugar levels (2%) and lower triglycerides (11%). That in and of itself isn't proof that standing is better for you, and indeed the study makes a point of saying it can't draw that conclusion, however preliminary evidence like this is useful in determining whether or not further research in this field is worthwhile. Based on these results there's definitely more investigation to be done, mostly focused on isolating the key factors required to support the current thinking.
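The published analysis adjusts for far more than this, but the core comparison — splitting participants by how much of the day they spend standing and comparing average biomarker levels between the groups — can be sketched as below. The column names and values are made up for illustration; they are not the study's data.

```python
import pandas as pd

# Hypothetical per-participant summaries from a week of activity monitoring.
df = pd.DataFrame({
    "standing_hours_per_day": [1.2, 3.1, 2.5, 0.8, 4.0, 1.9, 3.6, 0.5],
    "blood_glucose_mmol_L":   [5.8, 5.3, 5.5, 6.1, 5.2, 5.7, 5.4, 6.0],
    "triglycerides_mmol_L":   [1.9, 1.4, 1.6, 2.1, 1.3, 1.8, 1.5, 2.2],
})

# Crude split: call anyone standing 2+ hours a day "active".
df["group"] = df["standing_hours_per_day"].ge(2.0).map({True: "active", False: "sedentary"})

means = df.groupby("group")[["blood_glucose_mmol_L", "triglycerides_mmol_L"]].mean()
pct_lower = 100 * (means.loc["sedentary"] - means.loc["active"]) / means.loc["sedentary"]
print(means)
print(pct_lower.round(1))  # how much lower each marker is in the active group, in percent
```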

It might not sound like this kind of research tells us anything we didn't already know (being more active means you'll be healthier? Shocking!), however validating base assumptions is always a worthwhile exercise. This research, whilst based on short-term data with inferred results, provides solid grounds on which to proceed with a much more controlled and rigorous study. Whilst results from further study might not be available for a while, this at least serves as another arrow in the quiver for encouraging everyone to adopt a more active lifestyle.

An Artificial Brain in Your Pocket.

Artificial neural networks, a computational framework that mimics biological learning processes using statistics and large data sets, are behind many of the technological marvels of today. Google is famous for employing some of the largest neural networks in the world, powering everything from their search recommendations to their machine translation engine. They're also behind numerous other innovations like predictive text input, voice recognition software and recommendation engines that use your previous preferences to suggest new things. However these networks aren't exactly portable, often requiring vast data centers to produce the kinds of outputs we expect. IBM is set to change that with its TrueNorth architecture, a truly revolutionary idea in computing.

[Image: a DARPA SyNAPSE board carrying 16 TrueNorth chips]

The chip, 16 of which are shown above mounted on a DARPA SyNAPSE board, is most easily thought of as a massively parallel processor comprising some 4,096 cores. Each of these cores contains 256 programmable neurons, totalling around 1 million per chip. Interestingly, whilst the chip's transistor count is on the order of 5.4 billion, which for comparison is just over double that of Intel's current offerings, it uses a fraction of the power you'd expect it to: a mere 70 milliwatts. That kind of power consumption means chips like these could make their way into portable devices, something no one would really expect with transistor counts that high.

But why, I hear you asking, would you want a computerized brain in your pocket?

IBM's TrueNorth chip is essentially the second half of the two-part system that is a neural network. The first step to creating a functioning neural network is training it on a large dataset; the larger the set, the better the network's capabilities. This is why large companies like Google and Apple can create usable products out of them: they have huge troves of data to train them on. Then, once the network is trained, you can set it loose upon new data and have it give you insights and predictions, and that's where a chip like TrueNorth comes in. Essentially you'd use a big network to form the model and then imprint it on a TrueNorth chip, making it portable.
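TrueNorth's real programming model (spiking neurons configured through IBM's own toolchain) is very different from a conventional framework, so take the following only as a rough sketch of the train-big-then-deploy-small workflow described here: train a model offline on plenty of data, then freeze and shrink it for a constrained, low-power device. Everything in it (the toy model, the int8 quantisation, the "device" function) is illustrative, not IBM's API.

```python
import numpy as np

rng = np.random.default_rng(1)

# 1) "Data centre" phase: train a small model on a large-ish dataset.
#    Here, a toy logistic-regression classifier trained by gradient descent.
X = rng.normal(size=(5000, 20))
true_w = rng.normal(size=20)
y = (X @ true_w + rng.normal(scale=0.5, size=5000) > 0).astype(float)

w = np.zeros(20)
for _ in range(500):
    p = 1 / (1 + np.exp(-(X @ w)))        # current predictions
    w -= 0.1 * (X.T @ (p - y)) / len(y)   # gradient descent step

# 2) "Deployment" phase: freeze the trained weights and quantise them to int8
#    so the model suits a constrained, low-power target. This stands in for
#    imprinting a trained network onto a chip; it is not how TrueNorth is
#    actually programmed.
scale = np.abs(w).max() / 127
w_int8 = np.round(w / scale).astype(np.int8)

def predict_on_device(x):
    # Inference only: no training or network connection needed on-device.
    return 1 / (1 + np.exp(-(x @ (w_int8 * scale)))) > 0.5

print(predict_on_device(X[:5]), y[:5].astype(bool))
```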

The implications of this probably wouldn't be immediately apparent for most, as the services would likely retain the same functionality, but it would eliminate the requirement for an always-on Internet connection to support them. This could open up a new class of smart devices with capabilities that far surpass anything we currently have, like a pocket translator that works in real time. The biggest issue I see for its adoption, though, is cost: a transistor count that high doesn't come cheap, as you're either relying on cutting-edge lithography or accepting significantly reduced wafer yields. Both of these lead to high-priced chips, likely costing even more than current consumer CPUs.

Like all good technology, however, this one is a little way off from finding its way into our hands: whilst the chip exists, the software stack required to use it is still under active development. That might sound like a small thing, but this chip behaves in a way that's completely different to anything that's come before it. Once that's settled the floodgates can be opened to the wider world and then, I'm sure, we'll see a rapid pace of innovation that could spur on some wonderful technological marvels.

Stanene: Graphene’s Metallic Brother.

Graphene has proven to be a fruitful area of scientific research, showing that atom-thick layers of elements exhibit behaviours that are wildly different from their thicker counterparts. This has spurred on research into how other elements behave when slimmed down to atom-thick layers, producing such materials as silicene (made from silicon) and phosphorene (made from phosphorus). Another material in the same class, stanene, made from an atom-thick layer of tin, has been an active area of research due to the properties it might have. Researchers have announced that they have, for the first time, created stanene in the lab and have begun to probe its theoretical properties.

Not all elements have the ability to form these 2D structures, however researchers at Stanford University in California predicted a couple of years ago that tin should be able to form a stable one. This structure lent itself to numerous novel characteristics, chief among them the ability for an electric current to pass through it without producing waste heat. Of course, without a real-world example to test against such properties aren't of much use, and so the researchers have spent the last couple of years developing a method to create a stanene sheet. That research has proved fruitful as they managed to create a stanene layer on top of a supporting substrate of bismuth telluride.

The process they used to create the stanene sheet is pretty interesting. First they create a chamber that has a base of bismuth telluride. Then they vaporize tin and introduce it into the chamber, allowing it to deposit itself onto the bismuth telluride base. It's a similar process to the chemical vapor deposition that some companies use to create synthetic diamonds. For something like stanene it deposits the tin uniformly, keeping the underlying structure consistent. The researchers then used the resulting stanene sheet to test the properties that were modelled previously.

Unfortunately the stanene sheet produced by this method does not appear to have the properties that the theoretical models predict. The problem seems to stem from the bismuth telluride base used for the deposition process, as it's not completely inert. This means it interacts with the stanene sheet, contaminating it and potentially disrupting the topological insulator properties which it should exhibit. The researchers are investigating different surfaces to mitigate this effect, so it's likely that we'll have a pure stanene sheet in the not-too-distant future.

Should this research prove fruitful it could open up many new avenues for materials development. Stanene has properties that would make it extremely well suited for use in electronics, being able to dramatically increase the efficiency of interconnects. Large-scale implementations would likely still be a while off, but if the deposition process can be made to work there are immediate applications in the world of microelectronics. Hopefully the substrate issue is sorted out soon and we'll see consumerization of the technology begin in earnest.

 

Vaccination So Effective Even Bees Do It.

The unequivocal effectiveness of vaccinations has seen many of the world's worst and most debilitating diseases relegated to the history books. Gone are the days when millions of people were afflicted with diseases that could leave them permanently disabled, enabling many more to live long and healthy lives. Before their invention, however, developing an immunity to a disease often meant enduring it, something that ranged from a mild inconvenience to a life-threatening prospect. Our biology takes care of part of that, with some immunity passing down from mother to child, but we'd never witnessed that mechanism far outside our own branch of the tree of life. New research shows that bees in fact have their own form of natural immunity that queens pass on to their workers.

The research, conducted by scientists at Stanford University and published in PLOS Pathogens a couple of days ago, shows that queen bees immunize their worker bees against certain types of pathogens that would otherwise devastate the colony. The mechanism by which this works is actually very similar to the way many vaccines work today. Essentially the queen bee, who rarely leaves the hive, is fed royal jelly, which worker bees produce from the pollen and nectar they forage. This food can contain a variety of pathogens which would typically be deadly to the bees.

However the queen bee has what's called a fat body, an organ which functions similarly to our liver. Once the pathogen has been broken down in the queen bee's gut it's transferred to the fat body, where parts of the pathogen are wrapped up in a protein called vitellogenin. This is then passed on to her offspring who, when they hatch, have immunity to pathogens that would otherwise kill them. What's interesting about this process is its potential for aiding current bee populations, which have been collapsing around the world over the past decade.

Whilst the root cause of the widespread colony collapse is still under intense debate, there are several potential causes which could be mitigated using this mechanism. Essentially we could devise vaccines for some of the problems that bee colonies face and introduce them by spraying them onto flowers. Then, when the pollen is brought back to the queen, all the subsequent bees would get the immunity, protecting them from the disease. This could also make the end product better for humans, potentially eradicating problems like the botulinum spores that sometimes make their way into honey.

It's always interesting to see common attributes like this pop up across species, as it gives us an idea of how much of our evolutionary lineage is shared. Whilst we don't have a lot in common with bees there are a lot of similar mechanisms at play, suggesting our evolutionary paths diverged from a common ancestor a long time ago. Something like this, whilst not exactly a revolution, does have the potential to benefit both us and our buzzing companions. Hopefully it leads to positive progress in combating colony collapse, which would be beneficial for far more than just lovers of honey.

At The Quantum Level Measurement is Everything.

As we go further and further down into the world of infinitesimally small physics the rules we use at the macro level start to break down. Where once we had defined rules that governed the behaviour of bodies interacting with each other, we quickly end up in the realm of possibilities rather than definites, something which causes no end of grief to those seeking to understand it. Indeed, whenever I feel like I'm getting close to understanding a fraction of what quantum mechanics is, something else comes out of left field that ruins it, leaving me with a bunch of disjointed pieces of information that I try to make sense of yet again. Today I bring you one such piece, which both makes complete sense and is completely nonsensical.

Physicists at our very own Australian National University designed an experiment to test the wave/particle duality that single atoms can exhibit. Their experiment consisted of a stream of single helium atoms fired down an apparatus containing 2 light gates which, if activated, would cause an interference pattern when measured (indicating a wave). However, should only one of the gates be open, the particle would travel down a single path (indicating a particle). The secret sauce to their experiment was that the second gate, the one which would essentially force the particle to travel as a wave, was turned on randomly, but only after the particle would already have passed the first gate. This essentially proves the theory that, when we're operating at the quantum level, nothing is certain until measurements are made.
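The real experiment uses laser-pulse gratings acting on single helium atoms, but the contrast it probes — both paths available producing interference fringes, a single path producing none — is just two-path wave arithmetic, and a toy model of it looks like this (textbook double-slit maths, not a simulation of the ANU apparatus):

```python
import numpy as np

# Toy two-path model: the probability of finding the particle at screen
# position x depends on whether the two paths are combined coherently.
x = np.linspace(-2, 2, 9)            # screen positions (arbitrary units)
phase = 2 * np.pi * x                # relative phase between the two paths

amp1 = np.ones_like(x) / np.sqrt(2)          # amplitude via path 1
amp2 = np.exp(1j * phase) / np.sqrt(2)       # amplitude via path 2

# Second gate applied: amplitudes add, giving interference fringes (wave-like).
fringes = np.abs(amp1 + amp2) ** 2

# Second gate absent: probabilities add, giving a flat pattern (particle-like).
no_fringes = np.abs(amp1) ** 2 + np.abs(amp2) ** 2

print("fringes   :", np.round(fringes, 2))
print("no fringes:", np.round(no_fringes, 2))
```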

Extrapolating from this you can make some pretty wild theories about the mechanism of action here although there are only a few that can truly make sense. My favourite (and the one that’s least likely to be real) is that the information about the gate activation travelled back in time and informed the particle of the state before it traversed them, meaning that it was inevitable for it to be measured that way. Of course the idea of information travelling back in time violates a whole slew of other physical laws but if that proved to be correct the kind of science we could pursue from it would be straight out of science fiction. I know that’s not going to happen but there’s a part of me that wants to believe.

The far more mundane (and more likely) explanation for this phenomenon is that the atom exists as both a particle and a wave simultaneously until it is observed, at which point it collapses down into the only possibility that makes sense. Whilst some may extend this to mean things like "the world doesn't exist unless you're looking at it", it's actually a far more nuanced problem, one that requires us to understand what constitutes measurement at a quantum level. At a fundamental level most of the issues arise out of the measurement altering the thing you're trying to observe, although I'm sure there's far more to it than that.

I'm honestly not sure where these results will take us as, whilst they provide evidence for one interpretation of quantum mechanics, I don't know where future research might be focused. Such an effect doesn't appear to be something we can make use of, given that measurement needs to take place for it to (in essence) actually happen, but I'll be the first to admit that my knowledge of this area is woefully limited.

Perhaps I should take a wander down to the university, although I fear I’d only walk out of there more confused than ever…

 

The Sorry State of Clickbait Research.

I don't think I'm alone in feeling an almost irrational hatred towards clickbait headlines. It's not the headlines themselves, per se, more the fact that they exist solely to trick you into clicking through, triggering your desire for closure rather than a genuine interest in the content. Indeed, after being blasted with these headlines for years I've found myself turned off by them, to the point where they sometimes stop me from reading things I would otherwise have been interested in. This got me thinking: have we reached the point of diminishing returns for clickbait? As it turns out this might be true, but there's not exactly a lot to go on in terms of research in this field.

You don't have to go far to find numerous articles which deride and lament the use of clickbait, but those have existed ever since it first began its rise to infamy all those years ago. Certainly there's a subsection of society which doesn't appreciate the lowest-common-denominator style of writing that clickbait headlines imply, but you get that with almost any new trend, so the question then becomes one of the magnitude of the resistance. In order to answer the question of whether or not we've reached peak clickbait I did my usual search through various sources but found myself coming up blank, even when I narrowed my view to scholarly sources only. The best I could find was this subject line report from ReturnPath which, whilst it provides some interesting insights, doesn't speak to the larger question of whether or not we're starting to get fed up with clickbait as a thing.

Essentially the report states that, for email subject lines, clickbait-style headlines are far less effective than they are in other mediums. Certainly in my experience this is somewhat true, as clickbait in my inbox is far less likely to prompt me to click, however it's a single data point in an area that should be flooded with data. This could be because that data is held by those who are profiting from it and, by that token, since the main offenders are still engaging in such behaviour you'd hazard a guess that it's still working for them. That doesn't necessarily mean its effectiveness isn't waning, but unless Buzzfeed or another clickbait site decides to open the doors to researchers we likely won't have an answer for some time.

I must admit that this search was somewhat aspirational in nature; I wanted, nay hoped, that there'd be evidence that clickbait's demise was just over the horizon. As it turns out, while there are rumblings of discontent with the trend there's very little evidence to suggest it will be going away anytime soon. Hopefully more companies take a stance à la Facebook, pushing these kinds of titles further down the feed in favour of headlines that rely on genuine interest rather than novelty or emotional responses. For now though we'll just need to keep applying our own filters to content of this nature.

Although I must admit whatever that one weird secret a stay at home mum has does sound rather intriguing… 😉

Slow Learner? You Might Be Thinking Too Hard.

Back in my school days I thought that skill was an innate thing, a quality you were born with that was basically immutable. Thus things like study and practice always confused me, as I felt that I'd either get something or I wouldn't, which is probably why my academic performance back then was so varied. Today, however, I don't believe mastering a skill is beyond anyone: put in the required amount of time and (properly focused) practice and you'll eventually make your way there. Innate ability still counts for something though, as there are things you're likely to find much easier than others, and some people are simply better in general at learning new skills. Funnily enough that latter group of people likely has an attribute you wouldn't first associate with that skill: lower overall brain activity.

Research out of the University of California, Santa Barbara has shown that people who are most adept at learning new tasks actually show lower overall brain activity than their slow-learning counterparts. The study used an fMRI machine to scan the subjects' brains whilst they were learning a new task over the course of several weeks, and instead of looking at a specific region of the brain the researchers focused on "community structures". These are essentially groups of nodes within the brain that are densely interconnected with each other and are likely in heavy communication. Over the course of the study the researchers could identify which of these community structures remained in communication and which didn't, whilst measuring the subjects' mastery of the new skill they were learning.
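The paper's network analysis is far more sophisticated (it tracks how these modules change over weeks of training), but the basic notion of a "community structure" — a densely interconnected group of nodes in a graph built from region-to-region activity — can be illustrated with a toy graph and a standard community-detection routine. The graph below is randomly generated, not fMRI data.

```python
import networkx as nx
from networkx.algorithms import community

# Toy stand-in for a brain network: nodes are regions, edges connect regions
# whose activity is strongly correlated. Here we just plant four groups in a
# random graph rather than deriving them from real scans.
G = nx.planted_partition_graph(4, 10, p_in=0.6, p_out=0.05, seed=42)

# Greedy modularity maximisation recovers the densely interconnected groups
# ("community structures") that communicate mostly among themselves.
for i, nodes in enumerate(community.greedy_modularity_communities(G)):
    print(f"community {i}: {sorted(nodes)}")
```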

What the researchers found is that people who were more adept at mastering the skill showed a rapid decrease in the overall brain activity used whilst completing the task. For the slower learners many of the regions, namely things like the visual and motor cortices, remained far more active for a longer period, showing that they were more actively engaged in the learning process. As we learn a skill much of the process of actually doing it gets offloaded, becoming an automatic part of what we do rather than a conscious effort. For the slow learners these parts of the brain remained active for far longer, which could, in theory, mean that they were getting in the way of making the process automatic.

For me personally I can definitely attest to this being the case, especially with something like learning a second language. Anyone who's learnt another language will tell you that you go through a stage of translating everything into your native language in your head and then translating your response back into the target language, something that you simply can't keep doing if you want to be fluent. Eventually you end up developing your "brain" in that language, which doesn't require that interim translation, and everything becomes far more automatic. How long it takes you to get to that stage varies wildly, although the distance from your native language (in terms of grammatical structure, syntax and script) is usually the primary factor.

It will be interesting to see if this research leads to techniques that allow us to essentially quieten down parts of our brain in order to aid the learning process. Right now all we know is that some people's brains begin that switch-off process quicker than others', and whatever causes that is the key to accelerating learning. Whether or not it can be triggered by mental exercises or drugs is something we probably won't know for a while, but it's definitely an area of exciting research possibilities.

Forgetting Might be an Adaptive Advantage.

Nearly all of us are born with what we'd consider less than ideal memories. We'll struggle to remember where our keys are, draw a blank on that new coworker's name and sometimes pause much longer than we'd like to recall a detail that should be front of mind. The idealised pinnacle, the photographic (or more accurately eidetic) memory, always seems like an elusive goal, something you have to be born with rather than achieve. However it seems that our ability to forget might actually come from an evolutionary adaptation, enabling us to remember the pertinent details that helped us survive whilst suppressing those that might otherwise hinder us.

The idea isn't a new one, having existed in some form since at least 1997, but it's only recently that researchers have had the tools to study the mechanism in action. You see, it's rather difficult to figure out which memories are being forgotten for adaptive reasons, i.e. to improve the survival of the organism, and which ones are simply forgotten due to other factors. The advent of functional Magnetic Resonance Imaging (fMRI) has given researchers a better idea of what the brain is doing at any one point, allowing them to set up situations that reveal what it's doing when it forgets something. The results are quite intriguing, demonstrating that at some level forgetting might be an adaptive mechanism.

Back in 2007 researchers at Stanford University investigated the prospect that adaptive forgetting was a mechanism for reducing the amount of brain power required to select the right memories for a particular situation. The hypothesis goes that remembering is an act of selecting a specific memory for a goal-related activity. Forgetting then functions as an optimization mechanism, allowing the brain to more easily select the right memories by suppressing competing memories that might not be optimal. The research supported this notion, showing decreased activity in the anterior cingulate cortex, which is activated when people are weighing choices (like figuring out which memory is relevant).

More recent research into this phenomenon, conducted by researchers at the University of Birmingham and various institutes in Cambridge, focused on finding out whether the active recollection of a specific memory hinders the remembering of others. Essentially this means that the act of remembering a specific memory would come at the cost of other, competing memories, which in turn would lead to them being forgotten. They did this by having subjects view 144 picture and word associations and then training them to remember 72 of them (whilst inside an fMRI machine). The subjects were then given another set of associations for each word which would serve as the "competitive" memory for the first.

The results showed some interesting findings, some of which may sound obvious at first glance. Attempting to recall the second word association led to a detriment in the subjects' ability to recall the first. That might not sound groundbreaking to start off with, but subsequent testing showed a progressive detriment to the recollection of competing memories, demonstrating they were being actively suppressed. Further to this, the researchers found that their subjects' brain activity was lower for trained images than for ones that weren't part of the initial training set, another indication that these memories were being actively suppressed. There was also evidence to suggest that the competing memories showed the most forgetting on average, as well as increased activity in a region of the brain known to be associated with adaptive forgetting.

Whilst this research might not give you any insight into how to improve your memory, it does give us a rare look into how our brain functions and why it behaves in ways we believe to be sub-optimal. Potentially in the future there could be treatments available to suppress that mechanism, however what ramifications that might have on actual cognition is anyone's guess. Needless to say it's incredibly interesting to find out why our brains do the things they do, even if we wish they did the exact opposite most of the time.

Lava Tubes on the Moon Could House Massive Colonies.

Establishing lunar colonies seems like the next logical step, the Moon being our closest celestial body after all, however it might surprise you to learn that doing so might in fact be a lot harder than establishing a similarly sized colony on Venus or Mars. Without an atmosphere to speak of, the Moon's surface is an incredibly harsh place, with the full brunt of our sun's radiation bearing down on it. That's only half the problem too: since day and night each last around 2 weeks, you'll spend half your time in darkness at temperatures that plunge to around -170°C. There are ways around this, however, and recent research has led to some rather interesting prospects.

[Image: illustration showing the scale of a potential lunar lava tube]

Whilst the surface of the Moon might be unforgiving, going just a little bit below it negates many of the more undesirable aspects. Drilling in is one option, however that's incredibly resource-intensive, especially when you consider that all the gear required to do said drilling would need to be sent from Earth. The alternative is to use formations that are already present on the Moon, such as caverns and other natural structures. We know these kinds of formations exist thanks to the high-resolution imagery and gravity mapping we've done (the Moon's gravity field is surprisingly non-uniform), but just how big they could be has remained somewhat of a mystery.

Researchers at Purdue University decided to investigate just how big structures like these could be, specifically looking at how big lava tubes could get if they existed on the Moon. During its formation, which would have happened when a large object collided with the then-primordial Earth, the surface of the Moon would have been ablaze with volcanic activity. Due to its much smaller size that activity has long since ceased, but it would still have left behind the tell-tale structures of its more tumultuous history. The researchers modelled how big these tubes could have gotten given the conditions present on the Moon and came up with a rather intriguing discovery: they'd be huge.

When you see the outcome of the research it feels like an obvious conclusion (of course they'd be bigger, since there's less gravity), but the fact that they'd be an order of magnitude bigger than what we see on Earth is pretty astounding. The picture above gives you some sense of scale for these potential structures, able to fit several entire cities within them with an incredible amount of room to spare. Whilst using such structures as the basis for a future lunar colony presents a whole host of challenges, it does open up the possibility of the Moon having much more usable space than we first thought.
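The detailed answer depends on roof thickness, rock strength and tube geometry, but the first-order intuition — the stress in a roof of a given thickness scales with rock density times surface gravity, so for the same rock a stable span should scale roughly with 1/g — fits on the back of an envelope. The figures below are rough illustrative numbers, not the Purdue team's model:

```python
# Back-of-the-envelope scaling only, not the Purdue model.
g_earth = 9.81   # m/s^2
g_moon = 1.62    # m/s^2

earth_span_m = 30.0   # rough width of a large terrestrial lava tube

gravity_ratio = g_earth / g_moon           # ~6x from gravity alone
naive_moon_span_m = earth_span_m * gravity_ratio

print(f"gravity ratio: {gravity_ratio:.1f}x")
print(f"naive lunar span from gravity scaling alone: {naive_moon_span_m:.0f} m")
# The Purdue modelling, which accounts for roof geometry and the stress state
# of lunar rock rather than gravity alone, finds that far larger,
# kilometre-scale tubes could remain structurally stable.
```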