Much to the surprise of many I used to be a childcare worker back in the day. It was a pretty cruisy job for a uni student like myself: I could show up after classes, take care of kids for a few hours and then head home to finish off my studies (or World of Warcraft, as it mostly was). I consider it a valuable experience for numerous reasons, not least of which is the insight it gave me into some of the public health issues that arise from packing a bunch of children into tight spaces. The school I worked at recorded its very first peanut allergy while I was there and I watched as the number of children who suffered from it increased rapidly.
Whilst the cause of this increase in allergic reactions is still somewhat unclear it’s well understood that the incidence rate of food allergies has dramatically increased in developed countries over the last 20 years or so. There are quite a few theories swirling around as to what the cause might be but suffice to say that hard evidence to support any of them hasn’t been readily forthcoming. The problem is the nature of the beast, as studies to investigate one cause or another are plagued with variables that researchers are simply unable to control. However researchers at King’s College London have been able to conduct a controlled study with children who were at risk of developing peanut allergies and have found some really surprising results.
The study involved 640 children, aged between 4 and 11 months, who were all considered to be at high risk of developing a peanut allergy due to other conditions they already suffered from (eczema and egg allergies). They were then randomly split into two groups: one whose parents were advised to feed them peanut products at least 3 times per week, and another told to avoid them entirely. The results are quite staggering, showing that compared to the avoidance group the children who were exposed to peanut products at an early age had an 80% reduced risk of developing the condition. This almost completely rules out early exposure as a risk factor for developing a peanut allergy, a notion that seems to be prevalent among many modern parents.
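For the curious, a headline figure like "80% reduced risk" is just a relative risk reduction: one minus the ratio of incidence in the exposed group to incidence in the avoidance group. A minimal sketch, using made-up incidence figures for illustration rather than the study's actual data:

```python
# Illustrative relative-risk-reduction calculation. The incidence
# figures below are invented for the example, not taken from the study.
def relative_risk_reduction(risk_exposed, risk_control):
    """1 - RR: the proportional drop in risk for the exposed group."""
    return 1 - (risk_exposed / risk_control)

# e.g. 3 allergies per 100 children in the early-exposure group versus
# 15 per 100 in the avoidance group works out to an 80% risk reduction
rrr = relative_risk_reduction(0.03, 0.15)
print(f"{rrr:.0%}")  # -> 80%
```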
Indeed this gives credence to the Hygiene Hypothesis which theorizes that the lack of early exposure to pathogens and infections is a likely cause for the increase in allergic responses that children develop. Whilst this doesn’t mean you should let your kids frolic in the sewers it does indicate that keeping them in a bubble likely isn’t protecting them as much as you might think. Indeed the old adage of letting kids be kids in this regard rings true as early exposure to these kinds of things will likely help more than harm. Of course the best course of action is to consult with your doctor and devise a good plan that mitigates overall risk, something which budding parents should be doing anyway.
It’s interesting to see how many of the conditions that plague us today are the results of our affluent status. The trade-offs we’ve made have obviously been for the better overall, as our increased lifespans can attest, however there seem to be aspects of it we need to temper if we want to overcome these once rare conditions. It’s great to see this kind of research bearing fruit as it means that further study into this area will likely become more focused and, hopefully, just as valuable as this study has proven to be.
Vaccines are responsible for preventing millions upon millions of deaths each year through the immunity they grant us to otherwise life-threatening diseases. Their efficacy and safety are undisputed (at least from a scientific perspective, which is the only one that matters honestly) and this mostly comes from the fact that they use our own immune system as the mechanism of action. A typical vaccine uses part of the virus to trigger the immune system into producing the right antibodies without the patient having to endure the potentially deadly symptoms the virus can cause. This response is powerful enough to provide immunity from those diseases, and so researchers have long looked for ways of harnessing the body’s natural defenses against other, more troubling conditions. A recent development could see vaccines used to treat a whole host of things you wouldn’t think would be possible.
Conditions that are currently considered terminal, like cancer, often stem from the body lacking the ability to mount a defensive response. For cancer this is because the cells themselves look the same as normal healthy cells, despite their tendency to reproduce in an uncontrolled fashion, which means that the immune system ignores them. These cells do have signatures that we can detect however and we can actually program people’s immune systems to register those cells as foreign, triggering an immune response. Unfortunately this treatment (which relies on extracting the patient’s white blood cells, turning them into dendritic cells and programming them with the tumour’s antigens) is expensive and of limited ongoing effectiveness. However the new treatment devised by researchers at the National Institute of Biomedical Imaging and Bioengineering uses a novel method which drastically increases its effectiveness and duration.
The vaccine they’ve created uses 3D nano structures which, when injected into a patient, form a sort of microscopic haystack (pictured above). These structures can be loaded with all sorts of compounds, however in this particular experiment they were loaded with the antigens found on a specific type of cancer cell. Once these rods have been injected they capture within them the dendritic cells that are responsible for triggering an immune response. The dendritic cells are then programmed with the cancer antigens and, when released, trigger a body-wide immune response. The treatment was highly effective in a mouse model, with a 90% survival rate for animals that would have otherwise died at 25 days.
The potential for this is quite staggering as it provides us with another avenue to elicit an immune response, one that appears to be far less invasive and more effective than the current alternatives. Of course such treatments are still likely years away from clinical trials but with such promising results in the mouse model I’m sure it will happen eventually. What will be interesting to see is if this method of delivery can be used for traditional vaccines as well, potentially paving the way for more vaccines to be administered in a single dose. I know it seems like every other week we come up with another cure for cancer but this one seems to have some real promise behind it and I can’t wait to see how it performs in us humans.
Vaccines are incredibly beneficial for two reasons. The first is the obvious one: for the individual receiving them they provide near-immunity to a whole range of horrendous diseases, many of which can prove fatal or have lifelong consequences for those who become infected. The risks associated with them are so small it’s hard to even connect them with the vaccines themselves; they’re far more likely to simply be background noise than anything else. Secondly, when a majority of the population is vaccinated, individuals who can’t be vaccinated (such as newborns), and even those idiots who simply choose not to be, gain the benefit of herd immunity. This prevents most diseases from spreading within a community, providing the benefits of vaccinations to those who don’t have them. However there’s a critical point where herd immunity stops working and that’s exactly what’s starting to happen in northern California.
A recent study conducted by researchers working for Kaiser Permanente analysed the vaccination records of some 154,000 individuals in the Northern California region. The records cover approximately 40% of the total insured individuals in the area, so the sample size is large enough to be representative of the larger whole. The findings are honestly quite shocking, showing multiple pockets of under-immunization (children not receiving the required number of vaccinations) which were significantly above the regional mean, on the order of 18 to 23% within a cluster. Worse still, the rate of vaccination refusal, where people declined any vaccinations at all, was up to 13.5%. It’s a minority of people but it’s enough to completely eradicate herd immunity for several horrible diseases.
For diseases like pertussis (whooping cough) and measles herd immunity may only kick in at a 95% vaccination rate, mostly due to how readily they spread from person to person. That means only 5% of the population has to forego these vaccinations before herd immunity fails, putting at-risk individuals directly in harm’s way. For other diseases herd immunity holds down to an 85% vaccination rate, a threshold some of the clusters were getting dangerously close to breaking. It’s clusters like this that are behind the resurgence of diseases which were effectively eradicated decades ago, something which is doing far more harm than any vaccine ever has.
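Those thresholds fall out of a standard back-of-the-envelope formula: if a disease has a basic reproduction number R0 (the average number of people one case infects in a fully susceptible population), herd immunity requires roughly 1 - 1/R0 of the population to be immune. A quick sketch, using rough textbook R0 values rather than figures from the study:

```python
# Herd immunity threshold: the fraction of the population that must be
# immune so that each case infects fewer than one other person on average.
# The R0 values below are rough textbook figures, not data from the study.
def herd_immunity_threshold(r0):
    return 1 - 1 / r0

for disease, r0 in {"measles": 15, "pertussis": 14, "rubella": 6}.items():
    print(f"{disease}: ~{herd_immunity_threshold(r0):.0%} coverage needed")
```

Highly contagious diseases like measles land in the low 90s, while something like rubella sits in the low 80s, which lines up with the ~95% and ~85% figures above.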
It all comes down to the misinformation spread by several notable public figures that vaccinations are somehow linked to other conditions. It’s been conclusively proven again and again that vaccines have no link to any of these conditions and the side effects from a vaccination rarely amount to more than a sore arm or a fever. It’s one thing to make a decision that affects only yourself, but the choice not to vaccinate isn’t one of them: it puts many other individuals at risk, most of whom cannot do anything to change their situation. You can, however, and the choice not to is so incredibly selfish I can’t begin to explain my frustration with it.
Hopefully one day reason will prevail over popularity when it comes to things like this. It’s infuriating to think that people are putting both themselves and others at risk just because some celebrity told them that vaccines were doing them more harm than good when the reality is nothing like that. I know I’ve beaten this horse several times since it died but it seems the bounds of human stupidity are indeed limitless and if I can make even just a small difference in those figures then I feel compelled to do so. You should too, as the anti-vaxxers need a good and thorough flogging with the facts, one that shouldn’t stop until they realise the error of their ways.
All life as we know it has one basic need: water. The amount of water required to sustain life is a highly variable thing, from creatures that live out their whole lives in our oceans to others who can survive for months at a time without a single drop of water. However it would be short-sighted of us to think that water was the be-all and end-all of life in our universe, as such broad assumptions have rarely panned out to be true under sustained scrutiny. That does leave us with the rather puzzling question of what environments and factors are required to give rise to life, something we don’t have a good answer to since we haven’t yet created life ourselves. What we can do is study how known biological processes function in other environments and whether those environments might be viable places for life to arise.
Researchers at Washington State University have been investigating fluids that could potentially take the place of water in life on other planets. Water has a lot of properties that make it conducive to producing life (as we know it), like dissolving minerals and forming bonds. The theory goes that should a liquid have similar properties to water then, potentially, an environment rich in that substance could give rise to life that uses it as its base rather than water. Of course finding something with those exact properties is a tricky endeavour but these researchers may have stumbled onto an unlikely candidate.
Most people are familiar with the triple point of a substance: the combination of temperature and pressure at which all three of its states (solid, liquid, gas) can coexist, where a slight change in either can flip it instantly between them. Above that lies another transition, the critical point, where the properties of the gaseous and liquid phases of the substance converge to produce a supercritical fluid. For carbon dioxide this results in a substance that behaves like a gas but with the density of its liquid form, a rather peculiar state of matter. It’s this form of carbon dioxide that the researchers believe could replace water as the fluid of life elsewhere, potentially supporting life that’s even more efficient than what we find here.
Specifically they looked at how enzymes behaved in supercritical CO2 and found that they were far more stable than the same enzymes residing in water. Additionally the enzymes became far more selective about the molecules they bound to, making the overall process far more efficient than it otherwise would have been. Perhaps the most interesting finding was how tolerant organisms were of this kind of fluid, as several bacteria and their enzymes were found to be present in it. Whilst this isn’t definitive proof that life could use supercritical CO2 as a replacement for water it does lend credence to the idea that life could arise in places where water is absent.
Of course whether that life would look like anything we’d recognise is something that we won’t really know for a long time to come. An atmosphere of supercritical CO2 would likely be an extremely hostile place for our kind of life, more akin to Venus than our comfortable Earth, making exploration quite difficult. Still this idea greatly expands our concept of what life might be and what might give rise to it, a question we’ve viewed through an incredibly inward-looking lens for far too long. I have little doubt that one day we’ll find life not as we know it, I’m just not sure if we’ll know it when we see it.
There’s no doubt that the media we consume has an effect on us, the entire advertising and marketing industry is built upon that premise, however just how big that impact can be has always been a subject of debate. Most notably the last few decades have been littered with debate around how much of an impact violent media has on us and whether it’s responsible for some heinous acts committed by those who have consumed it. In the world of video games there have been dozens of lab-controlled studies showing that consumption of violent games leads towards more aggressive behaviour, but a link to actual acts of violence could not be drawn. Now researchers from Stetson University have delved into the issue and there doesn’t appear to be a relationship between the two at all.
The study, which was a retrospective analysis of reports of violence and the availability of violent media, was broken down into two parts. The first part of the study focused on homicide rates and violence in movies between 1920 and 2005 using independent data sources. The second then focused on incidents of violence in video games using the ESRB ratings from 1996 to 2011 and correlated them with rates of youth violence over the same period. Both periods of study found no strong correlation between violence in media and acts of actual violence, except for a brief period in the early 90s (although the trend quickly swung back the other way, indicating the result was likely unrelated).
Violent video games are often a convenient target for those looking for something to blame but the relationship between them and acts of violence often runs the other way. It’s not that violent video games cause people to commit these acts; rather, those who are likely to commit these acts are also likely to engage with other violent media. Had games been the cause there would have been a distinct correlation between the availability of violent media and acts of real world violence but, as the study shows, there’s simply no relationship between them at all.
Hopefully now the conversation will shift from video games causing violence (or other antisocial behaviours) to a more nuanced discussion around the influences games can have on our attitudes, behaviours and thought processes. There’s no doubt that we’re shaped by the media we consume however the effects are likely much more subtle than most people would like to think they are. Once these more subtle influences are understood we can then work towards curtailing any negative aspects that they might have rather than using a particular medium as a scapegoat for deplorable behaviour.
For much of my childhood people told me I was smart. Things that frustrated other kids, like maths, seemed to come easy to me and this led to many people praising my ability. I never felt particularly smart, I mean there were dozens of other kids who were far more talented than I was, but at that age it’s hard to deny the opinions of adults, especially the ones who raised you. This led to an unfortunate misconception that stayed with me until after I left university: the idea that my abilities were fixed and that anything I found hard or difficult was simply beyond my ability. It’s only been since then, some 8 years or so, that I learnt that any skill or problem is within my capability, should I be willing to put the effort in.
It’s a theme that will likely echo among many of my generation as we grew up with parents who were told that positive reinforcement was the way to make your child succeed in the world. It’s only now, after decades of positive reinforcement failing to produce the outcomes it promised, that we’re beginning to realise the folly of our ways. Much of the criticism of our generation focuses on this aspect, that we’re too spoilt, too demanding when compared to previous generations. If there’s one good thing to come out of this however it’s that research has shown that praising a child’s ability isn’t the way to go; you should praise them for the process they go through.
Indeed once I realised that things like skills, abilities and intelligence were primarily a function of the effort and process you went through to develop them I was suddenly greeted with a world of achievable goals rather than roadblocks. At the same time I grew to appreciate those at the peak of their abilities as I knew the amount of effort they had put in to develop those skills which allowed them to excel. Previously I would have simply dismissed them as being lucky, winning the genetic lottery that gave them all the tools they needed to excel in their field whilst I languished in the background.
It’s not a silver bullet however as the research shows the same issues with positive reinforcement arise if process praise is given too often. The nuances are also unknown at this point, like how often you should give praise and in what fashion, but the research does show that giving process praise in moderation has long lasting benefits. I’d be interested to see how well this translates to adults as well since my experience has been vastly positive once I made the link between effort and results. I can’t see it holding true for everyone, as most things don’t in this regard, but if it generally holds then I can definitely see a ton of benefits from it being implemented.
If you’ve ever spent a decent amount of time playing a MMORPG chances are you’ve come up against the terror that is the Random Number Generator (RNG). No matter how many times you run a dungeon to get that one item to complete your set or kill that particular mob to get that item you need to complete that quest it just never seems to happen. However, sometimes, everything seems to go your way and all your Christmases seem to come at once and the game has you in its grasp again. Whilst RNGesus might be a cruel god to many he’s the reason that many of us keep coming back and now there’s solid science to prove it.
It’s long been known that random rewards are seen as more rewarding than those that are given consistently. Many online games, notably those on social networks, have tapped into that mechanic in order to keep users engaged far longer than they would have otherwise. Interestingly though this seems to run contrary to what many players will tell you, often saying that they’d prefer a guaranteed reward after a certain amount of effort or time committed. As someone who’s played through a rather large number of games that utilize both mechanics I can tell you that both types of systems will keep me returning, however nothing beats the rush of finding a really good item from the hands of RNGesus.
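The cruelty of a flat drop chance is easy to quantify: with probability p per run, the chance of seeing the item at least once in n runs is 1 - (1 - p)^n, and on average you’ll need 1/p runs. A quick sketch (the 5% drop chance is just an illustrative figure):

```python
# Why RNGesus feels so cruel: even with a decent number of runs, a flat
# low drop chance leaves a real possibility of walking away empty-handed.
def chance_of_drop(p, runs):
    """Probability of at least one drop in `runs` independent attempts."""
    return 1 - (1 - p) ** runs

p = 0.05  # a hypothetical 5% drop chance per dungeon run
print(f"expected runs to first drop: {1 / p:.0f}")       # -> 20
print(f"chance after 20 runs: {chance_of_drop(p, 20):.0%}")  # -> 64%
```

So even after the "expected" 20 runs, more than a third of players still won’t have the item, which is exactly the long dry spell that makes the eventual drop feel so good.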
Indeed my experience seems to line up with the recent study published by the University of Chicago which shows that people are more motivated by random rewards than they are by consistent ones. It sounds quite counter-intuitive when you think about it, I mean who would take a random reward over a guaranteed one, but the effect remains consistent across the multiple experiments that they conducted. Whilst the mechanism behind this isn’t exactly known it’s speculated that randomness leads to excitement, much like how the infinitesimally small odds of winning the lottery are irrelevant to the enjoyment some people derive from playing it.
However the will of RNGesus needs to be given a guiding hand sometimes to ensure that he’s not an entirely cruel god. Destiny’s original loot system was a pretty good example of this as you could be blessed with a great drop only to have the reveal turn it into something much less than you’d expect. Things like this can easily turn people off games (and indeed I think this is partly responsible for the middling reviews it received at launch) so there needs to be a balance struck so players don’t feel hard done by.
I’d be very interested to see the effect of random rewards that eventually become guaranteed (i.e. pseudo-random rewards). World of Warcraft implemented a system like this for quest drops a couple of years ago and it was received quite well. This went hand in hand with their guaranteed reward systems (tokens/valor/etc.) which have also been met with praise. Indeed I believe the mix of these two systems, random rewards with guaranteed systems on the side, seems to be the best way to keep players coming back. I definitely know I feel more motivated to play when I’m closer to a guaranteed reward which can, in turn, lead to more random ones.
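That kind of system is often called "bad luck protection" or a pity timer. Here’s a minimal sketch of the general idea (my own illustration, not Blizzard’s actual implementation, and the chance values are made up): each failed roll bumps the drop chance until it eventually hits 100%, so the reward stays random but is never withheld forever.

```python
import random

# A minimal "bad luck protection" sketch: the drop chance climbs with
# every failed attempt until it's guaranteed, then resets on success.
def pity_roll(base_chance, bump, failures, rng=random.random):
    """Return (dropped, new_failure_count) for one attempt."""
    chance = min(1.0, base_chance + bump * failures)
    if rng() < chance:
        return True, 0          # reward drops, pity counter resets
    return False, failures + 1  # bad luck, pity counter grows

# Demo: with a 5% base chance and a 5% bump per failure, the drop
# is guaranteed within 20 runs no matter how unlucky you are.
failures = 0
for run in range(1, 21):
    dropped, failures = pity_roll(0.05, 0.05, failures)
    if dropped:
        print(f"dropped on run {run}")
        break
```

The appeal is that it preserves the excitement of a random drop while putting a hard cap on the dry spell, which is exactly the mix of random and guaranteed systems described above.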
It’s always interesting to investigate these non-intuitive behaviours as it can give us insight into why we humans act in seemingly irrational ways when it comes to certain things. We all know we’re not strict rational actors, nor are we perfect logic machines, but counter-intuitive behaviour is still quite a perplexing field of study. At least we’ve got definitive proof now that random rewards are both more rewarding and more motivating than their consistent brethren although how that knowledge will help the world is an exercise I’ll leave up to the reader.
For all of my working life I pined for the ability to do my work from wherever I chose. It wasn’t so much that I wanted to work in my trackies, only checking email whenever it suited; rather, I wanted to avoid wasting hours of my day travelling to and from the office when I could just as easily do the work remotely. Last year, when I permanently joined the company I had been contracting to the year before, I was given such an opportunity and have spent probably about half the working year since at home. For me it’s been a wonderfully positive experience and, to humblebrag for a bit, my managers have been thoroughly impressed with my quality of work. Whilst I’ve always believed this would be the case I never had much hard evidence to back it up, but new research in this field backs up my conclusions.
Researchers at the University of Illinois created a framework to analyse telecommuting employees’ performance. They then used this to gain insight into data taken from 323 employees and their corresponding supervisors. The results showed a very small positive effect for telecommuting workers, whose performance was the same as or slightly better than those working in the office. Perhaps most intriguingly they found that the biggest benefit appeared when employees didn’t have the best relationship with their superiors, indicating that granting flexible working arrangements could be seen as something of an olive branch to smooth over employee relations. However the most important takeaway is that no negative relationship between telecommuting and work performance was found, showing that employees working remotely can be just as effective as their in-office counterparts.
As someone who’s spent a great deal of time working from various different places (not just at home) with other people in a similar situation I have to say that my experience matches up with the research pretty well. I tend to be available for much longer periods of time, simply because it’s easier to be, and it’s much easier to focus on a particular task for an extended period when the distractions of the office aren’t present. Sure, after a while you might start to wonder if you’ll be able to handle human contact again (especially after weeks of conference calls) but it’s definitely something I think every employer should offer, if they have the capability to.
It also flies in the face of Marissa Mayer’s decision to outright ban all telecommuting at Yahoo last year, citing performance concerns. Whilst I don’t disagree with the idea that telecommuting isn’t for everyone (I know a few people who’d likely end up like this) removing it as an option is incredibly short-sighted. Sure, there’s value to be had in face time, however if performance won’t suffer, offering flexible working arrangements like telecommuting can generate an awful lot of goodwill with your employees. I know that I’m far more likely to stick around with my current company thanks to their stance on this, even if I probably won’t be able to take advantage of it fully for the next couple of years.
Hopefully studies like this keep getting published as telecommuting is fast becoming something that shouldn’t have to be done by exception. Right now it might be something of a novelty but the technology has been there for years and it’s high time that more companies started to make better use of it. They might just find it easier to hold on to more employees if they did and, potentially, even attract better talent because of it. I know it will take time though as we’re still wrestling with the 40 hour work week, a hangover from over 150 years ago, even though we’ve long since passed the time when everyone worked in factories.
One day though, one day.
The Sailing Stones of Death Valley have been a scientific curiosity for numerous decades. These rocks seemingly spring to life at various times throughout the year, blazing long trails across the desert floor before coming back to rest. Whilst there have been numerous theories as to what causes this movement, ranging from the plausible to the downright insane, no one had managed to verify just what exactly was going on with these strange rocks. Well now, thanks to researchers at the Scripps Institution of Oceanography, we have video evidence of just what’s causing this to happen and it’s pretty fascinating.
The video largely supports the theory put forth by Ralph Lorenz some years ago whereby the rocks are trapped within ice sheets which are then moved by the prevailing winds. What’s interesting about this video is that it shows why the previous experiments, which were largely inconclusive as to ice sheets being responsible, produced the data that they did. It also shows why there seem to be similarities between some movements whilst others seem to be completely random. Pretty much all of these can now be explained by the ice sheets breaking up and bumping off each other, leading to the wide variety of patterns and behaviours.
Like the video says this might not be the most exciting experiment to conduct, however it’s always interesting when a long-standing phenomenon like this finally gets explained. We might not be able to use this knowledge to further other research or develop some novel product, however as we begin to explore further out into our universe knowledge of strange things like this becomes incredibly valuable. When we see phenomena like this elsewhere we’ll be able to deduce that similar processes are at work there and thus further our understanding of the places we explore.
The demand for transplant organs has always outstripped supply and this has pushed the science in some pretty amazing directions. Indeed one of the most incredible advances is the ability to strip away host tissue from organs, leaving behind an organ scaffold that we can then regrow with the recipient’s own cells. This drastically reduces the chance of rejection and hopefully avoids the patient having to take harsh anti-rejection drugs. However such a process still relies on a donor organ, which leaves us with the supply problem to deal with. Whilst we’ve made some advances in creating parts of organs (some even done with biomedical 3D printers) growing a full organ has still proven elusive.
That is until recently.
Researchers at the University of Edinburgh have, for the first time, managed to grow a fully functioning organ within a mouse using only a single injection. The organ they created was the thymus, which plays a critical role in the production of T-cells. These cells are responsible for hunting down cells in your body that are showing abnormalities or signs of infection and then eradicating them. What’s so incredible about this recent achievement is that the functional thymus developed after the injection of modified cells, requiring none of the additional work that’s previously been associated with creating functional organs.
The process starts off with cells from a mouse embryo, which from what I can gather were likely embryonic stem cells, which were then genetically programmed to form into a type of cell found in the thymus. These, along with supporting cells, were then injected into the mice and the resultant cells developed into a fully functioning thymus. Interestingly though this didn’t seem to be the outright goal of the program as the researchers themselves stated that the result was surprising. Indeed whilst it’s been theorized that stem cells could be used in this manner it was never thought to be as straightforward as this, and with these results further research is definitely on the table.
Whilst this research is still many years away from being useful in human models it does pave the way for research into how far this particular method can be applied. The thymus is a relatively simple organ when compared to others in the body so the next steps will be to see if this same process can be used to replicate them. If a liver or heart can be reproduced in this manner then this has the potential to completely solve the transplant organ supply issue, allowing patients (or a surrogate) to grow their own organs for transplants. There’s a lot of research to be done before that happens, but this latest advance is incredibly promising.