Vaccines are incredibly beneficial for two reasons. The first is the obvious one: for the individual receiving them they provide near-immunity to a whole range of horrendous diseases, many of which can prove fatal or have lifelong consequences for those who become infected. The risks associated with vaccines are so small that it’s hard to even attribute them to the vaccines themselves; they’re far more likely to simply be background noise than anything else. Secondly, when a majority of the population is vaccinated, individuals who can’t be vaccinated (such as newborns) or those who simply choose not to still gain the benefit of herd immunity. This prevents most diseases from spreading within a community, extending the benefits of vaccination to those who don’t have them. However there’s a critical point where herd immunity stops working and that’s exactly what’s starting to happen in northern California.
A recent study conducted by researchers working for Kaiser Permanente analysed the vaccination records of some 154,000 individuals in the Northern California region. The records cover approximately 40% of the total insured individuals in the area so the sample size is large enough for it to be representative of the larger whole. The findings are honestly quite shocking, showing that there were multiple pockets of under-immunization (children not receiving the required number of vaccinations) which were significantly above the regional mean, on the order of 18 to 23% within a cluster. Worse still, the rate of vaccination refusal, where people declined any vaccinations at all, was up to 13.5%. It’s a minority of people but it’s enough to completely undermine herd immunity for several horrible diseases.
For diseases like pertussis (whooping cough) and measles herd immunity may only kick in at a vaccination rate of around 95%, mostly due to how readily they spread from person to person. That means only 5% of the population has to forego these vaccinations before herd immunity fails, putting at-risk individuals directly in harm’s way. Other diseases maintain herd immunity down to vaccination rates of around 85%, a threshold some of the clusters were getting dangerously close to breaking. It’s clusters like this that are behind the resurgence of diseases which were effectively eradicated decades ago, something which is doing far more harm than any vaccine ever has.
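Those thresholds fall out of a standard epidemiological rule of thumb: the herd immunity threshold is 1 − 1/R0, where R0 is the basic reproduction number (the average number of people one infected person goes on to infect in a fully susceptible population). A quick sketch, with R0 values that are rough literature ranges I’ve plugged in for illustration rather than figures from the study:

```python
# Herd immunity threshold from the textbook approximation HIT = 1 - 1/R0.
# R0 values below are rough, commonly cited ranges used purely for illustration.
def herd_immunity_threshold(r0: float) -> float:
    """Fraction of the population that must be immune to halt spread."""
    return 1.0 - 1.0 / r0

for disease, r0 in [("measles", 15.0), ("pertussis", 14.0), ("polio", 6.0)]:
    print(f"{disease}: herd immunity threshold ~ {herd_immunity_threshold(r0):.0%}")
```

Highly contagious diseases like measles (R0 well above 10) land at thresholds above 90%, which is why even a small cluster of refusals is enough to break herd immunity for them.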
It all comes down to the misinformation spread by several notable public figures claiming that vaccinations are somehow linked to other conditions. It’s been conclusively proven again and again that vaccines have no link to any of these conditions and the side effects from a vaccination rarely amount to more than a sore arm or a fever. It’s one thing to make a decision that only affects yourself but the choice not to vaccinate isn’t one of those: it puts many other individuals at risk, most of whom cannot do anything to change their situation. You can, however, and the choice not to is so incredibly selfish that I can’t begin to explain my frustration with it.
Hopefully one day reason will prevail over popularity when it comes to things like this. It’s infuriating to think that people are putting both themselves and others at risk just because some celebrity told them that vaccines were doing them more harm than good when the reality is nothing like that. I know I’ve beaten this horse several times since it died but it seems the bounds of human stupidity are indeed limitless and if I can make even just a small difference in those figures then I feel compelled to do so. You should too, as the anti-vaxxers need a good and thorough flogging with the facts, one that shouldn’t stop until they realise the error of their ways.
All life as we know it has one basic need: water. The amount of water required to sustain life is highly variable, from creatures that live out their whole lives in our oceans to others who can survive for months at a time without a single drop. However it would be short-sighted of us to think that water was the be-all and end-all of life in our universe, as such broad assumptions have rarely panned out under sustained scrutiny. That does leave us with the rather puzzling question of what environments and factors are required to give rise to life, something we don’t have a good answer to since we haven’t yet created life ourselves. What we can do is study how known biological processes function in other environments and whether those might be viable places for life to arise.
Researchers at the Washington State University have been investigating the possibility of fluids that could potentially take the place of water in life on other planets. Water has a lot of properties that make it conducive to producing life (as we know it) like dissolving minerals, forming bonds and so on. The theory goes that should a liquid have similar properties to that of water then, potentially, an environment rich in said substance could give rise to life that uses that liquid as its base rather than water. Of course finding something with those exact properties is a tricky endeavour but these researchers may have stumbled onto an unlikely candidate.
Most people are familiar with the triple point of a substance, the point where a slight change in pressure or temperature can flip it between any one of its three states (solid, liquid, gas) almost instantly. Above that, however, lies another transition, the critical point, beyond which the properties of the gaseous and liquid phases converge to produce a supercritical fluid. For carbon dioxide this results in a substance that behaves like a gas but with the density of its liquid form, a rather peculiar state of matter. It’s this form of carbon dioxide that the researchers believe could replace water as the fluid of life elsewhere, potentially supporting life that’s even more efficient than what we find here.
Specifically they looked at how enzymes behaved in supercritical CO2 and found that they were far more stable than the same enzymes residing in water. Additionally the enzymes became far more selective about the molecules they bound to, making the overall process far more efficient than it otherwise would have been. Perhaps the most interesting finding was just how tolerant organisms were of this kind of fluid, with several bacteria and their enzymes found to be present in it. Whilst this isn’t definitive proof that life could use supercritical CO2 as a replacement for water it does lend credence to the idea that life could arise in places where water is absent.
Of course whether that life would look like anything we’d recognise is something that we won’t really know for a long time to come. An atmosphere of supercritical CO2 would likely be an extremely hostile place to our kind of life, more akin to Venus than our comfortable Earth, making exploration quite difficult. Still this idea greatly expands our concept of what life might be and what might give rise to it, something which has had an incredibly inward view for far too long. I have little doubt that one day we’ll find life not as we know it, I’m just not sure if we’ll know it when we see it.
There’s no doubt that the media we consume has an effect on us, the entire advertising and marketing industry is built upon that premise, however just how big that impact can be has always been a subject of debate. Most notably the last few decades have been littered with debate around how much of an impact violent media has on us and whether it’s responsible for some heinous acts committed by those who have consumed it. In the world of video games there have been dozens of lab-controlled studies showing that consumption of violent games leads towards more aggressive behaviour, but the link to actual acts of violence could never be drawn. Now researchers from Stetson University have delved into the issue and there doesn’t appear to be a relationship between the two at all.
The study, which was a retrospective analysis of reports of violence and the availability of violent media, was broken down into two parts. The first part of the study focused on homicide rates and violence in movies between 1920 and 2005 using independent data sources. The second then focused on incidents of violence in video games using the ESRB ratings from 1996 to 2011 and correlated them with rates of youth violence over the same period. Both periods of study found no strong correlation between violence in media and acts of actual violence, except for a brief period in the early 90s (although the trend quickly swung back the other way, indicating the result was likely unrelated).
Violent video games are often used as an outlet for those looking for something to blame but often the relationship between them and the act of violence is completely backwards. It’s not that violent video games are causing people to commit these acts; instead those who are likely to commit these acts are also likely to engage in other violent media. Had games actually been the cause then there would have been a distinct correlation between the availability of violent media and acts of real world violence but, as the study shows, there’s simply no relationship between them at all.
Hopefully now the conversation will shift from video games causing violence (or other antisocial behaviours) to a more nuanced discussion around the influences games can have on our attitudes, behaviours and thought processes. There’s no doubt that we’re shaped by the media we consume however the effects are likely much more subtle than most people would like to think they are. Once these more subtle influences are understood we can then work towards curtailing any negative aspects that they might have rather than using a particular medium as a scapegoat for deplorable behaviour.
For much of my childhood people told me I was smart. Things that frustrated other kids, like maths, seemed to come easily to me and this led to many people praising my ability. I never felt particularly smart, I mean there were dozens of other kids who were far more talented than I was, but at that age it’s hard to deny the opinions of adults, especially the ones who raised you. This led to an unfortunate misconception that stayed with me until after I left university: the idea that my abilities were fixed and that anything I found hard or difficult was simply beyond my ability. It’s only been since then, some 8 years or so, that I learnt that any skill or problem is within my capability, should I be willing to put the effort in.
It’s a theme that will likely echo among many of my generation as we grew up with parents who were told that positive reinforcement was the way to make your child succeed in the world. It’s only now, after decades of positive reinforcement failing to produce the outcomes it promised, that we’re beginning to realise the folly of our ways. Much of the criticism of our generation focuses on this aspect, that we’re too spoilt, too demanding compared to previous generations. If there’s one good thing to come out of this however it’s that research has shown that praising a child’s ability isn’t the way to go; you should praise them for the process they go through.
Indeed once I realised that things like skills, abilities and intelligence were primarily a function of the effort and process you went through to develop them I was suddenly greeted with a world of achievable goals rather than roadblocks. At the same time I grew to appreciate those at the peak of their abilities as I knew the amount of effort they had put in to develop those skills which allowed them to excel. Previously I would have simply dismissed them as being lucky, winning the genetic lottery that gave them all the tools they needed to excel in their field whilst I languished in the background.
It’s not a silver bullet however as the research shows the same issues with positive reinforcement arise if process praise is given too often. The nuances are also unknown at this point, like how often you should give praise and in what fashion, but the research does show that giving process praise in moderation has long-lasting benefits. I’d be interested to see how well this translates to adults as well since my experience has been vastly positive once I made the link between effort and results. I can’t see it holding true for everyone, as most things don’t in this regard, but if it generally holds then I can definitely see a ton of benefits from it being implemented.
If you’ve ever spent a decent amount of time playing an MMORPG chances are you’ve come up against the terror that is the Random Number Generator (RNG). No matter how many times you run a dungeon to get that one item to complete your set, or kill that particular mob for the item you need to finish a quest, it just never seems to happen. However, sometimes, everything goes your way, all your Christmases seem to come at once and the game has you in its grasp again. Whilst RNGesus might be a cruel god to many he’s the reason that many of us keep coming back and now there’s solid science to prove it.
It’s long been known that random rewards are seen as more rewarding than those that are given consistently. Many online games, notably those on social networks, have tapped into that mechanic in order to keep users engaged far longer than they would have otherwise. Interestingly though this seems to run contrary to what many players will tell you, often saying that they’d prefer a guaranteed reward after a certain amount of effort or time committed. As someone who’s played through a rather large number of games that utilize both mechanics I can tell you that both types of systems will keep me returning however nothing beats the rush of receiving a really good item from the hands of RNGesus.
Indeed my experience seems to line up with the recent study published by the University of Chicago which shows that people are more motivated by random rewards than they are by consistent ones. It sounds quite counter-intuitive when you think about it, I mean who would take a random reward over a guaranteed one, but the effect remains consistent across the multiple experiments that they conducted. Whilst the mechanism behind this isn’t exactly known it’s speculated that randomness leads to excitement, much like the infinitesimally small odds of winning the lottery are irrelevant to the enjoyment some people derive from playing it.
However the will of RNGesus needs to be given a guiding hand sometimes to ensure that he’s not an entirely cruel god. Destiny’s original loot system was a pretty good example of this as you could be blessed with a great drop only to have the reveal turn it into something much less than you’d expect. Things like this can easily turn people off games (and indeed I think this is partly responsible for the middling reviews it received at launch) so there needs to be a balance struck so players don’t feel hard done by.
I’d be very interested to see the effect of random rewards that eventually become guaranteed (i.e. pseudo-random rewards). World of Warcraft implemented a system like this for quest drops a couple of years ago and it was received quite well. This went hand in hand with their guaranteed reward systems (tokens/valor/etc.) which have also been met with praise. Indeed I believe the mix of these two systems, random rewards with guaranteed systems on the side, seems to be the best way to keep players coming back. I definitely know I feel more motivated to play when I’m closer to a guaranteed reward which can, in turn, lead to more random ones.
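To make the idea concrete, here’s a minimal sketch of one way a “pity timer” style pseudo-random reward could work: pure random rolls, but with a drop guaranteed after a capped number of consecutive misses. This is purely illustrative (the 5% base chance and cap of 20 are made-up numbers, and it’s not how WoW actually implements bad-luck protection):

```python
import random

def pity_drop(base_chance: float, pity_cap: int):
    """Generator of drop results: random rolls at base_chance, but a drop
    is guaranteed once pity_cap consecutive misses have accumulated."""
    misses = 0
    while True:
        dropped = random.random() < base_chance or misses >= pity_cap
        misses = 0 if dropped else misses + 1
        yield dropped

random.seed(42)
rolls = pity_drop(0.05, pity_cap=20)

# Track how many attempts each drop took over 10,000 rolls.
attempts_per_drop, count = [], 0
for _ in range(10_000):
    count += 1
    if next(rolls):
        attempts_per_drop.append(count)
        count = 0

# Dry streaks are capped: no drop ever takes more than pity_cap + 1 attempts.
print(max(attempts_per_drop))
```

The appeal of a scheme like this is that it keeps the thrill of the random roll while putting a hard floor under the worst-case dry streak, which is exactly the balance the paragraph above is after.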
It’s always interesting to investigate these non-intuitive behaviours as it can give us insight into why we humans act in seemingly irrational ways when it comes to certain things. We all know we’re not strict rational actors, nor are we perfect logic machines, but counter-intuitive behaviour is still quite a perplexing field of study. At least we’ve got definitive proof now that random rewards are both more rewarding and more motivating than their consistent brethren although how that knowledge will help the world is an exercise I’ll leave up to the reader.
For all of my working life I pined for the ability to do my work from wherever I chose. It wasn’t so much that I wanted to work in my trackies, only checking email whenever it suited; more that I wanted to avoid wasting hours of my day travelling to and from the office when I could just as easily do the work remotely. Last year, when I permanently joined the company I had been contracting to the year previous, I was given such an opportunity and have spent probably about half the working year since at home. For me it’s been a wonderfully positive experience and, to humblebrag for a bit, my managers have been thoroughly impressed with my quality of work. Whilst I’ve always believed this would be the case I never had much hard evidence to back it up but new research in this field backs up my conclusions.
Researchers at the University of Illinois created a framework to analyse telecommuting employees’ performance. They then used this to gain insight into data taken from 323 employees and their corresponding supervisors. The results showed a very small, positive effect for the telecommuting workers, showing that their performance was the same or slightly better than those who were working in the office. Perhaps most intriguingly they found that the biggest benefit was shown when employees didn’t have the best relationship with their superiors, indicating that granting flexible working arrangements could be seen as something of an olive branch to smooth over employee relations. However the most important takeaway from this is that no negative relationship between telecommuting and work performance was found, showing that employees working remotely can be just as effective as their in-office counterparts.
As someone who’s spent a great deal of time working from various different places (not just at home) with other people in a similar situation I have to say that my experience matches up with the research pretty well. I tend to be available for much longer periods of time, simply because it’s easier to be, and it’s much easier to focus on a particular task for an extended period when the distractions of the office aren’t present. Sure, after a while you might start to wonder if you’ll be able to handle human contact again (especially after weeks of conference calls) but it’s definitely something I think every employer should offer, if they have the capability to.
It also flies in the face of Marissa Mayer’s decision to outright ban all telecommuting at Yahoo last year, citing performance concerns. Whilst I don’t disagree with the idea that telecommuting isn’t for everyone (I know a few people who’d likely end up like this) removing it as an option is incredibly short-sighted. Sure, there’s value to be had in face time, however if employees’ performance won’t suffer then offering them flexible working arrangements like telecommuting can generate an awful lot of goodwill. I know that I’m far more likely to stick around with my current company thanks to their stance on this, even if I probably won’t be able to take advantage of it fully for the next couple of years.
Hopefully studies like this keep getting published as telecommuting is fast becoming something that shouldn’t have to be done by exception. Right now it might be something of a novelty but the technology has been there for years and it’s high time that more companies started to make better use of it. They might just find it easier to hold on to more employees if they did and, potentially, even attract better talent because of it. I know it will take time though as we’re still wrestling with the 40 hour work week, a hangover from over 150 years ago, even though we’ve long since passed the time where everyone worked in factories.
One day though, one day.
The Sailing Stones of Death Valley have been a scientific curiosity for decades. These rocks seemingly spring to life at various times throughout the year, blazing long trails across the desert floor before coming back to rest. Whilst there have been numerous theories as to what causes this movement, ranging from the plausible to the downright insane, no one had managed to verify just what exactly was going on with these strange rocks. Well now, thanks to researchers at the Scripps Institution of Oceanography, we have evidence of just what’s causing this to happen and it’s pretty fascinating.
The video largely supports the theory put forth by Ralph Lorenz some years ago whereby the rocks are trapped within ice sheets which are then moved by the prevailing winds. What’s interesting about this video is that it shows why the previous experiments, which were largely inconclusive as to ice sheets being responsible, produced the data that they did. It also shows why there seem to be similarities between some movements whilst others seem to be completely random. Pretty much all of these can now be explained by the ice sheets breaking up and bumping into each other, leading to the wide variety of patterns and behaviours.
Like the video says this might not be the most exciting experiment to conduct however it’s always interesting when a long-standing phenomenon like this finally gets explained. We might not be able to use this knowledge to further other research or develop some novel product, however as we begin to explore further out into our universe knowledge of strange things like this becomes incredibly valuable. When we see phenomena like this elsewhere we’ll be able to deduce that similar processes are in action over there and thus further our understanding of the places we explore.
The need for transplant organs has always outstripped the supply and this has pushed the science in some pretty amazing directions. Indeed one of the most incredible advances is the ability to strip away host tissue from organs, leaving behind an organ scaffold, that we can then regrow with the recipient’s own cells. This drastically reduces the chance of rejection and hopefully spares the patient from having to take harsh anti-rejection drugs. However such a process still relies on a donor organ, which leaves us with the supply problem to deal with. Whilst we’ve made some advances in creating parts of organs (some even done with biomedical 3D printers) growing a full organ has proven elusive.
That is until recently.
Researchers at the University of Edinburgh have, for the first time, managed to grow a fully functioning organ within a mouse using only a single injection. The organ that they created was the thymus, an organ that plays a critical role in the production of T-cells. These cells are the ones that are responsible for hunting down cells in your body that are either showing abnormalities or signs of infection and then eradicating them. What’s so incredible about this recent achievement is that the functional thymus developed after the injection of modified cells, requiring none of the additional work that’s previously been associated with creating functional organs.
The process starts off with cells from a mouse embryo, which from what I can gather were likely embryonic stem cells, which were then genetically programmed to form into a type of cell found in the thymus. These, along with supporting cells, were then injected into the mice and the resultant cells developed into a fully functioning thymus. Interestingly this didn’t seem to be the outright goal of the program as the researchers themselves stated that the result was surprising. Indeed whilst it’s been theorized that stem cells could be used in this manner it was never thought to be as straightforward as this and with these results further research is definitely on the table.
Whilst this research is still many years away from being useful in humans it does pave the way for research into how far this method can be applied. The thymus is a relatively simple organ compared to others in the body so the next steps will be to see if this same process can be used to replicate them. If, say, a liver or heart can be reproduced in this manner then this has the potential to completely solve the transplant organ supply issue, allowing patients (or a surrogate) to grow their own organs for transplant. There’s a lot of research to be done before that happens but this latest advance is incredibly promising.
Many moons ago, when I was still a poor uni student working multiple jobs to make ends meet, I remember one of my fellow childcare workers would rarely be seen outside without his sunglasses. It became something of a recurring joke as, even when it wasn’t particularly bright outside, he’d be sporting them. He later explained that he kept them on constantly because sunlight would make him sneeze and indeed upon taking them off he proceeded to prove his point. I had always thought that explanation was most likely bunk however a couple of years later I started to develop similar symptoms. Whilst the coincidence felt undeniable to me I had never really looked into it, until I saw this video:
Unfortunately whilst it’s a well established phenomenon the lack of a known cause is a little bit disappointing. I mean it’s not exactly a debilitating condition, all it takes is a half decent pair of sunglasses to negate any effect the sun might have in this regard, but still it seems like something that should have a simple explanation. Alas investigating why sunlight makes people sneeze probably isn’t the sexiest of research topics so I’m not holding my breath for any scientific breakthroughs.
All this talk of sneezing has made my nose itch; maybe I should pop outside to clear it out.
Venus is probably the most peculiar planet in our solar system. If you were observing it from far away you’d probably think that it was a twin of Earth, and for the most part you’d be right, but we know that it’s nothing like the place we call home. Its atmosphere is a testament to the devastation that can be wrought by runaway global warming, with the surface temperature exceeding 400 degrees. Venus also spins in the opposite (retrograde) direction to almost every other planet, a mystery that remains unsolved. Still, for all we know about our celestial sister there’s always more to be learned and that’s where Venus Express comes in.
Launched back in 2005 the Venus Express mission took the platform developed for the Mars Express mission and tweaked it for observational use around Venus. The Venus Express’ primary mission was the long term observation of Venus’ atmosphere as well as some limited study of its surface (a rather difficult task considering Venus’ dense atmosphere). It arrived at Venus back in early 2006 and has been sending data back ever since, with its primary mission being extended several times. However the on board fuel reserves are beginning to run low so the scientists controlling the craft proposed a daring idea: a controlled deep dive into the atmosphere to gather even more detailed information about Venus’ atmosphere.
Typically the Venus Express orbits around 250 km above Venus’ surface, a pretty typical height for observational activities. The proposed dive however had the craft descending to below 150 km, an incredibly low altitude for any craft to attempt. To put it in perspective the “boundary of space” (referred to as the Kármán line) is about 100 km above Earth’s surface, putting this craft not too far off that boundary. Considering that Venus’ atmosphere is far more dense than Earth’s the risks of diving that low increase dramatically, as the drag experienced at that altitude is far greater. Still, even with all those risks, the proposed dive went ahead last week.
The amazing thing about it? The craft survived.
The dive brought the craft down to a staggering 130 km above Venus’ surface, during which it saw some drastic changes in its operating environment. The atmospheric density increased a thousandfold between 160 km and 130 km, significantly increasing the drag on the spacecraft. This in turn led to the solar panels experiencing heating of over 100 degrees, enough to boil water on them. It spent about a month at various low altitudes before the mission team brought it back up out of the cloudy depths, where its orbit will now slowly degrade over time before it re-enters the atmosphere one last time.
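As a back-of-the-envelope sanity check on that thousandfold figure: if we assume a simple isothermal exponential atmosphere, rho(h) = rho0 · exp(−h/H), the quoted density increase over a 30 km drop implies a local scale height of around 4 km (a rough sketch of my own, not a figure from ESA):

```python
import math

# Thousandfold density rise between 160 km and 130 km, as quoted above.
dh = 160.0 - 130.0        # altitude drop, km
ratio = 1000.0            # density increase over that drop

# Solving rho(130)/rho(160) = exp(dh/H) = ratio for the scale height H.
H = dh / math.log(ratio)  # implied scale height, km
print(f"implied scale height ~ {H:.1f} km")

# Drag force scales roughly linearly with density at fixed speed and
# geometry, so the craft saw on the order of 1000x the drag at 130 km
# that it did at 160 km.
```

That drag scaling is why even a 30 km change in altitude turned a routine orbit into such a risky manoeuvre.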
It’s stuff like this that gets me excited about space and the science we can do in it. I mean we’ve got an almost decade old craft orbiting another planet and we purposefully plunged it down, just in the hopes that we’d get some better data. Not only did it manage to do that but it came back out the other side, still ready and raring to go. If that isn’t a testament to our engineering talents and orbital mechanics prowess then I don’t know what is.