Posts Tagged ‘study’

Google Provides Insight Into SSD Reliability.

SSDs may have been around for some time now but they’re still something of an unknown. Their performance benefits are undeniable and their cost per gigabyte has plummeted year after year. However, for the enterprise space, their unknown status has led to a lot of hedged bets when it comes to their use. Most SSDs reserve a large portion of over-provisioned space to accommodate failed cells and wear levelling. A lot of SSDs are sold as “accelerators”, meant to help speed up operations but not hold critical data for any length of time. This all comes from a lack of good data on their reliability and failure rates, something which can only come with time and use. Thankfully Google has been gathering exactly that data and, at a recent conference, released a paper on its findings.
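For the curious, over-provisioning is usually expressed as the gap between the raw flash on the board and the capacity exposed to the host. Below is a minimal sketch of that arithmetic; the function name and the 256/240 figures are my own illustrative assumptions, not numbers from Google’s paper.

```python
# Hypothetical illustration of over-provisioning arithmetic; the
# capacities below are made-up examples, not Google's data.

def over_provisioning_pct(raw_gb: float, usable_gb: float) -> float:
    """Over-provisioned space as a percentage of user-visible capacity."""
    return (raw_gb - usable_gb) / usable_gb * 100

# A drive built from 256 GB of raw flash but exposing 240 GB keeps
# roughly 6.7% aside for failed blocks and wear levelling.
print(f"{over_provisioning_pct(256, 240):.1f}%")  # -> 6.7%
```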

SSDs


The paper focused on three different types of flash media: the consumer-level MLC, the more enterprise-focused SLC and the somewhere-in-the-middle eMLC. These were all custom devices, sporting Google’s own PCIe interface and drivers; however, the chips they used were run-of-the-mill flash. The drives were divided into 10 categories: 4 MLC, 4 SLC and 2 eMLC. For each of these different types of drives several metrics were collected over their 6-year lifetime: raw bit error rate (RBER), uncorrectable bit error rate (UBER), program/erase (PE) cycles and various failure rates (bad blocks, bad cells, etc.). All of these were then collated to provide insights into the reliability of SSDs, both in comparison to each other and to old-fashioned, spinning rust drives.
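For those unfamiliar with the two error-rate metrics, both are conventionally expressed as errors per bits read. The sketch below shows that usual definition; the function and counter names are my own, and the figures are invented placeholders rather than anything from the paper.

```python
# A minimal sketch of the two error-rate metrics, following the common
# errors-per-bits-read convention; numbers are invented placeholders.

def rber(corrupted_bits: int, bits_read: int) -> float:
    """Raw bit error rate: all flipped bits, including those ECC corrected."""
    return corrupted_bits / bits_read

def uber(uncorrectable_bits: int, bits_read: int) -> float:
    """Uncorrectable bit error rate: flipped bits ECC could not recover."""
    return uncorrectable_bits / bits_read

# E.g. one uncorrectable error across ~10^15 bits read:
print(f"RBER: {rber(10_000, 10**15):.1e}")  # -> 1.0e-11
print(f"UBER: {uber(1, 10**15):.1e}")       # -> 1.0e-15
```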

Probably the most stunning finding of the report is that, in general, SLC drives are no more reliable than their MLC brethren. For both enterprises and consumers this is a big deal as SLC-based drives are often several times the price of their MLC equivalents. This should allay any fears that enterprises had about using MLC-based products as they will likely be just as reliable and far cheaper. Indeed products like the Intel 750 series (one of which I’m using for big data analysis at home) provide the same capabilities as products that cost ten times as much and, based on Google’s research, will last just as long.

Interestingly the biggest predictive indicator for drive reliability wasn’t the RBER, UBER or even the number of PE cycles. In fact the most predictive factor of drive failure was the physical age of the drive itself. What this means is that, for SSDs, there must be other factors at play which affect drive reliability. The paper hypothesizes that this might be due to silicon aging but it doesn’t appear they had enough data to investigate further. I’m very much interested in how this plays out as it will likely come down to the way the chips are fabricated (i.e. different types of lithography, doping, etc.), something which does vary significantly between manufacturers.

It’s not all good news for SSDs, however, as the research showed that whilst SSDs have an overall failure rate below that of spinning rust, they exhibit a higher UBER. What this means is that SSDs will have a higher rate of unrecoverable errors, which can lead to data corruption. Many modern operating systems, applications and storage controllers are aware of this and can accommodate it but it’s still an issue for systems holding mission or business critical data.

This kind of insight into the reliability of SSDs is great and just goes to show that even nascent technology can be quite reliable. The insight into MLC vs SLC is telling, showing that whilst a certain technology may exhibit one better characteristic (in this case PE cycle count) that might not be the true indicator of reliability. Indeed Google’s research shows that the factors we have been watching so closely might not be the ones we need to look at. Thus we need to develop new ideas in order to better assess the reliability of SSDs so that we can better predict their failures. Then, once we have that, we can work towards eliminating those failures, making SSDs more reliable still.

Violence in All Media, Including Games, Does Not Lead to Real World Violence.

There’s no doubt that the media we consume has an effect on us; the entire advertising and marketing industry is built upon that premise. However, just how big that impact can be has always been a subject of debate. Most notably the last few decades have been littered with debate around how much of an impact violent media has on us and whether it’s responsible for some heinous acts committed by those who have consumed it. In the world of video games there have been dozens of lab-controlled studies showing that consumption of violent games leads towards more aggressive behaviour, but the link to actual acts of violence could not be drawn. Now researchers from Stetson University have delved into the issue and there doesn’t appear to be a relationship between the two at all.


The study, which was a retrospective analysis of reports of violence and the availability of violent media, was broken down into two parts. The first part focused on homicide rates and violence in movies between 1920 and 2005 using independent data sources. The second focused on violence in video games, using ESRB ratings from 1996 to 2011, and correlated them with rates of youth violence over the same period. Both parts of the study found no strong correlation between violence in media and acts of actual violence, except for a brief period in the early 90s (although the trend quickly swung back the other way, indicating the result was likely unrelated).
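To make the method concrete, a correlation analysis of this sort boils down to pairing two yearly series and computing a coefficient between them. The sketch below uses invented placeholder numbers purely to show the mechanics; neither series is the study’s actual data.

```python
# A sketch of the kind of year-by-year correlation the study ran.
# Both series are invented placeholders, not the study's data.
from statistics import correlation

# Hypothetical yearly index of violent game content, 1996-2011
game_violence_index = [3.1, 3.4, 3.2, 3.9, 4.0, 3.8, 4.6, 4.9,
                       4.7, 5.3, 5.6, 5.2, 6.0, 6.1, 5.9, 6.5]
# Hypothetical youth violence rate over the same years
youth_violence_rate = [9.8, 9.1, 9.4, 8.0, 7.6, 7.9, 6.9, 6.5,
                       6.7, 6.0, 5.8, 6.1, 5.4, 5.2, 5.3, 5.0]

r = correlation(game_violence_index, youth_violence_rate)
print(f"Pearson r = {r:.2f}")  # an r near zero would indicate no relationship
```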

Violent video games are often the target of those looking for something to blame but the relationship between them and the act of violence is often completely backwards. It’s not that violent video games are causing people to commit these acts; rather, those who are likely to commit these acts are also likely to engage with other violent media. Had games really been the cause then there would have been a distinct correlation between the availability of violent media and acts of real world violence but, as the study shows, there’s simply no relationship between them at all.

Hopefully now the conversation will shift from video games causing violence (or other antisocial behaviours) to a more nuanced discussion around the influences games can have on our attitudes, behaviours and thought processes. There’s no doubt that we’re shaped by the media we consume; however, the effects are likely much more subtle than most people would like to think. Once these more subtle influences are understood we can then work towards curtailing any negative aspects they might have, rather than using a particular medium as a scapegoat for deplorable behaviour.

Uncertainty is More Rewarding Than Certainty.

If you’ve ever spent a decent amount of time playing an MMORPG chances are you’ve come up against the terror that is the Random Number Generator (RNG). No matter how many times you run a dungeon for that one item to complete your set, or kill that particular mob for a quest drop, it just never seems to happen. Sometimes, though, everything goes your way, all your Christmases come at once and the game has you in its grasp again. Whilst RNGesus might be a cruel god to many, he’s the reason that many of us keep coming back and now there’s solid science to prove it.

RNGesus

It’s long been known that random rewards are seen as more rewarding than those that are given consistently. Many online games, notably those on social networks, have tapped into that mechanic in order to keep users engaged far longer than they would have otherwise. Interestingly though this seems to run contrary to what many players will tell you, who often say they’d prefer a guaranteed reward after a certain amount of effort or time committed. As someone who’s played through a rather large number of games that utilize both mechanics I can tell you that both types of systems will keep me returning; however, nothing beats the rush of finding a really good item from the hands of RNGesus.

Indeed my experience seems to line up with the recent study published by the University of Chicago which shows that people are more motivated by random rewards than they are by consistent ones. It sounds quite counter-intuitive when you think about it (who would take a random reward over a guaranteed one?) but the effect remains consistent across the multiple experiments they conducted. Whilst the mechanism behind this isn’t exactly known, it’s speculated that randomness leads to excitement, much like the infinitesimally small odds of winning the lottery are irrelevant to the enjoyment some people derive from playing it.

However the will of RNGesus needs to be given a guiding hand sometimes to ensure that he’s not an entirely cruel god. Destiny’s original loot system was a pretty good example of this as you could be blessed with a great drop only to have the reveal turn it into something much less than you’d expected. Things like this can easily turn people off games (and indeed I think this is partly responsible for the middling reviews it received at launch) so a balance needs to be struck so players don’t feel hard done by.

I’d be very interested to see the effect of random rewards that eventually become guaranteed (i.e. pseudo-random rewards). World of Warcraft implemented a system like this for quest drops a couple of years ago and it was received quite well. This went hand in hand with their guaranteed reward systems (tokens/valor/etc.) which have also been met with praise. Indeed I believe the combination of the two, random rewards with guaranteed systems on the side, is the best mix for keeping players coming back. I definitely know I feel more motivated to play when I’m closer to a guaranteed reward which can, in turn, lead to more random ones.
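For the curious, “bad-luck protection” of this kind is usually implemented by nudging the drop chance up with every failed roll and resetting it on success. The sketch below is my own generic take on the pattern, with invented odds; it’s not a claim about how WoW actually implements it.

```python
# A generic "bad-luck protection" drop roll: nominally random, but the
# odds creep up with each failure so a drop is eventually guaranteed.
# Base chance and step size here are invented for illustration.
import random

class PityDrop:
    def __init__(self, base_chance: float = 0.05, step: float = 0.02):
        self.base_chance = base_chance  # odds on a fresh attempt
        self.step = step                # extra odds per failed attempt
        self.failures = 0

    def roll(self) -> bool:
        chance = min(1.0, self.base_chance + self.step * self.failures)
        if random.random() < chance:
            self.failures = 0   # success resets the counter
            return True
        self.failures += 1      # each miss nudges the odds upward
        return False

drop = PityDrop()
attempts = 1
while not drop.roll():
    attempts += 1
print(f"Dropped after {attempts} attempts")  # guaranteed within ~49 attempts
```

The appeal of this design is that it preserves the rush of an early lucky drop while putting a hard ceiling on how unlucky a player can get.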

It’s always interesting to investigate these non-intuitive behaviours as it can give us insight into why we humans act in seemingly irrational ways when it comes to certain things. We all know we’re not strict rational actors, nor are we perfect logic machines, but counter-intuitive behaviour is still quite a perplexing field of study. At least we’ve now got solid evidence that random rewards are both more rewarding and more motivating than their consistent brethren, although how that knowledge will help the world is an exercise I’ll leave to the reader.

Telecommuting For All: The Research Shows Benefits.

For all of my working life I’ve pined for the ability to do my work from wherever I chose. It wasn’t so much that I wanted to work in my trackies, only checking email whenever it suited; more that I wanted to avoid wasting hours of my day travelling to and from the office when I could just as easily do the work remotely. Last year, when I permanently joined the company I had been contracting to the previous year, I was given such an opportunity and have spent probably about half the working year since at home. For me it’s been a wonderfully positive experience and, to humblebrag for a bit, my managers have been thoroughly impressed with the quality of my work. Whilst I’ve always believed this would be the case I never had much hard evidence to support it, but new research in this field backs up my conclusions.

Working From Home

Researchers at the University of Illinois created a framework to analyse telecommuting employees’ performance. They then used this to gain insight into data taken from 323 employees and their corresponding supervisors. The results showed a very small positive effect for telecommuting workers: their performance was the same as, or slightly better than, that of those working in the office. Perhaps most intriguingly, they found that the biggest benefit appeared when employees didn’t have the best relationship with their superiors, indicating that granting flexible working arrangements could be seen as something of an olive branch to smooth over employee relations. However the most important takeaway is that no negative relationship between telecommuting and work performance was found, showing that employees working remotely can be just as effective as their in-office counterparts.

As someone who’s spent a great deal of time working from various places (not just at home) alongside other people in a similar situation, I have to say that my experience matches up with the research pretty well. I tend to be available for much longer periods of time, simply because it’s easier to be, and it’s much easier to focus on a particular task for an extended period when the distractions of the office aren’t present. Sure, after a while you might start to wonder if you’ll be able to handle human contact again (especially after weeks of conference calls) but it’s definitely something I think every employer should offer if they have the capability to.

It also flies in the face of Marissa Mayer’s decision to outright ban all telecommuting at Yahoo last year, citing performance concerns. Whilst I don’t disagree with the idea that telecommuting isn’t for everyone (I know a few people who’d likely end up like this) removing it as an option is incredibly short sighted. Sure, there’s value to be had in face time; however, if performance won’t suffer, offering flexible working arrangements like telecommuting can generate an awful lot of goodwill with your employees. I know that I’m far more likely to stick around with my current company thanks to their stance on this, even if I probably won’t be able to take full advantage of it for the next couple of years.

Hopefully studies like this keep getting published as telecommuting is fast becoming something that shouldn’t have to be done by exception. Right now it might be something of a novelty but the technology has been there for years and it’s high time that more companies started to make better use of it. They might just find it easier to hold on to more employees if they did and, potentially, even attract better talent because of it. I know it will take time though as we’re still wrestling with the 40 hour work week, a hangover from over 150 years ago, even though we’ve long since passed the time when everyone worked in factories.

One day though, one day.

The Fountain of Youth Might Just be The Blood of the Young.

Aging is one of the most complex and nuanced processes that our body goes through, radically transforming us over the course of several decades. Whilst some of the basic mechanisms are well understood, like accumulated damage to DNA during its replication, the rest remains something of a mystery. Indeed once we get into the extreme end of the spectrum the factors that seem to influence longevity become a lot more muddled, with many octogenarians engaging in behaviours that would appear to be the antithesis of living longer. Still our quest for the proverbial fountain of youth has had us searching through the many different mechanisms at play in the aging process and it seems that the blood of our young might hold the clues to a longer life.

Lab Mouse

Two pieces of recent research point towards some interesting evidence of the radical differences between the blood of the young and the elderly. Hendrikje van Andel-Schipper was once the oldest woman in the world, reaching the ripe old age of 115 before her death in 2005. She was in remarkable condition for her age, remaining mentally aware and alert right up until her death. In a great boon to the scientific community she donated her body for study, giving us unprecedented insight into what happens to us as we age. That, combined with some recent research coming at this from a different perspective, shows that the contents of our blood change dramatically as we age and, possibly, that we could reinvigorate ourselves with transfusions from our younger selves.

At the end of her life all of Hendrikje’s white blood cells, the ones responsible for fighting off infections, came from a mere 2 stem cells. It is estimated that we begin our lives with around 20,000 such cells, with around 5% of them working at any one time to replenish our white cell supply. The fact that Hendrikje had only two functioning stem cells remaining points to an upper limit on the natural human lifespan, as once you stop producing white blood cells it wouldn’t take long for your body to succumb to any number of diseases. Curiously though this also hints at a potential pathway to reinvigorate individuals whose white cell count has deteriorated: injecting them with their own blood (or potentially someone else’s) taken many years previously.

That part was mostly conjecture on the part of the researchers but recent results from a study at Stanford University have shown that old mice injected with the blood of younger mice show significant improvement in cognitive function. Whilst this isn’t likely to be the same mechanism that the previous research may have indicated (blood plasma with its proteins denatured didn’t achieve the same result) it does point towards a potential therapeutic pathway for combating some age related maladies. Of course whether this translates into a human model remains to be seen and who knows if this kind of thing would get past an ethics tribunal.

Indeed research of this nature opens up all sorts of ethical questions, because if it’s shown that blood transfusions can improve patients’ quality of life then it becomes imperative for doctors to use them. With blood supplies always in high demand, the question of where they can do the most good comes to the forefront, a troubled area that really has no good answers. Still, if you could better the life of another, most likely a relative, by simply giving blood I’m sure many of us would do it, but the larger question of voluntary donations still remains.

There’s also some potentially dark sci-fi film in here about people being bled dry in order to feed an underground transfusion market but I’ll leave that one up to your imagination.

Does Chilli Really Help With The Common Cold?

After a long weekend of staying up late, drinking merrily and enjoying the company of many close friends I found myself a little under the weather. This is pretty atypical for me as I’ve only ever had the flu twice and I usually pass through the cold season relatively unscathed. Whilst there are thousands of possible reasons for this, I’ve always found that should I find myself in the beginnings of an infection, a strong dose of chilli seems to make it subside, or at least take my mind off it long enough to start feeling better. I realised yesterday that whilst I might have some anecdotal evidence to support this I hadn’t really looked into the science behind it, and the stuff I uncovered in my search was pretty intriguing.

Creepy Chilli Dude

For starters there are some strange experiments out there that have used chilli (well, the chemical that gives it its burn, capsaicin) as an apparently reliable method of inducing coughing in test subjects. The first one I came across was testing whether or not coughing is a voluntary action and the results seem to indicate that the coughing we get with the common cold is a mixture of both voluntary and involuntary responses. Other experiments showed that people with an upper respiratory tract infection (which includes things like the common cold) are more prone to coughing when exposed to a capsaicin/citric acid mixture. None of these really helped me understand whether chilli aids in reducing the symptoms of the common cold or helps to cure it, but a couple of other studies do provide some potential paths for benefits.

Subjects with perennial rhinitis, a permanent allergic reaction to stimulus that doesn’t vary by season, showed a marked decrease in nasal complaints when treated with a solution of 0.15mg of capsaicin per nostril every 2nd or 3rd day for 7 treatments. The benefits lasted up to 9 months after the treatment and, incredibly, there were no adverse effects on cellular homeostasis or overall neurogenic staining (which sounds rather impressive but is a little out of my league to explain). Whilst this doesn’t directly support the idea that consuming chilli helps with the common cold it does provide a potential mechanism for it to relieve symptoms. However how much capsaicin ends up in your sinuses while eating it isn’t something I could find any data on.

Other studies have found similar effects when capsaicin solutions have been sprayed into the nasal cavity, with the improvements lasting for up to 6 months. That particular study was a little on the small side, with only 10 patients and no controls present, but the results do fall in line with the previous study which had much more rigorous controls. The same theme appears to resonate through most of the other studies that I could find: topical application in the sinuses is good, inhaling it will cause you to erupt in a coughing fit.

Anecdotally that seems to line up with the experiences I’ve had and it’s good to see it backed up by some proper science. As for consumed chilli helping overall, there don’t appear to be any studies that support that idea, but there are potential avenues for it to work. So like many scientists I’ll have to say that the results are interesting but a lot more research needs to be done. Whether it’s worth investigating is something I’ll leave as an exercise for the reader, but I’m sure we’d find no shortage of spice loving test subjects willing to participate.


Reworking The Greater Internet Fuckwad Theory.

Anyone who’s been on the Internet for a while will be familiar with the idea that anonymity can lead to the worst coming out in the general populace. It’s not a hard point to prove either: just wander over to any mildly popular video on YouTube and browse the comments section for a little while and you’ll see ready confirmation of the idea that regular people turn into total shitcocks the second they get the magical combination of anonymity and an audience. The idea was most aptly summed up by Penny Arcade in their Greater Internet Fuckwad Theory strip, something that has become kind of a reference piece sent to those poor souls who search for meaning as to why people are being mean to them on the Internet.

However it seems that the equation might need some reworking in light of new evidence coming from, of all places, South Korea.

I’ve long been of the opinion that forcing people to use their real names would work in curtailing trolling to some degree, as it removes one of the key parts of the fuckwad theory: anonymity. Indeed a site much more popular than mine said that the switch to Facebook comments, whilst dropping the total number of comments considerably, was highly effective in silencing the trolls on their site. Just over a year later however the same site posted an article saying that there’s considerable evidence that forcing users to use their real names had little effect on the total number of troll-like comments, citing research from South Korea and Carnegie Mellon. I’ve taken the liberty of reading the study for you and whilst the methods they employed are a little bit… soft for determining what a troll post was, they do serve as a good basis for hypothesizing about how effective real name policies are.

If there was a causative link between forcing people to use their real names online and a reduction in undesirable behaviour we would’ve seen some strong correlations in the Carnegie Mellon study. Whilst there was some effectiveness shown (a reduction of 30% in the use of swear words), taken in the context that troll posts only account for a minority of posts on the sites studied (about 13%), the overall impact is quite low: a 30% improvement confined to roughly 13% of posts works out to only around a 4% change overall. Indeed whilst TechCrunch did say that Facebook comments silenced the trolls they may have called it too early, as the study showed that whilst there was an initial damper, the level of trolling remained largely static after a certain period of time.

What this means for the Greater Internet Fuckwad Theory is that the key part of the equation, anonymity, can be removed and much the same result will be had. This is a somewhat harrowing discovery as it means the simple act of putting a regular person in front of an audience can lead to them becoming a reprehensible individual. On the flip side though it could also be more indicative of the people themselves, as the study showed that only a minority of users engage in such behaviour. It would be very interesting to see how that compares to real life interactions as I’m sure we all know people who act like online trolls in real life.

In light of this new evidence my stance on using real names as a troll reduction method is obviously flawed. I was never really in favour of implementing such a system (though I considered using Facebook comments here for a little while) but I thought its efficacy was unquestioned. My favourite method for combating trolls is a form of timed hellbanning, whereby the user’s posts are hidden from everyone else whilst still appearing, to them, as though they are contributing. It’s a rather ugly solution if you permanently ban someone but time-limited versions appear to work to great effect in turning trolls into contributing users.
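To make the mechanic concrete, here’s a minimal sketch of a timed hellban under my own assumptions: the banned user still sees their own comments, everyone else doesn’t, and the ban lapses on its own. The class and method names are invented for illustration, not taken from any real forum software.

```python
# A minimal timed-hellban filter: hellbanned authors still see their own
# comments, others don't, and bans expire automatically. All names here
# are invented for illustration.
from datetime import datetime, timedelta

class Comment:
    def __init__(self, author: str, text: str):
        self.author, self.text = author, text

class Forum:
    def __init__(self):
        self.comments: list[Comment] = []
        self.hellbans: dict[str, datetime] = {}  # author -> ban expiry

    def hellban(self, author: str, days: int = 7) -> None:
        self.hellbans[author] = datetime.now() + timedelta(days=days)

    def visible_to(self, viewer: str) -> list[Comment]:
        now = datetime.now()
        return [c for c in self.comments
                if c.author == viewer  # authors always see their own posts
                or self.hellbans.get(c.author, now) <= now]  # no active ban

forum = Forum()
forum.comments.append(Comment("troll", "first!!1"))
forum.hellban("troll", days=7)
print(len(forum.visible_to("troll")))      # 1: the troll still sees their post
print(len(forum.visible_to("bystander")))  # 0: hidden from everyone else
```

The point of the time limit is that the ban quietly expires rather than leaving a phantom user shouting into the void forever.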

It may just be that trolling is an inevitable part of any community and the best we can do is remediate it, rather than eliminate it.