Posts Tagged 'research'


Researchers Create Electric Circuits in Roses.

The blending of organic life and electronics is still very much in its nascent stages. Most of the progress made in this area is thanks to the adaptability of the biological systems we're integrating with, not so much the technology. However, even small progress in this field can have wide-reaching ramifications, sometimes enough to dramatically reframe the problem spaces we work in. One such small step has been made recently by a team from Linköping University in Sweden, who have managed to create working electronic circuits within roses.


The research, born out of the university's Laboratory of Organic Electronics, experimented with ways of integrating electronics into rose plants so they could monitor, and potentially influence, the growth and development of the plant. To do this they looked at infusing the rose with a polymer that, once taken up by the plant, would form a conductive wire. Attempts with many polymers simply resulted in the death of the plant, as they either poisoned it or blocked the channels the plant uses to carry nutrients. However, one polymer, called PEDOT-S:H, was readily taken up by the roses and didn't cause any damage. Instead it formed a thin layer within the xylem (one of the nutrient transport mechanisms within plants) that produced a conductive hydrogel wire up to 10cm long.

The researchers then used this wire to create some rudimentary circuits within the plant's xylem structure. The wire itself, whilst not an ideal conductor, was surprisingly conductive, with a contact resistance of around 10kΩ. To put that in perspective, the resistance of human skin can be up to 10 times higher. Using this wire as a basis the researchers then went on to create a transistor by connecting source, drain and gate probes. This transistor worked as expected, and they went one step further to create logic gates, demonstrating that a NOR gate could be built using the hydrogel wire as the semiconducting medium.
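
For anyone who hasn't dealt with digital logic before, a NOR gate simply outputs a 1 only when both of its inputs are 0. Here's a minimal sketch of that behaviour (purely illustrative, and nothing to do with the researchers' actual plant circuitry):

```python
def nor(a: int, b: int) -> int:
    """NOR: output is 1 only when both inputs are 0."""
    return int(not (a or b))

# Full truth table for the gate.
for a in (0, 1):
    for b in (0, 1):
        print(f"NOR({a}, {b}) = {nor(a, b)}")
```

NOR is also functionally complete, meaning every other logic gate can be built from combinations of it, which is part of what makes demonstrating it inside a living plant such a notable proof of concept.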

This kind of technology has the potential to revolutionize the way we monitor and influence plant growth and development. Essentially what this allows us to do is create circuitry within living plants, using their own cellular structures as a basis, that can act as sensors or regulators for the various chemical processes that happen within them. Of course there's still a lot of work to be done in this area, namely modelling the behaviour of this organic circuitry in more depth to ascertain what kind of data we can get and which processes we can influence. Suffice it to say it should become a very healthy area of research as there are numerous potential applications.


Lithium-Air Batteries are the Future, Still Many Years Away.

There's no question about it: batteries just haven't kept pace with technological innovation. This isn't for lack of trying, however; there's simply no direct means of increasing energy density like there is for increasing transistor counts. So what we have are batteries that are mostly capable but which haven't seen rapid improvement as technology has rocketed away to new heights. There are, however, visions for the future of battery technology that, if they come to fruition, could see a revolution in battery capacity. The latest and greatest darling of the battery world is a technology called Lithium-Air, although it becoming a reality is likely decades away.

Pretty much every battery in a smartphone is some variant of lithium-ion, which provides a much higher energy density than most other rechargeable battery types. For the most part it works well, however there are some downsides, like their tendency to explode and catch fire when damaged, which have prevented them from seeing widespread use in some industries. Compared to other energy dense mediums, like gasoline for example, lithium-ion is still some 20 times less dense in practical terms. This is part of the reason why it has taken auto makers so long to start bringing out electric cars: they simply couldn't store the required amount of energy to make them comparable to gasoline powered versions. Lithium-air on the other hand could theoretically match gasoline's energy density, the holy grail for battery technology.
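
To see roughly where a figure like 20 times comes from, here's a back-of-the-envelope comparison. The numbers are ballpark assumptions on my part rather than anything from the research: about 46 MJ/kg for gasoline, around 30% efficiency for a combustion engine, and about 0.7 MJ/kg (~200 Wh/kg) for a lithium-ion pack.

```python
# Rough comparison of usable energy per kilogram (assumed ballpark figures).
gasoline_mj_per_kg = 46.0    # chemical energy stored in gasoline
engine_efficiency = 0.30     # fraction a combustion engine turns into useful work
li_ion_mj_per_kg = 0.7       # typical lithium-ion pack (~200 Wh/kg)

usable_gasoline = gasoline_mj_per_kg * engine_efficiency
ratio = usable_gasoline / li_ion_mj_per_kg

print(f"Usable energy from gasoline: {usable_gasoline:.1f} MJ/kg")
print(f"Lithium-ion is roughly {ratio:.0f}x less energy dense in practice")
```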

Lithium-air relies on the oxidation (essentially rusting) of lithium in order to store and retrieve energy. This comes with a massive jump in density because, unlike other batteries, lithium-air doesn't have to contain its oxidizing agent within the battery itself. Instead it simply draws it from the surrounding air, much like a traditional gasoline powered engine does. However such a design comes with numerous challenges which need to be addressed before a useable battery can be created. Most of the research is currently focused on developing a workable cathode (the air-breathing positive electrode) as that is where the current limitations are.

That’s also where the latest round of lithium-air hype has come from.

The research out of Cambridge details a particularly novel chemical reaction which, theoretically, could be used in the creation of a lithium-air battery. The reaction was reversed and repeated over 2,000 times, showing that it has the potential to store and retrieve energy as you'd expect a battery to. However what they have not created, and this is something much of the coverage is getting wrong, is an actual lithium-air battery. What the scientists have found is a potential chemical reaction which could make up one of the cells of a lithium-air battery. There are numerous other issues, like the fact that their reaction only works in pure oxygen rather than air, which limit the applicability of this reaction to real world use cases. I'm not saying they can't be overcome, but all these things need to be addressed before you can say you've created a useable battery.

Realistically that's not any fault of the scientists though, just the reporting that's surrounded it. To be sure their research furthers the field of lithium-air batteries, and there's a need for more of this kind of research if we ever want to actually start making these kinds of batteries. Breathless reporting that presents incremental research progress as actual, consumer-ready technology doesn't help though, and only serves to foster the sense that the next big thing is always "10 years away". In this case we're one step closer, but the light is at the end of a very long tunnel when it comes to a useable lithium-air battery.


Quantum Computing Comes to Silicon.

Traditional computing is bound to binary data, the world of zeroes and ones. This constraint was originally born out of an engineering limitation, designed to ensure that the different states could be easily represented by differing voltage levels. It hasn't proved to be much of a limiting factor in the progress that computing has made, but there are different styles of computing which make use of more than just those zeroes and ones. The most notable is quantum computing, which can represent an exponential number of states depending on the number of qubits (analogous to transistors) that the quantum chip has. Whilst there have been some examples of quantum computers hitting the market, even if their quantum-ness is still in question, they are typically based on exotic materials, meaning mass producing them is tricky. This could change with the latest research to come out of the University of New South Wales, as they've made an incredible breakthrough.
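
To give a sense of what "exponential" means here: a classical register of n bits is in exactly one of its 2^n possible values at any moment, whereas describing the state of n qubits requires tracking amplitudes across all 2^n basis states at once. A quick illustration:

```python
# How quickly the quantum state space grows with the number of qubits.
for n in (1, 2, 10, 50):
    print(f"{n:>2} qubits -> {2**n:,} basis states in the full description")
```

At 50 qubits that's already over a quadrillion basis states, which is why even modest qubit counts are so interesting.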

UNSW Qubit in Silicon

Back in 2012 the team at UNSW demonstrated that they could build a single qubit in silicon. This by itself was an amazing achievement, as previously created qubits were usually reliant on materials like niobium cooled to superconducting temperatures to achieve their quantum state. However a single qubit isn't exactly useful on its own, and so the researchers were tasked with getting their qubits talking to each other. This is a lot harder than you'd think, as qubits don't communicate in the same way that regular transistors do, so traditional techniques for connecting things in silicon won't work. After 3 years' worth of research UNSW's quantum computing team has finally cracked it and allowed two qubits made in silicon to communicate.

This has allowed them to build a quantum logic gate, the fundamental building block for a larger scale quantum computer. One thing that will be interesting to see is how their system scales out with additional qubits. It's one thing to get two qubits talking together, indeed there have been several (non-silicon) examples of that in the past, however as you scale up the number of qubits things start to get a lot more difficult. This is because larger numbers of qubits are more prone to quantum decoherence and typically require additional circuitry to overcome it. Whilst they might be able to mass produce a chip with a large number of qubits, it might not be of any use if the qubits can't stay in coherence.
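
For the curious, the textbook example of a two-qubit logic gate is the controlled-NOT (CNOT), which flips the second (target) qubit only when the first (control) qubit is 1. The sketch below just shows the gate's action on the four basis states; treat it as a generic illustration rather than the UNSW team's specific implementation.

```python
import numpy as np

# CNOT acting on basis states ordered |00>, |01>, |10>, |11>
# (first qubit is the control, second is the target).
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

labels = ["|00>", "|01>", "|10>", "|11>"]
for i, label in enumerate(labels):
    state = np.zeros(4)
    state[i] = 1.0
    out = CNOT @ state
    print(f"{label} -> {labels[int(np.argmax(out))]}")
```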

It will be interesting to see what applications their particular kind of quantum chip will have once they build a larger scale version of it. Currently the commercially available quantum computers from D-Wave are limited to a specific problem space called quantum annealing and, as of yet, have failed to conclusively prove that they’re achieving a quantum speedup. The problem is larger than just D-Wave however as there is still some debate about how we classify quantum speedup and how to properly compare it to more traditional methods. Still this is an issue that UNSW’s potential future chip will have to face should it come to market.

We're still a long way off from seeing a generalized quantum computer hit the market, but achievements like those coming out of UNSW are crucial in making them a reality. We have a lot invested in building computers on silicon, and if those investments can be directly translated to quantum computing then it's highly likely that we'll see a lot of success. I'm sure the researchers are going to have several big chip companies knocking down their doors to get a license for this tech, as it really does have a lot of potential.


3D Printed Prosthesis Regenerates Nerves.

Nerve damage has almost always been permanent. For younger patients there's hope for full recovery after an incident, but as we get older the ability to repair nerve damage decreases significantly. Indeed by the time we reach our 60s the best we can hope for is what's called "protective sensation", the ability to tell things like hot from cold. The current range of treatments is mostly limited to grafts, often using nerves from the patient's own body to repair the damage, and even those have limited success in practice. That could all be set to change with the development of a process which can produce nerve regeneration conduits using 3D scanning and printing.


The process was developed by a collaboration of scientists from the following institutions: University of Minnesota, Virginia Tech, University of Maryland, Princeton University, and Johns Hopkins University. The research builds upon a current cutting edge treatment which uses special structures, called nerve guidance conduits, to trigger regeneration. Traditionally such conduits could only be produced in simple shapes, meaning they were only able to repair nerve damage in straight lines. This new approach however can work on arbitrary nerve structures and has been shown to restore both motor and sensory function in severed nerves, both in-vitro (in a petri dish) and in-vivo (in a living animal).

How they accomplished this is really quite impressive. First they used a 3D scanner to reproduce the structure of the nerve they were trying to regenerate, in this case the sciatic nerve (pictured above). Then they used the resulting model to 3D print a nerve guidance conduit of the exact size and shape required. This was then implanted into a rat with a 10mm gap in its sciatic nerve (far too long to be sewn back together). The conduit successfully triggered regeneration of the nerve, and after 10 weeks the rat showed a vastly improved ability to walk. Since this kind of treatment had previously only been verified on linear nerves, the process shows great promise for regenerating much more complicated nerve structures, like those found in us humans.

The great thing about this is that it can be used for any arbitrary nerve structure. Hospitals equipped with such a system would be able to scan the injury, print the appropriate nerve guide and then implant it into the patient, all on site. This could have wide reaching ramifications for the treatment of nerve injuries, allowing far more of them to be treated and without the need to harvest donor nerves.

Of course this treatment has not yet been tested in humans but the FDA has approved similar versions of this treatment in years past which have proven to be successful. With that in mind I’m sure that this treatment will prove successful in a human model and from there it’s only a matter of time before it finds its way into patients worldwide. Considering how slow progress has been in this area it’s quite heartening to see dramatic results like this and I’m sure further research into this area will prove just as fruitful.


Growing Up on a Farm Prevents Asthma.

I grew up in a rural community just outside of Canberra. Whilst we had a large plot of land out there we didn't have a farm, but many people around us did, including a friend of mine up the road. We'd often get put to work by their parents when we wandered up there, tending to the chickens or helping wrangle the cows when they got unruly. Thinking back to those times, it's interesting to note that barely anyone I knew who lived out there suffered from asthma. Indeed the only people I knew who did were a couple of kids at school who had grown up elsewhere. As it turns out there's a reason for this: "farm dust" has been shown to have a protective effect when it comes to allergies, meaning farm kids are far less likely to develop conditions like asthma.


There's been research in the past that identified a correlative relationship between living on a farm and resistance to developing allergies and asthma. Researchers at the Vlaams Instituut voor Biotechnologie (VIB) in Belgium took this idea one step further and looked for what the cause might be. They did this by exposing young mice to low doses of bacterial endotoxin (lipopolysaccharide), a key component of farm dust, and then later checking their rates of allergy and asthma development against a control group. The research showed that mice exposed to the farm dust didn't develop severe allergic reactions at all, whilst the control group did at the expected rate.

The researchers also discovered the mechanism by which farm dust provides its benefits. When developing lungs are exposed to farm dust the mucus membranes in the respiratory tract react much less severely to common allergens than those without exposure do. This is because when the dust reacts with the mucus membranes the body produces more of a protein called A20, which is responsible for the protective effects. Indeed when the researchers deactivated the protein the protective effects were lost, demonstrating that it was responsible for the reduction in allergen reactivity.

What was really interesting however was the genetic profiling that the VIB researchers did of 2,000 farm kids after their initial research with mice. They found that the vast majority of children who grew up on farms had the protection granted by increased A20 production. The children who lacked it were found to have a genetic variant which causes the A20 protein to malfunction. This is great news because it means that the mouse model is an accurate one and can be used in the development of treatments for asthma and other A20 related conditions.

Whilst this doesn't mean a month long holiday at the farm will cure you of your asthma (this only works for developing lungs, unfortunately) it does provide a fertile area for further research. This could hopefully lead to the development of a vaccine for asthma, a condition that has increased in prevalence over the past couple of decades. Hopefully it will also provide insight into other allergies: whilst they might not have the exact same mechanism of action, there's potential for other treatment avenues to be uncovered by this research.


Age Related Cognitive Motor Decline Starts at 24, But It’s Not All Bad News.

Professional eSports teams are almost entirely made up of young individuals. It's an interesting phenomenon to observe as it's quite contrary to many other sports; the age drop-off for eSports players is far earlier and more drastic, with long term players like Evil Geniuses' Fear, at the ripe old age of 27, often referred to as The Old Man. The commonly held belief is that, past your mid twenties, your reaction times and motor skills are in decline and you'll be unable to compete with the new upstarts and their razor sharp reflexes. New research in this area may just prove this to be true, although it's not all over for us oldies who want to compete with our younger compatriots.


The research comes out of the University of California and was based on data gathered from StarCraft 2 replays. The researchers gathered participants aged from 16 to 44 and asked them to submit replays to their website, called SkillCraft. These replays then went through standardization and analysis using the wildly popular replay tool SC2Gears. With this data in hand the researchers were able to test some hypotheses about how age affects cognitive motor functions and whether or not domain experience, i.e. how long someone had been playing a game for, influenced their skill level. Specifically they looked to answer three questions:

  1. Is there age-related slowing of Looking-Doing Latency?
  2. Can expertise directly ameliorate this decline?
  3. When does this decline begin?

In terms of the first question they found, unequivocally, that as we age our motor skills start to decline. Previous studies in cognitive motor decline were focused on older populations, with the data then extrapolated back to estimate when cognitive decline set in. Their data points to onset happening much earlier than previous research suggests, with their estimate pointing to 24 as the age when cognitive motor functions begin to take a hit. What's really interesting though is the second question: can us oldies overcome the motor skill gap with experience?

Whilst the study didn’t find any evidence to directly support the idea that experience can trump age related cognitive decline it did find that older players were able to hold their own against younger players of similar experience. Whilst the compensation mechanisms weren’t directly researched they did find evidence of older players using cognitive offloading tricks in order to keep their edge. Put simply older players would do things that didn’t require a high cognitive load, like using less complex units or strategies, in order to compete with younger players. This might not support other studies which have shown that age related decline can be combatted with experience but it does provide an interesting avenue for additional research.

As someone who's well past the point where age related decline has supposedly set in, my experience definitely lines up with the research. Whilst younger players might have an edge on me in terms of reaction speed, my decades' worth of gaming experience is more than enough to make up the gap. Indeed I've also found that having a breadth of gaming experience, across multiple platforms and genres, often gives me insights that newer gamers are lacking. Of course the difference between me and the professionals is a gap that I'll likely never close, but that doesn't matter when I'm stomping young'uns in pub games.


Standing 2 Hours a Day Shows Potential Benefits.

You don't have to look far to find article after article about how sitting down is bad for your health. Whilst many of these posts boil down to simple parroting of the same line followed by an appeal to adopt a more active lifestyle, the good news is that science is with them, at least on one point. There's a veritable cornucopia of studies out there that support the idea that a sedentary lifestyle is bad for you, something which is not just limited to sitting at work. However the flip side to that, the idea that standing is good for you, is not something that's currently supported by a wide body of scientific evidence. Logically it follows that it would be the case, but science isn't just about logic alone.


The issue at hand here mostly stems from the fact that, whilst we have longitudinal studies on sedentary lifestyles, we don't have a comparable body of data for your average Joe who's done nothing but change from mostly sitting to mostly standing. This means we don't understand the parameters in which standing is beneficial and when it's not, so a broad recommendation that "everyone should use a standing desk" isn't something that can currently be made in good faith. However preliminary studies are showing promise in this area, like new research coming out of our very own University of Queensland.

The study equipped some 780 participants, aged between 36 and 80, with activity monitors that recorded their activity over the course of a week. The monitors allowed the researchers to determine when participants were engaging in sedentary activities, such as sleeping or sitting, or something more active like standing or exercising. In addition they also took blood samples and measured a number of other key health indicators. They then used this data to glean insights into whether or not a more active lifestyle was associated with better health outcomes.

They found that it was: the more active participants, those who stood on average more than 2 hours a day longer than their sedentary counterparts, showed better health indicators like lower blood sugar levels (2%) and lower triglycerides (11%). That in and of itself isn't proof that standing is better for you, indeed the study makes a point of saying that it can't draw that conclusion, however preliminary evidence like this is useful in determining whether or not further research in this field is worthwhile. Based on these results there's definitely some more investigation to be done, mostly focused on isolating the key factors required to support the current thinking.

It might not sound like this kind of research really did anything we didn't already know about (being more active means you'll be more healthy? Shocking!), however validating base assumptions is always a worthwhile exercise. This research, whilst based on short term data with inferred results, provides solid grounds on which to proceed with a much more controlled and rigorous study. Whilst results from further study might not be available for a while, this at least serves as another arrow in the quiver for encouraging everyone to adopt a more active lifestyle.


An Artificial Brain in Your Pocket.

Artificial neural networks, a computational framework that mimics biological learning processes using statistics and large data sets, are behind many of the technological marvels of today. Google is famous for employing some of the largest neural networks in the world, powering everything from their search recommendations to their machine translation engine. They're also behind numerous other innovations like predictive text input, voice recognition software and recommendation engines that use your previous preferences to suggest new things. However these networks aren't exactly portable, often requiring vast data centers to produce the kinds of outputs we expect. IBM is set to change that with their TrueNorth architecture, a truly revolutionary idea in computing.


The chip, 16 of which are shown above mounted on a DARPA SyNAPSE board, is most easily thought of as a massively parallel chip comprising some 4,096 processing cores. Each of these cores contains 256 programmable neurons, totalling around 1 million neurons (and some 256 million configurable synapses) per chip. Interestingly, whilst the chip's transistor count is on the order of 5.4 billion, which for comparison is just over double that of Intel's current offerings, it uses a fraction of the power you'd expect it to: a mere 70 milliwatts. That kind of power consumption means that chips like these could make their way into portable devices, something that no one would really expect with transistor counts that high.
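
Those headline numbers multiply out fairly simply. The per-core figures below are the commonly reported TrueNorth specifications, so treat the breakdown as illustrative:

```python
# How TrueNorth's commonly quoted totals fall out of the per-core figures.
cores = 4096
neurons_per_core = 256
synapses_per_neuron = 256   # each neuron can take input from up to 256 axons

neurons = cores * neurons_per_core          # 1,048,576 (~1 million)
synapses = neurons * synapses_per_neuron    # 268,435,456 (~256 million)

print(f"Neurons:  {neurons:,}")
print(f"Synapses: {synapses:,}")
```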

But why, I hear you asking, would you want a computerized brain in your pocket?

IBM's TrueNorth chip is essentially the second half of the two part system that is a neural network. The first step to creating a functioning neural network is training it on a large dataset; the larger the set, the better the network's capabilities. This is why large companies like Google and Apple can create useable products out of them: they have huge troves of data to train them on. Then, once the network is trained, you can set it loose on new data and have it give you insights and predictions, and that's where a chip like TrueNorth comes in. Essentially you'd use a big network to form the model and then imprint it onto a TrueNorth chip, making it portable.
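
To make the train-big, deploy-small split concrete, here's a minimal sketch of the idea in plain Python. The tiny model and its numbers are entirely made up for illustration; the point is simply that the expensive training happens offline, after which inference only needs the frozen weights.

```python
import numpy as np

# --- Offline, in the data centre: fit a model on a large dataset. ---
rng = np.random.default_rng(0)
X = rng.normal(size=(10_000, 3))            # "large" training set (illustrative)
true_w = np.array([0.5, -1.2, 2.0])
y = X @ true_w + rng.normal(scale=0.1, size=10_000)

w, *_ = np.linalg.lstsq(X, y, rcond=None)   # the expensive "training" step

# --- On the device: only the frozen weights travel with it. ---
def predict(sample: np.ndarray, weights: np.ndarray) -> float:
    """Inference with fixed weights; no training data or connectivity required."""
    return float(sample @ weights)

print(predict(np.array([1.0, 0.0, -1.0]), w))   # runs entirely locally
```

TrueNorth's spiking-neuron programming model is of course very different from this, but the workflow is the same: train somewhere big, then ship only the resulting network.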

The implications of this probably wouldn't be immediately apparent for most, as the services would likely retain the same functionality, but it would eliminate the requirement for an always-on Internet connection to support them. This could open up a new class of smart devices with capabilities that far surpass anything we currently have, like a pocket translator that works in real time. The biggest issue I see for its adoption though is cost, as a transistor count that high doesn't come cheap: you're either relying on cutting edge lithography or accepting significantly reduced wafer yields. Both of these lead to high priced chips, likely even more expensive than current consumer CPUs.

Like all good technology, however, this one is a little way off from finding its way into our hands: whilst the chip exists, the software stack required to use it is still under active development. It might sound like a small thing, however this chip behaves in a way that's completely different to anything that's come before it. Once that's been settled the floodgates can be opened to the wider world and then, I'm sure, we'll see a rapid pace of innovation that could spur on some wonderful technological marvels.


Stanene: Graphene’s Metallic Brother.

Graphene has proven to be a fruitful area of scientific research, showing that atom thick layers of elements exhibit behaviours that are wildly different from their thicker counterparts. This has spurred on research into how other elements behave when slimmed down to atom thick layers, producing such materials as silicene (made from silicon) and phosphorene (made from phosphorus). Another material in the same class, stanene, made from an atom thick layer of tin, has been an active area of research due to the potential properties that it might have. Researchers have announced that they have, for the first time, created stanene in the lab and have begun to probe its theoretical properties.

Not all elements have the ability to form these 2D structures, however researchers at Stanford University in California predicted a couple of years ago that tin should be able to form a stable structure. This structure lends itself to numerous novel characteristics, chief among them the ability for an electric current to pass through it without producing waste heat. Of course without a real world example to test against such properties aren't of much use, and so the researchers have spent the last couple of years developing a method to create a stanene sheet. That research has proved fruitful as they managed to create a stanene layer on top of a supporting substrate of bismuth telluride.

The process that they used to create the stanene sheet is pretty interesting. First they create a chamber that has a base of bismuth telluride. Then they vaporize tin and introduce it into the chamber, allowing it to deposit itself onto the bismuth telluride base. It's a similar process to the one some companies use to create synthetic diamonds, called chemical vapor deposition. For something like stanene it helps the resulting sheet form uniformly, ensuring that the underlying structure is consistent. The researchers then used the resulting stanene sheet to test the properties that were modelled previously.

Unfortunately the stanene sheet produced by this method does not appear to exhibit the properties that the theoretical models predicted. The problem seems to stem from the bismuth telluride base used in the deposition process, as it's not completely inert. This means it interacts with the stanene sheet, contaminating it and potentially disrupting the topological insulator properties it should exhibit. The researchers are investigating different surfaces to mitigate this effect, so it's likely that we'll have a pure stanene sheet in the not too distant future.

Should this research prove fruitful it could open up many new avenues for materials development. Stanene has properties that would make it ideal for use in electronics, potentially dramatically increasing the efficiency of interconnects. Large scale implementations are likely still a while off, but if they can make the deposition process work then there are immediate applications for it in the world of microelectronics. Hopefully the substrate issue is sorted out soon and we'll see consumerization of the technology begin in earnest.



Vaccination So Effective Even Bees Do It.

The unequivocal effectiveness of vaccination has seen many of the world's worst and most debilitating diseases relegated to the history books. Gone are the days when millions of people were afflicted with diseases that could leave them permanently disabled, enabling many more to live long and healthy lives. Before its invention, however, developing immunity to a disease often meant enduring it, something that ranged from a mild inconvenience to a life threatening prospect. Our biology takes care of part of that, with some immunity passing down from mother to child, however we'd never witnessed that outside our branch of the tree of life. New research shows that bees in fact have their own form of natural immunity that queens pass on to their workers.


The research, conducted by scientists at Stanford University and published in PLOS Pathogens a couple of days ago, shows that queen bees immunize their worker bees against certain types of pathogens that would otherwise devastate the colony. The mechanism by which this works is actually very similar to the way many vaccines work today. Essentially the queen bee, who rarely leaves the hive, is fed royal jelly, which the worker bees produce from the pollen and nectar they bring back to the hive. This food can contain a variety of pathogens which would typically be deadly to the bees.

However the queen bee has what's called a fat body, an organ which functions similarly to our liver. Once the pathogen has been broken down in the queen bee's gut it's transferred to the fat body, where fragments of the pathogen are wrapped up in a protein called vitellogenin. These are then passed on to her offspring who, when they hatch, have immunity to pathogens that would otherwise kill them. What's interesting about this process is that it has the potential to aid current bee populations, which have been collapsing around the world over the past decade.

Whilst the root cause of the widespread colony collapse is still under intense debate, there are several potential causes which could be mitigated using this mechanism. Essentially we could devise vaccines for some of the threats that bee colonies face and introduce them by spraying them onto flowers. Then, when the pollen is brought back to the queen, all the subsequent bees would gain the immunity, protecting them from the disease. This could also make the end product better for humans, potentially eradicating problems like the botulism-causing spores that sometimes make their way into honey.

It's always interesting to see common attributes like this pop up across species, as it gives us an idea of how much of our evolutionary lineage is shared. Whilst we don't have a lot in common with bees, there are a lot of similar mechanisms at play, suggesting our evolutionary paths diverged from a common ancestor a long time ago. Something like this, whilst not exactly a revolution, does have the potential to benefit both us and our buzzing companions. Hopefully it leads to positive progress in combating colony collapse, which would be beneficial for far more than just lovers of honey.