Posts Tagged ‘research’


Google Provides Insight Into SSD Reliability.

SSDs may have been around for some time now but they're still something of an unknown. Their performance benefits are undeniable and their cost per gigabyte has plummeted year after year. However, for the enterprise space, their unknown status has led to a lot of hedged bets when it comes to their use. Most SSDs have a large portion of over-provisioned space to accommodate failed cells and wear levelling. A lot of SSDs are sold as "accelerators", meant to help speed up operations but not to hold critical data for any length of time. This all comes from a lack of good data on their reliability and failure rates, something which can only come with time and use. Thankfully Google has been gathering exactly that data and at a recent conference released a paper about their findings.


The paper focused on three different types of flash media: the consumer level MLC, the more enterprise focused SLC and the somewhere-in-the-middle eMLC. These were all custom devices, sporting Google's own PCIe interface and drivers; however, the chips they used were run-of-the-mill flash. The drives were divided into 10 categories: 4 MLC, 4 SLC and 2 eMLC. For each of these drive types several different metrics were collected over their 6 year lifetime: raw bit error rate (RBER), uncorrectable bit error rate (UBER), program/erase (PE) cycles and various failure rates (bad blocks, bad cells, etc.). All of these were then collated to provide insights into the reliability of SSDs, both in comparison to each other and to old fashioned, spinning rust drives.
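
To make the two error-rate metrics concrete, here's a minimal sketch (not from the paper, and using made-up counter values) of how RBER and UBER are typically derived from a drive's telemetry: RBER is the rate of bit errors seen before error correction, while UBER only counts the errors the controller's ECC couldn't fix.

```python
# Rough sketch of RBER/UBER calculation (hypothetical counter values,
# not figures from Google's paper).

def error_rates(bits_read, raw_bit_errors, uncorrectable_bit_errors):
    """Return (RBER, UBER) given counters read from a drive's telemetry."""
    rber = raw_bit_errors / bits_read            # errors before ECC
    uber = uncorrectable_bit_errors / bits_read  # errors ECC could not fix
    return rber, uber

# Example: 10 TB read, with some assumed error counts.
bits_read = 10 * 8 * 10**12
rber, uber = error_rates(bits_read, raw_bit_errors=2_500_000, uncorrectable_bit_errors=3)
print(f"RBER: {rber:.2e}, UBER: {uber:.2e}")
```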

Probably the most stunning finding out of the report is that, in general, SLC drives are no more reliable than their MLC brethren. For both enterprises and consumers this is a big deal as SLC based drives are often several times the price of their MLC equivalents. This should allay any fears that enterprises had about using MLC based products as they will likely be just as reliable and far cheaper. Indeed products like the Intel 750 series (one of which I'm using for big data analysis at home) provide the same capabilities as products that cost ten times as much and, based on Google's research, will last just as long.

Interestingly the biggest predictive indicator for drive reliability wasn't the RBER, UBER or even the number of PE cycles. In fact the most predictive factor of drive failure was the physical age of the drive itself. What this means is that, for SSDs, there must be other factors at play which affect drive reliability. The paper hypothesizes that this might be due to silicon aging but it doesn't appear that they had enough data to investigate it further. I'm very much interested in how this plays out as it will likely come down to the way the chips are fabricated (i.e. different types of lithography, doping, etc.), something which does vary significantly between manufacturers.
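
As a toy illustration of what "age beats PE cycles as a predictor" means in practice, here's a hedged sketch using entirely synthetic data (nothing to do with Google's dataset): if failures are driven mostly by age, that shows up as a much stronger correlation with age than with wear.

```python
# Toy illustration only: synthetic drives where failure depends mostly on age,
# showing how one factor can dominate another as a predictor.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
age_years = rng.uniform(0, 6, n)        # drive age
pe_cycles = rng.uniform(0, 3_000, n)    # program/erase cycles consumed

# Hypothetical failure model: age matters far more than wear.
p_fail = 0.02 * age_years + 0.00001 * pe_cycles
failed = rng.random(n) < p_fail

print("corr(failure, age):      ", round(np.corrcoef(failed, age_years)[0, 1], 3))
print("corr(failure, PE cycles):", round(np.corrcoef(failed, pe_cycles)[0, 1], 3))
```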

It’s not all good news for SSDs however as the research showed that whilst SSDs have an overall failure rate below that of spinning rust they do exhibit a higher UBER. What this means is that SSDs will have a higher rate of unrecoverable errors which can lead to data corruption. Many modern operating systems, applications and storage controllers are aware of this and can accommodate it but it’s still an issue for systems with mission/business critical data.

This kind of insight into the reliability of SSDs is great and just goes to show that even nascent technology can be quite reliable. The insight into MLC vs SLC is telling, showing that whilst a certain technology may exhibit one better characteristic (in this case PE cycle count) that might not be the true indicator of reliability. Indeed Google's research shows that the factors we have been watching so closely might not be the ones we need to look at. Thus we need to develop new ideas in order to better assess the reliability of SSDs so that we can better predict their failures. Then, once we have that, we can work towards eliminating those failures, making SSDs even more reliable.


D-Wave 2X Finally Demonstrates Quantum Speedup.

The possibilities that emerge from a true quantum computer are to computing what fusion is to energy generation. It's a field of active research, one in which many scientists have spent their lives, yet the promised land still seems to elude us. Just like fusion though quantum computing has seen several advancements in recent years, enough to show that it is achievable without giving us a concrete idea of when it will become commonplace. The current darling of the quantum computing world is D-Wave, the company that announced they had created functioning qubits many years ago and set about commercializing them. However they were unable to show substantial gains over simulations on classical computers for numerous problems, calling into question whether or not they'd actually created what they claimed to. Today however brings us results that demonstrate quantum speedup, on the order of 10⁸ (100 million) times faster than regular computers.


For a bit of background the D-Wave 2X (the device pictured above and the one which showed quantum speedup) can't really be called a quantum computer, even though D-Wave calls it that. Instead it's what you'd call a quantum annealer, a specific kind of computing device that's designed to solve very specific kinds of problems. This means that it's not a Turing complete device and is unable to tackle the wide range of computing tasks which we'd typically expect a computer to be capable of. The kinds of problems it can solve are optimizations, like finding local maxima/minima for a given equation with lots of variables. This is still quite useful however, which is why many large companies, including Google, have purchased one of these devices.

In order to judge whether or not the D-Wave 2X was actually doing computations using qubits (and not just some fancy tricks with regular processors) it was pitted against a classical computer running the same kind of optimization, a technique called simulated annealing. Essentially this means that the D-Wave was running against a simulated version of itself, a relatively easy challenge for a quantum annealer to beat. However identifying the problem space in which the D-Wave 2X showed quantum speedup proved tricky, with it sometimes running at about the same speed or showing only a mild (compared to expectations) speedup. This brought into question whether or not the qubits that D-Wave had created were actually functioning like they said they were. The research continued however and has just recently borne fruit.
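
For those unfamiliar with the classical side of that comparison, here's a minimal sketch of simulated annealing (a generic illustration, not the benchmark code the researchers used): it repeatedly proposes small changes to a candidate solution and accepts worse ones with a probability that shrinks as the "temperature" cools, which is the same optimization process a quantum annealer tackles in hardware.

```python
# Generic simulated annealing sketch (illustrative only, not the D-Wave benchmark).
import math
import random

def simulated_annealing(cost, neighbour, initial, steps=10_000, t_start=10.0, t_end=0.01):
    """Minimize cost() starting from `initial`, cooling from t_start to t_end."""
    current, current_cost = initial, cost(initial)
    best, best_cost = current, current_cost
    for step in range(steps):
        t = t_start * (t_end / t_start) ** (step / steps)   # geometric cooling schedule
        candidate = neighbour(current)
        delta = cost(candidate) - current_cost
        # Always accept improvements; accept worse moves with Boltzmann probability.
        if delta < 0 or random.random() < math.exp(-delta / t):
            current, current_cost = candidate, current_cost + delta
            if current_cost < best_cost:
                best, best_cost = current, current_cost
    return best, best_cost

# Example: find the minimum of a bumpy 1D function.
cost = lambda x: (x - 3) ** 2 + 2 * math.sin(5 * x)
neighbour = lambda x: x + random.uniform(-0.5, 0.5)
print(simulated_annealing(cost, neighbour, initial=random.uniform(-10, 10)))
```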

The research, published on ArXiv (not yet peer reviewed), shows that the D-Wave 2X is about 100 million times faster than its simulated counterpart. Additionally a similar amount of speedup was observed against another classical algorithm, quantum Monte Carlo. This is the kind of speedup that the researchers have been looking for and it demonstrates that the D-Wave is indeed a quantum device. This research points towards simulated annealing being the best benchmark against which to judge quantum systems like the D-Wave 2X, something which will help immensely with future research.

There's still a long way to go until we have a general purpose quantum computer, however research like this is incredibly promising. The team at Google which has been testing this device has come up with numerous improvements they want to make to it and developed systems to make it easier for others to exploit such quantum systems. It's this kind of fundamental research which will be key to the generalization of this technology and, hopefully, its inevitable commercialization. I'm very much looking forward to seeing what the next generation of these systems brings and hope the results are just as encouraging.


Researchers Create Electric Circuits in Roses.

The blending of organic life and electronics is still very much in its nascent stages. Most of the progress made in this area is thanks to the adaptability of the biological systems we're integrating with, not so much the technology. However even small progress in this field can have wide reaching ramifications, sometimes enough to dramatically reframe the problem spaces we work in. One such small step has been made recently by a team from Linköping University in Sweden who have managed to create working electronic circuits within roses.


The research, born out of the Laboratory of Organic Electronics division of the university, experimented with ways of integrating electronics into rose plants so they could monitor, and potentially influence, the growth and development of the plant. To do this they looked at infusing the rose with a polymer that, once ingested into the plant, would form a conductive wire. Attempts with many polymers simply resulted in the death of the plant as they either poisoned it or blocked the channels the plant used to carry nutrients. However one polymer, called PEDOT-S:H, was readily taken up by the roses and didn’t cause any damage to the plant. Instead it formed a thin layer within the xylem (one of the nutrient transport mechanisms within plants) that produced a conductive hydrogel wire up to 10cm long.

The researchers then used this wire to create some rudimentary circuits within the plant's xylem structure. The wire itself, whilst not an ideal conductor, was surprisingly conductive, with a contact resistance of about 10 kΩ. To put that in perspective, the resistance of human skin can be up to 10 times more than that. Using this wire as a basis the researchers then went on to create a transistor by connecting source, drain and gate probes. This transistor worked as expected and they went one step further to create logic gates, demonstrating that a NOR gate could be created using the hydrogel wire as the semiconducting medium.
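
If you haven't bumped into logic gates for a while, the significance of a working NOR gate is that it's functionally complete: every other Boolean gate can be composed from it. A quick sketch (plain software logic, nothing to do with the plant hardware itself) of NOR and of AND/OR built purely from NORs:

```python
# NOR is functionally complete: every other Boolean gate can be built from it.
# Plain software logic, just to illustrate why a working NOR gate matters.

def nor(a: bool, b: bool) -> bool:
    return not (a or b)

def not_(a: bool) -> bool:
    return nor(a, a)

def and_(a: bool, b: bool) -> bool:
    return nor(not_(a), not_(b))

def or_(a: bool, b: bool) -> bool:
    return not_(nor(a, b))

for a in (False, True):
    for b in (False, True):
        print(a, b, "NOR:", nor(a, b), "AND:", and_(a, b), "OR:", or_(a, b))
```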

This kind of technology has potential to revolutionize the way that we monitor and influence plant growth and development. Essentially what this allows us to do is create circuitry within living plants, using their own cellular structures as a basis, that can act as sensors or regulators for the various chemical processes that happen within them. Of course there’s still a lot of work to be done in this area, namely modelling the behaviour of this organic circuitry in more depth to ascertain what kind of data we can get and processes we can influence. Suffice to say it should become a very healthy area of research as there are numerous potential applications.


Lithium-Air Batteries are the Future, Still Many Years Away.

There's no question about it: batteries just haven't kept pace with technological innovation. This isn't for lack of trying however, it's just that there's no direct means of increasing energy density like there is for increasing transistor counts. So what we have are batteries that are mostly capable but which haven't seen rapid improvement as the rest of technology has rocketed away to new heights. There are however visions for the future of battery technology that, if they come to fruition, could see a revolution in battery capacity. The latest and greatest darling of the battery world is a technology called lithium-air, although it becoming a reality is likely decades away.

Pretty much every battery in a smartphone is some variant of lithium-ion which provides a much higher energy density than most other rechargeable battery types. For the most part it works well however there are some downsides, like their tendency to explode and catch fire when damaged, which have prevented them from seeing widespread use in some industries. Compared to other energy dense mediums, like gasoline for example, lithium-ion is still some 20 times less dense. This is part of the reason why it has taken auto makers so long to start bringing out electric cars: they simply couldn't store the required amount of energy to make them comparable to gasoline powered versions. Lithium-air on the other hand could theoretically match gasoline's energy density, the holy grail for battery technology.
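
For a rough sense of where a figure like "20 times less dense" comes from, here's a back-of-the-envelope sketch. The specific energies and efficiencies below are ballpark assumptions of mine, not numbers from the article, and the ratio looks very different depending on whether you count raw chemical energy or energy actually delivered to the wheels:

```python
# Back-of-the-envelope energy density comparison (all figures are rough assumptions).
GASOLINE_WH_PER_KG = 12_700   # raw chemical energy of gasoline
LI_ION_WH_PER_KG = 250        # typical lithium-ion cell, pack overheads ignored

ENGINE_EFFICIENCY = 0.25      # combustion engine, tank-to-wheels (assumed)
MOTOR_EFFICIENCY = 0.90       # electric drivetrain, battery-to-wheels (assumed)

raw_ratio = GASOLINE_WH_PER_KG / LI_ION_WH_PER_KG
usable_ratio = (GASOLINE_WH_PER_KG * ENGINE_EFFICIENCY) / (LI_ION_WH_PER_KG * MOTOR_EFFICIENCY)

print(f"Raw energy density ratio:     ~{raw_ratio:.0f}x in gasoline's favour")
print(f"Usable (to the wheels) ratio: ~{usable_ratio:.0f}x in gasoline's favour")
```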

Lithium-air relies on the oxidation (essentially rusting) of lithium in order to store and retrieve energy. This comes with a massive jump in density because, unlike other batteries, lithium-air doesn't have to contain its oxidizing agent within the battery itself. Instead it simply draws it from the surrounding air, much like a traditional gasoline powered engine does. However such a design comes with numerous challenges which need to be addressed before a useable battery can be created. Most of the research is currently focused on developing the cathode (the air-breathing electrode) as that is where the current limitations are.

That’s also where the latest round of lithium-air hype has come from.

The research out of Cambridge details a particularly novel chemical reaction which, theoretically, could be used in the creation of a lithium-air battery. The reaction was reversed and redone over 2000 times, showing that it has the potential to store and retrieve energy as you'd expect a battery to. However what they have not created, and this is something much of the coverage is getting wrong, is an actual lithium-air battery. What the scientists have found is a potential chemical reaction which could make up one of the cells of a lithium-air battery. There are numerous other issues, like the fact that their reaction only works in pure oxygen and not air, which limit the applicability of this reaction to real world use cases. I'm not saying they can't be overcome but all these things need to be addressed before you can say you've created a useable battery.

Realistically that’s not any fault of the scientists though, just the reporting that’s surrounded it. To be sure their research furthers the field of lithium-air batteries and there’s a need for more of this kind of research if we ever want to actually start making these kinds of batteries. Breathless reporting of progressions in research as actual, consumer ready technology though doesn’t help and only serves to foster the sense that the next big thing is always “10 years away”. In this case we’re one step closer, but the light is at the end of a very long tunnel when it comes to a useable lithium-air battery.


Quantum Computing Comes to Silicon.

Traditional computing is bound up in binary data, the world of zeroes and ones. This constraint was originally born out of an engineering limitation, designed to ensure that the different states could be easily represented by differing voltage levels. This hasn't proved to be much of a limiting factor in the progress that computing has made, however there are different styles of computing which make use of more than just those zeroes and ones. The most notable is quantum computing, which is able to represent an exponential number of states depending on the number of qubits (analogous to transistors) that the quantum chip has. Whilst there have been some examples of quantum computers hitting the market, even if their quantum-ness is still in question, they are typically based on exotic materials, meaning mass production of them is tricky. This could change with the latest research to come out of the University of New South Wales as they've made an incredible breakthrough.


Back in 2012 the team at UNSW demonstrated that they could build a single qubit in silicon. This by itself was an amazing discovery as previously created qubits were usually reliant on materials like niobium cooled to superconducting temperatures to achieve their quantum state. However a single qubit isn't exactly useful on its own and so the researchers were tasked with getting their qubits talking to each other. This is a lot harder than you'd think as qubits don't communicate in the same way that regular transistors do, so traditional techniques for connecting things in silicon won't work. After 3 years' worth of research UNSW's quantum computing team has finally cracked it and allowed two qubits made in silicon to communicate.

This has allowed them to build a quantum logic gate, the fundamental building block for a larger scale quantum computer. One thing that will be interesting to see is how their system scales out with additional qubits. It's one thing to get two qubits talking together, indeed there have been several (non-silicon) examples of that in the past, however as you scale up the number of qubits things start to get a lot more difficult. This is because larger numbers of qubits are more prone to quantum decoherence and typically require additional circuitry to overcome it. Whilst they might be able to mass produce a chip with a large number of qubits, it might not be of any use if the qubits can't stay in coherence.
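
To give a feel for what a two-qubit logic gate actually does, here's a textbook sketch of the CNOT gate acting on qubit state vectors. The choice of CNOT and the code below are purely illustrative and aren't UNSW's specific silicon implementation; the point is that two-qubit gates like this, together with single-qubit operations, are the building blocks a full quantum computer needs.

```python
# Textbook two-qubit CNOT gate acting on state vectors (illustrative only).
import numpy as np

# Basis order: |00>, |01>, |10>, |11> (first qubit is the control).
CNOT = np.array([
    [1, 0, 0, 0],
    [0, 1, 0, 0],
    [0, 0, 0, 1],
    [0, 0, 1, 0],
], dtype=float)

ket0 = np.array([1.0, 0.0])
ket1 = np.array([0.0, 1.0])
plus = (ket0 + ket1) / np.sqrt(2)          # superposition on the control qubit

state = np.kron(plus, ket0)                # control in superposition, target |0>
entangled = CNOT @ state                   # Bell state: (|00> + |11>) / sqrt(2)

print(np.round(entangled, 3))              # amplitudes ~0.707 on |00> and |11>
```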

It will be interesting to see what applications their particular kind of quantum chip will have once they build a larger scale version of it. Currently the commercially available quantum computers from D-Wave are limited to a specific problem space called quantum annealing and, as of yet, have failed to conclusively prove that they’re achieving a quantum speedup. The problem is larger than just D-Wave however as there is still some debate about how we classify quantum speedup and how to properly compare it to more traditional methods. Still this is an issue that UNSW’s potential future chip will have to face should it come to market.

We’re still a long way off from seeing a generalized quantum computer hitting the market any time soon but achievements like those coming out of UNSW are crucial in making them a reality. We have a lot of investment in developing computers on silicon and if those investments can be directly translated to quantum computing then it’s highly likely that we’ll see a lot of success. I’m sure the researchers are going to have several big chip companies knocking down their doors to get a license for this tech as it really does have a lot of potential.


3D Printed Prosthesis Regenerates Nerves.

Nerve damage has almost always been permanent. For younger patients there's hope for full recovery after an incident but as we get older the ability to repair nerve damage decreases significantly. Indeed by the time we reach our 60s the best we can hope for is what's called "protective sensation", the ability to determine things like hot from cold. The current range of treatments is mostly limited to grafts, often using nerves from the patient's own body to repair the damage, however even those have limited success in practice. That could all be set to change with the development of a process which can produce nerve regeneration conduits using 3D scanning and printing.


The process was developed by a collaboration of numerous scientists from the following institutions: University of Minnesota, Virginia Tech, University of Maryland, Princeton University, and Johns Hopkins University. The research builds upon one current cutting edge treatment which uses special structures to trigger regeneration, called nerve guidance conduits. Traditionally such conduits could only be produced in simple shapes, meaning they were only able to repair nerve damage in straight lines. This new treatment however can work on any arbitrary nerve structure and has proven to work in restoring both motor and sensory function in severed nerves both in-vitro (in a petri dish) and in-vivo (in a living thing).

How they accomplished this is really quite impressive. First they used a 3D scanner to reproduce the structure of the nerve they're trying to regenerate, in this case the sciatic nerve (pictured above). Then they used the resulting model to 3D print a nerve guidance conduit that was the exact size and shape required. This was then implanted into a rat which had a 10mm gap in its sciatic nerve (far too long to be sewn back together). The conduit successfully triggered the regeneration of the nerve and after 10 weeks the rat showed a vastly improved ability to walk again. Since this approach had only been verified on linear nerves before, it shows great promise for regenerating much more complicated nerve structures, like those found in us humans.

The great thing about this is that it can be used for any arbitrary nerve structure. Hospitals equipped with such a system would be able to scan the injury, print the appropriate nerve guide and then implant it into the patient all on site. This could have wide reaching ramifications for the treatment of nerve injuries, allowing far more to be treated and without the requisite donor nerves needing to be harvested.

Of course this treatment has not yet been tested in humans but the FDA has approved similar versions of this treatment in years past which have proven to be successful. With that in mind I’m sure that this treatment will prove successful in a human model and from there it’s only a matter of time before it finds its way into patients worldwide. Considering how slow progress has been in this area it’s quite heartening to see dramatic results like this and I’m sure further research into this area will prove just as fruitful.


Growing Up on a Farm Prevents Asthma.

I grew up in a rural community just outside of Canberra. Whilst we had a large plot of land out there we didn't have a farm, but many people around us did, including a friend of mine up the road. We'd often get put to work by their parents when we wandered up there, tending to the chickens or helping wrangle the cows when they got unruly. Thinking back to those times it was interesting to note that barely anyone I knew who lived out there suffered from asthma. Indeed the only people I knew who did were a couple of kids at school who had grown up elsewhere. As it turns out there's a reason for this: "farm dust" has been shown to have a protective effect when it comes to allergies, meaning farm kids are far less likely to develop conditions like asthma.


There's been research in the past that identified a correlative relationship between living on a farm and a resistance to developing allergies and asthma. Researchers at the Vlaams Instituut voor Biotechnologie (VIB) in Belgium took this idea one step further and looked for what the cause might be. They did this by exposing young mice to farm dust (low doses of a bacterial endotoxin called lipopolysaccharide) and then later checked their rates of allergy and asthma development against a control group. The research showed that the mice exposed to farm dust did not develop severe allergic reactions at all whilst the control group did at the expected rate.

The researchers also discovered the mechanism by which farm dust provides its benefits. When developing lungs are exposed to farm dust the mucus membranes in the respiratory tract react much less severely to common allergens than those without exposure do. This is because when the dust reacts with the mucus membranes the body produces more of a protein called A20 which is responsible for the protective effects. Indeed when the researchers deactivated the protein the protective effects were lost, proving that it was responsible for the reduction in allergen reactivity.

What was really interesting however was the genetic profiling that the VIB researchers did of 2000 farm kids after their initial research with mice. They found that the vast majority of children who grew up on farms had the protection granted to them by their increased A20 production. Not all of the children had it however, and those who didn't were found to have a genetic variant which caused the A20 protein to malfunction. This is great news because it means that the mouse model is an accurate one and can be used in the development of treatments for asthma and other A20 related conditions.

Whilst this doesn’t mean a month long holiday at the farm will cure you of your asthma (this only works for developing lungs, unfortunately) it does provide a fertile area for further research. This should hopefully lead to the swift development of a vaccine for asthma, a condition that has increased in prevalence over the past couple decades. Hopefully this will also provide insight into other allergies as whilst they might not have the exact same mechanism for action there’s potential for other treatment avenues to be uncovered by this research.


Age Related Cognitive Motor Decline Starts at 24, But It’s Not All Bad News.

Professional eSports teams are almost entirely made up of young individuals. It's an interesting phenomenon to observe as it's quite contrary to many other sports. Still the age drop off for eSports players is far earlier and more drastic than in traditional sports, with long term players like Evil Geniuses' Fear, who's the ripe old age of 27, often referred to as The Old Man. The commonly held belief is that, past your mid twenties, your reaction times and motor skills are in decline and you'll be unable to compete with the new upstarts and their razor sharp reflexes. New research in this area may just prove this to be true, although it's not all over for us oldies who want to compete with our younger compatriots.


The research comes out of the University of California and was based on data gathered from StarCraft 2 replays. The researchers gathered participants aged from 16 to 44 and asked them to submit replays to their website, SkillCraft. These replays then went through some standardization and analysis using the wildly popular replay tool SC2Gears. With this data in hand the researchers were then able to test some hypotheses about how age affects cognitive motor functions and whether or not domain experience, i.e. how long someone had been playing a game for, influenced their skill level. Specifically they looked to answer 3 questions:

  1. Is there age-related slowing of Looking-Doing Latency?
  2. Can expertise directly ameliorate this decline?
  3. When does this decline begin?

In terms of the first question they found unequivocally that, as we age, our motor skills start to decline. Previous studies in cognitive motor decline were focused on more elderly populations, with the data then used to extrapolate back to estimate when cognitive decline set in. Their data points to onset happening much earlier than previous research suggests, with their estimate pointing to 24 being the age when cognitive motor functions begin to take a hit. What's really interesting though is the second question: can us oldies overcome the motor skill gap with experience?
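
For context, "looking-doing latency" is essentially the gap between when a player's attention shifts to something new (e.g. the screen moving to a fresh location) and when they first act on it. Here's a heavily hedged sketch of how one might compute it from replay-style event logs; the event structure and field names below are invented for illustration and are not SkillCraft's or SC2Gears' actual data model:

```python
# Hypothetical looking-doing latency calculation from replay-style events.
# The event structure here is invented for illustration, not SC2Gears' format.
from statistics import median

events = [
    {"time_ms": 1000, "type": "camera_move"},   # player "looks" at a new location
    {"time_ms": 1240, "type": "action"},        # first "do" after looking
    {"time_ms": 5000, "type": "camera_move"},
    {"time_ms": 5180, "type": "action"},
]

def looking_doing_latencies(events):
    """Pair each camera move with the first subsequent action and return the gaps."""
    latencies, pending_look = [], None
    for ev in sorted(events, key=lambda e: e["time_ms"]):
        if ev["type"] == "camera_move":
            pending_look = ev["time_ms"]
        elif ev["type"] == "action" and pending_look is not None:
            latencies.append(ev["time_ms"] - pending_look)
            pending_look = None
    return latencies

print("median looking-doing latency (ms):", median(looking_doing_latencies(events)))
```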

Whilst the study didn’t find any evidence to directly support the idea that experience can trump age related cognitive decline it did find that older players were able to hold their own against younger players of similar experience. Whilst the compensation mechanisms weren’t directly researched they did find evidence of older players using cognitive offloading tricks in order to keep their edge. Put simply older players would do things that didn’t require a high cognitive load, like using less complex units or strategies, in order to compete with younger players. This might not support other studies which have shown that age related decline can be combatted with experience but it does provide an interesting avenue for additional research.

As someone who's well past the point where age related decline has supposedly set in, my experience definitely lines up with the research. Whilst younger players might have an edge on me in terms of reaction speed, my decades' worth of gaming experience is more than enough to make up the gap. Indeed I've also found that having a breadth of gaming experience, across multiple platforms and genres, often gives me insights that nascent gamers are lacking. Of course the difference between me and the professionals is a gap that I'll likely never close but that doesn't matter when I'm stomping young'uns in pub games.


Standing 2 Hours a Day Shows Potential Benefits.

You don't have to look far to find article after article about how sitting down is bad for your health. Indeed whilst many of these posts boil down to simple parroting of the same line and then appealing to people to adopt a more active lifestyle, the good news is that science is with them, at least on one point. There's a veritable cornucopia of studies out there that support the idea that a sedentary lifestyle is bad for you, something which is not just limited to sitting at work. However the flip side, the idea that standing is good for you, is not something that's currently supported by a wide body of scientific evidence. Logically it follows that it would be the case but science isn't about logic alone.


The issue at hand here mostly stems from the fact that, whilst we have longitudinal studies on sedentary lifestyles, we don't have a comparable body of data for your average Joe who's done nothing but change from mostly sitting to mostly standing. This means that we don't understand the parameters in which standing is beneficial and when it's not, so a broad recommendation that "everyone should use a standing desk" isn't something that can currently be made in good faith. However preliminary studies are showing promise in this area, like new research coming out of our very own University of Queensland.

The study equipped some 780 participants, aged between 36 and 80, with activity monitors that recorded their activity over the course of a week. The monitors allowed the researchers to determine when participants were engaging in sedentary activities, such as sleeping or sitting, or something more active like standing or exercising. In addition they took blood samples and measured a number of other key indicators. They then used this data to glean insights as to whether or not a more active lifestyle was associated with better health indicators.

As it turned out, the more active participants, those who stood on average more than 2 hours a day above their sedentary counterparts, were associated with better health indicators like lower blood sugar levels (2%) and lower triglycerides (11%). That in and of itself isn't proof that standing is better for you, indeed the study makes a point of saying that it can't draw that conclusion, however preliminary evidence like this is useful in determining whether or not further research in this field is worthwhile. Based on these results there's definitely some more investigation to be done, mostly focused on isolating the key areas required to support the current thinking.

It might not sound like this kind of research did anything we didn't already know about (being more active means you'll be more healthy? Shocking!) however validating base assumptions is always a worthwhile exercise. This research, whilst based on short term data with inferred results, provides solid grounds with which to proceed towards a much more controlled and rigorous study. Whilst results from further study might not be available for a while this at least serves as another arrow in the quiver for encouraging everyone to adopt a more active lifestyle.


An Artificial Brain in Your Pocket.

Artificial neural networks, a computational framework that mimics biological learning processes using statistics and large data sets, are behind many of the technological marvels of today. Google is famous for employing some of the largest neural networks in the world, powering everything from their search recommendations to their machine translation engine. They're also behind numerous other innovations like predictive text input, voice recognition software and recommendation engines that use your previous preferences to suggest new things. However these networks aren't exactly portable, often requiring vast data centers to produce the kinds of outputs we expect. IBM is set to change that however with their TrueNorth architecture, a truly revolutionary idea in computing.


The chip, 16 of which are shown above mounted on a DARPA SyNAPSE board, is most easily thought of as a massively parallel chip comprising some 4096 processing cores. Each of these cores contains 256 programmable synapses, totalling around 1 million per chip. Interestingly, whilst the chip's transistor count is on the order of 5.4 billion, which for comparison is just over double that of Intel's current offerings, it uses a fraction of the power you'd expect it to: a mere 70 milliwatts. That kind of power consumption means that chips like these could make their way into portable devices, something that no one would really expect with transistor counts that high.

But why, I hear you asking, would you want a computerized brain in your pocket?

IBM's TrueNorth chip is essentially the second half of the two part system that is a neural network. The first step to creating a functioning neural network is training it on a large dataset; the larger the set, the better the network's capabilities. This is why large companies like Google and Apple can create useable products out of them: they have huge troves of data with which to train them. Then, once the network is trained, you can set it loose upon new data and have it give you insights and predictions, and that's where a chip like TrueNorth comes in. Essentially you'd use a big network to form the model and then imprint it on a TrueNorth chip, making it portable.
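
To make the train-then-deploy split concrete, here's a minimal sketch of the general workflow in plain NumPy (a generic illustration only; TrueNorth's actual programming model and toolchain are quite different and aren't shown here): a model is trained where the data and compute live, then only the frozen weights are shipped to the device that runs inference.

```python
# Generic train-on-server, run-inference-on-device sketch (illustrative only;
# this is plain NumPy, not TrueNorth's actual programming model).
import numpy as np

rng = np.random.default_rng(0)

# --- "Data center" side: train a tiny one-layer network on lots of data ---
X = rng.normal(size=(10_000, 4))                      # training inputs
y = (X @ np.array([0.5, -1.0, 2.0, 0.25]) > 0) * 1.0  # labels from a hidden rule

w = np.zeros(4)
for _ in range(200):                                   # simple gradient descent
    pred = 1 / (1 + np.exp(-(X @ w)))
    w -= 0.1 * (X.T @ (pred - y)) / len(y)

np.save("model_weights.npy", w)                        # "ship" the trained weights

# --- "Device" side: load the frozen weights and run inference only ---
w_device = np.load("model_weights.npy")
new_sample = rng.normal(size=4)
prob = 1 / (1 + np.exp(-(new_sample @ w_device)))
print(f"predicted probability: {prob:.2f}")
```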

The implications of this probably wouldn't be immediately apparent for most, the services would likely retain their same functionality, but it would eliminate the requirement for an always-on Internet connection to support them. This could open up a new class of smart devices with capabilities that far surpass anything we currently have, like a pocket translator that works in real time. The biggest issue I see to its adoption though is cost, as a transistor count that high doesn't come cheap: you're either relying on cutting edge lithography or accepting significantly reduced wafer yields. Both of these lead to high priced chips, likely costing even more than current consumer CPUs.

Like all good technology however this one is a little way off from finding its way into our hands: whilst the chip exists, the software stack required to use it is still under active development. It might sound like a small thing however this chip behaves in a way that's completely different to anything that's come before it. Once that's been settled the floodgates can be opened to the wider world and then, I'm sure, we'll see a rapid pace of innovation that could spur on some wonderful technological marvels.