In the last decade there’s been a move away from raw CPU speed as an indicator of performance. Back when single cores were the norm it was an easy way to judge which CPU would be faster in a general sense, but the switch to multiple cores threw this into question. Partly this comes from architecture decisions and software’s ability to make use of multiple cores, but it also came hand in hand with stalling CPU speeds. This is mostly a limitation of current technology, as faster switching means more heat, something most processors cannot handle more of. That could be set to change, however, as research out of IBM’s Thomas J. Watson Research Center proposes a new way of constructing transistors that overcomes this limitation.
Current day processors, whether they be the monsters powering servers or the small ones ticking away in your smartwatch, are all constructed through a process called photolithography. In this process a silicon wafer is covered in a photosensitive chemical and then exposed to light through a mask. This imprints the CPU pattern onto the blank silicon substrate, creating all the circuitry of a CPU, and it’s what allows us to pack billions upon billions of transistors into a space little bigger than your thumbnail. However it has limitations related to things like the wavelength of light used (shorter wavelengths are needed for smaller features) and the purity of the substrate. IBM’s research takes a very different approach, using carbon nanotubes as the transistor material and creating features by aligning and placing them rather than etching them in.
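The wavelength limitation mentioned above is often summarised by the Rayleigh criterion. Here’s a rough sketch of that relationship; the k1 factor and numerical aperture are typical textbook values, not any particular fab’s numbers:

```python
# Rough sketch of the Rayleigh resolution criterion used in photolithography:
# smallest printable feature ~ k1 * wavelength / NA, where k1 is a
# process-dependent factor and NA is the numerical aperture of the optics.
def min_feature_nm(wavelength_nm, numerical_aperture, k1=0.25):
    """Approximate smallest printable half-pitch, in nanometres."""
    return k1 * wavelength_nm / numerical_aperture

# Illustrative numbers: 193 nm ArF immersion lithography with NA ~1.35
print(round(min_feature_nm(193, 1.35), 1))  # ~35.7 nm
```

Shorter wavelengths (or a larger aperture, or clever tricks that push k1 down) are the only levers in that formula, which is why feature sizes are so tightly coupled to the light source used.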
Essentially what IBM does is take a heap of carbon nanotubes, which in their native form are a large unordered mess, and align them on top of a silicon wafer. When the nanotubes are placed correctly, as they are in the picture shown above, they form a transistor. Additionally the researchers have devised a method of attaching electrical contacts to these newly formed transistors in such a way that their electrical resistance is independent of their width. This decouples the traditional link between shrinking contacts and increasing heat, allowing them to greatly reduce the size of the contacts and potentially enabling a boost in CPU frequency.
The main issue such technology faces is that it is radically different from the way we manufacture CPUs today. There’s a lot of investment in current lithography-based fabs, and this method likely can’t make use of that investment. So the challenge these researchers face is creating a scalable method of producing chips based on this technology, hopefully one that can be adapted for use in current fabs. This is why you’re not likely to see processors based on this technology for some time: probably not for another 5 years at least, according to the researchers.
What it does show though is that there is potential for Moore’s Law to continue for a long time into the future. It seems whenever we brush up against a fundamental limitation, one that has plagued us for decades, new research rears its head to show that it can be tackled. There’s every chance that carbon nanotubes won’t become the new transistor material of choice but insights like these are what will keep Moore’s Law trucking along.
For as long as we’ve been using semiconductors there’s been one material that’s held the crown: silicon. As one of the most abundant elements on Earth, with semiconductor properties that make it perfectly suited to mass manufacture, it’s found its way into nearly all of the world’s electronics, which contain a silicon brain within them. Silicon isn’t the only material capable of performing this function; indeed there’s a whole smorgasbord of other semiconductors used for specific applications, however the amount of research poured into silicon means few of them are as mature. However with our manufacturing processes shrinking we’re fast approaching the limit of what silicon, in its current form, is capable of, and that may pave the way for a new contender for the semiconductor crown.
The road to the current 14nm manufacturing process has been a bumpy one, as the heavily delayed release of Intel’s Broadwell can attest to. Mostly this was due to the low yields that Intel was getting with the process, which is typical for die shrinks; however solving the issue proved to be more difficult than they had originally thought. This is likely due to the challenges Intel faced in making their FinFET technology work at the smaller scale, as they had only just introduced it in the previous 22nm generation of CPUs. This process will likely still work down at the 10nm level (as Samsung has just proven today) but beyond that there’s going to need to be a fundamental shift in order for the die shrinks to continue.
For this Intel has alluded to new materials which, keen observers have pointed out, won’t be silicon.
The likeliest candidate to replace silicon is a material called Indium Gallium Arsenide (InGaAs). It has long been used in photodetectors and high frequency applications like microwave and millimeter wave communications. Transistors made from this substrate are called High-Electron-Mobility Transistors which, in simpler terms, means they can be made smaller, switch faster and be packed more densely into the same space. Whilst the foundries might not yet be able to create these kinds of transistors at scale, the fact that they’ve been manufactured at some scale for decades makes them a more viable alternative than some of the other, more exotic materials.
There is potential for silicon to hang around for another die shrink or two if Extreme Ultraviolet (EUV) lithography takes off, however that method has been plagued with developmental issues for some time now. The change from UV lithography to EUV isn’t a trivial one, as EUV light can’t be focused with lenses and must be directed with mirrors, since most materials will simply absorb it. Couple that with the rather large difficulty of generating EUV light in the first place (the process is rather inefficient) and looking at new substrates becomes much more appealing. Still, if TSMC, Intel or Samsung can figure it out then there’d be a bit more headroom for silicon, although maybe not enough to offset the investment cost.
Whatever direction the semiconductor industry takes one thing is very clear: they all have plans that extend far beyond the current short term to ensure that we can keep up the rapid pace of technological development that we’ve enjoyed for the past half century. I can’t tell you how many times I’ve heard others scream that the next die shrink would be our last, only to see some incredibly innovative solution come out soon after. The transition to InGaAs or EUV shows that we’re prepared for at least the next decade, and I’m sure before we hit the limit of that tech we’ll be seeing the next novel innovation that will continue to power us forward.
Anyone who’s had a passing interest in computers has likely run up against the notion of Moore’s Law, even if they don’t know the exact name for it. Moore’s Law is a simple idea: approximately every 2 years the amount of computing power that can be bought cheaply doubles. This often takes the more common form of “computing power doubles every 18 months” (thanks to Intel executive David House) or, for those uninitiated with the law, computers get obsoleted faster than any other product in the world. Since Gordon E. Moore first stated the idea back in 1965 it’s held up extremely well, and for the most part we’ve beaten the predictions pretty handily.
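It’s worth pausing on just how fast that doubling compounds. A quick sketch, using the commonly cited transistor count of the 1971 Intel 4004 as a starting point and an idealised strict 2-year doubling:

```python
# Moore's Law as stated: transistor counts double roughly every 2 years
# (House's variant puts performance doubling at 18 months).
def projected_count(initial, years, doubling_period_years=2.0):
    """Projected transistor count after `years` of steady doubling."""
    return initial * 2 ** (years / doubling_period_years)

# Starting from ~2,300 transistors (Intel 4004, 1971), a strict 2-year
# doubling over 40 years predicts roughly 2.4 billion transistors,
# in the same ballpark as real high-end CPUs circa 2011.
print(int(projected_count(2300, 40)))  # 2411724800
```

Twenty doublings turn a few thousand transistors into a few billion, which is why any claimed ceiling on the law attracts so much attention.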
Of course there’s been a lot of research into the upper limits of Moore’s Law, as with anything exponential it seems impossible for it to continue for an extended period of time. Indeed current generation processors built on the standard 22nm lithography process were originally thought to be one such barrier, because the gate leakage at that scale was thought to be insurmountable. Of course new technologies enabled this process to be used, and we’ve still got another 2 generations of lithography processes ahead of us before current technology suggests another barrier.
More recently however researchers believe they’ve found the real upper limit after creating a transistor that consists of only a single atom:
Transistors — the basic building block of the complex electronic devices around you. Literally billions of them make up that Core i7 in your gaming rig and Moore’s law says that number will double every 18 months as they get smaller and smaller. Researchers at the University of New South Wales may have found the limit of this basic computational rule however, by creating the world’s first single atom transistor. A single phosphorus atom was placed into a silicon lattice and read with a pair of extremely tiny silicon leads that allowed them to observe both its transistor behavior and its quantum state. Presumably this spells the end of the road for Moore’s Law, as it would seem all but impossible to shrink transistors any farther. But, it could also point to a future featuring miniaturized solid-state quantum computers.
It’s true that this seems to suggest an upper limit to Moore’s Law; I mean, if transistors can’t get any smaller then how can the law be upheld? The answer is simple: the size of transistors isn’t actually the limitation behind Moore’s Law, the cost of their production is.
You see most people are only familiar with the basic “computing power doubles every 18 months” version of Moore’s Law, and many draw a link between that idea and the size of transistors. Indeed size is definitely a factor, as smaller transistors mean we can squeeze more of them into the same space, but this overlooks the fact that modern CPU dies haven’t really increased in size at all in the past decade. Additionally new techniques like 3D CPUs (currently all the transistors on a CPU sit in a single plane) have the potential to grow the number of transistors dramatically without needing the die shrinks that we currently rely on.
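The two levers described above can be put in rough numbers. This is an idealised back-of-the-envelope sketch, not a real process model: it assumes transistor count on a fixed die area scales with the inverse square of the feature size, and that stacking device layers multiplies the count directly.

```python
# Idealised density sketch: on a fixed die area, transistor count grows
# with the inverse square of the feature size; 3D stacking multiplies
# that count by the number of device layers.
def relative_density(old_nm, new_nm, layers=1):
    """Transistor count multiplier from a die shrink plus optional stacking."""
    return (old_nm / new_nm) ** 2 * layers

# A 22 nm -> 14 nm shrink alone:
print(round(relative_density(22, 14), 2))  # ~2.47x
# The same shrink with two stacked device layers:
print(round(relative_density(22, 14, layers=2), 2))  # ~4.94x
```

In this toy model a single extra layer is worth roughly as much as a full shrink, which is why stacking is attractive once shrinks get expensive.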
So whilst the fundamental limit on how small a transistor can be might be a factor that affects Moore’s Law, it by no means determines the upper limit; the cost of adding those extra transistors does. Indeed every time we believe we’ve discovered yet another limit, another technology gets developed or improved to the point where Moore’s Law becomes applicable again. This doesn’t negate work like that in the linked article above, as discovering potential limitations better equips us to deal with them. For the next decade or so though I’m very confident that Moore’s Law will hold up, and I see no reason why it won’t continue on for decades afterward.