Anyone who’s had a passing interest in computers has likely run up against the notion of Moore’s Law, even if they don’t know the exact name for it. Moore’s Law is a simple idea: approximately every 2 years the amount of computing power that can be bought cheaply doubles. This often takes the more common form of “computing power doubles every 18 months” (thanks to Intel executive David House) or, for those uninitiated with the law, the observation that computers become obsolete faster than any other product in the world. Since Gordon E. Moore first stated the idea back in 1965 it’s held up extremely well, and for the most part we’ve beaten the predictions pretty handily.
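To get a feel for how dramatic those two doubling periods are, here’s a minimal sketch in Python. The starting transistor count, the 10-year horizon and both doubling periods are illustrative assumptions, not figures from any real roadmap:

```python
# Illustrative sketch: projecting counts under an exponential doubling rule.
# The starting count, horizon and doubling periods are assumptions.

def project_count(initial_count: float, years: float, doubling_period: float) -> float:
    """Project a count forward assuming it doubles every `doubling_period` years."""
    return initial_count * 2 ** (years / doubling_period)

# Starting from 1 billion transistors, project 10 years ahead.
every_24_months = project_count(1e9, 10, 2.0)  # 2**5  -> ~32 billion
every_18_months = project_count(1e9, 10, 1.5)  # 2**6.67 -> ~102 billion
print(f"24-month doubling: {every_24_months:.2e}")
print(f"18-month doubling: {every_18_months:.2e}")
```

The gap between the two common statements of the law compounds quickly: over a decade, the 18-month version predicts roughly three times as many transistors as the 24-month version.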
Of course there’s been a lot of research into the upper limits of Moore’s Law; as with anything exponential, it seems impossible for it to continue for an extended period of time. Indeed, the current generation of processors, built on the standard 22nm lithography process, was originally thought to be one such barrier, because the gate leakage at that scale was expected to be insurmountable. Of course new technologies enabled this process to be used, and we’ve still got another 2 generations of lithography processes ahead of us before current technology suggests another barrier.
More recently, however, researchers believe they’ve found the real upper limit after creating a transistor that consists of only a single atom:
Transistors — the basic building block of the complex electronic devices around you. Literally billions of them make up that Core i7 in your gaming rig and Moore’s law says that number will double every 18 months as they get smaller and smaller. Researchers at the University of New South Wales may have found the limit of this basic computational rule however, by creating the world’s first single atom transistor. A single phosphorus atom was placed into a silicon lattice and read with a pair of extremely tiny silicon leads that allowed them to observe both its transistor behavior and its quantum state. Presumably this spells the end of the road for Moore’s Law, as it would seem all but impossible to shrink transistors any further. But, it could also point to a future featuring miniaturized solid-state quantum computers.
It’s true that this seems to suggest an upper limit to Moore’s Law; after all, if transistors can’t get any smaller, then how can the law be upheld? The answer is simple: the size of transistors isn’t actually the limitation of Moore’s Law, the cost of their production is.
You see, most people are only familiar with the basic “computing power doubles every 18 months” version of Moore’s Law, and many draw a link between that idea and the size of transistors. Size is definitely a factor, since smaller transistors mean we can squeeze more of them into the same space, but this overlooks the fact that modern CPU dies haven’t really increased in size at all in the past decade. Additionally, new techniques like 3D CPUs (currently all the transistors on a CPU sit in a single plane) have the potential to grow the number of transistors dramatically without needing the die shrinks that we currently rely on.
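The reason die shrinks have carried so much of the load is simple geometry: if each transistor occupies an area proportional to the square of the feature size, shrinking the process node packs more transistors into the same die. As a rough back-of-the-envelope sketch (the node sizes below are just examples, and real processes don’t scale this ideally):

```python
# Rough sketch of ideal density scaling: area per transistor ~ feature_size^2,
# so density gain from a shrink ~ (old / new)^2. Real-world gains are smaller.

def relative_density_gain(old_nm: float, new_nm: float) -> float:
    """Ideal transistor-density multiplier from shrinking old_nm -> new_nm."""
    return (old_nm / new_nm) ** 2

# Example: a 32nm -> 22nm shrink ideally roughly doubles density,
# with no change to the die size at all.
gain = relative_density_gain(32, 22)
print(f"Ideal density gain: {gain:.2f}x")  # ~2.12x
```

This is why the same-sized die can keep honouring Moore’s Law generation after generation, and also why a hard floor on transistor size pushes attention toward alternatives like 3D stacking rather than ending the law outright.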
So whilst the fundamental limit on how small a transistor can be might be a factor that affects Moore’s Law, it by no means determines the upper limit; the cost of adding those extra transistors does. Indeed, every time we believe we’ve discovered yet another limit, another technology gets developed or improved to the point where Moore’s Law becomes applicable again. This doesn’t negate work like that in the linked article above, as discovering potential limitations like these better equips us for dealing with them. For the next decade or so, though, I’m very confident that Moore’s Law will hold up, and I see no reason why it won’t continue for decades afterward.