The end of Moore's Law
A law that has shaped our world for decades will soon be broken. It is not a law of men or of nature, but of technology. It is called “Moore’s Law” and it has affected your life more than you might think.
In 1965 Gordon Moore was a young engineer, leading the research and development arm of Fairchild Semiconductor, a pioneer in the early days of the computing industry.
When asked by a magazine to predict how the industry would evolve over the next decade, Moore observed that integrated circuits (which we now call “microchips”) were doubling in capacity every two years.
This meant the computing power of those microchips was doubling while the cost was held steady. He predicted this would continue for “at least 10 years”.
It turns out he was wrong. The trend has continued exponentially for 50 years and has radically changed the paths of both technology and society. Billions of people now carry around more computing power in their pockets than was used to land a man on the moon. Computing is now intertwined with every facet of modern life.
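The scale of that 50-year run is worth spelling out. A minimal sketch of the compounding, using only the figures in the article (a doubling every two years, sustained from 1965 to 2015 — the round dates are an assumption for the sake of arithmetic):

```python
# Compound growth implied by Moore's observation:
# chip capacity doubles every two years.
start_year, end_year = 1965, 2015
doubling_period = 2  # years per doubling, per Moore

doublings = (end_year - start_year) // doubling_period
growth_factor = 2 ** doublings

print(doublings)      # 25 doublings in 50 years
print(growth_factor)  # 33554432 -- a ~33.5-million-fold increase
```

Twenty-five doublings multiply capacity more than 33 million times over, which is why a law that sounds modest on paper ended up putting a moon landing’s worth of computing in your pocket.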
Intel, the corporation that Moore co-founded in 1968, has been a fundamental part of this revolution and today holds around 80% of the global market for PC processors. After an unbroken decade of accuracy, Moore’s prediction was renamed “Moore’s Law”, and it has held ever since.
Until last week, that is, when Intel announced that it will not be manufacturing its next range of chips using 10 nanometre (nm) transistors, but will instead stick with its 14nm transistors until 2017.
In plain English, this means Intel is having a lot of trouble cramming more computing power into the same amount of space, and that doing so will take longer than two years. In short: Moore’s Law will falter for the first time since 1965.
You can’t really blame Intel. A human hair is about 60,000nm in diameter and a red blood cell is about 7,500nm wide. The fact that we’re already able to manufacture chips with 14nm transistors is a miracle of science.
Ironically, Moore’s Law has run headlong into the fundamental laws of physics. The gap between individual silicon atoms (technically, between their nuclei) is about 0.5nm. If an entire transistor is only 10nm wide, you start to run out of atoms with which to work.
A transistor is a tiny switch made up of several separate parts. Those parts cannot be just one atom wide, or they will not be able to conduct enough electricity to function. We’re still a little way from that point, but we’re rapidly approaching it.
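The scale argument above comes down to two divisions. A back-of-envelope sketch, using only the figures quoted in the article:

```python
# Back-of-envelope scale comparison from the figures in the text.
hair_nm = 60_000     # diameter of a human hair
transistor_nm = 14   # Intel's current transistor size
atom_gap_nm = 0.5    # approximate spacing between silicon nuclei

# Roughly how many of today's transistors fit across one hair?
print(hair_nm / transistor_nm)  # ~4285

# How many atoms span a planned 10nm transistor?
print(10 / atom_gap_nm)         # 20.0 -- only about 20 atoms to work with
```

Twenty atoms is not much room to carve a switch with several distinct parts, which is the wall the industry is now approaching.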
IBM may come (temporarily) to the rescue of Moore’s Law with its latest prototype chips, which use 7nm transistors, half the size of those in Intel’s current chips. But this technology still needs to be proven viable for mass production, and is two or more years away from that point.
Does this mean that computers will suddenly become more expensive? Not really, no. Because of its abundance, we have been extremely wasteful with processing power. But, as smartphones prove, we can squeeze enormous performance gains out of existing chip technology without further miniaturisation.
And besides, thanks to the internet, the centre of gravity in computing is already shifting back towards the core of the network and away from personal devices.
Companies like Google and Facebook have literally millions of powerful computers busily crunching data for you. In that context you don’t need a super powerful handset or laptop – you just need a screen and a decent connection to the internet. The size of the transistors on chips matters far less when the processing is done centrally and at such massive scale.
It’s probably silly to mourn the death of a law that was never anything more than a good prediction, but Moore’s Law has been a rallying cry for technologists for my entire life. For fifty years an entire industry defied gravity and transformed the world. The party is nearly over, but boy was it fun while it lasted.