Alan Cane: Brilliance is a matter of timing

Here is how the UK failed to lead the world in microchip development. In 1952, Geoffrey Dummer, an engineer working at the Royal Radar Establishment at Malvern in England, delivered a speech at a conference in Washington in which he described “electronic equipment in a block with no connecting wires. The block may consist of layers of insulating, conducting, rectifying and amplifying materials, the electrical functions being connected directly by cutting out sections of the various layers”.

He was describing an integrated circuit, a slice of silicon on which transistors, resistors and capacitors are inscribed using a combination of photographic and chemical processes. He was ahead of his time. The government failed to recognise the significance of his concept or to fund its development, and the field was left to the Americans.

Jack Kilby of Texas Instruments, who died last month, and the late Robert Noyce of Intel share the credit, deservedly, for the single most important contribution to the IT revolution: the development that made the miniaturisation of electronic devices possible.

Neither Kilby nor Noyce, who submitted their patent applications in July 1958 and January 1959 respectively, was aware of Dummer’s ideas. Kilby said later: “Dummer’s thinking was valid and I think that, had we known about it, it might have accelerated the timing of things.”

Dummer, Kilby and Noyce were engineers and the integrated circuit was an engineering solution to problems of size, cost, reliability and performance in electronic equipment. The science, the creation of the transistor and the explanation of its mechanics, had been carried out more than a decade before by Shockley, Bardeen and Brattain.

It takes nothing away from the pioneering brilliance of Dummer, Kilby and Noyce to say that it would have been only a matter of time before a bright engineer hit on the idea of integrating transistors and other components on a semiconductor material. After all, Paul Eisler, the Austrian refugee who invented the printed circuit board, had established the general principles some years earlier.

He eliminated the tangles of wiring by printing patterns on copper-covered boards using an acid-resistant ink. Immersion in acid dissolved the unwanted copper, leaving a pattern of electrical connections. He noted in his autobiography: “I am sure the basic concept of producing whole multi-layer circuits by integrating passive components with the conducting network...must have contributed in the mind of the American scientists to the creation of integrated circuits on a chip”.

Kilby and Noyce’s first microchips integrated a few components and were more expensive than the discrete circuits they replaced. Today, Intel’s Itanium 2 microprocessor integrates some 410m transistors at a cost of a fraction of a cent per transistor. The integrated circuit made possible small, cheap circuits that rarely failed, their performance limited by the speed at which signals moved along the wires connecting the chips together rather than by the speed of the central processor.

The number of transistors which can be inscribed on a chip has increased regularly over the past 40 years. This phenomenon is known as Moore’s Law, following an empirical observation by Gordon Moore of Intel that the number of components on a chip and computational capacity seemed to double every 18 months to two years at constant price. Moore’s Law continues to hold true today - more or less - and it is the key to the savage economics of the semiconductor industry. If a manufacturer is two months late with a chip which took three years to design, it is likely the product will be obsolete before it appears in the catalogue.
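
As a rough, back-of-envelope illustration of that compounding (the starting point of roughly 2,300 transistors on Intel's first microprocessor in 1971 is an assumption added here for scale, not a figure from the article):

```python
# Back-of-envelope sketch of Moore's Law compounding.
# Assumptions not taken from the article: a 1971 starting count of roughly
# 2,300 transistors (Intel's first microprocessor) and a doubling period
# of 18 months to two years.

def projected_transistors(start_count: float, years: float,
                          doubling_period_years: float) -> float:
    """Transistor count after compounding doublings over the given span."""
    return start_count * 2 ** (years / doubling_period_years)

for period in (1.5, 2.0):
    count = projected_transistors(2_300, 2005 - 1971, period)
    print(f"doubling every {period} years: about {count:,.0f} transistors")
```

On the slower two-year doubling, a few thousand transistors in 1971 compound to roughly 300m by 2005, the same order of magnitude as the Itanium 2 figure quoted above; the 18-month assumption overshoots into the tens of billions, which is one reason the law is usually quoted as a range.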

But how long will Moore’s Law continue to hold sway? There are already problems with heat and power consumption, which are pushing chipmakers away from ever faster single processors in favour of what are called “multi-core processors”, where computing operations are carried out by more than one processor on the same chip.
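
A minimal sketch of the multi-core idea, assuming a generic workload that splits into independent pieces (the function and its contents are illustrative, not drawn from the article): rather than one processor running ever faster, the pieces are farmed out across however many cores the chip provides.

```python
# Minimal multi-core sketch: spread independent pieces of work across a
# pool of worker processes, one per available core, instead of running
# them one after another on a single core. The workload is a stand-in.
from multiprocessing import Pool, cpu_count

def work(n: int) -> int:
    """Stand-in for one independent unit of computation."""
    return sum(i * i for i in range(n * 100_000))

if __name__ == "__main__":
    with Pool(processes=cpu_count()) as pool:
        results = pool.map(work, range(1, 9))
    print(results)
```

The gain comes only when the work really is independent; tasks that must run in strict sequence see little benefit, which is part of what makes the shift to multi-core a harder bargain than simply raising the clock speed.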

Sooner or later, Moore’s Law must fall foul of the laws of physics. In a recent interview, Moore speculated that it might have only another 10 or 20 years before it came up against the limits of molecular size. Some think him optimistic; others, such as Lawrence Krauss of Case Western Reserve University and Glenn Starkman of CERN in Switzerland, think him too unambitious. In a recent paper, Krauss and Starkman suggest: “Moore’s Law cannot continue unabated for more than 600 years for any technological civilisation.”
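
For a sense of the scale involved, a short calculation added here for illustration (it assumes a two-year doubling period and is not taken from Krauss and Starkman's paper): 600 years of doubling is 300 doublings, a growth factor of 2 to the power 300.

```python
# Added for scale, not from the Krauss-Starkman paper: the growth factor
# implied by 600 years of doubling, assuming a two-year doubling period.
doublings = 600 / 2                    # 300 doublings
factor = 2 ** doublings
print(f"growth factor: {factor:.2e}")  # roughly 2.04e+90
```

A factor of around 2 x 10^90 dwarfs commonly cited estimates of the number of atoms in the observable universe (of the order of 10^80), which gives some intuition for why exponential growth of that kind must eventually collide with physical limits.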

If it does continue, it promises some staggering developments within a few years: an acceleration towards the “technological singularity”, an expression used by the inventor Ray Kurzweil to indicate a rate of technological change so rapid that it will rupture the fabric of human history. It is a giant leap from that first simple circuit. Kilby used to tell the story of a rabbit and a beaver by the Hoover Dam. “Did you build that?” the rabbit asked. “No, but it was based on an idea of mine,” the beaver replied.
