July 13, 2014 5:01 pm

Superintelligence: Paths, Dangers, Strategies, by Nick Bostrom

In ‘Superintelligence’, Nick Bostrom argues that we need to endow robots with human values

Since the 1950s proponents of artificial intelligence have maintained that machines thinking like people lie just a couple of decades in the future. In Superintelligence – a thought-provoking look at the past, present and above all the future of AI – Nick Bostrom, founding director of Oxford University’s Future of Humanity Institute, starts off by mocking the futurists.

“Two decades is a sweet spot for prognosticators of radical change: near enough to be attention-grabbing and relevant, yet far enough to make it possible that a string of breakthroughs, currently only vaguely imaginable, might by then have occurred,” he writes. He notes, too, that 20 years may be close to the typical remaining duration of a forecaster’s career, limiting “the reputational risk of a bold decision”.

Yet his book is based on the premise that AI research will sooner or later produce a computer with a general intelligence (rather than a special capability such as playing chess) that matches the human brain. While the corporate old guard such as IBM has long been interested in the field, the new generation on the US West Coast is making strides. Among the leaders, Google offers PR-led glimpses into its work, from driverless cars to neural networks that learn to recognise faces as they search for images in millions of web pages.

Approaches to AI fall into two overlapping classes. One, based on neurobiology, aims to understand and emulate the workings of the human brain. The other, based on computer science, uses the inorganic architecture of electronics and appropriate software to produce intelligence, without worrying too much about how people think. Bostrom makes no judgment about which is more likely to succeed.

We are still far from real AI despite last month’s widely publicised “Turing test” stunt, in which a computer mimicked a 13-year-old boy with some success in a brief text conversation. About half the world’s AI specialists expect human-level machine intelligence to be achieved by 2040, according to recent surveys, and 90 per cent say it will arrive by 2075. Bostrom takes a cautious view of the timing but believes that, once made, human-level AI is likely to lead to a far higher level of “superintelligence” faster than most experts expect – and that its impact is likely to be either very good or very bad for humanity.

The book enters more original territory when discussing the emergence of superintelligence. The sci-fi scenario of intelligent machines taking over the world could become a reality very soon after their powers surpass the human brain, Bostrom argues. Machines could improve their own capabilities far faster than human computer scientists.

Superintelligence

Paths, Dangers, Strategies

By Nick Bostrom
(Oxford University Press, £18.99/$29.95)

“Machines have a number of fundamental advantages, which will give them overwhelming superiority,” he writes. “Biological humans, even if enhanced, will be outclassed.” He outlines various ways for AI to escape the physical bonds of the hardware in which it developed. For example, it might use its hacking superpower to take control of robotic manipulators and automated labs; or deploy its powers of social manipulation to persuade human collaborators to work for it. There might be a covert preparation stage in which microscopic entities, capable of replicating themselves through nanotechnology or biotechnology, are deployed worldwide at an extremely low concentration. Then, at a pre-set time, nanofactories producing nerve gas or target-seeking mosquito-like robots might spring forth (though, as Bostrom notes, a superintelligence could probably devise a more effective takeover plan than he can).

What would the world be like after the takeover? It would contain far more intricate and intelligent structures than anything we can imagine today – but would lack any type of being that is conscious or whose welfare has moral significance. “A society of economic miracles and technological awesomeness, with nobody there to benefit,” as Bostrom puts it. “A Disneyland without children.”

Bostrom’s writing, often clear and vivid, sometimes veers into opaque language that betrays his background as a philosophy professor. But there is no doubting the force of his arguments. While he does not claim that an existential catastrophe is an inevitable long-term consequence of AI research, he shows the risk is large enough for society to think now about ways to prevent it by endowing future AI with positive human values.

How to do so is far from clear; simply transferring human values into computer code is unlikely to work. The problem is a research challenge worthy of the next generation’s best mathematical talent. Human civilisation is at stake because machines thinking like people will not always lie two decades in the future – and when they arrive, we want them to nurture us, not destroy us.

The writer is the FT science editor

-------------------------------------------

Letters in response to this review:

Giving machines human values would be the wrong thing / From Mr Oliver Corlett

Computers can’t share human history / From Dr Hugh Goodacre

Humans can always pull the plug / From Mr Ray Soifer
