South Korean professional Go player Lee Sedol is seen on TV screens during the Google DeepMind Challenge Match against Google’s artificial intelligence program, AlphaGo, at the Yongsan Electronic store in Seoul, South Korea, Wednesday, March 9, 2016. Google’s computer program AlphaGo defeated its human opponent, Lee, in the first game of their highly anticipated five-game match. (AP Photo/Ahn Young-joon)

Not all artificial intelligence is created equal.

The variant that has been on display in Seoul this week is of a more intriguing kind than the run-of-the-mill machine intelligence used in today’s online recommendation engines and customer support systems. If it can live up to the hype, it may bring a step change in a wide range of real-world applications — though history suggests that eye-catching breakthroughs in AI fail to deliver as much as hoped for at their moment of maximum prominence.

On Thursday, Google’s DeepMind subsidiary won its second game of Go against Lee Se-dol, world champion of the ancient board game, putting it on the brink of victory in a five-game series. DeepMind’s program, AlphaGo, had already turned heads in the AI world. Now, it is on track to notch up a landmark victory for silicon brainpower.

Publicity stunts that pit man against machine are nothing new. IBM set the pattern 19 years ago, when its Deep Blue chess-playing computer beat world champion Garry Kasparov. At the time, it seemed that a citadel of human intelligence had fallen to computer science. But Deep Blue was more a victory for powerful hardware than for the algorithms normally thought of as the basis of intelligence.

Computer chess programs had been making steady progress for years, using brute-force number-crunching to look ahead through as many possible future moves as time allowed and calculate the best one available. Thanks to the inexorable advance of Moore’s law — bringing exponential increases in computing capacity — it was almost inevitable that Deep Blue would crush the human competition in the end: it was just a matter of time.
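
For readers curious what that kind of brute-force look-ahead amounts to, the sketch below is a minimal, generic minimax search written in Python against a deliberately tiny stand-in game (a take-the-last-counter game, not chess, and certainly not Deep Blue’s code). It simply tries every legal move, then every reply, and so on, scoring the final positions and working back to the best first move.

class Nim:
    """Tiny stand-in game: players take 1-3 counters; taking the last counter wins."""
    def __init__(self, counters=7, maximiser_to_move=True):
        self.counters = counters
        self.maximiser_to_move = maximiser_to_move

    def is_over(self):
        return self.counters == 0

    def evaluate(self):
        # If the game is over, the player who just moved took the last counter and won.
        if self.counters == 0:
            return -1 if self.maximiser_to_move else 1
        return 0  # neutral score if the depth limit is reached mid-game

    def legal_moves(self):
        return [n for n in (1, 2, 3) if n <= self.counters]

    def play(self, move):
        return Nim(self.counters - move, not self.maximiser_to_move)


def minimax(game, depth, maximising):
    """Exhaustively search every move to `depth` plies; return (score, best move)."""
    if depth == 0 or game.is_over():
        return game.evaluate(), None
    best_score = float("-inf") if maximising else float("inf")
    best_move = None
    for move in game.legal_moves():
        score, _ = minimax(game.play(move), depth - 1, not maximising)
        if (maximising and score > best_score) or (not maximising and score < best_score):
            best_score, best_move = score, move
    return best_score, best_move


score, move = minimax(Nim(counters=7), depth=10, maximising=True)
print("best first move:", move, "predicted outcome:", score)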

Two decades later, the Deep Blue victory still reverberates, but it did little to advance the uses of AI. While the system could perform miracles in the narrow grid of a chessboard, those abilities did not translate to the messy, “unstructured” nature of real-world phenomena.

IBM tried an altogether different stunt in 2011, when Watson — a computer named after the company’s founder — took on the best human champions in the US TV quiz show Jeopardy!. This time, IBM had set itself the challenge of cracking the notoriously difficult task of “natural language processing” — understanding the meaning of language, even when it is veiled in puns and word games.

Watson’s success was a victory for engineering ingenuity. IBM had taken a collection of reasoning strategies known to researchers for years, and tuned them to create a system more supple in its handling of language than previously thought possible. It launched IBM’s most promising new business: the Watson division became the flagship of the company’s data analytics operation.

But while IBM has raced to apply the technology to real-world business problems, it has struggled so far to pull off the really difficult tasks it hoped were within its grasp.

AlphaGo, by contrast, represents a different class of technology altogether. Unlike chess, Go permits far too many possible moves for a computer to search exhaustively. As a result, the only approach a machine can take is to use pattern recognition to “understand” how a game is developing, then devise a strategy, and adapt it on the fly. A system must therefore rely on so-called deep learning — the technology behind the most startling recent advances in AI — applying networks of artificial neurons to sort through masses of data in the search for patterns and “meaning”.
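
As a loose illustration of what “networks of artificial neurons” means in practice, the Python snippet below (a toy sketch with made-up layer sizes and random, untrained weights, not AlphaGo’s actual architecture) encodes a small board as a row of numbers and pushes it through two layers of weighted connections to produce a probability for every point on the board. Real systems learn those weights from huge numbers of example positions rather than leaving them random.

import numpy as np

# Toy illustration only: arbitrary layer sizes and random, untrained weights.

rng = np.random.default_rng(0)

board = rng.choice([-1.0, 0.0, 1.0], size=81)   # a 9x9 board: -1 white, 0 empty, +1 black
W1 = rng.standard_normal((81, 128)) * 0.1       # first layer of "artificial neurons"
W2 = rng.standard_normal((128, 81)) * 0.1       # second layer, one output per board point

hidden = np.maximum(0.0, board @ W1)            # each neuron responds to patterns in the input
logits = hidden @ W2
probs = np.exp(logits - logits.max())
probs /= probs.sum()                            # softmax: a probability for every point

print("point this untrained toy network favours:", int(probs.argmax()))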

To teach its system, DeepMind set two Go-playing programs against each other, using a technique known as “reinforcement learning” to help the technology iterate and adapt. In competition, the two computers came up with strategies that neither on its own had learned.
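
The flavour of that self-play idea, and only the flavour, can be seen in the toy Python sketch below: two copies of a very simple agent play rock-paper-scissors against each other and nudge their move preferences towards whatever has just worked. This is a crude multiplicative-weights form of reinforcement learning that bears no resemblance to AlphaGo’s training pipeline, but it shows how a strategy can emerge from self-play rather than from human instruction.

import numpy as np

# Toy self-play reinforcement learning, illustrative only.
# Moves: 0 rock, 1 paper, 2 scissors. Each copy starts indifferent and
# strengthens moves that win against the other copy; the resulting mix
# of moves tends to hover around the balanced strategy neither was given.

rng = np.random.default_rng(1)

def payoff(a, b):
    """+1 if move a beats move b, -1 if it loses, 0 for a draw."""
    if a == b:
        return 0
    return 1 if (a, b) in {(0, 2), (1, 0), (2, 1)} else -1

prefs = [np.ones(3), np.ones(3)]   # unnormalised move preferences for the two copies
lr = 0.05                          # learning rate (an arbitrary choice)

for _ in range(5000):
    probs = [p / p.sum() for p in prefs]
    moves = [rng.choice(3, p=pr) for pr in probs]
    result = payoff(moves[0], moves[1])
    prefs[0][moves[0]] *= np.exp(lr * result)     # reinforce what worked...
    prefs[1][moves[1]] *= np.exp(lr * -result)    # ...for each copy separately
    for p in prefs:
        p /= p.sum()               # keep the numbers tidy; only the ratios matter

print("learned strategies:", [np.round(p, 2) for p in prefs])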

AI experts are hesitant about calling this the birth of a new intelligence, but suggest it represents something new in the evolution of computer learning.

Google’s goal for its AI research has been nothing less than a remaking of its core internet business: not just to present relevant information through its existing search engine, but to understand and anticipate its users’ needs and offer advice.

The technology could also be applied more broadly as Google’s parent, tech holding company Alphabet, reaches into new markets. Likely areas of interest include healthcare, where tackling the complexities of diagnostics and treatment planning would hold out the potential for a new era of personalised medicine. This has already become one of the main focuses for IBM’s Watson.

Quite how well Google can build on its board game success remains hard to judge. But Mr Lee has clearly been on the receiving end of a highly visible demonstration. Speaking to the Financial Times in advance of the contest, he was dismissive about the chance of a computer victory. At least hubris remains an unchallenged human capability.

richard.waters@ft.com
