Artificial intelligence is one of the most important technological advances of the early 21st century. Already it has enabled machines to read medical images as well as a radiologist can, and the auto industry to develop autonomous cars.
The technology is in danger of being overrated, however, and considerably more work is needed before we can reach the long-dreamt-of moment when machine intelligence matches the human variety.
When we discuss AI today we are mainly referring to just one facet of it: deep learning. This technology has its limitations, says Dave Ferrucci, an AI expert formerly at IBM. The Watson project he led there fuelled the rise of interest in cognitive systems when, seven years ago, it beat the best human players at Jeopardy, the US television quiz show.
However, Mr Ferrucci, co-founder and chief executive of Elemental Cognition, stresses that deep learning is simply a statistical technique for finding patterns in large amounts of data. It has predictive value but no true understanding in the sense that a human does. Having a computer simply spew out an answer “is not sufficient in the long term,” he says. “You want to say: ‘Here’s why.’”
The case against deep learning was put forcefully at the start of this year in a paper by Gary Marcus, a psychology professor at New York University and a persistent sceptic. His list of complaints extends from its heavy reliance on large data sets to its susceptibility to machine bias and its inability to handle abstract reasoning. Mr Marcus’s conclusion was that “one of the biggest risks in the current overhyping of AI is another AI winter”.
He was referring to the period in the 1970s when over-optimism about the technology gave way to deep disillusionment. If one of the key hopes for deep learning, like autonomous driving, turns out to be misplaced, then “the whole field of AI could be in for a sharp downturn, both in popularity and funding”.
Some AI experts outside the deep learning mainstream agree that it is important to question the current orthodoxy.
“Given the excitement and investment in deep learning, it’s important to analyse it and consider [its] limitations,” says Oren Etzioni, chief executive of the Allen Institute for Artificial Intelligence. Referring to recent warnings about the threat to humanity from an all-powerful AI, he says: “If we have Elon Musk and [Oxford university’s] Nick Bostrom talking about ‘superintelligence’, we need [sceptics like] Gary Marcus to provide a reality check.”
50 ideas to change the world
We asked readers, researchers and FT journalists to submit ideas with the potential to change the world. A panel of judges selected the 50 ideas worth looking at in more detail. This third tranche of 10 ideas (listed below) is about new ways to handle information and education. The next 10 ideas, looking at advances in healthcare, will be published on March 5, 2018.
Deep learning is a statistical approach using so-called “neural networks”, which are based on a theory of how the human brain works. Information passes through layers of artificial neurons, connections between which are adjusted until the desired result emerges. The main technique, called supervised learning, involves feeding in a series of inputs to train the system until the right output is obtained: pictures of cats, for instance, should eventually result in the word “cat”.
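The mechanics described here can be sketched with the smallest possible case: a single artificial neuron whose connection weights are adjusted, pass after pass, until the desired output emerges. This is an illustrative toy, not any system mentioned in the article; the data, labels and learning rate are invented for the example.

```python
# Supervised learning in miniature: one artificial neuron (logistic
# regression) trained by gradient descent on invented labelled data.
# Deep learning stacks many layers of such units, but the principle of
# adjusting weights until the right output emerges is the same.
import numpy as np

rng = np.random.default_rng(0)

# Toy labelled data: two features per example, label 1 when their sum > 1.
X = rng.uniform(0, 1, size=(200, 2))
y = (X.sum(axis=1) > 1.0).astype(float)

w = np.zeros(2)   # connection weights, to be adjusted during training
b = 0.0
lr = 0.5

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(2000):                  # repeated passes over the data
    p = sigmoid(X @ w + b)             # forward pass: current predictions
    grad_w = X.T @ (p - y) / len(y)    # how the error changes per weight
    grad_b = (p - y).mean()
    w -= lr * grad_w                   # nudge weights toward the target
    b -= lr * grad_b

preds = (sigmoid(X @ w + b) > 0.5).astype(float)
accuracy = (preds == y).mean()
print(f"training accuracy: {accuracy:.2f}")
```

The same loop, scaled up to millions of weights and images instead of number pairs, is what turns pictures of cats into the word “cat”.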
The approach has inherent limits. Andrew Ng, an adjunct professor at Stanford University and one of the founders of Google Brain, the search company’s deep learning project, says the system works for problems where a clear input can be mapped on to a clear output. This means it is best suited to a class of problems involving categorisation.
The applications of this kind of system are broad. The potential of neural networks first came to wide attention in 2012, when one system came close to matching human-level perception in recognising images. The technique has also brought big leaps in speech recognition and language translation, allowing machines to start doing jobs that were once the preserve of human workers.
However, neural networks can be fooled. Mr Marcus points to research showing how an image-recognition network was tricked into classifying a picture of a turtle as a rifle. Skewed training data can also lead to machine bias.
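Why such tricks work can be shown with a deliberately tiny stand-in for a real network: a linear classifier with made-up weights. Real adversarial attacks perturb images fed to deep networks, but the geometry, nudging an input a small step across the model's decision boundary, is the same; all numbers and labels below are invented for illustration.

```python
# Sketch of fooling a trained classifier: shift an input just far enough
# along the model's weight direction that the predicted label flips,
# even though the input itself barely changes.
import numpy as np

w = np.array([2.0, -1.0])   # weights of a hypothetical trained classifier
b = -0.5

def predict(x):
    return int(x @ w + b > 0)

x = np.array([0.6, 0.4])    # original input: score 0.3, classified as 1

# Adversarial step: move against the weight vector just past the
# decision boundary (score = 0).
margin = x @ w + b
x_adv = x - (margin + 1e-3) * w / (w @ w)

print(predict(x), predict(x_adv))   # label flips from 1 to 0
print(np.abs(x_adv - x).max())      # yet no feature moved by more than ~0.12
```

In a deep network the boundary is curved rather than flat, but gradient-based attacks find the same kind of short path across it.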
The more fundamental case against deep learning is that the technology cannot deal with many of the problems that humans will want computers to handle. It has no capacity for things the human mind does easily, such as abstraction and inference, which make it possible for us to “understand” from very little information, or to apply an insight instantly to another set of circumstances.
“A huge problem on the horizon is endowing AI programs with common sense,” says Mr Etzioni. “Even little kids have it, but no deep learning program does.”
Recent research offers hope that at least some of the limitations of deep learning can be overcome. These developments include transfer learning, where an algorithm trained on one set of data is applied to a different problem, and unsupervised learning, where a system learns without needing any “labelled” data to teach it.
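The transfer idea can be sketched in miniature: a model trained on one problem lends its learned representation to a related problem that has almost no labelled data. The toy below is an invented illustration, not any published system; real transfer learning reuses the layers of a deep network in the same spirit.

```python
# Minimal sketch of transfer learning with invented data: a linear model
# is trained on task A (plenty of labels), then its learned weight
# direction is reused as a single feature for task B, which has only
# ten labelled examples.
import numpy as np

rng = np.random.default_rng(1)

def train_logreg(X, y, epochs=2000, lr=0.5):
    # Plain gradient-descent logistic regression.
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
        w -= lr * X.T @ (p - y) / len(y)
        b -= lr * (p - y).mean()
    return w, b

# Task A: plentiful labelled data (label 1 when x1 + x2 > 1.0).
XA = rng.uniform(0, 1, (500, 2))
yA = (XA.sum(axis=1) > 1.0).astype(float)
wA, _ = train_logreg(XA, yA)

# Task B: a related problem (boundary shifted to 1.2), ten examples only.
XB = np.array([[0.9, 0.9], [0.8, 0.7], [0.7, 0.8], [0.9, 0.5], [0.95, 0.6],
               [0.3, 0.2], [0.5, 0.4], [0.6, 0.5], [0.2, 0.8], [0.4, 0.3]])
yB = (XB.sum(axis=1) > 1.2).astype(float)

# Transfer: project task B's data onto task A's learned direction and
# fit only a scalar threshold, one parameter instead of a whole model.
sB = XB @ wA
thresh = (sB[yB == 1].mean() + sB[yB == 0].mean()) / 2

# Evaluate on fresh task-B data.
Xt = rng.uniform(0, 1, (500, 2))
yt = (Xt.sum(axis=1) > 1.2).astype(float)
acc = ((Xt @ wA > thresh).astype(float) == yt).mean()
print(f"task-B accuracy with transferred feature: {acc:.2f}")
```

Ten examples would be far too few to train a model from scratch reliably; borrowing task A's representation makes them enough.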
What we need are systems that can master a number of different forms of intelligence, says Mr Ferrucci. What humans think of as “cognition” actually encompasses a number of different techniques, each suited to a different type of problem, he says. It will take similarly hybrid systems to achieve that human kind of understanding.
Like Mr Etzioni, Mr Ferrucci suggests that this will require advances in other approaches to AI that are at risk of being sidelined by the fervour for deep learning. “We need to shift from narrow ‘AI savants’ that tackle a single problem, to broader AI that can tackle multiple tasks without requiring massive data sets for each,” says Mr Etzioni. “The last 50 years of AI research have yielded many insights that can help.”