Truth be told, I was pretty unimpressed by Chimp the robot for at least the first half hour.

Chimp looks like a character from Transformers. At 5ft tall and weighing 443lb, it has a red metal shell, tank-like treads for feet and two arms, each ending in a three-pronged pincer that can lift the weight of a smallish man. But it took the robot several minutes to get out of a car and, when it tried to open a door, it fell over and broke the frame.

Chimp was competing for the $2m first prize in the final of a US government-sponsored competition held last month, in which robots had to complete eight tasks that mimicked the conditions at the Fukushima nuclear disaster, such as turning off a valve and cutting a hole in a wall with a drill. Many of the robots tumbled, sometimes comically. They moved so slowly that one of the organisers likened it to “watching paint dry”.

But, after a while, as I started to understand what was going on, I became mesmerised. There was no one standing beside Chimp with a joystick, manipulating the robot’s every movement. Instead, Chimp’s head and body are packed with cameras, sensors and processors that allow it to generate a 3D model of its environment, which it sends back to a control team. “If it is a task that is familiar, we can say ‘grab that drill or turn that valve’,” says Tony Stentz, the Carnegie Mellon University professor who runs Chimp. Or, to put it more bluntly, the robot was making many of the decisions itself.
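A rough sketch of that division of labour, in hypothetical Python rather than Carnegie Mellon’s actual software: the control team issues a high-level command, and the robot either executes it against its own 3D world model or, when it does not recognise the task, hands control back to its operators.

```python
# Hypothetical sketch of the supervised-autonomy loop Stentz describes:
# the operator names a task, the robot plans and executes it itself, and
# unfamiliar requests fall back to direct teleoperation. All names and
# numbers are invented for illustration; this is not Chimp's software.

KNOWN_TASKS = {"grab_drill", "turn_valve", "open_door"}

def execute(task: str, world_model: dict) -> str:
    """Run one operator command against the robot's 3D world model."""
    if task not in KNOWN_TASKS:
        # Unfamiliar situation: hand control back to the human team.
        return "teleoperation_requested"
    target = world_model.get(task)
    if target is None:
        # The perception stack never located the object in question.
        return "object_not_found"
    # A real system would invoke motion planning and grasping here;
    # this sketch just reports the decision the robot made on its own.
    return f"executing {task} at {target}"

# In practice the robot's cameras and lidar would populate this model.
world_model = {"turn_valve": (1.2, 0.4, 0.9)}  # x, y, z in metres (made up)
print(execute("turn_valve", world_model))      # -> executing turn_valve at (1.2, 0.4, 0.9)
print(execute("cut_wall", world_model))        # -> teleoperation_requested
```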

Science fiction has long fixated on autonomous robots that can think for themselves but researchers are now catching up with Hollywood fantasies. Self-driving cars are close to becoming reality and scientists believe smarter robots could revolutionise care for the elderly. And then there is another potential customer. The Robotics Challenge, the world’s largest competition for robots, was organised by Darpa — the Defense Advanced Research Projects Agency — the Pentagon’s blue-sky research group.

Audience members were invited to take control of Boston Dynamics’ robot Spot at the robotics event

Darpa has become one of the biggest backers of robotics research. The agency was set up in 1958 after the Soviets launched Sputnik to ensure the US military never again fell behind a rival and has been involved in creating the internet, the stealth bomber and the cruise missile. For some in the Pentagon, autonomous robots are the next revolution in military technology. After all, a robot that can repair a nuclear plant can also fire a weapon or launch a missile.

That is why the tentative steps in robot autonomy are so important. They represent the first chapter in a new wave of innovation that could allow the US military to maintain its technological edge. The Pentagon has dozens of different projects that introduce autonomy into its new generations of weapons, vehicles and surveillance systems.

Yet autonomous robots bring their own powerful ethical dilemmas. If machines are given guns, it opens profound moral and legal questions about war — about who is making the final decisions to kill and about the ease with which countries might opt for conflict. Even while autonomous weapons are still on the drawing board, a vigorous campaign has been launched to have them banned.

Peter Singer, Washington’s highest-profile military futurologist, has spent the past decade forecasting robotic war. “The real shift that matters is in their intelligence and their autonomy,” he says. “Robots will reorder how we think about wars and how we talk about war.”


Like so much of what the Pentagon does these days, the drive into robotics is really about China. Since the end of the cold war, the US has enjoyed an unprecedented period of superiority over its rivals through precision-guided missiles and other technologies developed to outsmart the Soviets.

Watching US firepower easily defeat Iraq in the 1990-91 Gulf war, Chinese military planners set about trying to neutralise these advantages. After two decades of extensive investments, China’s vast array of missiles could soon make it too dangerous for US aircraft carriers to operate in the Western Pacific. Seeing this potential threat, the Pentagon has decided it needs new tools.

Robotics has captured the imagination of the Pentagon’s top leadership. Last year, a Washington think-tank called the Center for a New American Security (CNAS) published a paper making the case for military robots. Future conflicts would see an “entirely new war-fighting regime in which unmanned and autonomous systems play central roles”, the paper said. “US defence leaders should begin to prepare now for this not so distant future — for war in the Robotic Age”. Robert Work, a co-author, now has a new job: deputy secretary of defence.

Robots have two powerful attractions for the US military. The first is their capacity for taking risks. Robots would be able to undertake more daring tasks, whether bombing raids on heavily fortified sites or surveillance of an enemy position, without putting the lives of service members in danger.

The second is more mundane: cost. Although its budget is still four times that of its nearest rival, China, the Pentagon faces intense financial pressures. New weapons programmes, such as the F-35 fighter jet, are bleeding the budget. After more than a decade of fighting wars in Afghanistan and Iraq, personnel expenses have also exploded. Robots can allow the military to do more with a reduced headcount. Last year Robert Cone, the then commanding general of US army training, speculated that a brigade combat team would drop from 4,000 people to 3,000 over the next 15 years, with robots taking up much of the slack.

The Pentagon is also just following the science. Advances in computing power have helped robots to develop greater perception of their surroundings, allowing them to deal with complex images. At the same time, many of the same innovations behind the smartphone revolution — smaller and more powerful sensors, microchips, cameras — have boosted robotics.

“As a colleague of mine likes to say, robots are assholes,” says Jerry Pratt, at the Institute for Human and Machine Cognition in Pensacola, Florida, whose team came second at the Robotics Challenge. “We have an unwritten rule in our lab — if something works for the first time at the end of the day, then walk away, because if you try to get it to do it again or to improve, it will not work.” But Pratt believes the field is finally beginning to achieve its potential. “I have been doing humanoid robots for 20 years and they have been pretty much a joke until now,” he says. “But they are now a reality. They are less science fiction and more science fact.”

Autonomous robots are already creeping into everyday life. Most cars have air bags, while luxury vehicles offer crash-avoidance and self-parking capabilities. The same is true in military technology. The US Air Force deploys a long-range air-to-surface missile that has its own autonomous navigation system, while missile defence systems automatically detect and fire at incoming artillery. South Korea’s Samsung has developed an automated gun tower for the demilitarised zone with North Korea that can sense and fire at a human target within 2.2km in the dark.

In 2013, the US navy tested a drone that was able to land by itself on the USS George HW Bush. Taking off from and landing on an aircraft carrier is one of the hardest feats in aviation — hundreds of pilots and sailors died in the 1940s and 1950s as the US learnt how to use carriers. But soon machines will perform the task more efficiently than humans. The Israeli anti-radar drone known as Harpy can loiter over potential sites for hours and attack as soon as it detects an enemy radar being switched on. The British defence company BAE Systems is developing a drone called Taranis with software that can select its own targets.

“The future of war is robotic because the present of war is robotic,” says Singer. The very language used to describe the new technologies indicates their infancy, he argues. “We call these ‘unmanned’ vehicles, just as people referred to cars as horseless carriages when they were first developed.”

A humanoid robot that marches into war is still something that is years if not decades away. Instead, there is a ripple effect as new advances in robotics are gradually applied to unmanned planes, ships and vehicles.

For the army, that could mean robots acting as scouts surveying the battlefield ahead or as decoys that distract attention. Carnegie Mellon, the university behind Chimp, has army backing for a project to build robotic snakes, with cameras attached, that could crawl close to enemy targets. Darpa has another project to produce drones small enough to fly through a window. One potential model is a drone shaped like a goshawk that can fly at 45mph. Other projects are even smaller. The army research laboratory is working on an insect-shaped drone, while Harvard researchers, backed by Darpa, are developing “RoboBee”, which has a 3cm wingspan — both of which have spawned a sub-genre of jokes about “fly-on-the-wall” surveillance.

Paul Scharre, who worked on autonomous weapons at the Pentagon, argues that robots could be used to draw out the enemy or to conduct suicide missions that degrade fortified defences. “Robots have the potential to change how we think about ground warfare,” says Scharre, now at CNAS. Darpa is working with the navy to develop an unmanned vessel (a “drone ship”, as it is sometimes called) that would navigate itself for weeks at a time and could detect submarines or hunt for mines. The navy has been working for a number of years on unmanned submarines, principally for surveillance, but which could be armed with torpedoes.

In some ways, the most important breakthrough for the military would be to develop systems that allow one person to operate a large number of machines. So far this has been shown to be possible only with small numbers of drones, but researchers at Harvard have experimented with Kilobot, a large autonomous swarm of a thousand identical mini-robots that performs simple tasks en masse.
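A toy simulation gives a flavour of how such a swarm works (this is not the Harvard code; the rule and all the numbers are invented): each robot repeatedly takes a small step toward the average position of the neighbours within its sensing radius, and clusters emerge with no central controller.

```python
import random

# Toy swarm sketch inspired by (but not taken from) the Kilobot work:
# every robot follows the same local rule, stepping toward the average
# position of the neighbours it can sense. No robot sees the whole
# swarm, yet the group clumps together. Scaled down to 200 robots so
# the pure-Python simulation stays quick; all numbers are invented.

N, RADIUS, STEP, TICKS = 200, 15.0, 0.1, 50
random.seed(0)
bots = [(random.uniform(0, 100), random.uniform(0, 100)) for _ in range(N)]

def tick(bots):
    moved = []
    for x, y in bots:
        nbrs = [(a, b) for a, b in bots
                if (a - x) ** 2 + (b - y) ** 2 < RADIUS ** 2]
        cx = sum(a for a, _ in nbrs) / len(nbrs)  # neighbours include self,
        cy = sum(b for _, b in nbrs) / len(nbrs)  # so the list is never empty
        moved.append((x + STEP * (cx - x), y + STEP * (cy - y)))
    return moved

for _ in range(TICKS):
    bots = tick(bots)

xs = [x for x, _ in bots]
print(f"x-spread after {TICKS} ticks: {max(xs) - min(xs):.1f}")  # contracts as clusters form
```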

Multiple robots controlled by one operator would generate huge potential savings: at the moment, each drone requires a substantial support team. But it would also start to open up a different way of thinking about military technology and conflict. Rather than sending in aircraft carriers (each costing $12bn) and ever-more expensive fighter jets, the Pentagon could develop strategies that involve swarms of low-cost, expendable drones that overwhelm an adversary’s defences. Cheap and mass-produced would replace the current model of pricey and scarce. “That would be the real paradigm shift,” says Scharre.


Of course, there is another weapon that is simple, cheap and autonomous: the landmine. There is no human controller triggering a landmine. Instead, it decides by itself when to explode, whether stepped on by a commando launching a raid or by a three-year-old out for a stroll. In the 1990s, the American political scientist Jody Williams helped launch an international campaign against landmines that eventually led to a treaty, signed by 162 countries, banning the use, development or stockpiling of anti-personnel mines. Although the US, China and Russia have never signed the treaty, it was one of the most successful grassroots campaigns ever to change the conduct of war. Williams won a Nobel Peace Prize for her efforts.

Now she is the figurehead for a new movement: the Campaign to Stop Killer Robots. With the help of 19 fellow Nobel laureates, she wants to ban fully autonomous weapons before they become a reality. “I find the very idea of killer robots more terrifying than nukes,” she said earlier this year at an international meeting in Geneva to discuss the issue. “Where is humanity going if some people think it’s OK to cede the power of life and death of humans over to a machine?”

On one level, campaigners present a series of practical objections to robot weapons. Machines would not be able to distinguish between civilians and combatants, they could not judge the proportionality of a military response and unpredictable behaviour could lead to accidents. In the event of an atrocity involving a robot, they ask, who would actually be held responsible? “We are worried that international law is not robust enough for these systems,” says Mary Wareham of Human Rights Watch, a co-ordinator of the killer robots campaign.

Beyond this, there is a broader moral critique of autonomous weapons. Campaigners believe it is simply unethical to hand over decisions involving life and death to robots. Their worries feed into a broader angst about artificial intelligence and the potential for computers to outsmart humans, which troubles even some of the world’s most prominent scientists. This year, Stephen Hawking and Tesla founder Elon Musk were among a group of well-known figures who signed an open letter cautioning fellow researchers to be wary of the “potential pitfalls” in the advance of artificial intelligence. Hawking went further, telling the BBC: “The development of full artificial intelligence could spell the end of the human race.”

Critics also worry military robots will make it easier for governments to go to war. Politicians once had to worry about the impact of sending someone else’s son or daughter to die in a conflict. But as humans become more detached from the battlefield, that sense of political risk starts to erode. The Obama administration’s use of drones, which have been deployed to kill terrorist suspects in a number of countries with little public discussion, is a forerunner of the way governments might treat conflict in a robotic era.

The campaign against killer robots has made some progress. The matter has been taken up at the United Nations, where the Convention on Certain Conventional Weapons has been debating autonomous weapons for the past two years. The US government has already tried to anticipate some of the potential criticisms, with the Pentagon issuing a directive in late 2012 insisting that, for the foreseeable future, military commanders would exercise “appropriate levels of human judgment over the use of force”.

Robotics scientists are sensitive to some of these criticisms. Few are thrilled that the Pentagon is one of the biggest backers of their research and that their innovations could have widespread military applications. Many would welcome clear rules that set limits on robots in war.

However, many also think that the debate confuses the issue by exaggerating the extent of robot autonomy. In practice, they say, robots are never completely free of human interaction. Stentz at Carnegie Mellon talks about “sliding autonomy”, a gradual scale where robots have more autonomy in situations that they understand well and where there are few risks. When there is more uncertainty, though, humans would remain in control.
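A minimal sketch of that sliding scale, with invented thresholds and category names rather than anything taken from Stentz’s work: autonomy is granted as a function of how confident the machine is in its reading of the situation and how costly a mistake would be.

```python
# Hypothetical 'sliding autonomy' policy: the machine earns more
# authority as its confidence in its model of the situation rises,
# and loses it as the stakes rise. Thresholds are illustrative only.

def autonomy_level(confidence: float, risk: float) -> str:
    if confidence > 0.9 and risk < 0.2:
        return "autonomous"         # routine, well understood, low stakes
    if confidence > 0.6 and risk < 0.5:
        return "human_on_the_loop"  # robot acts; a human can veto
    return "human_in_control"       # uncertain or high stakes: teleoperate

print(autonomy_level(0.95, 0.1))  # autonomous
print(autonomy_level(0.70, 0.3))  # human_on_the_loop
print(autonomy_level(0.40, 0.9))  # human_in_control
```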

The economist Tyler Cowen, in his 2013 book Average is Over, argues that while robots excel at chess, the very best players are teams of humans and machines working together. According to many robotics researchers, the same will go for machines in conflict, where human judgment about the morality or political wisdom of an action will always complement the greater fighting capability that a machine might offer.

Researchers also argue that the more dystopian predictions about machines and war underestimate the affinity between humans and robots.

Rodney Brooks was one of the co-founders of iRobot, the company that makes the PackBot robots the US military uses to detect unexploded mines. He says soldiers would “come back from Iraq or Afghanistan with PackBots that are totally damaged, and they would want them fixed rather than being issued with a new one”.

Aside from the legal and ethical challenges that robot soldiers raise, the Pentagon faces another major obstacle: getting its hands on the new technologies. Many of the weapons that gave the US an edge over the Soviets were forged within the military-industrial complex. Yet the new wave of innovation is taking place in the civilian world.

That means the Pentagon is going to have to reinvent the way it does business, learning to work much more closely with Silicon Valley-style venture capital firms and start-ups. Department of defence contracts typically involve heavy red tape that puts off most companies outside the defence sector. The culture gap is also large. Even before the Edward Snowden leaks in 2013, the Pentagon faced mistrust among technology companies, which has been aggravated by the revelations about National Security Agency spying on their networks.

“I don’t think we have enough competent people within the government to be able to set up acquisition programmes for autonomous weapons or anything robotic,” says Mary Cummings, a former fighter pilot who is now a leading drones researcher at Duke University.

Hydra, a robot built by Team NEDO-Hydra in Japan

The commercial market for these new technologies is global rather than exclusively American. Around 70 countries are doing research into unmanned vehicles. Darpa’s robotics competition underlined the point. The Carnegie Mellon Chimp team led after the first day but ended up coming third: the winner was the Korea Advanced Institute of Science and Technology, known as Kaist. Much of the cutting-edge research in an area such as robotics is open source, which means that every military around the world — including China — has access to the same innovations.

Scharre says this represents a return to the environment of the first and second world wars, when the basic technology for tanks, planes and aircraft carriers was commercially available to most wealthy countries. “The winners in that era were not people who invented something in a military lab but they were the people who figured out how best to deploy them,” he says.

The US still enjoys a central role in these technology networks. The connections between the private sector, finance and academia, together with the country’s capacity to attract the best and brightest science students from around the world, give the US huge advantages. Yet the Darpa Robotics Challenge demonstrated the new reality that the Pentagon also needs to adapt to. The US can play the role of convenor, bringing the smartest people together and driving the conversation. But it will not enjoy a monopoly.

Geoff Dyer is the FT’s US diplomatic correspondent.

Photographs: Ramak Fazel

Copyright The Financial Times Limited 2017. All rights reserved.