The US military’s use of artificial intelligence and advanced robotics will not include creating Terminator-style robots, the Pentagon’s second-in-command has said, as concerns increase over the role AI should play in modern warfare.
Military planners were looking at ways for machines to help humans make quicker decisions on the battlefield, said Robert Work, deputy secretary of defence.
“We will use artificial intelligence in the sense that it makes human decisions better,” Mr Work said. “Human-machine collaboration will give humans better information upon which to help make decisions.”
The Pentagon’s multibillion-dollar investments in high-tech weapons have put it at the centre of a global debate about the use of AI and autonomous robots in warfare, with technologist Elon Musk warning that AI was “potentially more dangerous than nukes”.
Last year more than 1,000 of the biggest names in science and technology — including cosmologist Stephen Hawking and Mr Musk — signed an open letter calling for a global ban on “killer robots”, amid concerns that such weapons could trigger an international arms race.
Mr Work is leading the Pentagon’s push into fields such as AI and robotics, which the US military hopes will maintain its technological edge over China and Russia for another generation. “We need to up our strategic game in an era of great power competition,” he said.
At a national security forum last December, Mr Work voiced US concerns over the speed of developments in artificial intelligence by China and Russia, saying the Russian army was “preparing to fight on a roboticised battlefield”.
The US’s investments include a range of unmanned aircraft, ships and submarines that will have an increasing level of autonomy. The Pentagon is also looking at supercomputers that can absorb vast quantities of data to extract intelligence and monitor potential adversaries.
“The thing that people like Elon Musk are most worried about is a machine that gets smart enough to rewrite its own code,” Mr Work said. “We are way far away from that.”
General Paul Selva, deputy chairman of the joint chiefs of staff, said there needed to be a “firebreak” between a machine that can assess massive quantities of data to help make a targeting decision and one that decides on the use of force.
“As a human, it makes me very uncomfortable that we might make a machine that can make a decision about taking lethal action against anyone without understanding who programmed the consciousness of the machine,” he said. “That is a debate we need to have about the use of advanced robotics.”
Mr Work said the only likely uses of completely autonomous machines were in defence, such as missile batteries programmed to respond to incoming missiles and computer programmes that react to signs of a cyber attack.
“Our vision of our battle network is where the human will always be the one who makes the final decision on lethal action, with the possible exception of some defensive capabilities,” he said.
More powerful computers could allow the Pentagon to detect small movements of troops or weapons by an adversary. “By looking at all the social media and what people are reporting, [machines could] give us warning of a little green men problem,” Mr Work said.
One of the dilemmas the US military faces with its new high-tech weapons is how to establish deterrence with capabilities that are largely secret and whose impact might be lost if publicised. The Pentagon has started to lift the veil on a few of its capabilities, talking openly about conducting cyber war operations against Isis and showing some of its submarine drones.
“We are going to have to make deliberate choices to withhold the most important capabilities,” said Gen Selva. “But we will have to make a conscious choice to demonstrate things that clearly signal to our adversaries that we have a conventional advantage.”