Clear-skies thinking: the prototype Taranis stealth drone is being designed by BAE Systems and expected to be in use from 2030 © BAE

Artificial intelligences have long figured in visions of future conflict: from the Golem of medieval Jewish tradition to the robotic Martian handling machines of HG Wells’s The War of the Worlds and beyond. Even our darkest cold war visions had technology at their heart: the world ends in Stanley Kubrick’s Dr Strangelove because a machine wills it.

The past few years, however, have seen a flowering of revolutionary technologies that make many of these fantasies a near-achievable reality: robotics and AI are now at the forefront of the military technologists’ art.

Military systems used in current conflicts around the world already depend on increasingly automated processes to run. US Aegis missile cruisers have automated targeting for their anti-aircraft and anti-missile systems. In cyberspace, automation is essential to the way sensitive networks defend themselves against a huge onslaught of attacks, because humans simply cannot decide quickly enough what is friendly and what is threatening.

On the drawing board, AI is even a defining feature of many platforms. The prototype Taranis stealth drone, being designed by BAE Systems and expected to be operational by 2030, will mostly run autonomously.

Indeed, across the range of offensive military platforms in development, computer programs that automatically identify and select targets for their human operators are becoming ubiquitous. So far, at least, western militaries have adhered to the principle that a human being must always be kept “in the loop” for any lethal action.

The next US president will face an early decision on just how much America’s future arsenal will depend on AI and robotics. President Barack Obama signed a directive on the research, development and use of autonomous weapons systems in 2012. Conscious of the moral questions such action might raise, the president inserted a five-year “sunset clause” into the order, meaning it needs an executive decision if such projects are to continue. What is decided in 2017 could determine how the world’s most powerful nation wages its wars over the next few decades.

The prospect of robotics in warfare has created anxiety, as scientific development often runs ahead of our ethical and moral consciousness. The implications of Terminator-style robots on the battlefield, or drones deciding by themselves who to blast from the skies, are still not discussed with great seriousness in public. Groups such as the International Committee for Robot Arms Control, Human Rights Watch and the Campaign to Stop Killer Robots are actively trying to change that. They advocate an outright ban on the development of autonomous weapons systems (AWS).

The US, at least, is unlikely to acquiesce to such a measure. Robotics has already been defined by the Pentagon as the future foundation of US military dominance. It is the key component of what the department of defence’s strategists call “the third offset” — each offset being a groundbreaking technology that the US has explored and used to dominate warfare for years before adversaries were able to adapt or catch up. The first offset was nuclear weapons; the second, precision-guided munitions.

“The US and all western states are facing a challenge of increasing costs both of platforms and military personnel,” says Elizabeth Quintana, senior research fellow and director of military sciences at the UK’s Royal United Services Institute, a think-tank. “This is leading to ever more sophisticated platforms, but in ever decreasing numbers. But mass does have a quality all of its own — the third offset strategy proposes to use robotic platforms in support of larger, traditional platforms to overcome this challenge.”

Robot platforms will become essential, US strategists believe, in helping to protect valuable assets such as aircraft carriers from cheap technologies that adversaries are developing. To counter hypersonic Chinese missiles, or the swarms of small explosive boats being developed by the Iranian navy, forces such as the US military will have no option but to turn to robotics and AI.

“The necessity of AI is not that it replaces human agency in our military systems, but the opposite,” says one senior British naval officer.

“We need robots and AI more and more so that we can keep fielding our highest-value assets, which are human, without putting them in greater and greater danger.”

AI is also attractive for large, well-funded militaries precisely because of its expense. That is the essence of the third offset strategy. The technological cost of developing sophisticated AI puts such weapons beyond the reach of challenger countries, whose adoption of asymmetric warfare and cheap technologies such as drones and cyber weapons has begun to level the playing field with even the best-equipped opponents.

While fully autonomous weapons may still be some years away, the ethical concerns and risks they pose are real. Proponents argue that AI will bring greater accuracy to conflict, and with it fewer civilian deaths and war crimes. Opponents counter that nothing goes wrong by design: accidents are inevitable, and fielding robots in war with even partial elements of autonomy may change the nature of conflict.

Only recently, for example, have military chiefs taken seriously the effect on local opinion of the US drone campaign against al-Qaeda’s leadership in Pakistan’s Federally Administered Tribal Areas, under way since 2004.

Arbitrary justice meted out from the sky without warning, sometimes killing innocents, has undoubtedly had a deep psychological effect on many, and one that could, in future years, prove damaging to US interests.

As Russia’s invasion of Ukraine — and the huge information warfare campaign that accompanied it — has shown, the wars of the future will be as much about controlling messages and mounting psychological operations to influence what military technologists call the “human terrain” as they will be about costly, technically sophisticated lethal platforms.

How robotics and AI will fit into that dynamic remains to be seen.

“AI is just a tool,” says Ms Quintana. “Humans wage war. As we have seen in Syria, manned platforms can be used in a very precise or considered manner, or to commit war crimes using unguided munitions and barrel bombs. The tool is not the problem. It is how it is trained.”

Copyright The Financial Times Limited 2017. All rights reserved.