Wired for War: The Robotics Revolution and Conflict in the 21st Century
By PW Singer
Penguin Press $29.95 512 pages
FT Bookshop price: £15.99
War Bots: How US Military Robots Are Transforming War in Iraq, Afghanistan, and the Future
By David Axe
Nimble Books $28.36 88 pages
Moral Machines: Teaching Robots Right from Wrong
By Wendell Wallach and Colin Allen
OUP £15.99 288 pages
Trooper Talon doesn’t get tired or hungry. He doesn’t get scared and he doesn’t panic under fire. He fights on even when, all around him, his comrades are falling. He never forgets his orders, never gets distracted, never even blinks. Unfortunately for the rest of his platoon, he has one flaw: after eight hours in the field, his batteries run out.
Talon is a robot. He is the future of warfare and, with more than 12,000 robotic machines already deployed in Iraq, he is also the present. These machines range from the briefcase-sized PackBot that can scope a house for potential enemies, to the 35m wingspan Global Hawk spy-plane that can survey half of Iraq in one flight. They are doing some of the difficult, dull and dangerous jobs that once cost soldiers’ lives. And since 2002, when a Predator drone assassinated al-Qaeda leader Abu Ali al-Harithi, they are also doing the killing.
While our destructive power is launching into this science-fiction future, however, our principles are stuck in the trenches. There is no precedent for an android to stand in the dock for war crimes. And the Geneva Conventions don’t tell us who to blame when an automaton makes a lethal error, such as when US Patriot missile batteries shot down two allied aircraft in Iraq in 2003, killing two Britons and one American.
We are in the midst of a revolution in the way we wage war, as profound as the discovery of gunpowder or the building of the atomic bomb. Yet most of us hardly know it’s happening – and our legal and moral frameworks are entirely unprepared. But a few people have noticed: three fascinating and timely new books detail these developments and the issues they raise.
The American invasion of Afghanistan in 2001 was the first war in which “many of the forces still rode to battle on horses, and yet robotic drones were flying above,” explains PW Singer, senior fellow at US thinktank the Brookings Institution.
Talon, an all-purpose robot that looks like a dentist’s lamp on caterpillar tracks, was first deployed there for reconnaissance missions – dangerous work that had been done by local allies until, as one soldier told Singer: “We began to run out of Afghans.” Talons were soon also assigned to dispose of the roadside bombs that cost the lives of so many allied soldiers. They proved such a success that by 2008 there were 2,000 in the field, and manufacturer Foster-Miller secured a $400m contract to double that number.
Talon impressed the US Army so much that they cloned him to make his evil twin. Built on the same chassis, Swords can carry a selection of lethal weapons, from assault rifles to grenade launchers. His makers boast that in target practice, “The robot hit the bulls-eye of the target 70 out of 70 tries.” However, though sent to Iraq in 2007, Swords have yet to be used in combat because, writes journalist David Axe in War Bots, “They had a tendency to spin out of control.” But Swords have already been upgraded: expect to see their more stable successor, Maars, in an urban war-zone near you soon.
Axe’s War Bots is a slim, introductory volume. Light on text, its primary virtue is the full-colour pictures showing the droids in action. PW Singer, on the other hand, has written what is likely to be the definitive work on this subject for some time to come. He has a record of drawing out the underlying trends in modern warfare, with previous books on child soldiers and the increasing use of mercenaries. Wired for War will confirm his reputation: it is riveting and comprehensive, encompassing every aspect of the rise of military robotics, from the historical to the ethical.
While writing it, Singer was also co-ordinating the Obama presidential campaign’s defence policy taskforce. So perhaps it is no coincidence that the new US President has already announced his intention to see “greater investment in advanced technology, ranging from the revolutionary, like Unmanned Aerial Vehicles” to “electronic warfare capabilities.” Enormous sums are being invested – $230bn in the US Army’s Future Combat Systems programme alone. Clearly the warbot business will continue to boom.
The logic of moving to unmanned systems is compelling, as Singer makes clear. First, they are saving soldiers’ lives. He describes how the robot-makers’ offices are covered with thank-you letters from soldiers with messages such as: “This little guy saved our butts.” Second, they should also save civilian lives – unlike a hot-headed human trooper, robots don’t panic, don’t get greedy, and don’t set out to avenge their dead buddies. Combined with their accuracy, they promise less collateral damage.
So why is it that the prospect of robot armies makes us nervous? Perhaps we are unduly influenced by a diet of Daleks and Terminator movies. In fact, the use of robotic systems has been growing steadily since the second world war, when the Germans’ V-2 ballistic missile and the Allies’ automated Norden bombsight first took to the skies. The latter was an analogue computer that took over the decision of when to release the payload, and was used to drop the atomic bomb on Hiroshima.
In the intervening decades, robots have become vastly more sophisticated – but they accomplish very specific tasks. Overall, Talon, Swords and others are still less bright than the average garden snail. They may take a wrong turn or identify the wrong target, but they won’t take over the world or enslave the human race.
Robots are currently given little autonomy – even the soldiers who use them feel nervous about machine guns with ideas of their own roaming the battlefield. But the pressure is on to give them a longer leash. A robot can react far faster than a human. If a platoon is being sniped at, a robot with infrared vision can instantly see where the shot came from and fire on the attacker before he can even duck. But if a human controller has to sanction every shot, the sniper will be long gone.
There are also personnel savings. At present, every robot plane flying high over Iraq has a flesh-and-blood pilot sitting in a box in Nevada holding the joystick; every Talon has a soldier with a remote control. That’s an expensive arrangement – one that would be far more efficient if robots could get on with their work alone. And soon, human operators simply won’t be able to keep up, explains Singer. The coming robots “will be too fast, too small, too numerous, and will create an environment too complex for humans to direct”. So the machines will have to go solo.
And that is what should worry us. No matter how clever we make them, these robots will make mistakes. As Singer points out, current Artificial Intelligence systems struggle to tell the difference between an apple and a tomato – how could they distinguish between civilian and insurgent? Yet “the law is simply silent”, he writes, on whether autonomous robots can have a licence to kill, and what should happen if they shoot the wrong man. If a human is somewhere in the decision-making loop, legal accountability can be established. When machines go it alone, accountability disappears – and with it the rule of law. Which is why philosophers Wendell Wallach and Colin Allen are asking how we can persuade robots to do the right thing. The result, in their seminal, but stodgy, book Moral Machines, makes clear just how far we have to go.
They start by exploring the science fiction writer Isaac Asimov’s famous Three Laws of Robotics: that a robot must not injure a human; must obey the orders of a human; and must protect its own existence. But Asimov himself, in his short stories on this theme, showed the contradictions and limitations of these laws. What happens, for example, if two humans give opposing orders?
So the authors turn instead to classical moral theory for help, exploring whether, for example, a robot could be programmed to be a good utilitarian and act to maximise happiness and minimise suffering. Once again they are disappointed: any system would be paralysed by the massive, open-ended calculations required – assuming we could even agree how to measure happiness and suffering. Wallach and Allen ruefully conclude that “with respect to computability … the moral principles proposed by philosophers leave much to be desired”. The best we can do for now, they believe, is try to make sure that any super-tough, gun-toting androids are at least basically friendly.
Singer agrees: one solution, he suggests, would be to allow robots autonomous use of only non-lethal weapons. There are plenty on offer, ranging from goo-guns, which immobilise targets, to microwave pain-rays. The robots could also be armed with more destructive weapons but for use only against the enemy’s hardware, not the people, he argues. Only with the authorisation of a flesh-and-blood – and legally accountable – soldier could lethal force be directed against a human.
These are excellent suggestions. But, with robot planes already dropping bombs on built-up areas, this would require a big shift from present-day practice. Current leaders in the field of high-tech weaponry, such as the US, may be reluctant to tie their hands with such restrictions.
But the world’s only superpower should realise that it might not lead for long. China produces three times as many engineering graduates a year as the US. And so-called “first movers” in new technologies pay heavily for initial development – those who come later can piggy-back on their research and learn from their mistakes. Also, many military robot systems are based on commercially available models – the Marcbot, for example, a small reconnaissance robot used widely by the US in Iraq, was developed from a popular remote-controlled toy car. If terrorists want to build their own droid army, they can order the parts from the internet. Regulating the robots, therefore, is in the interests of the west as much as the rest of the world.
We have an ignoble history of deploying destructive new technologies before considering the consequences. Frankenstein visions of mechanical killers hunting down the last survivors of the human race are not entirely mad. But the robotics revolution is only just beginning: if we act now to update the laws of war, we can still avoid the worst-case scenarios. And, who knows, we might even dream of a day when wars will be fought entirely by machines – and the killing of a single human being would constitute a war crime.