Monitoring ransomware cyber attacks in May 2017 in Seoul. WannaCry hit more than 200,000 computers in 150 countries. Such attacks are cheaper than sending tanks and, with no rules of engagement, victims are unsure how to respond © AFP

Imagine a celebrated politician in an operating theatre, undergoing robot-assisted surgery. The remotely operated machine is hacked from a foreign server and goes awry, inflicting injury. Does this count as an act of war?

This lurid scenario comes courtesy of an article in Foreign Policy magazine about the international rules of cyber warfare. The first rule of cyber war? There are no set rules. In theory, a cyber attack could trigger Article 5 of the North Atlantic Treaty, signifying an attack on all Nato allies, but the scale of digital aggression required to meet this threshold is unspecified.

Unlike other kinds of skirmish — of land, sea, air and space — cyber is more of a virtual free-for-all. The Tallinn Manual, a document offering guidance on how international law applies in the cyber realm, is not legally binding. In fact, UN discussions to solidify accountability at the state level have stalled.

If this vacuum in international norms sounds worrying, things may worsen as artificial intelligence becomes the backbone of a country’s cyber defence. As humans are cut out of the decision-making loop, and in keeping with the nature of an arms race, both cyber attack and cyber defence are becoming quicker and more aggressive.

Warring machines could one day tip their host countries into a cyber conflict, escalating into military confrontation. That is why AI experts have been calling for an international doctrine governing state action in cyber space.

To a discerning rogue nation, cyber aggression offers multiple temptations over traditional warfare. Modern life runs on interconnected machines; one strike can be satisfyingly disruptive.

Even the most carefully designed systems are vulnerable. This month, the Pentagon submitted computerised weapons systems to authorised hacking tests and found that many could be infiltrated, even taken over, within hours. Some military operators only realised the weapons were compromised thanks to an on-screen pop-up telling them to insert coins to continue.

Such assaults are easy to unleash but hard to trace, and simpler than sending tanks rolling in. And cyber aggression befuddles the enemy: with no agreed rules of engagement, victims are unsure how to respond proportionately. That is, if the perpetrator can be identified. Attacks can be routed through third-party servers. Uncertainty over attribution renders retaliation risky.

Last year saw two ransomware attacks bearing the hallmarks of state involvement. WannaCry, which affected more than 200,000 computers in 150 countries, was traced back to North Korea. NotPetya, which is estimated to have cost logistics specialists such as Maersk and FedEx as much as $300m each, was blamed on Russia.

The National Cyber Security Centre last week revealed there were nearly 1,200 incidents in the UK over the past two years, the majority traceable to “hostile” states. None has posed a risk to life, but this grim threshold beckons. As Luciano Floridi, the philosopher who heads the Digital Ethics Lab at Oxford university, puts it, “Those who live by the digit may die by the digit.”

Professor Floridi and his colleague Mariarosaria Taddeo have described how countries could protect themselves in AI-enabled cyber space while maintaining international stability: by setting legal boundaries on what counts as a legitimate target and a proportionate response; by running war games between allies to highlight weaknesses before AI cyber defence is deployed; and by authorising a body such as the UN Security Council to arbitrate.

Surgical robots, incidentally, were first hacked in 2015. Today, the sabotage would be more subtle and remote manipulation might be hard to spot. What bad luck, the obituaries might say, that this celebrated politician never pulled through.

The writer is a science commentator


