When Werner Herzog filmed his documentary on Russian mysticism, Bells From The Deep, he paid two Siberian drunks to lie on the ice of a frozen lake and pretend to be pilgrims listening out for the chimes of the legendary lost city of Kitezh. The German director defended his artifice by distinguishing between the literal truth of accountants and the poetic truth of artists.

The perennial debate about what constitutes artistic veracity has been given a new technological twist by the release of Roadrunner, a film about Anthony Bourdain, the celebrity chef and writer. The film’s director, Morgan Neville, incensed some cinematic purists by casually admitting that he had used artificial intelligence software to mimic the voice of Bourdain reading out private emails written before his suicide in 2018.

Some critics took offence at how the audience had been manipulated. It would certainly have been more respectful to have flagged the use of synthetic audio beforehand, even if its use hardly constitutes a capital offence. Yet the episode highlights how so-called deepfakes are steadily permeating our culture, often in imperceptible ways. As with so many other aspects of our digital world, their use is evolving through a chaotic mix of technological capability, commercial and political impulses, government regulation and societal norms.

The term “deepfake” first appeared in 2017 in a Reddit post referring to the use of deep learning AI techniques to generate fake content. Since then, the creation of artificial text, photos, audio and video has exploded. One AI company last year found more than 85,000 deepfake videos circulating on the internet. Some are for entertainment, satire or artistic provocation. For example, the British broadcaster Channel 4 startled viewers in 2020 by airing a deepfake film of Queen Elizabeth delivering an alternative Christmas message. 

But much deepfake content has more sinister intent. A study from 2019 found that 96 per cent of deepfake videos involved non-consensual face-swapping pornography, targeting celebrities or civil rights campaigners. Deepfake technology can also be used to generate false photo profiles, journalistic bylines and artificial articles, fuelling disinformation. This March, the FBI warned that foreign agents would most probably use synthetic media to spread propaganda.

There are four main ways to counter such abusive uses of deepfakes, although none offers more than a partial solution. First, the tech companies that carry the content can deploy smarter tools to identify and block damaging deepfakes. But these technological tools are only ever likely to win “fleeting victories”, according to Noah Giansiracusa, author of How Algorithms Create and Prevent Fake News. Many deepfakes are made by generative adversarial networks, which pit two neural networks against each other: one generates synthetic content while the other tries to flag it as fake. As the name implies, the process amounts to a technological arms race to produce ever more authentic content. “They are literally learning how to avoid detection,” he says.
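The adversarial dynamic Giansiracusa describes can be caricatured in a few lines of Python. The sketch below is a deliberately minimal stand-in for a real GAN, not an implementation of one: the “real content” is just numbers drawn from a Gaussian, and the generator and discriminator are single-parameter estimators rather than neural networks. All names and parameters are illustrative.

```python
import random
import statistics

# Toy caricature of the adversarial loop behind GANs: a one-parameter
# "generator" and a nearest-mean "discriminator" take turns adapting.

random.seed(42)
REAL_MEAN, SD = 4.0, 1.0  # the distribution of genuine "content"

def real_batch(n):
    return [random.gauss(REAL_MEAN, SD) for _ in range(n)]

def fake_batch(mu, n):
    return [random.gauss(mu, SD) for _ in range(n)]

def looks_real(x, real_est, fake_est):
    # Discriminator: label a sample "real" if it lies closer to the
    # estimated real mean than to the estimated fake mean.
    return abs(x - real_est) < abs(x - fake_est)

def train(steps=100, n=500, lr=0.1):
    mu = 0.0  # generator parameter, deliberately far from the real data
    real_est = fake_est = 0.0
    for _ in range(steps):
        reals, fakes = real_batch(n), fake_batch(mu, n)
        # Discriminator "trains": re-estimates both distributions.
        real_est, fake_est = statistics.mean(reals), statistics.mean(fakes)
        # Generator "trains": shifts its output toward whatever the
        # discriminator currently accepts as real.
        mu += lr if real_est > fake_est else -lr
    return mu, real_est, fake_est

mu, real_est, fake_est = train()
fakes = fake_batch(mu, 1000)
fool_rate = sum(looks_real(x, real_est, fake_est) for x in fakes) / 1000
```

By the end of this toy loop the generator’s parameter has drifted close to the real mean, and the discriminator is reduced to near coin-flip accuracy on fakes: the arms race in miniature, with each side’s improvement feeding the other’s training signal.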

Second, commercial incentives play a big role in popularising deepfakes and can be swayed. Tech companies act as pimps as well as prudes. Designed to generate clicks and page views, their algorithms enable sensationalist synthetic content to reach mass audiences and even reward their creators with advertising dollars. Campaigners and advertisers can press the platforms to reorder their morality.

Third, legislators in the US and elsewhere have debated criminalising those using deepfakes to harm individuals or jeopardise national security. State legislatures in California and Texas have already passed laws restricting the use of synthetic media during election campaigns. The EU has even drafted legislation forcing companies to inform users whenever AI is used to imitate a human. That all sounds fine in theory but may prove nightmarish to enforce in practice.

Finally, there is societal adaptation. Behavioural scientists talk of educating internet users to “pre-bunk” deceptive content rather than trying to debunk it once posted. Arguably, the greatest danger from deepfake content is that it inflates the “liar’s dividend”, making everyone question everything and devaluing the currency of truth. As the US congressman Adam Schiff put it, “not only may fake videos be passed off as real, but real information can be passed off as fake.”

Such is the malleability of the technology and the ingenuity of the human mind that the war against harmful deepfakes can never be won. Yet the impact of deepfakes depends on the ways in which they are used, the contexts in which they are posted and the scale of the audience they reach. Those are grounds still worth contesting.

Copyright The Financial Times Limited 2023. All rights reserved.