How to counter deepfakery in the eye of the digital deceiver
Surges in patent filings often indicate the sprouting of transformative innovation, and this is certainly the case for the emerging economy around “deepfakes” and generative artificial intelligence. Deepfakes are “videos or images [...] manipulated [to] take a video of a person, a candidate, a president, and alter it to look like they are saying something that they never said”, according to Professor Hany Farid at UC Berkeley. Fundamentally, the technology “is not in the hands of the few, but in the hands of many”.
Deepfakes and other synthetic media have already been deployed to spread disinformation. For example, a video of Nancy Pelosi, Speaker of the House of Representatives in the US, was manipulated to make her appear inebriated and then published on Facebook. Footage of former US president Barack Obama and former Italian prime minister Matteo Renzi has been tampered with so they appear to make crude comments or gestures. A synthetic voice was used to scam the chief executive of an unnamed energy company out of $250,000. Darker still, the practice of “deepfaking” originated in the murky world of face-swapped pornography.
Behind deepfakes rapidly becoming indistinguishable from reality is “deep learning”, a technology adept at identifying patterns. For example, once software has learnt that Mr Obama’s face looks like a particular collection of pixels, or that his voice sounds like that set of frequencies, raw source material can be altered to fit the same pattern. “Consider the potential effects, then, of video ‘evidence’ [...] which has been entirely fabricated by deepfake software,” points out Joel Smith, an intellectual property litigator at Herbert Smith Freehills, “placing people at crime scenes or even depicting them committing crimes”.
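The pattern-matching idea above can be sketched in miniature. The toy below is not a real deepfake pipeline (which uses deep neural networks); it only illustrates the underlying notion that “learning” a face means distilling many examples into a statistical template, then scoring new material by how closely it fits. All data and function names here are invented for illustration.

```python
import statistics

def learn_template(samples):
    """Reduce several toy 'pixel' vectors to one averaged template pattern."""
    return [statistics.mean(values) for values in zip(*samples)]

def fit_score(template, candidate):
    """Mean absolute difference: a lower score means a closer fit."""
    return sum(abs(t - c) for t, c in zip(template, candidate)) / len(template)

# Three made-up "images" of the same face, each a 4-value pixel vector
samples = [[0.9, 0.1, 0.5, 0.3],
           [0.8, 0.2, 0.6, 0.3],
           [1.0, 0.1, 0.4, 0.2]]
template = learn_template(samples)

close = fit_score(template, [0.9, 0.15, 0.5, 0.25])  # resembles the learnt face
far = fit_score(template, [0.1, 0.9, 0.1, 0.9])      # does not
```

In a real system the template is a deep network with millions of parameters rather than an averaged vector, but the principle is the same: source material can then be nudged until it scores as a close fit to the learnt pattern.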
On the flipside, the technology has been used in constructive and sometimes surprising ways: the Salvador Dalí Museum in Florida recently created a life-size interactive version of the artist himself. An altered voiceover of David Beckham helped raise awareness of malaria in nine different languages in a 2019 campaign, while Reuters has created a virtual sports presenter. Soon we will be able to insert ourselves and our friends into films, or have phones translate our speech into other languages using our voices.
Deepfake tech is mostly non-proprietary and can be rapidly distributed behind a cloak of anonymity on the internet. Added to this pernicious cocktail is the fact that social media platforms help deepfakes spread with unparalleled virality, volume and velocity. Deepfakes are an agnostic and enabling technology that can be used for greater good or evil. But bad actors are often masters at weaponising technology and using it in smarter and more efficient ways than good actors.
What about deploying technological countermeasures? Organisations including Reuters, Google, Microsoft, Amazon and Facebook have all joined the fight against malicious deepfakery. But any defence can be countered by the next software update. Even if foolproof detection were possible, our current battles with human-generated disinformation demonstrate the difficulties of finding solutions.
Our response must therefore be cyber-sociological: developing digital literacy and behavioural practices with the baseline premise that unverified media cannot be trusted. “Society has never relied solely on the content itself as a source of truth,” as Jeffrey Westling, a fellow at the R Street Institute, a US think-tank, points out. “A large part of the reason why Photoshop never became the ‘death of trust’ was because people broadly became aware of the technology, as well as the myriad ways that people may be manipulated by it.”
We clearly need to develop online habits, supported by digital tools, that let us authenticate information for ourselves.
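One family of such digital tools is cryptographic provenance: a publisher signs media at the point of capture or publication, and anyone can later check whether the bytes have been altered. The sketch below is a simplified illustration using a shared-secret HMAC from Python's standard library; real provenance schemes (such as public-key signatures) avoid sharing the key, and the key and function names here are hypothetical.

```python
import hashlib
import hmac

SECRET_KEY = b"publisher-signing-key"  # illustrative only; real systems use public-key signatures

def sign_media(media_bytes: bytes) -> str:
    """Publisher computes an authentication tag over the media file."""
    return hmac.new(SECRET_KEY, media_bytes, hashlib.sha256).hexdigest()

def verify_media(media_bytes: bytes, tag: str) -> bool:
    """Viewer recomputes the tag; any tampering with the bytes changes it."""
    expected = hmac.new(SECRET_KEY, media_bytes, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)

original = b"frame-data-of-genuine-video"
tag = sign_media(original)

verify_media(original, tag)           # untouched footage verifies
verify_media(b"doctored-frame", tag)  # a manipulated file fails the check
```

The design point is that trust attaches to the signed original rather than to the pixels themselves, which is exactly the habit shift the paragraph above calls for.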
Supported by digital literacy campaigns, the response to deepfakes might spur societal change to combat disinformation in general. Deepfakes could be the catalyst for us to develop our ability to check the authenticity of information for ourselves. From art, music and literature to social media, medicine, crime, justice and even presidential campaigns, the technology driving deepfakes is going to transform our society.
Frederick Mostert is founder of Digilabs and law professor at King’s College London, and Henry Franks is a computer scientist and chief software officer at PowerX Technology