Digital warfare: A technological arms race is developing between fakers and those fighting them © Reuters

When Camille François, a longstanding expert on disinformation, sent an email to her team late last year, many were perplexed.

Her message began by raising some seemingly valid concerns: that online disinformation — the deliberate spreading of false narratives typically designed to sow mayhem — “could get out of control and become a huge threat to democratic norms”. But the text from the chief innovation officer at social media intelligence group Graphika soon became rather more wacky. Disinformation, it read, is the “grey goo of the internet”, a reference to a nightmarish, end-of-the-world scenario in molecular nanotechnology. The solution the email proposed was to make a “holographic holographic hologram”.

The bizarre email was not actually written by François, but by computer code; she had created the message — from her basement — using text-generating artificial intelligence technology. While the email in full was not overly convincing, parts made sense and flowed naturally, demonstrating how far such technology has come from a standing start in recent years.

“Synthetic text — or ‘readfakes’ — could really power a new scale of disinformation operation,” François said.
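To give a sense of how readily such text can now be produced, the sketch below continues a disinformation-style prompt with the open-source GPT-2 model via the Hugging Face transformers library. It is an illustration only: the article does not say which system François used, and GPT-2 is assumed here purely as a stand-in.

```python
# Minimal sketch: generating synthetic text with an open-source language model.
# Assumption: GPT-2 via Hugging Face transformers stands in for whatever tool
# was actually used; the article does not name it.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "Online disinformation could get out of control and become"
outputs = generator(
    prompt,
    max_length=80,           # cap the length of the generated continuation
    num_return_sequences=3,  # sample several candidate continuations
    do_sample=True,          # sample rather than greedy-decode, for variety
    temperature=0.9,         # higher temperature -> more surprising text
)

for i, out in enumerate(outputs, 1):
    print(f"--- candidate {i} ---")
    print(out["generated_text"])
```

As with François’ email, the output tends to mix plausible-sounding passages with nonsense, but producing it takes only a few lines of code and a consumer laptop.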

The tool is one of several emerging technologies that experts believe could increasingly be deployed to spread trickery online, amid an explosion both of covert, intentionally spread disinformation and of misinformation, the more ad hoc sharing of false information. Groups from researchers to fact-checkers, policy coalitions and AI tech start-ups are racing to find solutions, now perhaps more important than ever.

“The game of misinformation is largely an emotional practice, [and] the demographic that is being targeted is an entire society,” says Ed Bice, chief executive of non-profit technology group Meedan, which builds digital media verification software. “It is rife.”

So much so, he adds, that those fighting it need to think globally and work across “multiple languages”.

Well informed: Camille François’ experiment with AI-generated disinformation highlighted its growing effectiveness © AP

Fake news was thrust into the spotlight following the 2016 US presidential election, particularly after US investigations found co-ordinated efforts by a Russian “troll farm”, the Internet Research Agency, to manipulate the result.

Since then, dozens of clandestine, state-backed campaigns — targeting the political landscape in other countries or domestically — have been uncovered by researchers and the social media platforms on which they run, including Facebook, Twitter and YouTube.

But experts warn that the disinformation tactics typically used by Russian trolls are also beginning to be wielded in pursuit of profit — including by groups looking to besmirch the name of a rival, or manipulate share prices with fake announcements, for example. Occasionally, activists are employing these tactics to give the appearance of a groundswell of support, some say.

Earlier this year, Facebook said it had found evidence that one of south-east Asia’s biggest telecoms providers, Viettel, was directly behind a number of fake accounts that had posed as customers critical of the company’s rivals, and spread fake news of alleged business failures and market exits, for example. Viettel said that it did not “condone any unethical or unlawful business practice”.

The growing trend is due to the “democratisation of propaganda”, says Christopher Ahlberg, chief executive of cyber security group Recorded Future, pointing to how cheap and straightforward it is to buy bots or run a programme that will create deepfake images, for example.

“Three or four years ago, this was all about expensive, covert, centralised programmes. [Now] it’s about the fact the tools, techniques and technology have been so accessible,” he adds.

Whether for political or commercial purposes, many perpetrators have become wise to the technology that the internet platforms have developed to hunt out and take down their campaigns, and are attempting to outsmart it, experts say.

In December last year, for example, Facebook took down a network of fake accounts that had AI-generated profile photos that would not be picked up by filters searching for replicated images.
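One common technique for spotting replicated images is perceptual hashing, which flags near-copies of known photos but has nothing to match a freshly generated face against. The sketch below is a minimal illustration of that kind of check, assuming the Python imagehash and Pillow libraries and hypothetical file names; it is not a description of Facebook’s actual filters.

```python
# Minimal sketch of replicated-image detection via perceptual hashing,
# the kind of filter the article says AI-generated profile photos can evade.
# Assumption: the 'imagehash' and 'Pillow' libraries; file names are illustrative.
from PIL import Image
import imagehash

def is_near_duplicate(path_a: str, path_b: str, threshold: int = 8) -> bool:
    """Return True if two images are perceptually similar (likely copies)."""
    hash_a = imagehash.phash(Image.open(path_a))
    hash_b = imagehash.phash(Image.open(path_b))
    # Hamming distance between 64-bit perceptual hashes; small means near-duplicate.
    return (hash_a - hash_b) <= threshold

# A reused stock photo will match a known copy in a database; a freshly
# generated face has no counterpart to match, so this check alone misses it.
print(is_near_duplicate("profile_photo.jpg", "known_stock_photo.jpg"))
```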

According to François, there is also a growing trend towards operations hiring third parties, such as marketing groups, to carry out the misleading activity for them. This burgeoning “manipulation-for-hire” market makes it harder for investigators to trace who the perpetrators are and take action accordingly.

Meanwhile, some campaigns have turned to private messaging — which is harder for the platforms to monitor — to spread their messages, as with recent coronavirus text message misinformation. Others seek to co-opt real people — often celebrities with large followings, or trusted journalists — to amplify their content on open platforms, and will often first target them with direct private messages.

As platforms have become better at weeding out fake-identity “sock puppet” accounts, there has been a move into closed networks, which mirrors a general trend in online behaviour, says Bice.

Against this backdrop, a brisk market has sprung up that aims to flag and combat falsities online, beyond the work the Silicon Valley internet platforms are doing.

A growing number of tools for detecting synthetic media such as deepfakes are under development by groups including security firm ZeroFOX. Elsewhere, Yonder develops sophisticated technology that can help explain how information travels around the internet in a bid to pinpoint the source and motivation, according to its chief executive Jonathon Morgan.

“Businesses are trying to understand, when there’s negative conversation about their brand online, is it a boycott campaign, cancel culture? There’s a distinction between viral and co-ordinated protest,” Morgan says.

Others are looking into creating features for “watermarking, digital signatures and data provenance” as ways to verify that content is real, according to Pablo Breuer, a cyber warfare expert with the US Navy, speaking in his role as chief technology officer of Cognitive Security Technologies.
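As a rough illustration of the digital-signature idea, the sketch below shows a publisher signing a piece of content with an Ed25519 key so that anyone holding the corresponding public key can later confirm it has not been altered. The Python cryptography library and the workflow shown are assumptions for the purpose of illustration, not a description of any specific system Breuer’s group is building.

```python
# Minimal sketch of the "digital signature" idea: a publisher signs content at
# the source so anyone can later verify it is unaltered and correctly attributed.
# Assumption: Ed25519 keys via the 'cryptography' library; not any specific product.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# Publisher side: generate a key pair and sign the content bytes.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

content = b"Official statement issued by the newsroom."
signature = private_key.sign(content)

# Verifier side: given the content, the signature and the publisher's public key,
# check that the content has not been tampered with along the way.
try:
    public_key.verify(signature, content)
    print("Signature valid: content matches what the publisher signed.")
except InvalidSignature:
    print("Signature invalid: content was altered or signed by someone else.")
```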

Manual fact-checkers such as Snopes and PolitiFact are also crucial, Breuer says. But they are still under-resourced, and automated fact-checking — which could work at a greater scale — has a long way to go. To date, automated systems have not been able “to handle satire or editorialising . . . There are challenges with semantic speech and idioms,” Breuer says.
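One building block of automated fact-checking is matching a new claim against claims that have already been checked, and the brittleness of such surface-level matching helps explain the limitations Breuer describes. The sketch below is a toy illustration using scikit-learn’s TF-IDF vectoriser, with invented claims and verdicts; real systems are considerably more sophisticated.

```python
# Toy sketch of claim matching: compare a new claim against a small database of
# already-checked claims by text similarity. Claims and verdicts are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

checked_claims = [
    "Drinking bleach cures the coronavirus.",          # verdict: false
    "5G towers spread the coronavirus.",               # verdict: false
    "Washing hands reduces the spread of the virus.",  # verdict: true
]

new_claim = "A politician says gargling bleach will wipe out the virus."

vectorizer = TfidfVectorizer().fit(checked_claims + [new_claim])
matrix = vectorizer.transform(checked_claims)
query = vectorizer.transform([new_claim])

scores = cosine_similarity(query, matrix)[0]
best = scores.argmax()
print(f"Closest checked claim: {checked_claims[best]!r} (score {scores[best]:.2f})")
# Surface-level matching like this struggles with satire, idioms and paraphrase,
# which is the gap Breuer points to.
```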

Collaboration is key, he adds, citing his involvement in the launch of the “CogSec Collab MISP Community” — a platform for corporations and government agencies to share information about misinformation and disinformation campaigns.

But some argue that more offensive efforts should be made to disrupt the ways in which groups fund or make money from misinformation, and run their operations.

“If you can track [misinformation] to a domain, cut it off at the [domain] registries,” says Sara-Jayne Terp, disinformation expert and founder at Bodacea Light Industries. “If they are money makers, you can cut it off at the money source.”

David Bray, director of the Atlantic Council’s GeoTech Commission, argues that the way in which the social media platforms are funded — through personalised advertisements based on user data — means outlandish content is typically rewarded by the groups’ algorithms, as they drive clicks.

“Data, plus adtech . . . lead to mental and cognitive paralysis,” Bray says. “Until the funding-side of misinfo gets addressed, ideally alongside the fact that misinformation benefits politicians on all sides of the political aisle without much consequence to them, it will be hard to truly resolve the problem.”

Copyright The Financial Times Limited 2024. All rights reserved.