News groups and tech companies team up to fight disinformation
The BBC has launched an industry-wide campaign to get media organisations and technology companies to work together to fight disinformation, particularly around elections and other sensitive events.
The project, which is still in its early stages, grew out of a summit convened in June by Tony Hall, BBC director-general, who has likened the rise of false and misleading news to a “poison in the bloodstream of societies”.
On the media side, participants at the event included the Financial Times, Reuters, the Wall Street Journal, The Hindu, Agence France-Presse, Canadian Broadcasting Corporation, and the European Broadcasting Union. Google, Facebook and Twitter also took part.
The BBC-led group’s next steps will include holding working groups on media literacy education, voter information, and sharing lessons on how to handle problems arising during elections.
The group also plans to develop an “early warning system” so that news companies and tech platforms can alert each other rapidly when they “discover disinformation which threatens human life or disrupts democracy during elections”, the British broadcaster said in a statement. The system will be trialled this year and will rely on a combination of technology and journalists to identify problematic content quickly and remove it or minimise its reach.
“Disinformation and so-called fake news is a threat to us all. At its worst, it can present a serious threat to democracy and even to people’s lives,” said Mr Hall.
“This summit has shown a determination to take collective action to fight this problem and we have agreed some crucial steps towards this.”
The effort comes as concerns mount in many countries about the way social media and messaging apps have been used to influence public opinion and spread false information. In the US, special counsel Robert Mueller found Russian interference in the 2016 presidential election occurred “in sweeping and systematic fashion”, while rumours and fake news stories on Facebook and WhatsApp have fomented fatal violence in India, Myanmar and Sri Lanka.
In response, tech companies such as Facebook and Google’s YouTube have invested heavily in hiring thousands of content moderators, and are also developing algorithms and filters that can help identify problematic posts. But given the volume of material posted to social media networks, as well as the challenges posed by encrypted chat apps, such efforts have fallen woefully short.
On the policy side, some countries, including France and China, have passed laws that define illegal misinformation, and allow for its removal, while others are weighing regulations and legislation.
But experts, including a group convened by the European Commission last year, have warned that such rules can infringe on free speech and pose problems given the inherent difficulty of defining disinformation. As the experts advising the EU warned, “simplistic solutions should be disregarded” while “any form of censorship either public or private should clearly be avoided.”
James Lamont, managing editor of the Financial Times, said the newspaper was “delighted to participate” in the project, alongside other initiatives in media literacy and education.
“Fake news undermines well-informed democratic debate . . . It also destroys public confidence in media and adds to the financial pressures facing quality news organisations,” Mr Lamont said.