A senior EU official said the draft legislation was likely to impose a limit of one hour for platforms to delete material flagged as terrorist content by police and law enforcement bodies © AFP

Brussels plans to force companies including Facebook, YouTube and Twitter to identify and delete online terrorist propaganda and extremist violence or face the threat of fines.

The European Commission has decided to abandon a voluntary approach to get big internet platforms to remove terror-related videos, posts and audio clips from their websites, in favour of tougher draft regulation due to be published next month. 

Julian King, the EU’s commissioner for security, told the Financial Times that Brussels had “not seen enough progress” on the removal of terrorist material from technology companies and would “take stronger action in order to better protect our citizens”. 

“We cannot afford to relax or become complacent in the face of such a shadowy and destructive phenomenon,” said Mr King. 

Although details of the regulation are still being drawn up inside the commission, a senior EU official said the draft legislation was likely to impose a limit of one hour for platforms to delete material flagged as terrorist content by police and law enforcement bodies.

The proposed regulation would be the first time that the EU has explicitly targeted tech companies’ handling of illegal content. So far, Brussels has favoured self-regulation for tech platforms, which are not considered legally responsible for material on their websites.

In March, the commission toughened up its voluntary guidelines, encouraging platforms to remove, within one hour, material that incites terrorist violence or could radicalise users. Brussels promised to review progress within three months and reserved the right to come up with legislation.

Mr King said the draft regulation — which would need to be approved by the European Parliament and a majority of EU member states to come into force — would help to create legal certainty for platforms and would apply to all websites, regardless of their size.

“The difference in size and resources means platforms have differing capabilities to act against terrorist content and their policies for doing so are not always transparent. All this leads to such content continuing to proliferate across the internet, reappearing once deleted and spreading from platform to platform,” said Mr King. 

Brussels’ crackdown on extremist activity comes in the wake of high-profile terror attacks in London, Paris and Berlin over the past two years. But the move to draw up legislation has been contested inside parts of the commission, where some believe self-regulation has been a success on the biggest platforms, which are the ones most used by terrorist groups.

Google said more than 90 per cent of the terrorist material removed from YouTube was flagged automatically, with half of the videos having fewer than 10 views. Facebook said it had removed the vast majority of 1.9m examples of Isis and al-Qaeda content that was detected on the site in the first three months of this year. 

One EU official said the commission’s push for an EU-wide law targeting terrorist content reflected concern that European governments would take unilateral action. Germany this year brought into force a high-profile “hate speech” law that targets anything from fake news to racist content. Companies must remove potentially illegal material within 24 hours or face fines of up to €50m.

The EU still opts for self-regulation by platforms on more subjective areas such as hate speech and fake news.

Twitter @mehreenkhn


Copyright The Financial Times Limited 2020. All rights reserved.