
When Steve Stephens posted a Facebook video of himself shooting a random stranger last Sunday, he did more than commit a horrific crime. The Cleveland killer also highlighted the abject failure of big technology groups to take responsibility for their role in spreading illegal, hateful and false content around the world.

Google, Facebook, Twitter and the like have long benefited from the principle, established in a 1996 law, that internet content providers are not responsible for user postings. At the time, the law made sense. Telephone companies were not liable for threats made using their services, so why should a broadband provider or blog hosting site be held responsible when some nut posted a terrorist rant? Not surprisingly, many countries and the EU have similar immunity rules.

But the growing power of social media and search companies raises questions about whether this legal framework still makes sense. The big tech groups do more than just host third-party content. Their algorithms actively promote some posts and disfavour others. They serve up advertising on other websites, and they take down some offensive postings after users complain (the Cleveland murder video was up for two hours).

To my mind, these companies are no longer mere conduits. They should have to take at least partial responsibility for the content that appears on their sites and alongside their adverts. The tough question is how far to go.

Clearly, YouTube, Facebook and others should be moving faster to find and remove offensive content, but should they also be required to avoid promoting such material in the first place?

The big companies already have mechanisms to take down copyrighted material — a 1998 US law requires them to. Congress should pass a similar law that makes social media companies liable unless they quickly remove illegal postings — child porn, hate speech and libel. That would force the groups to invest more money in scanning for and removing criminal content.

The law could go further and force them to create mechanisms for quick handling of complaints when the content is merely offensive, invasive of privacy or false.

It is also possible to penalise the posters themselves. Officials in the Indian city of Varanasi recently said that administrators of Facebook and WhatsApp groups would be prosecuted for circulating fake news: if something false is posted, the group must deny it and remove the member, or the administrator will be held responsible. But, to my mind, that approach would let the companies off the hook too easily.

Chris Reed, a professor at the UK’s Queen Mary University, argues it is unrealistic to expect Facebook and YouTube to avoid promoting offensive material because of the volume of posts and the difficulty in designing filters to distinguish between, say, YouTube videos of porn and breastfeeding. “Just having the technology will inevitably chill free speech,” he says.

Yet these companies boast daily about their advances in artificial intelligence. It is hard to believe that they could not improve their screening.

The problem will only get worse. Facebook said this week that it is working on reading users' thoughts. Before it inflicts the results on the rest of us, the company needs to think harder about its responsibilities, not just as a pipeline but as a publisher. Experience teaches us that some ideas are better kept private.

Copyright The Financial Times Limited 2021. All rights reserved.