In July 1855 the eminent scientist Michael Faraday took an unpleasant boat trip down the river Thames. So appalled was he by the state of the “fermenting sewer” that he wrote a letter to The Times urging a clean-up.
“If we neglect this subject, we cannot expect to do so with impunity; nor ought we to be surprised if, ere many years are over, a hot season give us sad proof of the folly of our carelessness,” he wrote.
Three years later, a heatwave struck London and Faraday’s warning became a smelly reality. The Great Stink, as it became known, forced MPs to pass a bill to clean up the river. Joseph Bazalgette was entrusted with the task, building embankments along the Thames to encase an extensive sewerage system.
This remarkable feat of civic engineering should inspire us to tackle the Great Stink of our own times emanating from the open sewer that is the internet. For years now, users have been chucking chamber pots of informational filth into cyber space with no respect for public hygiene. Little surprise that the internet has turned into a political and social health hazard.
The maladies of social media are infecting societies everywhere. The persecution of the Rohingya population in Myanmar was fuelled by online hate campaigns. Elections across the world, including in the supposedly mature democracies of the US and the UK, have been swayed by covert digital manipulation. The threatening trolling of many female politicians and the cyberbullying of teenage students have destroyed lives.
One of the great challenges of our age will be how to deal with this online filth while preserving free speech and the remarkable promise of our connected world. Who will build a sewerage system for the digital age?
The big tech companies have belatedly acknowledged the problem and finally appear serious about trying to deal with this scourge. They have written smart algorithms and hired thousands of moderators to flag and delete offensive content.
Twitter and Facebook have just shut down hundreds of accounts of Chinese trolls trying to provoke protesters in Hong Kong. Facebook is setting up a 40-person oversight board to provide a new form of supranational governance, although this initiative has already attracted its doubters.
Last month, the network provider Cloudflare terminated its contract with 8chan, an online forum that ran screeds of white supremacist propaganda, denouncing it as a “cesspool of hate”. Scientists are exploring ways to remodel social networks and defang the “online hate ecosystem”.
Politicians are slowly figuring out how to update free speech laws for the internet age. And smart journalists, such as Maria Ressa, chief executive of the Rappler website in the Philippines, are leading efforts to improve the credibility and influence of professional journalism.
Most of these collective efforts are worthwhile and necessary, but they can never be sufficient. The scale of the challenge is colossal, given that some 4.4bn people are now connected to the internet. Every minute of the day these users post 350,000 tweets and upload 300 hours of video to YouTube.
We can never stop those who want to post false or hateful content. But we can certainly go further to redesign the market to reward those who publish socially valuable information and squeeze those who promote viral misinformation.
Dhruv Ghulati, an investment banker turned entrepreneur, is one of those trying to do just that by changing the way the $333bn global online advertising market works.
The company he runs, Factmata, can quickly and algorithmically ascribe a trust score to millions of pieces of online content depending on toxicity, objectivity, political bias and so on. His ultimate goal is to create a new metric for the advertising industry that includes content quality, not just virality.
At present, advertisers mostly buy online ads on the basis of CPMs, or the cost per thousand impressions. But such programmatic advertising takes little or no account of the nature of the content. Factmata is aiming to introduce a quality component to this metric, creating a Q-CPM, a kind of nutritional label for information.
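To make the idea concrete, here is a minimal sketch of how a quality-adjusted CPM might be computed. The formula, the component scores (toxicity, objectivity, bias) and the blending weight are all hypothetical illustrations, not Factmata’s actual method:

```python
def quality_score(toxicity: float, objectivity: float, bias: float) -> float:
    """Combine per-article signals (each assumed to lie in [0, 1])
    into a single quality score: objectivity rewards, toxicity and
    political bias penalise."""
    return objectivity * (1.0 - toxicity) * (1.0 - bias)


def q_cpm(base_cpm: float, quality: float, weight: float = 0.5) -> float:
    """Blend a conventional CPM price with a quality score.

    weight=0 reproduces pure virality-based pricing; weight=1 makes
    the price fully proportional to content quality, so low-quality
    pages earn less per thousand impressions.
    """
    return base_cpm * ((1.0 - weight) + weight * quality)


# A reputable article vs. a toxic, heavily biased one, at the same base price.
reputable = q_cpm(10.0, quality_score(toxicity=0.05, objectivity=0.9, bias=0.1))
cesspool = q_cpm(10.0, quality_score(toxicity=0.9, objectivity=0.2, bias=0.8))
print(f"reputable page earns ${reputable:.2f} CPM, toxic page ${cesspool:.2f}")
```

Under this toy scheme the toxic page is priced well below the reputable one, which is the market signal the Q-CPM idea is meant to send.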
“Fundamentally, there is a pricing problem,” Mr Ghulati says. “The value that society derives from a piece of information is different from the value that the market ascribes to that same information. The market value is determined solely by its propensity to serve ads.”
The obstacles to creating any such metric are obvious and daunting. The algorithmic determination of “quality” would also trigger endless controversy. But it is clear that advertisers must take far more responsibility for where their ads appear and whom they reward online.
By itself, this would create a powerful engine for change. Social networks and food companies, for example, really should care that cookie ads are being served up on white supremacist sites, even if racists eat cookies, too. If advertisers prove resistant, then consumers must force them to act.
“If we neglect this subject, we cannot expect to do so with impunity,” as Faraday warned long ago.