In the early days of the internet, there was much talk of how the web would connect us all, thereby furthering knowledge and fostering community. Yet for all its advocates’ and early adopters’ optimism about its potential to enable us to organise, think and influence one another, freed from institutional supervision or what the newcomers frequently described as “mainstream media bias”, one thing has remained consistently problematic: comments posted under articles or blogs (or “below the line” in internet-speak).
On a recent weekday morning, I scrolled through a handful of comment threads published on various websites. “And pigs will fly!” wrote “Tobias Smollett” underneath an opinion piece published on ft.com by Lynn Forester de Rothschild on “inclusive capitalism”.
At dailymail.co.uk, a story on the Premier League’s decision to take no action over its chief executive Richard Scudamore’s sexist emails under the admittedly provocative headline, “Britain (and sadly the PM) is losing its collective good sense over one football boss’s vulgar email”, prompted “dennisherring” to ask: “Just how long do we have to put up with media coverage of so called sexism? Any adjective ending with ism is an evil.”
On theguardian.com, below an article about Vladimir Putin’s attempts to improve relations with China amid rising tensions with the west, “chesney79” noted that the Russian president’s actions would, rather, lead us down the “Road to World War 3”.
While comment threads can provide a snapshot of a range of opinions and a place where readers can contribute to a collective conversation, it is also clear they can be dysfunctional places where the weird meet the weird and get weirder. The borderless nature of the web means it can be hard to avoid the ranting bigots and creeps we would ordinarily shun in the real world.
Bad behaviour online is so common that it has generated its own typology of abuse. “Flaming” is engaging in a deeply personal and angry war of words across an online discussion. “Griefing” is repeatedly tormenting someone, mostly through abuse in an online forum.
A “troll” is someone who intentionally disrupts online communities, most often under a pseudonym, and the activity of “trolling” is so widespread that the online Urban Dictionary lists dozens of rival definitions – “being a prick on the internet because you can” is the most succinct.
But, unpleasant as it may be, the vast majority of offensive, nasty and bullying comment is legal – only rare exceptions have made it to court. Most media sites have moderators of some kind, who post rules or guidelines for comment, remove offensive comments or even close articles to comment.
Take FT Alphaville. Founded in 2006, the blog for finance professionals now receives around 30,000 visitors a day. Paul Murphy, the blog’s editor, says moderation is preferable to banning anonymity. “We sometimes warn people not to be abusive or aggressive, or remind them to stay on subject,” he says. “And if they don’t comply, we zap them – sometimes for a day or a week, sometimes for ever. The readership learn what is acceptable and they often apologise if/when they’ve crossed the line.” Moderation seems to work. Alphaville receives around 4,000 comments a week, of which only one or two get deleted, and, on average, a reader is blocked just once or twice a month.
However, wrangling online conversation can also be a messy, frustrating, and typically thankless affair that involves more time than many organisations have. Even a dedicated team of moderators may struggle to compete with legions of trolls and spambots. Which is why, after years of letting anonymity rule online, media heavyweights are increasingly taking action.
Last year, for example, Popular Science, a 141-year-old American magazine, took the radical decision to banish comments from its website. Its editors argued that internet comments at the bottom of an article, particularly anonymous ones, were undermining the integrity of science and fostering a culture of aggression and mockery.
Quartz, the Atlantic Media-owned business news site, hasn’t had comments since its launch in 2012, opting instead for edited annotations alongside stories. Vox, a tech-savvy news site launched in April by former Washington Post blogger Ezra Klein, does not have a comments section. And this month, US political magazine the National Journal, also owned by Atlantic Media, became the latest big-name news organisation to eliminate its comments section. “For every smart argument, there’s a round of ad hominem attacks . . . The debate isn’t joined. It’s cheapened, it’s debased,” said Tim Grieve, the magazine’s editor-in-chief.
Still, most publishers aren’t giving up on comment threads, which, when they take off, can provide some of the internet’s most rewarding content. For them, the goal is to take the playground back from anonymous bullies and give greater weight to those willing to offer, in addition to strong views, their real names. In September last year the Huffington Post announced that all commenters would be required to link their Huffington Post profiles to Facebook accounts verified with a phone number and have their real names displayed when commenting.
“I feel that freedom of expression is given to people who stand up for what they say and [are] not hiding behind anonymity,” said editor-in-chief Arianna Huffington. The internet colossus Google, too, is looking at how to clean up the frequently ferociously uninhibited comment sections on YouTube by linking comments to users’ Google+ profiles.
However, insisting that users link their profiles to social networking sites won’t necessarily absolve news websites of responsibility for offensive comments. Nor do social media platforms, priding themselves on their reach and accessibility, consider themselves censors-in-chief for the entire globe. Besides, it’s easy enough to lie about your personal information when registering with social networking platforms and news websites – most only require an active email address. Many users opt for handles or pseudonyms, and regular users frequently develop personae to go along with those handles. “I create multiple identities for myself online, not because I want to heckle people but to devise a much more creative and fractured experience for myself,” “Susie K”, a 32-year-old self-confessed “comment junkie” on several major media websites, tells me over the phone. “Who you are and who you say you are can mean very different things.”
One of the great questions for the future of the net is this: to what extent will this extraordinary freedom be allowed to remain in the hands of the people, and to what extent will it be limited and regulated? If a recent ruling by the European Court of Human Rights is anything to go by, perhaps we should expect more of the latter.
Housed within a gigantic glass-and-steel modernist building on the outskirts of Strasbourg, the European Court of Human Rights (ECtHR) has reached more than 10,000 judgments in its 60-year existence. In the past decade alone, it has required Austria to allow same-sex couples to adopt each other’s children, compelled improvements in Russia’s prisons, and ruled that France should give illegitimate children equal rights to inheritance.
Among the litigation today is an apparently unremarkable case that could almost escape notice but whose implications for the internet could be profound. “Delfi AS v Estonia” is a dispute about how closely websites need to police comments and whether they should have to predict when a story will attract defamatory posts. Brought against one of Estonia’s largest news websites, Delfi, by a ferry company’s main shareholder, named “L” in the judgment, it revolves around an article published in January 2006 about the implications of the company’s decision to change its routes. The new ferry routes would necessitate damaging the ice roads (frozen, human-made structures on the surface of waterways), which are the cheapest way to get from mainland eastern Estonia to its outlying islands.
It was hardly front-page news, even in Estonia. But what happened next should worry any website that encourages its users to comment on its articles, particularly those that allow people to comment anonymously or under a pseudonym.
Within two days of publication, Delfi’s article attracted 185 comments, many posted anonymously. Some were enlightening, others funny, and 20 were identified by “L” as being not just insulting and vulgar but also defamatory and threatening. Delfi accepted that these comments – referring to the ferry company as “fucking shitheads”, for example; or describing the Estonian state as being led by “antisocial scum”, and so on – were defamatory and removed them as soon as they received “L’s” list of offending comments.
Delfi had in place the notice-and-takedown system of moderation favoured by many websites. And although the system was easy to use – it required nothing more than clicking on a reporting button – and the comments were removed immediately upon notice, the website did not receive the complaint until six weeks after the article had gone live, meaning the offending comments had been accessible to the public for six weeks.
The comments were little worse than much of the hollow rage and name-calling found elsewhere online but, in April 2006, the ferry company sued and, two years later, an Estonian court found Delfi liable and ordered it to pay damages of EKr5,000 (£270). An appeal by Delfi was dismissed by Estonia’s Supreme Court in June 2009. After exhausting further appeals within Estonia, Delfi took the case to Strasbourg where, in October 2013, the ECtHR delivered its ruling, stating that: “Given the nature of the article, the company [Delfi] should have expected offensive posts, and exercised an extra degree of caution so as to avoid being held liable for damage to an individual’s reputation.”
Thus, according to the ECtHR, a news website should anticipate the types of stories that might attract defamatory or insulting comments and be prepared to remove them promptly – or even before the comment has been reported, which might mean websites will be forced to pre-moderate every comment they publish. One only has to look at the type and volume of comment posted below the line on websites from the FT’s to the Daily Mail’s to see the implications of this ruling. And, as any moderator will tell you, controversial comments can appear in the unlikeliest of places.
The judgment also says that if a commercial website allows anonymous comments, it is both “practical” and “reasonable” for it to be held legally responsible for the contents of those comments.
In January, responding to the implications of this ruling, a group of media organisations, internet companies, human rights groups and academic institutions sent an open letter to Dean Spielmann, a 51-year-old judge and president of the ECtHR, warning that the judgment could lead to “serious adverse repercussions for . . . democratic openness in the digital era”. The 69 signatories included Google, Guardian News and Media, the Daily Beast, PEN International, and the World Association of Newspapers and News Publishers (of which the Financial Times is a member).
What web publishers fear is that the failure of Delfi’s appeal might represent such a landmark case that, if followed, it could both strike a blow to freedom of expression online and open a Pandora’s box of people and companies demanding compensation from publishers for comments posted anonymously on their websites.
“This baffling logic now appears to render it effectively impossible for an online publication to allow comments without positive identification of the end-users,” says Joe McNamee, executive director for European Digital Rights, an international advocacy group with headquarters in Brussels. “If the current ‘flag and remove’ system [of moderation] is no longer accepted, websites would have to employ a small army of moderators-turned-sleuths to track down the people hiding behind pseudonyms or false identities. That’s a mammoth, potentially impossible, task.”
For Eric Barendt, Goodman Professor of Media Law at University College London from 1990 until 2010, the ruling doesn’t adequately balance freedom of speech against an individual’s right to protect his or her reputation. “I wouldn’t stick my neck out to say the ECtHR’s judgment was ridiculous,” he tells me, “but I know many people who would. How bizarre that this case could be the straw that breaks the camel’s back.”
The judgment will not only affect whistleblowers, says Aidan Eardley, a London-based barrister specialising in data protection and media-related human rights law. “It’s also bad news for people who want to comment about sensitive personal issues such as domestic abuse, sexual identity, religious persecution, etc.”
As Sarah Laitner, the FT’s communities editor, says: “It’s important to remove any hurdles a reader may face to participation. Some people feel that they are able to comment more freely if they can use a pseudonym.”
However, the decision isn’t final. The ramifications of the case are such that on February 17 it was accepted for referral to the Grand Chamber of the ECtHR (the other 16 cases up for referral that day were rejected; just five per cent of requests for referrals succeed).
On July 9 a fresh panel of 17 judges will pass judgment. Only then will Delfi find out if it is liable or not.
Whatever the outcome, the scale of the task is huge: online anonymity is so embedded in our culture that it might be too late to change the rules.
John Sunyer is an FT journalist