So-called dark patterns are used by tech companies to nudge users online, such as encouraging them to spend money © Getty Images

Is the tech industry using some of its most powerful tools to the detriment of its own users?

From Facebook’s news feed to online advertisement targeting and the so-called “dark patterns” that tech companies use to manipulate users’ responses, it seems that a small number of algorithms now govern the digital lives of billions.

These are among the most widely deployed products of the AI revolution. Trained on vast amounts of user data, they use learning algorithms that constantly adjust to maximise the desired outcomes.

But, as the tech companies that use them have become richer and more powerful, there is no shortage of sceptics questioning whose benefit they are designed to serve. 

“You never see these companies picking ethics over revenue,” says Meredith Whittaker, a former Google engineer who now heads the AI Now Institute at New York University. “These are companies that are governed by shareholder capitalism.”

As AI spreads deeper into daily life, the impact of the algorithms has grown steadily. “Dark patterns” have become the latest focus for the tech industry’s critics. They are clever tricks used on websites and in smartphone apps to nudge users into taking particular actions — for example, leading people to buy more than they want on ecommerce sites, or encouraging them to spend more time on a social media site.

Such techniques have long been used online, but machine learning has since honed them to maximum effect.

Jennifer King, a fellow at Stanford University’s Human-Centered AI institute, says dark patterns are being driven by a “constant race to the bottom,” as companies copy one another in a bid for market share and bigger profits. “It only takes a handful of companies to start exploring a new area and then everyone is doing it,” she says.

King singles out recent changes in the online travel industry as an example of how dark patterns can spread. Devices such as countdown clocks, which push people into making quicker booking decisions, and interfaces that hide extra fees have become common features of many websites.
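
To make the trick concrete, the sketch below shows, in TypeScript, how a countdown clock of this kind can be wired up. Everything here is hypothetical, from the ten-minute window to the rendering callback: the point is that the “deadline” is not tied to any real scarcity, it simply restarts with each visit.

```typescript
// A hypothetical illustration of the countdown-clock pattern: the "offer
// expiry" is manufactured on every page load rather than reflecting any
// real deadline, so every visitor sees the same artificial urgency.

const FAKE_WINDOW_MS = 10 * 60 * 1000; // always "10 minutes left", per visit

function startFakeCountdown(render: (msLeft: number) => void): void {
  const expiresAt = Date.now() + FAKE_WINDOW_MS; // resets on every page load
  const timer = setInterval(() => {
    const msLeft = expiresAt - Date.now();
    if (msLeft <= 0) {
      clearInterval(timer);
      render(0); // in practice, the "expired" deal often simply reappears
      return;
    }
    render(msLeft);
  }, 1000);
}

// Example wiring: print the remaining time once a second.
startFakeCountdown((msLeft) => {
  const mins = Math.floor(msLeft / 60_000);
  const secs = Math.floor((msLeft % 60_000) / 1000);
  console.log(`Offer ends in ${mins}:${String(secs).padStart(2, "0")}`);
});
```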

And the way the algorithms behind these tricks work can make them an insidious force. Reinforcement learning systems, which constantly adjust to maximise the chance that users will behave in a certain way, create a powerful feedback loop: each time a user takes a desired action, the system updates to make it more likely that others will do the same.
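
As an illustration of that loop, here is a minimal sketch of a bandit-style learner, the simplest form of reinforcement learning, written in TypeScript. The nudge variants and the reward signal are hypothetical, and real systems are far more sophisticated, but the dynamic is the same: every recorded outcome shifts which nudge the next user sees.

```typescript
// A minimal, hypothetical sketch of the feedback loop: an epsilon-greedy
// bandit that learns which nudge variant makes a desired action most likely.

type Variant = "countdown_banner" | "one_click_upsell" | "plain_checkout";

class NudgeBandit {
  private counts = new Map<Variant, number>(); // times each variant was shown
  private wins = new Map<Variant, number>();   // desired actions observed

  constructor(private variants: Variant[], private epsilon = 0.1) {
    for (const v of variants) {
      this.counts.set(v, 0);
      this.wins.set(v, 0);
    }
  }

  // Pick a variant for the next user: explore at random with probability
  // epsilon, otherwise exploit the best-performing variant seen so far.
  choose(): Variant {
    if (Math.random() < this.epsilon) {
      return this.variants[Math.floor(Math.random() * this.variants.length)];
    }
    return this.variants.reduce((best, v) =>
      this.rate(v) > this.rate(best) ? v : best
    );
  }

  // Each observed outcome tightens the loop: variants that worked on past
  // users get shown to more future users.
  record(v: Variant, tookDesiredAction: boolean): void {
    this.counts.set(v, (this.counts.get(v) ?? 0) + 1);
    if (tookDesiredAction) this.wins.set(v, (this.wins.get(v) ?? 0) + 1);
  }

  private rate(v: Variant): number {
    const n = this.counts.get(v) ?? 0;
    return n === 0 ? 0 : (this.wins.get(v) ?? 0) / n;
  }
}

// Example: show a nudge, observe the user, feed the outcome back in.
const bandit = new NudgeBandit(
  ["countdown_banner", "one_click_upsell", "plain_checkout"]
);
const shown = bandit.choose();
bandit.record(shown, true); // this user took the desired action
```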

“Social media content selection algorithms, particularly those based on reinforcement learning, are guaranteed to manipulate people,” says Stuart Russell, a professor of AI at the University of California, Berkeley. He believes they lead to deeper political polarisation and stoke anger for the sake of increasing online engagement.

Facebook, for its part, argues that it would not benefit from creating an angry, polarised audience, because these users would ultimately spend less time using the service. But Russell says that is “disingenuous”. He argues that the social media company is driven by profit incentives that are blind to the wider impact of its service — rather like chemical companies whose products cause pollution and end up hurting their own customers.

A deep secrecy over the way algorithms like these work has made it impossible to judge their impact on users, increasing the divide between the tech industry and its critics. Peter Eckersley, a former director of research at the Partnership on AI, a joint research group set up by the biggest tech companies, says research into how algorithms affect the huge online population is urgently needed — but cannot be done because of the lack of access to data held by big tech platforms.

Meanwhile, the tech companies have attempted to head off worries about the effects of their technology by promising an enlightened form of self-regulation. Most claim to have adopted ethical principles and internal guidelines designed to make sure users’ interests are put first.

Yet critics say there is no way to tell whether these codes of conduct are being followed, and that they inevitably clash with powerful financial incentives.

When the companies do take a public stand over ethical matters, it is invariably over something that does not hurt their own commercial interests, says Oren Etzioni, head of the Allen Institute for AI. He cites Microsoft’s call for limits on facial recognition technology (a field where it is not one of the leaders) and Apple’s repudiation of the kind of third-party data that feeds Facebook’s advertising system (a practice Apple does not engage in) as examples.

There are very few examples of companies taking “ethical” actions that run counter to their own interests, he adds.

Some argue that it is too simplistic to dismiss industry efforts at self-restraint outright, and point out that the big tech companies are not monolithic organisations that act with a single purpose.

Engineers inside Google have been instrumental in drawing public attention to some of the search company’s most controversial work and forcing it to change course, says Whittaker. That includes a decision to end work on a censored search engine for the Chinese market, after a storm of internal protest.

Yet the constant drive inside tech companies to push their technology forward and improve their products has created a momentum in the AI field that is hard to slow. Google’s recent dismissal of the two leaders of its ethical AI research team is the latest case to focus attention on forces that are pushing technology in directions that may not always be to the wider benefit of society.

Timnit Gebru © Kimberly White/Getty Images

At Google, the dispute followed the company’s attempt to block publication of a research paper co-authored by Timnit Gebru, one of the research heads. According to Gebru, the case reflected the way the company’s leaders pay only lip service to limiting any negative effects from their technology.

“I think their attitude is: we care deeply about it and we want to work on it, but it’s just not as severe as you’re making it out to be,” she says. “They don’t think it’s severe enough to slow down the product, or even the research process.”

As the world’s most powerful and richest corporations rush headlong into the AI future, that is a warning that has started to resonate.

Copyright The Financial Times Limited 2024. All rights reserved.