In the 2010s, the American political scientist Virginia Eubanks set out to investigate whether computer programs equipped with artificial intelligence were hurting poor communities in places such as Pittsburgh and Los Angeles.

Her resulting book, Automating Inequality (2018), makes chilling reading: Eubanks found that AI-enabled public and private systems linked to health, benefits and policing were making capricious — and damaging — decisions based on flawed data and ethnic and gender biases.

Worse, the AI systems were so impenetrable that they were hard to monitor or challenge when decisions went wrong, especially for the people who were victims of these “moralistic and punitive poverty management strategies”, as Eubanks puts it.

Eubanks’ warnings received scant public attention when they emerged. But now, belatedly, the issue of AI bias is sparking angry debate in Silicon Valley — not because of what is happening to those living in poverty but following a bitter row among well-paid tech workers at Google.

Earlier this month, Margaret Mitchell, a Google employee who co-led a team studying ethics in AI, was fired after allegedly engaging in the “exfiltration of confidential business-sensitive documents and private data of other employees”, according to Google. The tech group has not explained what this means. But Mitchell was apparently looking for evidence that Google had maltreated Timnit Gebru, her co-leader at the AI ethics unit, who was ousted late last year.

This is deeply embarrassing for the tech giant. Gebru is a rarity — a senior black female techie — who has been campaigning against racial and gender biases via the industry group Black in AI. More embarrassing, her departure came after she tried to publish a research paper about the dangers of untrammelled AI innovation that apparently upset Google executives.

As it happens, the offending paper is too geeky to grab headlines. However, it argues, among other things, that natural language processing platforms, which draw on huge bodies of text, can embed the type of biases that Eubanks warned about. And after Gebru was ousted, Mitchell told her Google colleagues that Gebru had been targeted because of the “same underpinnings of racism and sexism that our AI systems, when in the wrong hands, soak up”.

Mitchell tells me: “I tried to use my position to raise concerns to Google about race and gender inequity . . . To now be fired has been devastating.” Gebru echoes: “If you look at who is getting prominence and paid to make decisions [about AI design and ethics] it is not black women . . . There were a number of people [at Google] who couldn’t stand me.”

Google denies this and says Gebru left because she breached internal research protocols. The company points out that it has now appointed Marian Croak, another black female employee, to run a revamped AI ethics unit. Chief executive Sundar Pichai has also apologised to staff.

But the optics look “challenging”, to use corporate-speak, not least because according to Google’s latest diversity report, fewer than a third of its global employees are women (down slightly on 2019) and only 5.5 per cent of its US employees are black (compared with 13 per cent of the US population).

This story will no doubt run and run, but there are at least three things that everyone, even non-techies, needs to note now. First, Silicon Valley’s problems with gender and racial imbalance did not start and end with the more scandal-prone members of the Big Tech fraternity — the issue is endemic and likely to last for years.

Second, what pressure there is on tech giants to reform is coming not so much from regulators or shareholders but from employees themselves. They are becoming outspoken lobbyists, not just over gender and race but on the environment and labour rights as well. Even before this latest drama, Google had faced employee protests over sexual harassment; Amazon is experiencing similar opposition over green issues.

Third, the problem with AI and bias that Eubanks highlights in her book is becoming more acute. Companies such as Google are not just racing to create ever larger AI platforms, but embedding them deeper in our lives. The tools that Gebru’s paper takes a swipe at are a key component of Google’s search processes.

These systems often deliver extraordinary efficiency and convenience. But AI programs operate by scanning unimaginably vast quantities of data about human activity and speech to find patterns and correlations, using the past to extrapolate the future. This works well if history is a good guide to how we want things to unfold, but not if we want to build a better future by expunging elements of our past — such as racist speech.
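To make that mechanism concrete, here is a minimal sketch in Python. Everything in it is hypothetical: the data are synthetic and the variable names (“group”, “skill”, “approved”) are invented for illustration, with no connection to any real company’s systems. A simple model is trained on historical decisions that favoured one group, and it faithfully reproduces that preference when scoring two otherwise identical applicants.

```python
# A minimal sketch with synthetic data and hypothetical variable names;
# it illustrates the general point, not any real company's system.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000

group = rng.integers(0, 2, n)      # protected attribute (0 or 1)
skill = rng.normal(0.0, 1.0, n)    # the trait we actually care about
# Historical approvals depended on skill AND on group membership: the bias.
approved = (skill + 1.5 * group + rng.normal(0.0, 0.5, n)) > 0.75

X = np.column_stack([group, skill])
model = LogisticRegression().fit(X, approved)

# Two applicants with identical skill but different group membership.
applicants = np.array([[0, 0.5], [1, 0.5]])
print(model.predict_proba(applicants)[:, 1])
# The model scores them differently: yesterday's pattern becomes tomorrow's rule.
```

Note that simply deleting the “group” column would not be a cure, since other variables in real datasets often act as proxies for it.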

The solution is to build more and better human judgment into these programs. Getting non-white faces involved in designing facial recognition tools, say, can reduce pro-white bias. But the rub is that human intervention slows down AI processes — and innovation. The question posed by the Gebru saga is not simply: “Is tech racist or sexist?” but also: “Will we sacrifice some time and money to get a fairer AI system?”

Let’s hope the Google drama finally focuses attention on that.

Gillian will join Mark Carney, UN special envoy on climate and former governor of the Bank of England, to discuss “How to Save the Planet — and Rethink the Global Economy” at the FT Weekend Digital Festival, March 18-20; ftweekendfestival.com

Follow Gillian on Twitter @gilliantett and email her at gillian.tett@ft.com
