Rana el Kaliouby is working on making technology more ‘human-centric’

Rana el Kaliouby has spent her career tackling an increasingly important challenge: computers don’t understand humans. First as an academic at Cambridge university and the Massachusetts Institute of Technology, and now as co-founder and chief executive of Affectiva, a Boston-based AI start-up, Ms el Kaliouby has been working in the fast-evolving field of human-robot interaction (HRI) for more than 20 years.

“Technology today has a lot of cognitive intelligence, or IQ, but no emotional intelligence, or EQ,” she says in a telephone interview. “We are facing an empathy crisis. We need to redesign technology in a more human-centric way.” 

That was not much of an issue when computers only performed “back office” functions, such as data processing. But it has become a bigger concern as computers are deployed in more “front office” roles, such as digital assistants and robot drivers. Increasingly, computers are interacting directly with people in many different, unpredictable environments.

This demand has led to the rapid emergence of Emotional AI, which aims to build trust in how computers work by improving how computers interact with humans. However, some researchers have already raised concerns that Emotional AI might have the opposite effect and further erode trust in technology, if it is misused to manipulate consumers.

In essence, Emotional AI attempts to classify and respond to human emotions by reading facial expressions, scanning eye movements, analysing voice levels and scouring sentiments expressed in emails. It is already being used across many industries, ranging from gaming to advertising to call centres to insurance.
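To make that classification step concrete, the sketch below shows one simple way such multimodal signals might be fused into a single emotion label. It is purely illustrative and assumes nothing about any vendor’s actual models: the feature names, fusion weights and emotion categories are all invented for the example.

```python
from dataclasses import dataclass

# Hypothetical per-modality scores, each already normalised to the range 0-1
# by upstream models (e.g. a facial-expression classifier, a voice-prosody
# model and a text-sentiment model). All names here are illustrative only.
@dataclass
class ModalityScores:
    face: dict[str, float]   # e.g. {"happy": 0.7, "angry": 0.1, ...}
    voice: dict[str, float]
    text: dict[str, float]

# Assumed fusion weights -- in a real system these would be learned, not fixed.
WEIGHTS = {"face": 0.5, "voice": 0.3, "text": 0.2}

EMOTIONS = ["happy", "angry", "sad", "fearful", "stressed"]

def fuse_emotions(scores: ModalityScores) -> str:
    """Late fusion: weighted average of per-modality scores, argmax over labels."""
    combined = {}
    for emotion in EMOTIONS:
        combined[emotion] = (
            WEIGHTS["face"] * scores.face.get(emotion, 0.0)
            + WEIGHTS["voice"] * scores.voice.get(emotion, 0.0)
            + WEIGHTS["text"] * scores.text.get(emotion, 0.0)
        )
    return max(combined, key=combined.get)

if __name__ == "__main__":
    sample = ModalityScores(
        face={"happy": 0.6, "stressed": 0.2},
        voice={"stressed": 0.7, "angry": 0.2},
        text={"happy": 0.5},
    )
    print(fuse_emotions(sample))  # -> "happy" with these weights
```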

Gartner, the technology consultancy, forecasts that 10 per cent of all personal devices will include some form of emotion recognition technology by 2022.

Amazon, which operates the Alexa digital assistant in millions of people’s homes, has filed patents for emotion-detecting technology that would recognise whether a user is happy, angry, sad, fearful or stressed. That could, say, help Alexa select what mood music to play or how to personalise a shopping offer.  

Affectiva has developed an in-vehicle emotion recognition system that uses cameras and microphones to sense whether a driver is drowsy, distracted or angry, and can respond by tugging the seatbelt or lowering the cabin temperature.
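A rule-based response layer of that kind might look something like the sketch below. It is a hypothetical illustration of the sense-then-respond pattern described above, not Affectiva’s actual system; the thresholds and intervention names are assumptions.

```python
from dataclasses import dataclass

@dataclass
class DriverState:
    # Hypothetical scores in the range 0-1, produced by in-cabin camera and
    # microphone models. Names and scale are assumptions for illustration.
    drowsiness: float
    distraction: float
    anger: float

def choose_interventions(state: DriverState) -> list[str]:
    """Map sensed driver states to cabin responses using simple thresholds."""
    actions = []
    if state.drowsiness > 0.7:
        actions.append("tug_seatbelt")       # haptic nudge to rouse the driver
    if state.drowsiness > 0.5 or state.anger > 0.6:
        actions.append("lower_temperature")  # cooler cabin to calm or alert
    if state.distraction > 0.6:
        actions.append("audio_alert")        # chime to bring eyes back to the road
    return actions

if __name__ == "__main__":
    print(choose_interventions(DriverState(drowsiness=0.8, distraction=0.3, anger=0.1)))
    # -> ['tug_seatbelt', 'lower_temperature']
```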

And Fujitsu, the Japanese IT conglomerate, is incorporating “line of sight” sensors in shop floor mannequins and sending push notifications to nearby sales staff suggesting how they can best personalise their service to customers. 

A recent report from Accenture on such uses of Emotional AI suggested that the technology could help companies deepen their engagement with consumers. But it warned that the use of emotion data was inherently risky because it involved an extreme level of intimacy, felt intangible to many consumers, could be ambiguous and might lead to mistakes that were hard to rectify. 

The AI Now Institute, a research centre based at New York University, has also highlighted the imperfections of much Emotional AI (or affect-recognition technology, as it calls it), warning that it should not be used as the sole basis for decisions involving a high degree of human judgment, such as hiring, insurance pricing, school performance or pain assessment. “There remains little or no evidence that these new affect-recognition products have any scientific validity,” its report concluded.

In her recently published book, Girl Decoded, Ms el Kaliouby makes a powerful case that Emotional AI can be an important tool for humanising technology. Her own academic research focused on how facial recognition technology could help autistic children interpret feelings.

But she insists that the technology should only ever be used with the full knowledge and consent of the user, who should always retain the right to opt out. “That is why it is so essential for the public to be aware of what this technology is, how and where data is being collected, and to have a say in how it is to be used,” she writes.

The main dangers of Emotional AI are perhaps twofold: either it works badly, leading to harmful outcomes, or it works too well, opening the way for abuse. All those who deploy the technology, and those who regulate it, will have to ensure that it works just right for the user.

john.thornhill@ft.com

@johnthornhillft

