A Robot-Era project robot collects a resident to accompany her to the dining room at the San Lorenzo nursing residence in Florence, Italy, on December 19 2015
Automatic relations: machines care for nursing home residents © Getty

As the use of robots increasingly encroaches on the real human world, it is easy to slip into science fiction fantasies about the unforeseen complications that will follow.

Usually, these involve imaginary dramas in which intelligent machines make choices that have unintended and painful results for their makers.

Makers of driverless cars, for example, are wrestling with an updated version of philosophy’s “trolley problem” — a hypothetical choice between letting a runaway trolley kill several people on one track or diverting it to another track where it will kill just one. Car owners will have to decide whether they are comfortable with their autonomous vehicles making such choices in the event of an impending accident.

Fully self-driving cars are still some years away from being mainstream, but even today many other machines are performing the basic functions of robots, such as following commands and constantly monitoring the world around them, says Wendy Ju, senior researcher and expert in human-computer interaction at Stanford University. “We talk about it as really far off, but it’s happening right now,” she says.

Robotics experts say that many of the challenges are not so much ethical as technological and social: better design and changing social norms could resolve some of the perceived problems and narrow the range of truly moral conundrums that robots give rise to.

A basic challenge, for instance, is to measure accurately the performance of smart machines, to determine whether they are achieving the goals set for them, says Martial Hebert, head of the robotics institute at Carnegie Mellon University. It is one thing to know, for example, whether granny’s home robot is working or not; it is quite another to judge whether her quality of life has improved as a result of having it.

A further step is to validate that a system will behave in a known way under a defined set of circumstances, Prof Hebert adds, making its actions more predictable.

Putting more thought into designing human interactions with robots will help, says Ms Ju. Compared with the field of human-computer interaction, which is three decades old, the study of the interplay between robots and people is a new area, she says. Understanding how people will respond to machines, and designing these interactions accordingly, could go a long way to ensuring the outcomes are positive.

People often respond to intelligent machines by projecting human values on to them and interpreting their own interactions with inanimate objects in social terms.

As humanoid robots are used increasingly to perform functions that make life easier for groups such as the elderly or the ill, this could throw up particular concerns. Noel Sharkey, professor of artificial intelligence and robotics at the University of Sheffield, says he found 14 instances in Japan and South Korea of robots being promoted as machines that could be used to care for children, even though no work had been done to understand the effects. “It was completely unethical,” he adds.

Another problem stems from the way in which autonomous machines react when faced with situations that were not anticipated by their programmers. Machine learning — the basic technique behind much artificial intelligence — relies on analysing large volumes of data to find patterns, in the process “training” machines how to interpret and respond to real world phenomena.

The outcome of that process is not always predictable, and not just because of the way that algorithms respond to unforeseen circumstances. A system is likely to adapt in different ways depending on the nature of the data it is fed, says Prof Hebert. Making decisions about the inputs into smart machines is as crucial as their basic algorithms.
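To make Prof Hebert’s point concrete, here is a minimal sketch, not drawn from the article, of how training data shapes a learned system’s behaviour. It assumes Python with the scikit-learn library, and the features, labels and example values are entirely hypothetical: the same algorithm, trained on two data sets that differ in a single judgement, responds differently to the same unseen situation.

```python
# Illustrative sketch only: hypothetical data showing that the same learning
# algorithm, trained on slightly different data, can make different decisions.
from sklearn.tree import DecisionTreeClassifier

# Hypothetical features: [speed, distance_to_obstacle]
# Label 1 means "brake", 0 means "continue".
examples = [[30, 5], [60, 2], [20, 50], [80, 40]]
labels_a = [1, 1, 0, 0]  # one set of human judgements about these situations
labels_b = [1, 1, 0, 1]  # identical except for the last judgement

model_a = DecisionTreeClassifier(random_state=0).fit(examples, labels_a)
model_b = DecisionTreeClassifier(random_state=0).fit(examples, labels_b)

# The same unseen situation draws two different learned responses.
situation = [[75, 30]]
print(model_a.predict(situation))  # [0] -> "continue"
print(model_b.predict(situation))  # [1] -> "brake"
```

Neither model is malfunctioning in any technical sense; the divergence comes entirely from the choice of training data, which is why decisions about inputs matter as much as the algorithms themselves.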

How a robot will react when it faces a choice that it has not been programmed for is not always easy to predict. The challenge, says Prof Hebert, is to train it to behave the way a person would: “To know that you don’t know, and still do the right thing.” But with no self-awareness, this is a lofty aspiration for a machine.
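One common engineering approximation of “knowing that you don’t know”, offered here as a hedged sketch rather than anything described in the article, is to act on a model’s prediction only when it is confident and to fall back to a cautious default otherwise. The model, features, threshold and fallback action below are hypothetical.

```python
# Illustrative sketch only: defer to a safe default when confidence is low.
from sklearn.linear_model import LogisticRegression

# Hypothetical features: [speed, distance_to_obstacle]; label 1 = "brake", 0 = "continue".
examples = [[30, 5], [60, 2], [20, 50], [80, 40]]
labels = [1, 1, 0, 0]

model = LogisticRegression().fit(examples, labels)

def decide(situation, threshold=0.9):
    """Act on the model's prediction only when it is confident; otherwise be cautious."""
    probabilities = model.predict_proba([situation])[0]
    if probabilities.max() < threshold:
        return "slow down and ask a human"  # conservative fallback
    predicted = model.classes_[probabilities.argmax()]
    return "brake" if predicted == 1 else "continue"

# Prints the chosen action, or the fallback whenever the model is unsure.
print(decide([75, 30]))
print(decide([30, 4]))
```

A fixed confidence threshold is a crude stand-in for genuine self-awareness, which is why this remains a lofty aspiration for a machine.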

Meanwhile, even if research in areas like this succeeds in lessening the harm and maximising the social benefits of robots, it still leaves a fundamental dilemma. Will decisions made by machines ever be socially acceptable, particularly if the outcome is negative for some groups of people? For example, Google, Toyota and others have said they believe driverless cars will save large numbers of lives that would otherwise be lost on the roads to human error, but if their vehicles cause even a small number of deaths, will it be acceptable — and who will be held to blame?

The ethical parameters for debates like these have yet to be set. “There is no joined-up thinking about it,” says Prof Sharkey. Much more public discussion is needed, he says, along with national “robot policies” that lay out goals for their adoption.
