A lot of “weird stuff” happens while driving, says Stanford University professor Chris Gerdes. His assertion goes to the heart of one of the problems associated with driverless cars. How can we expect a robot to deal with all the eventualities humans encounter on the road — whether unpredictable pedestrians, rogue traffic cones, or even dead plants blowing in the wind?
And what about so-called “algorithms of death”: can robots be trusted to choose the least bad outcome in the event of an unavoidable crash?
Autonomous cars are not only pushing a century-old industry to the forefront of innovation. They are also forcing us to face crucial questions about how much control we are willing to hand over to machines.
Cars that drive themselves may fundamentally reshape the way we view devices — from things that work or fail to a more nuanced picture of machines that can reason but also make mistakes.
“I don’t think we’ve seen a technology quite like this that mirrors what humans do in such an open-ended task,” says Prof Gerdes, director of Stanford’s automotive research laboratory. “It really is a place where we have a robot doing something which, up to this point, has been exclusively human.”

When it comes to automated transport, the ethical questions are high stakes and fiendishly complicated.
Established manufacturers including Daimler and BMW, as well as tech upstarts such as Tesla and Google, are known to have engaged experts such as Prof Gerdes to discuss ethical questions. Others, such as Fiat Chrysler, say they have engineers “exploring” the implications of autonomous driving.
General Motors says “an autonomous system for production is not close enough today to have answers to these questions, or even to know all the questions”. But Nissan, the Japanese group that with partner Renault is the world’s fourth-largest carmaker, has gone further, appointing a researcher at its Silicon Valley office dedicated to looking at these ethical issues. Melissa Cefkin, an anthropologist, is researching the interaction between autonomously driven vehicles and pedestrians and cyclists.
One layer of ethical questions for driverless cars involves scenarios and thought experiments. Daniel Hirsch, an automotive expert at PA Consulting, poses one: “A child runs on the street and the car has only two options — killing the child or killing the old, cancer-suffering driver.” The “correct” response to this situation in one country or culture might be different in another. It might even be illegal — both German and Swiss law say human lives cannot be weighed against one another.
And what about the position of big business, such as insurers? “There’s a significant number of these cases in which the insurance company would decide differently — for instance, to them a handicapped child is more expensive than a handicapped elderly person due to remaining lifespan,” says Mr Hirsch.
While fully driverless cars remain some years away, highly automated cars with sophisticated crash-prevention technology are on the road today. Toyota wants to build cars that cannot be responsible for a crash, but most modern vehicles have some sort of active safety features. Such considerations are making carmakers take ethical questions seriously.
“There is an increasing awareness across all automakers that they have to deal with the psychological issues of these vehicles,” says Hans-Werner Kaas, senior partner at McKinsey, a consultancy. “They’re beefing up their skillset.”
These moves underline that the industry, hypersensitive to safety after a series of high-profile recalls affecting millions of vehicles, must approach the race to adopt new technologies with caution.
Volvo, which has built its brand around safety, typifies that approach. Erik Coelingh, a senior technical leader for safety at the Swedish carmaker, says: “In practice, we have to make sure a car never gets into a situation where it has to make an impossible choice.”
That means driving conservatively and observing traffic rules. To underscore the point, Volvo said in October it would accept full civil liability for accidents caused by its self-driving technology. But that is not the same as saying drivers can enter what one BMW executive calls “brain off” mode.
Facing the full ethical dilemma of autonomous cars is still some years away. California — one of the most forward-looking transport regulators — last month adopted draft rules that would require humans to stay in control of a vehicle at all times, as is written in the Vienna Convention observed by many European countries.
This means fully driverless cars would be “initially excluded from deployment” in California.
“We as a society have to decide whether we’re ready for a machine, with no driver intervention, to decide what should happen in a critical situation,” says Ian Robertson, BMW’s board member for sales and marketing. “And I’m not sure that we are yet ready for that.”