Last year, California set out to draw up new rules of the road for the driverless car era. As the state’s motoring regulator released its proposals, one immediately attracted controversy: the requirement that a human sit in the driver’s seat.
For the pioneers of the autonomous car industry — among them Google and Tesla, both Californian companies — this rule would undermine the whole point of the technology. Instead of making the cars safer, they argued, it would only increase the chances of a crash.
“I cannot deliver this point more strongly: we need to be careful about the assumption that having a person behind the wheel will make the technology more safe,” Chris Urmson, technical director of Google’s self-driving car project, told a hearing in Sacramento this year to discuss the proposed rules.
Executives at Tesla and Ford echoed his argument. But Brian Soublet, deputy director and chief counsel at the California Department of Motor Vehicles, had another worry. Yes, there are potential safety benefits. But how would driverless cars share the road with vehicles driven by humans? Shouldn’t a person be able to take control in an emergency?
“It is not going to be overnight that every person owns an autonomous vehicle,” he says. “There is going to be a phase-in period where those vehicles are going to be on the road with a lot of other cars that do not have autonomous technology.”
Mapping the future
The clash in Sacramento was a taste of the regulatory, social and ethical questions that will emerge as robots become more deeply integrated with everyday life. Thanks to rapid advances in artificial intelligence, sensor technology and computing power, labs from Silicon Valley to Tokyo are making huge strides in robotics that will transform industries from healthcare to agriculture.
But if self-driving cars are introduced as quickly as many in the industry hope, it is on the roads that many people will have their first encounter with a robot. And while Google touts the prospect of safer roads once the possibility of human error is removed from driving, the idea of unleashing two-tonne robots capable of reaching 60mph in 10 seconds strikes some as unnerving.
Even industry executives acknowledge the potential for problems when traditional vehicles begin sharing the road with autonomous cars, which will have to learn to respond to hundreds of millions of erratic and distracted human drivers.
“It’s going to be a wild mixture for quite some time, if not forever,” says John Ristevski, a vice-president at Here, the digital mapping company owned by German carmakers Audi, BMW and Daimler. “Anything can happen and autonomous vehicles are going to have to be prepared for it.”
Many people in the automotive and technology industries believe that computers — which never get drunk, look at their phones instead of the road or fall asleep at the wheel — are already better drivers than humans. But completely self-driving cars could take decades to reach every city, even if the first autonomous vehicles are on the road in some places within a couple of years.
How to navigate that transition, and how significant the subsequent disruption to the traditional automotive business will be, are matters of intense debate from Silicon Valley and Detroit to China, Germany and Japan.
“Many people in the industry want things to be safer and they want cars to be more accessible,” says Gill Pratt, head of Toyota Research Institute, the Japanese carmaker’s new research and development unit in Silicon Valley. “There’s a divergence of opinion of how to get there.”
Rather than the all-or-nothing approach to self-driving cars pioneered by Google, Toyota is focusing on a more incremental kind of autonomous technology. “We think that we can add intelligence to cars that works in parallel with drivers — like a guardian angel that’s watching what you do and intervenes when you are about to make a mistake,” he says.
While Toyota eventually aims to ditch the steering wheel too, this “guardian angel” is more achievable in the near term, he argues. It also preserves what Akio Toyoda, Toyota president, describes as the “thrill of driving”, which Dr Pratt says is central to TRI’s mandate.
Full autonomy “will be possible someday, but it’s very, very hard”, he says. “It’s going to take a while until we get there.”
Sit back and relax — or not
As many human drivers might appreciate, one of the big obstacles to self-driving cars is weather. Heavy rain, snow or fog can play havoc with sensors essential for autonomous navigation.
“There will always be conditions that you can’t drive in,” says Mr Urmson. This means that in highly unusual situations — Mr Urmson gives the example of a sudden downpour in a desert — a robot car may simply refuse to continue.
Nonetheless, Mr Urmson insists that anything short of a completely self-driving car — the incremental approach taken by the likes of Toyota — is riskier and less useful than full autonomy. If the passenger has to be ready to take the wheel in an emergency — the idea behind the California proposal — “you’re destroying a lot of the value to the user at that point”, he says.
Google’s early testing found that once passengers are told the car can drive itself most of the time, they lean their seat back and start snoozing or turn away from the road entirely. Mr Urmson concluded that the humans simply could not be trusted.
Dr Pratt, who previously oversaw the Robotics Challenge contest run by the US Defense Advanced Research Projects Agency, says this is one of the toughest problems facing autonomous vehicle technology. One idea is to get the driver’s attention in an emergency with an alarm or by shaking the seat or steering wheel.
“The difficulty is really if they’re asleep,” he says. “The answer in that case is somehow pull the car off to the side of the road in a safe way.”
Even in the traditional automotive industry, many believe this issue simply cannot be resolved. “It really has to do with being uncomfortable about having technology that can re-engage fully disengaged drivers,” says Jim Buczkowski, Ford’s global director of electronics systems, research and innovation. “We have to make the assumption that is not going to happen.”
Given the challenges, other tech companies are eschewing the Google example to take a more evolutionary approach to autonomous cars.
“You can’t aim for perfect — you just have to aim for better than human,” says George Hotz, founder and chief executive of Comma.ai, a start-up backed by Andreessen Horowitz, the venture capital group. Comma wants to alleviate the most painful aspect of driving. “We believe that our killer app is traffic,” says Mr Hotz. “A lot of people spend a lot of time in it. It isn’t fun.”
The company, less than a year old, wants to use a combination of sensors, cameras and machine learning to make life easier for rush-hour commuters. Before the end of this year, Comma plans to release a $1,000 kit that will enable owners of certain newer car models to cruise through congestion without having to pay close attention to the vehicle in front.
It is easier to teach cars to drive themselves at 10mph than at faster speeds, Mr Hotz says. “The consequences of a mistake also aren’t that high.”
The ‘soft side’ of driving
Visitors to Mountain View, home to Google and its parent company Alphabet, could be forgiven for thinking that the driverless future is already here. Dozens of Google’s pod-like car prototypes, with their rooftop sensors and smiley headlights, trundle around the Silicon Valley town every day. Each converts its thousands of hours of real-world driving experience into data from which the entire fleet can learn.
Google runs 3m miles of testing in simulation every day so its cars can prepare for any situation. There is even a special team that thinks up oddball incidents to throw at them.
Yet real life brings surprises no-one can anticipate. Last year, a Google car rounded a corner to find a woman in an electric wheelchair chasing a duck with a broom in the middle of the road. “We’d never tested the car against a woman and a duck,” Mr Urmson says, “and it was able to understand this was unusual, slow down, let that thing play out and then get on its way.”
Google is sufficiently confident about its technology that its staff have discussed launching a fully autonomous taxi service in Mountain View as soon as next year, according to people familiar with the company’s thinking. The service may initially be restricted to Google employees, which might get around any legal and regulatory issues. Google has already run some tests with employees who are trained drivers.
However, even on the sedate streets of Mountain View, Google’s technology does not work flawlessly. On Valentine’s Day this year, after more than 1m miles of autonomous driving, a Google car caused the project’s first crash: a slow-motion collision with a city bus.
Google blamed it on the kind of misunderstanding that often happens between people. “This is a classic example of the negotiation that’s a normal part of driving — we’re all trying to predict each other’s movements,” it said.
The incident shows that Google’s car not only needs to get better at anticipating how human drivers behave, but also at driving in a way other road users find natural, rather than robotic. “Self-driving cars basically work today,” says Chris Dixon, a partner at Andreessen Horowitz. “The challenge is cultural and regulatory.”
Trickier than keeping in lane or a safe distance behind the car in front is the “softer side” of driving, says Mr Buczkowski. If an autonomous vehicle is programmed to give way to human drivers, it might never cross a four-way junction, where people could take advantage of its hesitation. “It’s not the simple rules, it’s the unwritten ones,” he says.
Making sure autonomous cars drive naturally will also help passengers have confidence in the machines, says Mr Ristevski. “Speed, braking, acceleration — those little nuances are really important for making a consumer feel comfortable,” he says. “Even if you tell people it’s safe 99.999 per cent of the time, if it’s not comfortable, it’s not going to be palatable.”
But this creates ethical and legal quandaries. If human drivers want to break the speed limit, should robot cars be able to do so? And when autonomous cars are involved in fatal crashes, as they inevitably will, who should be held responsible? Volvo says it will take on liability for its autonomous vehicles — because if it did not, consumers would never trust it enough to ride in one.
As regulators in California and beyond begin to grapple with these issues, autonomous cars must first win the trust of other drivers. No matter how irresponsibly people drive, “society is going to judge the robot as being at fault” in a crash, says Mr Dixon.