Self-driving cars seem like a magical idea. The concept of vehicles that can operate themselves, without steering wheels or pedals, leaps straight from the pages of science fiction. 

Yet like so many fantastical stories, there are “wizards” hidden behind the curtain — lots of them. Constructing the road to fully automated driving, it turns out, requires a lot of manual labour. 

Most companies working on this technology employ hundreds or even thousands of people, often in offshore outsourcing centres in India or China, whose job it is to teach the robo-cars to recognise pedestrians, cyclists and other obstacles. The workers do this by manually marking up or “labelling” thousands of hours of video footage, often frame by frame, taken from prototype vehicles driving around testbeds such as Silicon Valley, Pittsburgh and Phoenix. 
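
To make that concrete, the sketch below shows one plausible shape for a single labelled video frame. The field names, object classes and values are illustrative assumptions, not any company’s actual annotation format.

```python
# Illustrative sketch only: a made-up annotation record for one video frame,
# showing the kind of output a human labeller produces frame by frame.
from dataclasses import dataclass, field
from typing import List

@dataclass
class BoundingBox:
    label: str    # e.g. "pedestrian", "cyclist", "traffic_sign"
    x: int        # left edge of the box, in pixels
    y: int        # top edge of the box, in pixels
    width: int
    height: int

@dataclass
class LabelledFrame:
    video_id: str
    frame_number: int
    boxes: List[BoundingBox] = field(default_factory=list)

# A labeller draws a box around everything the car must learn to recognise.
frame = LabelledFrame(
    video_id="test_drive_0042",
    frame_number=1337,
    boxes=[
        BoundingBox("pedestrian", x=412, y=220, width=38, height=96),
        BoundingBox("cyclist", x=610, y=240, width=54, height=88),
    ],
)
```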

“Machine learning is a myth, it’s all Wizard of Oz type work,” says Jeremy Conrad, an investor at Lemnos Labs in San Francisco. “The labelling teams are incredibly important in every company, and will need to be there for some time because the outdoor environment is so dynamic.” 

Road test: over 100 Google employees pedal around a Waymo self-driving car in California to check the onboard safety system © Waymo

Huge advances in artificial intelligence, sensor quality and computing power have put in place the technological foundations of the driverless revolution. Yet despite these innovations, humans will still be needed behind the scenes for many years to come, drawing boxes around trees and highlighting road signs, in order to keep these systems fresh. 

“AI practitioners, in my mind, have collectively had an arrogant blind spot, which is that computers will solve everything,” says Matt Bencke, founder and chief executive of Mighty.ai, which taps a community of part-time workers to filter and tag training data for tech companies.

The same problem exists for any AI system: computers “learn” by ingesting vast amounts of manually labelled information, then use the resulting “model” to recognise objects and patterns when they see them again. 
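
As a toy illustration of that ingest-then-recognise loop, a few lines of Python are enough; the tiny numeric “features” below are made-up stand-ins for the camera images real systems learn from, and scikit-learn stands in for far larger industrial pipelines.

```python
# Toy supervised-learning example: the model only "knows" what its manually
# labelled training examples teach it. Features and labels here are invented.
from sklearn.linear_model import LogisticRegression

# Each row is a crudely simplified example; each label records what a human
# decided that example shows.
examples = [[0.9, 0.1], [0.8, 0.2], [0.2, 0.9], [0.1, 0.8]]
labels = ["pedestrian", "pedestrian", "road_sign", "road_sign"]

model = LogisticRegression().fit(examples, labels)

# Given something new, the model recognises it from the patterns it ingested.
print(model.predict([[0.85, 0.15]]))  # expected: ['pedestrian']
```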

The challenge in training self-driving cars is greater than for other AI applications because of the open-ended variety of scenes and situations in which vehicles can find themselves. Even after adjusting for changing lighting and weather conditions at different times of the day and year, the urban environment can change overnight due to construction, special events or accidents. 

“The annotation process is typically a very hidden cost that people don’t really talk about,” says Sameep Tandon, chief executive of autonomous driving start-up Drive.ai. “It is super painful and cumbersome.” 

The level of accuracy demanded of autonomous cars is also higher than that of other AI systems. Cars drive themselves by comparing the surroundings they see through their cameras and sensors with a detailed on-board 3D map of the streets around them. Safety is paramount: if Google Photos’ facial recognition system fails to correctly identify a person in a picture, it is inconvenient; if a Waymo vehicle does not spot a pedestrian, it could be fatal. 

In the race to create driverless cars, one of the yardsticks by which progress is measured is the number of miles a company’s vehicles have covered. Alphabet’s Waymo said in May that its cars had piloted themselves across 3m miles of public roads, while Tesla said last year it had gathered data from more than 100m miles driven by owners of its existing vehicles to help it develop its Autopilot system. 

More miles, however, means more manual work for these companies’ small armies of backroom data processors. Driving just a handful of miles can create tens of gigabytes of data, far too much to upload wirelessly from the car. Instead, it must be saved to a hard drive and shipped to an outsourcing centre. For such a cutting-edge industry, these analogue logistics seem archaic. 
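
A back-of-envelope calculation shows why shipping drives beats uploading; the data volume and upload rate below are illustrative assumptions, not figures from any of these companies.

```python
# Rough arithmetic only; both numbers are assumptions for illustration.
data_volume_gb = 50        # "tens of gigabytes" from a handful of miles
upload_rate_mbps = 10      # an optimistic sustained cellular upload speed

upload_seconds = data_volume_gb * 8 * 1000 / upload_rate_mbps
print(f"{upload_seconds / 3600:.0f} hours to upload one short drive")  # ~11 hours
```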

Critical function: labelling teams are among the labour-intensive operations fundamental to the success of autonomous driving © Nvidia

Each hour of driving can take hundreds of hours to convert into useful data, says David Liu, chief executive of Plus.ai, another Silicon Valley start-up developing autonomous driving systems. “We need hundreds of thousands, maybe millions of hours of data” for self-driving vehicles to go everywhere, he says, requiring “hundreds of thousands of people to get this thing done” globally. 
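
Rough arithmetic makes the scale Mr Liu describes plausible; the frame rate and per-frame labelling time below are assumptions for illustration, not his figures.

```python
# Illustrative arithmetic only; both rates are assumptions, not quoted figures.
frames_labelled_per_second_of_video = 10   # sampling rate for annotation
human_seconds_per_frame = 60               # a minute of labelling work per frame

frames_per_hour_of_driving = frames_labelled_per_second_of_video * 3600
labelling_hours = frames_per_hour_of_driving * human_seconds_per_frame / 3600
print(f"{labelling_hours:.0f} hours of labelling per hour of driving")  # 600 hours
```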

Big tech companies prefer not to publicise the manual aspect of autonomous driving. Waymo, Uber and Tesla all declined to comment for this story. 

“It is very hard to get people to talk about this,” says Dan Weld, professor of computer science and engineering at the University of Washington in Seattle. “They all like to say it’s machine-learning ‘magic’.” 

In a rare public acknowledgment, during a talk at the University of California, Berkeley, back in 2013, former Waymo and Uber engineer Anthony Levandowski described a Google team in India made up of what he called “human robots”, who were labelling images from its Street View service. 

Such a labour-intensive process does not come cheap. Industry estimates put the cost of creating and maintaining such detailed 3D maps for every city in the US in the billions of dollars a year. 

Some start-ups see an opportunity here. Companies such as Plus.ai, Deepmap and Drive.ai claim they can use “deep learning” to reduce this human input while still maintaining the accuracy necessary for autonomous vehicles to operate safely. Deep learning is a newer type of machine learning that uses layered neural networks loosely modelled on the human brain.

“With machine learning, it is very hard to get above 90 or 95 per cent accuracy and precision, but with deep learning it’s a lot easier to build a model like that,” says James Wu, chief executive of Deepmap, which raised $25m in May. 

Others in the industry, however, are sceptical that deep learning will remove the need for people altogether. Mr Bencke of Mighty.ai points to the challenges Facebook, YouTube and Twitter have faced in tackling abuse, from bullying to terrorism, on their social platforms. “If deep learning were that capable, don’t you think they would have solved that problem by now?” he says. “That’s much less complicated than autonomous vehicles, and it’s a big market.” 

AI researchers everywhere are chasing the goal of “unsupervised learning”, in which machines can teach themselves without manually labelled examples. In the meantime, the wizards of Silicon Valley and Detroit will be hoping their customers and investors continue to pay no attention to all those people behind the curtain. 
