A few years ago, a computer scientist called Chris Carson had a realisation. Soon the streets would be filled with self-driving cars, using high-quality cameras to navigate. What if those cameras could also be trained to recognise licence plates and spot traffic offences? It would be like having a hyper-efficient traffic cop on every corner. 

“We’re still pulling people over for traffic violations and writing them tickets,” says Carson. “That’s so 20 years ago.”

Carson’s start-up, Hayden AI, is now trying to create a network of eyes many times larger than any CCTV network, supercharged by 5G mobile networks and artificial intelligence. The images could come from almost anywhere: taxi drivers could place a smartphone on their dashboards; if the footage resulted in another motorist receiving a parking fine, the taxi driver could receive a share of the proceeds. 

“That’s the idea — to put as many eyes on the road as possible,” enthuses Carson. “I think it’s going to create a large paradigm shift in behaviour.” Once this unprecedented network of intelligent cameras exists, there is no reason why it should stop with identifying traffic violations. The footage could help solve any crime, inform city planning and target advertising.

You don’t have to be a Luddite to feel that something fundamental is changing in the human experience. We are becoming known entities. It is not simply that information is being collected on us; that information is now often high-quality video and audio, which can precisely identify our every move. 

Across the US, police forces have been giving out Amazon’s smart doorbells, which film the surrounding street. Some of the footage is then streamed on an app, Neighbors. The days of private investigators sitting in cars could be over: now they can just park the car with a camera on the dashboard, and watch the footage from their living room. 

Meanwhile, employers can track our sleeping habits, retailers can follow us round the aisles, car parts suppliers say that they can identify drivers’ emotions. The resulting data are training complex algorithms, which then nudge us towards certain behaviours. “We are moving from a digital age to an age of prediction,” says Pam Dixon, director of the World Privacy Forum, a think-tank.

A security centre in Angers, France. The use of sophisticated surveillance equipment is on the rise worldwide

Your smartphone aspires to know what you want before you do. I recently met a tech entrepreneur who couldn’t work out why Google Maps was suggesting that he visit a certain location. Then he realised it was time for his six-weekly haircut. At least it was only a suggestion.

How do we deal with this new world? In 2010, when he was chief executive of Google, Eric Schmidt said that the company’s policy was “to get right up to the creepy line and not cross it”. He added that brain implants were beyond the creepy line “at least for the moment, until the technology gets better”. It remains one of the best summaries of how we deal with technologies: the creepy line exists, the creepy line shifts.

Twenty years ago, if a supermarket had asked to put a microphone in our houses, or a landlord had asked to put in a camera, or a train company had asked our whereabouts in the station, we would have said no. Now we buy Amazon Alexas, rent Airbnbs and use London Underground’s free WiFi. And then we cross our fingers. Amazon and Google now both admit that individual employees listen to some recordings from their smart speakers. Facebook argues that its users have no expectation of privacy on their posts.

This has set up a confrontation. On one side are those who are trying to make the creepy line into an uncrossable trench. In May the city of San Francisco banned its police from using facial recognition software — a move intended to nip the technology in the bud. This month a new email service called Superhuman disabled the ability for users to track where and when a recipient had opened their message — following online outcry.

On the other side are those intent on pushing the creepy line back, by accustoming people to new technologies. This includes Carson, the computer scientist planning a surveillance network. “If we can live in a society where you can walk safely down an alley, I think that creepy line goes away,” he says. Meanwhile, Google has learnt the lesson of Google Glass, the wearable camera launched in 2013 that was shunned by citizens. It is introducing facial recognition in a deliberately timid manner — as a quick way to log in to its home assistant.

Historically you could expect privacy in a phone-box or with a doctor. But what kind of privacy can you expect from a cellphone provider or a fitness watch? We don’t know enough to have expectations.


In 1998, while Google was based in a garage and Kodak still dominated the camera market, the science fiction writer David Brin predicted the end of privacy as we knew it. Brin argued that cameras and sensors were becoming so cheap that they would inevitably become omnipresent. 

People’s actions would be recorded, their tax returns published. But this, Brin argued, would create a new form of privacy. “Mutually assured surveillance” would ensure that people did not misbehave — that each citizen was let alone. There would be no more hit-and-run drivers or political corruption. 

“It was fun while it lasted, living on these city streets amid countless, nameless fellow beings,” Brin wrote in his book The Transparent Society. “It was also lonely.” Anonymity was just a phase in human existence, whose end was now inevitable. After all, what would anonymity have meant to a cave dweller? What privacy did Victorian teenagers have from their parents? 


Brin believes that his vision is coming true — that “godlike powers of almost omniscient vision and surveillance” are spreading. This is not limited to the powerful. Police malpractice is now caught on camera. In the US, family-history websites, to which millions of people submitted DNA samples out of idle curiosity, are being used to identify suspects (the first successful prosecution took place in Washington state last month).

There is also another argument that surveillance is inevitable. In China, surveillance is becoming pervasive, and algorithms score citizens on their behaviour. If the west enacts privacy laws, it will have less data — a key raw material for artificial intelligence — and so will put itself at a competitive disadvantage. Democracies will have to collect data in order to safeguard themselves from cyber attacks, or so the argument goes. 

The head of the Metropolitan Police’s staff association recently called China’s use of facial recognition “absolutely correct”. For most of us, however, total transparency or surveillance would be a dystopia. We fear data breaches, identity theft and simply the embarrassment of old photos online, but it often takes bitter experience for us to act. “Privacy’s a hindsight problem — you look back and see what’s happened,” says Jason Schultz, a law professor at New York University. 

It doesn’t help that privacy is a slippery, abstract concept. It is not mentioned explicitly in the US constitution, and it barely appeared in English law until a few decades ago. Its essence is that it is contextual: we want certain information limited to certain people. Have you ever noticed how easy it is to share a secret with a stranger, who will never be able to connect it with any other information about you? Privacy is also often about power: the less knowledge that others have about you, the less easily they can meddle in your life. Total transparency would not satisfy either of these criteria. 

Big technology companies have different approaches to addressing our worries. First, they can build in some privacy: Apple, a pioneer, blocks some online tracking (its chief executive Tim Cook said last year that “stockpiles of personal data serve only to enrich the companies that collect them”). Facebook, a privacy laggard, now predicts that the “main ways” that people will communicate on its platform will be via encrypted messaging services, Messenger and WhatsApp. 

Second, tech companies offer to put us in control. Google and Apple now offer more options to hide our location, for example. On a cloudless day in Silicon Valley in May, I listened as a Google executive noted how much personal information was now stored on our phones. “You should always be in control of what you share and who you share it with,” she said. Facebook promises that users’ information “will only be seen by who they want to see it”. In other words, we can now choose where to put the creepy line. 

The third way that technology companies offer us privacy is by safeguarding and anonymising our data. Google will know you searched for gonorrhoea remedies and went to the cinema on a sick day, but no one else will. Likewise, so what if an Amazon employee who you never meet listens to a recording of your domestic life? The risk that anonymised data will be traced back to individuals can never be eliminated — especially because new data sets might become available that can be crossmatched — but it can be made minimal.
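The cross-matching risk described above is easy to demonstrate. The toy sketch below (all names, records and field names are invented for illustration) shows the classic linkage attack: an “anonymised” dataset with names stripped out is re-identified by joining it with a public dataset on quasi-identifiers such as postcode, birth year and gender.

```python
# Toy illustration of a linkage attack: names have been stripped from
# the first dataset, but joining on quasi-identifiers that also appear
# in a public register is enough to recover identities.
# All records here are invented.

anonymised_health = [
    {"postcode": "M1 4BT", "birth_year": 1985, "gender": "F", "diagnosis": "asthma"},
    {"postcode": "E2 7QA", "birth_year": 1990, "gender": "M", "diagnosis": "diabetes"},
]

public_register = [
    {"name": "Alice Example", "postcode": "M1 4BT", "birth_year": 1985, "gender": "F"},
    {"name": "Bob Example", "postcode": "E2 7QA", "birth_year": 1990, "gender": "M"},
]

QUASI_IDENTIFIERS = ("postcode", "birth_year", "gender")


def reidentify(anon_rows, public_rows):
    """Match anonymised rows to named rows sharing the same quasi-identifiers."""
    index = {
        tuple(row[k] for k in QUASI_IDENTIFIERS): row["name"]
        for row in public_rows
    }
    return {
        index[key]: row["diagnosis"]
        for row in anon_rows
        if (key := tuple(row[k] for k in QUASI_IDENTIFIERS)) in index
    }


print(reidentify(anonymised_health, public_register))
```

In this contrived example every record is uniquely re-identified, which is why privacy researchers treat combinations of seemingly innocuous attributes as sensitive in their own right.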

A police officer in Nanchang, China, conducts a street patrol wearing augmented reality glasses, which are able to identify people using facial recognition technology

There are, however, notable flaws in this vision. Facebook and Google have to keep tracking us, because that is what their advertising business is built on. They can offer consent without offering real choice. Today less than 10 per cent — probably closer to 1 per cent — of Google users change their privacy settings. With many companies’ services, you either sign up to a sweeping statement, or you can’t use the service at all. Nearly all the time, we don’t know what it means to click “accept”. 

“When you go to your doctor, they don’t just say, I can perform any kind of surgery you want if you sign this form,” says Schultz of New York University. In the current world, we are like drugged patients, who submit ourselves to a general anaesthetic, knowing our bodies will be used for medical research while we’re unconscious. 

In the future, individuals could be given more detail about the use of their data; an option to deny apps access to key information and still use the service; and easier means to delete their data after the fact. 

But perhaps we should go further still, and say that even informed consent does not guarantee privacy. Governments don’t allow us to save money by buying a car without seat belts, or to make money by selling our organs. Does privacy belong in the same category as safety or medical ethics? If some degree of privacy is a right — an essential part of being human, rather than a commodity — then we should not be able to trade it away either. “It is a classic consumer protection problem,” says Viktor Mayer-Schönberger, a professor at the Oxford Internet Institute.

The principle of individual agency — Google’s promise that “You should always be in control” — has other limitations. How does it apply to surveillance in public places: such as facial recognition by the police or even from other people’s smart cars? You can’t sign a form every time you leave the house. And even if we as individuals opt out, our behaviour can be inferred through the huge datasets on other people’s behaviour.

This is the new frontier: to protect our own privacy from intrusive algorithms, we may need to block collection of other people’s data and therefore slow down services that they might find useful. How do we find the balance? 


Quayside is a four-hectare, former industrial stretch of Toronto’s eastern waterfront. One of Google’s sister companies wants to turn it into “the most innovative district in the world”. 

In a proposal published last month, Sidewalk Labs outlined a vision of the future. As always with Silicon Valley dreams, there is something to love. The buildings would nearly all be made of wood; they would be able to turn food waste into energy; the greenhouse gas emissions would be barely a tenth of the current city average. 

At the heart of Sidewalk Labs’ vision is data. Sensors would gather information in buildings and adjust settings. The thermostats could be adjusted automatically. The street markings could fluctuate, giving more room to pedestrians at some times in the day and more to cars at others. Data would be made publicly accessible for other public goods — not advertising. People’s lives would be tracked, measured and “optimised”. And as in a Kafka novel or the TV show Black Mirror, residents might feel subject to opaque processes beyond their direct control.

Many services swap marginal intrusion for a marginal benefit (a meal ordering app recently asked me for a photo to make it easier for me to collect my order). Sidewalk Labs is promising significant intrusion for a significant benefit. As such, it is comparable to the healthcare services that want access to sensitive data — in exchange for helping to identify serious disease. 

The fate of Sidewalk Toronto is interesting, because it is a rare example where technology companies have had to submit their innovation for inspection beforehand. Critics have centred on how Quayside’s data will be handled. A trust proposed by Sidewalk Labs recommends that all data gathered be de-identified. But it would not insist upon it. 

“As soon as they said that, I knew I had to resign,” says Ann Cavoukian, a former Ontario privacy commissioner, who was an adviser to the company last year. “The minute you make it voluntary, you’re not going to have any privacy. Everybody wants personally identifiable data, that’s the treasure trove.” 

Cavoukian compares it to the surveillance in China and Dubai. “No way, that’s not the direction we’re going to here in Toronto.” She adds that privacy concerns could yet lead Waterfront Toronto, the development agency, to cancel the whole thing. “It’s not a done deal,” she says.

But Waterfront Toronto and the city’s citizens are faced with an almost impossible decision: to evaluate unforeseeable benefits against unforeseeable costs. Who knows how massive, but probably de-identified, data collection will affect Toronto? Who knew how YouTube, Facebook and Twitter would affect the world?

In any case, nobody seriously thinks that tech companies will have less data on us in 10 years than they do today. We can ban police use of facial recognition, but the technology is likely to spread in other places — as will gait recognition, iris scanning and biometrics. (The US Department of Homeland Security aims for nearly all air travellers to be boarded using facial recognition by 2023.) Woodrow Hartzog, a privacy scholar, argues that there is a value in simply making data harder to collect, in creating transaction costs. But parents use cameras to watch their children, homeowners use them to watch their front doors; the shift from convenience to surveillance is an easy one.

When regulation has focused on particular areas, as with an Illinois law on biometrics, it has proved effective. Information may be the new oil, but for some start-ups, it’s also known as “the new Kryptonite”, because of the fines for data breaches. 

Politicians like to talk about “comprehensive” privacy legislation. Yet, privacy is so contextual that each sector needs different standards, says Dixon of the World Privacy Forum. The AI Now Institute, a think-tank working on holding algorithms to account, argues that different bodies should be set up with expertise in different sectors: healthcare, education, welfare and so on. 

There is not one creepy line, there are hundreds. “You should always be in control of what you share and who you share it with,” says Google. But after nearly two decades living in the data economy, we have learnt that perhaps the best way to draw those creepy lines is not as individuals, hurrying through online consent forms, but as communities, with the power of numbers. For 20 years, tech companies have assumed that they are right to infringe our privacy; we should now ask them to justify themselves first.

Henry Mance is the FT’s chief feature writer





Copyright The Financial Times Limited 2019. All rights reserved.
