A security robot equipped with facial recognition and thermal imaging on patrol in Hohhot, China © Getty

Like many working women in China, Ashley Wu shops almost exclusively online. So last November, after a person posing as a customer service representative called to tell her that her existing ticket had been cancelled, she booked a non-existent domestic flight online.

The person perpetrating the fraud had access to Ms Wu’s online purchases, and knew that she had recently booked a Rmb4,000 ($600) flight. The voice on the phone told her that she needed to rebook the flight immediately or risk not being able to fly. 

“When someone knows everything about you and you feel pressed for time, you will do anything they tell you, however irrational,” said Ms Wu, a Beijing-based teacher. Under pressure, she quickly handed over her details.

Chinese citizens generate huge amounts of data, which is being used to fuel China’s big bet on artificial intelligence. But there are risks, and the push towards AI has raised questions about how the Chinese state, as well as private companies, will collect, safeguard and utilise the trillions of data points collected every day.

Beijing’s national artificial intelligence plan, launched in 2017, provides a road map that envisages Chinese researchers leading an AI industry expected to be worth more than $150bn by 2030.

Critics warn that the proliferation of AI-based applications could exacerbate threats to civil liberties. 

“I’m deeply concerned that AI is not going to empower the people, but instead that the government will use AI to further suppress its citizens, especially in combination with surveillance, big data and machine learning,” says Lokman Tsui, a scholar and activist at the Chinese University of Hong Kong.

Video streamed from a patrol robot's camera is watched by security guards in Hohhot, China © Getty

China is already home to the world’s leading facial recognition companies. SenseTime, at $4.5bn one of the world’s most highly valued AI companies, provides software that analyses footage from the national CCTV camera network. Start-up Face++, valued at more than $1bn and backed by a Chinese state-owned venture capital fund, has provided much of the hardware for state video surveillance projects.

A large population and a bureaucratic governmental system that requires large inputs of data have given Chinese AI companies advantageous economies of scale. China maintains the world’s largest database of national identification photos, at upwards of 1bn. Security robots equipped with facial recognition technology roam the streets.

Chinese companies are already putting that data to use. Certain restaurants now allow customers to “pay with your face”, while Chinese banks are incorporating AI technology, developed by the financial services group Ping An, to scan a borrower’s micro-facial movements for early signs of fraud.

But signs of pushback against AI initiatives are emerging. In 2018, China’s central bank reined in pilot schemes run by private companies including Tencent and Alibaba to develop financial credit-like scores over concerns that private companies could improperly use customer data.

In the western region of Xinjiang, China’s ruling Communist party has built a virtual police state employing tech companies such as Shenzhen-listed Hikvision, whose AI-enabled cameras monitor the region’s Muslim residents around the clock. In a sign of condemnation, Washington is taking a tougher line on companies such as Hikvision that are backed by US investment funds.

AI can be double-edged, say researchers. It can bolster surveillance regimes, but it is also increasingly needed by a one-party state governing a country of 1.3bn people.

“It’s part of a larger effort to integrate big data, AI and existing surveillance systems to improve command and control. But it’s also to improve the communication between different government departments,” says Samantha Hoffman, a fellow at the Australian Strategic Policy Institute’s International Cyber Policy Centre who has studied Chinese “smart cities”. 

The “smart cities” initiative integrates AI with urban services ranging from traffic to public healthcare, as well as video surveillance. Ms Hoffman says: “It is neither a positive thing nor an extremely negative thing. It’s actually sort of both and . . . contradictory . . . and I think that’s an important point.”

Sometimes, though, China’s AI capabilities fall short of the hype. Four years after the contentious social credit scheme was announced — intended to measure each citizen’s trustworthiness by assigning a universal score — only Beijing and Hangzhou have launched such scores. High scorers can unlock minor perks such as lower bus fares: not quite the Orwellian system predicted by China’s critics.

China still lacks a financial credit system, despite its economic size. Desperate for access to financing, Chinese citizens turned to peer-to-peer lending platforms, which promised to harness big data and AI algorithms to match lenders with borrowers. After a profusion of P2P platforms sprang up, a wave of collapses this year wiped out thousands of users’ savings.

For all these functions, data have limitations. “Banks want to know who can repay loans, the police want to know who’s likely to commit crimes,” says Jeremy Daum, a legal expert at the Yale China Law Centre in Beijing. Some of these data are attainable, he says, and some are not. “A general score just won’t help you find the data you need.”


Copyright The Financial Times Limited 2019. All rights reserved.
