A mask-clad man works on his laptop outdoors in Shanghai
Businesses might be wary of developing artificial intelligence technology because the consequences for violating China’s strict rules could be severe, analysts said © Yu Ruwen/Future Publishing/Getty Images

China is drawing up tighter rules to govern generative artificial intelligence as Beijing seeks to balance encouragement for companies to develop the technology against its desire to control content.

The Cyberspace Administration of China, the powerful internet watchdog, aims to create a system to force companies to obtain a licence before they release generative AI models, said two people close to Chinese regulators.

The licensing regime is part of regulations being finalised as early as this month, according to people with knowledge of the move. It signals how Beijing is struggling to reconcile an ambition to develop world-beating technologies with its longstanding censorship regime.

“It is the first time that [authorities in China] find themselves having to do a trade-off” between two Communist party goals of sustaining AI leadership and controlling information, said Matt Sheehan, a fellow at the Carnegie Endowment for International Peace.

One person close to the CAC’s deliberations said: “If Beijing intends to completely control and censor the information created by AI, they will require all companies to obtain prior approval from the authorities.”

But “the regulation must avoid stifling domestic companies in the tech race”, the person added, noting that authorities “are wavering”.

China is seeking to formalise its regulatory approach to generative AI before the technology — which can quickly create humanlike text, images and other content in response to simple prompts — becomes widespread.

Draft rules published in April said AI content should “embody core socialist values” and not contain anything that “subverts state power, advocates the overthrow of the socialist system, incites splitting the country or undermines national unity”.

The CAC needed to ensure that AI was “reliable and controllable”, its director Zhuang Rongwen said recently.

The draft regulations also required that the data used by companies to train generative AI models should ensure “veracity, accuracy, objectivity and diversity”.

Companies such as Baidu and Alibaba, which rolled out generative AI applications this year, had been in contact with regulators over the past few months to ensure their AI did not breach the rules, said two other people close to the regulators.

Angela Zhang, associate professor of law at the University of Hong Kong, said: “China’s regulatory measures primarily centre on content control.”

Other governments and authorities are racing to legislate against potential abuses of the technology. The EU has proposed some of the toughest rules in the world, prompting outcry from the region’s companies and executives, while Washington has been discussing measures to control AI and the UK is launching a review.

The quality of the data used to train AI models is a key area of regulatory scrutiny, with attempts to address issues such as “hallucinations” in which AI systems fabricate material.

Sheehan said Beijing had set its requirement “so much higher”, meaning Chinese companies would need to expend more effort to filter the kind of data used to “train” AI.

The lack of available data to meet those demands has become a bottleneck preventing some companies from developing and improving so-called large language models, the technology underlying chatbots such as OpenAI’s ChatGPT and Google’s Bard.

Businesses were likely to be “more cautious and conservative about what [AI] they build” because the consequences of violating the rules could be severe, said Helen Toner, director of strategy and foundational research grants at Georgetown University’s Center for Security and Emerging Technology.

Chinese authorities implied in their draft regulations that tech groups making an AI model would be almost fully responsible for any content created. That would “make companies less willing to make their models available since they might be held responsible for problems outside their control”, said Toner.

The CAC did not respond to a request for comment.

Additional reporting by Ryan McMorrow in Beijing

This article has been amended to remove a reference to the timing for registering products stipulated in draft regulations in April
