China to lay down AI rules with emphasis on content control
China is to issue rules for generative AI as Beijing seeks to balance encouraging local companies to develop the revolutionary technology against its desire to control content.
The Cyberspace Administration of China, the powerful internet watchdog, aims to create a system that will force companies to gain a licence before they can release generative AI systems, said two people close to the regulators.
That requirement tightens draft regulations issued in April, which had given companies more room for manoeuvre: they had 10 working days to register a product with Chinese authorities after launch.
The new licensing regime, part of regulations being finalised as early as this month according to people with knowledge of the move, signals how Beijing is struggling to reconcile its ambition to develop world-beating technologies with its longstanding censorship regime.
“It is the first time that [authorities in China] find themselves having to do a trade-off” between the two “fundamental” Communist party goals of sustaining AI leadership and controlling information, said Matt Sheehan, a fellow at the Carnegie Endowment for International Peace.
“If Beijing intends to completely control and censor the information created by AI, they will require all companies to obtain prior approval from the authorities,” said a person close to the CAC’s deliberations.
“But the regulation must avoid stifling domestic companies in the tech race,” the person added. The authorities “are wavering”.
China is seeking to respond to the rise of generative AI systems — which quickly create humanlike text, images and other content in response to simple prompts.
Content should “embody core socialist values”, according to the draft rules from April, and must not contain anything that “subverts state power, advocates the overthrow of the socialist system, incites splitting the country or undermines national unity”.
Companies such as Baidu and Alibaba, which rolled out generative AI applications this year, had been in contact with regulators over the past few months to ensure that their AI did not breach the rules, said two other people close to the regulators.
The CAC needed to ensure that AI was “reliable and controllable” since Beijing was concerned about the data used, its director Zhuang Rongwen said recently.
“China’s regulatory measures primarily centre on content control,” said Angela Zhang, associate professor of law at the University of Hong Kong.
Other governments and authorities are racing to legislate against potential abuses of the technology. The EU has proposed some of the toughest rules in the world, prompting an outcry from the region’s companies and executives, while Washington has been discussing measures to control AI and the UK is launching a review.
April’s draft regulations laid out requirements for the data tech companies use to train generative AI models, with a specific demand to ensure “veracity, accuracy, objectivity, and diversity”.
The requirement shows China adopting a similar direction to Europe, where the quality of the data used to train AI models is a key area of regulatory scrutiny, in part to tackle issues such as “hallucinations”, when AI systems fabricate material.
Beijing, however, set its requirement “so much higher”, said Sheehan, meaning Chinese companies must expend more effort filtering the data used to “train” AI.
The lack of available data that fit those demands, however, has become a bottleneck limiting many companies from developing and improving so-called large language models, the technology underlying chatbots such as OpenAI’s ChatGPT and Google’s Bard.
Businesses were likely to be “more cautious and conservative about what [AI] they build” because the consequences of violating the rules could be severe, said Helen Toner, director of strategy and foundational research grants at Georgetown University’s Center for Security and Emerging Technology.
Chinese authorities implied in their draft regulations that tech groups building an AI model would be almost fully responsible for any content it created, a move that would “make companies less willing to make their models available since they might be held responsible for problems outside their control”, said Toner.
The FT has attempted to contact the CAC for comment.
Additional reporting by Ryan McMorrow in Beijing