Chinese AI scientists call for stronger regulation ahead of landmark summit
Chinese artificial intelligence scientists have joined western academics to call for tighter controls on the technology than those being proposed by the UK, US and EU, as nations set out rival positions ahead of this week’s global AI safety summit.
Several Chinese academics attending the summit at Bletchley Park, England, which starts on Wednesday, have signed a statement warning that advanced AI will pose an “existential risk to humanity” in the coming decades.
The group, which includes Andrew Yao, one of China’s most prominent computer scientists, calls for the creation of an international regulatory body, the mandatory registration and auditing of advanced AI systems, the inclusion of instant “shutdown” procedures and for developers to spend 30 per cent of their research budget on AI safety.
The proposals are more focused on existential risk than US president Joe Biden’s executive order on AI issued this week, which encompasses algorithmic discrimination and labour-market impacts, as well as the European Union’s proposed AI Act, which focuses on protecting rights such as privacy.
The statement also goes further than the draft communique on AI safety being put together by UK prime minister Rishi Sunak for the landmark summit, which stops short of calling on governments to impose specific regulations.
The measures are an early indication of China’s likely stance on global AI regulation at a time of heightened tension over world leadership of key technologies between Beijing and Washington.
The joint statement differs in emphasis from China’s domestic AI regulations, which are focused on content control and censorship. Beijing recently announced its own global AI governance initiative, which highlights the unfairness of barriers to AI adoption caused by US chip export controls on China — a stance that will be heavily contested by Washington and its allies.
The academics behind the joint statement, which was also signed by western experts such as Yoshua Bengio of the Université de Montréal, gathered in October at Ditchley Park country house near Oxford. They met to agree a position ahead of the summit at Bletchley Park, a country estate home to codebreakers during the second world war.
“Having learned lessons from coping with environmental damages, we should work together as a global community to ensure the safe progress of AI,” said Yao, one of the organisers of the statement.
Ya-Qin Zhang, a Tsinghua University dean and former president of tech giant Baidu, said there was a need to ensure future artificial general intelligence was “safe and trustworthy”.
While Beijing is not on the attendee list for the first day of the summit, sources familiar with the Chinese government said the Ministry of Science and Technology would attend. The Ministry of Foreign Affairs did not respond to a request for comment.
The UK’s draft communique, seen by the Financial Times, stops short of proposing regulations but does warn of the “potential for serious, even catastrophic, harm” posed by advanced or “frontier” AI models.
Sunak wants the communique to be signed by politicians at the summit, including from Canada, India, South Korea, Japan, Singapore, Spain, the US and the EU. Tech executives including OpenAI chief Sam Altman and Elon Musk, head of X, formerly Twitter, are also expected to attend.
In a speech to be delivered in London on Wednesday, US vice-president Kamala Harris is set to argue that while “AI has the potential to do profound good, it also has the potential to cause profound harm, from AI-enabled cyber attacks at a scale beyond anything we have seen before to AI-formulated bioweapons that could endanger the lives of millions”.
Such risks “could endanger the very existence of humanity”, Harris is set to say, according to excerpts of her speech seen by the FT. “These threats are, without question, profound, and demand global action.”
In an effort to build momentum and an early legacy for the event, the UK is poised to announce South Korea as the host of the next AI safety summit in 2024, according to two people briefed on the decision.
Paul Triolo, senior associate at the Center for Strategic and International Studies think-tank in Washington, said selecting South Korea was a “smart choice” as the country was seen as relatively neutral on the international stage.
Additional reporting by Stefania Palma in Washington