UK government to publish ‘tests’ on whether to pass new AI laws
The UK government is set to publish a series of tests that would need to be met before it passes new laws on artificial intelligence, as it continues to resist creating a tougher regulatory regime for the fast-developing technology.
British ministers are preparing to publish criteria in the coming weeks on the circumstances in which they would enact curbs on powerful AI models created by leading companies such as OpenAI and Google, according to multiple people familiar with the impending move.
One of the “key tests” that would trigger an intervention is a failure by the systems put in place by the UK’s new AI Safety Institute — a government body made up of academics and machine learning experts — to identify risks around the technology. Another trigger for legislation would be a failure by AI companies to uphold their voluntary commitments to avoid harm.
The UK’s cautious approach to regulating the sector contrasts with moves around the world. The EU has agreed a wide-ranging “AI Act” that creates strict new obligations for leading AI companies making “high-risk” technologies.
US President Joe Biden has issued an executive order compelling AI companies to disclose how they are tackling threats to national security and consumer privacy. Meanwhile, China has issued detailed guidance on the development of AI that emphasises the need to control content.
The UK has said it will refrain from creating specific AI legislation in the short term in favour of a light-touch regime, over fears that tough regulation would inhibit industry growth.
The government’s new tests will be included as part of its response to a consultation on its white paper, published in March, that proposed splitting responsibility for AI regulation among existing regulators, such as Ofcom and the Financial Conduct Authority.
In November, leading AI companies, including OpenAI, Google DeepMind, Microsoft and Meta, signed a series of voluntary commitments on the safety of their products as part of the inaugural global AI Safety Summit hosted by the UK government.
The companies agreed to allow the UK’s AI Safety Institute to evaluate the safety of the powerful models that underpin products such as ChatGPT before they are released to businesses and consumers. These evaluations are understood to be under way, but it is unclear how they will be conducted or whether AI companies will provide comprehensive access.
“We’re currently lucky because we’re reliant on goodwill on both sides, and we have that, but it could always break down,” said one government official. “It’s very character-dependent and CEO-dependent.”
However, some AI experts have argued that the UK’s reliance on voluntary commitments lacks teeth.
“The concern is that the government is setting up the capabilities to assess and monitor the risks of AI through the institute but leaving itself powerless to do anything about those risks,” said Michael Birtwistle, associate director at the independent research body Ada Lovelace Institute.
“The economic stakes are so high in AI, and, without strong regulatory incentives, you can’t expect companies to stick to voluntary commitments once their market incentives move in a different direction,” he added.
The government’s response to its initial AI white paper will stipulate that legislation would be tabled only if there were evidence that such a move would mitigate the risks of AI without stifling innovation, according to one person familiar with its contents.
The same person added that the government could also push forward with new laws if it faced resistance from AI companies over future updates to their voluntary agreements, such as requests for access to code or the adoption of new testing requirements.
The UK government said it would “not speculate on what may or may not be included” in its response to the white paper consultation, but added that it was “working closely with regulators to make sure we have the necessary guardrails in place”, noting that many regulators “have started to proactively take action in line with our proposed framework”.