Tech alliance in AI standards push to fill ‘gap’ in regulation

Microsoft, OpenAI, Google and Anthropic have stepped up a united push towards safety standards for artificial intelligence and appointed a director as their alliance seeks to fill “a gap” in global regulation.

The four tech giants, which this summer banded together to form the Frontier Model Forum, on Wednesday picked Chris Meserole from the Brookings Institution to be executive director of the group. The forum also announced plans to commit $10mn to an AI safety fund.

“We’re probably a little ways away from there actually being regulation,” Meserole, who is stepping down from his role as an AI director at the Washington-based think-tank, told the Financial Times. “In the meantime, we want to make sure that these systems are being built as safely as possible.”

Tech giants have coined the term “frontier” to describe a subset of artificial intelligence with highly advanced and unknown capabilities, including the type that powers generative AI products such as OpenAI’s ChatGPT and Google’s Bard. These are driven by large language models, systems that can process and generate vast amounts of text and other data.

Concern has intensified over the past few months about the potential of increasingly powerful AI to displace jobs, create and spread misinformation, or eventually surpass human intelligence.

Meserole asserted that the forum would seek to “supplement or complement” any official regulation but, “in the interim, while there’s a gap, we need to move forward with building these systems safely”.

Governments worldwide have called for robust legislation to police this fast-developing technology, with the EU’s AI Act expected to be finalised by early next year. The UK meanwhile is hosting the first global summit on AI safety next week, with political leaders and leading tech executives invited to discuss co-operation on issues such as national security.

The forum will focus initially on risks including AI’s ability to help design bioweapons and to generate computer code that could be used to facilitate hacking of critical systems, Meserole said.

The $10mn pot announced on Wednesday includes investment from former Google chief executive Eric Schmidt and will go towards supporting academic research in AI.

The group plans to use its member companies’ existing labs and teams to research “red teaming” techniques — methods that researchers use to test systems for flaws or dangers — and to develop standards for technical risk assessments and evaluations of the technologies.

Meserole said the forum would operate as a non-profit initiative funded by industry members through a membership fee. He gave no further financial details. The group plans to welcome more members at a later stage.

The forum will serve as an “industry body”, Meserole said, likening its work to the collaboration by tech giants on tackling child sexual abuse material and terrorist content online.

“There’s real value, even when there is robust regulation and regulatory frameworks in place, in having global standards so that there’s consistency across jurisdictions,” Meserole said.
