Top tech companies form group seeking to control AI

Four of the world’s most advanced artificial intelligence companies have formed a group to research increasingly powerful AI and establish best practices for controlling it, as public anxiety and regulatory scrutiny over the technology’s impact increase.

On Wednesday, Anthropic, Google, Microsoft and OpenAI launched the Frontier Model Forum, with the aim of “ensuring the safe and responsible development of frontier AI models”.

In recent months, the US companies have rolled out increasingly powerful AI tools that produce original content in image, text or video form by drawing on a bank of existing material. The developments have raised concerns about copyright infringement and privacy breaches, as well as fears that AI could ultimately replace humans in a range of jobs.

“Companies creating AI technology have a responsibility to ensure that it is safe, secure, and remains under human control,” said Brad Smith, vice-chair and president of Microsoft. “This initiative is a vital step to bring the tech sector together in advancing AI responsibly and tackling the challenges so that it benefits all of humanity.”

Membership of the forum is limited to the handful of companies building “large-scale machine-learning models that exceed the capabilities currently present in the most advanced existing models”, according to its founders.

That suggests its work will centre on the potential risks stemming from considerably more powerful AI, as opposed to answering questions around copyright, data protection and privacy that are pertinent to regulators today. 

The US Federal Trade Commission has launched a probe into OpenAI, looking at whether the company has engaged in “unfair or deceptive” privacy and data security practices or harmed people by creating false information about them. President Joe Biden has indicated that he will follow up with executive action to promote “responsible innovation”.

In turn, AI bosses have struck a reassuring tone, stressing they are mindful of the dangers and committed to mitigating them. Last week, executives at all four companies launching the new forum committed to the “safe, secure and transparent development of AI technology” at the White House.

Emily Bender, a computational linguist at the University of Washington who has extensively researched large language models, said that reassurances from the companies were “an attempt to avoid regulation; to assert the ability to self-regulate, which I’m very sceptical of”.

Focusing attention on fears that “machines will come alive and take over” was a distraction from “the actual problems we have to do with data theft, surveillance and putting everyone in the gig economy”, she said.

“The regulation needs to come externally. It needs to be enacted by the government representing the people to constrain what these corporations can do,” she added.

The Frontier Model Forum will aim to promote safety research and provide a communication channel between the industry and policymakers.

Similar groups have been established before. The Partnership on AI, of which Google and Microsoft were also founding members, was formed in 2016 with a membership drawn from across civil society, academia and industry, and a mission to promote the responsible use of AI.
