ChatGPT maker investigated by US regulators over AI risks


The Federal Trade Commission has launched a wide-ranging investigation into ChatGPT maker OpenAI, as the US regulator turns its attention to potential risks created by the rise of artificial intelligence. 

In a letter sent to the Microsoft-backed company, the FTC said it will look at whether people have been harmed by the AI chatbot creating false information about them, as well as whether OpenAI has engaged in “unfair or deceptive” privacy and data security practices.

Generative AI products are increasingly in the crosshairs of regulators around the world, as AI experts and ethicists sound the alarm about the enormous amount of personal data consumed by the technology, as well as its potentially harmful outputs, ranging from misinformation to sexist and racist comments. 

In May, the FTC fired a warning shot at the industry, saying it was “focusing intensely on how companies may choose to use AI technology, including new generative AI tools, in ways that can have actual and substantial impact on consumers”.

In its letter, the US regulator asked OpenAI to share internal material ranging from how the group uses or retains user information to the data it has used to develop its large language models and the steps it has taken to address the risk of its models producing statements that are “false, misleading or disparaging”.

The FTC declined to comment on the letter, which was first reported by the Washington Post. OpenAI declined to comment.

Lina Khan, FTC chair, on Thursday morning testified before the House judiciary committee and faced strong criticism from Republican lawmakers over her tough enforcement stance.

Khan is part of a new generation of progressive antitrust officials appointed by Joe Biden’s administration, which is seeking to crack down on anti-competitive conduct it believes has gone unchecked for decades across the US economy.

Experts have been concerned about the huge amount of data being hoovered up by the language models behind ChatGPT. ChatGPT had more than 100mn monthly active users within two months of its launch. Microsoft’s new Bing search engine, also powered by OpenAI technology, was being used by more than 1mn people in 169 countries within two weeks of its release in January.

Users have reported that ChatGPT has fabricated names, dates and facts, as well as fake links to news websites and academic paper references, an issue known in the industry as “hallucinations”.

The FTC’s probe digs into technical details of how ChatGPT was designed, including the company’s work on fixing hallucinations and its oversight of human reviewers, both of which directly affect consumers. It has also asked for information on consumer complaints and on the company’s efforts to assess consumers’ understanding of the chatbot’s accuracy and reliability.

In March, Italy’s privacy watchdog temporarily banned ChatGPT while it examined the US company’s collection of personal information following a cyber security breach, among other issues. It was reinstated a few weeks later, after OpenAI made its privacy policy more accessible and introduced a tool to verify users’ ages.

OpenAI chief executive Sam Altman has previously admitted that ChatGPT has weaknesses. “ChatGPT is incredibly limited, but good enough at some things to create a misleading impression of greatness,” he wrote on Twitter in December. “It’s a mistake to be relying on it for anything important right now. It’s a preview of progress; we have lots of work to do on robustness and truthfulness.” 


