OpenAI forms new team to assess “catastrophic risks” of AI
OpenAI is forming a new team to mitigate the “catastrophic risks” associated with AI. In an update published Thursday, OpenAI says the new preparedness team will “track, evaluate, forecast, and protect” against potentially major issues caused by AI, including nuclear threats.
The team will also work to mitigate “chemical, biological, and radiological threats,” as well as “autonomous replication,” or the act of an AI replicating itself. Other risks the preparedness team will address include AI’s ability to deceive humans, as well as cybersecurity threats.
“We believe that frontier AI models, which will exceed the capabilities currently present in the most advanced existing models, have the potential to benefit all of humanity,” OpenAI writes in the update. “But they also pose increasingly severe risks.”
Aleksander Madry, who is currently on leave from his role as the director of MIT’s Center for Deployable Machine Learning, will lead the preparedness team. OpenAI notes that the preparedness team will also develop and maintain a “risk-informed development policy,” which will outline what the company is doing to evaluate and monitor AI models.