The EU AI Act passed — now comes the waiting
The European Union’s three branches provisionally agreed on its landmark AI regulation, paving the way for the economic bloc to prohibit certain uses of the technology and demand transparency from providers. But despite warnings from some world leaders, the changes it will require from AI companies remain unclear — and potentially far away.
First proposed in 2021, the AI Act still hasn’t been fully approved. Hotly debated last-minute compromises softened some of its strictest regulatory threats. And enforcement likely won’t start for years. “In the very short run, the compromise on the EU AI Act won’t have much direct effect on established AI designers based in the US, because, by its terms, it probably won’t take effect until 2025,” says Paul Barrett, deputy director of the NYU Stern Center for Business and Human Rights.
So for now, Barrett says major AI players like OpenAI, Microsoft, Google, and Meta will likely continue to fight for dominance, particularly as they navigate regulatory uncertainty in the US.
The AI Act got its start before the explosion in general-purpose AI (GPAI) tools like OpenAI’s GPT-4 large language model, and regulating them became a remarkably complicated sticking point in last-minute discussions. The act divides its rules based on the level of risk an AI system poses to society, or as the EU said in a statement, “the higher the risk, the stricter the rules.”
But some member states grew concerned that this strictness could make the EU an unattractive market for AI. France, Germany, and Italy all lobbied to water down restrictions on GPAI during negotiations. They won compromises, including limiting what can be considered “high-risk” systems, which would then be subject to some of the strictest rules. Instead of classifying all GPAI as high-risk, there will be a two-tier system and law enforcement exceptions for outright prohibited uses of AI like remote biometric identification.
That still hasn’t satisfied all critics. French President Emmanuel Macron attacked the rules, saying the AI Act creates a tough regulatory environment that hampers innovation. Barrett said some new European AI companies could find it challenging to raise capital with the current rules, which gives an advantage to American companies. Companies outside of Europe may even choose to avoid setting up shop in the region or block access to platforms so they don’t get fined for breaking the rules — a potential risk Europe has faced in the non-AI tech industry as well, following regulations like the Digital Markets Act and Digital Services Act.
But the rules also sidestep some of the most controversial issues around generative AI
AI models trained on publicly available — but sensitive and potentially copyrighted — data have become a big point of contention for organizations, for instance. The provisional rules, however, do not create new laws around data collection. While the EU pioneered data protection laws through GDPR, its AI rules do not prohibit companies from gathering information, beyond requiring that it follow GDPR guidelines.
“Under the rules, companies may have to provide a transparency summary or data nutrition labels,” says Susan Ariel Aaronson, director of the Digital Trade and Data Governance Hub and a research professor of international affairs at George Washington University. “But it’s not really going to change the behavior of companies around data.”
Aaronson points out that the AI Act still hasn’t clarified how companies should treat copyrighted material that’s part of model training data, beyond stating that developers should follow existing copyright laws (which leave lots of gray areas around AI). So it offers no incentive for AI model developers to avoid using copyrighted data.
The AI Act also won’t apply its potentially stiff fines to open-source developers, researchers, and smaller companies working further down the value chain — a decision that’s been lauded by open-source developers in the field. GitHub chief legal officer Shelley McKinley said it is “a positive development for open innovation and developers working to help solve some of society’s most pressing problems.” (GitHub, a popular open-source development hub, is a subsidiary of Microsoft.)
Observers think the most concrete impact could be pressuring other political figures, particularly American policymakers, to move faster. It’s not the first major regulatory framework for AI — in July, China passed guidelines for businesses that want to sell AI services to the public. But the EU’s relatively transparent and heavily debated development process has given the AI industry a sense of what to expect. While the AI Act may still change, Aaronson said it at least shows that the EU has listened and responded to public concerns around the technology.
Lothar Determann, data privacy and information technology partner at law firm Baker McKenzie, says the fact that it builds on existing data rules could also encourage governments to take stock of what regulations they have in place. And Blake Brannon, chief strategy officer at data privacy platform OneTrust, said more mature AI companies set up privacy protection guidelines in compliance with laws like GDPR and in anticipation of stricter policies. He said that depending on the company, the AI Act is “an additional sprinkle” to strategies already in place.
The US, by contrast, has largely failed to get AI regulation off the ground — despite being home to major players like Meta, Amazon, Adobe, Google, Nvidia, and OpenAI. Its biggest move so far has been a Biden administration executive order directing government agencies to develop safety standards and build on voluntary, non-binding agreements signed by large AI players. The few bills introduced in the Senate have mostly revolved around deepfakes and watermarking, and the closed-door AI forums held by Sen. Chuck Schumer (D-NY) have offered little clarity on the government’s direction in governing the technology.
Now, policymakers may look at the EU’s approach and take lessons from it
This doesn’t mean the US will take the same risk-based approach, but it may look to expand data transparency rules or allow GPAI models a little more leniency.
Navrina Singh, founder of Credo AI and a national AI advisory committee member, believes that while the AI Act is a huge moment for AI governance, things will not change rapidly, and there’s still a ton of work ahead.
“The focus for regulators on both sides of the Atlantic should be on assisting organizations of all sizes in the safe design, development, and deployment of AI systems that are both transparent and accountable,” Singh tells The Verge in a statement. She adds that there’s still a lack of standards and benchmarking processes, particularly around transparency.
While the AI Act is not finalized, a large majority of EU countries acknowledged that this is the direction they want to go. The act does not retroactively regulate existing models or apps, but future versions of OpenAI’s GPT, Meta’s Llama, or Google’s Gemini will need to take into account the transparency requirements set by the EU. It may not produce dramatic changes overnight — but it demonstrates where the EU stands on AI.