Can tech companies police themselves on AI?


In developing artificial intelligence, whose vast promise also carries myriad risks, can technology companies be trusted to police themselves? Seven of the biggest businesses in AI last week issued public commitments to safety and transparency after a meeting with President Joe Biden. Four of them (Google, Anthropic, Microsoft and OpenAI) this week formed a group to study more powerful AI and establish best practices for controlling it.

The avowals of responsibility are certainly welcome. But the industry’s poor record in curbing the harms of, say, social media, suggests a regulatory framework will be needed to enforce these undertakings. The challenge is to make it effective without stifling innovation, and able to evolve along with AI.

At the White House, the tech firms committed to ensuring their products are safe through internal and external security testing before release, and to sharing information on managing risks across industry and government. They undertook to make it easier for third parties to unearth vulnerabilities in their AI systems, and to ensure users know when content is AI-generated through digital “watermarking”.

These are positive steps, but they largely reflect practices the companies already follow or plan to adopt, even if making them public bolsters their force. They also remain vague and incomplete. There was no promise to disclose what data AI models have been trained on, a central issue in planned EU legislation on AI. Microsoft has separately suggested a voluntary registry of high-risk AI systems, which would surely be worthwhile.

Federal Trade Commission officials have signalled that the agency might be able to sue the tech companies for breaching their commitments, as a deceptive practice under consumer law. But the pledges’ vagueness could make this tricky. The White House is working on an executive order that might give the undertakings more muscle, though this is also understood to be directed at controlling the ability of China and other rivals to acquire AI programs and components.

Ultimately, legislation is needed, but this runs into issues of proportionality, and of lawmakers’ ability to draw up safeguards for a sophisticated technology evolving at warp speed. The EU, short of its own tech giants but an aspiring global tech policeman, has gone straight for lawmaking. Its proposed AI Act was drawn up before generative AI, the hottest technology of the moment, had properly emerged or its pitfalls were understood. Dozens of companies have complained that the bill risks harming EU competitiveness while failing to address the main problems.

The US Congress, by contrast, has neither the bipartisan will nor the expertise to legislate on AI, and its failure to act after years of hearings on other tech issues does not inspire confidence. Chuck Schumer, the Senate majority leader, plans to hold educational forums on AI for senators in the autumn.

Regulating AI surely requires working directly with the tech sector itself. Many of the smartest AI minds, best able to anticipate the risks, are within the companies. External experts must of course also be involved, to ensure the boffins do not hoodwink the legislators.

The tech sector will have to be ready to engage. Especially after the backlash that businesses such as Meta’s Facebook have suffered over the negative aspects of social media, companies have an incentive to be more constructive with the authorities than they were a decade ago. There are signs they understand that the companies likely to do best in the AI world are those that position themselves as the most responsible. A technology as transformative as this one requires equally inventive approaches to keeping it under control.
