We need to create guardrails for AI
What if the only thing you could truly trust was something or someone close enough to physically touch? That may be the world into which AI is taking us. A group of Harvard academics and artificial intelligence experts has just launched a report aimed at putting ethical guardrails around the development of potentially dystopian technologies such as Microsoft-backed OpenAI’s seemingly sentient chatbot, which debuted last week in a new and “improved” (depending on your point of view) version, GPT-4.
The group, which includes Glen Weyl, a Microsoft economist and researcher, Danielle Allen, a Harvard philosopher and director of the Safra Center for Ethics, and many other industry notables, is sounding alarm bells about “the plethora of experiments with decentralised social technologies”. These include the development of “highly persuasive machine-generated content (eg ChatGPT)” that threatens to disrupt the structure of our economy, politics and society.
They believe we’ve reached a “constitutional moment” of change that requires an entirely new regulatory framework for such technologies.
Some of the risks of AI, such as a Terminator-style future in which the machines decide humans have had their day, are well-trodden territory in science fiction — which, it should be noted, has had a pretty good record over the past 100 years or so of predicting where science itself will go. But there are others that are less well understood. If, for example, AI can now generate a perfectly undetectable fake ID, what good are the legal and governance frameworks that rely on such documents to allow us to drive, travel or pay taxes?
One thing we already know is that AI could allow bad actors to pose as anyone, anywhere, anytime. “You have to assume that deception will become far cheaper and more prevalent in this new era,” says Weyl, who has published an online book with Taiwan’s digital minister, Audrey Tang, laying out the risks that AI and other advanced information technologies pose to democracy, most notably that they put the problem of disinformation on steroids.
The potential ramifications span every aspect of society and the economy. How will we know that digital fund transfers are secure or even authentic? Will online notaries and contracts be reliable? Will fake news, already a huge problem, become essentially undetectable? And what about the political fallout from job disruptions on an incalculable scale, a topic that academics Daron Acemoglu and Simon Johnson will explore in a very important book later this year?
One can easily imagine a world in which governments struggle to keep up with these changes and, as the Harvard report puts it, “existing, highly imperfect democratic processes prove impotent . . . and are thus abandoned by increasingly cynical citizens”.
We’ve already seen inklings of this. The private Texas town being built by Elon Musk to house his SpaceX, Tesla, and Boring Company employees is just the latest iteration of the Silicon Valley libertarian fantasy in which the rich take refuge in private compounds in New Zealand, or move their wealth and businesses into extragovernmental jurisdictions and “special economic zones”. Wellesley historian Quinn Slobodian tackles the rise of such zones in his new book, Crack-Up Capitalism.
In this scenario, tax revenues fall, the labour share is eroded and the resulting zero-sum world exacerbates an “exitocracy” of the privileged.
Of course, the future could also be much brighter. AI has incredible potential for increasing productivity and innovation, and might even allow us to redistribute digital wealth in new ways. But what’s already clear is that companies won’t stop developing cutting-edge Web3 technologies, from AI to blockchain, as fast as they can. They view themselves as being in an existential race with each other and with China for the future.
As such, they are looking for ways to sell not only AI, but also the security solutions for it. For example, in a world in which trust cannot be digitally authenticated, AI developers at Microsoft and other firms are thinking about whether there might be a method of creating more advanced versions of “shared secrets” (things that only you and another close individual would know) digitally and at scale.
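To make that idea concrete, one familiar form of shared-secret authentication is a challenge-response exchange, in which a party proves it knows the secret without ever transmitting it. The Python sketch below is purely illustrative, assuming a pre-agreed secret and a standard HMAC construction; it is not a description of what Microsoft or anyone else is actually building.

```python
# Hypothetical sketch: challenge-response authentication built on a shared secret.
# An illustration of the general idea only, not any company's actual design.
import hmac
import hashlib
import secrets

def issue_challenge() -> bytes:
    """The verifier sends a random, single-use challenge (a nonce)."""
    return secrets.token_bytes(32)

def respond(shared_secret: bytes, challenge: bytes) -> bytes:
    """The claimant proves knowledge of the secret without transmitting it,
    by returning an HMAC of the challenge keyed with the secret."""
    return hmac.new(shared_secret, challenge, hashlib.sha256).digest()

def verify(shared_secret: bytes, challenge: bytes, response: bytes) -> bool:
    """The verifier recomputes the HMAC and compares in constant time."""
    expected = hmac.new(shared_secret, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

if __name__ == "__main__":
    # Both parties already hold the secret; it never crosses the wire.
    secret = b"something only we two would know"
    challenge = issue_challenge()
    response = respond(secret, challenge)
    print(verify(secret, challenge, response))  # True
```

The point worth noting is that the secret itself is never sent; only a one-time proof derived from it is, and scaling that kind of proof is what such schemes would have to achieve if they were ever to stand in for in-person trust.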
That, however, sounds a bit like solving the problem of technology with more technology. In fact, the best solution to the AI conundrum, to the extent that there is one, may be analogue.
“What we need is a framework for more prudent vigilance,” says Allen, citing the 2010 presidential commission report on bioethics, which was put out in response to the rise of genomics. It created guidelines for responsible experimentation, which allowed for safer technological development (though one could point to new information about the possible lab-leak origins of Covid-19 and say that no framework is internationally foolproof).
For now, in lieu of either outlawing AI or having some perfect method of regulation, we might start by forcing companies to reveal what experiments they are doing, what’s worked, what hasn’t and where unintended consequences might be emerging. Transparency is the first step towards ensuring that AI doesn’t get the better of its makers.
rana.foroohar@ft.com