The right response to AI is more mundane than existential dread

This article is an on-site version of Martin Sandbu’s Free Lunch newsletter. Sign up here to get the newsletter sent straight to your inbox every Thursday

When ChatGPT and other instances of artificial intelligence software were unleashed on an unsuspecting public a few months ago, a frenzy of amazement followed. In its wake has come an avalanche of worry about where the dizzying developments in the software’s capabilities will take human society — including, strikingly, from people who are very close to the action.

Last month, AI investor Ian Hogarth insisted in the FT’s weekend magazine that “we must slow down the race to God-like AI”. A few weeks later, the man referred to as AI’s “godfather”, Geoffrey Hinton, quit Google so he could freely express his concerns, including in an interview with the New York Times. Professor and AI entrepreneur Gary Marcus worries about “what bad actors can do with these things”. And just today, the FT has an interview with AI pioneer Yoshua Bengio, who fears AI could “destabilise democracy”. Meanwhile, a large number of AI investors and experts have called for a “moratorium” on developing the technology further.

Call me naive, but I have found myself unable to get caught up in much of the excitement. Not because I doubt AI will shake up the way we live our lives and especially the structures of our economies — of course, it will. (Check out this list of the many ways people are already beginning to use AI.) But rather because I struggle to see how even the worst-case scenarios the experts warn us against are qualitatively different from the big problems humanity has already managed to cause and had to try to solve all by ourselves.

Take Hogarth’s example of an AI chatbot driving someone to suicide. In the 18th century, reading Goethe’s The Sorrows of Young Werther could supposedly have the same effect. Whatever conclusion we should draw, it is not that AI poses an existential danger.

Or take Hinton, whose “immediate concern is that the internet will be flooded with false photos, videos and text, and the average person will ‘not be able to know what is true anymore’”. The inability to see the truth is a fear that seems shared by all the thinkers mentioned above. But lying and manipulation, especially in our democratic processes, are problems we humans have been perfectly capable of causing without the need for AI. A quick glance at some opinions held by large pluralities of the US public, for example, shows that (to put it politely) impaired access to the truth is nothing new. And, of course, generative AI’s ability to create deepfakes means we will have to become more critical of what we see and hear; and unscrupulous politicians will use the deepfake charge to dismiss damaging revelations about them. But, again, in 2017 Donald Trump did not need AI to exist to be able to turn “fake news” accusations back against his detractors.

So I think that the whiff of existential terror the latest AI breakthroughs have whipped up is a distraction. We should instead be thinking on a much more mundane level. Marcus draws a nice analogy with building codes and standards for electrical installations, and that — rather than an attempt to slow down the technological developments themselves — is the plane on which policy discussions should be had.

Two questions stand out as particularly pressing, because they are the most actionable, and they should be addressed above all by economic policymakers.

The first is who should be held accountable for decisions made by AI algorithms. It should be easy to accept the principle that we shouldn’t allow decisions made by AI that we wouldn’t allow (or wouldn’t want to allow) if they were made by a human decision maker. We have poor form on this, of course: we let corporate structures get away with actions we wouldn’t permit from individual humans. But with AI in its infancy, we have an opportunity to rule out from the outset any impunity for actual people based on the defence that “it was the AI that did it”. (This argument isn’t limited to AI, by the way: we should treat non-intelligent computer algorithms the same way.)

Such an approach encourages legislative and regulatory efforts not to get bogged down in the technology itself but to focus instead on its particular uses and the harms that follow. In most cases, it doesn’t matter so much whether a harm is caused by an AI decision or a human one; what matters is to disincentivise and penalise the harmful decision. Daniel Dennett exaggerates when he says in The Atlantic magazine that AI’s ability to create “counterfeit digital people risks destroying our civilization”. But he makes the good point that if the executives of tech companies developing AI could face jail time for their technology being used to facilitate fraud, they would quickly ensure that the software includes signatures making it easy to detect whether we are communicating with an AI.
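To make Dennett’s idea concrete, here is a minimal sketch of what such a signature might look like, assuming the AI provider attaches a cryptographic tag to every message it generates. The key, the message format and the helper functions (PROVIDER_KEY, sign_ai_message, is_ai_generated) are hypothetical illustrations, not any provider’s actual scheme.

```python
import hashlib
import hmac

# Hypothetical provider-side secret; a real scheme would more likely use
# public-key signatures so anyone could verify without holding the secret.
PROVIDER_KEY = b"example-provider-signing-key"


def sign_ai_message(text: str) -> str:
    """Append a tag marking the text as generated by the provider's AI."""
    tag = hmac.new(PROVIDER_KEY, text.encode("utf-8"), hashlib.sha256).hexdigest()
    return f"{text}\n--ai-signature: {tag}"


def is_ai_generated(signed_text: str) -> bool:
    """Return True if the text carries a valid provider signature."""
    try:
        text, tag = signed_text.rsplit("\n--ai-signature: ", 1)
    except ValueError:
        return False  # no signature line present
    expected = hmac.new(PROVIDER_KEY, text.encode("utf-8"), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag.strip())


if __name__ == "__main__":
    stamped = sign_ai_message("Hello, I am a language model.")
    print(is_ai_generated(stamped))                 # True
    print(is_ai_generated("Hello, I am a human."))  # False
```

The point is not this particular mechanism but the incentive behind it: if liability for fraud sat with the provider, schemes of this kind would appear quickly.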

The Artificial Intelligence Act being legislated in the EU seems to be taking the right approach: identifying particular uses of AI to be banned, restricted or regulated; imposing transparency on when AI is used; ensuring that rules applying elsewhere also apply in uses of AI, such as copyright for artworks on which an AI may be trained; and clearly specifying where liability lies, for example, whether with the developer of an AI algorithm or its users.

The second big issue policymakers should pay attention to is the distributional consequences of the productivity gains AI should eventually bring. A lot will depend on intellectual property rights, which are ultimately about who controls access to the technology (and can charge for that access).

Because we don’t know how AI will be used, it is hard to know how much access to the valuable uses would be controlled and monetised. So it’s useful to think in terms of two extremes. On the one hand is the completely proprietary world, where the most useful AI will be the intellectual property of the companies that create AI technologies. These will number a handful at most because of the enormous resources going into creating usable AI. An effective monopoly or oligopoly, they will be able to charge high rates for licensing and reap the bulk of the productivity gains AI can bring.

At the opposite extreme is the open-source world, in which AI technology can be run with very little investment, so that any attempt at restricting access will simply prompt the creation of a free open-source rival. If the author of the leaked Google “we have no moat” memo is correct, the open-source world is what we are looking at. Rebecca Gorman of Aligned AI argues the same in a letter to the FT. In that world, the productivity gains from AI will accrue to whoever has the wits or motivation to deploy the technology, and tech companies will see their product commodified and priced down by competition.

I think it is impossible to know now which extreme we will be closer to, for the simple reason that it is impossible to imagine how AI will be used and hence precisely what technology will be needed. But I would make two observations.

One is to look at the internet: its protocols are designed to be accessible to all, and its language is, of course, open source. Yet that has not stopped big tech companies from trying, and often succeeding, to create “walled gardens” around their products and to extract economic rent as a consequence. So we should err on the side of worrying that the AI revolution will lend itself to the concentration of economic power and rewards.

The second is that where we end up is, in part, a result of the policy choices we make today. To push towards an open-source world, governments could legislate to increase transparency and access to the technology developed by tech companies, in effect turning the proprietary into open source. Among the tools it makes sense to consider — especially for mature technologies, big companies, or AI instances that gain rapid take-up by users — are compulsory licensing (at regulated prices) and a requirement to publish source code.

After all, the big data on which any successful AI will have been trained is generated by all of us. The public has a strong claim on the fruit of their data labour.

Other readables

  • “There can be no functioning open trading order without a corresponding security order underwriting it,” argue Tobias Gehrke and Julian Ringhof, from the European Council on Foreign Relations, in an important analysis of how the EU must update its thinking on strategic trade policy.

  • The digital euro project is steaming ahead but has still to win widespread public support.

  • The Council of Europe is setting up a register of damages caused by Russia’s attack on Ukraine. As a formal multilateral initiative, this should make it easier to hold Russia financially accountable for the destruction it has wrought, including through the eventual confiscation of its assets.

  • The EU’s new joint purchasing platform for natural gas did better than expected in its first tender.


