Biden joins the AI regulation party
This article is an on-site version of our Swamp Notes newsletter. Sign up here to get the newsletter sent straight to your inbox every Monday and Friday
There’s lots of news coming down the pike this week, including the inaugural Americas Partnership for Economic Prosperity summit in DC, with 12 nations in North and South America, including the US, looking to revamp the regional trade and commerce systems. (See my column today on that topic here).
President Joe Biden is expected to sign a new executive order on artificial intelligence as early as today. It looks a bit similar in tone to the July 2021 order in which Biden sketched out 72 ways to prevent large corporations from dominating our economy and our society. The EO calls for numerous federal agencies to set standards on AI privacy, security and competition, and it also appoints a White House AI commissioner to co-ordinate efforts, which would include regular reports by companies to the government on how they are securing the technology and managing its risks. More on this in the FT story here.
While the US has been the biggest innovator in AI, we’ve been somewhat late to the party in terms of regulating it. Both Europe and China have their own proposals, and there’s been a lot of pressure on the White House to get something out there. Certainly, AI needs ground rules as soon as possible. Even libertarian tech CEOs like Elon Musk are begging for them (despite the fact that he and his peers are ploughing ahead with innovation as quickly as possible, for fear of being left behind by competitors or China).
Still, given all the hype, it’s worth stopping to consider what AI really can and can’t do at the moment. Here’s a little quiz for Swampians on that score.
1. Can AI beat the market in stock investing? Answer: No. A recent study found that an index of 12 hedge funds that use AI to invest has actually trailed a broader hedge fund index by about 14 percentage points over the past five years. According to Plexus Investments, only 45 per cent of AI-driven funds outperformed their benchmarks.
2. Can it cover a war? Yes. Well, kind of. Many news organisations are using AI-driven reporting and imagery tools that algorithmically produce what a reporter would once have gathered by hand.
3. Can AI outperform a doctor? Yes. AI that analyses a picture of a freckle will outperform 95 per cent of dermatologists in diagnosis. Ditto eye problems or a variety of other illnesses that are basically about the expertise that comes from repetition.
4. Can AI make us more empathetic? Maybe. AI systems are now used by call centres to monitor the tone, pace and intonation of human workers to see when they are becoming fatigued, or dissociating, or just need a break. It’s sort of the happy opposite of the algorithmic scheduling software that makes it impossible for workers to plan their lives, as their hours change as and when work surges.
What we know AI can do is make our daily activities — from computer coding to proofreading, customer service or paperwork — much more efficient. I suspect that within the next couple of years, we will all be using it regularly for data gathering the way that people use Google search now.
All this is pretty benign, but of course political deepfakes and the possibility of a Terminator-induced WWIII or pandemic are not. That’s the thing that’s still really unknown about AI — how it thinks.
I had dinner with one of Silicon Valley’s big thinkers recently, and he told me that he actually doesn’t believe ChatGPT is thinking — rather, it’s regurgitating answers from the vast amount of data it has consumed, and if they are occasionally creepy (remember that Kevin Roose piece in the NYT?) or dumb, well, so are we.
But I’m not entirely persuaded by this. Remember the story of AlphaGo, the computer programme developed by Google’s AI subsidiary DeepMind, which beat the world champion in the Chinese game of Go? It won not by playing better than the human, but by playing in a way that was inhuman. Thousands of years of human play had forged a rule of thumb known even to beginners: early in the game, you avoid placing stones on the fifth line from the edge. And yet, this is exactly what AlphaGo did in an early move to win in an unexpected way. One human Go master called it “beautiful”. Another said it made him feel “physically unwell”.
Those reactions encapsulate the common and diametrically opposed views of a world in which machines will do most of what human workers do today. One recent academic study from OpenAI and the University of Pennsylvania found that 80 per cent of the US workforce will have at least some of their work tasks transformed by AI. There’s a huge productivity multiple there — Goldman Sachs estimates labour productivity growth could rise by 1.5 percentage points, twice the recent historic rate. That would be similar in scale to the effect of the PC and the tech boom of the 1990s, which doubled the US GDP growth rate.
But will the productivity be shared? I suspect we may see the blue-collar disruption of the 80s and 90s come to service work. The OECD warned in July that the job categories most at risk of displacement would be highly skilled, white-collar work accounting for a third of employment in the developed world. Think about the populism that could result — manufacturing is 8 per cent of the US workforce, while jobs at risk immediately from AI represent about 30 per cent.
Ed, as we wait to hear how the president is proposing to regulate AI, which worries do you think are most and least overblown about the new technology? And do you believe it’s a truly new kind of intelligence, or just a faster thinking machine?
Edward Luce responds
Rana, Britain will host the first global summit on AI this week at Bletchley Park, appropriately enough. What you are asking — and one of the issues this summit will address — is the Alan Turing question on whether machines can think, or supply a convincing imitation of human thinking (hence the title of the 2014 biographical movie, The Imitation Game). I don’t feel remotely equipped to answer that question except to say that I have met quite a few humans who came across like robots. So the bar for machines to imitate us seems pretty low. In answer to your question of what I would like to see Biden address, the first is global co-operation on AI. It is all very well the EU and the US producing rules of the road, which we must do. But the west must also do its best to bind China into minimum global regulatory standards. That is why I think the most important AI event this week is at Bletchley Park, not Biden’s executive order.
This will only be the second time that Chinese officials have sat down with their western counterparts to discuss AI. As it happens, I was there the first time, in Paris in late 2019, when the Atlantic Council co-hosted its somewhat boringly titled conference on “International co-operation on artificial intelligence”. The action was anything but dull, as I wrote here. A senior US official said that America could not co-operate with China while it was authoritarian. A Chinese official responded with a litany of complaints about US double standards on human rights. No progress was made. My takeaway was more about the lack of human learning than about advances in machine learning. Rishi Sunak, the Bletchley Park summit’s host, has been criticised for focusing its agenda too much on the “frontier”, or “existential”, challenges of AI, rather than on AI’s impact on work, privacy and more mundane near-term concerns. Either way, I am glad he is making the effort. The west and China need to engage.
My other answer is about inequality. Of course, I share all the forebodings about the future of warfare, the deepfake impact on democracy and the ultimate question about computers deciding we are too stupid as a species to keep around (I have periodic twinges of sympathy with the latter). But an immediate concern is the massive rates of return that owners of AI will inevitably reap in the coming years. We are already living in an oligarchic society. I fear that today will look like child’s play compared to what is around the corner. In other words, it is the Elon Musks and other humans that I fear the most.
Your feedback
And now a word from our Swampians . . .
In response to “Triumph of the GOP end-of-days caucus”:
“Thank you for the newsletter. I won’t say I enjoyed reading it, because it reinforced my fears about today’s Republican party. If I could describe my reaction, it would be sheer exhaustion. I’m writing to you from Salem, Massachusetts, which has its own dark history . . . My first ancestor in Massachusetts was expelled in part because of his belief in separation of church and state. It seems to me that we keep fighting religious extremists while the world burns. You’d think we would have figured this out by now.” — Angela Williams
We’d love to hear from you. You can email the team on swampnotes@ft.com, contact Ed on edward.luce@ft.com and Rana on rana.foroohar@ft.com, and follow them on Twitter at @RanaForoohar and @EdwardGLuce. We may feature an excerpt of your response in the next newsletter