Big Tech ventures into AI’s uncharted territory
Over here in Silicon Valley, the new year has started with a bang. It’s been a while since we’ve had the kind of full-blown tech mania that’s broken out around artificial intelligence. Things like the metaverse and Web3, which were all the talk for a couple of years, don’t really count — no one could ever quite explain what they were or when they would arrive. But ChatGPT has given the world a graphic demonstration of how the powers of AI have leapt ahead, and now everyone wants a piece of the action.
Rana, I know you’ve watched the consolidation of tech power with some concern, so I wonder what you make of all this. Congress has completely failed to act on the issues thrown up by the rise of Big Tech — things such as the erosion of privacy, explosion of online misinformation and rise of monopoly power. It feels like things are racing ahead again; is anyone paying attention?
AI is already stirring up new concerns that no one has thought about, and we’re only just getting started. For instance: these systems are trained on mountains of online data, but should copyrighted material be off limits? This is a bit like the debate that broke out when the Google search engine came along and began crawling a large proportion of the world’s websites. Like Google, the AI companies are all claiming protection under the “fair use” exemption that lets anyone use copyrighted material in limited cases.
At least Google could say it was providing a service by returning web traffic to all the sites it crawled (though these days it keeps much more of your attention for its own in-house services). But AI doesn’t even do that. It takes all the human effort available online — pictures, music, Wikipedia entries, tweets — and reprocesses it to come up with its own, synthesised output. It reminds me of the moment in the 1970s sci-fi film Soylent Green when a horrified Charlton Heston discovers what’s been used as the raw material for a new wonder food: “Soylent Green is people!”
An AI developer I spoke to this week claimed the technology would be so good in a couple of years that it would be impossible to tell whether you were having a conversation with a real person. This is the moment that Alan Turing said would herald the arrival of full AI. A lot of people debate that definition now — being tricked by an AI system is as much a sign of human gullibility as it is of machine intelligence — but it will still have a profound effect on how people relate to machines.
This throws up all sorts of issues. A Google engineer last year claimed that one of the company’s AI systems had become sentient. It’s normally us journalists who get accused of sensationalising the impact of technology — now even the engineers who build the stuff are doing it.
The AI person I spoke to predicted that many people would start to think of smart “bots” as intelligent entities deserving the full rights of personhood. Who knows, maybe the machines, being nothing but clever mimics of us humans, will start to make that claim for themselves?
The debate on this and many other new issues has barely even begun. But the mania around ChatGPT and the so-called generative AI it represents has sparked one of those races that come along once every decade or so, when a new general-purpose technology emerges that looks like it can be used in an almost unlimited number of situations. Who wants to wait and think about the potential downside if it means missing the chance to create a billion-dollar company?
Unleashing these systems into the world without more consideration feels just like the early days of social media. No one wanted to stop and think about the harm that might come from conducting a giant social experiment in real time on a large slice of the world’s population.
I would love to think there’s going to be a full public debate now about AI, and that Congress will play its part. But after the last round of public hearings on Capitol Hill (who can forget Mark Zuckerberg being asked by a senator how Facebook makes money?) I don’t hold out a lot of hope. Brussels has been working for some time on an AI act; Washington hasn’t even grappled with online privacy yet, and it’s almost three decades since the Netscape browser brought us the wonders of the world wide web.
It’s encouraging to hear that Don Beyer, a 72-year-old congressman, has decided to do an MA in AI. At least it shows an awareness of the lack of understanding. But it hardly seems like an answer.
What do you think, Rana? Are you hearing much discussion of AI, or do you sense any awareness that a whole new set of tech policy problems are about to land?
Edward Luce is on book leave and will return in mid-February
Recommended reading
- Fortune’s lengthy piece on OpenAI is a good place to start learning about the research firm behind ChatGPT. Among the insights: it makes hardly any revenue. But after launching a $20-a-month subscription service for ChatGPT this week, OpenAI clearly wants to be in the moneymaking business.
- How many of the gadgets around your home are looking for a chance to shake you down? The Atlantic’s Charlie Warzel — always one of the most thoughtful writers on the social impacts of technology — discovers that his new home printer is intent on tapping his credit card whenever it can. Rather than dumb objects waiting to do our will, devices like this have become conduits for any moneymaking subscription “service” that their makers can foist on us.
- New York Magazine’s inside view of the upheaval at Twitter since Elon Musk took over is riveting reading. It’s certainly been a chaotic start. But I still see this as a necessary (if badly handled) shake-up and I don’t subscribe to the widely held view in the media that Musk has all but destroyed the company he just paid $44bn for. We’ll see.
Rana Foroohar responds
Richard, for starters, your Charlton Heston reference makes me think we should start a Swamp Notes list of best dystopian tech films. I was for a time during my student years the sole female member of the Columbia University Sci-Fi Fantasy Club, so I can really geek out on this. Tops on my list would be Ex Machina, The Matrix, Never Let Me Go (I loved the book, too, and can’t wait for a film version of Klara and the Sun — Kazuo Ishiguro doing sci-fi is my idea of heaven), and of course, Blade Runner. I really want someone to do a film version of Argentine writer Agustina Bazterrica’s novel Tender Is the Flesh, which picks up where Heston left off. I’m curious if you have a favourite of the genre (tell me in your Monday response if so) and if readers do, too.
Anyway, to the more serious points you raise. Yes, I’m thinking a ton about AI and all the questions raised by ChatGPT. For starters, I’m interested in whether this will become a moment of triumph for Microsoft over Google, and what that means for the competitiveness landscape in tech. My take so far is: not much, because it would basically be one behemoth grabbing market share from another, rather than a true disruption of the superstar effect, the possibility of which I wrote about here.
But I am hearing some surprisingly interesting thoughts in legislative circles around the growth of AI, how the technology should be regulated, and what positive disruptive possibilities, if any, it might offer. I had a conversation earlier this week with Saffron Huang, a former Google DeepMind research engineer and neural network expert who is now co-director of a policy advisory organisation called the Collective Intelligence Project, which is consulting with a number of governments and institutions. One client is a US congresswoman who is looking for ways that AI can be governed for broadly shared benefits, including how it could be used to break the existing surveillance capitalism models of data usage, like the ones you sketch out. On the one hand, you could imagine AI putting all that on steroids, unless the regulatory model shifts. On the other hand, if it challenges the existing model of search, it could create an opening to rethink the existing data collection and compensation models. What is fair use? Could data output somehow be watermarked so that value could return to creators? There aren’t easy answers, but these are far smarter questions than Washington has asked in the past.
One thing that’s good about the new debate is that it really is front-page news. That brings the public into the discussion, and forces journalists to explain (and thus understand) the issues and stakes better. I’m fascinated, for example, that the new Department of Justice divestment suit against Google asks for trial by jury. That would make it more difficult to use complexity to obfuscate technical issues, something that industries from tech to finance to pharma do regularly. That will be the topic of my own Monday column.
Your feedback
And now a word from our Swampians . . .
In response to ‘What Republicans need’:
“Marco Rubio might look good on paper — an easy win in Florida in 2022, and Latino — but Trump wasn’t far off when he nicknamed him ‘little Marco’. He is a second-tier leader, aka a follower, and is without any real policy proposals that would excite Americans, let alone the Republican base. Also go back to how disrespectful he was to the Parkland young adults [during a] town hall about guns in America . . . He pretends to be authentic but is anything but.” — Victoria Harmon, New York City
“For the Republicans, the task is to capture base voters in the primaries with an effective cultural issues campaign and then pivot to an economic opportunity theme attractive in the suburbs and to economically striving ethnic groups in the fall. Economically striving ethnic groups are more in play than many Washington-centric Democrats think. They’re over-patting themselves on the back for the November midterm results.” — Paul A. Myers, Corona del Mar, California
Comments may be lightly edited for brevity and clarity.
We’d love to hear from you. You can email the team on swampnotes@ft.com. We may feature an excerpt of your response in the next newsletter.