Can generative AI’s stimulating powers extend to productivity?
Generative AI models, such as ChatGPT, will supposedly one day replace most humans at writing copy. In the meantime, though, humans are spending an awful lot of time writing about generative AI. Every day, announcements arrive boasting about how start-ups a, b and c are applying the technology to industries x, y and z. Global venture investment may have fallen 35 per cent to $415bn last year, but money is still gushing into hot generative AI start-ups.
For years, machine-learning researchers have been writing ever more powerful algorithms that devour vast amounts of data and massive computing power, enabling them to do increasingly impressive things: winning chess and Go matches against the strongest human players, translating between languages in real time and modelling protein structures, for example. But 2022 marked a breakout year for generative AI as the San Francisco-based research company OpenAI, and others, opened up the technology for ordinary users.
Anyone with an internet connection can now experience the apparent magic by prompting Dall-E to generate an image of an astronaut riding a horse on the moon or ChatGPT to write a story about the lunar escapades of a horse-riding astronaut.
All this is (mostly) good and harmless fun. Generative AI is already stimulating millions of copywriters, illustrators and video game developers, not to mention time wasters. But the bigger question is: can it boost productivity in the economy overall? For years, technologists have compared the transformative effects of AI to those of microchips, electricity and fire. Yet economists still struggle to spot any change in the productivity data.
The use of generative AI, its evangelists claim, will now turn software writing from a minority sport into a mass-participation game. Mira Murati, chief technology officer at OpenAI, has even likened the diffusion of AI to a form of digital globalisation: it gives everyone access to new economic possibilities, increasing diversity of opportunity and lifting prosperity. Others have argued that the exponential increases in computing power over recent decades, as described by Moore’s Law, are flipping from hardware into software. Software creation is emerging from the artisanal into the industrial age.
AI can also be trained to predict the next lines of computer code. Microsoft, which is investing $10bn in OpenAI, says it will incorporate generative AI into its software, cloud computing and search services, empowering its business customers. Copilot, released in 2021 by Microsoft’s GitHub software development platform and OpenAI, already enables developers to autocomplete code in several programming languages.
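To make that concrete, here is a hypothetical sketch of what comment-driven autocompletion looks like in practice: the developer types a comment and a function signature, and an assistant such as Copilot proposes the body. The function name and logic below are invented for illustration and are not drawn from the article or from GitHub’s documentation.

    # The developer writes the comment and the signature; an assistant
    # such as Copilot might then suggest the implementation that follows.
    def median(values: list[float]) -> float:
        """Return the median of a non-empty list of numbers."""
        ordered = sorted(values)
        mid = len(ordered) // 2
        if len(ordered) % 2:
            # Odd number of values: the middle element is the median.
            return ordered[mid]
        # Even number of values: average the two middle elements.
        return (ordered[mid - 1] + ordered[mid]) / 2

    print(median([3.0, 1.0, 2.0]))  # prints 2.0

The developer’s job then shifts from typing the body to checking that the suggestion is actually correct, which is precisely the change described below.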
Judging by reviews from some users, Copilot may be a sign of things to come. For example, the computer scientist Andrej Karpathy, who previously worked at Tesla and OpenAI, tweeted that Copilot had “dramatically accelerated” his workflow by writing 80 per cent of his code with 80 per cent accuracy. His role, he said, was now to prompt and edit computer-generated code more than write it himself.
This incipient software revolution underpins the investment thesis of Radical Ventures, a Toronto-based venture capital fund. Software is evolving from a hard-coded, static product shipped periodically into one powered by AI learning algorithms and constantly updated in near-real time, Jordan Jacobs, Radical’s co-founder, tells me. “Every bit of software will be replaced by AI software over the next decade. That will have an enormous economic impact.”
Two question marks hover over this optimism. First, it is as yet unclear whether constantly evolving software will accelerate technological obsolescence, requiring companies to install new hardware and retrain employees — which some researchers blame for poor productivity after a tech breakthrough — or significantly reduce it. In other words, will generative AI gum up, or smooth over, the human adoption of technology? Second, will generative AI create a pernicious new form of “technical debt” requiring human coders to rework software to eliminate machine-written bugs?
As one FT reader has written, the highly imperfect nature of generative AI risks flipping the information world on its head. Today, we assume most digital content is accurate and use fact checkers to identify and remedy fake material. In the post-generative AI world, we must assume that all content is potentially flawed and employ truth finders to verify hygienic sources. I await the first email from a start-up arguing that truth finding is a fantastic new use case for generative AI.
john.thornhill@ft.com