How AI is shaking up work in three key industries
A revolution that will free workers from gruelling tasks or the destroyer of millions of jobs? New artificial intelligence capabilities have simultaneously prompted huge excitement around workplace productivity and dire warnings for employees. The Financial Times has selected three industries that are among the first to adopt the technology to analyse how it is being used in everyday work.
Professional services
- AI tools enhanced so they are easy to access and can assist practical legal casework
- Biggest time savings on tasks assigned to junior staff
- Adoption has been swift but has limitations — it is not always correct
- Rather than replacing jobs, AI could intensify work
It has been a strange year for lawyers such as Alex Shandro and Karishma Brahmbhatt. All around them, economists, technologists and journalists have been making predictions about what new advances in AI might mean for professional workers. Headlines warning that “AI is coming for lawyers” have been everywhere.
But Shandro and Brahmbhatt have a different vantage point — not from the top down but from the shop floor. As lawyers at Allen & Overy in London, they are already using the new generative AI tools in their everyday work. Roughly 3,500 employees at A&O have access to Harvey, an AI platform built on a version of OpenAI’s latest models, which have been enhanced for legal work.
Shandro, a commercial intellectual property lawyer, says he used Harvey recently to prepare for a transaction that involved property rights in the metaverse. “So, what are the advertising regulations in the UK that might apply in augmented reality? I asked Harvey and got a really nice list. Before, I would have asked my trainee or a junior lawyer to go and find that out, and it would have taken that much longer.”
Brahmbhatt, too, says she and her juniors use the technology regularly — albeit with mixed results. “I actually asked it a question last night and it completely made up the cases,” she laughs. “If you approach it from the basis of, ‘I’m going to have to read through and check it all anyway’ then it’s useful. I don’t think it’s quite something you can just take and run with.”
David Wakeling, head of A&O’s markets innovation group, says the workforce has adopted Harvey fairly quickly since the law firm began trials in November, although it is not yet being used by everyone every day. “I checked last night — roughly 800 people used it in the last 24 hours, and they asked three to four questions each on average, in different languages and different practice groups,” he says.
For Wakeling, one of the most important things is not to allow staff to believe the tool is more capable than it really is. “We say Harvey is like a very confident, very articulate 13-year-old. It doesn’t know what it doesn’t know. It has some fabulous knowledge, but incomplete knowledge. You wouldn’t trust a 13-year-old to do your tax return.”
Rather than a black box with “magic” written on the lid, the law firm considers the technology “a boring productivity gain. We know it has issues, we know it makes errors, we know it will be out of date. But that’s OK, because we’re trying to save an hour or two a week across 3,500 people who have access today.”
Across London, the young workforce at PwC (the average age is about 31) has also started to use new AI tools, including Harvey, in their work. One system allows them to drop in documents — a pile of legal contracts or a company’s articles of association, for example — and ask questions about them. The fluently written answers come with source notes that link back to the precise parts of the documents from which the AI drew its conclusions.
Euan Cameron, PwC’s UK AI leader, says the biggest difference with these new tools is that they democratise access. “Previously . . . it was like being in a world of horses and having a car with a one litre engine, but controls like a Boeing 747. So you needed to get really smart, specialist people to make it go. Now, you have tools that can be integrated into the sidebar of Office 365 or the Google Suite.”
He likens the latest AI technology to a Swiss army knife with 500 tools. “If you want to work out all the places you can use that in your organisation, you can either create a small team and put them in an ivory tower and they can come up with ideas, or you can give everyone a Swiss army knife and they’ll find their own use cases, as long as you’ve got guardrails around it.” Those guardrails include “first draft only; humans in the loop; use it for cases with a low cost of failure”.
It is still early days, but so far the biggest time-savings for professional firms appear to be on tasks that would usually be assigned to more junior staff. Does that mean law and consultancy firms will not need those roles any more, and if so, who will train the senior professionals of the future, and how?
Bivek Sharma, PwC’s chief technology officer for the tax, legal and people business, insists the firm will still want — and need — to train people to be “subject matter experts”, but the way they do that and how quickly will soon change. “The expectations for them are going to grow,” he predicts.
For the law sector, there is also the question of whether it makes sense to economise on human labour, given many firms charge for lawyers’ time by the hour. But by that logic, argues Wakeling, “we could have kept going with fax machines” and typewriters. “It’s coming anyway so we’re thinking embrace it and do it safely.”
As for the fears that AI will replace a swath of professions such as lawyers and accountants completely, the people who have started to use the tools seem sanguine for now. Shandro talks about the “context heavy” art of negotiating a contract that relies as much on “instinct” and “experience” as knowledge of the law.
At KPMG, global head of people Nhlamu Dlomu is more worried that work could intensify. “What is it that can help us not fall into that trap? What are the real guardrails we need to put around work so we can actually ensure we get the benefit [from AI], not just for organisations but for individuals as well?”
PwC’s Sharma has the same prediction. “What’s going to happen in a year’s time is that our clients . . . are going to expect us to deliver higher value insights in much, much shorter timeframes. If we were to meet again in a year’s time, I think you could find an even older looking version of me.”
Filmmaking
- Screenwriters striking over fears AI could reduce available work
- Dubbing technology could expand reach of foreign-language films
- Potential upsides for actors include hiring out “digital doubles”
Hollywood directors’ tentative deal with movie studios this month included a clause that would have flummoxed golden age filmmakers such as Billy Wilder or Frank Capra. Artificial intelligence, the two sides declared, “cannot replace the duties” performed by directors.
The statement was a landmark in the annals of Hollywood labour agreements, even if few were worried about directors being replaced by AI anytime soon. “I don’t think anyone can say we’re really at a point where a robot can direct,” notes a veteran Hollywood dealmaker.
The concerns are more pressing for others in Hollywood, from screenwriters — who have been on strike since May 1 — to actors and voice artists. Screenwriters fear there will be less well-paid work for them in a world where book adaptations or first drafts can be written by AI. Actors worry they will lose control of their images, while voice dubbers are concerned new AI technologies that match mouth movements to different languages will eliminate their jobs.
Some hot AI start-ups have already introduced dubbing technology that they say saves time and money on set — without killing jobs. Flawless, for example, has produced a tool that enables filmmakers to use generative AI to insert new dialogue into already captured scenes, eliminating the need to rebuild sets or fly actors in for a reshoot. The actor records the new dialogue in a studio and the AI technology adapts the actor’s “mouth shape” from the original shot in a way that makes the words look natural.
Flawless and a UK-based AI start-up, Papercup, have also developed tools that solve the problem of dubbed films in which mouth movements do not match the voiceover. Last month Flawless launched a partnership with XYZ Films and Tea Shop Productions to buy rights to foreign-language films, convert them to English using AI and distribute them in English-speaking markets.
Such deals have the potential to greatly expand the reach of foreign language films, but some actors may worry it could cost them work. “The concern there is . . . are you taking jobs from voiceover actors who dub foreign language films?” said the Hollywood dealmaker.
People close to the company say such fears are unfounded, since they use professional voice artists and actors for the dubbing and the AI technology matches the mouth movements to the new language.
Hilary Krane, chief legal officer at Creative Artists Agency, said she believes AI creates more opportunity than risk for Hollywood. The trick, she says, will be to “favour the creative thinkers and humans who actually put out the work without constraining the use of the new tool”.
Hollywood labour unions had initially been focused on the effects of a different technological disruption: streaming.
But the Hollywood veteran noted that AI was “the issue that caught fire in the zeitgeist”. “If a producer can use AI to shoot out a 100-page script, then they can go to the writer and say, ‘I’ll pay you $50,000 to rewrite this instead of the $200,000 for you to write it on the page’.”
Under the Writers Guild of America’s proposals, such a scenario would be forbidden. The union wants to prevent AI programs from being used to write scripts or to rewrite work created by a human. The only acceptable use for AI at this point, the WGA says, is for research purposes.
The technology does have some potential benefits for screenwriters: AI could make them more efficient, allowing them to write more screenplays in a year. Such opportunities may be offset by cost-conscious studios that use the technology to hire fewer writers, however.
The upsides to AI technology may be more apparent for actors, who could hire out their “digital doubles” to, for example, act in an advertisement while they were shooting a feature film.
The key to making this work is for the industry to enforce basic ethical concepts, including that “people own their name, image and likeness and that they should be in control of when and how it is used,” says Krane.
Coding
- Generative AI can suggest lines of code that programmers can run and test
- Technology can analyse existing code and search for vulnerabilities
- AI can boost productivity but struggles with complex software
Coders have benefited from developments in generative AI to drive efficiencies and save time, using tools such as ChatGPT to help write software.
If given specific instructions, generative AI chatbots can suggest lines of code that programmers can run and test. But data experts warn there are still clear limitations.
“It is very helpful and does speed things up a lot, but you should know what an answer should look like for it to work,” says Edward Rushton, data scientist and co-founder of the Efficient Data Group consultancy.
He says there is a lot of trial and error, so understanding how to fix what the AI has generated is crucial. “It does just get stuff wrong, and it does just make stuff up. It will invent a function that doesn’t exist, it all looks perfectly plausible, but it is not correct and won’t work,” he warns.
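The trial-and-error workflow Rushton describes can be illustrated with a small hypothetical example (not his own code): a chatbot might first propose a plausible-looking call to a method that does not exist in Python, and only running a quick test exposes the error before the human writes the fix.

```python
# Hypothetical AI-suggested helper: flatten one level of a nested list.
# A chatbot might first propose `nested.flatten()` -- a method that looks
# plausible but does not exist on Python lists. Running the code raises
# AttributeError, after which the developer substitutes a working version.

def flatten(nested):
    """Flatten one level of nesting, e.g. [[1, 2], [3]] -> [1, 2, 3]."""
    # return nested.flatten()  # AttributeError: 'list' object has no attribute 'flatten'
    return [item for sublist in nested for item in sublist]

# The "run and test" step that catches hallucinated APIs:
assert flatten([[1, 2], [3]]) == [1, 2, 3]
assert flatten([]) == []
```

The point is the habit, not the snippet: knowing what the answer should look like, and testing for it, is what turns plausible output into correct code.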
Archana Vaidheeswaran, a data product manager at the non-profit Women Who Code, used ChatGPT to build a Google Chrome extension tool that helps non-native English speakers translate text and adjust the tone to a more natural conversational style. OpenAI’s chatbot generated the code for the front end of the product, the part that users can see and interact with, while Vaidheeswaran wrote the background technology.
“ChatGPT can write something very specific, then you have to work with it,” she says.
Matt Shumer, chief executive and co-founder of Otherside AI, a start-up with a product for writing emails, says his staff use AI to assist with programming and that a large portion of the company’s code has been written this way. “It’s not technically a requirement, but I doubt anyone who wasn’t using it would be able to keep up with the rest of the team,” he says.
However, he highlights the need for experienced engineers to judge and validate the AI’s results and “coax” out the correct answers. “AI has profoundly transformed the role of coders. Instead of focusing solely on manual coding, they now spend more time defining the problem, designing the structure and directing AI to do the heavy lifting,” he says, adding that it frees up staff from “mundane tasks”.
The British Computer Society says that, as well as writing new code, generative AI tools can be used to analyse existing code and search for errors or vulnerabilities hackers might exploit. It echoes the need for developers to review responses critically and consider how data input into generative AI systems may be used.
“The professionals have to understand the level of competence needed [when using AI] because they’re taking on a huge responsibility,” says Rashik Parmar, chief executive of the British Computer Society. “They need to understand the ethics of what they’re doing and have accountability if this thing screws up.”
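The kind of automated scan described above can be as simple as walking a program’s syntax tree and flagging risky calls. A minimal sketch in Python (the choice of `eval`/`exec` as the flagged pattern is an assumption for illustration, not guidance from the BCS):

```python
import ast

def find_risky_calls(source: str) -> list:
    """Return line numbers of calls to eval/exec, a common code-injection risk."""
    risky = []
    for node in ast.walk(ast.parse(source)):
        if (isinstance(node, ast.Call)
                and isinstance(node.func, ast.Name)
                and node.func.id in {"eval", "exec"}):
            risky.append(node.lineno)
    return risky

sample = "x = eval(user_input)\ny = len(user_input)\n"
print(find_risky_calls(sample))  # -> [1]
```

Real static-analysis tools apply hundreds of such rules; the sketch shows only the shape of the technique an AI assistant might automate.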
As coding tasks become more complex, the limitations of generative AI grow. A top executive at one large Silicon Valley-based company says it is looking closely at the potential for AI to boost its developers’ productivity, but the technology is “not efficient yet”. While it works well for simple coding, this executive says, it struggles with the complicated software architecture inside a large company.
Still, coding is one of the top areas in which companies are looking to implement generative AI.
“For many developers, generative AI will become the most valuable coding partner they will ever know,” according to KPMG.