Why AI’s ‘godfather’ Geoffrey Hinton quit Google to speak out on risks

When Geoffrey Hinton had an ethical objection to his employer Google working with the US military in 2018, he didn’t join in public protests or put his name on the open complaint letter signed by more than 4,000 of his colleagues.

Instead, he just spoke to Sergey Brin, Google’s co-founder. “He said he was a bit upset about it too. And so they’re not pursuing it,” Hinton said in an interview at the time.

The incident is symbolic of Hinton’s quiet influence in the artificial intelligence world. The 75-year-old professor is revered as one of the “godfathers” of AI because of his formative work on deep learning — a field of AI that has driven the huge advances taking place in the sector.

But the anecdote also reflects Hinton’s loyalty, according to those who know him well. On principle, he never aired any corporate grievances, ethical or otherwise, publicly.

It was this conviction that led him to quit his role as vice-president and engineering fellow at Google last week, so that he could speak more freely about his growing fears over the risks AI poses to humanity.

Yoshua Bengio, his longtime collaborator and friend, who won the Turing Award alongside Hinton and Yann LeCun in 2018, said he had seen the resignation coming. “He could have stayed at Google and spoken out, but his sense of loyalty is such that he wouldn’t,” said Bengio.

Hinton’s resignation follows a series of groundbreaking AI launches over the past six months, starting with Microsoft-backed OpenAI’s ChatGPT in November 2022 and Google’s own chatbot, Bard, in March 2023.

Hinton voiced concerns that the race between Microsoft and Google would push forward the development of AI without appropriate guardrails and regulations in place.

“I think Google was very responsible to begin with,” he said in a speech at an EmTech Digital event on Wednesday, after his resignation was made public. “Once OpenAI had built similar things using . . . money from Microsoft, and Microsoft decided to put it out there, then Google didn’t have much choice. If you’re going to live in a capitalist system, you can’t stop Google competing with Microsoft.”

Since the 1970s, Hinton has pioneered the development of “neural networks”, technology that attempts to imitate how the brain works. It now underpins most of the AI tools and products we use today, from Google Translate and Bard, to ChatGPT and autonomous cars.

But this week, he acknowledged fears about its rapid development, which could result in misinformation flooding the public sphere and AI displacing more human jobs than predicted.

“My worry is that it will [make] the rich richer and the poor poorer. As you do that . . . society gets more violent,” said Hinton. “This technology which ought to be wonderful . . . is being developed in a society that is not designed to use it for everybody’s good.” 

Hinton also rang an alarm bell about the longer-term threats AI systems pose to humans if the technology is given too much autonomy. He had always believed this existential risk was a long way off, but has recently recalibrated his thinking on its urgency.

“It’s quite conceivable that humanity is a passing phase in the evolution of intelligence,” he said. Hinton’s decision to quit Google after a decade was spurred on by an academic colleague who convinced him to speak out about this, he added.

Born in London, Hinton comes from an illustrious lineage of scientists. He is the great-great-grandson of the British mathematicians Mary Everest Boole and George Boole, the latter of whom invented Boolean logic, the theory that underlies modern computing.

As a cognitive psychologist, Hinton’s work in AI has aimed to approximate human intelligence — not just to build AI technology but to illuminate the workings of our own brains.

Stuart Russell, an AI professor at the University of California, Berkeley, and an academic peer of Hinton’s, said his background meant he was “not the most mathematical of people you’ll find in the machine-learning community”. 

He pointed to Hinton’s big breakthrough in 1986, when he co-published a paper on a technique called “backpropagation”, which showed how multi-layered neural networks could learn from their errors over time.

“It was clearly a seminal paper,” Russell said. “But he didn’t derive the . . . rule the way a mathematician would. He used his intuition to figure out a method that would work.”
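The intuition behind backpropagation can be illustrated with a toy example. The sketch below is a hypothetical, pure-Python illustration — not Hinton’s actual method or code — showing a tiny one-hidden-layer network learning the XOR function: errors at the output are propagated backwards through the layers to tell each weight how to adjust.

```python
# Hypothetical sketch of backpropagation on a tiny 2-2-1 network
# learning XOR. Pure Python, no external libraries.
import math
import random

random.seed(0)

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# XOR training data: (inputs, target)
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

# Weights: 2 inputs -> 2 hidden units -> 1 output
w1 = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(2)]
b1 = [0.0, 0.0]
w2 = [random.uniform(-1, 1) for _ in range(2)]
b2 = 0.0
lr = 0.5  # learning rate

def forward(x):
    # Forward pass: inputs -> hidden activations -> output
    h = [sigmoid(sum(w1[j][i] * x[i] for i in range(2)) + b1[j])
         for j in range(2)]
    y = sigmoid(sum(w2[j] * h[j] for j in range(2)) + b2)
    return h, y

def loss():
    # Sum of squared errors over the training set
    return sum((forward(x)[1] - t) ** 2 for x, t in data)

initial = loss()
for _ in range(5000):
    for x, t in data:
        h, y = forward(x)
        # Backward pass: error signal at the output...
        dy = 2 * (y - t) * y * (1 - y)
        # ...propagated back to each hidden unit
        dh = [dy * w2[j] * h[j] * (1 - h[j]) for j in range(2)]
        # Gradient-descent weight updates
        for j in range(2):
            w2[j] -= lr * dy * h[j]
            for i in range(2):
                w1[j][i] -= lr * dh[j] * x[i]
            b1[j] -= lr * dh[j]
        b2 -= lr * dy

final = loss()
```

After training, the loss has dropped from its initial value: the network has used its own output errors, propagated backwards layer by layer, to improve — the essence of the technique the 1986 paper described.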

Hinton hasn’t always been publicly outspoken about his ethical views but in private he has made them plain.

In 1987, when he was an associate professor at Carnegie Mellon University in the US, he decided to leave his position and emigrate to Canada.

One of the reasons he gave, according to Bengio, was ethical: he was concerned about the use of technology, particularly AI, in warfare, and much of his funding at the time came from the US military.

“He wanted to feel good about the funding he got and the work he was doing,” said Bengio. “He and I share values about society. That humans matter, that the dignity of all humans is essential. And everyone should benefit from the progress that science is creating.”

In 2012, Hinton and his two graduate students at the University of Toronto — including Ilya Sutskever, now a co-founder of OpenAI — made a major breakthrough in the field of computer vision. They built neural networks that could recognise objects in images far more accurately than had previously been possible. Based on this work, they founded their first start-up, DNNresearch.

Their company — which didn’t make any products — was sold to Google for $44mn in 2013, after a competitive auction that led to China’s Baidu, Microsoft and DeepMind bidding to acquire the trio’s expertise.

Since then, Hinton has spent half his time at Google, and the other half as a professor at the University of Toronto.

According to Russell, Hinton is constantly coming up with new ideas and trying new things out. “Every time he had a new idea, he’d say at the end of his talk: ‘And this is how the brain works!’”

When asked on stage whether he regretted his life’s work, given it could contribute to the myriad harms he had outlined, Hinton said he had been mulling it over.

“This stage of [AI] was not foreseeable. And until very recently I thought this existential crisis was a long way off,” he said. “So I don’t really have regrets about what I do.”
