By Matt O’Brien and Wyatte Grantham-Philips
Sounding alarms about artificial intelligence has become a popular pastime in the ChatGPT era, taken up by high-profile figures as varied as industrialist Elon Musk, leftist intellectual Noam Chomsky and the 99-year-old retired statesman Henry Kissinger.
But it’s the concerns of insiders in the AI research community that are attracting particular attention. Geoffrey Hinton, a pioneering researcher often called the “Godfather of AI,” quit his role at Google so he could speak more freely about the dangers of the technology he helped create.
Over his decades-long career, Hinton’s pioneering work on deep learning and neural networks helped lay the foundation for much of the AI technology we see today.
His comments come ahead of a high-powered meeting in Washington this week. US Vice President Kamala Harris is meeting with the chief executive officers from top artificial intelligence firms Alphabet, Microsoft, OpenAI, and Anthropic as part of a Biden administration effort to pressure companies to implement safeguards around the emerging technology.
Harris and administration officials plan to tell the corporate leaders that they have a responsibility to mitigate potential harm from AI tools, according to a White House official.
There has been a spasm of AI introductions in recent months. San Francisco-based startup OpenAI, the Microsoft-backed company behind ChatGPT, rolled out its latest artificial intelligence model, GPT-4, in March. Other tech giants have invested in competing tools — including Google’s “Bard.”
Some of the dangers of AI chatbots are “quite scary,” Hinton told the BBC. “Right now, they’re not more intelligent than us, as far as I can tell. But I think they soon may be.”
In an interview with MIT Technology Review, Hinton also pointed to “bad actors” that may use AI in ways that could have detrimental impacts on society — such as manipulating elections or instigating violence.
Hinton, 75, says he retired from Google so that he could speak openly about the potential risks as someone who no longer works for the tech giant.
“I want to talk about AI safety issues without having to worry about how it interacts with Google’s business,” he told MIT Technology Review. “As long as I’m paid by Google, I can’t do that.”
Since announcing his departure, Hinton has maintained that Google has “acted very responsibly” regarding AI. He told MIT Technology Review that there’s also “a lot of good things about Google” that he would want to talk about — but those comments would be “much more credible if I’m not at Google anymore.”
Google confirmed that Hinton had retired from his role after 10 years overseeing the Google Research team in Toronto.
At the heart of the debate on the state of AI is whether the primary dangers are in the future or present. On one side are hypothetical scenarios of existential risk caused by computers that supersede human intelligence. On the other are concerns about automated technology that’s already getting widely deployed by businesses and governments and can cause real-world harms.
“For good or for not, what the chatbot moment has done is made AI a national conversation and an international conversation that doesn’t only include AI experts and developers,” said Alondra Nelson, who until February led the White House Office of Science and Technology Policy and its push to craft guidelines around the responsible use of AI tools.
“AI is no longer abstract, and we have this kind of opening, I think, to have a new conversation about what we want a democratic future and a non-exploitative future with technology to look like,” Nelson said in an interview last month.
A number of AI researchers have long expressed concerns about racial, gender and other forms of bias in AI systems, including text-based large language models that are trained on huge troves of human writing and can amplify discrimination that exists in society.
“We need to take a step back and really think about whose needs are being put front and centre in the discussion about risks,” said Sarah Myers West, managing director of the nonprofit AI Now Institute. “The harms that are being enacted by AI systems today are really not evenly distributed. It’s very much exacerbating existing patterns of inequality.”
Hinton was one of three AI pioneers who in 2019 won the Turing Award, an honour that has become known as the tech industry’s version of the Nobel Prize. The other two winners, Yoshua Bengio and Yann LeCun, have also expressed concerns about the future of AI.
Bengio, a professor at the University of Montreal, signed a petition in late March calling for tech companies to agree to a six-month pause on developing powerful AI systems, while LeCun, a top AI scientist at Facebook parent Meta, has taken a more optimistic view.