Google Brain founder says big tech is lying about AI danger

“It’s been a weapon for lobbyists to argue for legislation that would be very damaging to the open-source community,” he said.

In May, OpenAI CEO and co-founder Sam Altman co-signed a letter saying that “mitigating the risk of extinction from AI should be a global priority”, and in March more than 1,100 industry leaders, including Elon Musk and Apple co-founder Steve Wozniak, called for a six-month moratorium on training powerful AI models.

“Sam [Altman] was one of my students at Stanford. He interned with me. I don’t want to talk about him specifically because I can’t read his mind, but … I feel like there are many large companies that would find it convenient to not have to compete with open-sourced large language models,” he said.

“There’s a standard regulatory capture playbook that has played out in other industries, and I would hate to see that executed successfully in AI.”

Professor Ng declined to comment on the risk-based regulation of AI being proposed by the Labor government, but agreed that AI should be regulated.

“I don’t think no regulation is the right answer, but with the direction regulation is headed in a lot of countries, I think we’d be better off with no regulation than what we’re getting,” he said.

“But thoughtful regulation would be much better than no regulation.”

“Just to be clear, AI has caused harm. Self-driving cars have killed people. In 2010, an automated trading algorithm crashed the stock market. Regulation has a role. But just because regulation could be helpful doesn’t mean we want bad regulation.”

High on his list of “good” regulations was a requirement for transparency from technology companies, which he said would have helped avert the social media disaster big tech caused at the start of the century, and could help avert AI disasters big tech might cause in the future.
