The UK’s antitrust agency is going after Big Tech’s AI models now

Just a week after thwarting Microsoft’s attempt to shell out $68.7 billion for video game maker Activision Blizzard, the UK’s antitrust agency, the Competition and Markets Authority (CMA), announced that it will launch a review of the artificial intelligence landscape in Britain.

These would include wildly popular generative AI models like OpenAI’s DALL-E, large language models (LLMs) like OpenAI’s ChatGPT, and proprietary LLMs like Google’s Bard.

“This initial review will focus on the questions the CMA is best placed to address – what are the likely implications of the development of AI foundation models for competition and consumer protection?” said Sarah Cardell, chief executive of the CMA, in a statement.

The CMA has asked stakeholders to weigh in by sending submissions by June 2, 2023. After gathering and analyzing those submissions, the CMA will publish a report detailing its findings in September 2023.

“It is crucial that the potential benefits of this transformative technology are readily accessible to UK businesses and consumers while people remain protected from issues like false or misleading information,” added CMA chief Cardell.

AI, a creator of chaos and miracles

The UK, like the rest of the world, has been consumed by a seemingly unending stream of developments showcasing the abilities of AI models like ChatGPT, developed by the US firm OpenAI, amongst others.

ChatGPT has become an overnight star amongst global users for its ability to deliver a dizzying variety of information in uncanny, human-like conversations. Those conversations are possible because OpenAI trained its model on some 300 billion words drawn from text databases across the internet.

There are enormous gains to be made with AI in virtually every field imaginable, from cancer detection to autonomous vehicles to gene therapy. Yet ChatGPT has also been excoriated for inherent racial, gender, and age biases, and for its tendency to make things up when asked questions to which it does not know the answer.

AI has also put thousands of jobs in peril. Earlier this week, IBM CEO Arvind Krishna said that he will not hire for roles he thinks will be replaced by AI in the coming years, estimating that as many as 7,800 positions could be affected.

Educational technology company Chegg shed an astounding 50% of its market cap this week because ChatGPT appears to be displacing it. Why pay $15 per month for course solutions when you can get them for free, seems to be the thinking amongst users.

Investment bank Goldman Sachs thinks that as many as 300 million jobs could be lost to AI automation in the years to come, so it comes as no surprise that the UK is beginning to act.

Artists and musicians have also been put on notice. AI-generated versions of major artists covering other major artists’ songs have flooded the internet, and the legal ramifications for copyright have yet to be untangled.

The CMA, though, has stated that copyright and intellectual property cases are not in its purview, and neither are issues of online safety, data protection, and security.

This is because the UK government had earlier announced that it would divvy up responsibilities for AI among its existing regulators for human rights, health and safety, and competition, rather than construct a new one solely for the technology.

It published a White Paper in March that asked regulators, including the CMA, to think about how the innovative development and deployment of AI can be measured against five broad principles: “safety, security and robustness; appropriate transparency and explainability; fairness; accountability and governance; and contestability and redress.”

Uncle Sam swings into action

Not surprisingly, these are the very same issues that have galvanized the US government, which has arguably presided over the eye of the AI storm, into action.

Vice President Kamala Harris and other senior officials are meeting this week with the CEOs of major AI companies, including OpenAI (creator of ChatGPT), Microsoft (an OpenAI investor), Alphabet (parent of Google and its AI offspring Bard), and Anthropic, amongst others, to discuss a roadmap for the responsible development of AI.

Also announced were plans to undertake public assessments of the major generative AI systems now proliferating. These would take place at the AI Village, a community of thousands of hackers, data scientists, independent community partners, and AI experts.

These efforts come on the heels of President Biden’s release of a Blueprint for an AI Bill of Rights late last year, which was designed to protect people from the negative effects of artificial intelligence.

In a related development, the National Science Foundation received $140 million to launch seven new National AI Research Institutes to forge breakthroughs in climate, energy, agriculture, public health, and other areas.

The conviction to corral AI before it runs rampant through society has also spread to Europe. As ZDNET has reported, the European Parliament is putting into place an ‘AI Act’ that will classify AI models according to risk levels.

Most significantly, the companies behind these models will now have to disclose any copyrighted materials fed into them during the training phase.

There seems to be a global conviction to foster innovation in a transformative field while ensuring that it doesn’t cause widespread chaos or even, according to some, the end of humanity.

Some thanks for spurring this action should go to none other than Geoffrey Hinton, considered the godfather of AI and machine learning, who thinks his machine-child may actually destroy humankind if left unchecked.

Hinton, who quit his job at Google just days ago, says that if he hadn’t come up with the technology, somebody else would have, and that the most important thing is to act now. “Right now, they’re not more intelligent than us, as far as I can tell. But I think they soon may be,” he told the BBC.

That’s a future that governments around the world will have to negotiate, and the clock is ticking.
