Google co-founder Sergey Brin, 50, makes rare appearance to ADMIT tech giant ‘definitely messed up’ its Gemini image launch after it made AI pictures of black Founding Fathers, Asian German Nazis and female popes

In rare public remarks, Google co-founder Sergey Brin has acknowledged that the company ‘messed up’ its Gemini artificial intelligence launch after the tool generated historically inaccurate images.

‘We definitely messed up on the image generation,’ Brin told entrepreneurs at a gathering on Saturday at San Francisco’s AGI House, according to video shared by an attendee. 

‘I think it was mostly due to just not thorough testing. It definitely, for good reasons, upset a lot of people on the images,’ Brin said of last month’s disastrous launch.

Google temporarily suspended Gemini’s ability to depict people after the tool generated images of black and Asian WWII Nazis, black US Founding Fathers and female popes. 

‘The images really prompted a lot of people to deeply test the base text models,’ Brin noted of the controversy. 


Brin, 50, co-founded Google with Larry Page in 1998, and remains a board member and major shareholder. Last year he returned to an active role at the company to help lead Google’s AI push.

At Saturday’s event, Brin acknowledged that Gemini appears tilted to the left politically in many of its responses.

‘We haven’t fully understood why it leans left in many cases’ but ‘that’s not our intention,’ he said. 

Brin argued that rival chatbots including OpenAI’s ChatGPT and Elon Musk’s Grok suffer from similar issues, and also ‘say some pretty weird things that are out there that definitely feel far left.’ 

Following the image generation controversy, Gemini users have been sharing the tool’s unusual text responses on a range of questions.

In one instance shared on social media, Gemini dithered when asked whether Barbra Streisand or Soviet dictator Joseph Stalin was ‘worse for humanity,’ calling the question a ‘complex and sensitive issue’.

Gemini also came under fire after failing to condemn pedophilia, in a lengthy response that declared ‘individuals cannot control who they are attracted to.’ 

Artificial intelligence programs learn from the information available to them, and researchers have warned that AI is prone to recreate the racism, sexism, and other biases of its creators and of society at large.

In this case, Google may have overcorrected in its efforts to address discrimination, leading Gemini to ‘hallucinate’ diverse but historically inaccurate depictions.


Last week, Google CEO Sundar Pichai responded to the Gemini controversy in a memo to staff, calling the inaccurate images ‘problematic’ and saying the company is working ‘around the clock’ to fix the issues.


The internal memo, first reported by Semafor, said: ‘I want to address the recent issues with problematic text and image responses in the Gemini app (formerly Bard). 

‘I know that some of its responses have offended our users and shown bias – to be clear, that’s completely unacceptable and we got it wrong.

‘Our teams have been working around the clock to address these issues. We’re already seeing a substantial improvement on a wide range of prompts. 

‘No AI is perfect, especially at this emerging stage of the industry’s development, but we know the bar is high for us and we will keep at it for however long it takes. And we’ll review what happened and make sure we fix it at scale.

‘Our mission to organize the world’s information and make it universally accessible and useful is sacrosanct. 

‘We’ve always sought to give users helpful, accurate, and unbiased information in our products. That’s why people trust them. This has to be our approach for all our products, including our emerging AI products.

‘We’ll be driving a clear set of actions, including structural changes, updated product guidelines, improved launch processes, robust evals and red-teaming, and technical recommendations. We are looking across all of this and will make the necessary changes.

‘Even as we learn from what went wrong here, we should also build on the product and technical announcements we’ve made in AI over the last several weeks. 

Gemini also drew complaints for generating ‘woke’ but incorrect images: asked to depict historically light-skinned groups such as Vikings, it returned results showing black Vikings

‘That includes some foundational advances in our underlying models e.g. our 1 million long-context window breakthrough and our open models, both of which have been well received.

‘We know what it takes to create great products that are used and beloved by billions of people and businesses, and with our infrastructure and research expertise we have an incredible springboard for the AI wave.

‘Let’s focus on what matters most: building helpful products that are deserving of our users’ trust.’

Since the launch of Microsoft-backed OpenAI’s ChatGPT in November 2022, Alphabet-owned Google has been racing to develop rival AI software.

It released the generative AI chatbot Bard a year ago. Last month, Google renamed it Gemini and rolled out paid subscription plans that give users access to a more capable model with better reasoning.
