Google pauses AI image generation of people after diversity backlash

Google has temporarily stopped its latest artificial intelligence model, Gemini, from generating images of people, as a backlash erupted over its depiction of different ethnicities and genders.

Gemini creates realistic images based on users’ descriptions in a similar manner to OpenAI’s ChatGPT. Like other models, it is trained not to respond to dangerous or hateful prompts, and to introduce diversity into its outputs.

However, some users have complained that it has overcorrected towards generating images of women and people of colour, such that they are featured in historical contexts inaccurately, for instance in depictions of Viking kings or German soldiers from the second world war.

“We’re working to improve these kinds of depictions immediately,” Google said. “Gemini’s image generation does generate a wide range of people. And that’s generally a good thing because people around the world use it. But it’s missing the mark here.”

It added that it would “pause the image generation of people and will re-release an improved version soon”.

The search giant has described Gemini as its “largest, most capable and most general” AI system, adding that it has sophisticated reasoning and coding capabilities.

The model follows the release of other sophisticated products by rivals including OpenAI, Meta and start-ups Anthropic and Mistral.

An inherent characteristic of generative AI models is their tendency to “hallucinate”, or fabricate, names, dates and numbers. This happens because the software is designed to spot patterns and guess the most plausible next item in a sequence.

Because of this predictive nature, the images and text generated by these models can be inaccurate or even absurd — a problem that AI companies such as OpenAI and Google are working to minimise.
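To give a rough sense of that predictive behaviour, the sketch below builds a toy next-word predictor from a single sentence. The corpus, the bigram approach and every name in it are invented for illustration only; real models are trained on vastly larger datasets, but they share the same underlying idea of guessing a plausible continuation rather than verifying facts.

```python
import random

# Toy "language model": a table of which word tends to follow which,
# learned from a tiny made-up corpus. Real systems learn billions of
# such associations, but the principle is the same.
corpus = "the court ruled in the case of the federal court".split()
followers = {}
for prev, nxt in zip(corpus, corpus[1:]):
    followers.setdefault(prev, []).append(nxt)

def generate(start, length=8, seed=0):
    """Repeatedly guess a likely next word; truth is never checked."""
    random.seed(seed)
    word, output = start, [start]
    for _ in range(length):
        options = followers.get(word)
        if not options:
            break
        word = random.choice(options)
        output.append(word)
    return " ".join(output)

print(generate("the"))  # produces fluent-looking, not fact-checked, text
```

The output reads plausibly because it follows observed patterns, not because any claim in it has been verified, which is the root of the fabrication problem described above.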

In a recent Stanford University study of responses generated by three AI models to 200,000 legal queries, researchers found that questions about random federal court cases resulted in pervasive errors. OpenAI’s GPT-3.5 fabricated responses 69 per cent of the time, while Meta’s Llama 2 did so 88 per cent of the time.

In order to reduce errors and biases in generative models, companies use a process called “fine-tuning”. This often relies on human reviewers who report whether they deem the AI’s prompts and responses to be inaccurate or offensive.
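As a rough illustration of that review loop, the sketch below filters model outputs according to hypothetical human labels so that only acceptable examples feed the next round of training. The data structure, field names and labels are invented for this example and do not reflect Google’s, OpenAI’s or any other company’s actual tooling.

```python
from dataclasses import dataclass

@dataclass
class ReviewedExample:
    prompt: str
    response: str
    label: str  # e.g. "acceptable", "inaccurate" or "offensive"

def build_finetuning_set(reviewed):
    # Keep only examples that human reviewers judged acceptable;
    # these become the training data for further fine-tuning.
    return [(ex.prompt, ex.response) for ex in reviewed
            if ex.label == "acceptable"]

reviews = [
    ReviewedExample("Describe this case",
                    "It cited a ruling that does not exist", "inaccurate"),
    ReviewedExample("Describe this case",
                    "The court dismissed the appeal in 2021", "acceptable"),
]

print(build_finetuning_set(reviews))
```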

Google said that its goal was not to specify an ideal demographic breakdown of images, but rather to maximise diversity, which it argues leads to higher-quality outputs for a broad range of prompts.

However, it added that sometimes the model could be overzealous in taking guidance on diversity into account, resulting in an overcorrection.

Research from the University of Washington, Carnegie Mellon University and Xi’an Jiaotong University in August found that AI models, including OpenAI’s GPT-4 and Meta’s LLaMA, have different political biases depending on how they have been developed.

For instance, the paper found that OpenAI’s products tended to be left-leaning, while Meta’s LLaMA was closer to a conservative position.

Rob Leathern, who worked on products related to privacy and security at Google until last year, said on X: “It absolutely should not assume that certain generic queries are a particular gender or race, (eg software engineer) and I have been glad to see that change.”

He added: “But when it explicitly adds [a gender or race] for more specific queries it comes across as inaccurate. And it may actually hurt the positive intention for the former case when folks get upset.”
