Written by Kelvin

Google pledges to fix Gemini’s inaccurate and biased image generation

  • Chatbots

Google’s Gemini model has come under fire for producing historically inaccurate and racially skewed images, reigniting concerns about bias in AI systems.

The controversy arose as users flooded social media feeds with examples of Gemini generating images depicting racially diverse Nazis, black medieval English kings, and other historically improbable scenarios.

Google Gemini Image generation model receives criticism for being ‘Woke’.

Gemini generated diverse images for historically specific prompts, sparking debates on accuracy versus inclusivity. pic.twitter.com/YKTt2YY265

— Darosham (@Darosham_) February 22, 2024

Meanwhile, critics also pointed to Gemini’s refusal to depict Caucasians, to show churches in San Francisco out of respect for indigenous sensitivities, and to portray sensitive historical events such as the 1989 Tiananmen Square protests.

In response to the backlash, Jack Krawczyk, the product lead for Google’s Gemini Experiences, acknowledged the issue and pledged to rectify it. Krawczyk took to social media platform X to reassure users:

We are aware that Gemini is offering inaccuracies in some historical image generation depictions, and we are working to fix this immediately.

As part of our AI principles https://t.co/BK786xbkey, we design our image generation capabilities to reflect our global user base, and we…

— Jack Krawczyk (@JackK) February 21, 2024

For now, Google says it is pausing the image generation of people:

We’re already working to address recent issues with Gemini’s image generation feature. While we do this, we’re going to pause the image generation of people and will re-release an improved version soon. https://t.co/SLxYPGoqOZ

— Google Communications (@Google_Comms) February 22, 2024

While acknowledging the need to address diversity in AI-generated content, some argue that Google’s response has been an overcorrection.

Marc Andreessen, the co-founder of Netscape and a16z, recently pointed to Goody-2, an “outrageously safe” parody AI model that refuses to answer any question it deems problematic. Andreessen warns of a broader trend towards censorship and bias in commercial AI systems, emphasising the potential consequences of such developments.

Addressing the broader implications, experts highlight the centralisation of AI models under a few major corporations and advocate for the development of open-source AI models to promote diversity and mitigate bias.

Yann LeCun, Meta’s chief AI scientist, has stressed the importance of fostering a diverse ecosystem of AI models akin to the need for a free and diverse press:

We need open source AI foundation models so that a highly diverse set of specialized models can be built on top of them.
We need a free and diverse set of AI assistants for the same reasons we need a free and diverse press.
They must reflect the diversity of languages, culture,… https://t.co/9WuEy8EPG5

— Yann LeCun (@ylecun) February 21, 2024

Bindu Reddy, CEO of Abacus.AI, has voiced similar concerns about the concentration of power in the absence of a healthy ecosystem of open-source models:

If we don’t have open-source LLMs, history will be completely distorted and obfuscated by proprietary LLMs

We already live in a very dangerous and censored world where you are not allowed to speak your mind.

Censorship and concentration of power is the very definition of an…

— Bindu Reddy (@bindureddy) February 21, 2024

As discussions around the ethical and practical implications of AI continue, the need for transparent and inclusive AI development frameworks becomes increasingly apparent.

(Photo by Matt Artz on Unsplash)

See also: Reddit is reportedly selling data for AI training


Tags: ai, artificial intelligence, bias, chatbot, diversity, ethics, gemini, Google, google gemini, image generation, Model, Society