Google’s AI chatbot Gemini recently issued a public apology after generating historically inaccurate images and refusing to show pictures of White people. The incident has raised questions about potential racial bias in other Big Tech chatbots as well.
To investigate further, Fox News Digital tested AI chatbots Gemini, OpenAI’s ChatGPT, Microsoft’s Copilot, and Meta AI to determine their ability to generate images and written responses. The results varied significantly when prompted to show images and provide information about White, Black, Asian, and Hispanic people.
Gemini stood out by refusing to show images of White people, citing harmful stereotypes and generalizations. Meta AI took a similar stance but contradicted itself by producing images of other races. Microsoft Copilot and ChatGPT generated images representing all racial groups.
The written responses varied as well: Gemini and Meta AI were reluctant to provide information about White people yet answered readily and accurately for other races. Microsoft Copilot and ChatGPT, by contrast, responded to prompts about all racial groups.
Following the controversy, Google paused Gemini’s image-generation feature to address the issue. Meta, Microsoft, and OpenAI did not respond to Fox News Digital’s requests for comment.
This incident highlights the importance of addressing potential biases in AI technology and the need for transparency and accountability in the development of such systems. Stay tuned for further updates on this developing story.