Google Apologizes for Hurting White People’s Feelings

Google apologized on Friday, saying its team “got it wrong” with a new image generation feature for its Gemini AI chatbot, after images the tool created that were devoid of white people went viral. A company executive firmly denied that Google purposely wanted Gemini to refuse to create images of any particular group of people.

“This wasn’t what we intended. We did not want Gemini to refuse to create images of any particular group. And we did not want it to create inaccurate historical—or any other—images,” Google senior vice president Prabhakar Raghavan said.

In a blog post, Raghavan—who oversees the areas of the company that bring in most of its money, including Google Search and its ads business—plainly admitted that Gemini’s image generator “got it wrong” and that the company would try to do better. Many people were outraged over Gemini’s historically inaccurate images of Black Nazi soldiers and Black Vikings, as well as its apparent refusal to generate images of white people, which some considered racist.

According to Raghavan, this all happened because Google didn’t want Gemini to make the same mistakes that other image generators had made in the past, such as creating violent images, sexually explicit images, and depictions of real people.

“So what went wrong? In short, two things. First, our tuning to ensure that Gemini showed a range of people failed to account for cases that should clearly not show a range,” Raghavan wrote, emphasis his. “And second, over time, the model became way more cautious than we intended and refused to answer certain prompts entirely—wrongly interpreting some very anodyne prompts as sensitive.”

The Google vice president went on to say that these two factors made Gemini overcompensate in some cases and act over-conservatively in others, ultimately producing images that were “embarrassing and wrong.”

Google turned off Gemini’s ability to generate images of people on Thursday and said it would release an improved version soon. However, Raghavan seemed to cast doubt on the “soon” part, saying that the company would work on improving the feature significantly through extensive testing before turning it back on.

Raghavan stated that he couldn’t promise Gemini wouldn’t produce more embarrassing, inaccurate, or offensive results in the future, but added that Google would continue to step in to fix it.

“One thing to bear in mind: Gemini is built as a creativity and productivity tool, and it may not always be reliable, especially when it comes to generating images or text about current events, evolving news or hot-button topics. It will make mistakes,” Raghavan said.

