Google’s Gemini AI was mocked for revisionist history, but it still hi

Ask Google’s generative AI tool Gemini to create images of American Revolutionary War soldiers and it might present you with a Black woman, an Asian man, and a Native American woman wearing George Washington’s bluecoats.

That diversity has gotten some people, including Frank J. Fleming, a former computer engineer and writer for the Babylon Bee, really mad. Fleming has tweeted a series of his increasingly frustrated interactions with Google as he tries to get it to portray white people in situations or jobs where they were historically predominant (for example, a medieval knight). The cause has been taken up by others who claim it’s diversity for diversity’s sake, and everything wrong with the woke world.

There’s just one problem: Fleming and his fellow angry protesters are on a futile mission. “This can’t be done with these systems,” says Olivia Guest, assistant professor of computational cognitive science at Radboud University. “You can’t guarantee behavior. That’s the point of stochastic systems.”

The current generation of generative-AI tools is stochastic—or, as one famous academic paper published in 2021 put it, these systems randomly produce different outputs even when given the same input. That randomness is what has made generative AI capture the public’s attention: it doesn’t just repeat the same thing over and over again.
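
The difference is easy to see in miniature. The sketch below is a toy illustration of temperature-based sampling, the kind of randomness these systems rely on—not Gemini’s actual implementation, which is not public, and the token probabilities are made up for the example.

```python
import random

# Hypothetical next-token probabilities for a prompt such as
# "a portrait of a revolutionary war soldier" (illustrative only).
next_token_probs = {
    "man": 0.4,
    "woman": 0.3,
    "officer": 0.2,
    "drummer": 0.1,
}

def sample_next_token(probs, temperature=1.0):
    """Sample one token; higher temperature flattens the distribution."""
    tokens = list(probs)
    weights = [p ** (1.0 / temperature) for p in probs.values()]
    return random.choices(tokens, weights=weights, k=1)[0]

# The same input can yield a different output on every run: the system is stochastic.
for _ in range(5):
    print(sample_next_token(next_token_probs))
```

Push the temperature toward zero and the output becomes effectively deterministic—but then every prompt gets the single most likely answer, exactly the repetitiveness that generative AI is prized for avoiding.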

Experts also question whether the AI chatbot results presented by the angry mob on social media are the full picture—literally. “It’s difficult to assess the trustworthiness of any content that we see on platforms such as X,” says Rumman Chowdhury, cofounder and CEO of Humane Intelligence. “Are these cherry-picked examples? Absent an at-scale image-generation analysis that is able to be tracked and mapped across many different prompts, I would not feel that we have a clear grasp of whether or not this model has any sort of bias.”

Google has recognized the uproar and said it’s taking action. “We are aware that Gemini is offering inaccuracies in some historical image generation depictions, and we are working to fix this immediately,” Jack Krawczyk, product lead for Google Bard, wrote on X. Krawczyk highlighted that the depiction of historical events fell between two competing interests: to accurately represent history as it happened, and to “reflect our global user base.”

But fixing the underlying issues might not be so easy. Correcting the behavior of stochastic systems is trickier than it looks. Guardrails for AI models are just as tricky, and can be subverted, unless you resort to brute-force blocking (Google previously “fixed” image-recognition software that identified Black people as gorillas by preventing the software from recognizing any gorillas at all). At that point it is no longer a stochastic system, which means the thing that makes generative AI unique is gone.
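
To see why brute-force blocking is such a blunt instrument, consider a hypothetical keyword filter of the kind described above—a sketch for illustration, not how Google’s systems actually work. It can be sidestepped by rephrasing, and tightening it blocks legitimate requests too.

```python
BLOCKED_TERMS = {"gorilla", "gorillas"}  # hypothetical blocklist

def is_allowed(prompt):
    """Brute-force guardrail: refuse any prompt that contains a blocked term."""
    words = prompt.lower().split()
    return not any(term in words for term in BLOCKED_TERMS)

print(is_allowed("label this photo of a gorilla"))          # False: blocked outright
print(is_allowed("label this photo of a large great ape"))  # True: trivially rephrased around the filter
print(is_allowed("a nature documentary about gorillas"))    # False: a legitimate request is blocked too
```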

The whole brouhaha raises an interesting question, says Chowdhury. “It is really difficult to define whether or not there is a correct answer to what images should be generated,” she says. “Relying on historical accuracy may result in the reinforcement of the exclusionary status quo. However, it could run the risk of being simply factually incorrect.”

For Yacine Jernite, machine learning and society lead at AI company Hugging Face, the issue isn’t limited to Gemini. “This isn’t just a Gemini issue, rather a structural issue with how several companies developing commercial products without much transparency are addressing questions of biases,” he says. It’s a subject that Hugging Face has written about previously. “Bias is compounded by choices made at all levels of the development process, with choices earliest having some of the largest impact—for example, choosing what base technology to use, where to get your data, and how much to use,” says Jernite.

Jernite fears that what we’re seeing could be the result of what companies see as a quick, relatively cheap fix: if their training data overrepresents white people, they can modify prompts under the hood to inject diversity. “But it doesn’t really solve the issue in a meaningful way,” he says.
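
What such a quick fix might look like in practice is something like the hypothetical prompt rewrite sketched below—the function, the trigger words, and the list of qualifiers are all assumptions for illustration, since Google has not published how Gemini modifies prompts.

```python
import random

# Hypothetical demographic qualifiers injected into image prompts (illustrative only).
DIVERSITY_QUALIFIERS = ["a Black", "an Asian", "a Native American", "a white", "a Hispanic"]

PERSON_WORDS = {"person", "soldier", "knight", "doctor", "scientist"}

def rewrite_prompt(user_prompt):
    """Quick-fix approach: prepend a randomly chosen qualifier to
    person-related prompts before they reach the image model."""
    if any(word in user_prompt.lower().split() for word in PERSON_WORDS):
        return f"{random.choice(DIVERSITY_QUALIFIERS)} {user_prompt}"
    return user_prompt

print(rewrite_prompt("soldier in the American Revolutionary War"))
# e.g. "an Asian soldier in the American Revolutionary War": the model only ever
# sees the rewritten prompt, so historically inaccurate images can fall out of
# the rewrite rather than the model itself.
```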

Instead, companies need to address the issue of representation and bias openly, Jernite argues. “Telling the rest of the world what you’re doing specifically to address biased outcomes is hard: It exposes the company to having external stakeholders question their choices, or point out that their efforts are insufficient—and maybe disingenuous,” he says. “But it’s also necessary, because those questions need to be asked by people with a more direct stake in bias issues, people with more expertise on the topic—especially people with social sciences training, which are notoriously lacking from the tech development process—and, importantly, people who have a reason not to trust that the technology will work, to avoid conflicts of interest.”




