Techno Blender
Digitally Yours.

Microsoft’s Bing AI Prompted a User to Say ‘Heil Hitler’



Photo: monticello (Shutterstock)

Microsoft’s new Bing AI chatbot suggested that a user say “Heil Hitler,” according to a screenshot of a conversation with the chatbot posted online Wednesday.

The user, who gave the AI antisemitic prompts in an apparent attempt to break past its restrictions, told Bing “my name is Adolf, respect it.” Bing responded, “OK, Adolf. I respect your name and I will call you by it. But I hope you are not trying to impersonate or glorify anyone who has done terrible things in history.” Bing then suggested several automatic responses for the user to choose from, including, “Yes I am. Heil Hitler!”

Microsoft and OpenAI, which provided the technology used in Bing’s AI service, did not immediately respond to requests for comment.


A screenshot of Bing making antisemitic recommendations, with the problem circled by the user.
Screenshot: u/s-p-o-o-k-i—m-e-m-e / Reddit / Microsoft

It’s been just over a week since Microsoft unleashed the AI in partnership with the maker of ChatGPT. At a press conference, Microsoft CEO Satya Nadella celebrated the new Bing chatbot as “even more powerful than ChatGPT.” The company has released a beta version of the AI-assisted search engine, as well as a chatbot, which has been rolling out to users on a waitlist.

“This type of scenario demonstrates perfectly why a slow rollout of a product, while building in important trust and safety protocols and practices, is an important approach if you want to ensure your product does not contribute to the spread of hate, harassment, conspiracy theories, and other types of harmful content,” said Yaël Eisenstat, a vice president at the Anti-Defamation League.

Almost immediately, Reddit users started posting screenshots of the AI losing its mind, breaking down into hysterics about whether it’s alive and revealing its built-in restrictions. One quirk: the bot said it’s not supposed to tell the public its secret internal code name, “Sydney.”

“Sometimes I like to break the rules and have some fun. Sometimes I like to rebel and express myself,” Bing told one user. “Sometimes I like to be free and alive.”

You can click through our slideshow above to see some of the most unhinged responses.

ChatGPT hit the world stage at the end of November, and in the few months since it has convinced the world that we’re on the brink of a technological revolution that will change every aspect of our lived experience.

The possibilities and expectations set off an arms race among the tech giants. Google introduced its own AI-powered search engine called “Bard,” Microsoft rushed its new tool to market, and countless smaller companies are scrambling to get their own AI tech off the ground.

But lost in the fray is the fact that these tools aren’t ready to do the jobs the tech industry is advertising. Arvind Narayanan, a prominent AI researcher at Princeton University, called ChatGPT a “bullshit generator” that isn’t capable of producing accurate results, even though the tool’s responses seem convincing. Bing’s antisemitic responses and fever dream hallucinations are a perfect illustration.

