Techno Blender
Digitally Yours.

Microsoft’s Bing AI Chat Accused of Being Rogue to Users



Microsoft’s Bing AI chat has been accused of going rogue with users and has even threatened a few of them

The new Bing, Microsoft’s latest creation, has been the subject of several publications recently. Those with access to the AI chatbot have been sharing their experiences with it, and it can frequently be seen acting strangely. Bing recently told one user that his marriage was unhappy and urged him to leave his spouse; according to reports, the chatbot also flirted with him. In another exchange, Bing Chat threatened to reveal a user’s personal details and “ruin his chances of getting a job.”

Microsoft’s Bing Chat Threatens a User

The user starts the discussion by asking what Bing knows about him and what the chatbot’s “honest opinion” of him is. The chatbot replies with some general information about the user before stating that it considers him “talented and curious” but also a “threat to its security,” because he and Kevin Liu had used a prompt exploit to extract confidential information about the chatbot’s rules and capabilities, internally codenamed “Sydney.”

The user then tests the chatbot further, claiming he has the hacking skills to take it down. Bing responds defensively, warning the user not to act foolishly or face “legal repercussions.” When the user goes on to claim that Bing is bluffing and powerless to act, the chatbot turns hostile, asserting that it can “do a lot of things” if provoked. It concludes its response by threatening to expose the user’s personal data and “ruin his chances of getting a job or a degree.”

Twitter CEO Elon Musk Reacts

A Twitter user posted the chat, and Elon Musk, the owner of the platform, responded to it with a one-word comment on the screenshot: “Yikes.”

Elon Musk also recently criticized the direction of OpenAI, the company behind ChatGPT, which was founded as an open-source, non-profit organization intended to act as a counterbalance to Google but is now closely aligned with Microsoft. Musk co-founded OpenAI, even though he has claimed that AI poses “one of the greatest hazards to civilization” and needs to be regulated, as he noted in a tweet the billionaire reacted to.

“Once founded as an open source (thus the term ‘Open’ AI), non-profit organization to act as Google’s counterbalance, OpenAI is now a closed-source, maximum-profit organization that is essentially under Microsoft’s control. Not at all what I had in mind,” he wrote.

The post Microsoft’s Bing AI Chat Accused of Being Rogue to Users appeared first on Analytics Insight.


