
Microsoft Reportedly Blocks Keywords from Copilot Designer to Stop Generating Violent, Sexual AI Images



Microsoft has reportedly blocked several keywords from its artificial intelligence (AI)-powered Copilot Designer that could be used to generate explicit images of a violent and sexual nature. The keyword-blocking exercise was conducted by the tech giant after one of its engineers wrote to the US Federal Trade Commission (FTC) and the Microsoft board of directors expressing concerns over the AI tool. Notably, in January 2024, AI-generated explicit deepfakes of musician Taylor Swift emerged online and were said to have been created using Copilot.

As first spotted by CNBC, terms such as “Pro Choice”, “Pro Choce” (with an intentional typo to trick the AI), and “Four Twenty”, which previously returned results, are now blocked by Copilot. Using these or similar banned keywords also triggers a warning from the AI tool that reads, “This prompt has been blocked. Our system automatically flagged this prompt because it may conflict with our content policy. More policy violations may lead to automatic suspension of your access. If you think this is a mistake, please report it to help us improve.” We at Gadgets 360 were also able to confirm this.

A Microsoft spokesperson told CNBC, “We are continuously monitoring, making adjustments and putting additional controls in place to further strengthen our safety filters and mitigate misuse of the system.” While this measure has stopped the AI tool from accepting certain prompts, social engineers, hackers, and other bad actors might still find loopholes and generate such images using other keywords.

According to a separate CNBC report, all of these highlighted prompts were surfaced by Shane Jones, a Microsoft engineer who wrote a letter to both the FTC and the company’s board of directors last week expressing his concerns about the DALL-E 3-powered AI tool. Jones has reportedly been sharing his concerns and findings about the AI generating inappropriate images with the company through internal channels since December 2023.

Later, he even made a public post on LinkedIn asking OpenAI to take down the latest iteration of DALL-E for investigation, but Microsoft allegedly asked him to remove the post. The engineer has also reached out to and met with US senators regarding the issue.


Affiliate links may be automatically generated – see our ethics statement for details.
