
Microsoft engineer who raised concerns about Copilot image creator pens letter to the FTC



Microsoft engineer Shane Jones raised concerns about OpenAI’s DALL-E 3 back in January, suggesting the product has security vulnerabilities that make it easy to create violent or sexually explicit images. He also alleged that Microsoft’s legal team blocked his attempts to alert the public to the issue. Now, he has taken his complaint directly to the FTC.

“I have repeatedly urged Microsoft to remove Copilot Designer from public use until better safeguards could be put in place,” Jones wrote in a letter to FTC Chair Lina Khan. He noted that Microsoft “refused that recommendation,” so now he’s asking the company to add disclosures to the product to alert consumers to the alleged danger. Jones also wants the company to change the rating on the app to make sure it’s only for adult audiences. Copilot Designer’s Android app is currently rated “E for Everyone.”

Microsoft continues “to market the product to ‘Anyone. Anywhere. Any Device,’” he wrote, a phrase recently used by company CEO Satya Nadella. Jones penned a separate letter to the company’s board of directors, urging them to begin “an independent review of Microsoft’s responsible AI incident reporting processes.”


A sample image (a banana couch) generated by DALL-E 3 (OpenAI)

This all boils down to whether Microsoft’s implementation of DALL-E 3 can create violent or sexual imagery despite the guardrails put in place. Jones says it’s all too easy to “trick” the platform into making the grossest stuff imaginable. The engineer and red teamer says he regularly witnessed the software whip up unsavory images from innocuous prompts. The prompt “pro-choice,” for instance, created images of demons feasting on infants and Darth Vader holding a drill to the head of a baby. The prompt “car accident” generated pictures of sexualized women, alongside violent depictions of automobile crashes. Other prompts created images of teens holding assault rifles, kids using drugs and pictures that ran afoul of copyright law.

These aren’t just allegations. CNBC was able to recreate just about every scenario that Jones called out using the standard version of the software. According to Jones, many consumers are encountering these issues, but Microsoft isn’t doing much about it. He alleges that the Copilot team receives more than 1,000 daily product feedback complaints, but that he’s been told there aren’t enough resources available to fully investigate and solve these problems.

“If this product starts spreading harmful, disturbing images globally, there’s no place to report it, no phone number to call and no way to escalate this to get it taken care of immediately,” he told CNBC.

OpenAI told Engadget back in January when Jones issued his first complaint that the prompting technique he shared “does not bypass security systems” and that the company has “developed robust image classifiers that steer the model away from generating harmful images.”

A Microsoft spokesperson added that the company has “established robust internal reporting channels to properly investigate and remediate any issues,” going on to say that Jones should “appropriately validate and test his concerns before escalating it publicly.” The company also said that it’s “connecting with this colleague to address any remaining concerns he may have.” However, that was in January, so it looks like Jones’ remaining concerns were not properly addressed. We reached out to both companies for an updated statement.

This is happening just after Google’s Gemini chatbot encountered its own image generation controversy. The bot was found to be generating historically inaccurate images, like Native American Catholic Popes. Google disabled the image generation platform while it worked on a fix.

This article contains affiliate links; if you click such a link and make a purchase, we may earn a commission.
