OpenAI’s New Watermark Will Make Our Fake Image Problems Worse

OpenAI CEO Sam Altman
Photo: Mike Coppola / Staff (Getty Images)

OpenAI announced Tuesday that it’s adding watermarks to images generated by its AI tools, an effort to combat growing fears over the coming deepfake tsunami. Images spun up with DALL-E and other OpenAI services will include a visual watermark plus details about their origin in the metadata, information encoded in the generated file. Here’s the problem: all you have to do to remove the metadata watermark is take a screenshot. That means OpenAI’s “solution” could leave you more confused, not less, once it goes into effect.
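
To see why, it helps to know where that information lives. Below is a minimal sketch in Python, assuming the Pillow imaging library and a hypothetical file called dall-e.png, of the general mechanism: provenance data rides alongside the pixels inside the file, so rebuilding the image from pixel values alone, which is effectively what a screenshot does, leaves all of it behind. (C2PA credentials sit in their own metadata segment rather than standard EXIF, but the failure mode is the same.)

from PIL import Image

# Open a generated image; "dall-e.png" is a hypothetical filename.
original = Image.open("dall-e.png")
print(original.info)             # metadata chunks embedded in the file, if any
print(dict(original.getexif()))  # EXIF entries, if any

# Rebuild the image from pixel values alone, roughly what a screenshot does.
pixels_only = Image.new("RGB", original.size)
pixels_only.putdata(list(original.convert("RGB").getdata()))
pixels_only.save("screenshot_like.png")

# The pixel-for-pixel copy carries none of the original metadata.
print(Image.open("screenshot_like.png").info)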

Imagine you’re looking at a suspicious image. If you check and discover the AI watermark, case closed. But if you’re looking at an AI-generated image that’s had its watermark removed, checking the metadata could give you a false sense of security: the absence of a watermark doesn’t prove an image is real, yet that’s exactly how a clean result reads. In other words, looking for the watermark could actually leave you with less information than when you started.

OpenAI itself notes that people might even remove the watermark by accident. When you upload an image to social media, most platforms automatically strip metadata from the file because it can reveal a user’s personal information. So when you post one of your AI creations on Instagram, you might unwittingly cause a fake image fiasco.
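
That strip-by-default behavior is easy to reproduce. As a rough illustration, again assuming Python with Pillow and a hypothetical file called upload.jpg: most image libraries, like most platforms, discard metadata on re-encode unless it is explicitly carried over.

from PIL import Image

# "upload.jpg" is a hypothetical photo carrying EXIF metadata.
img = Image.open("upload.jpg")
print(len(img.getexif()))   # number of EXIF entries in the original file

# Re-encoding drops the metadata unless it is passed back explicitly,
# the same default many platforms apply to uploads server-side.
img.save("reencoded.jpg", quality=85)
print(len(Image.open("reencoded.jpg").getexif()))  # 0, the metadata is gone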

OpenAI images will include a visual watermark and details about the source of the image embedded in the files.
Graphic: OpenAI

The company maintains this is still a good idea. “We believe that adopting these methods for establishing provenance and encouraging users to recognize these signals are key to increasing the trustworthiness of digital information,” OpenAI wrote in a blog post. However, the company admits that the watermark “is not a silver bullet.” OpenAI did not immediately respond to a request for comment.

OpenAI can’t take all the blame here. The company is adopting a new standard developed by the Coalition for Content Provenance and Authenticity (C2PA), an initiative spearheaded by Adobe in partnership with a variety of companies including Arm, the BBC, Intel, Microsoft, the New York Times, and X/Twitter. Meta announced it will add its own tags to AI-generated images, though it’s not clear exactly how the company plans to integrate the C2PA standard.
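
What C2PA actually embeds is a signed manifest recording who made the image and how. The Python dictionary below is a loose, illustrative sketch of that record’s shape, not the spec’s real schema; actual manifests live in a binary (JUMBF) container and carry a cryptographic signature.

# A simplified, illustrative sketch of a C2PA-style provenance record.
# Field names are condensed for readability; this is not the spec schema.
manifest = {
    "claim_generator": "DALL-E",  # which tool produced the image
    "assertions": [
        {
            "label": "c2pa.actions",  # a real C2PA assertion label
            "data": {"actions": [{"action": "c2pa.created"}]},
        },
    ],
    "signature": "<issuer-signed payload binding the claim to the image content>",
}

Because the signature is bound to the image content, tampering with the pixels invalidates the manifest. What nothing prevents is deleting the record wholesale, which is exactly the gap a screenshot exploits.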

You can already run an image through Content Credentials Verify, a checking tool built on the C2PA standard. Just don’t assume you’re safe if your image comes up clear.


