Techno Blender
Digitally Yours.

MIT researchers develop tool to battle deepfakes and AI manipulation; Know how PhotoGuard works

Deepfakes have emerged as a major talking point this year as a malicious side-effect of artificial intelligence (AI). Many bad actors have exploited the current boom in the space, using AI editing tools to create fake images of people and institutions. Multiple reports have emerged of criminals creating fake nudes of people and threatening to post the photos online unless the victims paid them. Now, a group of researchers at the Massachusetts Institute of Technology (MIT) has come up with a tool that can help combat the problem.

According to a report by MIT Technology Review, the researchers have created a tool called PhotoGuard that alters images to protect them from being manipulated by AI systems. Hadi Salman, a contributor to the research and a PhD researcher at the institute, said: “Right now, anyone can take our image, modify it however they want, put us in very bad-looking situations, and blackmail us… [PhotoGuard is] an attempt to solve the problem of our images being manipulated maliciously by these models.”

Special watermark tool to protect photos from AI

Traditional protections aren’t sufficient for identifying AI-generated images because they’re often applied like a stamp on an image and can easily be edited out.

The new protection is added as an invisible layer on top of the image. It cannot be removed by cropping or editing the photo, or even by applying filters. The layer does not interfere with the image itself, but it disrupts AI systems when bad actors try to alter the photo to create deepfakes or other manipulated versions.

It should be noted that while special watermarking techniques also exist, this approach is different: it alters the image's pixels to safeguard it. Watermarking lets users detect alterations after the fact with detection tools, whereas PhotoGuard stops people from using AI tools to tamper with images in the first place.
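To make the idea concrete, here is a minimal, hypothetical sketch of the pixel-altering approach. The real PhotoGuard perturbs an image so that a generative model's internal encoding of it shifts toward a useless target; in this toy version, the "encoder" is just a random linear map standing in for a diffusion model's image encoder, and all names and parameters are illustrative assumptions, not the researchers' actual code.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((16, 64))  # toy stand-in for an image encoder

def encode(img_flat):
    # Maps a flattened image to a toy "latent representation"
    return W @ img_flat

def immunize(img, target_latent, eps=4/255, step=1/255, iters=50):
    """Nudge img (pixel values in [0, 1]) so its latent moves toward
    target_latent, keeping every pixel within eps of the original so
    the change stays imperceptible. PGD-style signed-gradient descent."""
    orig = img.copy()
    x = img.copy()
    for _ in range(iters):
        # Gradient of 0.5 * ||encode(x) - target||^2 w.r.t. x is W.T @ (Wx - t)
        grad = W.T @ (encode(x) - target_latent)
        x -= step * np.sign(grad)                # small signed step
        x = np.clip(x, orig - eps, orig + eps)   # stay imperceptibly close
        x = np.clip(x, 0.0, 1.0)                 # stay a valid image
    return x

img = rng.random(64)          # a flattened 8x8 "photo"
target = np.zeros(16)         # push the latent toward a meaningless blob
protected = immunize(img, target)

# The pixel change is bounded by eps, but the encoder now "sees"
# something much closer to the useless target.
print(np.max(np.abs(protected - img)))
print(np.linalg.norm(encode(protected) - target)
      < np.linalg.norm(encode(img) - target))
```

An AI editing model that relies on such an encoding would then operate on a corrupted representation, producing unusable edits; the original photo looks unchanged to a human viewer.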

Interestingly, Google’s DeepMind division has also created a watermarking tool to protect images from AI manipulation. In August, the company launched SynthID, a tool for watermarking and identifying AI-generated images. The technology embeds a digital watermark directly into the pixels of an image, making it imperceptible to the human eye but detectable by identification software.

