Techno Blender
Digitally Yours.

How to tell if a photo is AI-generated or real? There may be solution soon



Telling whether a picture is AI-created or real has become a tough task over the past few months. In fact, chances are that many of us have already been fooled by AI-generated content. However, the days of this deception may be numbered: a coalition of technology giants and startups has pledged to watermark content produced by AI.

Seven leading AI companies in the United States have agreed to voluntary safeguards on the technology’s development, pledging to manage the risks of the new tools. The seven companies — Amazon, Anthropic, Google, Inflection, Meta, Microsoft and OpenAI — formally announced their commitment to new standards in the areas of safety, security and trust at a meeting with President Joe Biden at the White House on July 21.

The announcement comes as the companies are racing to outdo each other with versions of AI that offer powerful new ways to create text, photos, music and video without human input. But the technological leaps have prompted fears about the spread of disinformation and dire warnings of a “risk of extinction” as self-aware computers evolve.

AI images to get watermark
The companies have committed to developing robust technical mechanisms, such as a watermarking system, to ensure that users know when content is AI-generated. This action enables creativity with AI to flourish while reducing the dangers of fraud and deception.

These companies will also research risks such as bias, discrimination and the invasion of privacy. Another key commitment is to invest in cybersecurity and insider-threat safeguards to protect proprietary and unreleased model weights. Model weights are the most essential part of an AI system, and the companies agree that it is vital they be released only when intended and only after security risks have been considered.

Read Also

OpenAI announces ChatGPT Android app: All the details
How Meta is taking the road less travelled with AI and why it may just pay off

AI photos that shook the world
In May, US markets plunged briefly after a fake image of the Pentagon shrouded in smoke went viral on social media. Another showed former US President Donald Trump running from police while carrying an assault rifle. In yet another AI-created image, Trump was seen hugging and kissing Anthony Fauci, the White House’s former chief medical advisor.






