We need to know how AI firms fight deepfakes


When people fret about artificial intelligence, it’s not just due to what they see in the future but what they remember from the past — notably the toxic effects of social media. For years, misinformation and hate speech evaded Facebook and Twitter’s policing systems and spread around the globe. Now deepfakes are infiltrating those same platforms, and while the platforms are still responsible for how that content gets distributed, the AI companies whose tools create it have a clean-up role too. Unfortunately, just like the social media firms before them, they’re carrying out that work behind closed doors.

I reached out to a dozen generative AI firms whose tools could generate photorealistic images, videos, text and voices, to ask how they made sure that their users complied with their rules. Ten replied, all confirming that they used software to monitor what their users churned out, and most said they had humans checking those systems too. Hardly any agreed to reveal how many humans were tasked with overseeing those systems.

And why should they? Unlike other industries like pharmaceuticals, autos and food, AI companies have no regulatory obligation to divulge the details of their safety practices. They, like social media firms, can be as mysterious about that work as they want, and that will likely remain the case for years to come. Europe’s upcoming AI Act has touted “transparency requirements,” but it’s unclear if it will force AI firms to have their safety practices audited in the same way that car manufacturers and foodmakers do.

For those other industries, it took decades to adopt strict safety standards. But the world can’t afford for AI tools to have free rein for that long when they’re evolving so rapidly. Midjourney recently updated its software to generate images that were so photorealistic they could show the skin pores and fine lines of politicians. At the start of a huge election year, when close to half the world will go to the polls, a gaping regulatory vacuum means AI-generated content could have a devastating impact on democracy, women’s rights, the creative arts and more.

Here are some ways to address the problem. One is to push AI companies to be more transparent about their safety practices, which starts with asking questions. When I reached out to OpenAI, Microsoft, Midjourney and others, I made the questions simple: how do you enforce your rules using software and humans, and how many humans do that work?

Most were willing to share several paragraphs of detail about their processes for preventing misuse (albeit in vague public-relations speak). OpenAI, for instance, had two teams of people helping to retrain its AI models to make them safer or react to harmful outputs. The company behind the controversial image generator Stable Diffusion said it used safety “filters” to block images that broke its rules, and that human moderators checked prompts and images that got flagged.
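To make that division of labor concrete, here is a minimal, purely illustrative sketch of the “automated filter plus human review” pattern these companies describe. The classifier, the thresholds and the queue are hypothetical stand-ins, not any firm’s actual system.

```python
# Illustrative sketch only: a generic "filter plus human review" moderation flow.
# All names, thresholds and the scoring function are hypothetical.
from dataclasses import dataclass, field
from typing import Callable, List


@dataclass
class ModerationQueue:
    """Holds items that automated filters flag for a human moderator."""
    pending: List[dict] = field(default_factory=list)

    def enqueue(self, item: dict) -> None:
        self.pending.append(item)


def moderate(prompt: str, output: str,
             risk_score: Callable[[str, str], float],
             queue: ModerationQueue,
             block_threshold: float = 0.9,
             review_threshold: float = 0.5) -> str:
    """Return 'blocked', 'needs_review' or 'allowed' for a generated output."""
    score = risk_score(prompt, output)   # e.g. a safety classifier's risk estimate
    if score >= block_threshold:         # clear violation: block automatically
        return "blocked"
    if score >= review_threshold:        # borderline: route to a human moderator
        queue.enqueue({"prompt": prompt, "output": output, "score": score})
        return "needs_review"
    return "allowed"                     # low risk: let it through


if __name__ == "__main__":
    queue = ModerationQueue()
    # Stand-in scorer; a real system would call a trained safety classifier.
    dummy_scorer = lambda p, o: 0.7
    print(moderate("example prompt", "example generated image", dummy_scorer, queue))
    print(f"{len(queue.pending)} item(s) waiting for human review")
```

The point of the sketch is simply that the software only triages; the borderline cases still land in a queue that a person has to work through, which is why the number of humans matters.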

Only a few companies, however, disclosed how many humans worked to oversee those systems. Think of these humans as internal safety inspectors. In social media they are known as content moderators, and they’ve played a challenging but critical role in double-checking the content that social media algorithms flag as racist, misogynist or violent. Facebook has more than 15,000 moderators to maintain the integrity of the site without stifling user freedoms. It’s a delicate balance that humans do best.

Sure, with their built-in safety filters, most AI tools don’t churn out the kind of toxic content that people do on Facebook. But they could still make themselves safer and more trustworthy if they hired more human moderators. Software for catching harmful content has so far proved lacking, and humans remain the best stopgap.

Pornographic deepfakes of Taylor Swift and voice clones of President Joe Biden and other international politicians have gone viral, to name just a few examples, underscoring that AI and tech companies aren’t investing enough in safety. Admittedly, hiring more humans to help them enforce their rules is like getting more buckets of water to put out a house fire. It might not solve the whole problem but it will make it temporarily better. 

“If you’re a startup building a tool with a generative AI component, hiring humans at various points in the development process is somewhere between very wise and vital,” says Ben Whitelaw, the founder of Everything in Moderation, a newsletter about online safety.     

Several AI firms admitted to having just one or two human moderators. The video-generation firm Runway said its own researchers did that work. Descript, which makes a voice-cloning tool called Overdub, said it only checked a sample of cloned voices to make sure they matched a consent statement read out by customers. The startup’s spokeswoman argued that checking customers’ cloned voices more thoroughly would invade their privacy.
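For illustration only, here is a sketch of what “checking a sample” might look like in practice: randomly routing a small share of newly created voice clones to a human who compares them against the customer’s recorded consent statement. The 5% rate, the function names and the workflow are assumptions, not Descript’s actual process.

```python
# Illustrative sketch only: spot-checking a random sample of new voice clones
# against a recorded consent statement. The sample rate and names are assumptions.
import random
from typing import Iterable, List, Optional


def sample_for_consent_review(clone_ids: Iterable[str],
                              sample_rate: float = 0.05,
                              seed: Optional[int] = None) -> List[str]:
    """Pick a random subset of clone IDs for a human reviewer to compare
    against the customer's recorded consent statement."""
    rng = random.Random(seed)
    return [cid for cid in clone_ids if rng.random() < sample_rate]


if __name__ == "__main__":
    new_clones = [f"clone_{i}" for i in range(200)]
    flagged = sample_for_consent_review(new_clones, sample_rate=0.05, seed=42)
    print(f"{len(flagged)} of {len(new_clones)} clones routed to a human reviewer")
```

A sampling approach like this scales cheaply, but by design it means most cloned voices are never heard by a person at all.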

AI companies have unparalleled freedom to conduct their work in secret. But if they want to earn the trust of the public, regulators and civil society, it’s in their interests to pull back more of the curtain and show how, exactly, they enforce their rules. Hiring some more humans wouldn’t be a bad idea either. Too much focus on racing to make AI “smarter” so that fake photos look more realistic, or text more fluent, or cloned voices more convincing, threatens to drive us deeper into a hazardous, confusing world. Better to bulk up and reveal those safety standards now, before it all gets much harder.
