Techno Blender

Catching bad content, and farming from space



Big Tech is surprisingly bad at catching, labeling, and removing harmful content. In theory, new advances in AI should improve our ability to do that. In practice, AI isn’t very good at interpreting nuance and context. And most automated content moderation systems were trained with English data, meaning they don’t function well with other languages.

The recent emergence of generative AI and large language models like ChatGPT means that content moderation is likely to become even harder. 

Whether generative AI ends up being more harmful or helpful to the online information sphere largely hinges on one thing: AI-generated content detection and labeling. Read the full story.

—Tate Ryan-Mosley

Tate’s story is from The Technocrat, her weekly newsletter giving you the inside track on all things power in Silicon Valley. Sign up to receive it in your inbox every Friday.

If you’re interested in generative AI, why not check out:

+ How to spot AI-generated text. The internet is increasingly awash with text written by AI software. We need new tools to detect it. Read the full story.

+ The inside story of how ChatGPT was built from the people who made it. Read our exclusive conversations with the key players behind the AI cultural phenomenon.

+ Google is throwing generative AI at everything. But experts say that releasing these models into the wild before fixing their flaws could prove extremely risky for the company. Read the full story.


