
OpenAI’s Sora Is a Giant ‘F*ck You’ to Reality

Everybody knows that online disinformation is a huge problem—one that has arguably torn communities apart, manipulated elections, and caused certain segments of the global population to lose their minds. Of course, nobody seems particularly concerned about actually fixing this problem. In fact, the institutions most responsible for online disinformation (and thus, the ones most well-placed to do something about it)—that is to say, tech companies—seem intent on doing everything they can to make the problem exponentially worse.

Case in point: OpenAI launched Sora, its new text-to-video generator, on Thursday. The model is designed to allow web users to generate high-quality AI videos from nothing but a text prompt. The application is currently wowing the internet with its bizarre variety of visual imagery—whether that’s a Chinese New Year parade, a guy running backward on a treadmill in the dark, a cat in a bed, or two pirate ships swirling around in a coffee cup.

At this point, despite its “world-changing” mission, it could be argued that OpenAI’s biggest contribution to the internet has been the instantaneous generation of countless terabytes of digital crap. All of the company’s open and public tools are content generators, the likes of which, experts have warned, are primed to be used in fraud and disinformation campaigns.

In its blog post about Sora, OpenAI’s team openly acknowledges that there could be downsides to its new app. The company said that it may implement watermarking technologies to flag content its generator has created, and that it’s in the process of consulting with knowledgeable people to figure out how to make the inevitable deluge of AI-generated crap that Sora will unleash slightly less toxic. The statement notes:

We’ll be engaging policymakers, educators and artists around the world to understand their concerns and to identify positive use cases for this new technology. Despite extensive research and testing, we cannot predict all of the beneficial ways people will use our technology, nor all the ways people will abuse it.

This sort of framing of the problem is hilarious because it’s already totally obvious how OpenAI’s new tool will be abused. It will be used to generate fake content on a gargantuan scale—some of which will likely be used for the purposes of online political disinformation, some of which will be used to aid in a variety of fraud and scams, and some of which will be filled with “violence, sexual content” and “hateful imagery,” as the company has already put it. All of that content is going to flood social media channels, making it harder for everyday people to distinguish between what’s real and what’s fake, and making the internet, in general, a whole lot more annoying. I don’t think it requires a global panel of experts to figure that out.

There are a number of other obvious downsides. For one thing, Sora—and others of its ilk—probably won’t have the greatest environmental impact. Researchers have shown that text-to-image generators are significantly worse, environmentally speaking, than text generators, and that creating a single AI image can take about as much energy as fully charging a smartphone. For another thing, new text-to-video generation technologies will likely hurt the video creator economy, because why should companies pay people to make visual content when all that’s necessary to create a video is clicking a button?

As far as the corporate class in this country goes, nothing really matters except money. Fuck the environment, fuck artists, fuck an internet that is disinformation-free, fuck the health of political discourse, fuck anything that gets in the way of the profit motive. Anything that can be squeezed to make money should be squeezed, even if it’s a software program whose only real utility is that it can generate a video of a cowboy hamster riding a dragon. As one X user put it: “This is what the morons sacrifice the environment for. Stupid. Shit. Like. This.”

