
The Year Ahead in Generative AI



AI image generators became more and more sophisticated through the end of 2022. They will only get better in 2023, but the technology has progressed so fast that the people asking the ethical questions have been left in the lurch.
Graphic: local_doctor (Shutterstock)

At this very moment, I could boot up OpenAI’s popular ChatGPT and ask it to write this article for me. Or I could ask it to “write a lullaby about a cool goose who just wants to honk.” I could pull up Midjourney, Stable Diffusion, DALL-E 2, or any number of AI art generators and order it to craft me a portrait of this cool goose.
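
For readers curious what such a request looks like in code, here is a minimal sketch using the openai Python package as it existed around the time this article was published (the pre-1.0 Completion and Image endpoints). ChatGPT itself did not yet expose a public API, so a GPT-3-era text model stands in for it, and the model names and parameters below are illustrative assumptions, not recommendations.

```python
# Rough sketch of the requests described above, using the openai Python
# package circa late 2022 (pre-1.0 API). Model names and parameters are
# illustrative assumptions.
import openai

openai.api_key = "YOUR_API_KEY"  # assumes you have an OpenAI API key

# Ask a GPT-3-era text model for the goose lullaby.
completion = openai.Completion.create(
    model="text-davinci-003",
    prompt="Write a lullaby about a cool goose who just wants to honk.",
    max_tokens=200,
)
print(completion.choices[0].text.strip())

# Ask DALL-E for a portrait of the same goose.
image = openai.Image.create(
    prompt="A portrait of a cool goose who just wants to honk",
    n=1,
    size="512x512",
)
print(image["data"][0]["url"])
```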

Then come the ethical conundrums: Could AI write and illustrate a children’s book about the goose for me? Could I sell that book, even though I put much less effort into it than any children’s author would? Who should own the copyright? What about the rights of the authors whose work trained the AI that created my book?

AI technology is developing rapidly, but it’s not clear how close we are to generative AI models regularly producing final, release-ready products. People in the art scene have already raised concerns over how generative AI systems, particularly GPT and diffusion-based machine learning models, will disrupt their profession. But AI-generated content could be a big deal in many other industries, affecting everyone from programmers to copywriters. So how long until all our proverbial gooses are metaphorically cooked?

To answer these questions, I spoke to three people who all work directly with AI but have varying takes on the technology:

  • Irene Solaiman got deep into newfangled AI systems in 2019 as a researcher and public policy manager at OpenAI, a company that has since become one of the biggest names in the generative AI scene. She was there for the release of the GPT2 language model and the API for GPT3, and wrote one of the first toxicity safety reports on GPT3. She also worked for Zillow on the ethics of housing predictive models. Now, as policy director at the machine learning resource platform Hugging Face, she spends quite a bit of time thinking about how this technology will grow, and how companies can ethically direct its progress.
  • Alfred Wahlforss is a Harvard graduate student studying data science. He also helped develop the app BeFake, which used the open source Stable Diffusion model alongside the Google-designed DreamBooth technique to create fake selfies based on users’ images (a rough sketch of that kind of pipeline appears after this list). Wahlforss is very bullish on AI and wants to see the technology pushed as far as it can go.
  • Margaret “Meg” Mitchell is an AI researcher with a storied legacy in the short time that generative AI has been around. Though she got her start doing research in natural language processing, she would go on to work at Microsoft and at Google Research’s machine intelligence division, where she became a lead of the company’s AI ethics work. In 2021, Google fired her after the company reportedly told employees to try and be more “positive” in their talk about the problems with AI. She has remained outspoken about the possibilities and challenges of generative AI since, and now works as the chief ethics scientist at Hugging Face.
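
To give a sense of the workflow behind an app like BeFake, here is a minimal sketch using Hugging Face’s diffusers library: it loads a Stable Diffusion checkpoint that has, hypothetically, already been fine-tuned with DreamBooth on a handful of a user’s selfies, then generates a new image of that learned subject. The checkpoint path and the “sks” subject token are placeholder assumptions, not details from the actual app.

```python
# Minimal sketch, not BeFake's actual code: generate an image from a Stable
# Diffusion checkpoint that has already been fine-tuned with DreamBooth on a
# user's selfies. The local path and the "sks" subject token are hypothetical.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "./dreambooth-finetuned-selfies",  # hypothetical fine-tuned checkpoint
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")

prompt = "a photo of sks person hiking in the Alps, golden hour, 35mm film"
image = pipe(prompt, num_inference_steps=30, guidance_scale=7.5).images[0]
image.save("fake_selfie.png")
```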

AI models exacerbate ownership concerns 

As AI gets better at generating visual and written content, there’s a real risk that it could undermine creators whose livelihoods depend on their ability to generate content.

Some companies are already using cheap AI art instead of paying for the real thing. In December, science fiction and fantasy publisher Tor was put on blast after sharing the cover of an upcoming book that turned out to be AI art purchased off a stock image site. The work was uncredited, and even the person who touched up the image for the cover went unnamed.

Creators including Polish fantasy artist Greg Rutkowski have come out against AI art, fearing their personal brands will be squashed by the proliferation of AI art specifically meant to imitate their work. Rutkowski has supported plans from groups like the Concept Artists Association, which wants to lobby for updates to IP and data privacy laws.

Stability AI, the company behind the open source text-to-image diffusion model Stable Diffusion, has made a few concessions to artists over concerns that their work is being stolen or copied by AI. With its release of Stable Diffusion 2, the company introduced changes that make it more difficult to create images of celebrities, to craft porn, or to create art “in the style of” real-world artists. Some fans of the more permissive open source model were none too happy with the changes. Stability AI also announced it was working with the company Spawning to allow artists to “opt out” of having their work used in the training of Stable Diffusion 3, which will likely be released in 2023.
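
Stability’s restrictions were implemented at the training-data and model level, not as a simple text filter, but a deliberately simplified, hypothetical sketch of an application-layer guardrail shows the general shape such restrictions can take: check the prompt before it ever reaches the model. The blocklist contents here are placeholders.

```python
# Hypothetical, deliberately simplified guardrail. This is NOT how Stability AI
# implemented its changes (those live in the training data and the model); it
# only illustrates an application-layer prompt filter for restricted styles.
BLOCKED_STYLE_TARGETS = {"greg rutkowski", "some other opted-out artist"}

def check_prompt(prompt: str) -> str:
    """Reject prompts that ask for a restricted artist's style."""
    lowered = prompt.lower()
    for name in BLOCKED_STYLE_TARGETS:
        if name in lowered:
            raise ValueError(f"Prompt references a restricted artist: {name}")
    return prompt

# check_prompt("a castle at dawn in the style of greg rutkowski") raises an
# error, while a prompt with no restricted names passes through unchanged.
```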

Though Stability AI has made some stated efforts to disable AI’s ability to explicitly copy work, there’s a genuine sense among small artists who depend on art and portrait commissions that they’ll suffer as AI art generators become even more popular.

That’s not to say things are dire just yet. Mitchell said some artists are seeing benefits from current AI models, which can speed up their work by creating a baseline they can build on.

“Some of the specific artists who are speaking out about their work being stolen, I think, will also be artists that will potentially become even more valued as the actual artists, potentially even driving up their sales, or at least driving up the cost of their original pieces of work,” she said.

Tackling AI lies and biases

In November of 2022, Meta’s Galactica AI had to be pulled two days after release since researchers found it was propagating lies and misinformation.

“There’s such a drive and push to get things out and sort of ask for forgiveness later or not even ask for forgiveness,” Mitchell said. “The incentive for these safer structures is fear of bad PR, or wanting to get good PR, and what that incentivizes is the bare minimum.”

Emad Mostaque, the CEO of Stability AI, has been publicly resistant to, in his words, “all barriers from anyone being able to create anything they can imagine.” And even when Stability AI sets limits on output, there’s nothing that stops users from modifying the open source model and removing them. Mostaque, a former hedge fund manager, is bullish about pushing the technology as far as it will go.

But pushing too far too fast can expose a system’s internal biases. It’s almost a certainty that our current generative AI systems will grow more sophisticated and more lifelike, and the better a model is, the more it can be used to promote racism or hate speech.

Many of the companies in the space seem to understand how bad the optics can get. Meta’s BlenderBot language model seems to get very weird when talking about racism. Google has been hesitant to release its LaMDA chatbot to the public, in part, for this reason, but that only means the company is being beaten to the punch by smaller outfits that are being less careful.

“All generative models have harmful biases,” Solaiman said. “It’s the sort of garbage in, garbage out idiom. With improved quality of output comes increased risk of harm.”

Who will stand up to our new AI overlords?

Mitchell didn’t mince words regarding what it would take for somebody to put the brakes on the speed of AI development. “I imagine it’s going to take someone being seriously harmed — for example, a child drinking bleach and dying after they read misinformation generated by an AI — before there’s a public outcry about how these things can be harmful and problematic,” she said.

Copyright confusion is already causing legal battles. Last year, New York-based artist Kris Kashtanova claimed they received the first copyright for a graphic novel that used the AI art generator Midjourney as a baseline for the work. The problem is, it appears the copyright office did not understand the work had been generated using AI, even though it credits Midjourney on the cover. In October, Kashtanova said the copyright office notified them it wants to rescind the copyright. They have since appealed to the copyright office through Texas attorney Van Lindberg. The letter of appeal, which Lindberg shared with Gizmodo, argued Kashtanova had used Midjourney as a “tool” like any other artist might use a pencil or a brush.

Another legal battle pits software authors against AI: As detailed by Bloomberg Law, a class action lawsuit filed by two software developers states that Copilot, a code-generating tool built on OpenAI technology that is supposed to write code from scratch, is actually borrowing from and straight-up copying copyrighted code, since it was trained on open source software hosted on sites like GitHub.

More lawsuits are likely in 2023. Eric Bourdages, an artist for the game Dead by Daylight, has created a campaign on Twitter to proliferate and sell merchandise of Mickey Mouse and other copyrighted characters generated by AI. Bourdages wants to test just how far AI images can push their immunity to lawsuits, especially from the litigation-happy Disney corporation. “Legally there should be no recourse from Disney as according to the AI models TOS these images transcends copyright and the images are public domain,” he tweeted.

“Intellectual property, I think, is going to be a really big one moving forward, especially for imagery,” Solaiman said. “One of the big considerations that I think we’ll see in this openness debate is there’s a huge concentration of power, and who can train these models, not just because of the resources they have, but because of what data they have access to.”

While the act of scraping the web for all available images has routinely been found legal in the courts, Mitchell said suits like these and future litigation will inevitably put the legal framework of fair use of copyrighted material to the test. While there has been some effort to watermark images to make them unusable by AI, “there hasn’t been enough to keep up with the progress of the technology.”

Meanwhile, we’re not likely to see new U.S. legislation put a damper on AI technology. The European Union’s AI Act, first proposed in 2021, recently passed a critical stage, though that bill focuses on using AI for social scoring or on biometric identification, not on generative AI.

What’s next for generative AI?

Language generation is likely to get better and better, Mitchell said, and those systems will become much better at interpreting prompts to understand what users mean. In a year, people won’t need to collect and share prompts to get the best results, as the AI will simply be able to interpret the context of users’ phrasing.

This could also mean new models will support longer and longer stretches of dialogue within a single session. And inside a chat AI, users might be able to carry on longer, more detailed conversations. That means systems like Replika, a kind of AI-based chat meant to be a wellness aid, could become even more sophisticated and prolific.
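
As a concrete illustration of what carrying a longer conversation means mechanically, here is a minimal, generic sketch of the pattern most chat applications use: the app keeps the running transcript and resubmits it with every turn, so a model that can accept more context can simply “remember” more of what was said. The generate_reply function is a hypothetical stand-in for whatever text-generation backend is used.

```python
# Generic sketch of how chat apps typically carry conversational context: keep
# the transcript and resubmit it each turn. A larger model context window
# directly translates into a longer usable history. generate_reply() is a
# hypothetical stand-in for any text-generation backend.
from typing import Callable, List

def chat_turn(history: List[str], user_message: str,
              generate_reply: Callable[[str], str],
              max_history_chars: int = 8000) -> str:
    history.append(f"User: {user_message}")
    # Drop the oldest turns once the transcript exceeds the model's budget.
    while len("\n".join(history)) > max_history_chars and len(history) > 1:
        history.pop(0)
    reply = generate_reply("\n".join(history) + "\nAssistant:")
    history.append(f"Assistant: {reply}")
    return reply
```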

Wahlforss said that there’s significant room for improvement of AI with even bigger training sets. GPT3 was trained on what Solaiman called an “unfathomable” 43 terabytes of data, but even that ludicrous amount of data could be dwarfed by upcoming projects. LAION 5B, for example, is a 250-terabyte dataset with 5.6 billion images scraped off the web.
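
Those figures are easier to appreciate with a quick back-of-the-envelope calculation using the article’s round numbers; the exact sizes vary depending on how the datasets are counted.

```python
# Back-of-the-envelope math using the figures quoted above; exact numbers
# vary depending on how the datasets are counted.
gpt3_training_tb = 43        # terabytes of text, per Solaiman
laion_5b_tb = 250            # terabytes, per the article
laion_5b_images = 5.6e9      # images, per the article

print(f"LAION 5B is roughly {laion_5b_tb / gpt3_training_tb:.1f}x larger by raw size")

avg_bytes = laion_5b_tb * 1e12 / laion_5b_images   # 1 TB = 10**12 bytes
print(f"That averages out to about {avg_bytes / 1024:.0f} KB per image")
```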

But the advancements won’t stop there. RunwayML is already creating technology that can generate new images out of empty space based on a single picture, so, he said, we can expect even more innovations in original AI generation. Solaiman also noted that Runway has pulled together “some really impressive people to come out with a high-performance model.”

Solaiman is also following the work happening at companies like Nvidia, which is producing graphics processors that have the potential to put AI development into high gear. The company produces large DGX supercomputers made for AI, though things could grow complicated next year thanks to U.S. restrictions on export licenses of this tech to China.

Better hardware and smarter AI architecture could all compound to “really see the potential of diffusion models next year,” he said, including text-to-HD video. Current models are unavailable to the public and can barely produce short, low-resolution video, but we’re probably not far from much more complex, public-facing applications.

There’s also a lot of attention on how sophisticated OpenAI’s GPT4 language model will be. The bot has already proved useful at generating code and acting as a digital assistant. Experts say AI designed specifically for these capacities will only get better next year.

Some have even proposed ChatGPT as an alternative for Google search going into next year, though Mitchell pointed out nobody has figured out how to monetize the AI model. Google relies on ads placed in search, but would OpenAI give companies more prominence in the AI’s algorithm? The AI industry has not yet even started to crack open the monetization question beyond simple subscription setups.

What’s worth watching?

Solaiman said she’s looking forward to more open source AI projects, especially as no one model fits every single group of people. What it may come down to between big companies and smaller firms is what data they can get access to.

“I think a lot about this compute resource gap, especially among academics, who just don’t have access to compute in the way that industry does,” she says. “This is why I’m so bullish about publicly available compute credits, hopefully by some governmental bodies, and making that more global and not just Western-centric.”

Mitchell says she’s paying attention to Character.ai, which creates chatbots made to emulate fictional characters or historical figures. It’s “ripe for the kind of use that ChatGPT has been getting,” she says. Adept.ai is also worth attention: The company is made up of ex-staff from Google, DeepMind, and Meta looking to build an AI assistant that, for instance, can automate software tasks that usually require human hands.

There’s also a need to develop new systems for use as communication aids. Mitchell’s background is in assistive technology, and she said there’s real promise for AI-driven devices to improve the lives of people including, for instance, nonverbal children living with cerebral palsy.

“We’ve seen this great growth in image generation and text generation. So that probably means speech generation is not far behind, but it’s generally been sort of lagging,” she said. “That requires a different approach to what’s happening now.”

Kyle Barr covers breaking news for Gizmodo. You can follow his coverage here, and email story ideas and tips to [email protected].