
AI regulation might prompt OpenAI to remove ChatGPT from Europe



OpenAI CEO Sam Altman recently warned that he would have no qualms about removing ChatGPT from Europe if legislation designed to regulate AI becomes law. The legislation in question is the AI Act, which includes several provisions that Altman argues are overly broad and overreaching.

“The current draft of the EU AI Act would be over-regulating,” Altman said in remarks picked up by Reuters. “But we have heard it’s going to get pulled back,” he added.

AI has been around for a long time, but now that powerful user-facing AI apps are all the rage — from ChatGPT to Midjourney — lawmakers believe regulatory safeguards are necessary. Notably, many revered figures in the tech industry have also expressed concern about the power of AI to cause mayhem.

Just recently, for example, former Google CEO Eric Schmidt said that unfettered access to powerful AI poses an “existential risk.”

“There are scenarios not today, but reasonably soon, where these systems will be able to find zero-day exploits in cyber issues, or discover new kinds of biology,” Schmidt said.

Meanwhile, there have already been a few examples that illustrate the havoc AI can cause. You might recall the viral AI-generated photo of the Pope wearing a Balenciaga jacket from a few months ago. And just this week, an AI-generated image of a fire at the Pentagon went viral.

As for the safeguards the EU wants to implement, OpenAI would have to adhere to an array of “design, information and environmental requirements.”

A press release on the matter reads in part:

Generative foundation models, like GPT, would have to comply with additional transparency requirements, like disclosing that the content was generated by AI, designing the model to prevent it from generating illegal content and publishing summaries of copyrighted data used for training.

Other provisions would require OpenAI to disclose the training methods it uses to make ChatGPT as powerful as it is. OpenAI, of course, isn’t happy with any of this.

“Either we’ll be able to solve those requirements or not,” Altman told TIME. “If we can comply, we will, and if we can’t, we’ll cease operating. We will try. But there are technical limits to what’s possible.”




