Techno Blender
Digitally Yours.

Here’s How the European Union Will Regulate Advanced AI Models Like OpenAI’s ChatGPT



 The European Union has reached a preliminary deal that would limit how advanced AI models such as the one behind OpenAI’s ChatGPT can operate, a key part of what is seen as the world’s first comprehensive artificial intelligence regulation.

All developers of general-purpose AI systems – powerful models with a wide range of possible uses – must meet basic transparency requirements unless they are provided free and open-source, according to an EU document seen by Bloomberg.

These include:

  • Having an acceptable-use policy
  • Keeping up-to-date information on how they trained their models
  • Reporting a detailed summary of the data used to train their models
  • Having a policy to respect copyright law

Models deemed to pose a “systemic risk” would be subject to additional rules, according to the document. The EU would determine that risk based on the amount of computing power used to train the model, with the threshold set at more than 10 septillion (10^25) floating-point operations used during training.
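As a rough illustration of how such a compute threshold could be checked, the sketch below estimates a model's training compute with the commonly used 6 × parameters × training-tokens approximation and compares it against the reported 10^25 FLOP figure. The model sizes in the example are hypothetical, not drawn from the article or any official designation.

```python
# Illustrative sketch only: the 6*N*D heuristic is a widely used rough
# estimate of training compute, not the EU's official methodology.
SYSTEMIC_RISK_THRESHOLD = 10**25  # floating-point operations, per the draft deal


def estimated_training_flops(params: float, tokens: float) -> float:
    """Approximate total training compute as 6 * parameters * tokens."""
    return 6 * params * tokens


def exceeds_threshold(params: float, tokens: float) -> bool:
    """Would a model of this (hypothetical) size cross the reported threshold?"""
    return estimated_training_flops(params, tokens) > SYSTEMIC_RISK_THRESHOLD


# A hypothetical 1-trillion-parameter model trained on 10 trillion tokens:
print(exceeds_threshold(1e12, 10e12))  # 6e25 FLOPs -> True
# A hypothetical 7-billion-parameter model trained on 2 trillion tokens:
print(exceeds_threshold(7e9, 2e12))    # 8.4e22 FLOPs -> False
```

Under this heuristic, only models trained at very large scale would cross the line, which is consistent with the article's point that few current systems qualify automatically.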

Currently, the only model that would automatically meet this threshold is OpenAI’s GPT-4, according to experts. The EU’s executive arm can designate others depending on the size of the data set, whether they have at least 10,000 registered business users in the EU, or the number of registered end-users, among other possible metrics.

These highly capable models should sign on to a code of conduct while the European Commission works out more harmonized and longstanding controls. Those that don’t sign will have to prove to the commission that they’re complying with the AI Act. The exemption for open-source models doesn’t apply to those deemed to pose a systemic risk.

These models would also have to:

  • Report their energy consumption
  • Perform red-teaming, or adversarial tests, either internally or externally
  • Assess and mitigate possible systemic risks, and report any incidents
  • Ensure they’re using adequate cybersecurity controls
  • Report the information used to fine-tune the model, and their system architecture
  • Conform to more energy-efficient standards if and when they are developed

The tentative deal still needs to be approved by the European Parliament and the EU’s 27 member states. France and Germany have previously voiced concerns that too much regulation of general-purpose AI models risks killing off European competitors like France’s Mistral AI or Germany’s Aleph Alpha.

For now, Mistral will likely not need to meet the general-purpose AI controls because the company is still in the research and development phase, Spain’s secretary of state for digitalization and AI, Carme Artigas, said early Saturday.





