
US to propose massive changes to how AI companies report safety tests



The US Biden administration now appears determined to go after deepfakes ahead of the 2024 US elections. This comes just days after Taylor Swift’s deepfakes went viral across social media platforms.

Taylor Swift’s influence, it seems, extends beyond economics to how the US government regulates AI. After the pop star’s AI-generated pornographic deepfakes went viral, social media platforms have been scrambling to stop the dissemination of these images.

Now, the White House too is stepping in, discussing new regulations to take on deepfakes. The plan is to have AI companies adhere to certain standards and regulations in order to keep public safety in check.

The Biden administration is set to implement a requirement compelling major AI system developers to disclose their safety test results to the government.

This initiative is part of the progress review on the executive order signed by President Joe Biden three months ago, aimed at effectively managing the rapidly evolving technology.

Scheduled for Monday, the White House AI Council will review progress on the 90-day goals outlined in the executive order. A crucial aspect of this mandate, under the Defense Production Act, is that AI companies must share essential information, including safety test results, with the Commerce Department.

White House Special Adviser on AI, Ben Buchanan, emphasised the government’s intention to ensure the safety of AI systems before their release to the public. He stated in an interview, “The president has been very clear that companies need to meet that bar.”

While software companies have committed to specific categories for safety tests, there is currently no common standard for these tests.

To address this, the National Institute of Standards and Technology, as outlined in Biden’s October order, will develop a uniform framework for assessing safety.

Recognizing AI’s significance in economic and national security considerations, the Biden administration is actively engaging with congressional legislation and collaborating with other countries and the European Union to establish rules for managing AI technology.

The Commerce Department has taken a step further by developing a draft rule concerning US cloud companies that provide servers to foreign AI developers.

Nine federal agencies, including the Departments of Defense, Transportation, Treasury, and Health and Human Services, have conducted risk assessments related to AI’s use in critical national infrastructure, such as the electric grid. In addition, the government has increased its hiring of AI experts and data scientists within federal agencies to effectively navigate the transformative effects and potential risks associated with AI.

Buchanan clarified the administration’s stance, stating, “We’re not trying to upend the apple cart there, but we are trying to make sure the regulators are prepared to manage this technology.”

(With inputs from agencies)

