
White House pushes tech CEOs to limit risks of AI



The White House on Thursday pushed Silicon Valley CEOs to limit the risks of artificial intelligence, in the administration’s most visible effort to confront rising questions and calls to regulate the rapidly advancing technology.

For roughly two hours in the White House’s Roosevelt Room, Vice President Kamala Harris and other officials told the leaders of Google; Microsoft; OpenAI, the maker of the popular ChatGPT chatbot; and Anthropic, an AI startup, to seriously consider concerns about the technology. President Joe Biden also briefly stopped by the meeting.

“What you’re doing has enormous potential and enormous danger,” Biden told the executives.

It was the first White House gathering of major AI CEOs since the release of tools such as ChatGPT, which have captivated the public and supercharged a race to dominate the technology.

“The private sector has an ethical, moral and legal responsibility to ensure the safety and security of their products,” Harris said in a statement. “And every company must comply with existing laws to protect the American people.”

The meeting signified how the AI boom has entangled the highest levels of the U.S. government and put pressure on world leaders to get a handle on the technology. Since OpenAI released ChatGPT to the public last year, many of the world’s biggest tech companies have rushed to incorporate chatbots into their products and accelerated AI research. Venture capitalists have poured billions of dollars into AI startups.



But the AI explosion has also raised fears about how the technology might transform economies, shake up geopolitics and bolster criminal activity. Critics have worried that powerful AI systems are too opaque, with the potential to discriminate, displace people from jobs, spread disinformation and perhaps even break the law on their own. Even some of the makers of AI have warned against the technology’s consequences. This week, Geoffrey Hinton, a pioneering researcher who is known as a “godfather” of AI, resigned from Google so he could speak openly about the risks posed by the technology.

Biden recently said that it “remains to be seen” whether AI is dangerous, and some of his top appointees have pledged to intervene if the technology is used in a harmful way. Members of Congress, including Sen. Chuck Schumer of New York, the majority leader, have also moved to draft or propose legislation to regulate AI.

That pressure to regulate the technology has been felt in many places around the world. Lawmakers in the European Union are in the midst of negotiating rules for AI, although it is unclear how their proposals will ultimately cover chatbots like ChatGPT. In China, authorities recently demanded that AI systems adhere to strict censorship rules.

“Europe certainly isn’t sitting around, nor is China,” said Tom Wheeler, a former chair of the Federal Communications Commission. “There is a first mover advantage in policy as much as there is a first mover advantage in the marketplace.”

Wheeler said all eyes are on what actions the United States might take.

“We need to make sure that we are at the table as players,” he said. “Everybody’s first reaction is, ‘What’s the White House going to do?’”

Yet even as governments call for tech companies to make their products safe, AI companies and their representatives have pointed back at governments, saying elected officials need to set the rules for the fast-growing field.

Attendees at Thursday’s meeting included Google’s CEO Sundar Pichai; Microsoft’s CEO Satya Nadella; OpenAI’s CEO Sam Altman; and Anthropic’s CEO Dario Amodei. Some of the executives were accompanied by aides with technical expertise, while others brought public policy experts, an administration official said.

Google, Microsoft and OpenAI declined to comment after the White House meeting. Anthropic did not immediately respond to requests for comment.

“The president has been extensively briefed on ChatGPT and knows how it works,” White House press secretary Karine Jean-Pierre said at Thursday’s briefing.

The White House said it had impressed on the companies that they should address the risks of new AI developments. In a statement after the meeting, the administration said there had been “frank and constructive discussion” about the desire for the companies to be more open about their products, the need for AI systems to be subjected to outside scrutiny and the importance that those products be kept away from bad actors.

“Given the role these CEOs and their companies play in America’s AI innovation ecosystem, administration officials also emphasized the importance of their leadership, called on them to model responsible behavior and to take action to ensure responsible innovation and appropriate safeguards, and protect people’s rights and safety,” the White House said.

Hours before the meeting, the White House announced that the National Science Foundation plans to spend $140 million on new research centers devoted to AI. The administration also pledged to release draft guidelines for government agencies to ensure that their use of AI safeguards “the American people’s rights and safety,” adding that several AI companies had agreed to make their products available for scrutiny in August at a cybersecurity conference.

The meeting and announcements build on earlier efforts by the administration to place guardrails on AI.

Last year, the White House released what it called a blueprint for an AI bill of rights, which said that automated systems should protect users’ data privacy, shield them from discriminatory outcomes and make clear why certain actions were taken. In January, the Commerce Department also released a framework for reducing risk in AI development, which had been in the works for years.

But concrete steps to rein in the technology in the country may be more likely to come first from law enforcement agencies in Washington.

In April, a group of government agencies pledged to “monitor the development and use of automated systems and promote responsible innovation,” while punishing violations of the law committed using the technology.

In a guest essay in The New York Times on Wednesday, Lina Khan, the chair of the Federal Trade Commission, said the nation was at a “key decision point” with AI. She likened the technology’s recent developments to the birth of tech giants like Google and Facebook, and she warned that, without proper regulation, the technology could entrench the power of the biggest tech companies and give scammers a potent tool.

“As the use of AI becomes more widespread, public officials have a responsibility to ensure this hard-learned history doesn’t repeat itself,” she said.


