
Biden meets with experts about dangers of AI



President Biden is scheduled to meet researchers and advocates with expertise in artificial intelligence on Tuesday in San Francisco as his administration attempts to tackle potential dangers of a technology that could fuel misinformation, job losses, discrimination and privacy violations.

The meeting comes as Biden ramps up efforts to raise money for his 2024 reelection bid, including from tech billionaires. While visiting Silicon Valley on Monday, he attended two fundraisers, including one co-hosted by entrepreneur Reid Hoffman, who has numerous ties to AI businesses. The venture capitalist was an early investor in OpenAI, which built the popular ChatGPT app, and sits on the board of tech companies, including Microsoft, that are investing heavily in AI.

Experts Biden is expected to meet with Tuesday include some of Big Tech’s loudest critics. The list includes children’s advocate Jim Steyer, who founded and leads Common Sense Media; Tristan Harris, executive director and co-founder of the Center for Humane Technology; Joy Buolamwini, founder of the Algorithmic Justice League; and Fei-Fei Li, co-director of Stanford’s Human-Centered AI Institute.

Some of the experts have experience working inside major tech companies. Harris, a former Google product manager and design ethicist, has spoken out about how social media companies like Facebook and Twitter can harm people’s mental health and amplify misinformation.

Biden’s meetings with AI researchers and tech executives underscore how the president is playing both sides as his campaign tries to attract wealthy donors while his administration examines the risks of the fast-growing technology. While Biden has been critical of tech giants, executives and workers from companies such as Apple, Microsoft, Google and Meta also contributed millions of dollars to his campaign in the 2020 election cycle.

“AI is a top priority for the president and his team. Generative AI tools have increased significantly in the past several months and we don’t want to solve yesterday’s problem,” a White House official said in a statement.

The Biden administration has been focusing on AI’s risks. Last year, the administration released a “Blueprint for an AI Bill of Rights,” outlining five principles developers should keep in mind before they release new AI-powered tools. The administration also met with tech CEOs, announced steps the federal government took to address AI risks and advanced other efforts to “promote responsible American innovation.”

Lina Khan, the Federal Trade Commission chairperson who was appointed by Biden, said in a May op-ed published in the New York Times that the rise of tech platforms like Facebook and Google cost users their privacy and security.

“As the use of A.I. becomes more widespread, public officials have a responsibility to ensure this hard-learned history doesn’t repeat itself,” Khan said.

Tech giants use AI in various products to recommend videos, power virtual assistants and transcribe audio. While AI has been around for decades, the popularity of an AI chatbot known as ChatGPT intensified a race between big tech players like Microsoft, Google and Facebook’s parent company Meta. Launched in 2022 by OpenAI, ChatGPT can answer questions, generate text and complete a variety of tasks.

The rush to advance AI technology has made tech workers, researchers, lawmakers and regulators uneasy about whether new products will be released before they’re safe. In March, Tesla, SpaceX and Twitter Chief Executive Elon Musk, Apple co-founder Steve Wozniak and other technology leaders called for AI labs to pause the training of advanced AI systems and urged developers to work with policymakers. AI pioneer Geoffrey Hinton, 75, quit his job at Google so he could speak about AI’s risks more openly.

As technology rapidly advances, lawmakers and regulators have struggled to keep up. In California, Gov. Gavin Newsom signaled he wants to tread carefully with state-level AI regulation. Newsom said at a Los Angeles conference in May that “the biggest mistake” politicians can make is asserting themselves “without first seeking to understand.” California lawmakers have floated several ideas, including legislation that would combat algorithmic discrimination, establish an office of artificial intelligence and create a working group that would provide the Legislature with an AI report.

Writers and artists are also worried that companies could use AI to replace workers. The use of AI to generate text and art raises ethical questions, including about plagiarism and copyright infringement. The Writers Guild of America, which remains on strike, proposed rules in March on how Hollywood studios can use AI. Text generated by AI chatbots, for example, “cannot be considered in determining writing credits” under the proposed rules.

The potential abuse of AI to spread political propaganda and conspiracy theories, a problem that has plagued social media, is another top concern among disinformation researchers. They fear AI tools that can spit out text and images will make it easier and cheaper for bad actors to spread misleading information.

Already, AI has begun to be deployed in some mainstream political ads. The Republican National Committee posted an AI-generated video ad that depicts a dystopian future if Biden wins his reelection bid in 2024. AI tools have also been used to create fake audio clips of politicians and celebrities making remarks they didn’t actually say. The campaign of GOP presidential candidate and Florida Gov. Ron DeSantis shared a video of what appeared to be AI-generated images of former President Trump hugging Dr. Anthony Fauci.

Tech companies aren’t opposed to guardrails. They’re welcoming regulation but are also trying to shape it. In May, Microsoft released a 42-page report about governing AI, noting that no company is above the law. The report includes a “blueprint for the public governance of AI” that outlines five points, including the creation of “safety brakes” for AI systems that control the electric grid, water systems and other critical infrastructure.

That same month, OpenAI CEO Sam Altman testified before Congress and called for AI regulation.

“My worst fear is that we, the technology industry, cause significant harm to the world,” Altman told lawmakers. “If this technology goes wrong, it can go quite wrong.” Altman, who has met with leaders in Europe, Asia, Africa and the Middle East, also signed a one-sentence letter in May with scientists and other leaders that warned about the “risk of extinction” AI poses.
