The FCC just put AI robocaller creeps on notice during election year

The Federal Communications Commission (FCC) on Thursday made it illegal to place AI-generated robocalls to voters in the U.S. 

The issue came into sharp focus when New Hampshire voters received AI-generated calls in the voice of Joe Biden implying that if they cast a ballot in the state’s primary they couldn’t vote in the general election. 

The FCC commissioners voted unanimously to make explicit that the 1991 Telephone Consumer Protection Act, which already outlaws artificial or prerecorded messages, also covers AI-generated calls. Starting now, the FCC can fine companies that place AI robocalls up to $23,000 per call, and can block service providers that carry the calls. Robocall recipients can also now sue for up to $1,500 in damages per call.

But according to the consumer rights advocacy group Public Citizen, the FCC’s move still doesn’t cast a wide enough net.

“The Telephone Consumer Protection Act applies only in limited measure to election-related calls,” says Public Citizen president Robert Weissman. “The Act’s prohibition on use of ‘an artificial or prerecorded voice’ generally does not apply to noncommercial calls and nonprofits.” 

Still, the newly clarified rules would likely apply to Walter Monk, the Texas man who New Hampshire authorities believe was behind the Biden robocalls. Monk is the proprietor of Life Corporation, and authorities believe the calls were distributed by the Texas carrier Lingo Telecom.

The FCC’s action may make robocallers even more careful about covering their tracks. Generating an AI robocall is relatively simple with available tools (New Hampshire authorities believe Monk may have used ElevenLabs’ voice-cloning tool), and techniques for masking the origin of a call are readily available. 

But law enforcement’s investigative powers are becoming more high-tech as well. New Hampshire authorities used traceback technology to follow the robocalls back through the communications network to their originator.

The FCC has been helping state authorities with both federal resources and investigation tactics to hunt down robocallers. That federal-state partnership can help build an airtight case against suspected violators when it’s time to prosecute. Now, the FCC has given prosecutors more tools for making robocallers pay.

Still, experts say these efforts won’t be a silver bullet for AI robocalls and other misinformation this election season. In the end, the FCC’s action, and the attention it gets, may help more than the threatened penalties: it may remind voters that those election-year dinnertime calls may not be what they seem.




