The FCC Bans AI-Generated Voices in Robocalls to Combat Fraud
In an effort to crack down on fraudulent activity and protect consumers, the Federal Communications Commission (FCC) has officially prohibited the use of artificial intelligence-generated voices in unsolicited robocalls across the United States. The decision follows an incident in which New Hampshire residents received fake voice messages imitating President Joe Biden, discouraging them from voting in the state’s primary election.
FCC Extends TCPA Protections
The ban, issued under the Telephone Consumer Protection Act (TCPA), is a significant step toward curbing the proliferation of robocall scams. FCC Chairwoman Jessica Rosenworcel emphasized that bad actors are using AI-generated voices in unsolicited robocalls to extort vulnerable family members, impersonate celebrities, and misinform voters. The latest ruling extends the TCPA’s prohibition to cover “voice cloning technology,” closing off a key tool used by scammers.
Texas Firm Linked to Biden Robocall
Authorities have traced the recent high-profile robocall imitating President Joe Biden’s voice to a Texas-based firm, Life Corporation, and an individual identified as Walter Monk. Attorney General Mayes sent a warning letter to the company, stating that using AI to impersonate the President and deceive voters is unacceptable, and that deceptive practices like this undermine public trust in the electoral process. New Hampshire Attorney General John Formella has also confirmed that a cease-and-desist letter has been issued and that a criminal investigation is underway.