The FCC Bans AI-Generated Deepfakes in Robocalls
The U.S. Federal Communications Commission (FCC) has officially declared the use of AI-generated deepfakes in robocalls illegal. The announcement follows an inquiry the agency opened in November into illegal robocalls that use generative AI technology.
Lingo Telecom LLC Accused of Carrying AI-Generated Biden Deepfake
Recently, the FCC, along with lawmakers in Texas, issued a cease-and-desist order to Lingo Telecom LLC. The company is accused of being behind an infamous AI-generated audio deepfake imitating the voice of U.S. President Joe Biden, which was used in a robocall urging New Hampshire voters not to participate in the state’s primary.
FCC Cracks Down on Voice Cloning Scams
The FCC’s decision directly targets the misuse of voice cloning technology to carry out scams, such as extorting vulnerable individuals, impersonating high-profile figures, and spreading misinformation. The agency cites the Telephone Consumer Protection Act as the legal basis for this action.
Previous Fines and Collaborative Efforts
The FCC has previously fined individuals and groups over illegal robocalls. Jacob Wohl and Jack Burkman were fined $5 million for placing more than 1,000 robocalls ahead of the 2020 elections, and in another case the agency imposed a $299,997,000 fine over 33,333 calls promoting auto warranty sales. The FCC is also collaborating with 48 state attorneys general to combat robocalls.
Hot Take: Protecting Against Fraud and Misinformation
The FCC’s ban on AI-generated deepfakes in robocalls addresses the rising threat of scams and misinformation. By outlawing the use of AI-cloned voices in these calls, the agency aims to crack down on fraudsters and protect the public, while giving state attorneys general new tools to pursue the people behind these schemes.