FCC makes AI-generated voices in robocalls illegal


The move gives states new authority to crack down on AI-generated voice-cloning schemes.

The Federal Communications Commission on Thursday unanimously issued a declaratory ruling that deems AI-generated voices in robocalls illegal, granting state attorneys general more authority to go after entities that target consumers with voice cloning technology in robocall scams.

The announcement comes as state authorities investigate a Texas-linked robocalling operation that allegedly originated phone calls featuring an AI-generated voice of President Joe Biden ahead of last month’s New Hampshire primary, urging Democrats to “save” their votes for the November ballot. The FCC earlier this week issued a cease-and-desist letter to the provider accused of being involved in the scheme.

The agency justified the measure under the Telephone Consumer Protection Act, a 1991 statute that gives the FCC authority to regulate junk calls and established the national “Do Not Call” registry.

“While currently State Attorneys General can target the outcome of an unwanted AI-voice generated robocall — such as the scam or fraud they are seeking to perpetrate — this action now makes the act of using AI to generate the voice in these robocalls itself illegal, expanding the legal avenues through which state law enforcement agencies can hold these perpetrators accountable under the law,” the FCC said in a statement.

The notice comes amid heightened fears that AI systems will supercharge the spread of election misinformation and disinformation in November and beyond. While several New Hampshire voters were able to recognize the falsified audio, AI technologies can still be used to deploy less overt forms of disinformation that may trick individuals or media outlets into spreading or acting on false information, policy researchers argue.

Spam and robocalling operations have traditionally relied on human managers to oversee calling schemes, but AI has automated some of those tasks, allowing robocalling operations to leverage the speech- and voice-generation capabilities of consumer-facing AI tools available online or on the dark web.

A pair of reports released Thursday warn that the U.S. is a top target for election interference at a time when generative artificial intelligence technologies and manipulative AI-generated content are poised to upend the November presidential election cycle.

The FCC’s enforcement arm opened an investigation into the robocall scheme last month. The agency is also in the midst of a proceeding to determine how best to protect consumers from AI-generated content in robocalls and robotexts.