Mastermind behind Biden AI robocalls faces potential $6M fine from FCC


Steve Kramer previously admitted to plotting the robocall scheme that told voters in the New Hampshire primary to “save” their votes for November.

The Federal Communications Commission proposed a $6 million fine against the Democratic operative tied to a series of robocalls containing an AI-generated voice of President Joe Biden that reached voters ahead of January’s New Hampshire primary.

Steve Kramer was indicted by a federal grand jury in New Hampshire this week, facing candidate impersonation charges and felony charges for bribery and intimidation. The AI-generated Biden voice told voters not to go to the polls, urging them to “save” their votes for November.

He admitted in February to creating the robocall operation. A former consultant for Minnesota congressman and presidential candidate Dean Phillips, Kramer said the calls were meant to highlight the dangers of AI-generated content in political campaigns.

“We will act swiftly and decisively to ensure that bad actors cannot use U.S. telecommunications networks to facilitate the misuse of generative AI technology to interfere with elections, defraud consumers, or compromise sensitive data,” said Loyaan Egal, who heads the FCC’s Enforcement Bureau, in a statement announcing the proposed charges.

The agency in February issued a declaratory ruling declaring AI-generated voices in robocalls illegal under the Telephone Consumer Protection Act, granting state attorneys general more authority to go after entities that target consumers with voice-cloning technology in robocall scams.

Kramer hired Voice Broadcasting Corp. to handle the call transmissions, which in turn utilized Texas-based Life Corp.’s services to route the calls through the voice service provider Lingo Telecom, the FCC says.

Lingo then transmitted these calls and incorrectly marked them with the highest level of caller ID attestation, reducing the chance that other providers would flag the calls as sham communications, the agency added. In a separate enforcement action announced Thursday, the commission proposed a $2 million fine against Lingo for allegedly failing to employ adequate “Know Your Customer” measures meant to authenticate caller ID data connected to the robocalls.

“Because when a caller sounds like a politician you know, a celebrity you like, or a family member who is familiar, any one of us could be tricked into believing something that is not true with calls using AI technology,” said FCC Chairwoman Jessica Rosenworcel in a statement.

Spam and robocalling operations have traditionally been run by human managers overseeing calling schemes. AI has automated some of those tasks, allowing robocalling operations to leverage the speech- and voice-generation capabilities of consumer-facing AI tools available online or on the dark web.

In a related move, the FCC on Wednesday said it would consider a proposal requiring disclosure of AI-generated material in radio and TV political advertisements, a first-of-its-kind potential mandate for broadcasters that seeks to quell political mis- and disinformation heading into November’s presidential election.

A Wednesday survey from cloud-based call center provider Talkdesk found that 21% of respondents expect their vote to be swayed by election deepfakes and misinformation, while 31% said they fear being unable to reliably distinguish real election content from fake.