ACLU warns of free-speech risks in FEC oversight of AI-generated election ads


The civil liberties group expressed concern over a possible Federal Election Commission rulemaking that would call out content generated by artificial intelligence in the agency’s regulations on fraudulent misrepresentation.

Election campaign advertisements generated with help from artificial intelligence software should not be singled out for special treatment under the Federal Election Commission's "fraudulent misrepresentation" rules, because of First Amendment protections, according to the American Civil Liberties Union.

ACLU officials said in a letter to FEC Chair Dara Lindenbaum on Monday that AI-generated content made for political elections is protected as free speech under the First Amendment.

Fraudulent misrepresentation as defined by the FEC covers the deliberate mislabeling of advertising to impersonate rival campaigns or otherwise confuse voters about the source of a particular advertisement.

The letter responds to a petition for rulemaking filed with the FEC by Public Citizen — a consumer rights advocacy group — which called for the regulatory agency to proactively clarify that its prohibition on “fraudulent misrepresentation” in political campaigns applies to AI-generated material.

“Should the FEC move forward with a rulemaking to clarify or expand the fraudulent misrepresentation provision of [the Federal Election Campaign Act] for AI-generated campaign ads, it must draw the line between protected AI-generated speech and impermissible fraudulent misrepresentations carefully,” the ACLU letter reads. “It is unclear whether this petition seeks for the FEC to merely apply its fraudulent misrepresentation analysis to AI-generated campaign ads without adequate disclosure, or whether it wants those communications to be deemed per se fraudulent misrepresentation.”

Because fraudulent misrepresentation carries a specific legal meaning, the ACLU argued that any potential regulation of AI-generated content would have to take a similarly narrow stance, requiring proof of “both intent to deceive and a reasonable likelihood of deceiving persons of ordinary prudence and comprehension.”

Otherwise, the ACLU said, the FEC risks infringing on satire, parody and other First Amendment-protected content that happens to have been generated with AI.

“There is understandable fear that AI-generated campaign communications may interfere with democratic values, but the mere fact that something is AI-generated does not make it bad,” Jenna Leventoff, ACLU senior policy counsel, said in a statement. “AI-generated content is entitled to the same First Amendment protections as speech generated in other ways. Lawmakers should apply traditional First Amendment analysis to any efforts to regulate AI-generated campaign communications.”

Lawmakers like Rep. Yvette Clarke, D-N.Y., have raised concerns about AI's potential to exacerbate misinformation in political advertising. She introduced legislation in May that would establish disclaimer requirements for AI-generated content.

But existing law on fraudulent misrepresentation, the ACLU argues, would not require such content to carry those disclaimers.