House lawmakers push measure to limit impact of deceptive AI on elections

Rep. Shontel Brown, D-Ohio, participates in a House hearing in January 2023. Brown is leading an effort to require disclosures of AI-generated content in election materials. Bill Clark/CQ-Roll Call, Inc via Getty Images

The legislation would apply to AI-generated election content that is designed to interfere with voting in federal, state or local contests.

A coalition of House Democrats has proposed legislation that aims to minimize the impact of artificial intelligence technologies on U.S. elections by establishing penalties and disclosure requirements for the use of the emerging capabilities in election messaging.

The bill, introduced on Thursday by Rep. Shontel Brown, D-Ohio, would prohibit AI from being used to interfere with Americans’ right to vote and would require that AI-generated election content include a disclaimer. The measure is co-sponsored by 47 Democrats.

According to the bill text provided by Brown’s office, the legislation would prohibit both the developers of AI tools and the users of the technologies from using the capabilities “in a manner that intentionally deprives or defrauds, or intentionally attempts to deprive or defraud, an individual of the right to vote in an election for federal, state or local office.”

This includes using AI to provide deceptive information about voting registration, eligibility, locations, “the counting and canvassing of ballots” and endorsements from any person or candidate. The measure would also prohibit AI from being used to deceive, threaten, intimidate, interfere with or otherwise prevent an individual from voting or participating in the election process. 

Individuals found in violation of these provisions would face a fine, imprisonment for up to one year, or both.

The legislation would also mandate that election-related content generated by AI include a disclaimer specifying that it was created by the technology or include “a symbol, similar in size and design to the copyright symbol, consisting of the letters ‘AI’ in a circle.”

This would apply to audio and video content that “includes a simulation, alteration or distortion of an image or video of a candidate for federal, state or local office,” as well as content that “depicts information about the time, place, manner or requirements for voting or registering to vote in [an] election.”

The bill would empower the Federal Trade Commission to enforce the measure’s provisions.

Lawmakers have expressed concern over the past year about how the advent of more powerful AI tools will affect the spread of election misinformation. Those concerns proved to be warranted after New Hampshire residents received a robocall ahead of the state’s presidential primary in January that included an AI-generated voice of President Joe Biden telling voters not to head to the polls. 

The Federal Communications Commission subsequently issued a unanimous ruling in February that made it illegal for robocalls to use AI-generated voices. Separately, the FCC is considering a proposal that would require disclosures of AI-generated material in campaign ads on broadcast television and radio.

In a statement, Brown said that the threat of AI being weaponized to interfere in elections “is no longer theoretical.”

“AI-generated content is a powerful new tool requiring oversight and regulation to ensure our elections are secure, and the rights of voters are protected,” she said, adding that she was “especially concerned that deepfake images, audio and video will be used to target Black and minority voters whose voting rights have historically and consistently been under threat.”

Brown and a bipartisan group of 33 House lawmakers sent a letter in March to the Department of Justice, Department of Homeland Security and the Election Assistance Commission that expressed concern about AI’s impact on elections and requested additional information from officials about how the technologies could misinform or harm voters.

Other House lawmakers have also introduced proposals aimed at promoting greater transparency around AI-generated content, particularly when it comes to political campaigns. 

Rep. Yvette Clarke, D-N.Y., introduced the REAL Political Advertisements Act in May 2023 to require political ads to include a disclaimer if they were made using AI tools. Clarke subsequently told Nextgov/FCW last year that “the 2024 election cycle will be the first cycle where AI-generated ads will be an integral part of how we do campaign advertising.”