House Bill Mandates Disclosure of AI-Generated Content in Political Ads

Rep. Yvette Clarke, D-N.Y., speaks with other lawmakers about border policies during a news conference on Capitol Hill on Thursday, January 26, 2023, in Washington, D.C. Clarke introduced legislation May 2 requiring the disclosure of AI-generated content in political advertising. Jabin Botsford/The Washington Post via Getty Images

The legislation follows the Republican National Committee’s release of an entirely AI-generated video last week.

A House Democrat introduced legislation on Tuesday requiring political advertisements to include a disclaimer if they were created using artificial intelligence. The proposal comes as concerns about the use of AI software—and its potential to generate entirely fake or misleading text, audio and video—continue to mount ahead of the 2024 presidential primary season. 

The bill—the REAL Political Ads Act—was introduced by Rep. Yvette Clarke, D-N.Y., who has been a prominent voice in Congress about the potential harms and biases of AI-generated content. Clarke, who serves on the House Energy and Commerce Committee and the House Homeland Security Committee, previously introduced legislation in 2019 and 2021 that would require that deepfakes—digitally manipulated photos, video or audio—include “digital watermarks” and a written disclaimer stating that the pieces of media had been altered or generated. 

Clarke’s legislation would amend federal campaign election laws to require that political ads “include a statement within the contents of the advertisements if generative AI was used to generate any image or video footage in the advertisements.”

In a statement, Clarke warned that “the upcoming 2024 election cycle will be the first time in U.S. history where AI generated content will be used in political ads by campaigns, parties and Super PACs.”

“Unfortunately, our current laws have not kept pace with the rapid development of artificial intelligence technologies,” she added. “If AI-generated content can manipulate and deceive people on a large scale, it can have devastating consequences for our national security and election security. It’s time we sound the alarm, and work to ensure our campaign finance laws keep pace with the innovation of new technologies.”

The introduction of Clarke’s bill comes as the use of generative AI tools has crossed over into the world of political campaigns. Last week, following President Joe Biden’s reelection announcement, the Republican National Committee released a video that it said was entirely created through the use of AI software. The ad envisions a dystopian future in which, after Biden wins the 2024 presidential election, his leadership is undermined by a series of domestic and international crises, including a Chinese invasion of Taiwan. 

Clarke told The Washington Post in a May 2 article that her legislation was in direct response to the RNC’s video, which included a disclaimer in the top-left corner stating that it was “built entirely with AI imagery.” She warned, however, that “there will be those who will not want to disclose that it’s AI-generated, and we want to protect against that, particularly when we look at the political season before us.”

While generative AI has not played a prominent role in political campaigns until now, digitally altered videos and audio have been used to spread mis- and disinformation in recent years. Ahead of the 2020 elections, a video of then-House Speaker Nancy Pelosi, D-Calif., was manipulated to make it appear as though she was intoxicated while giving a speech. That video, as well as a similarly doctored video from 2019, received millions of views on social media platforms.