Tech companies vow to fight deepfake election content

Google CEO Sundar Pichai speaks at the opening of the 60th Munich Security Conference. Google was among 20 tech firms that affirmed commitments to counter and deter harmful AI-generated election content at the conference on February 16, 2024. Tobias Hase/picture alliance via Getty Images

Some House and Senate lawmakers praised the voluntary commitments but want legislation to require labeling and disclosure.

Twenty technology and AI-focused firms — including Google, Microsoft, Meta (the parent company of Facebook and Instagram), OpenAI and X (formerly known as Twitter) — pledged to combat deceptive uses of AI in 2024 elections under an accord signed at the Munich Security Conference on Friday.

The pact’s signatories agreed to take eight steps to mitigate harmful AI-generated election content, including developing new technologies to limit risks; assessing the harms posed by AI models; detecting the spread of manipulated content on their platforms; working “to appropriately address this content detected on their platforms”; promoting “cross-industry resilience”; enhancing their transparency efforts; engaging with civil society organizations; and supporting efforts “to foster public awareness” of deceptive content.

Several leading lawmakers voiced support for the newly unveiled tech accord that seeks to prevent AI tools from being used to interfere in elections around the globe, but they said the voluntary nature of the pact underscores the need for Congress to take action against the democratic harms posed by unchecked emerging technologies. 

Sen. Brian Schatz, D-Hawaii — who introduced bipartisan legislation last year to require firms to label AI-generated content — called the accord “welcome news” but said “we urgently need legislation that regulates AI-generated content in a holistic way and mandates labeling on all such content, no matter the context.”

In a joint statement, Sens. Mark R. Warner, D-Va., and Lindsey Graham, R-S.C., also voiced their concerns “about the potential use of AI-enabled content, such as ‘deepfakes,’ in this year’s elections.” Warner chairs the Senate Intelligence Committee and Graham serves as the ranking member of the Senate Judiciary Committee. 

While the two senators called the pact “a constructive step forward,” they said that mitigating the harms of AI-generated election content “requires a shared responsibility that transcends partisan interests and brings together political leaders, civic groups, tech companies, other businesses and informed citizens across the country, including through potential new rules.”

Rep. Yvette Clarke, D-N.Y., who has been a leading voice in Congress for greater transparency around deepfakes and other AI-generated materials, also said in a statement that the accord showed that tech companies “understand that powerful AI tools have the potential to cause real and potentially irreparable harm.”

But Clarke said the voluntary nature of the pact means it lacks “meaningful, enforceable safeguards to ensure the safe and responsible use of AI tools.”

“I believe this presents a unique opportunity for both Republicans and Democrats to put aside their differences and come together for the sake of our democracy,” Clarke said. “This could be a defining moment for this Congress, and this may be the one unifying issue where we can band together to protect this nation and future generations of Americans to come.”

Clarke previously introduced legislation that would require political ads to include a disclaimer if they were made using AI, and she also told Nextgov/FCW in an interview last year that the 2024 election season would be “the first cycle where AI-generated ads will be an integral part of how we do campaign advertising.”

Concerns about digitally generated videos and audio proliferating ahead of the 2024 elections have already proven warranted: earlier this month, the Federal Communications Commission banned AI-generated voices in robocalls. The commission’s unanimous ruling came after a robocall impersonating President Joe Biden went out to New Hampshire voters ahead of the state’s presidential primary last month.