The Election Integrity Partnership consists of university researchers and others. Local election officials are among those the partnership is hoping to assist in the weeks ahead.
With Election Day now less than three months away, national security officials and outside experts are warning that the nation needs to be on guard against attempts by Russia and other foreign countries to interfere with the contest.
One set of concerns centers on posts on social media platforms like Facebook and Twitter containing false or misleading information meant to confuse voters about election procedures, drive down turnout, or stoke false narratives about voter fraud.
For state and local officials who oversee the more than 10,000 election administration districts around the U.S., dealing with this type of disinformation and misinformation can be daunting.
A new partnership that involves universities and other research entities aims to help.
The Election Integrity Partnership consists of Stanford University’s Internet Observatory and Program on Democracy and the Internet, the company Graphika, the Atlantic Council’s Digital Forensic Research Lab and the University of Washington’s Center for an Informed Public.
This election season, the partnership will seek to provide government agencies, civil society organizations and social media companies with real-time assistance identifying and responding to election-related misinformation and disinformation campaigns.
“Voters this fall will look to social media for real-time election information at an unprecedented rate,” Elena Cryst, assistant director of the Stanford Internet Observatory, said on Wednesday during a webinar about the project.
“We need an independent and nongovernmental partnership with the capability to review and analyze perceived attempts at election mis- and disinformation online,” she added.
The group plans to focus mainly on content that interferes in three areas: procedures (such as posts containing wrong information about how to submit a mail-in ballot), participation (such as false posts about long lines at polling stations) and false claims of voter fraud.
Kate Starbird, a professor in the Department of Human Centered Design and Engineering at the University of Washington, said the project’s goals are twofold: to both disrupt campaigns that might be spreading faulty information around the election and to let the public know about this information so that they’re not manipulated by it.
A claim’s severity will guide whether it warrants investigation, explained Isabella García-Camargo, product manager for the partnership and a graduate student at Stanford.
So, for instance, a generic claim like “the election is being stolen” might not be scrutinized. But a tweet with a false video of ballots being burned, or that contains forged documents, would be a priority.
Researchers at the partnership’s member groups will attempt to detect these sorts of posts themselves. But other stakeholders, like government officials and civil society organizations, will also be able to submit “tickets” flagging suspect content.
Partnership staff will investigate this content and, if warranted, bring it to the attention of social media companies, which could remove malicious posts as necessary.
García-Camargo gave an example from 2019, in which a Twitter user falsely claimed in a tweet that ballots in Kentucky were being destroyed. Twitter removed the post, but images of it continued to circulate online in other posts after it was taken down.
In that case, García-Camargo said a local election official was left sifting through Twitter to see how widely the claim had spread and to figure out possible next steps for responding.
This is the type of situation where the partnership might be able to provide assistance: assigning a researcher to track the post and alerting other social media companies, like Facebook, if the false claim spills onto their platforms as well.
“We're bringing to the table both technical and analytical capacity to create content that's not currently available to election officials or civil society partners,” García-Camargo said.
The project will not involve crowd-sourcing, in which anyone in the public could submit a ticket. One reason is that the groups leading it lack the staff for that sort of approach.
Camille Francois, research and analysis director at Graphika, said the purpose of the project is not to simply remove as much questionable content from the internet as possible.
“This isn’t what we’re trying to do,” she said. “It’s not a race to take down. But it is a race to rigorous, fast analysis to make sure that everybody who’s flagging has the ability to get a quick answer and things that are indeed problematic can be enforced against.”
The partnership is not receiving government funds and has no plans to provide governments with any exclusive or confidential information, its organizers say. But the groups involved are collaborating directly with the National Association of State Election Directors and the Elections Infrastructure Information Sharing and Analysis Center.
García-Camargo said the groups are still figuring out some of the logistics for how they will coordinate with state and local officials. “We've already done some initial engagements with certain states,” she said. “We're very excited to open this up to all the rest of the states.”