Astroturfing Is Bad But It's Not the Whole Problem

The shift to a digital process has made commenting easier yet also inadvertently opened the floodgates to voluminous, duplicative and, yes, even “fake” comments.

In November 2019, Securities and Exchange Commission Chairman Jay Clayton boasted that draft regulations, which would require proxy advisors to run their recommendations past the companies they evaluate before giving that advice to their clients, had received dozens of letters of support from ordinary Americans. But the letters he cited turned out to be fakes, sent by corporate advocacy groups and signed with the names of people who never saw the comments or who do not exist at all.

When interest groups manufacture the appearance that comments come from the “ordinary public,” it’s known as astroturfing. The practice is the subject of today’s House Committee on Financial Services Subcommittee on Oversight and Investigations hearing, entitled “Fake It till They Make It: How Bad Actors Use Astroturfing to Manipulate Regulators, Disenfranchise Consumers, and Subvert the Rulemaking Process.” 

Of course, commissioners who cherry-pick public comments for information that proves them right should be called out, and it is tempting to use the occasion to embarrass those who do, especially when they belong to the other party. But focusing on astroturfing distracts attention from the more salient and urgent problem: the failure to obtain the best possible evidence by creating effective opportunities for public participation in federal rulemaking.

Thousands of federal regulations touching every aspect of our lives are enacted every year, and under the 1946 Administrative Procedure Act the public has a right to participate in making them.

Participation in rulemaking advances both the legitimacy and the quality of regulations by enabling agencies—and the congressional committees that oversee them—to obtain information from a wider audience of stakeholders, interest groups, businesses, nonprofits, academics and interested individuals. Participation also provides a check on the rulemaking process, helping to ensure public scrutiny.

But the shift over the last two decades to a digital process, in which people submit comments via regulations.gov, has made commenting easier yet has also inadvertently opened the floodgates to voluminous, duplicative and, yes, even “fake” comments, making it harder for agencies to extract the information needed to inform the rulemaking process.

Although many rulemakings draw only a handful of comments, some draw enormous numbers, thanks to this ease of digital commenting. In 2017, when the Federal Communications Commission sought to repeal an earlier Obama-era rule requiring internet service providers to observe net neutrality, the agency received 22 million comments in response.

There is a remedy. Natural language processing tools have evolved to make quick work of large stores of text like these.

Google and Microsoft announced in 2019 that they had built systems that could summarize articles spanning news, science, stories, instructions, emails, patents and now even legislative bills. Indian news aggregation app InShorts recently debuted its AI-based news summarization feature that creates 60-word summaries of full-length news articles.
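To make the approach concrete, here is a minimal sketch of how off-the-shelf summarization might be applied to a long public comment. It uses the open-source Hugging Face transformers library rather than any of the proprietary systems named above, and the model choice and sample comment are illustrative assumptions, not part of any agency workflow.

```python
# A minimal sketch of automated summarization applied to a public
# comment, using the open-source Hugging Face transformers library.
# The model and the sample text are illustrative assumptions.
from transformers import pipeline

# Load a general-purpose summarization model; an agency would need to
# evaluate candidate models against its own comment corpus.
summarizer = pipeline("summarization", model="facebook/bart-large-cnn")

comment = (
    "I am writing to oppose the proposed repeal of the net neutrality "
    "rules. As a small business owner, I rely on equal access to "
    "customers online. Allowing internet service providers to "
    "prioritize traffic for a fee would put firms like mine at a "
    "structural disadvantage relative to larger competitors."
)

# Compress the comment to a short abstract an analyst can scan quickly.
result = summarizer(comment, max_length=40, min_length=10, do_sample=False)
print(result[0]["summary_text"])
```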

Such solutions exist for citizen participation, too. CitizenLab, a Belgian software company, uses an algorithm to identify the main topics and group similar ideas together using an approach known as topic modeling. 
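For readers curious what topic modeling involves, below is a minimal sketch using the open-source scikit-learn library. It is a generic illustration of the technique, not CitizenLab's actual algorithm, and the sample comments are invented.

```python
# A generic topic-modeling sketch with scikit-learn's latent Dirichlet
# allocation (LDA). This illustrates the technique only; it is not
# CitizenLab's system, and the comments below are invented.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

comments = [
    "Net neutrality protects small businesses from paid prioritization.",
    "Repealing the rule lets providers throttle competing video services.",
    "Broadband investment slowed under the current heavy-handed rules.",
    "Light-touch regulation would encourage infrastructure investment.",
]

# Turn the comments into a word-count matrix, dropping common stopwords.
vectorizer = CountVectorizer(stop_words="english")
counts = vectorizer.fit_transform(comments)

# Fit a two-topic model; the number of topics is a tunable assumption.
lda = LatentDirichletAllocation(n_components=2, random_state=0)
lda.fit(counts)

# Print the highest-weight words for each discovered topic so an
# analyst can see which themes the comments cluster around.
words = vectorizer.get_feature_names_out()
for i, topic in enumerate(lda.components_):
    top = [words[j] for j in topic.argsort()[-5:][::-1]]
    print(f"Topic {i}: {', '.join(top)}")
```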

Then there is the challenge of duplicative comments. The Pew Research Center found that in the 2017 net neutrality rulemaking, the top five comments were each repeated over 800,000 times. Back in 2007, Dr. Stuart Shulman created a tool called DiscoverText designed to deduplicate comments, and his research was funded by the National Science Foundation. Yet the tool is not in widespread use in government.
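Deduplication itself is conceptually straightforward. The sketch below, which is not DiscoverText's implementation, collapses comments that are identical after normalizing case, punctuation and whitespace, and counts how many times each form letter was submitted; production tools also catch near-duplicates with minor wording changes.

```python
# A minimal deduplication sketch: collapse comments that are identical
# after normalization and count the repeats. This is an illustration
# only, not DiscoverText, which also detects near-duplicates.
import re
from collections import Counter

def normalize(text: str) -> str:
    """Lowercase, strip punctuation, and collapse runs of whitespace."""
    text = re.sub(r"[^\w\s]", "", text.lower())
    return re.sub(r"\s+", " ", text).strip()

comments = [  # invented examples of form-letter submissions
    "I support net neutrality!",
    "i support   net neutrality",
    "Please preserve the existing rules.",
    "I support net neutrality.",
]

# Group submissions by normalized text and report each form's count.
counts = Counter(normalize(c) for c in comments)
for text, n in counts.most_common():
    print(f"{n}x  {text}")
# Output shows the first form letter was submitted three times.
```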

In short, researchers have cracked problems far more challenging than making sense of rulemaking data. So far, however, rulemakers, legislators and agencies have paid little attention to these tools.

Congress should mandate the use of better data science tools so that federal agencies can make sense of public comments, but it must also go beyond fixing the problem after the fact and reimagine how public participation should work. In our research, we are tracking over a hundred examples of what we call crowdlaw—innovative uses of public engagement designed to improve the quality of lawmaking—and identifying how Congress and the federal government could apply them.

In 2018, for example, the German Chancellor’s Office, working in collaboration with Harvard University's Berkman Klein Center for Internet & Society and the New York University Governance Lab, set up an “annotation platform” and invited dozens of experts—leading computer science, law and ethics scholars—to comment online on the draft of its artificial intelligence policy, providing much-needed perspectives that informed the final draft. Imagine if, instead of oversight hearings with a handful of experts, congressional committees or federal agencies invited a hundred experts to review a draft rule online.

U.K. Parliamentary committees use the web for “Evidence Checks,” where they invite members of the public to comment on the rigor of evidence on which a policy is based. In 2016, for example, the Women and Equalities Committee invited those with lived experience of harassment to examine the evidence on sexual harassment in schools.

Congress could also rethink citizen engagement by piloting the use of citizen juries to democratize and enhance oversight. Inspired by ancient Greek democracy, in which citizens chosen by lot served in a wide range of governing roles, the legislature of the Brussels Region of Belgium in December 2019 formally introduced citizen juries, comprising a random sample of 45 residents aged 16 and above, into the work of its legislative committees.

The real problem is not astroturfing but the failure to recognize the value of public commenting. Failing to redesign public participation for the digital age will only put us further behind the growing number of advanced nations that use new technology to tap the collective experience and expertise of their citizens.

Beth Simone Noveck leads The Governance Lab at the NYU Tandon School of Engineering and formerly was the deputy chief technology officer and director of the Open Government Initiative under President Obama.