GSA Wants More Ideas To Stop Fake Commenters From Flooding Rulemaking Process


The General Services Administration took over management of the eRulemaking portal last year with a mandate to crack down on fraudulent commenting.

Federal agencies—particularly regulatory agencies—want, and are often required, to solicit feedback from the public when issuing new rules. Most often, these comments are submitted through a web portal such as the eRulemaking site run by the General Services Administration.

However, technology often brings unforeseen consequences. When it comes to commenting on rules, the ease of submitting feedback through a website has led to a rise in fake comments posted by trolls, bots and other insincere operators.

GSA took over management of the eRulemaking program in October from the Environmental Protection Agency, which had run the website since 2002. With the change in agency management, GSA established the eRulemaking Program Management Office and began steps to upgrade the commenting portal. As it does so, the agency wants to hear the best ideas for preventing the flood of fake comments.

For example, in 2017, the Federal Communications Commission asked for public feedback on its net neutrality regulation. The call drew some 22 million comments, half of which were later determined to be fraudulent.

“What’s the big deal about fraudulent comments? Why should anyone care about these to begin with, given that we have a gating mechanism downstream from our agencies, they can resolve it?” Sanith Wijesinghe, an innovation area lead at the federally funded MITRE Corporation, asked Thursday during a public meeting held by GSA. “These identities were stolen from citizens at large, high-profile senators, celebrities. Elvis was supposed to be commenting on a couple of those. You now have, on the public record, a statement attributed to someone without their consent. It takes quite a lot of effort to correct that. And not everyone has the resources to do that.”

Wijesinghe noted this follows patterns seen in other forms of identity theft, including the use of deceased individuals’ identities.

“Their next of kin are on the hook to try to resolve those issues,” he said.

There is a cost for federal agencies, as well, he added.

“Given that we now have the Evidence-Based Policymaking Act, the overhead associated with tracking down all of the assertions made in these comments is a non-trivial effort,” he said. “We really need to make sure our policies are truly evidence-based and not fake-evidence based.”

One potential solution is the use of CAPTCHA widgets, which ask the user to perform a task ill-suited to bots, such as identifying pictures with certain features or copying a string of characters that is blurred or otherwise obscured.
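As a rough illustration, here is a minimal Python sketch of how a commenting portal’s back end might verify a CAPTCHA token on the server side, using Google’s reCAPTCHA “siteverify” endpoint as one widely used example. The function name and key handling are illustrative assumptions, not a description of the actual eRulemaking system.

```python
import requests

# Google's reCAPTCHA verification endpoint (one widely used CAPTCHA service);
# other CAPTCHA providers expose similar server-side checks.
VERIFY_URL = "https://www.google.com/recaptcha/api/siteverify"


def verify_captcha(secret_key: str, captcha_token: str, remote_ip: str | None = None) -> bool:
    """Return True if the CAPTCHA token submitted with a comment checks out.

    `secret_key` is the site's private reCAPTCHA key; `captcha_token` is the
    value the widget placed in the comment form (the g-recaptcha-response field).
    """
    payload = {"secret": secret_key, "response": captcha_token}
    if remote_ip:
        payload["remoteip"] = remote_ip
    result = requests.post(VERIFY_URL, data=payload, timeout=10).json()
    return bool(result.get("success"))
```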

“Some of the ideas out there right now, including things like CAPTCHA, raise the bar in terms of taking out some of those … not-so-smart kind of bots. But it’s only going to get worse,” Wijesinghe said, expressing concern over an impending arms race in this space.

He suggested reworking the eRulemaking commenting process to be more structured and less of a “free for all.”

“A more directed comment process where you can actually map comments to particular provisions” is one option he offered. While bots could still attack such a process, “it does increase the cost and potentially streamline the process downstream for our agencies.”
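To make that idea concrete, the sketch below shows one hypothetical way comments could be tied to specific provisions of a proposed rule. The field names and grouping helper are assumptions for illustration only, not part of any eRulemaking specification.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class ProvisionComment:
    """One comment tied to a specific provision of a proposed rule.

    All field names here are illustrative; the article does not describe
    an actual eRulemaking data model.
    """
    docket_id: str        # the rulemaking docket the comment belongs to
    provision_id: str     # the section or paragraph the commenter addresses
    commenter_name: str
    text: str
    submitted_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))


def group_by_provision(comments: list[ProvisionComment]) -> dict[str, list[ProvisionComment]]:
    """Group comments by provision so reviewers can work section by section."""
    grouped: dict[str, list[ProvisionComment]] = {}
    for c in comments:
        grouped.setdefault(c.provision_id, []).append(c)
    return grouped
```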

Michael Fitzpatrick, head of global regulatory affairs at Google, echoed Wijesinghe’s suggestion to use CAPTCHA technology, though he noted that the traditional form of that technology has its own problems.

“That adds a lot of friction to the process,” he noted. “That is the great balance for agencies and for the rulemaking process: We want to protect against bad actors but we don’t want to add a level of friction that deters democratic participation in the process.”

However, newer versions of the CAPTCHA technology can be made invisible to the user.

“The enterprise will undertake to screen every single interaction with the website using an individual token for that particular interaction,” Fitzpatrick explained. “Everybody is treated as a first-time user. There are sophisticated collections of data—not personal data—data around how the submission is being made. Click speeds, click patterns, the nature of the comment that is actually being filed. Using machine learning, you can draw a risk score between one and 10 that is predictive of whether or not that is a suspect, bot-driven submission.”

Once that algorithm is in place, agencies can then set a threshold for allowing comments, with scores that fall below that level flagged for further review.
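In rough terms, that threshold step might look like the Python sketch below. The names, the one-to-10 scale, and the assumption that higher scores indicate likely-legitimate submissions are drawn from the description above and are illustrative only, not the actual screening logic of any vendor or agency.

```python
from dataclasses import dataclass


@dataclass
class Submission:
    comment_id: str
    risk_score: float  # 1-10 scale per the description above; higher is assumed more likely human


def triage(submission: Submission, threshold: float = 5.0) -> str:
    """Accept submissions that clear the threshold; flag the rest for review.

    "Review" here stands in for the follow-up steps mentioned in the article:
    two-factor authentication, a phone call, or a reCAPTCHA challenge.
    """
    if submission.risk_score >= threshold:
        return "accept"
    return "flag_for_review"
```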

“That can be two-factor authentication, it can be a phone call, it could be the dreaded reCAPTCHA widget,” he said. “That will substantially shut down the bot activity and will have very little to no disruption for legitimate users.”

Thursday’s public meeting also dealt with mass comments, in which a large number of respondents submit identical content. Mass comments raise issues of their own, but they are not illegitimate, according to several panelists, though at times agencies might prefer a larger number of individual, substantive comments.

Tobias Schroeder, director of GSA’s eRulemaking Program Management Office, said the agency does not have a specific timeline on changes or new technology upgrades, but said his office plans to move as quickly as possible.

A second public meeting to gather additional feedback is set for March 25, and GSA has a standing docket open for public comments until April 30.

Editor's Note: This story has been updated to correct the timeline for GSA taking over management of the eRulemaking program.
