Federal Agencies Should Continue to Improve Public Comments Process

Before implementing a new regulatory rule, executive agencies are generally required to open a public comment process to gather feedback on the rule's potential impact. The intent of the process is to help agencies improve their proposed rules, or to stop ones that look excessively burdensome. While public comments are a welcome and necessary part of policymaking, additional tools to better analyze the source and quality of those comments are badly needed. Without such tools, public input may be devalued by a few bad actors, hampering agencies' ability to assess the comments they receive.

The public petitioning of executive agencies has been a tremendously valuable tool for empowering individuals to make their voices heard on policy changes that dramatically affect their lives. These comments, typically submitted in favor of or in opposition to a particular policy, give interested individuals, advocacy groups, and businesses an opportunity to weigh in on an issue. Executive agencies facilitate this process by providing a website on which to voice these opinions, as required by the E-Government Act of 2002.

These online portals certainly represent a critical line of communication between the public and regulatory agencies, but ensuring that the comments received are constructive and genuine has been a problem. Unfortunately, individual voices can be drowned out. As demonstrated time and again in the young history of internet advocacy, some groups can “brigade” these forums, giving their chosen opinion an outsized voice by deploying bots that overwhelm the forum with comments.

In an extreme case in 2017, the Federal Communications Commission (FCC) solicited public comments on the repeal of “net neutrality” rules. Thanks to large-scale online campaigns, 22 million comments flooded the FCC’s forums with identical or nearly identical messages. It was difficult to discern the number of genuine commenters among the sea of total comments, more than 80 percent of which were almost certainly launched by bots. This is pure astroturfing, intended to crowd out real feedback.
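One simple way such campaigns can be surfaced is to normalize each comment's text and group submissions that become identical after normalization. The sketch below is a minimal illustration of that idea, assuming comments are already available as plain strings; it is not a description of how the FCC actually analyzed its docket.

```python
import hashlib
import re
from collections import defaultdict

def normalize(text: str) -> str:
    """Lowercase, strip punctuation, and collapse whitespace so that
    trivially reformatted copies of a form letter compare as equal."""
    text = re.sub(r"[^a-z0-9\s]", "", text.lower())
    return re.sub(r"\s+", " ", text).strip()

def group_duplicates(comments):
    """Map the hash of each normalized comment to the indices of the
    comments that share it; keep only groups with multiple members."""
    groups = defaultdict(list)
    for i, comment in enumerate(comments):
        digest = hashlib.sha256(normalize(comment).encode("utf-8")).hexdigest()
        groups[digest].append(i)
    return {h: idxs for h, idxs in groups.items() if len(idxs) > 1}

# Toy docket: two copies of a form letter plus one original comment.
comments = [
    "I oppose this rule. It hurts consumers!",
    "i oppose this rule -- it hurts consumers",
    "Please consider the impact on rural broadband deployment.",
]
print(group_duplicates(comments))  # the first two comments fall in one group
```

Exact-match grouping like this catches copy-pasted form letters; catching lightly reworded variants requires fuzzier similarity measures, as sketched below.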

In continuing to examine this issue, the General Services Administration (GSA) held a forum on January 30, 2020 to discuss how best to identify fake comments. The meeting was the first held in response to a GSA notice on Modernizing Services for Regulation Management, an effort that aims to better integrate data systems across the regulatory agencies to promote public access, accountability, and transparency. GSA also sought insight on how regulators should weigh duplicate, mass responses from real people against individually written responses. Both signal concern or interest in an issue, but those who took the time to write a personal message may be washed out by mass communications, which are increasingly easy to generate with the proper technical know-how. Through improved processes and technologies, GSA hopes to reduce the impact of bot-created comments on government forums.
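Distinguishing a mass campaign from individually written input is, at bottom, a near-duplicate detection problem. As a rough, hypothetical illustration of the kind of tooling regulators might consider (nothing here reflects an announced GSA system; the template text and threshold are invented for the example), the sketch below scores each comment's similarity to a known form letter and separates likely campaign submissions from likely original ones.

```python
from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    """Rough 0-to-1 similarity between two comment texts."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def split_campaign_comments(comments, template, threshold=0.85):
    """Partition comments into likely form-letter submissions and likely
    individually written ones, by similarity to a known campaign template."""
    campaign, individual = [], []
    for comment in comments:
        bucket = campaign if similarity(comment, template) >= threshold else individual
        bucket.append(comment)
    return campaign, individual

# Hypothetical template and comments, invented for this example.
template = "I urge the agency to withdraw this rule because it burdens small business."
comments = [
    "I urge the agency to withdraw this rule because it burdens small businesses.",
    "As a farm owner, the exclusion paperwork alone would cost me two days a month.",
]
campaign, individual = split_campaign_comments(comments, template)
print(len(campaign), "campaign-style;", len(individual), "individually written")
```

In practice, a real system would cluster comments against one another rather than against a single known template, and any similarity threshold would need careful validation so that genuinely independent comments on a narrow topic are not mislabeled as campaign output.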

The GSA forum revealed that more research is needed to properly define and identify bot-created comments. In response to the meeting, the Regulatory Studies Center at George Washington University noted its concern that there is not enough empirical evidence regarding artificial comments. The use of bot brigades to spam public comment portals with a single opinion has serious long-run implications, eroding trust in web-based comments and slowly chipping away at the legitimacy of public influence on regulatory policy.

NTUF has made active use of the public comment process itself, and supports public input in the creation of regulatory policy. Last year we combed through public comments to learn about the impact of restrictive tariffs on small businesses, particularly the time and paperwork burden of applying for exclusion from these new trade barriers.

The Government Accountability Office (GAO) has also produced a report on the public comment process as a whole, further highlighting the need for increased transparency and scrutiny of submitted comments.

The GAO report revealed that of the 52 program offices surveyed, none responded that the identity of an individual commenter was extremely important to its analysis. While it is important that regulatory agencies not discriminate against commenters based on their identity, verifying that a commenter is, in fact, a real person and a member of the voting public should remain a priority in the rulemaking process. Should the human element of web-based input break down, the legitimacy of the process as a whole will erode.

The GAO report is by no means a conclusive document, indicating there is much more to discuss and thoroughly research on the issue of public commenting. The development of better tools, whether policy or technology, should remain a high priority for the regulatory apparatus, ensuring that public input is a useful tool for interested citizens to help federal agencies properly assess the impact of their proposed regulations.

As the government continues its trend toward digital democratization, regulatory agencies should keep taking steps to improve and develop the internet infrastructure they use to interact with the public. Executive agencies should ensure that public comments come from genuine and interested contributors, and should develop tools and technologies to effectively sort the sea of public input and mitigate the impact of bot-generated comments.