The key term that recurs throughout Henry Farrell and Bruce Schneier’s essay is “trust.” That is no surprise, as the concept unites both authors’ bodies of work: Farrell, a political scientist, and Schneier, a security expert, have each written books about it. Security enables trust, and trust enables a functioning democracy.

Small wonder, then, that these two have teamed up to propose a starting point for improving democracy by conceptualizing it as an information system in which distrust is a security problem. By and large, I find this a useful way to frame the issue. However, their essay underplays the value of a longstanding U.S. tradition—anonymous speech—that the authors recognize as creating vectors for attacks on trust. At the same time, it overlooks some known vulnerabilities that anti-democratic elements have already exploited to undermine institutional trust.


One component of building and maintaining trust is transparency. Transparency is essential in a democracy. A democratic government must subject itself to public oversight if it is to retain its legitimacy. The United States has open court proceedings, notices of proposed agency rulemakings, public comment periods, the Freedom of Information Act, and other sunshine laws; it even has C-SPAN. Measures that enhance government transparency serve to bolster the public’s trust in government institutions.

Farrell and Schneier propose greater transparency as one means of strengthening democracy against information attacks that undermine trust. But sometimes transparency comes into tension with other democratic values. One of those—anonymous speech—is on the chopping block in the authors’ proposal for fixing the public policy comment process. They suggest authentication of commenters as “actual constituents” in order to protect the accurate representation of public opinion from the manipulation of bots and spam. They acknowledge, however, that this measure would come “at the cost of weakening or preventing anonymous commenting.”

This is a tradeoff that should not be made lightly. On its face, public comment might seem more amenable to enhanced transparency than, say, voting. But in a democratic society, the ability to speak anonymously, like the right to keep one’s vote secret, enables free expression without the threat of coercion or retaliation. Anonymity fosters a diversity of viewpoints, something Farrell and Schneier recognize as crucial to a vibrant democracy. It forces others to respond to the merits of an idea, not the identity or reputation of the speaker.

When you disable channels for anonymous speech, you chill important viewpoints. When urging New Yorkers to ratify the Constitution, for example, Alexander Hamilton, James Madison, and John Jay published the Federalist Papers under the shared pseudonym “Publius.” More recently, this past winter the Department of Education held a public comment period on a proposal to overhaul schools’ handling of sexual assault and harassment reports under Title IX. The tens of thousands of responses included numerous anonymous submissions by survivors of sexual violence on college campuses. How many would have shared their stories had they been required to tell the government who they were?

In some situations, privacy makes democracy work better than transparency would. In those contexts, disclosing private facts can operate as another kind of information attack. If constituents’ comments are inescapably tied to their identities, public participation in the comment process is likely to drop: undocumented individuals affected by an immigration-related proposal, for example, or people who have had abortions and want to weigh in on an abortion-related rule, might choose not to comment at all. That would narrow the range of opinions expressed and skew comments in favor of a particular position. If pronounced enough, those distortions could undermine the legitimacy of the whole process.

It is not clear from their essay just what kind of authentication system Farrell and Schneier have in mind. To preserve anonymous speech, any such system should be able to verify a commenter without divulging her identity to the agency or the public unless she chooses to disclose it. This is a trust problem: the commenter must believe the system’s assurance of anonymity. It is also a hard security design problem. Several European countries already have schemes for electronic identification (“eID”) that allow users to access services anonymously or pseudonymously. But these can suffer from serious security flaws. For example, last year Germany’s eID system was shown to be vulnerable to an identity-spoofing attack that would allow one person to impersonate another on a website that supports eID authentication, a high-tech variation on the same stolen-identity problem that plagued the FCC’s net neutrality comment process.
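
To make the design challenge concrete, one well-studied building block for exactly this separation is the blind signature, introduced by David Chaum in the early 1980s: an identity verifier signs a commenter’s randomly generated token without ever seeing the token itself, and the agency later checks the signature without learning who obtained it. The toy Python sketch below illustrates the core arithmetic with deliberately tiny RSA parameters; it is a minimal sketch of the primitive, not a system Farrell and Schneier propose, and a real deployment would need full-size keys, proper padding, one-time-use checks, and a vetted anonymous-credential protocol.

```python
import secrets
from math import gcd

# --- Identity verifier's RSA key (toy primes; real keys are 2048+ bits) ---
p, q = 10007, 10009
n = p * q
phi = (p - 1) * (q - 1)
e = 65537
assert gcd(e, phi) == 1
d = pow(e, -1, phi)                      # private signing exponent (Python 3.8+)

# --- Commenter: generate a token and blind it before showing it to anyone ---
token = secrets.randbelow(n)             # the anonymous comment credential
while True:
    r = secrets.randbelow(n)             # random blinding factor
    if gcd(r, n) == 1:                   # must be invertible mod n
        break
blinded = (token * pow(r, e, n)) % n

# The commenter proves her identity to the verifier and submits only
# `blinded`; the verifier signs without ever seeing `token`.
blind_sig = pow(blinded, d, n)

# --- Commenter: strip the blinding factor to recover a signature on token ---
sig = (blind_sig * pow(r, -1, n)) % n    # sig == token^d mod n

# --- Agency: accept a comment only if its token carries a valid signature ---
# The agency sees (token, sig) but cannot link them to any identity.
assert pow(sig, e, n) == token
print("credential verified; commenter remains anonymous")
```

The division of knowledge is the point of the design: whoever verifies identity never sees the credential, and whoever sees the credential never learns the identity. Whether any such scheme can be made usable and trustworthy enough for mass public comment is, of course, the open question the essay raises.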

Even if people are not afraid to link their comments to their identities, the very existence of an authentication process could dampen participation. It would need to be low-friction, as quick and painless as possible. The more steps required for someone to submit a comment and confirm that it really came from her, the more likely she is to abandon the process before completing it.

It may be that, outside of a few sensitive topics, most people motivated to submit a comment to an agency do not mind attaching their name or email address, or taking an extra step to ensure their comment is counted as “real.” If that is Farrell and Schneier’s operating assumption, they do not say whether research bears it out. If the assumption holds, a simpler authentication system might be appropriate, even at the cost of losing some participants who are unwilling to sacrifice anonymity.

Whether the loss of some authentic viewpoints is worth stanching the tide of fakes depends not only on how many of the former are stymied but also on how many of the latter are not. If it is to justify the blow to participation and to America’s anonymous speech tradition, the chosen authentication system had better actually keep out the bots. As my colleague Ryan Singel observed in his report analyzing the 2017 FCC net neutrality comments, some proposed measures to stop bot comments “would likely have the effect of reducing real public participation, while not substantially curtailing bot submissions.” That would leave us worse off than we are now.


The mess we are in now predates the FCC bot debacle. The public comment process is indeed broken, but Farrell and Schneier overlook how it was being exploited well before 2017. Specifically, their essay ignores the vulnerability of agencies to regulatory capture. “Captured” agencies act in the interests of the very industries they are charged with regulating, not in the public interest they were created to serve. Under the leadership of Ajit Pai, who scrapped Obama-era net neutrality rules, the FCC has come to be viewed by many Americans as the poster child for this phenomenon.

Regulatory capture (real or perceived) erodes confidence and discourages participation in the public comment process by making it look like a sham. If policymakers are in thrall to powerful special interests and regular citizens’ voices will be ignored—or if I simply believe this is so—then why should I bother to comment?

The crisis of public faith occasioned by the outsize influence of money and power is not limited to agency rulemaking. It extends to the electoral process too, which was already inundated by corporate spending even before the Supreme Court’s infamous Citizens United decision in 2010. Rebuilding public trust in democratic institutions will require grappling with the corrosive influence of money and reining in special interests’ influence over the policy process.

Policymakers need to understand the significant harms that would come from reducing citizens’ ability to speak anonymously to their government. Chilling legitimate participation and reducing the diversity of opinions voiced would not fix the broken public comment process. It would only exacerbate the existing problems of regulatory capture and money in politics. Those problems are at least as hard to fix as the information attacks Farrell and Schneier identify, and maybe harder. But they are interrelated, so any approach to “Democracy’s Dilemma” must not leave them out. In the quest to protect democracy, we must be wary of weakening its strengths while leaving existing vulnerabilities unpatched.