The Internet was going to set us all free. At least, that is what U.S. policy makers, pundits, and scholars believed in the 2000s.  The Internet would undermine authoritarian rulers by reducing the government’s stranglehold on debate, helping oppressed people realize how much they all hated their government, and simply making it easier and cheaper to organize protests.

Today, we live in darker times. Authoritarians are using these same technologies to bolster their rule. Even worse, the Internet seems to be undermining democracy by allowing targeted disinformation, turning public debate into a petri dish for bots and propagandists, and spreading general despair. A new consensus is emerging that democracy is less a resilient political system than a free-fire zone in a broader information war.

This despairing, technologically determinist response is premature. Yes, the Arab Spring wasn’t the twilight of dictatorship, but today isn’t the twilight of democracy, either. Still, we agree that to the extent democracy has revealed systemic weaknesses, we should be working overtime to repair them.

To pursue this project of repair, we need a better understanding of democracy’s resiliency in the face of information attacks. Building that understanding is harder than it might seem. Our theories have mostly assumed that democracies are better off when there is less control over information. The central assumption, which owes much to John Stuart Mill and Louis Brandeis, is that the answer to bad speech is more and better speech.

We need new frameworks to understand the limits of this optimistic view. Changes in technology have made speech cheap, and the bad guys have figured out that more speech can be countered with even more bad speech. In this world, the easy flow of information can cause trouble for democracy.


To understand the informational weaknesses of democracy, we propose to start in what may be a surprising place: the parallel weaknesses of autocracies. Consider what is called the “Dictator’s Dilemma,” the tradeoff autocrats face between political stability and open information flows.

On the one hand, accurate and freely available information helps governments, including authoritarian ones, to run better: by alerting government officials to what is happening among their citizens, allowing markets to function properly, and identifying corrupt low-level officials who stymie policy and make citizens unhappy. A government without accurate information about its country and its people risks enacting unpopular and ineffective policies it might otherwise have avoided.

On the other hand, freely available information can also undermine an autocracy. It allows citizens to figure out how deeply unpopular a detested regime is, builds up their confidence in shared collective action, and makes it easier for regime opponents to turn that confidence into popular protests. A despot who allows information to flow freely risks losing power. Authoritarian regimes constantly need to think about how to protect their own power while maintaining some economic and political efficiency. They thus view the open Internet as threatening their stability, but they also worry about how restricting information can hurt economic growth and damage the ability of the higher levels of government to keep track of what the lower levels are doing.

Autocrats have addressed this dilemma with a variety of mitigating strategies that strike tradeoffs between the risks and benefits of free information. Traditionally, authoritarian governments have tried to restrict access to most information and to limit speech through censorship. More recently, Margaret Roberts has explained how the Chinese government uses “flooding” techniques to maintain domestic stability. Instead of just censoring people, it seeds public debate with nonsense, disinformation, distractions, and vexatious opinions and counter-arguments, making it harder for its opponents to mobilize against it.

What we need now is to understand the corresponding Democracy’s Dilemma. Democracies depend on the free flow of accurate information more fundamentally than autocracies do, not only for functioning markets and better public policy, but also to allow citizens to make informed voting decisions, provide policy input, and hold officials accountable. At the same time, information flows can be manipulated to undermine democracy by allowing the unchecked spread of propaganda and pseudo-facts, all made more efficient by the Internet, automation, and machine learning. This is Democracy’s Dilemma: the open forms of input and exchange that it relies on can be weaponized to inject falsehood and misinformation that erode democratic debate.

Understanding Democracy’s Dilemma will require that social scientists—who try to explain democratic legitimacy and why people accept election outcomes that aren’t in their short-term interests—think more carefully about the information and knowledge that democracy requires. It will require, too, that information security specialists—who model information systems and their vulnerabilities—think more carefully about how the more complex systems underpinning legitimacy and shared beliefs can create serious vulnerabilities. Focusing on Democracy’s Dilemma may help us to cut the problem down to size. We can begin to understand what the fundamental threats are, and how best to respond to them. Political science questions about what citizens need to know (or believe they know) for democracy to be stable must be translated into information security questions about the attack surface and threat models of democracy, and vice versa.


Over the past few decades, political scientists such as Adam Przeworski and Barry Weingast have explained democratic stability as a kind of institutional equilibrium: it depends on rules that citizens and politicians respect. In Przeworski’s understanding, the rules tend to channel uncertainty in stability-enhancing ways. Losers accept electoral outcomes rather than trying to overturn democracy because they know they have a chance of doing better later. In Weingast’s framework, rules coordinate shared expectations about how the political system works, which then allow people with diverse perspectives to engage peacefully and fruitfully with each other in civil society and commercial exchange.

Such models help us understand what can go wrong with democracy. Steven Levitsky and Daniel Ziblatt’s excellent best-selling book, How Democracies Die (2018), depicts the decline of U.S. democracy as resulting from the decay of norms that previously stabilized political competition. This analysis may be right up to a point, but it also leaves a lot out. Some instability is precisely what democracy is good for. As their critics point out, democracies often make progress by tearing up old norms—about gender and race, for example—that oppress groups or that just don’t make sense anymore. Norm erosion, per se, is not a bad thing.

More fundamentally, theories that focus on shared rules and norms don’t sufficiently capture the central place of disagreement in democracy. For sure, shared expectations are being eroded by disagreement—over whether voting is fair, whether the other party is committed to democracy, whether the institutional knowledge located in traditional authorities such as government, journalism, and science can be trusted. Yet some shared expectations—again, such as oppressive race and gender norms or simple deference to lazy centrist shibboleths—might richly deserve to be destabilized. Preserving democracy means preserving the space for democratic disagreement and the capacity to question, challenge, and demolish institutions that are no longer fit for purpose.

What we really need is what complex systems theorists would call a “dynamical” model of democracy, which would capture how democratic systems can remain stable in the face of deep disagreements and the changing needs of a complex environment.

Building such a model is hard, but one advantage of a well-functioning democracy over autocracy is that it can draw on the differences of perspective among its population, harnessing disagreement to solve problems. The open-endedness of democracy doesn’t just provide greater stability. It can also turn dispute into an engine of creativity. As political theorist Nancy Rosenblum argues, partisan competition has great virtues. Disagreement can lead parties to organize around the key problems of society and how to solve them, winning or losing power according to their ability to convince voters. If one party claims that climate change is a problem and that the best way to solve it is through large-scale government action, the other may advocate market measures instead.

Such disagreement can lead to better policy making, as long as it is anchored in a roughly shared understanding of the common problems that confront society. Disagreement over how to tackle global warming will be useful only if people accept that global warming is a real problem. If voters are truly irrational, then efforts to appeal to them may lead to misguided policy. Of course, the reality of politics is more complicated than this, but we have no hope of getting the politics right if we get the facts wrong.

This creative aspect of democratic disagreement can also be a source of institutional dynamism, provoking parties to relentlessly probe the deficiencies of government and to argue for institutional improvements. For example, most of the opposition to partisan gerrymandering is driven by the party or parties disadvantaged by it. Another example is the civil rights movement, in which people angry at the whole system of discriminatory rules and informal norms fought for substantial, if still grossly imperfect, institutional changes.

Yet this kind of competition can also be the source of bad policy. When one party does not want to acknowledge that climate change is a problem, because the plausible solutions would damage the interests of its backers, it may try (as U.S. Republicans are trying) to undermine the science by telling its supporters that scientists are lying, or by suppressing findings. It might even start dismantling the institutional architecture that supports such knowledge.

To take another example, the newly Republican-majority U.S. Congress got rid of its Office of Technology Assessment in 1995 in part because its scientists told inconvenient truths about Republican policies, including the feasibility of Ronald Reagan’s Strategic Defense Initiative. Parties struggling to compete with each other will always find it tempting to rig the game in their favor, making democratic competition a potential source of democratic instability.

Democratic policy success and stability thus involve a dynamic, rather than a static, equilibrium. But if partisan disputes within democracy become so polarized that one party pursues institutional changes that lock in its advantages, the equilibrium may break down.

Maintaining this dynamic equilibrium is at the crux of Democracy’s Dilemma: How do you maintain democratic rules and preserve deep disagreement? If democracy is to succeed in responding creatively to new problems, it needs to be able to draw upon the diverse beliefs of its citizens. Yet it also has to ensure that these differences don’t destabilize it.

This suggests that democracy can go wrong in two ways. First, if it suppresses disagreement or diversity among its citizens, either by censoring particular perspectives or by drowning them out by amplifying others, it starts to lose its advantage and drift towards autocracy. Illiberal democrats such as Viktor Orbán in Hungary are on such a route. Second, if it is so overwhelmed by the differences it contains that different factions or parties no longer believe in each other’s commitment to democracy, it will become liable to disruption, dysfunction, and disarray.

These risks aren’t new. Scholars have long realized that open forms of communication and exchange could be weaponized. What has changed is technology. Previously, it was hard to game the system on any serious scale. New technologies—especially centralized social media platforms, automation, and now machine learning—have removed that barrier. Now individual propagandists can have outsized effects while disguising their own origins, and machine-generated opinionating will soon be able to overwhelm conversations online.

Such changes may make democracy less stable. It is plausible that there are self-equilibrating pressures within a well-functioning democracy that help to prevent it from sinking into chaos or autocracy. But it is also possible to have a self-reinforcing cycle of failure. A faltering illiberal democracy could give way to chaos and factionalism. A chaotic democracy will have weaker equilibrating forces and hence provide greater opportunities for an autocrat to take and cement power. A chaotic democracy is also more vulnerable to outside attack, especially by regimes wanting to exacerbate the chaos in order to serve their own strategic objectives.
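
To make this contrast concrete, consider a toy simulation. The sketch below is purely illustrative, not a validated model of any real polity: the quantity it tracks (each faction’s distrust of the other) and its parameters (a damping term standing in for self-equilibrating pressures, an amplification term standing in for the self-reinforcing cycle of failure) are hypothetical simplifications, chosen only to show how the same small shock can either decay or spiral.

```python
# Purely illustrative toy model; all parameters are hypothetical.
def simulate_distrust(steps=50, damping=0.3, amplification=0.05,
                      shock=0.2, baseline=0.1):
    """Track two factions' distrust of each other over time.
    Each side's distrust rises with the other side's distrust (a positive
    feedback loop) and is pulled back toward a baseline (a negative
    feedback loop). One side starts with an extra 'shock' of distrust."""
    a, b = baseline + shock, baseline
    for _ in range(steps):
        a_next = a + amplification * b - damping * (a - baseline)
        b_next = b + amplification * a - damping * (b - baseline)
        a, b = min(a_next, 1.0), min(b_next, 1.0)  # 1.0 is maximal distrust
    return a, b

# When damping dominates, the shock decays back toward the baseline;
# when amplification dominates, both sides spiral toward maximal distrust.
print(simulate_distrust(damping=0.3, amplification=0.05))  # settles near baseline
print(simulate_distrust(damping=0.05, amplification=0.3))  # climbs to 1.0
```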


This description of Democracy’s Dilemma helps us to identify what information security specialists would describe as the “attack surface” of democracy. Democracy is vulnerable to attacks that create positive feedback loops of self-reinforcing and damaging expectations. Such attacks succeed when they exacerbate already existing political and social divisions, transforming disagreements that might otherwise be an engine for valuable policy and institutional change into disruptive spirals of mutual distrust.

These spirals are most likely to develop where distrust—the sense that others might be able to rig the game—is mutually reinforcing. The most obvious target is elections, but there are other targets as well, such as legislative hearings and public comment processes—forms of public consultation that guide the policy making process. Other institutions, such as the U.S. census, shape politics by determining the allocation of congressional seats and shape policy by determining the facts on which spending decisions are based. All of these mechanisms are crucial to democracy, and all are vulnerable to disruption.

Consider a hypothetical example. If I believe that Donald Trump is Vladimir Putin’s catspaw, or that socialists are plotting to remove the president, I won’t simply accept it when my preferred candidates lose elections. Similarly, if I discover that my opponents aren’t committed to democracy and will cheat with a good chance of getting away with it, then I may be tempted to cheat myself. This could create a self-reinforcing dynamic of fear and distrust that might start to undermine general confidence in elections and lead to general refusal to comply with electoral results.

Such dynamics explain why Russian influence operations didn’t just try to exacerbate disagreements between conservatives and liberals, but also between Black Lives Matter demonstrators and their opponents and between Bernie Sanders supporters and Hillary Clinton supporters. The effort was to turn pre-existing tensions into outright distrust. These dynamics also suggest that Russian influence operations weren’t primarily aimed at getting Trump elected, but instead at ensuring maximum chaos in the event of a Hillary Clinton victory. Just before the election, “Guccifer 2.0,” a front for Russian military intelligence, was claiming that the election was rigged, thus helping build a case for angry Trump supporters to try to bring U.S. politics to a standstill.

These examples illustrate how international actors can weaponize information flows to target the domestic politics of democracies. Moreover, they can do so by repurposing the same tools of confusion and disarray that they have developed to shore up their own domestic security. The very Internet information operations that autocracies use to survive are used to destabilize democracies.

Domestic political actors, however, may also disrupt collective political knowledge about democratic processes, either deliberately (where they believe that they will benefit from weaker democratic institutions) or as a side product of short-term goals (where, for example, they disingenuously contest electoral results, with possible longer-term implications for their supporters’ trust in the electoral process).

This means we can’t focus solely on external manipulation efforts such as those from Russia. Its 2016 campaigns are primarily important as specific evidence of a more general set of informational vulnerabilities, where domestic actors are likely to cause bigger and more immediate problems.

Consider, for example, three apparently unconnected recent controversies: efforts by groups supporting Democrat Doug Jones in the 2017 Alabama special Senate election to manipulate social media against conservatives; problems in the Federal Communications Commission (FCC) 2017 commenting process on net neutrality; and disagreements over the 2020 U.S. census. None of these examples involved direct foreign attacks on U.S. democracy—the kind of attacks that most of the news media focuses on. But they do illustrate different potential attack vectors against the common political knowledge that helps to stabilize democracy.

Take the Doug Jones case first. Although there is disagreement over circumstances, extent, aims, and lines of authority, it appears that groups supporting Jones—the Democratic candidate in a heated 2017 Alabama special Senate election—used manipulative techniques on Facebook to target conservatives. Specifically, they created a fake Facebook page aimed at dividing Republicans. This page amplified reports that Roy Moore, the Republican candidate, had pursued teenage girls. It also suggested that he was supported on social media by Russian bots.

The effort appears, in part, to have been spurred by the desire to fight fire with fire: in this case, the belief that the Russians had manipulated social media to help Trump get elected led some Democrats to experiment with comparable tactics. If Republicans in turn escalate their own use of such techniques, the likely consequence of this spiral will be increased disagreement and contestation over the legitimacy of elections. This consequence is a by-product, rather than a direct aim, of the actors involved. The news that Republican consultants are setting up fake local newspapers ahead of 2020 provides just one example suggesting that this escalation is underway.

Now consider efforts to game the FCC’s comment process in the 2017 fight over net neutrality. Net neutrality is highly popular with a mobilized and technically literate population of Internet users, but it is universally disliked by large telecommunications companies, since it restrains them from using their privileged position to extract rents. When the FCC, under new Trump-appointed chairman Ajit Pai, proposed abandoning net neutrality, it had to accept public comments. The result was a flood of legitimate comments from identifiable citizens, who were overwhelmingly in favor of keeping net neutrality, as well as a far larger flood of automatically generated comments—millions of which had erroneous or stolen email addresses—that supported Pai’s plans to abandon it. The resulting numbers apparently favored getting rid of net neutrality, but only because of a deliberate generalized attack on the FCC’s public commenting system. The attack undermined the legitimacy of the commenting process, robbing the supporters of net neutrality of what would otherwise have been an overwhelming demonstration of apparent public support. Ongoing investigations by the New York attorney general’s office and the FBI have targeted telecommunications trade groups, lobbyists, and advocacy organizations with subpoenas. The episode calls into question the integrity of all future public input processes.

Finally, the 2020 U.S. census process is emerging as another new information battleground. The census plays a key role both in determining the distribution of seats in the House of Representatives and in the allocation of spending in a wide variety of government programs. It is an example of common political knowledge: citizens should generally agree on its accuracy, but that agreement is now being undermined.

The Trump administration wants to include a new question about citizenship status, which would likely depress the willingness of noncitizens, especially those with complex visa situations, to respond to census officials. This response bias would lead to a substantial undercount of noncitizens. This particular problem is exacerbated by activists on social media who are circulating information aimed at convincing noncitizens and minorities not to fill out the census. They are doing so in order to “protect” those who could be targeted for deportation, of course, but at the same time they are undermining an important source of shared political knowledge in pursuit of particular political goals. These actions, combined with other problems of funding and mismanagement, may result in many believing that the census is inaccurate, further bolstering beliefs that the government, representational votes, and funding allocations are rigged.


These three examples illustrate the vulnerabilities of democracy to specific informational effects and techniques. They all disrupt common political knowledge. Defending against them requires a very different approach than the information war perspective—which emphasizes deterrence, counterattack, and active defense—that currently dominates public argument. It also means avoiding any easy equivalence between the informational problems of autocracies and the informational problems of democracies. It is cheaper for autocracies to resort to censorship and information control because they rely less on decentralized choice and distributed public disagreement as engines of innovation.

Instead, we need to think about how to build negative feedback loops that pull democracy back closer to its dynamic equilibrium. What do negative feedback loops look like? Let us go back to the three examples discussed above.

First, elections. The Doug Jones case spiraled so easily because Democrats felt the 2016 election had been unfairly manipulated. It is not just fake news: Russian hackers were known to have probed voter rolls in several states. While there was no evidence of vote tampering in the U.S. presidential election, the climate was one of distrust. Additionally, Russia attempted and failed to manipulate the public announcement process of the 2014 Ukrainian presidential election. If this had succeeded, it would have influenced public willingness to accept the result.

Security experts already have a well-established consensus on how to protect the core function of voting: voter-verifiable paper ballots, random post-election auditing, and better coordination between the federal government and state and local authorities could alleviate mistrust and lead to better policies. But stopping voting abuses—or even the perception of voting abuses—from spiraling out of control also requires more vigorous enforcement of the law. Election security is about much more than voting systems. First Amendment rights should not provide a general license for actively deceitful political communications strategies. Unchecked, these will cause a cycle of deceptive action, counteraction, and reaction to counteraction.
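
To make the auditing idea concrete, here is a deliberately simplified sketch. It is not a statistically rigorous risk-limiting audit; it simply draws a random sample of paper ballots, compares them with the machine-reported record, and flags any mismatch for escalation. The ballot records are hypothetical placeholders.

```python
import random

def audit_sample(reported, paper, sample_size, seed=None):
    """reported and paper are parallel lists of ballot interpretations
    (hypothetical placeholders). Draw a random sample of ballot positions
    and return any positions where the machine record disagrees with the
    voter-verifiable paper ballot; a mismatch is grounds for escalating
    to a larger sample or a full hand count."""
    rng = random.Random(seed)
    positions = rng.sample(range(len(reported)), sample_size)
    return [i for i in positions if reported[i] != paper[i]]

# Hypothetical example in which ten ballots were misrecorded by machines.
reported = ["Candidate A"] * 950 + ["Candidate B"] * 50
paper = ["Candidate A"] * 940 + ["Candidate B"] * 60
print(audit_sample(reported, paper, sample_size=100, seed=2020))
# A sample this small may or may not catch the discrepancy; real audit
# designs choose the sample size so that the chance of missing an
# outcome-changing error is acceptably low.
```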

Second, we need to recognize that the public policy commenting process is broken. Awareness of the problem defends against attacks aimed at misrepresenting public opinion, but not attacks aimed at further destabilizing confidence in public commenting as a whole. The problems are only going to become worse as machine-learning systems are deployed to generate realistic comments and synthetic media. Soon representatives will not be able to tell whether they are hearing from an actual constituent or a propaganda-spewing bot. Some combination of authentication of commenters with after-the-fact random sampling and auditing would likely mitigate the problem, albeit at the cost of weakening or preventing anonymous commenting. Michael Neblo, Kevin Esterling, and David Lazer have proposed exciting new forms of online town hall deliberation, though building these at scale will require serious security design.
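
Below is a rough sketch of what after-the-fact random sampling and auditing of comments might look like. It assumes a hypothetical verify() check (say, an email confirmation or an identity attestation); the data structures and the escalation threshold are illustrative assumptions, not a worked-out proposal.

```python
import random

def estimate_unverifiable_share(comments, verify, sample_size, seed=None):
    """comments is a list of submission records (hypothetical structure);
    verify(comment) should return True when the commenter can be confirmed
    as a real, consenting person. Returns the share of sampled comments
    that could not be verified, a rough estimate for the whole docket."""
    rng = random.Random(seed)
    sample = rng.sample(comments, min(sample_size, len(comments)))
    unverifiable = sum(1 for comment in sample if not verify(comment))
    return unverifiable / len(sample)

# Hypothetical docket in which roughly one comment in seven is bogus.
docket = [{"email": f"user{i}@example.com", "confirmed": i % 7 != 0}
          for i in range(10_000)]
share = estimate_unverifiable_share(docket, verify=lambda c: c["confirmed"],
                                    sample_size=500, seed=7)
print(f"Estimated unverifiable share: {share:.1%}")
# An agency might escalate to a fuller review whenever the estimate
# crosses some threshold (say, 5 percent); that threshold is an
# illustrative assumption, not a recommendation.
```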

Lastly, protecting institutions such as the census from manipulation will require significant institutional reforms. Steve Ballmer has suggested applying the same kinds of political protections to the Census Bureau as exist for the office of the comptroller general—such as a lengthy term of office and onerous appointment requirements. This would be a good start, but combating disinformation will also require greater transparency of process, so that the public understands both how the census is carried out and how decisions about the census are made. Additionally, the agency needs a budget sufficient both to carry out the census at an appropriate level of sophistication and to provide public education that encourages participation and builds broad trust in the results. More attention to the cybersecurity of the systems that collect and aggregate census information is also essential, especially as people will be able to fill out the 2020 census online.

The goal of these proposals is not to eliminate dissent about democratic institutions and policies. Maintaining dynamic stability does not mean trapping political actors in existing institutions like flies in amber, but instead guiding dynamic forces so that they help reinforce the equilibrium, strengthening democracy rather than undermining it.

Our goal is to channel dissent in ways that reinforce democracy and make it stronger. The best way to tackle Democracy’s Dilemma is to build appropriate and justified confidence that democratic processes indeed work as they ought. This isn’t a call to make democracy fairer. While that is important, it is different from the related problem that we describe here: to make democracy more resilient to attempts to game it. As long as powerful actors can advance their interests at the expense of the body politic, democracies are vulnerable to information attacks. We need to redesign political systems and institutions with security against such gaming in mind. While this implies better and more visible public communication, it also implies the existence of institutions and systems that work more or less as they are supposed to.

Our claim that the best way to shore up democracy’s vulnerabilities is to strengthen democratic institutions is unsurprising, but it is surprisingly uncommon in contemporary debate. A strong democracy is more likely to be a secure one, and the vulnerabilities of U.S. democracy to informational attacks reflect problems in democratic institutions, rather than the unerring skill and craftiness of the attackers.

Addressing Democracy’s Dilemma will involve figuring out new tradeoffs between openness and vulnerability. This has always been a core problem of democracy, but solving it today will require new techniques and the development of new kinds of knowledge. If democracy is a complex system, helping it work better is a complex problem that will benefit from the varying perspectives not only of political scientists and information security specialists but also sociologists, economists, lawyers, engineers, media scholars, and data scientists. In the long run, we need a new set of disciplinary debates to emerge organically from these arguments: a democratic counterpart to the political technologists of authoritarian and semi-authoritarian regimes.

The risks of inaction are serious. In an optimistic scenario, we would experience weaker and more chaotic democracies, with poorer economic and social outcomes for citizens. In a pessimistic one, we could see a backsliding of democracies towards authoritarianism. In either case, democracies will become more vulnerable to manipulation by foreign authoritarian governments—and domestic authoritarian influences—who are better able to wield information power to advance their own objectives.

Radical changes to make democracy work better aren’t simply important in themselves; they are also justified by security concerns. Such changes are not easy to accomplish in a political system that seems almost purposely designed to stymie such reforms, but we hope we have provided a broader rationale for them, as well as the beginnings of a systematic approach for thinking through what they might involve.