Farrell and Schneier identify a central dilemma for social media platforms and the Internet. If a system designed to separate signal from noise—to amplify worthy contributions while suppressing misinformation, whether using moderators or machines—grows sufficiently popular, it will face a deluge of content, some of it specifically intended to game the system. Facebook has faced this challenge in banning manipulative election-related ads; Twitter, in banning hate speech; Reddit, in banning trolling and doxing; and YouTube, in banning harmful video content targeted at children or showing explicit violence. Many smaller platforms face the same challenges at varying scales, as users banned from larger sites migrate to smaller ones less able to respond quickly.
This is indeed a problem for democracy. But the threat of election interference by foreign actors through online channels is fundamentally different, and more distressing, than the threat of domestic misinformation to which Farrell and Schneier give equal, sibylline warning. The integrity of a democracy’s elections is paramount, and it is enshrined in law. U.S. legislation—both existing (the Bipartisan Campaign Reform Act, the Foreign Agents Registration Act) and proposed (the Honest Ads Act)—addresses election-related advertising and communication by foreigners and foreign media organizations, while international agreements (Article 21, Section 3 of the Universal Declaration of Human Rights, Articles 1 and 25 of the International Covenant on Civil and Political Rights) protect the right of a country’s people to determine the outcome of its own elections without foreign interference.
It is true that these legislative aims have not always been perfectly upheld. (FARA was used to support Japanese-American internment during World War II, for example, and the United States has credibly been accused of placing a thumb on the scale during other countries’ political processes, including the recent examples of Serbia, Afghanistan, Iraq, and Libya.) But their spirit is directionally correct, and they are more easily upheld—through sanctions, legal action, and technological correctives, including automatically labeling foreign-sponsored ads—than current attempts to identify and quarantine domestically produced misinformation. The label of propaganda is inconsistently and parochially applied; one country’s patriotic slogan is another’s jingoistic dogma.
What might happen if we use such laws to more comprehensively restrict foreign actors from participating in election-related discourse on social media? It is likely that they would resort to more remote means of attack: paying independent contractors and cyber-mercenaries, via forums that facilitate hacking for hire, to disseminate propaganda for profit within the contractors' own countries. Deterring this type of activity is difficult. The machine learning tools many social media platforms rely on are good at pattern matching—for example, comparing the hashes of files containing copyright-protected or otherwise illegal video and images—but less adept at determining whether a page represents a legitimate local news outlet or a recently created facsimile, or whether a video shows an actual or faked confrontation between white teenagers and Native American activists.
Even though it is costly to hire teams of content moderators to audit algorithmic recommendations, human judgment remains one of the best tools available; the context and social cues inherent in choices of account names, profile pictures, locations, and local vernacular are learned over a lifetime and only clumsily faked by those without that experience. Domestically produced propaganda, then, is both more difficult to police and more difficult to identify than foreign-produced content, as its creators may be operating with the same level of native knowledge as their audience. In domestic cases, it is also harder to tell whether the intent is to subvert the democratic process or to reform it. Our task as civically minded academics may instead be to educate and to critically evaluate claims made by our fellow citizens.
As we prepare for the 2020 U.S. elections and other global elections that may be the target of Internet-enabled interference, it is useful to anticipate how emerging technologies might enable the spread of propaganda. Future online election interference will likely take two tracks: (1) the spread of disinformation intended to discredit political candidates and the political process, to discourage and confuse voters, and to influence the online discussion of political topics; and (2) the use of information operations to disrupt election infrastructure on election day and immediately afterward, during vote tallying and result certification—including, but not limited to, changing vote records and tallies, interfering with the operation of voting machines, impeding communications between precincts and election operations centers, and spreading disinformation that misdirects voters to the wrong polling place or suggests long wait times, ID requirements, or precinct closures that do not actually exist.
We should also expect the number of bad actors to increase; Russia will not be alone. The barriers to mounting disinformation campaigns will depend less on available computing power and technical skill and more on the ability to iterate quickly among strategies, produce text in passable English or another target language, and identify psychological weaknesses in a target segment of the electorate. Russia's motivation may have been Vladimir Putin's longstanding grudge over what he saw as Hillary Clinton's accusations of unfairness in Russian political processes, but success is its own motivator, and other countries with a stake in the favor or economic standing of the United States will be emboldened by the minimal repercussions Russia has faced for its interventions.
New technologies—including deepfakes, AI text-generation engines, and more sophisticated networks of bots on Twitter and other sites that permit pseudonymous accounts—will continue to be used, by actors both foreign and domestic, to spread disinformation, discredit candidates, confuse voters, and influence the discussion of divisive political topics. These technologies need not be fully convincing, nor capable of deceiving digital forensic auditors, in order to be effective. A faked image or video that is convincing at first glance but later revealed to be a forgery will cast suspicion on other low-resolution or thinly sourced images and video, and will ultimately serve to breed doubt and resignation about all pieces of political media and the political process in general. The spread of a viral hoax can also serve to push users off associated platforms, as seen in recent pushback against YouTube content aimed at children. This might seem like a good thing—boycotting flawed platforms—but it would also narrow the channels through which information is received and disseminated.
The next few years are critical to the successful integration of democracy with technologies that amplify and enable individual involvement in political processes. There is no reason to assume that our democracies are subject to some Moore's Law of longevity, according to which they will be maintained in perpetuity by the cumulative effect of small improvements to efficiency and scale made by individual contributors and tech companies. They must also be actively defended by legislative action, judicial redress of criminal interference, and executive reinforcement of norms. It is apropos, then, that calls to regulate Facebook, and to remove Mark Zuckerberg from his William Randolph Hearst–like control of a media empire grown on sensationalism and adapted for political purposes, are increasing as attention to the 2020 election and its candidates' campaign issues grows.
The desire to prevent a repeat of the 2016 election's divisive atmosphere and post-election confusion and evasion of responsibility is both fervent and bipartisan. We must work to ensure that this desire jump-starts the technological and normative commitments and reforms that will secure both our election-related critical infrastructure and our online media platforms.
Allison Berke is executive director of the Stanford Cyber Initiative and teaches Computers, Ethics, and Public Policy and Comparative Technology Security Policy at Stanford. She has a PhD in bioengineering from the University of California, Berkeley.