It wasn’t supposed to be like this.

Farrell and Schneier remind us of the early promise of the Internet. According to this end-of-history technological utopianism, the pace of technological change would sweep away evil hierarchies and liberate humanity. After all, if simple copy machines helped take down the Soviet Union, then surely the vastly more powerful Internet would do the same for China and Iran. President Bill Clinton summarized the Wired-fed spirit of the times, joking that “there’s no question China has been trying to crack down on the Internet. Good luck! That’s sort of like trying to nail Jell-O to the wall.”

This naiveté has putrefied. Information technologies, it turns out, greatly simplified life for authoritarians—by providing tools for mass surveillance at a far greater scale and depth, allowing far subtler control over larger populations, and enabling intensely personalized coercion. Some dictators did get swept away, among them those deposed in the digitally driven revolutions of the Arab Spring, but the result hasn’t been more democracy or freedom; it has been more blood, chaos, and hatred. Now this digital despair has swept home, accelerating deep divisions in our own society. There seems to be a perverse Gresham’s Law of the Internet. In its weak form, it says “bad speech drives out good speech.” In its strong form, it is rather “Bad speech drives out good: I hope you die, asshole!”

In this climate, both the diagnosis and solutions of Farrell and Schneier hit the mark, but they are perhaps too limited. After all, the development of a society and economy so deeply dependent on the ever-accelerating and seemingly unstoppable flow of information hasn’t just caused a dilemma for democracy. There are comparable predicaments in cyber security, network neutrality, the future of work, ethics for artificial intelligence, privacy, innovation, digital inequality, Internet governance, and national borders and sovereignty. Across all these issues we face essential tradeoffs between competing public goods, each of which is right and preferable on its own, but each of which carries dangerous, often unmeasurable externalities. The system is so complex that even a partial solution to one causes unpredictable, but usually negative, knock-on effects across the others. And the demand for convenience, by users and makers alike, stifles so many promising solutions.

Not everyone bought into the breathless techno-optimism. The futurist Alvin Toffler saw that the dizzying, ever-multiplying array of possible futures drives conflict over which of the competing versions is preferable. He described future shock, in which the accelerating pace of technological change would lead to “personal and psychological, as well as sociological consequences.” He feared that unless humankind “quickly learns to control the rate of change in his personal affairs as well as in society at large, we are doomed to a massive adaptational breakdown.”

Some people adapt, even thrive, in the new information age. But many others watch helplessly as culture accelerates away from them, alienating them within their own society. “The victim,” Toffler warns, “may well become a hazard to himself and others.” Indeed, “the malaise, mass neurosis, irrationality, and free-floating violence already in contemporary life are merely a foretaste of what may lie ahead unless we come to understand and treat this disease” of future shock. This poignant description of the angst ricocheting around the globe anticipated the anger and divisiveness of the 2016 U.S. national election and its aftermath.

Yet Toffler wrote Future Shock not in 2016 but 1970. Humans are not at the starting gate of this information age but five decades along in it. Many of the challenges we face have the same root cause as those of our grandparents, only with higher speeds and less latency. The information age is only in its earliest stage, with decades or even centuries to go—either to prolong our pain or give humanity more chances to adapt.

Some of the adaptations we must make are obvious. Certainly we must encourage civility and dialogue, do better at fact checking, and drive out malicious software and botnets. Farrell and Schneier focus on the important role of negative feedback loops, “to pull democracy closer to its dynamic equilibrium.” This is surely useful. The best of these approaches will also have positive knock-on effects on the other digital dilemmas and be self-sustaining. After an initial investment, they must sustain their own momentum. (Fact checking, for its part, is unlikely to be self-sustaining.) There are countless other technical changes that would make life in the information age more bearable. But too many of these resemble the roadkill of good ideas strewn over five decades of driving on the information highway.

To have a more fundamental impact, humanity must make more fundamental changes—something different than what we’ve been trying over the past two generations. If the underlying issue driving these problems is the accelerating pace of technological change, that is where we must focus the bulk of our attention. This means a conversation on whether and how to decelerate the pace of change to give individuals and society more time to adapt.

Yet in the United States, at least, we often suffer a kind of Stockholm syndrome with innovation; we have become willing prisoners. The problem is especially pervasive in my own field of cyber security: innovation drives the introduction of new hardware and software that cyber practitioners know will be deeply insecure. Finding ourselves in so deep a hole, you might think “stop digging” would be a natural response. But no, even a hint that slowing the overall pace of innovation might give us more time and a wider range of options is usually met with horror: “If America stops innovating, others will get ahead of us.” We know each new insecure innovation makes us weaker, yet we cannot stop.

With the Internet of Things, security failures and attacks will disrupt not just objects made of bits and silicon, but objects of concrete and steel. New 5G mobile technologies will form a new central nervous system for everything. These must be deployed deliberately, to be as secure as possible, not rushed as nearly all previous major IT innovations have been. The General Data Protection Regulation, now the law of the European Union, is likely to inhibit innovation; perhaps this is a feature and not a bug, ensuring that future technological developments will adapt to us, rather than forcing us to adapt to them.

It may be that this is a false tradeoff. Despite all the historical evidence, perhaps Toffler painted too gloomy a picture; perhaps these interrelated problems can be managed by newer, smarter technologies or wiser policies enacted by inspired leaders or informed publics. Still, we delude ourselves if we do not talk clearly and explicitly about deceleration, too. It must be an option we take seriously—to solve not just Democracy’s Dilemma, but also all the other dilemmas driven by the accelerating pace of technological change.