The internet has become an environment of total tracking and total control.
The pace of technological change we have seen over the past fifteen years has been so breathtaking, so unrelenting, that it’s worth pausing to reflect on it for a moment. Fifteen years ago, our world was very different. Bill Clinton was President. The Red Sox had not won the World Series for almost a century. Mobile phones existed, but were little more than walkie-talkies with flip-tops. And the idea of total surveillance was unthinkable, a spectre of dystopian fiction and failed communist and fascist states from our grandparents’ time.
Fifteen years ago, our politics simply would not permit total surveillance, along the lines of the Stasi or J. Edgar Hoover’s COINTELPRO. The veterans of the cold war against communism and the hot war against fascism were still alive and vigilant. Surveillance was not only politically inconceivable but also technically impossible. Telephone metadata existed (including new forms of metadata for mobile phones), but the amount of data humans generated in their daily activities was vastly smaller.
What a difference fifteen years makes. Our technological and political environment has radically changed. The combination of the consumer phase of the digital revolution and the decrease in the political will to watch the watchers has meant that far more about us is digitized, much of it without any oversight or regulation. After the shock of 9/11, Congress authorized the Patriot Act, and many secret government programs (some legal, some not) operated under the radar.
As we move forward, it is essential that we figure out how to translate the values of free speech, privacy, due process, and equality into the new digital environment—what they mean in an era of big data and pervasive surveillance, and how to build them into the fabric of our digital society. We might decide to reject any idea of digital due process. Or we might embrace it fully. Most likely, we’ll end up somewhere in between. Regardless, privacy rules of some kind are inevitable; what those rules look like depends on our democratic environment.
Let’s consider first the inevitability of privacy. We often think of privacy as a factual state: how much do people know about me? As more and more information is collected and tracked, and fewer dimensions of human life remain opaque to observation, privacy would seem to be in retreat, perhaps irreversibly so. From that perspective, it is easy for commentators to suggest, glibly, that privacy is dead. Just this week, Thomas Friedman suggested in the New York Times that “privacy is over.”
But there are other ways of thinking about privacy. Websites and doctors’ offices have privacy policies about how they will keep your information private and confidential. In an information age, this way of understanding privacy—as keeping secrets rather than merely being secret—will become more important. When we collect information about people, what happens to that information? Is its use unrestricted? Is its disclosure unrestricted? These areas will be regulated in one form or another. Law will play a role, and if Congress is unable or unwilling to regulate, then leadership will come from elsewhere, whether the White House, the FTC, or foreign sources of law, like this month’s decision by the European Court of Justice, in a case referred from Spain, giving people a right to control how search engines report results about them. Globally operating technology companies are bound by global rules, and European and Canadian regulators don’t buy the “death of privacy” fallacy. Even putting law to one side, information rules will be imposed inevitably through social norms, technology, or the competition of the market. Witness Facebook’s continual improvement of its “privacy controls” after a decade of pressure.
When we understand that “privacy” is shorthand for the regulation of information flows, it’s clear that information rules of some sort are inevitable in our digital society. The idea that privacy is dead is a myth. Privacy—the rules we have to govern access to information—is just changing, as it has always been changing. The rules governing the creation, ownership, and mortality of data can be permissive or restrictive; they may create winners and losers, but they will exist nonetheless. And some of those rules will be not just privacy rules (rules governing information flows) but privacy-protective rules: ones that restrict the collection, use, or disclosure of information.
Consider the National Security Agency. The NSA purports to prevent harm by tracking our movements and communications—denying us a factual state of privacy we have enjoyed in the past from the state. This window into our lives is one kind of privacy rule. But the NSA also argues that it needs to perform its operations in secret—secret data collection, secret technologies, secret courts. It claims that if it were forced to disclose its operations, the targets of its surveillance would be able to avoid it. This is also a privacy rule—the NSA argues that operational privacy is necessary for it to do its job. Facebook and other technology companies also use trade secret law, computer security tools, and non-disclosure agreements to keep their own data private. When the very entities that would deny the existence of privacy rely on privacy rules to protect their own interests, it becomes clear that privacy is not doomed. This is what I call the transparency paradox.
But if we care about civil liberties, we need to foster an ecosystem in which those liberties can thrive. Take freedom of speech, for example. We often (correctly) talk about the robust culture of free speech enjoyed by Americans. Certainly, the Supreme Court’s interpretation of the First Amendment has played an important role in the exercise of this essential freedom. Legal doctrine has been important, but the cultural, social, and economic inputs have been equally essential in making free speech possible. Without a robust democratic atmosphere, freedom of speech can become a shallow protection, in which people might say a lot without saying anything of substance at all.
In the twentieth century, this atmosphere was created by a large middle class, universal literacy, broad access to education, a culture of questioning authority and protection for dissenters, and cheap postal rates for printed matter, among other things. In the digital age, if we care about our democratic atmosphere, we need to worry about things like access to technology, the “digital divide,” network neutrality, digital literacy, and technologies to verify that the data on our hard drives hasn’t been tampered with. We also need to ensure access to effective technological tools like cryptography, information security, and other technologies that promote trust in society. Reed Hundt’s fine essay “Saving Privacy” reminds us that government transparency is essential for democracy, that we need to empower individuals in their use of privacy and security tools, and that there will still be an essential role for law to play in the digital world we are building together.
Most of all, though, we need to worry about intellectual privacy. Intellectual privacy is protection from surveillance or interference when we are engaged in the processes of generating ideas: thinking, reading, and speaking with confidantes before our ideas are ready for public consumption. Law has protected intellectual privacy in the past. But the digital revolution has raised the stakes. More and more, the acts of reading, thinking, and private communication are mediated by electronic technologies, including personal computers, smartphones, e-books, and tablets. Whether we call it surveillance or transparency, being watched has effects on behavior. When we watch the NSA or the police, they behave better. And when the police watch us, so do we, whether it is not speeding for some of us or not stealing for others.
But critically, when we use computers to read, think, and make sense of the world, there is no such thing as a bad idea or bad behavior. If our society is to remain free, we must be able to engage with any ideas, whether we agree with them or not. This is true across a range of topics, from Mein Kampf to the Vagina Monologues, and from erotica to Fox News. But constant, unrelenting, perpetual surveillance of our tastes in politics, art, literature, TV, or sex will drive our reading (and by extension our tastes) to the mainstream, the boring, and the bland. As we build our digital society, we need to ensure that we carve out and protect the intellectual privacy that political freedom requires to survive.
Fifteen years ago, the Internet was heralded as a great forum for intellectual liberation—a place to think for ourselves and meet like- (and different-) minded people unmediated by censors or surveillance. Yet, incrementally, the Internet has been transformed from a place of anarchic freedom to something much closer to an environment of total tracking and total control. All too often, it may seem like the digital future is unfolding before our eyes in some kind of natural and unstoppable evolution. But the final state of Internet architecture is not inevitable, nor is it unchangeable. It is up for grabs.
In the end, the choices we make now about surveillance and privacy, about freedom and control in the digital environment, will define the society of the very near future. I fear that the “privacy is dead” rhetoric is masking a sinister shift, from a world in which individuals have privacy but exercise transparency over the powerful institutions in their lives, to a world in which our lives are transparent but the powerful institutions are opaque. That is a scary future, and one that we’ve told ourselves for decades we don’t want. The availability of cheap smartphones and free apps shouldn’t change that. We should choose both control of our digital information and the benefits of our digital tools. We can make that choice, but the “privacy is dead” rhetoric is obscuring its very existence.
Let’s realize that privacy—some system of rules governing information—is inevitable, and argue instead about what kind of digital society we are building under the rhetoric. If we care about living in a society with free speech and free minds, let’s be sure that intellectual privacy is part of our digital future.