
Rethinking Privacy

October 20, 2014


Photograph: John Naccarato

Anxiety about surveillance and data mining has led many to embrace implausibly expansive and rigid conceptions of privacy. The premises of some current privacy arguments do not fit well with the broader political commitments of those who make them. In particular, liberals seem to have lost touch with the reservations about privacy expressed in the social criticism of some decades ago. They seem unable to imagine that preoccupation with privacy might amount to a “pursuit of loneliness” or how “eyes on the street” might have reassuring connotations. Without denying the importance of the effort to define and secure privacy values, I want to catalogue and push back against some key rhetorical tropes that distort current discussion and practice.

One problem is that privacy defenses often imply a degree of pessimism about the state inconsistent with the strong general public regulatory and social-welfare roles that many defenders favor. Another is a sentimental disposition toward past convention that obscures the potential contributions of new technologies to both order and justice. And a third is a narrow conception of personality that exalts extreme individual control over information at the expense of sharing and sociability.


Paranoia

In urban areas, most people’s activity outdoors and in the common spaces of buildings is recorded most of the time. Surveillance cameras are everywhere. When people move around, their paths are registered on building access cards or subway fare cards or automobile toll devices. Their telephone and email communications, Internet searches, and movements are tracked by telephone companies and other intermediaries. All their credit card transactions—which, for many people, means nearly all of their transactions—are documented by time, place, and substance. The health system extracts and records detailed information about their psychic and bodily functions. Those arrested, and many who fear arrest, typically surrender a variety of personal information to the criminal justice system and often have to submit to ongoing monitoring. Even within the home, water and energy consumption are monitored, and some people choose to install cameras to monitor children or protect against burglars.

To many people, this society looks like the panopticon—a prison built as a circular structure so that the inmates can be easily observed by a centrally located authority figure. Jeremy Bentham originated the panopticon idea as a low-cost form of subjugation for convicted criminals. Michel Foucault adopted it as a metaphor for what he regarded as the insidiously pervasive forms of social control in contemporary society. To him, schools, hospitals, workplaces, and government agencies all engaged in repressive forms of surveillance analogous to the panopticon.

In the United States, the paranoid political style has traditionally been associated with the right and the less educated. But Foucault helped make it attractive to liberal intellectuals. His contribution was largely a matter of style. Foucault was the most moralistic of social theorists, but he purported to disdain morality (“normativity”) and refused to acknowledge, much less defend, the moral implications of his arguments. He gave intellectual respectability to the three principal tropes of the paranoid style.

First, there is the idea of guilt by association. The resemblance between some feature of a strikingly cruel or crackpot regime of the past or in fiction—especially in Nineteen Eighty-Four—and a more ambiguous contemporary one is emphasized in order to condemn the latter. Thus, the elaborate individualized calibration of tortures in eighteenth- and nineteenth-century penology is used to make us feel uncomfortable about the graduated responses to noncompliance in contemporary drug treatment courts. George Orwell’s image of television cameras transmitting images from inside the home to the political police is used to induce anxiety about devices that monitor electricity usage so that the hot water tank will re-heat during off-peak hours.

The paranoid political style has been associated with the right. Foucault brought it to liberals.

The second trope of the paranoid style is the portrayal of virtually all tacit social pressure as insidious. What people experience as voluntary choice is substantially conditioned by unconscious internalized dispositions to conform to norms, and a key mechanism of such conformity is the actual, imagined, or anticipated gaze of others. Almost everyone who thinks about it recognizes that such pressures are potentially benign, but people differ in their rhetorical predispositions toward them. The individualist streak in American culture tends to exalt individual choice in a way that makes social influence suspect.

Foucault disdained individualism, but he introduced a conception of power that was so vague and sinister that it could be applied to make almost any social force seem creepy. When Neil Richards writes in the Harvard Law Review that surveillance “affects the power dynamic between the watcher and the watched, giving the watcher greater power to influence or direct the subject of surveillance,” he is channeling Foucault. So is Julie Cohen, when she writes in the Stanford Law Review: “Pervasive monitoring of every first move or false start will, at the margin, incline choices toward the bland and the mainstream.”

This is a far cry from Jane Jacobs’s idea of “eyes on the street” as the critical foundation of urban vibrancy. For Jacobs, the experience of being observed by diverse strangers induces not anxiety or timidity but an empowering sense of security and stimulation. It makes people willing to go out into new situations and to experiment with new behaviors. Eyes-on-the-street implies a tacit social pact that people will intervene to protect each other’s safety but that they will refrain from judging their peers’ non-dangerous behavior. Electronic surveillance is not precisely the same thing as Jacobs’s eyes-on-the-street, but it does offer the combination of potentially benign intervention and the absence of censorious judgment that Jacobs saw as conducive to autonomy.

The third trope of the paranoid style is the slippery slope argument. The idea is that an innocuous step in a feared direction will inexorably lead to further steps that end in catastrophe. As The Music Man (1962) puts it in explaining why a pool table will lead to moral collapse in River City, Iowa, “medicinal wine from a teaspoon, then beer from a bottle.” In this spirit, Daniel Solove in Nothing to Hide (2011) explains why broad surveillance is a threat even when limited to detection of unlawful activity. First, surveillance will sometimes lead to mistaken conclusions that will harm innocent people. Second, since “everyone violates the law sometimes” (think of moderate speeding on the highway), surveillance will lead to over-enforcement of low-stakes laws (presumably by lowering the costs of enforcement), or perhaps the use of threats of enforcement of minor misconduct to force people to give up rights (as for example, where police threaten to bring unrelated charges in order to induce a witness or co-conspirator to cooperate in the prosecution of another). And finally, even if we authorize broad surveillance for legitimate purposes, officials will use the authorization as an excuse to extend their activities in illegitimate ways.

Yet, slippery slope arguments can be made against virtually any kind of law enforcement. Most law enforcement infringes privacy. (“Murder is the most private act a man can commit,” William Faulkner wrote.) And most law enforcement powers have the potential for abuse. What we can reasonably ask is, first, that the practices are calibrated effectively to identify wrongdoers; second, that the burden they put on law-abiding people is fairly distributed; and third, that officials are accountable for the lawfulness of their conduct both in designing and in implementing the practices.

The capacity of broad-based electronic surveillance—the sort that collects data on large or indeterminate numbers of people who are not identified in advance—to satisfy these conditions is in some respects greater than that of the more targeted and reactive approaches that privacy advocates prefer. The targeted approaches rely heavily on personal observation by police and witnesses, reports by informants of self-inculpatory statements by suspects, and confessions. But these strategies have their shortcomings. Scholars in recent years have emphasized the fallibility of human memory and observation. Witness reports of conduct by strangers are often mistaken and influenced by investigators. Those who report self-inculpatory statements often have dubious motivations, and, with surprising frequency, even confessions prove unreliable.

Inferences from broad-based electronic surveillance are not infallible, but they are often more reliable than reports of personal observation, and they can be less intrusive. Computers programmed to identify and photograph red light violations make much more reliable determinations than a police officer relying on his own observation. And they are less intrusive: the camera can be set to record only when there’s a violation, whereas a police officer would observe and remember much more. Yet many civil libertarians, including some ACLU affiliates, oppose these systems. One of their key arguments is that the systems generate tickets in many situations where the driver might have had an excuse for not stopping in time that would have persuaded a police officer to dismiss the violation. (The case for excuse can still be made in court, but for many drivers a court appearance would cost more than the ticket.) The argument is not frivolous, but it is a curiosity typical of this field that people concerned about the abuse of state power often oppose new technology in favor of procedures that give officials more discretion.

Broad-based surveillance distributes its burdens widely, which may be fairer.

For democratic accountability, panopticon-style surveillance has an underappreciated advantage. It may more easily accommodate transparency. Electronic surveillance is governed by fully specified algorithms. Thus, disclosure of the algorithms gives a full picture of the practices. By contrast, when government agents are told to scan for suspicious behavior, we know very little about what criteria they are using. Even if we require the agents to articulate their criteria, they may be unable to do so comprehensively. The concern is not just about good faith, but also about unconscious predisposition. Psychologists have provided extensive evidence of pervasive, unconscious bias based on race and other social stereotypes and stigma. Algorithm-governed electronic surveillance has no such bias.

The panopticon can be developed in ways Foucault never imagined to discipline the watchers as well as the watched. The most vocal demands for electronic surveillance in prisons these days come from prisoners and their advocates. Lawsuits challenging physical abuse by guards often produce court orders requiring more video cameras and restricting guards’ ability to take prisoners to areas where they are not recorded. People who worry about coerced confessions favor mandatory taping of police interviews of suspects, and many jurisdictions have adopted this practice. One response to complaints of racial profiling in traffic stops has been to have police wear body cameras that tape every encounter. Some civil libertarians oppose such practices, but those who favor them are trying to restrain state power, not enlarge it.

More generally, broad-reach electronic mechanisms have an advantage in addressing the danger that surveillance will be unfairly concentrated on particular groups; targeting criteria, rather than reflecting rigorous efforts to identify wrongdoers, may reflect cognitive bias or group animus. Moreover, even when the criteria are optimally calculated to identify wrongdoers, they may be unfair to law-abiding people who happen to share some superficial characteristic with wrongdoers. Thus, law-abiding blacks complain that they are unfairly burdened by stop-and-frisk tactics, and law-abiding Muslims make similar complaints about anti-terrorism surveillance.

Such problems are more tractable with broad-based electronic surveillance. Because it is broad-based, it distributes some of its burdens widely. This may be intrinsically fairer, and it operates as a political safeguard, making effective protest more likely in cases of abuse. Because it is electronic, the efficacy of the criteria can be more easily investigated, and their effect on law-abiding people can be more accurately documented. Thus, plaintiffs in challenges to stop-and-frisk practices analyze electronically recorded data on racial incidence and “hit rates” to argue that the criteria are biased and the effects racially skewed. Remedies in such cases typically require more extensive recording.

The critics’ preoccupation with the dangers of state oppression often leads them to overlook the dangers of private abuse of surveillance. They have a surprisingly difficult time coming up with actual examples of serious harm from government surveillance abuse. Instead, they tend to talk about the “chilling effect” from awareness of surveillance.

By contrast, there have been many examples of serious harm from private abuse of personal information gained from digital sources. At least one person has committed suicide as a consequence of the Internet publication of video showing him engaged in sexual activity. Many people have been humiliated by the public release of a private recording of intimate conduct, and blackmail based on threats of such disclosure has emerged as a common practice. Some of this private abuse is and should be illegal. But the legal prohibitions can be enforced only if the government has some of the surveillance capacities that critics decry. Illicit recording and distribution can be restrained only if the wrongdoers can be identified and their actions traced. Less compromising critics would deny government these capacities.

With low crime rates and small risks of terrorism in the United States, privacy advocates do not feel compelled to address the potential chilling effect on speech and conduct that arises from fear of private lawlessness. But we do not have to look far to see examples of such an effect abroad, and its magnitude depends on the effectiveness of public law enforcement. To the extent that law enforcement is enhanced by surveillance, we ought to recognize the possibility of a warming effect that strengthens people’s confidence that they can act and speak without fear of private aggression.


Nostalgia

Harm from surveillance that intrudes on core areas of solitude and intimacy is easy to identify. Such intrusion is rightly subject to high burdens of justification. But most surveillance is different. Often it involves conduct subject to ordinary observation in public or information that a person has willingly provided to strangers, often to facilitate business or commercial dealings.

Once we go beyond the solitary-intimate realm, it becomes harder to delimit the scope of privacy concerns. A common approach is to privilege assumptions based on past experience. Thus, the Supreme Court elaborates the constitutional prohibition of “unreasonable searches and seizures” by looking to “expectations of privacy.” Expectations are a function of custom. It follows that telescopically aided airplane surveillance of someone in his backyard is generally OK because we are used to telescopes and airplanes flying over, but using thermal imaging technology to look inside the house requires a warrant because it is a technology to which we are not yet habituated. Helen Nissenbaum, in her highly regarded Privacy in Context, takes a similar approach. Her guiding principle is “contextual integrity,” which means the implicit customary norms in any given sphere of activity. For example, a highway toll collector seeing contraband in the backseat of a car does not pose a problem to privacy because such observation is familiar, but police examination of electronic toll records to determine whether the car was near the scene of a crime at the relevant time would pose a problem.

Here again we see people of generally liberal views resorting to conservative rhetorical and theoretical tropes when it comes to privacy. Most privacy advocates probably consider the appeal to custom in arguments about the death penalty or gay marriage as a sign of intellectual bankruptcy. The distinctions that the customary principle produces seem arbitrary in relation to any substantive conception of privacy.

The substantive conception to which the advocates are most drawn is the notion of a right to control information about oneself. James Whitman argues in the Yale Law Journal that this conception evolved through the democratization of aristocratic values. The aristocrat’s sense of self-worth and dignity depended on respect from peers and deference from subordinates, and both were a function of his public image. Image was thus treated as a kind of personal property. Whitman says this view continues to influence the European middle class in the age of equal citizenship. As the ideal was democratized, it came to be seen as a foundation for self-expression and individual development.

European law evolved to express this cultural change. Whitman showed that the idea of a right to control one’s public image underlies French and German privacy law, and it appears to animate European Union privacy law, which advocates admire because its protections are stronger than those of U.S. law. For example, French and German law impose stricter limits on credit reporting and the use of consumer data than U.S. law does. The EU directive mandates that individuals be given notice of the data collection practices of those with whom they deal and rights to correct erroneous data about them. More controversially, a proposed revision prohibits decisions based “solely on automatic data processing” for various purposes, including employment and credit. By contrast, U.S. privacy law tends to be less protective and less general: it is sector-based, with distinctive regulations for health care, education, law enforcement, and other fields.

Whitman associates the weaker influence of the idea of personal-image control in the United States with the stronger influence here of competing libertarian notions that broadly protect speech and publication. Expansive notions of privacy require a more active state to enforce them. This was recently illustrated by a decision of the EU Court of Justice holding that the “right to be forgotten” may require a search engine to remove links to true but “no longer relevant” information about the plaintiff’s default on a debt. The prospect of courts reviewing Internet data to determine when personal information is “no longer relevant” highlights the potential conflict between privacy and other civil rights.

But reservations about the broad conception of dignity Whitman describes go deeper. There is a powerful moral objection to it grounded in ideals of sociability. Even in Europe, during the period in which the ideal was democratized, there was a prominent critique of it. A character in a nineteenth-century English novel preoccupied with controlling his public image is likely to be a charlatan or a loser. Not for nothing is Sherlock Holmes the most prominent hero in the canon. His talents are devoted to invading the privacy of those who would use their image-management rights to exploit others. And as he teaches that the façade of self-presentation can be penetrated by observation and analysis of such matters as frayed cuffs, scratches on a watch, or a halting gait, he sets up as a competing value the capacity to know and deal with people on our terms as well as theirs.

Expansive privacy requires more active state enforcement.

Even among innocuous characters, preoccupation with self-image control often appears as a pathology that inhibits rather than enhances self-expression and development. This preoccupation is associated with a rejection of urban life and its spontaneity and diversity. Think of Sir Leicester Dedlock in Bleak House and Sir Walter Elliot in Persuasion, minor nobles clinging to aristocratic ideals. They know that the best way to maintain control of your image is to avoid contact with strangers, people you have no power over, and clever people who might penetrate your disguises. To embrace the vitality of the city requires a willingness to give up some control over one’s image and accept risks of being understood and dealt with on terms that are not your own. In both books, the unwillingness to run these risks is associated with personal stultification.

If the right to control personal information was extended in Europe from the aristocracy to the rest of the society, it was at the same time diluted for everyone. When Darcy leaves his estate at Pemberley, he exits a world in which he is “seen as he chooses to be seen,” as the scoundrel Wickham puts it enviously. In the middle class world of Meryton, he is subjected to eavesdropping and gossip (the social media of yesteryear). And he is confronted by people, notably Lizzie Bennet, who dare to “read [his] character” back to him in their own manner. In the process of responding, he grows and finds romantic fulfillment but only by giving up control. Pride and Prejudice, perhaps the most popular novel written in English, is a treatise on the impossibility and undesirability of giving anyone control over the information about himself.

As there are emotional and social benefits to giving up control over personal information, so there are economic benefits. It is not unfair to take account of people’s credit histories in making loan decisions. When lenders do this effectively, credit is, on average, cheaper. Nor does it seem especially unfair to take account of a factor such as the purchase of home safety devices that predicts relevant behavior like repayment of a loan. Some uses of personal information should be prohibited. Where predictive information tracks axes of historical subordination, such as race and gender, there may be good reason to limit its use, as the law does with respect to various insurance decisions. The reason, however, has to do with concerns about subordination, not some broad right of privacy. The U.S. sector-based approach is better equipped than the EU categorical one to take account of the varying and competing stakes.


Individualism

A major goal of many privacy proponents is to limit collection of personal data either by regulations requiring affirmative consent for such collection or by technology that limits reading or retaining the data. They don’t want Google to be able to analyze people’s Internet searches or state governments to be able to analyze highway toll payment data without specific consent, or perhaps a warrant. They also advocate technologies such as the hardware-software package offered by the Freedom Box Foundation designed to enable users to thwart mining of their data over the Internet.

Advocates object most strongly to data collection designed to yield specific conclusions about the individual, but they persist even when anonymized data is used to assess general patterns. Since anonymization is never perfectly secure, it exposes people to risk. Moreover, the privacy norm sometimes shades into a property norm. It turns out that some people carry around economically valuable information in their bodies—for example, the DNA code for an enzyme with therapeutic potential—and that information about everyone’s conduct and physical condition can, when aggregated, be sold for substantial sums. For some, the extraction of such information without consent looks like expropriation of property. They would like to see explicit extension of property rights to require consent and compensation for use of personal information. In Who Owns the Future? (2014) Jaron Lanier develops this line of thought, suggesting that we create institutions that enable individuals to monetize their personal data—individual accounts would be credited every time a piece of data is used.

In addressing such issues, a lot depends on how we understand consent. Consent can mean clicking on an “I agree to the terms” button that refers to a mass of small-print boilerplate that hardly anyone can be expected to read. Or it may mean simply the failure to find and click on the button that says “I refuse consent.” The advocates want something more demanding. Moreover, they don’t want the cost of the decision to be too high. If insisting on privacy means exclusion from Google’s search tool or Amazon’s retail service, many proponents would view that as unfair. If Google or Amazon charged a price for not mining your data, many would call it extortion—like asking someone to pay in order not to be assaulted. So the idea of “consent” touches on deep and unresolved issues of entitlement to information.

Such issues have arisen in connection with employer-sponsored wellness programs that encourage employees to get checkups that include a “health risk assessment” designed to generate prophylactic advice. At Pennsylvania State University such a program recently provoked a wave of privacy protests, apparently directed at parts of a questionnaire that addressed marital and job-related problems, among other things. The protesters also objected that the questionnaires would be analyzed by an outside consultant, even though the information would be subject to the confidentiality provisions of the federal Health Insurance Portability and Accountability Act. The university allowed people to refuse to participate, subject to a $100 per month surcharge.

The strong privacy position has disturbing implications for medical research.

No doubt such programs may be unnecessarily intrusive and may not safeguard information adequately, but the objections made in this case do not appear to have depended on such concerns. The $100 surcharge was based on an estimate of the average additional health costs attributable to refusal to participate. The premise of the protests seems to have been that the interest in not disclosing this information even under substantial safeguards is important enough that those who disclose should be asked to subsidize those who do not.

Social change often raises new questions about rights. When airplanes first appeared over people’s homes, the question arose whether they were trespassing; when zoning codes limited what owners could build on their land, the question arose whether government had taken a portion of the owners’ property and was thus obliged to compensate them. More often than not, the law has refused to recognize claims of this sort. One reason has been fear that they would preclude many generally advantageous social practices. Another has been the belief that, except where the costs imposed by the practices cumulate visibly on particular individuals or groups, they are likely to even out over the long run. In a famous opinion declining to hold that a regulation of coal mining violated property rights, Justice Holmes spoke of an “average reciprocity of advantage” that over time obviated the need for individual compensation by distributing benefits evenly across the society.

The reciprocity theme occasionally surfaces in privacy discussion. Lanier’s proposal to monetize data arises from a sense of injustice about the relative rewards to, on the one hand, data-mining entrepreneurs and high-tech knowledge workers, and on the other, the masses of people whose principal material endowment may be their control over their own personal information. In the health sector, doctors have been caught trying to derive patent rights from information embedded in their patients’ DNA without informing the patients.

But privacy advocates rarely acknowledge the possibility that average reciprocity of advantage will obviate over time the need for individual compensation in some areas. Might it be the case, as with airplanes and zoning laws, that people will do better if individual data (anonymized where appropriate) is made freely available except where risks to individuals are unreasonably high or gains or losses are detectably concentrated? There will always be a risk that some data will be disclosed in harmful ways, such as when personal data leaks out because of ineffective anonymization. However, the key question is whether we will make a social judgment about what level of risk is reasonable or whether we shall accord property rights that allow each individual to make her own risk calculus with respect to her own data.

The latter approach would likely preclude valuable practices in ways analogous to what would happen if airlines had to get owners’ consent to pass over private property. Moreover, strengthening rights in personal data could exacerbate, rather than mitigate, distributive fairness concerns. While it is surely unfair for doctors to earn large capital gains from DNA extracted without consent, wouldn’t it also be unfair (admittedly in a lower key) for Freedom Box users to benefit from the Centers for Disease Control’s mining of Google searches for new viruses while denying access to their own Internet searches?

The strong privacy position has disturbing implications for medical research. In the past, medicine has strongly separated research from treatment. Research is paradigmatically associated with randomized controlled clinical trials. Treatment experience has been considered less useful to research because treatment records do not describe the condition of the patient or the nature of the intervention with enough specificity to permit rigorous comparisons. But information technology is removing this limitation, and, as the capacity to analyze treatment information rigorously increases, the quality of research could improve as its cost falls.

However, this development is in some tension with expansive conceptions of privacy. A prominent group of bioethicists led by Ruth Faden of Johns Hopkins has recently argued that the emerging “learning health care system” will require a moral framework that “depart[s] in important respects from contemporary conceptions of clinical and research ethics.” A key component of the framework is a newly recognized obligation on the part of patients to contribute to medical research. The obligation involves a duty to permit disclosure and use of anonymized treatment data for research purposes and perhaps also to undergo some unburdensome and non-invasive examination and testing required for research but not for individual treatment. (Anonymization is unlikely to be effective with data made generally available online, but regimes involving selective and monitored disclosure have proven reliable.) The group justifies its proposal in terms of reciprocity values. Since everyone has a good prospect of benefiting from research, refusing to contribute to it is unfair free riding.

Of course, the reciprocity idea assumes that researchers will make the fruits of the research derived from patient information freely available. People would be reluctant to agree to make a gift of their information if researchers could use it to make themselves rich. Effective constraints on such conduct should be feasible. Much medical research, including much of the highest value research, has been and continues to be done by salaried employees of charitable corporations.

Applied in this context, Lanier’s proposal to monetize individual data looks unattractive. There is a danger that a lot of valuable information would be withheld or that the costs of negotiating for it would divert a lot of resources from research and treatment. It is not clear what the resulting redistributive effects would be. Perhaps they would approximate a lottery in which the only winners would be a small number of people with little in common except that they happened to possess personal information that had high research value at the moment. At a point where we do not know who the winners will be, we would all be better off giving up our chances for a big payoff in return for assurance that we will have free access to valuable information. We can do this by treating the information as part of a common pool.

If it were the only way of transferring resources to the economically disadvantaged, monetization might be defensible as a social policy of desperation. But it seems a shabby and inefficient substitute for a decent set of public institutions to discipline monopolistic power, provide public goods, and guarantee basic income, education, and health care. Astra Taylor argues compellingly in The People’s Platform (2014) that techno-futurist discourse suffers from deep skepticism about public institutions. Yet much of the current information techno-structure, both good and bad, is a product of publicly initiated and supported research. There is no reason to think that the capacities for creative innovation that the futurists celebrate cannot be applied effectively in the public realm.

Comments

"In the United States, paranoid political style has been associated traditionally with the right and the less educated. But Foucault helped make it attractive to liberal intellectuals."

 

This is all so depressing. Bureaucracy breeds paranoia. And American liberals never understood that Foucault was not a liberal. That was the point.

 

Neither was Baudelaire.

---Immediately, I sprang at the beggar. With a single blow of my fist, I closed one of his eyes, which became, in a second, as big as a ball. In breaking two of his teeth I split a nail; but being of a delicate constitution from birth, and not used to boxing, I didn't feel strong enough to knock the old man senseless; so I seized the collar of his coat with one hand, grasped his throat with the other, and began vigorously to beat his head against a wall. I must confess that I had first glanced around carefully, and had made certain that in this lonely suburb I should find myself, for a short while, at least, out of immediate danger from the police.

Next, having knocked down this feeble man of sixty with a kick in the back sufficiently vicious to have broken his shoulder blades, I picked up a big branch of a tree which lay on the ground, and beat him with the persistent energy of a cook pounding a tough steak.

All of a sudden—O miracle! O happiness of the philosopher proving the excellence of his theory!—I saw this ancient carcass turn, stand up with an energy I should never have suspected in a machine so badly out of order, and with a glance of hatred which seemed to me of good omen, the decrepit ruffian hurled himself upon me, blackened both my eyes, broke four of my teeth, and with the same tree-branch, beat me to a pulp. Thus by an energetic treatment, I had restored to him his pride and his life. Then I motioned to him to make him understand that I considered the discussion ended, and getting up, I said to him, with all the satisfaction of a Sophist of the Porch: “Sir, you are my equal! Will you do me the honour of sharing my purse, and will you remember, if you are really philanthropic, that you must apply to all the members of your profession, when they seek alms from you, the theory it has been my misfortune to practice on your back?”

He swore to me that he had understood my theory, and that he would carry out my advice.---

 

Liberalism was and is authoritarianism: individualism ends in the rule of individuals. That's something they're beginning to learn. Just a few years ago liberal philosophers were expressing open curiosity about libertarianism, but libertarians now are explicit in their opposition to democracy. But liberals disdain the rule of law as the rule of old words, preferring the rule of their own reason. The rule of law is conservative, as republicanism is conservative compared to liberalism. Republicanism states its priors: it's a virtue ethic. Value-free science undermines that ethic in favor of truth. And the "research imperative" devolves to Stalinism.

 

http://www.ucpress.edu/book.php?isbn=9780520246645

---

Though unfamiliar to most scientists and the general public, the term expresses a cultural problem that caught my eye. It occurs in an article written by the late Protestant moral theologian Paul Ramsey in 1976 as part of a debate with a Jesuit theologian, Richard McCormick. McCormick argued that it ought to be morally acceptable to use children for nontherapeutic research, that is, for research with no direct benefit to the children themselves and in the absence of any informed consent. Referring to claims about the “necessity” of such research, Ramsey accused McCormick of falling prey to the “research imperative,” the view that the importance of research could overcome moral values.

 

That was the last time I heard of the phrase for many years, but it informs important arguments about research that have surfaced with increasing force of late. It captures, for instance, the essence of what Joshua Lederberg, a Nobel laureate for his work on genetics and president emeritus of Rockefeller University, once remarked to me: “The blood of those who will die if biomedical research is not pursued will be upon the hands of those who don’t do it.”---

 

“The blood of those who will die if biomedical research is not pursued will be upon the hands of those who don’t do it.”!! How's that for perversity?

 

In order to strengthen each of us as individuals, we need to strengthen the bonds of our relations to one another. Liberalism divides us into technocratic rulers and idiot klub kids, the scientists and their subjects, the rulers and the ruled. Hofstadter was the anti-Baudelaire, a nudnik and a creep. And Oxbridge philosophy is the politics of Mr Chips. Students have no privacy. It's the privilege only of their teachers.

 

You search for abstract truths. I say raise the level of play for all the players and let the truths take care of themselves.

The typos are my fault; the formatting isn't. Get a comment system that works.

Relying on resemblance to past regimes or fictions is guilt by association, but Jane Austen, who is not even talking about interaction between individuals and governments, is mobilized as proof that more state surveillance is good for us. Is this a joke or merely dumb?
 
To think a publisher as generally rigorous as Boston Review would peddle the kind of illogic even a college paper would be ashamed of. And to what end? Mere contrarianism, it seems to me. A bloated, grasping, baiting provocation too thoughtless to even garner a reaction, it appears.

It's a refreshing change to come across such an intellectually coherent article. Unfortunately the damage done by Foucault, Derrida, and the unemployable Sorbonne sociologists of the 1960s means it's beyond the capacity of many Western societies' 'academics', let alone the general public, to grasp.
In the 1940s-50s, growing up in a working-class home, the paranoia of poorly educated rightwingers was a fact of life to which I became resigned; but in the 1960s watching it infect the Left was (and still is) a painful shock.
More articles like this are certainly needed, even if many faux progressives are unable to understand them.

It's like publishing a defense of Stalin after the obvious is obvious. In this moment, when the revelations are astounding, Simon decides that it's all Foucault's fault? I can't stand Foucault, either -- what's he got to do with NSA data mining and all the rest?

Contrarian clickbait. Yes, anyone who is upset with the disappearance of privacy is either a frothing Ruby Ridger or a vestige of American po-mo leg-humping. Got it.

The article's premise involves a rather major misreading of, or partial, surface engagement with, Foucault's varied historical/philosophical investigations. Foucault's concern was primarily with what he called "the art of life," that is, the ability to "make" one's primary relations, through the ways and kinds of dynamics that would contribute to the "what" of one's own and others' humanity; to hone or craft, as one crafts a poem, as that poem further and unforeseeably becomes itself, by the very process of that crafting. This process involves, too, having some agency in the what/whom to which one is subjected, and the trouble comes when this unavoidable act of subjection is pre-determined by totalitarian "uses" of large institutions (such as corporate capitalism in our era) that move toward monocultures, in terms of the kinds of relations and experience that are valued (or permissible), where, and by whom.

That we are all constituted by the power of relations that in a sense "create" us is, for Foucault, unavoidable. It is the violence without which there is no human community, without which it is impossible to be human. Thus power itself, for F, is like creativity: neither fully positive nor negative, but an unavoidable force in the construction of the shared world and our own psyche-bodies. In his studies of architecture and prisons, Foucault notices the actual structures whereby values, what is considered important in the constitution and control of humanity, are constricted. Thus, he is critiquing power that shrinks rather than expands realms of thought and possible varieties of relation in human living. In terms of architecture, it's easy to see how the immensity of a Nazi administration building and of a centuries-old Roman Catholic cathedral each serve to "shrink" the person who walks into them. Massiveness of architecture inhibits the free flow of other, more subtle forms of power. Massiveness reflects an increase in the violence of power, i.e., its toll on humanity's art of life.

The article above focuses on privacy, and critiques the individualism of it, yet Foucault would be more interested in the fact that this conversation about privacy exists at all, in the terms that it does, especially that "the information age" is a given and a human's participation in it is a mandatory condition of being human (which nowadays it is). That is a kind of totalitarian use of power. He would ask, I think, what is being obscured or devalued by the prominence of the topic, the extremely limited way the questions are asked, the terms used. This isn't a critique, here, of this particular article, but of the terms which have been set against each other. This article in some sense tries to complicate those terms, which is a great thing to do.

But for Foucault, my guess, and it's only a guess, since he was an utterly unpredictable, incredibly relational and therefore wildly creative thinker and human, is that he would be less interested in the "rights of the individual" (a dialogue in which the rhetoric of the law predetermines the parameters of the discussion) vs. the "value" of the sharing of information, than in what the conversation using these dominant terms is precluding. For Foucault, to subvert power as it moves toward totalitarianism is to break open the given terms of the conversation, the accepted categories, from below, as it were, by bringing forth the energies latent within that which has been excluded from the public discourse. He is claimed as a "founder" of queer theory for exactly that reason, for refusing to accept the terms and categories of gender and sexuality (male, female, straight, gay, lesbian, bi) as the given (rather than as a historical, categorical fixing of relational behaviors) and thus the (only) possibilities. The discourse itself is always already (that's Derrida) limited by its terms. Subversive power (also not good or bad, except as counter to grossly limiting discourse) exists in the ways those categorical terms cannot stay static, but burgeon away from themselves by the excluded relations they obscure. It tends to be "from below," in the "what doesn't fit," what is excluded or present by its absence from the conversation (like Derrida's exploring the lacunae or gaps in a text, the excluded, the what-cannot-appear as constituting what does appear). That which does not appear can indicate the violence of the predominant discourse.

In the present case, a more productive (and thus more powerful) set of questions would be to ask: what issues other than individual privacy and the benefits of shared knowledge for health, crime, etc., are at stake but obscured? I noticed a quick example in the article: the inside/outside as one possible boundary of surveillance, the suggestion by some that surveillance is "ok" when it is in "public" and not so within our homes. But we are still "in public" when at home, whether or not we are ever online. We are in public whenever we read a book or watch television or make love or have a conversation, those things we do in "private." All those activities are necessarily shaped by the public realm, and that public realm is never absent from our bodies and psyches. In this way, one could argue that everywhere is public and subject to public laws.

In another direction, the inside/outside of the house is a fallacy, too, in that, when we walk out in the world, we also remain within the private, that is, within the process of making and being made as human beings. We have conversations outdoors, we read billboards and books and shake other people's hands. We are "privately" being made, with some of our own agency in fact, with each of these events. The shared glance of a stranger, when you think about it, can be the most intimate interaction of your day, the most free, chosen in the moment of the relational event. Looked at this way, surveillance intrudes on our privacy anywhere and everywhere, because to develop an "art of the self" involves a greater agency in the relations we choose that will constitute us. That we do not get to choose this omnipresent set of voyeurs is an immense detriment to the art of self by which power proliferates in the kinds and modes of actual relationships between bodies, in the modes through which power moves toward its most creative proliferations.

And there it is: one of the obscured but perhaps entirely significant dynamics, when considering the omnipresence of surveillance and information technology with respect to the art of humanity: bodies. Freedom is about bodies. Thought is bodily; knowledge (information!) is bodily. Human relations at their most intense, and thus most significantly human, are bodily, haptic. Foucault, lover of the body in the art of humanity, would ask: where is the touch between human bodies? Not as an "idea," public or private, but where and how are actual human bodies touching, and what is at stake in diminishing that greatest source of subversive power, the haptic, eros?

This is one tiny, not fully thought-out response, and this comment is much longer than I intended, especially on a busy day, but the misuse of Foucault is common, as is the setting up of the conversation as being about privacy or public information collecting/access, when that conversation itself is using terms supplied by the dominant discourse, obscuring other modes of human subjection, knowing, information, and constitutive relation that are not being considered but may be far more significant when it comes to survival and the richness of human experience. Where in these discourses of "information technology" is the mystery of how human bodies find each other and the nonhuman world in ways that proliferate humanity in unforeseeable ways? That "technology of the selves" (his term) would be more pertinent to Foucault, when one reads the body (yes!) of his work. The good news is that no one controls power, which is always dynamically working both for and against itself, so to speak. Sooner or later, the structures outpace their own power (force). The problem is, how much of our humanity, of the self, and of the soul or art of humanity entire, is decimated along the way? The cultural arts, literatures, music, philosophy, history, love, require the public space for/of their making, but being made subject to that public space in totalitarian ways impinges on the relative freedom creative making requires. Including the making of our own lives. We need to realize what's at stake is not the law but the futures of how we want to be, or can be, fully human. When it comes to the laws, we need to make fine, practical distinctions with a much, much more nuanced sense of how the public good functions and of what it is to value the privacy of the making of oneself as art.

Parts of this argument have already been proven untrue: "Eyes-on-the-street implies a tacit social pact that people will intervene to protect each other’s safety but that they will refrain from judging their peers’ non-dangerous behavior." The bystander effect has already demonstrated that people will not necessarily interfere simply because there are more "eyes-on-the-street." In fact, more studies are needed to measure the effect of the electronic eyes and whether they make the average citizen more likely to intervene--or more complacent. 
