How Good Health Became a Numbers Game
It is no longer necessary to feel ill in order to be ill.
June 5, 2017
13 Min read time
Risky Medicine: Our Quest to Cure Fear and Uncertainty
University of Chicago Press, $26 (cloth)
Good health today has become a numbers game. Clinics take your measurements—blood pressure, heart rate, height, weight, age—regardless of whether you feel ill, and if you are ill, therapeutic decisions are now largely statistical, turning on probabilistic calculations. Screening for prostate cancer might reveal a silent, deadly tumor, or a non-threatening, slow-growing one that would never have caused trouble. To diagnose is to assign an ailment and its treatment to a numbered category suitable for billing and parsable as data. The randomized clinical trial has become the basic mechanism for determining whether therapies work, and its measure of proof is statistical significance.
It hasn’t always been this way. As late as 1950 most physicians believed statistics might be useful for epidemiologists but had little relevance for clinical medicine. Statistics can tell you whether most people get better, but never if a particular patient will. For this reason, many doctors were suspicious of averages, rates, and distributions; instead they focused on unique cases, specific trajectories, and distinctions that made a difference for each individual patient.
The rise of medical statistics has not just changed the way physicians evaluate therapies. It has also fundamentally changed our understanding of health and disease. As physician and historian Robert Aronowitz argues in Risky Medicine: Our Quest to Cure Fear and Uncertainty, it is no longer necessary to feel ill in order to be ill. A patient may feel fine and yet be treated as sick because her indicators point to elevated risk of disease or premature death. The experience of being “at risk” has, Aronowitz contends, converged with the experience of disease itself.
• • •
When science-minded reformers first began to target medicine in the late nineteenth century, they did so on the basis of the laboratory sciences, rather than statistics. In the wake of germ theory, the goal of the new biomedicine was to apply the discoveries of the laboratory sciences, especially biology and chemistry, to medical knowledge.
One tenet of biomedicine is that even though patients differ, disease is an objective biochemical entity caused by agents such as viruses and bacteria, which produce measurable physiological changes. Certain diseases fit this model extremely well. In 1900, for example, syphilis was diagnosed by identification of a specific group of symptoms that had been known for centuries. In 1905 it was discovered that syphilitic patients—and only such patients—had the bacterium Treponema pallidum in their blood. Within a year a test had been developed to diagnose the disease by testing for the presence of T. pallidum. After it was discovered in the 1940s that penicillin killed the bug, penicillin became the standard method of treatment. Each individual might have his or her own unique experience of the disease, but there is always a clear biological cause. This is the central dogma of modern biomedicine.
The problem was, and is, that most diseases do not work this way. Doctors have long known that the one pathogen–one disease relationship oversimplifies the reality of illness. Even if a specific pathogen is closely associated with a particular disease, the fact that a patient carries the pathogen does not necessarily mean that the patient has the disease. And conversely, therapeutic interventions are rarely as simple as removing a specific pathogen. Most diseases have multiple, complex, or unknown causes.
Probability and statistics emerged to address precisely this sort of uncertainty. Reformers in the American Medical Association pushed for statistical analysis of specific tests of therapeutic interventions only after it was clear that neither laboratory research nor ad hoc case reports from physicians enabled them to reliably distinguish effective treatments from quack cures. Though medical trials have existed in some form since at least the eighteenth century, formal clinical trials, designed to produce data that could be analyzed statistically, were only invented in the twentieth century. By the 1920s and ’30s, statistically trained reformers such as Raymond Pearl, Jesse Bullowa, and A. Bradford Hill claimed they had the ability to interpret experimental results with newfound power and precision. Formal clinical trials gained credibility because medical reformers were able to convince skeptical doctors that, in the absence of decisive laboratory evidence, studies of effects in the aggregate could produce reliable knowledge about the treatment of individuals.
By midcentury, reformers had a list of desiderata for medical trials that are now familiar: tests of therapeutic efficacy should involve comparison, use blinding to reduce bias, deploy control groups and placebos to know how much of the effect can be attributed to the treatment, and rely on statistical tests to determine whether differences in outcomes may have been due to chance. A massive influx of new “wonder drugs” in the 1940s and ’50s—antibiotics, antipsychotics, steroids, and diuretics—provided a platform for, and a test of, the reforms.
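Those desiderata can be made concrete with a small simulation. The sketch below is purely illustrative: the trial size, the two arms' recovery probabilities, and the choice of a pooled two-proportion z-test are assumptions for the sake of the example, not anything drawn from Aronowitz's book. The point is only to show how a statistical test asks whether a difference in outcomes between a treatment group and a control group could plausibly be due to chance.

```python
import math
import random

random.seed(0)  # make the simulated trial reproducible

# Hypothetical trial: 200 patients randomized evenly to drug or placebo.
# The underlying recovery probabilities (60% vs. 45%) are invented.
treated = [random.random() < 0.60 for _ in range(100)]
control = [random.random() < 0.45 for _ in range(100)]

p1 = sum(treated) / 100  # observed recovery rate, drug arm
p2 = sum(control) / 100  # observed recovery rate, placebo arm

# Pooled two-proportion z-test: could a gap this large arise by chance?
pooled = (sum(treated) + sum(control)) / 200
se = math.sqrt(pooled * (1 - pooled) * (1 / 100 + 1 / 100))
z = (p1 - p2) / se
# Two-sided p-value from the normal approximation.
p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

print(f"recovery: treated {p1:.0%}, placebo {p2:.0%}")
print(f"two-sided p-value: {p_value:.3f}")
```

A small p-value is taken as evidence that the difference is not a fluke of randomization, which is precisely the aggregate-level claim the midcentury reformers argued could stand in for decisive laboratory proof.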
Pharmaceutical companies and regulators alike saw the benefits of the new tools. Statistical interpretations of clinical trials appeared to measure drug efficacy. Despite their inherent uncertainty, statistical methods also seemed increasingly authoritative, as probabilistic models spread across a number of scientific fields. By 1972 the Food and Drug Administration clarified that proof of safety (required after 1938) and effectiveness (required after 1962) should both be demonstrated by controlled clinical investigations. The gold standard for evaluating therapeutic interventions was now the statistically interpreted randomized clinical trial.
Over the same period, the longitudinal Framingham Heart Study (initiated in 1948) and the Surgeon General’s 1964 Report on Smoking and Health provided evidence that certain behaviors are so strongly correlated with bad outcomes that even without direct proof of biomedical causality, the government may have an interest in taking public health measures—say, restricting cigarettes and promoting a low-fat diet. These large-scale studies also suggested that health might be redefined through the measurement of weight, cholesterol, blood pressure, and other indicators.
As Aronowitz shows, the statistical interpretation of observational studies and clinical trials has fundamentally changed conceptions of disease and health. We spend so much time and money trying to prevent bad outcomes that, even when we do not have problematic symptoms, we still act as if we are suffering from chronic disease. Being at risk for disease, and having a chronic disease, Aronowitz concludes, entail regimens of constant surveillance and measurement directed not to relieving symptoms but to reducing the risk of feared developments.
Though insurance companies used “risk factors” early in the century to price policy premiums, only after 1950 did it become widespread practice in medicine for certain behaviors or conditions to be labeled as risky because they were correlated with future disease states. Before preventive interventions could become big business, health had to be redefined on the basis of indicators, associations, and risks. That is, medicine had to be made probabilistic.
• • •
Aronowitz is not quick to assign blame for this transformation. While some pharmaceutical companies pushed for the focus on risk and then benefitted financially from sales of preventive interventions, Aronowitz argues that it is actually the tremendous success of medicine—the reduction of mortality and morbidity over the course of the last century—that has contributed most to the knowledge of new risks and the development of new interventions that address these risks.
Risk prevention, however, can be a double-edged sword. While procedures such as prostate-specific antigen screening and mammography have reduced the number of deaths from cancer, they also have well-known costs. False positives can result in unneeded medical care, interventions with their own risks and complications. Screening can be expensive and can exacerbate gaps in health-care quality.
But it is not enough to simply balance the cost and benefit of screening, in Aronowitz’s telling. The most damaging aspect of modern preventive medicine is the burden of living in a risky world: we have to vigilantly screen for potential diseases, and be ready to act aggressively should anything turn up.
Cancer and heart disease provide Aronowitz’s best examples because they fit the model of the risky disease: they are common, well-known, and possibly deadly; their causes are multifaceted and complex; and their courses and subtypes vary widely. Cancer, the subject of Aronowitz’s last book, Unnatural History: Breast Cancer and American Society (2007), is the “ultimate risky disease,” he writes. We are warned about the sundry activities, objects, and foods that make us slightly more susceptible to cancer. (Depending on the study, for example, chocolate, red wine, red meat, and coffee may increase the risk of cancer or steel you against the worst of maladies.) But while screening and organ-ectomies reduce cancer death rates, they also affect far more people than might ever have died from cancer or even known that they had cancer.
Aronowitz persuasively shows how the very term “survivor” shapes the experience of cancer. It effaces differences among a large and diverse group of people, emphasizing the presumed permanent risk of recurrence and demanding unblinking surveillance.
The problem, aside from the psychological and financial tolls this kind of risk management exacts, is that it is not obvious we are any healthier as a result. For example, diagnosing people as having “prediabetes”—a condition of elevated blood sugar—does not help treat any symptoms, of which there often are none. It is solely a state of risk for type 2 diabetes. The diagnosis will usually entail medication and further screening, which have been shown to offer a statistical advantage in avoiding organ damage. The costs can be difficult to ascertain, however, as they involve not just the consequences of constant monitoring, but also the broader effects of resource allocation: money and attention are paid to easily measurable risk adjustment rather than to facilitating difficult lifestyle changes that might improve a patient’s overall health.
At the same time, because the risk-reducing treatment often produces no noticeable effect in itself, patients are unable to judge its effectiveness through their bodily experience. Aronowitz warns that patients consequently become more vulnerable to being manipulated by pharmaceutical companies’ pitches and doctors’ testing recommendations, interventions directed not towards the experience of good health, per se, but towards the largely invisible effect of reduced risk.
• • •
It is hard to know what to do about this state of affairs, but Aronowitz offers some sensible suggestions. For instance, he recommends high-level efforts to limit excessive medical interventions. That is, he wants to see new ways to “prevent prevention” at the national level, perhaps by making it more difficult to promote therapies that act solely to lower future risks, at least until their overall effects on health have been ascertained.
Aronowitz also recommends greater attention be paid to how “global circulation of risk interventions” can skew public health priorities. One of his chapters focuses on the introduction of Gardasil, a vaccine for human papillomavirus (HPV). HPV is believed to play a carcinogenic role in cervical cancer, so Gardasil is both a textbook example of biomedicine—a therapy targeted at a particular disease-causing pathogen—and a drug meant to reduce cancer risk. But while HPV infection is common in the United States, cervical cancer is rare, and Gardasil’s actual effect on cervical cancer rates is unknown. As a result, the therapy was marketed not as a vaccine against cancer but as an expensive way for the well-off to reduce the risks associated with sexually transmissible disease. The therapy was understood on the basis of its social and psychological effects as much as, or more than, its biomedical efficacy, and it is a case, Aronowitz argues, in which “societal wariness regarding another marginally effective, highly profitable risk-reducing product played a large part” in its reception.
Tellingly, HPV vaccines are marketed in rich countries where their impact is mainly directed towards the risks of infection rather than in poor countries where the burden of cervical cancer is much higher and the life-saving potential therefore greater. The consequences of a move to risk-reduction have thus become intertwined with global disparities in health care.
For individuals, one response to the rise of risky medicine is to opt out. Aronowitz approvingly describes colleagues who have declined routine screening for prostate cancer. He relates how he and his spouse, a psychiatrist, used their “physician clout” to refuse ultrasounds during a pregnancy: “we preferred not learning about some abnormality we could not remedy or a false positive finding over the small chance of learning about some risk that we might act upon.” The implicit advice seems to be to do everything we can as patients to ignore the medical industry’s emphasis on prevention and instead focus on the treatment of symptomatic diseases.
Though he is far too careful to do so himself, one might push further. After all, most detectable and treatable serious diseases are rare. The case some parents make for selectively vaccinating their children is not all that different from the case Aronowitz makes for avoiding ultrasounds and other preventive procedures, but most laypeople lack the medical knowledge needed to distinguish real threats from imagined ones. There is indeed a difference between declining an ultrasound and avoiding the measles vaccine, but the trouble is determining where the line should be drawn between unnecessary and necessary therapies. What might be a rational response of individual, medically savvy parents—refusing procedures, tests, or even vaccines in certain situations—could be disastrous at a wider level if herd immunity is compromised or previously rare diseases reemerge. Moreover, this line will almost certainly be drawn on the basis of statistical inference itself: hence Aronowitz’s use of the concepts of “false positive” and “small chance” in his discussion of how to respond to risky medicine. We hardly challenge the state of risky medicine if our decisions about which risks can be safely ignored are themselves based on probabilistic calculations.
More problematic, perhaps, is that we—physicians and patients alike—tend not to be that good with numbers. Medical licensing boards test only basic statistical knowledge and, as research by the psychologist Gerd Gigerenzer has shown definitively over two decades, many doctors still do not know how to calculate the probability that a positively screened person has a disease.
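The calculation Gigerenzer has in mind is a straightforward application of Bayes's rule, yet the result routinely surprises. The sketch below uses illustrative numbers in the spirit of his well-known screening examples; the prevalence, sensitivity, and false positive rate are assumptions chosen for the sake of the arithmetic, not figures from the book.

```python
# Positive predictive value: the probability of disease given a positive
# screen. All three inputs are illustrative assumptions.
prevalence = 0.01           # 1% of those screened actually have the disease
sensitivity = 0.90          # 90% of the sick test positive
false_positive_rate = 0.09  # 9% of the healthy also test positive

# Bayes's rule: weigh true positives against all positives.
true_pos = prevalence * sensitivity
false_pos = (1 - prevalence) * false_positive_rate
ppv = true_pos / (true_pos + false_pos)

print(f"chance a positive screen means disease: {ppv:.0%}")  # about 9%
```

With these numbers, fewer than one in ten positive screens reflects actual disease; the intuitive answer, somewhere near 90 percent, confuses the test's sensitivity with its predictive value, because the rarity of the disease means false alarms swamp true detections.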
But there is also a broader cost to quantifying the uncertainties of health. At the heart of illness and disease are always questions of fundamental, and unanswerable, uncertainty: Am I okay? Will this hurt me? What will happen to my body in particular? A procedure with a 95 percent success rate or a novel therapy evaluated using a meta-analysis of randomized clinical trials may satisfy practitioners, but individuals may not accept that level of uncertainty. This tension has been overlooked as clinical medicine and epidemiology have converged under the banner of “evidence-based medicine” and started using the same statistical techniques to answer questions about both groups and individuals. When large-scale prospective and retrospective studies and expansive clinical trials become the tools of clinical decision-making, it is easy to forget that the calculations individuals make about their own treatment may differ from those made about medicine and health in the aggregate.
Even with a thorough understanding of statistical inference, and in the best case of precise and accurate risk measurements, we will not be able to know whether an aggregate study is relevant in a particular case, whether statistically significant differences are meaningful differences, or whether a near-certain chance of minor complications is worth a small decrease in the likelihood of acquiring a deadly disease. More data will never provide definitive answers to these questions. Statistics, especially those concerning health, do not tell us how they should be understood; they do not come equipped with an explanation. The more statistics are deployed, the greater the need to reemphasize interpretative skills. The realities of insurance claims, of profit motives, and of increasingly technical medical knowledge all inhibit opportunities and incentives for medical personnel to aid patients in making sense of numbers. Trust in practitioners is essential, but so is the recognition that they exist for more than presenting the various risk calculations facing us. We need them to connect aggregate risks to our own individual, unique experience of health and well-being. Living in an era of risky medicine only reaffirms that interpreting statistics remains a deeply qualitative, and fundamentally humanistic, task.