Why Trust Science?
Naomi Oreskes, with Ottmar Edenhofer, Martin Kowarsch, Jon A. Krosnick, Marc Lange, Susan Lindee, and Stephen Macedo
Princeton University Press, $24.95 (cloth)

Why ask “Why trust science?” When many people worry about the safety of genetically modified food, parents resist the advice of pediatricians to vaccinate their children against common childhood diseases, religious people still take the earth to be less than 10,000 years old, and the president of the United States declares climate change to be a hoax perpetrated by the Chinese, public trust in scientific research would seem to be in dire straits. The time is ripe for reassurance. Even to pose the question indicates that something has gone wrong.


The first thing to say, though, is that not all these failures of trust are equally significant. Doubts about the scientific consensus on the history of life on Earth, for example, are not especially troubling in themselves. It would be better, no doubt, if schoolchildren learned how geological evidence expanded the timescale of planetary history half a century before Darwin published On the Origin of Species. Yet even if scientific education fails in these respects, it is not one of the world’s greatest tragedies; for much of everyday life, behaving as if the world were only a few thousand years old makes little difference. Failure to act to mitigate the effects of climate change is another matter entirely.

What’s more, distrust of science is not a simple and homogeneous phenomenon. Recent surveys reveal that public confidence in science is actually increasing. They also show that particular kinds of scientists—doctors and other practitioners with a public face—are seen as more trustworthy than remote researchers, even as suspicions proliferate about the sources of supposed scientific knowledge, whether private corporations or the government. On the other hand, even though people in the English-speaking world—historically, the most skeptical about climate change—are now more likely to agree that the Earth is warming, their change of mind has not been brought about by any growing appreciation for climate research. They seem to have been convinced, instead, by the profusion of reports describing extreme weather events—even though cautious scientists would be reluctant to attribute the many floods, droughts, wildfires, storms, and heat waves of recent news to global heating.

We need to know why to trust science, then, in part because we need to know why to believe the scientific consensus on climate change, and Naomi Oreskes is the obvious person to provide the answer. Her new book takes up the question explicitly; it grows out of her Tanner Lectures on Human Values, delivered at Princeton in late 2016, and includes commentaries by a historian, a philosopher, and three social scientists.

A distinguished historian of science, Oreskes began her career in the earth and atmospheric sciences, and her scholarly books on the history of geology have won wide acclaim. A talented writer for technical and general audiences alike, she has devoted much of the past decade to studying skepticism about anthropogenic climate change. Her influential book Merchants of Doubt (2010), written with the historian Erik M. Conway, gave a lucid and accessible demonstration of the ways powerful special interests have sown confusion about the dangers of tobacco and second-hand smoke, acid rain and the ozone hole—just as they now attempt to rally the troops to oppose the scientific consensus on climate change. In more recent work she has elaborated her account of the climate controversy and urged immediate action to address the threats posed by a warming world.

In this project, restoring public trust in climate science would be a helpful step forward. Yet that is only part of what our planet needs if it is to sustain human life. Even if all the world agreed on the reality of anthropogenic global warming and on the gravity of the consequences for life on our planet, further difficult questions would arise. How are the needs of future generations to be balanced against the sufferings of people living today? How exactly are the potential perils of a seriously heated earth to be avoided? How are the burdens and costs to be distributed? How is the international cooperation required to be forged and sustained? As Evelyn Fox Keller and I argue in The Seasons Alter (2017), all these questions need to be posed, distinguished, and answered if the human population is to extricate itself from the mess some of its members have made (often unwittingly, though today in full consciousness).

It would surely be easier to tackle them, though, if we stopped bickering about the causes and effects of climate change—the science that has been settled by consensus. We should be grateful, then, for a good answer to Oreskes’s question. It might also deliver, as a bonus, happily vaccinated children, shoppers who do not automatically flinch at the thought of food containing GMOs, and citizens who appreciate the Darwinian view of life.


Oreskes’s answer appears in a schematic and abbreviated form near the end of her first chapter. Two features of science, she claims, account for its trustworthiness: its “sustained engagement with the world” together with “its social character.” Her emphasis on the second feature may surprise readers used to thinking of science as a tidy epistemic enterprise neatly insulated from social influence, but this view emerges clearly from her sober review of studies of science by historians, philosophers, sociologists, and anthropologists during the past half century.

For Oreskes, the objectivity of science is consensual: it depends on critical debate within a diverse community of investigators.

She reviews that history in her first chapter. The narrative she presents moves firmly away from the quest to identify a particular “method” that makes science trustworthy, toward an emphasis on the distinctive “collective” dimension of science. Before the 1950s, logical empiricist philosophers—taking up the “dream of positive knowledge,” as Oreskes calls it, that originated with the French sociologist Auguste Comte in the nineteenth century—tried to articulate a precise account of scientific method, emphasizing the importance of testing scientific theories against empirical observations. Historical studies of scientific practice, including early writings by the physicist Pierre Duhem and the microbiologist Ludwik Fleck as well as later work by the historically oriented thinkers Thomas Kuhn and Paul Feyerabend, revealed the inadequacy of those attempts. Feyerabend’s most famous book, in fact, is called Against Method (1975).

Those studies, in turn, paved the way for a sociology of scientific knowledge, whose initial thrust—in the work of Barry Barnes, David Bloor, and Steven Shapin, which came to be known as the Edinburgh school—was often read as suggesting that the beliefs advanced by scientists were no more credible than those maintained by anyone else. Eventually, however, more subtle proposals emerged, fueled by feminist scholars such as Helen Longino. These proposals restored the objectivity of science, Oreskes writes, by viewing it as a collective achievement. She praises feminist work on science, in particular, for showing that the objectivity of scientific research depends on critical debate and exchange within a diverse community of investigators. As she puts it, “diversity does not heal all epistemic ills, but ceteris paribus a diverse community that embraces criticism is more likely to detect and correct error than a homogenous and self-satisfied one.”

Oreskes is right to celebrate these social studies of science, and she is rightly guarded in her acceptance of what they have offered. This work has indeed rebelled against oversimplified claims that evidence and reasoning suffice, on their own, to bring about scientific consensus. Yet rejecting this view of the scientific method has sometimes led scholars to replace one bad picture with another. The disembodied impartial observers and rigorous thinkers in narratives of old—members of Francis Galton’s secular priesthood, perhaps—give way, in some newer accounts, to unthinking networkers pursuing, in Bruno Latour’s famous phrase, “politics by other means.” Oreskes doesn’t go all the way in dismissing the cognitive and epistemic explorations of researchers; her scientists haven’t yet turned into zombies, mere vehicles for the mindless transmission of social and political forces.

Yet to my mind, she adopts a diluted version of what feminist thinkers such as Longino have to offer. Bringing together a diverse group of people is not likely to achieve very much if their exchanges take the unproductive forms apparent in many clashes about “the facts,” debates in which some parties often seem impervious to evidence. Unless collective investigation uses reliable methods for gathering evidence and for analyzing the findings, it can easily end in an impasse. Surely, part of the story must explain just how “sustained engagement with the world” is supposed to go—and how exactly that engagement interacts with the social character of science. About this, Oreskes says too little.

Or, more exactly, she says too little in those passages in which her discussions operate at the general level—where she is trying to say how science, in all its healthy forms, works to earn public trust. Excellent historian that she is, she provides lucid and convincing case studies of how particular pieces of scientific research—from critiques of continental drift to eugenics and the relation between hormonal birth control and depression—go awry and how, precisely, they diverge from those that go well. Indeed, throughout the book she elaborates on aspects of “engagement with the world” and social exchange that tacitly give more substance to her abstract answer. Moreover, she qualifies the schematic thesis I have cited with some thoughtful “caveats,” as she calls them, and qualifications.


Still, her general account, including all the caveats and codicils, leaves it quite unclear how she thinks the mental life of scientists actually goes. None of this is likely to provide an adequate answer to the skeptic who takes Longino’s appeal to diversity at face value and wonders why an even more diverse community—say, one containing climate deniers—wouldn’t be still more objective, and why it shouldn’t operate according to the rules of intellectual exchange that those who challenge climate change prefer. Unless more is said to explain how the cognitive and the social fit together, a precise answer to the title question proves elusive. In recognizing the inadequacies of earlier attempts to address methodological questions, without considering in full detail how the new perspective might supply better answers, Oreskes deprives herself of the resources to complete her central project. The commentaries on her chapters offer helpful clues for developing it more precisely.


Many people would approach Oreskes’s question very differently from the way she has chosen—placing more emphasis on how scientists engage the world, rather than how they work collectively to produce knowledge. Some, for example, would follow the historian Susan Lindee’s suggestion in her reply to Oreskes: science is trustworthy just because it works. “Every day we move through systems built from scientific knowledge,” she writes. “Many beloved and highly trusted technologies of everyday life are the direct result of legitimate and trustworthy scientific research.” Lindee takes the quotidian example of frozen peas as her running object lesson in trust.

Oreskes contends this defense of science won’t do. The history of science, she points out, is full of claims that particular hypotheses and theories are successful, and thus on the track of truth. The vast majority of those hypotheses and theories are, by current lights, false—we might even say, radically false. Hence, we should be loath to suppose that our own scientific beliefs, successful as they appear to be, will endure. Very probably, our successors will regard them as error-ridden.

Many ways of responding to this argument have emerged in recent decades. For the purposes of evaluating the trustworthiness of science, the best is to take a pragmatic approach to the historical record. Whether or not the hypotheses of the past turned out to be correct, those who adopted them on the basis of their successes—the problems they helped to solve and the predictions they helped to make—were entirely warranted in doing so. As are we in our similar situation today. A thoughtful scientist, reminded of the track record of inquiry, might reply to Oreskes as follows: “I rely on hypotheses that show themselves to be successful and put them to work in my efforts to solve further problems. I treat them as true. Of course, I’m aware that I may be wrong about this. My adoption of them is provisional. But taking them seriously and using them as I do—treating them as true—is a good strategy. For I see two possible outcomes. Maybe I shall be lucky and the hypotheses will endure as part of science in the indefinite future of human inquiry. Or maybe they will be replaced by something superior. Yet the history of science also shows that superior theories tend overwhelmingly to emerge from attempts to push apparently successful hypotheses as far as possible. Precisely because people take them seriously—treat them as true—they serve as stepping stones to better science. You can now see why pursuing this strategy is a good one: heads I win, tails I win.”

If our scientist is also up to date on recent discussions of values in science—perhaps she has read Heather Douglas’s Science, Policy, and the Value-Free Ideal (2009)—a qualification or two may follow. “There are, naturally, limits to the use I make of hypotheses I endorse. Sometimes applying a particular claim might have considerable impact on human welfare. If I were wrong, people’s lives might be severely damaged. It’s important, therefore, that I consider the contexts in which past successes have been achieved, not extrapolating too far in transferring ideas into domains where the application is risky in this way and the course of action unprecedented.” (In this, our scientist has appreciated a point emphasized by two social scientists, Ottmar Edenhofer and Martin Kowarsch, in their commentary: scientific decisions should sometimes be based on cost-benefit analysis.)


But any attempt to defend the trustworthiness of science along these lines has further work to do. In particular, it has to explore what counts as success. Here, the vision of science as a collective venture—meaning not just one that involves critical exchange, but also one that is integrated into policies that affect human lives—must enter. Oreskes’s emphasis on the social dimensions of scientific work has an important place in the account. Success consists in solving particular problems. The scientist’s judgment of the importance of that problem is not, however, the last word. Ultimately, as I argue in Science in a Democratic Society (2011), problems and solutions make for success if they advance the wider interests of humanity.

Problems lie on a continuum. Some are highly theoretical, understood by and of concern to a tiny group of specialists. Others arise in daily research practice, involving the concrete details of experimental systems. Yet others bear directly on how large numbers of people, the vast majority of them non-scientists, pursue their everyday activities. As Lindee recognizes, successful science undergirds the way we live now. The areas of scientific research most obviously connected with quotidian existence are—unsurprisingly—the ones in which success breeds trust. More remote parts of science—the “speculations” of high theory, some would say—inspire less confidence. “People love and trust technology,” Lindee concludes. “The flow of prestige and legitimacy ‘down’ from science to technology—the flow of trust, viability, proof of value, of ‘working’—should perhaps be transposed, for the good of science and the good of the world.”


Lindee’s insight—that science is trustworthy in part because of how it engages the world—can be supplemented by the approach adopted by Marc Lange in his reply to Oreskes. Lange is a sophisticated philosopher who understands the difficulties besetting the logical empiricist attempt to identify one precise method unique to scientific inquiry. Yet he wants to find a place for the role of reliable techniques—the practices of gathering observations and constructing chains of reasoning—in buttressing the trustworthiness of the sciences. In his approach, the singular gives way to the plural: stop talking about scientific method, he contends, and look instead to the valuable methods different areas of inquiry employ.


He is up against a powerful myth. For generations children have begun their study of science by reading about something called “the scientific method.” It is supposed to have been introduced in the seventeenth century, and is credited variously to Francis Bacon, René Descartes, Galileo Galilei, Robert Boyle, and Isaac Newton. Scientific inquiry begins by formulating a hypothesis, the caricature goes. It then proceeds to collect observations or experimental data to test that hypothesis. In the light of the empirical results, the investigator obeys the verdict of nature. If all goes as predicted, the hypothesis is retained; if not, it is discarded.

I myself learned early that this cannot be the whole story. Sadly, my brilliant experimental demonstrations of the falsehood of almost all the physical principles I was taught in high school went unappreciated by my teachers. They simply told me to go back and do the experiment properly. I rarely succeeded.

What are we to make of this humdrum occurrence, familiar to anyone who has ever tried to carry out an experiment? As Oreskes clearly explains, a negative message from nature can usually be interpreted in alternative ways. Perhaps the observations were sloppy, or the apparatus was set up incorrectly, or some background assumption was to blame. Sixteenth-century Copernicans were challenged to explain why the apparent positions of the fixed stars—whose seasonal shift astronomers call stellar parallax—do not seem to vary at different times of the year, even though, according to their heliocentric views, the Earth observes them from different positions in its orbit. They might have replied that the angular measurements were unreliable. But Galileo took the alternative approach of disputing a background assumption, contending that the universe is far larger than it had been taken to be, and that, because the fixed stars are so far away, stellar parallax is undetectable.

The high school textbook’s caricature of scientific method is not just bad philosophy, entirely inadequate to account for scientific practice. It is also bad history, with tenuous links to the growth of early modern science. The methodological pioneers of the early modern era differed sharply in their views about how proper inquiry is to be carried out. Moreover, it was already obvious in the seventeenth century that their official suggestions were unhelpfully vague; the polymath Gottfried Leibniz famously lambasted the vacuity of the Cartesian “method.” (Speaking of the rules Descartes set for himself in the Discourse on Method, he wrote: “They are like the precepts of some chemist: take what you need, do what you should, and you will get what you want.”) Where, then, did the myth of a scientific method come from?

The imprecise injunctions we associate with the Scientific Method—collect data systematically, try to apply mathematics to natural phenomena, engage in thought-experiments and real experiments alike—were gradually rendered more definite in concrete investigations. Throughout the seventeenth century, investigators, inspired by these vague methodological suggestions, singly and in combination, framed and pursued inquiries into the motions of bodies, the propagation of light, atmospheric pressure, the retention and dissipation of heat, and other aspects of the physical world. As their efforts proved successful, issuing in reliable knowledge of natural regularities, they also delivered specific techniques for investigation and for drawing conclusions from the results obtained. Learning more about nature, they learned more about how to learn about nature.

In the intervening centuries, their successors have built on these pioneering efforts. Through exploration of an ever wider range of questions, those initially vague ideas about method have been made more definite, extended in diverse ways in different domains. Today’s young scientists learn the methods of the field in which they specialize. They are taught in “methods” courses whose contents diverge radically: particle physicists, geneticists, paleontologists, and neuropsychologists all need their own distinctive training—not just because they study different objects, but because they study them in different ways. If the skills they acquire are subsumed under a single conception, it inevitably collapses to the vagueness of the seventeenth-century proposals.

The idea of a monolithic Scientific Method is mythical, then, but it is based on a genuine historical insight. From the early seventeenth century to the present, there are long chains of divergent development connecting the initially imprecise ideas of those we call the “founders of modern science” to the diversity of methods now used in various fields of research. To trace that history is to recognize how different ways of making the original suggestions more definite yielded recognizable successes, inspiring further extensions of the techniques used, a long process of revision, refinement, and reform, out of which ever more powerful techniques for pursuing a wider range of questions gradually emerged.


Consider now what all these reflections mean for climate science. Why should we trust it? Not simply because a diverse community has engaged in some unspecified way with the world. Rather, because earlier successes in understanding heat transmission from the sun to the Earth, as well as the early recognition that not all the heat radiated from our planet’s surface escapes into space, have combined with reliable techniques for measuring the concentrations of various gases in the atmosphere and for calculating the temperatures of many places at different times in the past to show two important things. First, that there is a correlation between the concentrations of greenhouse gases and the Earth’s mean temperature. Second, that the correlation can be explained only by taking human activities to be responsible for the observed warming trend.

Reasonable doubt about the conclusion can focus on any of the elements of the evidence I have sketched. Why should anyone believe that carbon dioxide concentrations make a difference to the planet’s temperature? Because reliable techniques for predicting and controlling the transmission of heat, largely worked out in the nineteenth century, directly yield the basic picture of the greenhouse effect. Why should anyone think that past temperatures can be measured? Again, because reconstructions from tree rings and ice cores rest on claims that have successfully generated reliable results across a wide domain. Why should we suppose that the researchers involved have applied the techniques carefully in their reconstructions of the Earth’s past climate, or considered all the potential explanations of the correlation between temperature and concentration of greenhouse gases? Because they belong to a diverse community in which data and lines of reasoning are constantly scrutinized, and within which there are large rewards for showing that some consequential piece of current orthodoxy is mistaken. Why shouldn’t you believe that climate change is a cleverly organized hoax? Because in a community of that kind, there is no way of bringing it off. To do so would be analogous to enforcing uniformity throughout a large and sprawling empire.

We see, then, that part of the answer to the question of trust—as it arises in climate science or in any other field—turns on social facets of the scientific community, the aspects Oreskes emphasizes. Yet that dimension must be supplemented by recognizing the distinctive ways that scientists engage with the world, the rules of evidence they deploy in their deliberations and interactions, the techniques on which they agree, and the ways that evidential standards and research skills are grounded in a history of successful practice. Although there is no such thing as Scientific Method, unless it is simply a vague collection of discordant ideas utterly irrelevant to the day-to-day practice of today’s science, there are scientific methods, products of a long history of inquiry, forged in strenuous efforts to solve problems. When considering the trustworthiness of science, those methods are crucial.

Yet there may still be reasons for questioning this cheery story I have told. The fourth commentary on Oreskes’s chapters, offered by the psychologist and political scientist Jon Krosnick, worries that current research practices, especially in biomedical and psychological research, are not what they should be, beset as they are by what has been called a “replication crisis.” Scientists are rushing into print, reporting results that others cannot reproduce. The reliability and regularity in the account I have given of scientific success have begun to look doubtful.


In an incisive reply to Krosnick, Oreskes offers an important perspective on these recent troubles. She explains how some of the studies claiming to show a pervasive problem of replication failure have themselves proven to be (methodologically!) flawed. She rightly emphasizes the uncertainties in identifying a “crisis” even for the areas of research most deeply affected by problems of replication. She distinguishes various factors that might cause an inability to replicate. Commentators tend to jump too quickly from the failure of expert colleagues to replicate an investigator’s findings to an accusation of fraud. The difficulty of specifying all the conditions of the successful experiment is a commonplace of recent studies of science. Expand the “methods” section of the paper as you will, it will prove difficult, if not impossible, to list all the details about what you have done. Indeed, you may be quite unaware of some of the relevant factors. Particular conditions in the laboratory, or particular local conventions for performing some procedure, can make all the difference to the ability to replicate.

On the other hand, the pressures on young researchers today—to establish themselves by publishing quickly, to obtain support for their work when budgets for science are being slashed—may well lead them to seek shortcuts. Nobody knows how many investigators send off their papers earlier than they would have wished. What we do know, as Oreskes lucidly points out, is that replication difficulties beset particular fields, perhaps those in which competition for funds is most intense or in which experiments are most sensitive to a range of potentially perturbing factors. Given the state of the evidence now available, it would be wise to refrain from premature distrust, and to investigate, as carefully as possible, the causes of trouble. In particular, generalization about science—as if it were a single enterprise, governed everywhere by that mythical Method—should be resisted. We might note that many fields have few, if any, retractions. Ironically, climate science fares relatively well in this respect. In her measured diagnosis of failures of replication, Oreskes is at her best.

Techniques for generating stable solutions to problems emerge unevenly across the domains we group together as the natural and the social sciences. The more intricate the systems under study, and the more variables are potentially in play, the greater the difficulties in recognizing and controlling them. Instead of declaring a replication crisis or leaping to denunciations of an epidemic of fraud in the lab, it would be better to explore the limitations of the orthodox methods in the fields in which replication appears most difficult. Only in the light of a clear specification of reliable techniques is it reasonable to speak of lapses from a community’s methods as “sloppy science.” (Deliberate fraud, when it occurs, is quite another matter.)


In the end, then, we should trust science when it is pursued as a collective enterprise, subject to standards recognized by the practitioners, and when the standards are derived from reliable results. Properly conducted research conscientiously uses techniques of observation and experimentation that have generated recognizably stable successes, and analyzes the results using methods that have been shown to work. Since the seventeenth century, to different extents in different fields, domains of research have acquired a rich corpus of such methods and techniques. That corpus is transmitted to young investigators in their training. It guides their subsequent research, and it supplies the standards against which their activities should be measured. As they pursue their particular projects, their mentors, colleagues, and rivals hold them to those standards.

And so the collection of solved problems grows. Physicists become able to make extraordinarily precise predictions about the behavior of elusive particles, chemists develop new techniques for reliably synthesizing compounds, biologists read and even modify the genomes of organisms, and atmospheric scientists predict with considerable accuracy how increases in the concentration of greenhouse gases will affect the frequency and intensity of various types of extreme events. Successes of these kinds are sometimes translated into products that affect our daily lives: computers and lasers and new drugs and robots—and frozen peas. When the reliability of those results is readily apparent—as in the examples with which I began: the safety of some GMOs, the importance of vaccination, the great age of the earth, and the reality of climate change, caused by human activities—withholding trust is out of place.