The COVID-19 pandemic seems to take every public problem—vast social inequality, political polarization, the spread of conspiracy theories—and magnify it. Among these problems is the public’s growing distrust of scientists and other experts. As Archon Fung, a scholar of democratic governance at Harvard’s Kennedy School, has put it, the U.S. public is in a “wide-aperture, low-deference” mood: deeply disinclined to recognize the authority of traditional leaders, scientists among them, on a wide range of topics—including masks and social distancing.

As the world continues to struggle through waves of disease, many long for a public more inclined to listen to scientific experts. But getting there does not require returning to the high-deference attitude the public may once have held toward experts. Turning back the clock may well be both impossible and undesirable. In a way, a low-deference stance toward experts and authorities is just what a well-functioning democracy aims at.

There is a deep puzzle here for science and policy-making. Complete rejection of expertise not only makes little epistemic sense (for there is no doubt that expertise exists); the complexities of the modern state also make trust in others’ expertise indispensable. On the other hand, unqualified deference to those in positions of power and privilege vitiates the basic principles of democracy.

How do we reconcile these facts? A recent report I helped edit at The Hastings Center, a nonpartisan ethics research center, proposes one possible path forward: building robust institutions for “civic learning.” On this view, the way to restore public trust in science is to empower citizens to become critical consumers of expertise by providing meaningful opportunities to deliberate about issues, make decisions, and shape policy. This vision joins up with other calls recently made by scholars of ethics, science, and law for “systems that can provide for open and inclusive decision making in an institutionalized manner rather than as ad hoc efforts”—systems that might take such participatory forms as citizen panels, advisory councils, public hearings, and other fora.

Realizing the promise of this vision for civic learning and public deliberation will take work on many fronts, given the overlapping causes of our current crisis of expertise. Some of these are large, structural issues: widening economic inequality, political polarization, and the nature of social media. When people feel that they have been left out, they can come to believe—indeed, they can have good reason to believe—that the cards are stacked against them; one consequence is that they are less likely to trust what they are told and more likely to fall for disinformation and conspiracy theories.

But beyond these structural matters, there are also certain views that stand in the way of more robust democratic engagement with expertise, including views about the nature of scientific knowledge. If the COVID-19 pandemic has taught us anything about scientific expertise, it is that any effective program of civic learning will have to take on popular but deeply misleading narratives about how science works.


Essential to revising these narratives will be building a more mature skepticism about truth claims and objectivity—one that appreciates the inevitably social and political aspects of scientific practice, especially when science shapes policymaking during high-stakes crises such as the COVID-19 pandemic, without at the same time suggesting that reliable scientific knowledge is impossible.

Philosopher and sociologist of science Bruno Latour discussed this challenge in a widely cited paper published in 2004 in which he described the misuse and misunderstanding of the social constructivist view of science he had helped develop. On the constructivist view, facts are created by networks of scientists talking and arguing with each other, creating the technologies and institutions that make new understandings possible and working toward shared understandings. The view is sometimes taken to mean, however, that facts are simply made up. Latour thought his work had been misinterpreted this way, and he feared that the misinterpretation had filtered out into the wider world, fostering public skepticism about truth claims as it went.

Whether Latour was right about the impact of this particular field of scholarly work on public distrust would be difficult to prove, given the many social and economic factors in play. But scholarly skepticism about truth and objectivity has hardly been limited to the sociology of science; a great deal of work in the humanities and social sciences has long raised broadly similar concerns. Philosophers from various traditions—including but not limited to those lampooned as “postmodernist”—have cast doubt on the idea we can describe the world as it really is, independent of our ways of talking about it. Political scientists have noted, as Deborah Stone put it in her influential book Policy Paradox: The Art of Political Decision Making (1988), that “facts do not exist independent of interpretive lenses, they come clothed in words and numbers.” Psychologists and economists, for their part, have produced reams of literature on the many cognitive biases that affect our thinking, even our perceptions.

Even my own fairly down-to-earth field of bioethics, which tends to eschew grand questions about truth, has highlighted the slipperiness of facts. Fifty years ago, in the first article of the first issue of the Hastings Center Report, the late Daniel Callahan, one of the cofounders of bioethics, mused that “what we choose to call a fact is strongly conditioned by our interests and biases. Whoever said ‘You can’t argue with facts’ had not been reading any scientific journals.” If ideas have consequences, then this broad movement within the humanities and social sciences may have played some role in fostering public skepticism about science. At any rate, addressing the skepticism must start by accepting the broad and long-term scholarly problematizing of truth and objectivity. We cannot insist on any simplistic, unqualified deference to “scientific expertise.”

How are we to resolve this problem? “The good thing about science,” astronomer and popular science communicator Neil deGrasse Tyson likes to say, “is that it’s true whether or not you believe in it.” And the proof that it’s true is that it works—as shown by smartphones, those constant reminders of successful science.

But this can’t be the right answer, in part because not everything that masquerades as science is science; Tyson’s reply doesn’t go far enough in helping us to identify which is which. Nor will the reassurance that science is true whether or not we believe in it assuage the victim of climate change, who would like her fellow citizens to act on what the climate science is saying. Tyson recognizes as much; his vision is ultimately one of deference to science (even if not to scientists)—a world in which, as he put it in an interview this week in the New York Times Magazine, “science would reign supreme once again.”

Thus we are back to the problem of deference. The truth is that many people believe that scientists are biased, whether because climate scientists exchange impassioned emails about how to present and interpret their data or because medical researchers tend to say and do what Big Pharma wants. At the same time, many people are also told that science proceeds according to textbook caricatures of “scientific method”: blocking out one’s biases and seeing the world as it really is. The facts must speak for themselves, and the good scientist learns to hear them without letting prejudices and preconceptions get in the way. Tyson’s celebration of science tends in this direction. Science deserves deference in part, he argues, because its practitioners self-consciously render themselves free of bias, indeed even of emotional investment:

The fact that scientists are human like everybody means that there is a susceptibility to bias. The difference is the scientist is supposed to have good self-awareness of that bias so that they can check for it. You ask yourself, Do I have an urge for this experiment to come out one way or another? We are trained to invoke, as far as we can see, analysis of bias. So science may be the most honest enterprise humans have ever constructed. So about myself: I always try to check to see if I have bias. You know how to reduce bias? You don’t invest emotions.

If only it were that easy!

If we chase this circle of ideas, what we end up with—a split-screen view of objective science and biased scientists—is a trap. On an exalted view of science, actual scientists can never measure up. If they are just like the rest of us, of course they have filters and biases (and indeed ones that they cannot perfectly eliminate). But if they work with filters and biases, then on this view they are frauds, and the legitimacy of scientific knowledge is vitiated.

The way to square this circle is to acknowledge that what objectivity science is able to deliver derives not from individual scientists but from the social institutions and practices that structure their work. The philosopher of science Karl Popper expressed this idea clearly in his 1945 book The Open Society and Its Enemies. “There is no doubt that we are all suffering under our own system of prejudices,” he acknowledged—“and scientists are no exception to this rule.” But this is no threat to objectivity, he argued—not because scientists manage to liberate themselves from their prejudices, but rather because objectivity is “closely bound up with the social aspect of scientific method.” In particular, “science and scientific objectivity do not (and cannot) result from the attempts of an individual scientist to be ‘objective,’ but from the friendly-hostile co-operation of many scientists.” Thus Robinson Crusoe cannot be a scientist, “For there is nobody but himself to check his results.”

More recently, philosophers and historians of science such as Helen Longino, Miriam Solomon, and Naomi Oreskes have developed detailed arguments along similar lines, showing how the integrity and objectivity of scientific knowledge depend crucially on social practices. Science even sometimes advances not in spite but because of scientists’ filters and biases—whether a tendency to focus single-mindedly on a particular set of data, a desire to beat somebody else to an announcement, a contrarian streak, or an overweening self-confidence. Any vision of science that makes it depend on complete disinterestedness is doomed to make science impossible. Instead, we must develop a more widespread appreciation of the way science depends on protocols and norms that scientists have collectively developed for testing, refining, and disseminating scientific knowledge. A scientist must be able to show that research has met investigative standards, that it has been exposed to criticism, and that criticisms can be met with arguments.

The implication is that science works not so much because scientists have a special ability to filter out their biases or to access the world as it really is, but instead because they are adhering to a social process that structures their work—constraining and channeling their predispositions and predilections, their moments of eureka, their large yet inevitably limited understanding, their egos and their jealousies. These practices and protocols, these norms and standards, do not guarantee mistakes are never made. But nothing can make that guarantee. The rules of the game are themselves open to scrutiny and revision in light of argument, and that is the best we can hope for.

This way of understanding science fares better than the exalted view, which makes scientific knowledge impossible. Like all human endeavors, science is fallible, but still it warrants belief—according to how well it adheres to rules we have developed for it. What makes for objectivity and expertise is not, or not merely, the simple alignment between what one claims and how the world is, but a commitment to a process that is accepted as producing empirical adequacy.


As Solomon argues in an essay in the report I helped edit, disseminating this more mature view of science is necessary if we are to cultivate public awareness of the difference between good and phony science. Doing so could also help to reshape public perceptions about the social standing of scientific experts. The fact that scientific truths are a matter of social agreement brings them down to earth. Where the Tysonian view that science is true whether you believe it or not may be heard as aggressive, self-congratulatory, or dismissive of criticism, the norms of science point toward humility, openness to challenge, and a recognition of one’s own social embeddedness. Science is certainly a special way of producing information, but it is special not because it is free of bias. It is special because of its rigorous processes for producing and vetting agreement.

The COVID-19 crisis has offered a real-time demonstration of these aspects of scientific practice. We have all been watching as scientists collectively produce and sort through information about the course of the illness, how the virus is transmitted, which measures we can take to prevent it, and possible treatments for the disease. The process is messy; results don’t always agree; bad information gets airtime; certain studies are more reliable than others; some scientists will change their minds as new information comes to light. What ultimately matters are the methods for articulating and exploring the uncertainty.

Even the mistakes that experts have made along the way, such as the early report that there was no asymptomatic transmission and the World Health Organization’s long-lasting position that there was very little airborne transmission, can provide public insight into the nature of science. If scientists own up to these mistakes, they can reveal the give and take—the fundamental testing and tentativeness—on which science depends. And by understanding this process, we can all better assess and appreciate the contributions of scientific experts: by seeing whether they are adhering to the norms of science, we can make more accurate judgments about which of them are trustworthy.