Science is under fire as never before in the United States. Even amid the COVID-19 pandemic, Donald Trump and his Republican allies dismiss the findings of health experts as casually as those of climate scientists. Indeed, conservatives sometimes portray scientists as agents of a liberal conspiracy against American institutions and values. Since the 1990s GOP leaders have worked to limit the influence of scientists in areas ranging from global warming to contraception to high school biology curricula.

But it is not just conservatives who question scientific authority in the United States. Alarm at many applications of biological research, for example, crosses party lines. This impulse usually targets genetic engineering and biotechnology, but it also fosters skepticism toward vaccination and other medical practices. Across the political spectrum, citizens tend to pick and choose among scientific theories and applications based on preexisting commitments. They are frequently suspicious of basic research procedures as well; many believe that peer review and other internal policing mechanisms fail to remove powerful biases. Conservatives often charge that peer review enforces liberal groupthink, while some progressives say it leaves conventional social norms unexamined.

Even as individuals, scientists face growing skepticism. Concern about scientific misconduct is widespread, and most Americans doubt that the perpetrators face serious repercussions. Significant numbers trust the experts who apply knowledge more than those who produce it. And such suspicions are especially strong among Black and Latinx Americans—largely Democratic constituencies—as well as among Republicans. Viewing these patterns, many scientists fear that they now live in a “post-truth” world where much of the citizenry has turned against them. The March for Science movement launched in 2017 represents an unprecedented mobilization of rank-and-file researchers against perceived cultural and political threats to the scientific enterprise as a whole.

As the 2020s dawn, it is crucial to understand the sources and contours of this skepticism toward science and scientists. We stand on the brink of revolutions in fields from biotechnology to robotics to computing, even as global warming accelerates. As a result, arguments over science underlie some of our most divisive and consequential policy debates. From climate change to fracking, abortion to genetically modified foods—and much else besides—contemporary political battles generate disputes over the legitimacy of scientific theories, methodologies, institutions, concepts, and even facts. In this context, scholars, citizens, and policymakers must think carefully about science and its cultural and political ramifications. The prevailing views on these matters will significantly determine our future—and perhaps even our survival as a species. And to understand why science is so widely distrusted in the United States, it is essential to understand how that attitude has arisen.

One might start with the political influence of theologically conservative Christians in recent decades. Since Ronald Reagan’s election in 1980, a fraught but durable coalition of free-market advocates and Christian conservatives has anchored the Republican Party. The Christian Right has targeted myriad scientific theories and innovations as part of its “culture war” against modern liberalism. Today its power is such that Republican leaders routinely speak out against “secularism,” in such varied guises as abortion rights, strict church-state separation, and Darwinism in the schools. Theological conservatives also tend to reject climate science, viewing environmentalism as a dangerous, socialistic religion.

Yet the rise of the Christian Right cannot fully explain phenomena such as the breadth of antivaccination sentiment and concerns about genetic engineering. A second narrative, common among working scientists and scholarly interpreters, holds that a broad-gauged revolt against science took place in the wake of the 1960s. That period brought not only the conservative backlash but also a host of countercultural impulses, including New Age spirituality and belief in UFOs, astrology, and the paranormal. The era’s political movements also fueled opposition, as a new generation of critics identified science as an ideological tool of the establishment. Plummeting levels of trust in institutions, especially after Watergate, implicated science as well. At the level of research funding, meanwhile, the 1970s brought tighter budgets, new layers of bureaucratic procedure, and intense pressure to generate immediate, practical outcomes.

We thus have a number of ready explanations for science’s contemporary travails. But there is much more to the story than these familiar impulses. Skepticism toward science was hardly new in the 1970s, despite its changed forms and heightened impact on research funding. Going back to the 1920s, in fact, prominent groups of Americans have also challenged scientific authority in a different way, decrying its moral implications and ascribing to it a host of negative social effects. Ever since World War I, many critics who accepted Darwinian evolution have nevertheless identified science as a dangerous cultural presence that causes profound moral harm. They have argued that science advances a faulty view of human persons and human relations, injecting a pernicious social philosophy into the cultural bloodstream. To fully explain today’s distrust of science, we must account for the longstanding fear that it authorizes false and damaging understandings of who we are and how we behave. Often this response has focused on broad philosophical frameworks associated with science, but the methods and findings of the social sciences have also drawn considerable criticism, as have extrapolations from biology to human behavior.

By now, such charges have reverberated through American public culture for a century, and they have hardly been confined to theological conservatives. Since the 1920s many other critics have argued that science poisons the wells of culture, although these groups have typically traced the offending moral framework to the social sciences or naturalistic philosophies associated with science rather than Darwin’s theory. This style of argumentation spread especially widely after World War II, reorienting public images of science as it did. In the 1950s and early 1960s, a remarkably broad array of mainline Protestants, humanities scholars, conservative political commentators, and even establishment liberals joined theological conservatives in arguing that science represented a moral, and even existential, threat to civilization. They often employed classic tropes from nineteenth-century romanticism, contrasting the vital force of living, organic, subjective beings with the dead hand of cold, rational, reductive machines. Many argued that scientists had concocted a speculative and damaging view of human behavior by illegitimately extending the reductive, mechanistic, materialistic approach of science to the study of human beings. Scholarly critics dubbed this philosophical error “scientism” in the 1940s, and the term came into wide usage by the late 1950s.

It is no coincidence that such arguments proliferated just as science’s influence reached new heights. The postwar period, which we now remember as the “golden age” of American science, brought a society-wide reckoning with the place of science in modern culture. Critics of varied political and religious persuasions argued that even the horrors of atomic warfare paled in comparison to science’s capacity to unravel the social fabric itself. Science, they contended, replaced the familiar view of human beings as moral actors with a new conception that ignored their capacity for moral choice and reduced them to the status of animals or machines. Such arguments helped to pave the way for the upheavals of the late 1960s and 1970s, even as the radical theorists of that era altered critiques of science’s cultural effects to fit their own purposes. A tendency to trace social ills to the cultural sway of an ideologically infected science carried through that transformative period and up to our own day, even as the details of the indictment have changed.

It may seem surprising that few Americans ascribed pernicious social effects to science before the 1920s. After all, a populist suspicion of elites and experts runs deep in American political culture. Yet that populist sentiment rarely targeted scientists before the 1920s. Many Americans viewed science as a kind of “people’s knowledge,” a practical, commonsense mode of reasoning that stood against all forms of elite authority. The political ascendance of Progressivism after 1890 made science increasingly central to governance, but the habitual identification of science with a populist rejection of authority largely persisted. Meanwhile, hardly any Americans believed that science had given their culture its distinctive character. Even those religious leaders who equated Darwinism with materialism thought that it threatened American culture in the future, not that it had already remade that culture. Up through World War I, the vast majority of Americans assumed that they lived in a Christian country, for better or worse. Indeed, the early twentieth century brought some of the loftiest expectations to date that the United States, and indeed the world, would be Christianized in every aspect.

Such hopes survived the 1920s in many circles. But small groups of cultural critics began to trace social changes that alarmed them to the cultural influence of science. Some lamented the mobilization of science by city and state governments: in classrooms, where biology lessons and sex education courses violated conventional norms, and in mandatory vaccination programs, which involved state agencies intervening directly in citizens’ bodies. Other critics worried about the growing federal bureaucracy, which continued to gain regulatory authority despite the rightward shift in electoral politics after 1920. Still others thought a climate of utilitarianism and industrialism had corrupted politics and learning alike. The hedonistic tenor of the 1920s consumer culture and the violations of sexual propriety by Jazz Age youth also signaled to some critics a widespread loss of moral guideposts.

Above all else, however, loomed the popular vogue of psychology, with its emphasis on cultural conditioning, childhood traumas, and other nonmoral, nonrational causes of behavior. The post–World War I years witnessed an explosion of popular interest in all of the natural and social sciences. But psychology became a veritable craze, with millions of readers devouring popular treatments and applying the new interpretive categories to themselves and others. Small cadres of literary scholars, southern writers, and mainline Protestants, along with larger groups of Catholic leaders and conservative Protestants, connected the vogue of psychology to wider social and cultural changes. They identified science as the source of a dangerously amoral worldview that had captured the public mind and eroded society’s cultural foundations. These critics of the 1920s levied a charge that would become increasingly common in subsequent decades: modern science had dissolved conventional understandings of the human person and led the entire culture astray.

Over time, the specific contours of this argument shifted with cultural and political changes. In the 1930s, the emergence of a moderate welfare state under Franklin D. Roosevelt reshaped perceptions of science’s cultural impact, and additional groups came on board. The bureaucratic innovations of the New Deal fed into the powerful associative logic of commonsense reasoning, leading a number of Americans to equate science with the technocratic, managerial liberalism of Roosevelt and his allies. Over the next few decades, this association would take firm hold, leading many of the New Deal’s challengers to question the authority of science and turning some critics of the social sciences against the welfare state. Meanwhile, many other skeptics argued in the 1930s and 1940s that the secularization of modern societies at the hands of scientists and their allies had created a moral vacuum that was filled by the totalitarian state.

The association of science with a secular form of welfare liberalism deepened in the 1950s and early 1960s, but the details of postwar critiques also reflected new conditions. Overlaid on concerns about nuclear destruction after World War II was what many saw as the imminent threat of manipulative, implicitly totalitarian programs of control by experts. As the Cold War took hold, the New Deal state became the “national security state” and birthed the military-industrial complex. With science growing ever more central to American governance, both instrumentally and ideologically, all manner of critics concluded that a spiritually deadening, technocratic outlook was forcing American society into science’s inhuman mold. A scientific understanding of humanity, in this view, permeated the culture at large, having radiated outward from the universities to shape public opinion and policy formation. Reality itself, many thought, was changing to fit the narrow, reductive interpretation of the scientists and planners: People were treating one another like machines, and behaving more and more like machines themselves. Such fears often centered on the alarming prospect of social engineering—the possibility that social scientists could reshape personalities and social practices in keeping with predetermined ends. The power-hungry social engineer and the mindless technocrat became stock figures in American cultural criticism after 1945.

The threat here lay in science’s apparent denial of the moral freedom of the individual, which many critics believed had turned American liberalism into a near-copy of Soviet ideology. (“Communism is based upon a scientific and value-free methodology,” a letter to the Catholic journal America declared.) Such critics identified science as a materialistic and deterministic mode of thought that reduced all phenomena to unchanging patterns of cause and effect, ruling out the existence of minds, ideals, values, and other nonmaterial entities. Applying this model to human behavior destroyed human autonomy and dignity, they argued. All around them, these critics saw “machine men” with “machine values,” “faceless ciphers” lacking any “consciousness or aims.” They sought to save the public from the social chaos created by science’s application to human behavior by subordinating empirical knowledge to the normative resources of religious, literary, or political traditions.

Such concerns became deeply entrenched in American culture by the 1950s. The postwar years produced not only the institutions and funding structures that still shape scientific research today but also many of our foundational assumptions about science’s contours and cultural meanings. In recent years, our histories of that crucial period have foregrounded science’s growing authority. Citing lavish research budgets, the prestige of physics, the ascension of psychological expertise, the cultural sway of white-coated experts, and the technocratic character of Cold War–era politics, they portray the postwar United States as the scene of naïve and almost universal trust in science. Yet there is another side to this story. The postwar era also brought potent fears that science had spread into intrinsically moral realms and cast its pall on the culture at large. The commentators who voiced these fears were numerous, prominent, and influential. Even as science and scientists took on important new roles in American society, the expansion of their authority also inspired a national referendum on the social, cultural, and political meanings of science that featured deep undercurrents of fear and mistrust alongside assertions of beneficence. A wide variety of critics argued that science, rather than big business, the welfare state, the military, or the churches, set the tone for American public life.

Above all, mid-twentieth-century critics emphasized science’s impact on philosophical anthropology: theories of the human person. “It is here, on the nature of man, between those who would respect him as an autonomous person and those who would degrade him to a living instrument, that the issue is joined,” the political journalist Walter Lippmann wrote. “From these opposing conceptions are bred radically different attitudes towards the whole of human experience, in all the realms of action and feeling, from the greatest to the smallest.” Most critics sought to bring a seemingly amoral, nihilistic science under the sway of some version of “humanism” that emphasized the moral freedom of the individual. Varieties of humanism proliferated in response to the apparent threat from science: there were Christian, conservative, Marxist, classical, and literary humanisms, and many hybrids as well. Each portrayed the human person in immaterial, voluntaristic terms, ignoring the body and identifying exercises of individual subjectivity—valuing, preferring, choosing—as the truly human modes of behavior.

The postwar equation of science with a cold, depersonalized stance contradicted the alternative conceptions of nature and science that had flourished among many working scientists and naturalistic philosophers during the interwar years. By the early twentieth century, most biologists and some philosophers had made room for human values, ideals, and purposes within their understandings of nature. Such views also permeated the social sciences in the 1920s and 1930s. Although physical scientists and engineers tended to view morality in fairly traditional terms, as the product of Christian faith, by the 1920s most biologists, social scientists, and allied philosophers had concluded that the moral freedom of the individual figured centrally in a naturalistic, evolutionary understanding of life on earth. But theorists of this variety struggled in vain against skeptics who insisted that applying the scientific method to the human world entailed squeezing out or imagining away its moral content.

For these commentators, studying any subject scientifically meant applying a particular conceptual framework—a reductive, materialistic, mechanistic, and often quantitative lens. Science was the study of matter in motion, guided by strict causal relations that could be discerned through sensory evidence and expressed in quantitative terms—ideally, rigorous, mathematical formulas like Newton’s laws. On this view, science embodied the mechanistic viewpoint of classical, nineteenth-century physics; it was isolated from normative claims and confined solely to spatiotemporal phenomena.

By definition, such a science was strictly neutral with regard to morality—and thus, the critics declared, utterly impotent as a guide in human affairs, except insofar as one’s goals were purely technical and instrumental. Although the scientific method fit the physical world, human dynamics stood outside the “nature” that scientists could explore. From this perspective, studying human beings scientifically meant assuming that they acted like physical objects. After World War II, a growing number of critics considered this kind of “scientism” not only a faulty philosophy but also a dangerous cultural force that permeated the modern age—increasingly seen as an “age of science”—and had produced its characteristic problems. They traced social norms, cultural practices, government policies, and even wars to science’s amoral outlook. And such arguments appeared among critics from across the political spectrum and religious believers of virtually all theological persuasions.

These views of science and modern culture carried through to the 1960s, shaping that decade’s multiple, overlapping revolts. The left-wing humanism of many student radicals and activist professors often hewed surprisingly closely to the views of liberal, centrist, and even conservative commentators from the 1950s. These radicals, too, portrayed a society relentlessly driven by technical imperatives to trample on human values at every turn. But the stream of humanistic criticism also flowed into new channels as it was caught up in the political earthquakes of the 1960s and 1970s. Left critics now joined conservatives in arguing that scientism represented the characteristic ideology of a ruling elite. Yet these radicals identified science as a bulwark of traditional social norms, not a corrosive threat to such norms. They increasingly argued that science’s cultural influence buttressed social inequalities, keeping favored groups in power and others down. In the 1970s, left-leaning critics also ascribed pernicious effects to biology as well as the social sciences. Here, the target of criticism was not science’s morally relativistic character but rather its entanglement with assertions of innate group differences. This kind of argument circulated among critical scholars with growing frequency in the late twentieth century and shaped debates over biotechnology and other controversial issues.

Meanwhile, many free-market thinkers of the 1970s continued to link modern science to socialism as well as moral relativism. Some eventually warmed to the anti-Darwinism of the burgeoning Christian Right, which would deepen the convergence by adopting the economic conservatives’ climate denialism in the 1990s. Conservatives deplored the regulatory initiatives launched by Richard Nixon and congressional Democrats in the early 1970s, which yoked scientific research to federal power at new bureaus such as the Environmental Protection Agency and the Occupational Safety and Health Administration. A shared dislike of expert-driven policy initiatives helped broker the alliance between Christian conservatives dismayed by the secularity of the American state and economic conservatives alarmed by its size and scope.

Since the 1970s, claims about science’s baleful cultural influence have anchored important strands of radicalism and conservatism, even as they have largely disappeared from the rhetorical arsenals of liberals and centrists. From both ends of the political spectrum, one hears sweeping challenges to modernity, defined as an age of enthrallment to scientific rationality. Today’s critics often trace modern culture back to Descartes, Bacon, and Newton, not the shifts of the nineteenth and twentieth centuries. Recent critics have also linked science to capitalism and state power, while adopting a more pluralistic tenor than their postwar counterparts, who usually proposed a universal framework of values. Even religious traditionalists often adopt a pluralistic approach today, arguing that science must share the stage with an array of religious views. Despite these profound changes, however, the underlying assertion persists: science causes serious social and political problems by enforcing faulty understandings of humanity. That mode of analysis is much less common among mainstream commentators today than it was in the 1950s and early 1960s, but it remains influential in universities and among theological conservatives.

For a century, then, influential groups of American commentators have argued that science anchored a faulty cultural understanding of human beings and social relations—and, many added, reinforced the power of a dominant liberal elite in the process. This fact has mattered a great deal. In the mid-twentieth century, especially, anyone who attended a college or university in the United States, read magazines, listened to congressional leaders, or engaged in other ways with American public discourses heard numerous versions of the charge that science represented a moral threat to civilization, due to its corrosive effects on humanity’s self-conception. Our contemporary understandings of science, and even of our social and political worlds, reflect the potent impact of this critical tradition.

This tradition has shaped American politics, in particular, by challenging the legitimacy of welfare liberalism. The mid-twentieth-century complex of ideas and institutions that historians call the “New Deal order” suffered from numerous practical and conceptual weaknesses. For example, many white Americans’ distrust of racial minorities made them unwilling to devote tax dollars to promoting social equality and produced sharp disparities in employment and housing behind the scenes. But surely it also mattered that vocal critics at every point on the political spectrum—including many mainstream liberals themselves, as well as prominent religious leaders—argued over the years that the American welfare state was dangerously technocratic, bureaucratic, and dehumanizing. They contended that social science, a foundational resource for the New Deal agencies, was ideological rather than neutral and threatened humanity by corrupting its self-understanding. Such critics identified the welfare state as the product of a “disintegrated liberalism,” resting on “the illusion that scientific observation and logic alone will suffice in the treatment of human affairs.” In so doing, they tied the New Deal to undemocratic, even totalitarian projects of social engineering that turned autonomous individuals into raw material for experts to manipulate.

Across the past century, this style of criticism has also led critics on the left to repeatedly slide from economic to cultural understandings of power—and often to shift the blame for prevailing social conditions from capitalist elites to scientific elites, from political economy to rationality. In the late nineteenth and early twentieth centuries, social critics assumed that big business held the reins of power, buying the policies it desired while using its cultural influence to sustain a free-market ideology that disabled political opposition. By the postwar years, populist critiques of concentrated power increasingly turned away from big business toward experts. In this view, real power in modern America lay in the hands of secular, liberal professors, not business leaders, preachers, or politicians.

Since then, the emphasis on science as a threat to human values has taken new forms, even as moral commitments have become central to political identities in the United States. In recent decades, public debates have increasingly revolved around a series of competing declension narratives that posit a moral deficit in the nation’s public life—and often trace that deficit, in part or in whole, to science’s influence. Conservatives have long identified the New Deal as the moment of decline, when the United States lost its moral compass—to some, because a relativistic, naturalistic, and technocratic mentality took hold in American culture and reshaped public institutions and practices accordingly. New Leftists often located that technocratic turn in the years after World War II, while neoconservatives and the Christian Right focused on the 1960s. Proponents of each narrative have discerned a pervasive sense of moral aimlessness that they often linked to the cultural influence of science.

Through all of this disputation, the core image of science as a value-neutral, and thus innately amoral, enterprise has sunk ever deeper into the cultural bedrock. Generations of commentators have taken for granted that science entails a morally detached approach to the world, even as they clashed bitterly over its applications and implications. The consequences, though hard to measure, have been substantial. As the third decade of the twenty-first century opens, a potent new disease is spreading and the planet is lurching toward environmental disaster. Responding effectively to these threats will require us to think much more clearly and precisely about the configurations of scientific expertise that surround us—and often shape our lives in minute detail. Turning our attention from science’s champions to its critics can help us do just that.


Editors’ Note: This essay is adapted from Science Under Fire: Challenges to Scientific Authority in Modern America by Andrew Jewett, published by Harvard University Press. Copyright © 2020 by the President and Fellows of Harvard College. Used by permission. All rights reserved.