Where do you place the boundary between “science” and “pseudoscience”? The question is more than academic. The answers we give have consequences—in part because, as health policy scholar Timothy Caulfield wrote in Nature last April during the first wave of COVID-19, “tolerating pseudoscience can cause real harm.” We want to know which doctrines count as bona fide science (with all the resulting prestige that carries) and which are imposters.
This is the “demarcation problem,” as the Austrian-British philosopher Karl Popper famously called it. The solution is not at all obvious. You cannot just rely on those parts of science that are correct, since science is a work in progress. Much of what scientists claim is provisional, after all, and often turns out to be wrong. That does not mean those who were wrong were engaged in “pseudoscience,” or even that they were doing “bad science”—this is just how science operates. What makes a theory scientific is something other than the fact that it is right.
Of the answers that have been proposed, Popper’s own criterion—falsifiability—remains the most commonly invoked, despite serious criticism from both philosophers and scientists. These attacks fatally weakened Popper’s proposal, yet its persistence over a century of debates helps to illustrate the challenge of demarcation—a problem no less central today than it was when Popper broached it.
One cannot understand the fate of falsification without appreciating the context in which Popper’s answer emerged. Popper was born just after the turn of the twentieth century in Vienna—the birthplace of psychoanalysis—and received his doctorate in psychology in 1928. In the early 1920s Popper volunteered in the clinics of Alfred Adler, who had split with his former mentor, the creator of psychoanalysis: Sigmund Freud. This precocious interest in psychoanalysis, and his subsequent rejection of it, proved crucial to Popper’s later formulation of his philosophical views on science.
Philosophy of science was a big deal in Popper’s Vienna, and the decade when he was a student saw the flourishing of a group of thinkers called the Vienna Circle, which in the beginning included among its core members such people as the philosophers Moritz Schlick and Rudolf Carnap, the physicist Philipp Frank, the mathematicians Hans Hahn and his sister Olga Hahn-Neurath, and Olga’s husband, the social scientist Otto Neurath. This group elaborated the dominant philosophy of science of the first half of the twentieth century: logical empiricism. Not only did the Vienna Circle and its like-minded peers in Berlin dominate European philosophy of science, but after the rise of National Socialism in Germany, many of the leading lights—who were either Jewish, or socialist, or both—emigrated to the United States, where they reestablished their school of thought. Popper was thrust into globetrotting for similar reasons, emigrating first to New Zealand and then in 1946 to London.
Logical empiricism can be usefully understood by examining its component terms. Its advocates are empiricists because they believe that “sense data” constitute our only reliable sources of information about the natural world. Building on centuries of philosophical thought—most notably that of David Hume, the eighteenth-century Scottish philosopher who was especially important for Popper, and Ernst Mach, an Austrian physicist who emphasized the centrality of sense data for the natural sciences—logical empiricists rejected as “metaphysical” any claims about the structure of nature that could not be traced back to sensory observations. Moving beyond the empiricism of Hume and Mach, however, the logical empiricists also stressed the significance of logical relations in coherently assembling the shards of reality brought to us through our senses. These logical relations were not necessarily grounded in empirical data themselves, but they were essential to ascertaining nonmetaphysical truths about nature.
At first, Popper was quite taken with logical empiricism, but he would diverge from the mainstream of the movement and develop his own framework for understanding scientific thought in his two influential books The Logic of Scientific Discovery (1934, revised and translated into English in 1959) and Conjectures and Refutations (1962). Popper claimed to have formulated his initial ideas about demarcation in 1919, when he was seventeen years old. He had, he writes, “wished to distinguish between science and pseudo-science; knowing very well that science often errs, and that pseudoscience may happen to stumble on the truth.”
But how to do it? The results from a British expedition to study the solar eclipse of May 29, 1919, provided the key insight. Astronomers Arthur Eddington and Frank Dyson organized two groups to measure the deflection of starlight around the sun in order to test a prediction from general relativity, recently formulated by Albert Einstein. One of Einstein’s crucial predictions was that light’s path would be bent by strong gravitational fields, and during an eclipse one would be able to measure the precise degree of curvature for light hailing from stars located behind the solar disk. According to Eddington and Dyson, the measured curvature more closely adhered to Einstein’s theory than to that predicted by Newtonian gravity. The news made an immediate international sensation, catapulting Einstein to his global celebrity.
Popper was struck by Einstein’s prediction. “Now the impressive thing about this case,” he wrote decades later, “is the risk involved in a prediction of this kind.” Had the measurements found Einstein in error, Popper said, the physicist would have been forced to abandon his theory. Popper built his demarcation criterion around the bravado of wagering against refutation: “One can sum up all this by saying that the criterion of the scientific status of a theory is its falsifiability, or refutability, or testability.” This demarcation criterion is by far the most widely recognized of Popper’s philosophical contributions, although it was something of a digression. He first presented it at a lecture sponsored by the British Council at Peterhouse at the University of Cambridge in 1953, and it was later published in Conjectures and Refutations. This post–World War II articulation of his demarcation criterion has often obscured its Austrian origins, though Popper in the lecture stressed its historical roots in post–World War I Vienna.
All demarcation criteria are designed to exclude something. What Popper really wanted to do was to show why psychoanalysis and Marxism were not scientific. Those theories had been widely understood as “scientific” in his Viennese milieu because of a logical empiricist theory called verificationism. According to this view, a theory is scientific if it is verified by empirical data.
For Popper, this condition was grossly insufficient. There was plenty of data that apparently confirmed psychoanalysis, he claimed. Every piece of data about personalities might be another brick in the confirmatory edifice for Freud, just as every event in politics or economics seemingly further confirmed Marxist theories such as the centrality of class conflict in history or the surplus value of labor. What this meant for Popper is that logical empiricists were looking at things the wrong way around. The issue was not whether a theory was confirmed—anything might be interpreted as confirming if you formulated the theory flexibly enough. The point was whether it was possible to falsify the theory. Was there any imaginable observation such that, should it be found, Freudians or Marxists would concede that their theories were false? If the answer was no, these were not sciences.
The appeal of falsificationism is obvious. It provides a bright line, and it rewards the boldness that we often like to see exemplified in science. How well does it work?
The short answer is: not very. Philosophers of science recognized this almost immediately, for two main reasons. First, it is difficult to determine whether you have actually falsified a theory. This is largely a restatement of one of Popper’s own objections to verificationism. How do you determine that an observation actually constitutes a confirmation of a theory? Well, you interpret it within its framework, and sometimes those interpretations produce the lamentable distortions that Popper decried. But the same holds true for falsifying a theory, too. Suppose you did an experiment in your laboratory to test a theory, which predicts that under certain conditions your fact-o-meter should register a value of 32.8, and you got a result of 5.63. What do you do? Should you run to the journals and proclaim the death of that theory?
Not so fast. How do you know that your experimental result was accurate? Maybe the reason you did not get the value of 32.8 is that your fact-o-meter malfunctioned, or perhaps you did not perform the experiment under precisely the right conditions. In short, it is rare to have a thumbs-up/thumbs-down result as in the 1919 eclipse expedition. (As a matter of fact, the results of that expedition were more equivocal than Eddington made them seem. It was several years before absolutely incontrovertible results in support of general relativity were obtained, largely by observatories in California.) If any disconfirming result stood to invalidate a theory, then every tenet of modern science would have already been falsified by middle school science students failing to replicate utterly uncontroversial standard experiments. This is clearly nonsense. While it sounds like a good idea to insist on falsifying observations, it is far from straightforward to determine when precisely this has been done—and that defeats the purpose of having a bright-line standard.
The second problem with Popper’s proposal has to do with the actual demarcations it gives us. The very minimum we should expect from a demarcation criterion is that it slices the sciences in the right places. We want our criterion to recognize as scientific those theories that are very generally accepted as hallmarks of contemporary science, such as quantum physics, natural selection, and plate tectonics. At the same time, we want our criterion to rule out doctrines such as astrology and dowsing. Popper’s falsifiability standard is not especially helpful in this regard. For starters, it is difficult to present the “historical” natural sciences, such as evolutionary biology, geology, or cosmology—those fields where we cannot “run the tape again” in the laboratory—exclusively in terms of falsifiable claims. Those sciences provide persuasive explanations of nature through the totality of a narrative chain of causal inference rather than a series of empirical yes-no votes. Popper thus inadvertently excludes important domains of contemporary science.
The situation with inclusion is even worse. The difficulty was sharply expressed by philosopher of science Larry Laudan in an influential article from 1983. Popper’s criterion, he wrote,
has the untoward consequence of countenancing as “scientific” every crank claim that makes ascertainably false assertions. Thus flat Earthers, biblical creationists, proponents of laetrile or orgone boxes, Uri Geller devotees, Bermuda Triangulators, circle squarers, Lysenkoists, charioteers of the gods, perpetuum mobile builders, Big Foot searchers, Loch Nessians, faith healers, polywater dabblers, Rosicrucians, the-world-is-about-to-enders, primal screamers, water diviners, magicians, and astrologers all turn out to be scientific on Popper’s criterion—just so long as they are prepared to indicate some observation, however improbable, which (if it came to pass) would cause them to change their minds.
Laudan’s critique went further: any bright-line semantic criterion—that is, a formulation that relied on a linguistic test like Popper’s—would necessarily fail. He went on to describe the demarcation problem as a “pseudoproblem,” a claim that infuriated many philosophers who insisted that it remained a vital question in the philosophy of science. Yet the fact that Laudan was a tad overzealous in his phrasing does not invalidate his point: Popper’s criterion does not condemn to the fringe many of the doctrines we would like to see banished there. On the contrary, creationists and UFOlogists often quote Popper to assert that their own positions are scientific and those of their opponents are pseudoscientific.
A closer examination of Popper’s thought reveals that his formulation requires endorsing positions that are likely uncongenial to most falsifiability partisans. In his original demarcation article, as well as his monumental Logic of Scientific Discovery, Popper was explicit that his framework demands that we give up the possibility of ever attaining the truth about nature (or anything else). According to Popper, no scientific theory can, strictly speaking, ever be true. The best scientists can achieve is to say that a theory is not yet false. The existence of atoms, relativity theory, natural selection, the cellular structure of life, gravity, what have you—these are all provisional theories awaiting falsification. This is a consistent picture, but it cuts against the intuitions of many practicing scientists and philosophers, not to mention many of the rest of us.
As comforting as it would be for Popper’s clean demarcation criterion to resolve the question of separating science and pseudoscience, both logical analysis and a sociological glance at how scientists and laypeople actually demarcate demonstrate that it does not work. This raises another question: Why does it remain so popular?
The ubiquity of the falsifiability standard is partly an inadvertent consequence of a legal battle in the United States about “creation science”—a scientized rendering of the Judeo-Christian creation story as depicted in Genesis.
Controversies over teaching evolution in U.S. public schools simmered during most of the twentieth century, occasionally bursting into open conflagration. The first and most notorious of these was the “Scopes Monkey Trial” of July 1925. In spring of that year, Tennessee passed the Butler Act, which criminalized the teaching in public schools of human evolutionary descent from primate ancestors. The American Civil Liberties Union recruited teacher John Thomas Scopes to knowingly violate the law in order to test the constitutionality of the ban in court, arguing that by forbidding Darwin’s theory because it violated a particular religion’s creation story, the Butler Act violated the First Amendment’s prohibition of a state religion. Scopes was found in violation of the law and was fined $100. He appealed to the Tennessee Supreme Court, which set aside the fine on a legal technicality but upheld the constitutionality of the law on the grounds that while it forbade the teaching of evolution, it did not require the teaching of any other doctrine of human origins, and thus did not benefit any specific religion. And that is where matters rested. By 1927 fourteen states had debated similar measures, but only Mississippi and Arkansas enacted them.
Two incidents sparked a reevaluation. The first was the Soviet Union’s launch of the first artificial satellite, Sputnik, on October 4, 1957. The success triggered an extensive discussion about whether the United States had fallen behind in science education, and reform proposals were mooted for many different areas. Then the centenary of the publication of Darwin’s On the Origin of Species (1859) prompted biologists to declare that “one hundred years without Darwinism are enough!” The Biological Sciences Curriculum Study, an educational center funded by a grant from the National Science Foundation, recommended an overhaul of secondary school education in the life sciences, with Darwinism (and human evolution) given a central place.
The cease-fire between the evolutionists and Christian fundamentalists had been broken. In the 1960s religious groups countered with a series of laws insisting on “equal time”: if Darwinism (or “evolution science”) was required, then it should be balanced with an equivalent theory, “creation science.” Cases from both Arkansas and Louisiana made it to the appellate courts in the early 1980s. The first, McLean v. Arkansas Board of Education, saw a host of expert witnesses spar over whether Darwinism was science, whether creation science also met the definition of science, and the limits of the Constitution’s establishment clause. A crucial witness for the evolutionists was Michael Ruse, a philosopher of science at the University of Guelph in Ontario. Ruse testified to several different demarcation criteria and contended that accounts of the origins of humanity based on Genesis could not satisfy them. One of the criteria he floated was Popper’s.
Judge William Overton, in his final decision in January 1982, cited Ruse’s testimony when he argued that falsifiability was a standard for determining whether a doctrine was science—and that scientific creationism did not meet it. (Ruse walked his testimony back a decade later.) Overton’s appellate court decision was expanded by the U.S. Supreme Court in Edwards v. Aguillard (1987), the Louisiana case; the result was that Popper’s falsifiability was incorporated as a demarcation criterion in a slew of high school biology texts. No matter that the standard was recognized as bad philosophy; as a matter of legal doctrine it was enshrined. (In his 2005 appellate court decision in Kitzmiller v. Dover Area School District, Judge John E. Jones III modified the legal demarcation standards by eschewing Popper and promoting several less sharp but more apposite criteria while deliberating over the teaching of a doctrine known as “intelligent design,” a successor of creationism crafted to evade the precedent of Edwards.)
Jettisoning falsifiability won’t solve our initial problem, however: demarcation is simply inevitable. Scientists have finite time and therefore must select which topics are worth working on and which are not: this implies some kind of demarcation. Indeed, there seems to be a broad consensus about which doctrines count as fringe, although debate remains about gray areas.
Other approaches might prove more successful. Philosopher and former professor of biology Massimo Pigliucci, for example, has suggested that the problem with falsificationism is its one-dimensionality, not its effort to establish clear criteria. Perhaps we could add more dimensions that correspond to the heterogeneity of scientific practice. Some sciences, he notes, focus on expanding empirical knowledge; others focus on deepening our theoretical understanding. Some sciences even do both. But failing to do either is a reasonably good indication that the subject is not to be considered scientific. This approach is not flawless, but it avoids some of the pitfalls that beset Popper.
Instead of trying to develop a criterion that will encompass all claims to scientific status—an ambition shared by Popper and Pigliucci—you might instead concentrate on what we can think of as “local” demarcation criteria: characterizations that encompass groupings of fringe doctrines without claiming to provide a be-all, end-all solution to the demarcation problem.
We might, for example, sort fringe doctrines into “families” that can be usefully analyzed together. At least four such families are discernible today. First, there are vestigial sciences, based on past “legitimate” science that is now out of date (like astrology). Then there are hyperpoliticized sciences yoked to ideological programs (such as Stalinist objections to genetics). A third group are counterestablishment sciences that replicate the sociological structures of mainstream science (creationism is the classic example). And in a final group we might collect the lineage of theories that have posited extraordinary powers of mind (telepathy experiments and spoon-bending).
These categories often overlap; you might fit a particular doctrine, such as Mesmerism, into more than one family. These four categories are also surely far from exhaustive; the fringe mirrors the heterogeneity of science itself. Still, reflecting upon the diversity of fringe doctrines can provide tools to understand how mainstream science works—and offer resources for thinking about the inevitable, if imperfect, task of demarcation.
Editors’ Note: This essay is adapted from On the Fringe: Where Science Meets Pseudoscience by Michael D. Gordin. Copyright © 2021 by Oxford University Press. All rights reserved. Used with permission.