Nate Silver's Data Narratives
April 23, 2014
Nate Silver’s re-launch of FiveThirtyEight is only a few weeks old but is taking fire from all angles. Each critic seems to find a different flaw, picking apart everything from the site’s early content to its guiding philosophy and staff. Marc Tracy is skeptical of Silver’s single-minded focus on forecasting concrete outcomes; Ryan Cooper rebuts the claim that FiveThirtyEight can “just do analysis” without letting liberal or conservative ideology intrude; Paul Krugman points out that trying to let data “speak for itself” is a hopeless endeavor, since data always have to be interpreted in the context of theories.
These criticisms of Silver’s content and method are all valid, but FiveThirtyEight’s fundamental problem is the tension between being clever or original and focusing on what matters most. In this respect, Silver (and other data journalists) may have more in common with traditional reporters and commentators than he realizes.
Silver’s feud with the political pundit class runs deeper than the divide between quantitative and qualitative reporting. As Cooper notes, Silver reserves his sharpest words for op-ed columnists who are “spitting out the same column every week” without doing much original thinking. Silver frames his site around the epigram of the fox (who “knows many things”) and the hedgehog (who “knows one big thing”)—the latter allegedly represented by Thomas Friedman and his ilk, the former by Silver and his team.
Silver can’t be accused of writing the same column over and over, but his greatest virtue as an election prognosticator was that he did know a very “big thing”: that the candidate who leads in most polls is likely to win. As Silver himself has noted, averaging public polls produces fairly good predictions, and the statistical modeling that made him famous adds only marginal accuracy. The op-ed columnists found his approach simplistic, fancying themselves to be foxes who foretell winners from things like lawn signs, crowd sizes, or “vibrations,” as Peggy Noonan infamously tried to do on the eve of the 2012 election. But Silver was right and the Noonans of the world were wrong, victims of what Seth Masket describes as a “career incentive to somehow find something new to say in this deeply repetitive environment.”
This incentive is what leads commentators to report on things that change from day to day rather than more robust underlying trends. Writing about the 2016 presidential election requires referring to Obama’s approval ratings and the economy’s growth rate over and over again; discussing budget negotiations requires reiterating that mandatory spending and defense make up the bulk of federal spending; writing about congressional elections requires reminding readers that even in “wave” elections, most incumbents are reelected. These are as basic as statistics get, and hardly require much mathematical skill to understand. Yet journalists regularly trip up on all three, focusing instead on (respectively) candidate speculation, high-profile discretionary government programs, or nebulous anti-incumbent sentiment.
Data journalists are ideally positioned to counteract this tendency. But when they too need to come up with new material on a daily basis, as FiveThirtyEight intends to do, quantitative types can fall into their own version of the same trap. To avoid repetitiveness, number crunchers can be tempted to dig ever deeper in search of Freakonomics-style hidden influences. This temptation is heightened by the fact that, as relative outsiders to the world of political reporting, data journalists are under special pressure to produce eye-catching results. Blogging about basic truths about politics or policy doesn’t qualify, since—as political scientist John Sides puts it—it tends to provoke a reaction from political commentators along the lines of “that study confirmed what I already thought so it’s a stupid study.” Accordingly, many of the new FiveThirtyEight’s early pieces have been original statistical analyses of off-beat topics such as dating and pro wrestling.
The pressure to be original presents two dangers. First, data mining is a great way to churn up spurious correlations. If, as Stanford's John Ioannidis argues, most published research findings are false, the same is surely true of blog posts by writers who need to find a statistical relationship in time to make deadline. Second, using advanced techniques to analyze current events, even when done well, risks reinforcing the impression that data journalism is arcane, for math wizards only. Silver would certainly, and correctly, disagree, but it is clear from the tone of some of his fiercer critics that they think data journalism is something that happens behind an impenetrable curtain.
This is not to say that good reporting must be repetitive. The best journalists find ways to explore new aspects of issues or place them in a different light without taking the focus off the figures that matter most. The old FiveThirtyEight’s election coverage exemplified this kind of journalism. Ezra Klein—whose Vox.com site is often grouped with FiveThirtyEight as an example of data journalism—wrote that Silver’s ability to turn data into narrative meant that “election obsessives could go to Silver every single day and read something new, even though nothing had really happened.” Paul Krugman, Silver’s new arch-nemesis, is another good example: his writing for popular audiences uses hard numbers not to replace narrative but to assess it—to show that some narratives are plausible while others are not.
Krugman is right that numbers don't speak for themselves, but they do provide something that many political commentators lack: a sense of scale and proportion. Contra Dylan Byers, data journalism can be rigorous without producing forecasts as precise as Silver's election models. The value of quantitative analysis often lies not in pinning down the exact relationship between X and Y but in establishing whether X's effect on Y is large or small. Political scientists can't forecast public opinion with much certainty, for example, but they can demonstrate that changes in partisan control have a large effect on public opinion while presidential speeches typically do not. These findings are inconvenient for commentators who claim Obama could accomplish anything if he would only show leadership like Reagan did. And inconvenience is precisely the point: data journalism keeps political debates grounded in plausibility in ways that traditional reporting can't achieve on its own.
Introducing a sense of scale to political debate is a huge endeavor, and not one that FiveThirtyEight—or Vox, or the similar projects underway at the New York Times and The Washington Post—can accomplish alone. But it’s a better goal for data journalism than producing daily forecasts or grand theories.