One observation in this article should be taken to heart by every development agency and NGO: development practice has suffered from too many big ideas.

From the Cold War onward, the landscape of development economics has been scarred by ideological battle. At every stage opposing camps have shouted that all that’s needed is debt relief, or extra aid spending, or even an end to aid. The extent of poverty, illness, hunger, and misery still afflicting the world makes it plain that no one big idea has proved compelling. Does that mean the opposite is true, that all well-meaning development effort should instead go into small, practical experiments?

Glennerster and Kremer have a wealth of valuable experience in evaluating the results of randomized experiments to see whether they have enhanced the education or health of groups of people in developing countries. Their approach has the potential to enormously improve the effectiveness of the tax dollars spent on overseas aid or donated to development charities, because it can help prioritize among different programs. Whether one is choosing between spending initiatives to raise school enrollment, or between spending on schooling and spending on clean water, having a solid empirical assessment of the likely impacts is surely important. The growing use of randomized experiments is to be welcomed for its contribution to the evidentiary basis of policy; development agencies and NGOs have no excuse for failing to evaluate their impact.

The wider relevance of experimental evidence should not go unchallenged, however. Some argue that small-scale experiments give only small-scale results, which don’t necessarily carry over into different contexts—an inducement to attend school that works in India might not be so effective in the different cultural context of Kenya. Small, practical experiments do not make it easy to extract universal truths about people’s choices and the collective outcomes to which these give rise. And in that case, experimental results are not so much pieces of the “development puzzle,” as Glennerster and Kremer call them, as unconnected dots that might not add up to a bigger picture.

Glennerster and Kremer’s argument about the relevance of behavioral economics to the analysis of experiments should be seen in the context of this critique of their approach. Otherwise the introduction into development economics of the fashionable idea of “nudging” people’s behavior seems misplaced. After all, the seemingly irrational behavior the authors invoke to account for the surprisingly large effects of some small changes in incentives can be explained through rational models.

There’s no need to call on the peculiarities of human psychology to explain precipitous drops in demand in response to small price increases or the highly adverse impact of small costs on investment in education. In many contexts it is enough to say that most of the people concerned have very low cash incomes and cannot easily borrow. In other situations, the explanation is a perfectly rational Catch-22: there is no payoff from investing in a child’s education if there are not many good jobs available, and employers won’t create good jobs in places where the general level of education is low.

Lack of information or asymmetries of information between different groups of people also play a large part in the challenge of economic development, as Joseph Stiglitz pointed out years ago in the work for which he was awarded his Nobel Prize. Glennerster and Kremer give an example of the relevance of information when they describe the impact on school enrollment in the Dominican Republic of simply explaining the benefits of education.

This is not to say that there is no role for policy nudges in development economics. There are clearly some contexts in which psychological traits lead most people to behave in ways that defy the conventional economic framework of rational self-interest, particularly when it comes to making calculations about a distant and uncertain future. Most people will choose a short-term gain over a less tangible long-term one, hence the pervasive under-investment in retirement plans. For similar reasons, I am often tempted to buy something just because it comes with an enticing, but small, free gift. These idiosyncrasies can no doubt be used to advantage by agencies in developing countries: why not borrow well-known sales techniques to market public health?

However, the validity of the experimental approach does not depend on behavioral psychology. If there were one phrase to sum up conventional, everyday economics, it would be: “incentives matter.” That seems to be exactly the point Glennerster and Kremer want to drive home, and I agree wholeheartedly.

Besides, there is a more convincing response to the critique that experimental results are not generalizable: if true, then neither are any other empirical results in economics. All data are specific to their context. In fact, most of the data used in empirical economics are particular to their time and location and blend many different influences, whereas experimental results at least isolate one feature of interest. Randomized experiments should not be the only approach to testing hypotheses, but they are an important addition to the economist’s toolkit.

The detour the authors make here into irrational explanations for behavior that is entirely rational diverts attention from their stronger message about the contribution experimental results can make to devising effective incentives for economic development. For that matter, rich nations, too, should be making more use of evidence from randomized policy experiments, rather than relying on the sweeping assertions of one party or another about their ideas for school improvement or public health. For, as Glennerster and Kremer conclude, “There are strong similarities in how people make decisions about investments in health and education across contexts.”