Mar 16, 2011
I agree with Rachel Glennerster and Michael Kremer’s basic points that small incentives can generate large changes in behavior among the poor, and that in general the new field of behavioral economics and the empirical method of random evaluation have made the study of development interventions richer and sharper. But there are some areas of their argument where I would put different emphasis or where I find their claims somewhat exaggerated.
First, even though Glennerster and Kremer discuss them together, the innovations of behavioral economics and random evaluation are conceptually distinct. One can usefully apply random experiments to test hypotheses where the agents’ activities do not depart from standard assumptions of rationality; similarly, non-experimental methods can test hypotheses derived from behavioral economics.
Second, that small changes in initial conditions can make large differences in outcomes is accepted even within the rational-agent paradigm. The behavioral approach adds a new dimension, but the experiments cannot usually determine when and under what circumstances small incentives will cause large changes in behavior. The experiments make us more pragmatic, but are we any wiser about causal processes? A successful experiment tells us what works, but not why or how; this, in many cases, makes the experimental approach less useful for policy design or broader application.
Third, while Glennerster and Kremer recognize that “the details of institutional circumstance can matter a lot,” they are nonetheless impressed to report that, “despite our striking surface differences, there are strong similarities in how people make decisions about investments in health and education across contexts.” I don’t deny this, but it is important to keep in mind that economists have a professional inclination toward false universality and that our departures from narrow rationality can be highly culture-specific. Consider commitment devices, which are intended to overcome some of the “weaknesses of will” that behavioral economists have explored, such as procrastination or the tendency to place undue weight on present, as opposed to future, conditions: a device that works in Calvinist northern Europe may not work in the slacker cultures of Sicily or Bengal.
People are often motivated more by self-esteem than by income or welfare, and different cultures sometimes cater to self-esteem in completely different ways. Sociologists tend to emphasize norm-based behavior, in which ends and means, taken as given in economic models, co-evolve through social interaction. Loyalty norms, which shape social interactions, differ greatly between countries, and “visceral factors,” which some behavioral psychologists emphasize in understanding lack of self-control, can operate in highly variant ways. Honor killings are an extreme example of how group-loyalty norms and visceral factors can combine to generate a particular type of irrational outcome.
Finally, Glennerster and Kremer risk overstating the transformative character of random evaluation in the field of development. While this method has certainly made the search for causal identification in empirical development economics much cleaner—particularly in the case of certain types of development interventions—one must not lose sight of the wider scope of the discipline as a whole. Development economics is legitimately concerned with various historical and structural explanations for why poverty persists in some contexts more than in others, and many of these explanations can’t be found through random evaluation or general experimental methods. While Glennerster and Kremer stress that their approach is not just the next “‘big-think’ fad,” I am sure they are aware of the limitations of experimental methods in understanding many development processes.
Development economics does not just provide a better manual for project evaluation. Evaluation may be all-important for an organization such as the World Bank, which is concerned about whether the loans it disburses are properly used. But there are other issues in the discipline that may be ignored as the experimental program becomes its own kind of fad: researchers often lose interest in important questions that cannot feasibly be explored using randomization methods. Some of these questions are historical, institutional, and structural in nature and have very little to do with the impact of simple treatments such as those of a micro-health or educational intervention.
For instance, governments often have to make decisions about the proper industrial policy to pursue in light of potential market and government failures—how much to invest in physical infrastructure, where to locate a power plant or build a shipyard. Small-scale experimental methods do not offer much guidance here.
Even in social-service efforts in education and health, NGO-enabled experiments such as those documented by Glennerster and Kremer are often confined to the short-run impact of relatively small projects. They cannot shed much light on the enduring effects of projects that are supposed to last, nor can they teach us a great deal about scaled-up projects, in which political entities necessarily get involved and distributional issues—who gains and who loses—may be at least as important as the average impact of a given treatment.
It is not always easy to distinguish between observed outcomes that may be due to behavioral factors departing from self-interested rationality and those generated by out-of-equilibrium play in a complicated game. Similarly, heterogeneity of agents, which is not captured in the average effect of an experiment, can be a competing explanation of, say, the non-adoption of new technology by peasants. And in situations of extreme poverty and deprivation, the poor often internalize the severe social and economic constraints they face, so that their observed behavior may reflect a distorted or ambiguous perception of their own interests, taking the form of fatalism, low aspirations, and underestimation of needs. In such cases a soft-paternalistic nudge on the part of the policy-maker may not be enough.