Studies cannot alone provide the answers.
March 17, 2011
Rachael Glennerster and Michael Kremer offer a balanced celebration of the growing body of work that explores behavioral similarities in how people access education and health care in development settings. They review evidence showing that the rational-consumer model does not reliably predict behavior; that costs, incentives, and information do not have simple and consistent effects on people’s choices; and that paying attention to the timing and magnitude of costs and incentives—by evaluating programs in randomized trials—is important in the design of effective solutions to health and education problems.
But some practitioners, particularly educators, are uncomfortable with the application of randomized controlled trials to policy-optimization questions. Educators are acutely aware that their own decisions are not binary; nor are those of their clients, students, and students’ parents. Education programs are different from health interventions, and perhaps less amenable to controlled-trial methodologies because of their duration and complexity. One cannot teach a child to read with an immunization schedule, nor will a quick shot of a word or two protect anyone from ignorance.
This does not undermine the validity and utility of Glennerster and Kremer’s work for education-program design, but it is a reminder that these studies, no matter how numerous and well done, cannot alone provide the answers.
The authors acknowledge some challenges to the value of the studies, but practitioners raise several more.
First, in most randomized controlled studies, there is no opportunity for trial and error. Typically, policies and programs evolve over time and are adjusted and improved (or sometimes diminished) as results and responses are collected and analyzed. New and complex interventions suffer when this evolutionary process is off limits. The tested versions of interventions may be suboptimal, consumer behaviors may change, and potential effectiveness may be underestimated. Without tinkering, programs may never achieve their full potential.
Second, tailoring development programs to a specific place and the people who live there is widely seen as an element of effective provision of education for the poor. The studies discussed here complement but do not replace the insights of other approaches—such as participatory rural appraisal, whereby an outsider facilitates conversation within a community about its problems—that aim to collect timely, valid, and useful information for program design. There are many ways to learn about needs, opportunities, challenges, time frames, desired results, barriers to access, and incentives.
Finally, even complex designs seldom do justice to the alternatives that parents and students face as they weigh the costs of schooling against the long-term benefits of learning.
Consider the Madagascar example Glennerster and Kremer cite. Here randomized experimentation complements other methods. Providing parents with information about the returns to education led to almost twenty years of additional schooling for every hundred dollars spent. Parents who responded to that information were themselves learning on many levels: absorbing information, linking new information to their knowledge of their financial assets and their children’s potential, calculating near- and long-term returns, possibly forecasting income and budgeting resources, and so forth.
There are decades of research on parents’ calculations of the benefits of education and on decision-making that hinges on whether the payback will accrue in the long or short term. In cultures where women marry and live with their husband’s family, parents historically have eschewed investing in girls’ education because the benefits accrue to another family. In many families, the decision to educate girls continues to be influenced by marriageability. A next generation of studies might explore the interaction of parents’ context-specific calculations with the timing and dimensions of near-term incentives for and costs of education. Such studies could integrate insights of approaches less rigorous than randomized controlled trials, but highly relevant to designing cost-effective and therefore sustainable interventions.
Good research in education has high yields. Yet examples of education trials are scarce in Glennerster and Kremer’s article. Perhaps that reflects the less comfortable fit of randomized controlled methodology with education than with health. In the case of health, the most parsimonious of metrics—mortality—is available. But learning is challenging to measure, education involves many services delivered over the course of many years, and inputs are extremely hard to control. As a result, the experimental method is difficult to apply to the education process.
But precisely because learning is powerful, characterizing and measuring it well is important. To do this requires carefully assessing what kinds of learning are measured by evaluations, and what kinds of learning parents expect when they invest in education.
The tale of trial-and-error improvements to rural chlorination initiatives happily reminds us that a single randomized study will never provide a definitive answer to a complex problem. Each evaluation, however, can be a useful piece in the mosaic of knowledge that we use to design development programs. In education, rigorous processes of trial, adjustment, and retrial to improve children’s learning—not just their access to school—have been too scarce: as access in the developing world has surged over the past twenty years, quality has declined. High rates of failure to learn basic skills compromise the future returns on families’ investments, and evidence of parental reluctance to pay for poor-quality education and poor learning outcomes is already emerging, especially with regard to girls.
Randomized trials can test whether an education program actually delivers learning, meets parents’ expectations of the economic and social returns of learning, and improves attendance. However, randomized trials in education are expensive and difficult to do well. To make measurable progress toward the day when every child in school is actively learning, education also needs rigorous research approaches that can be undertaken at local scale, provide quick feedback, and are not extremely costly.