Bill Easterly asked me to comment on his critique of randomized controlled trials (RCTs) in development.
I should preface anything I say with this: I am engaged in a flurry of RCTs, but have not actually finished any one of them–all are mid-stream. I’m also just barely out of the PhD. So I’m basically not qualified to speak on the subject.
Fortunately, in the blogosphere, that is not a prerequisite for an opinion.
What opinion would that be? Well, I share almost all of Bill’s enthusiasm for the potential for RCTs in development, as well as his criticisms and recommendations. But here are a few areas where I have something additional to say.
Bill Easterly: RCTs can cause hard feelings between treatment and control groups within a community or across communities.
Not necessarily. Is it the trial that causes hard feelings, or the fact that we are giving aid to one person and not another? The latter happens as a rule. So long as aid is scarce, there's always a 'control group' not getting the program.
There are two differences with an RCT: getting the program is random, and we’re collecting data on the unlucky ones. Does collecting the data create hard feelings? Not in my experience. What about the lottery aspect? The main reaction I’ve heard from communities about RCTs: “Finally, we know why some people get aid and others don’t, and we all have a fair shot.”
Aid allocation is often ad hoc, sometimes corrupt, and never transparent to the people who don’t get it. In some cases, lotteries among the deserving might actually be an improvement, or at least no worse. That’s when RCTs can be appropriate.
BE: Can you really generalize from one small experiment to conclude that something “works”?
Probably not. I think even the value of replication is overestimated; I'm worried that five years from now, when we have data on eight kinds of education programs, each run in ten countries, we won't be able to say which intervention, on average, is better than the others. The standard errors on the meta-analysis could be too large. (I hope I'm wrong, though.)
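To make that worry concrete, here's a toy sketch of why pooling across very different contexts can leave a wide confidence band. The numbers are invented for illustration (they come from no real study), and the pooling method is the standard DerSimonian-Laird random-effects estimator: the more the country-level estimates disagree, the larger the estimated between-study variance, and the wider the pooled standard error.

```python
# Toy sketch: random-effects meta-analysis with invented numbers.
# Illustrates how cross-country heterogeneity inflates the pooled SE.
import math

# Hypothetical effect sizes (in standard deviations) and standard errors
# from ten country studies of the "same" education intervention.
effects = [0.30, -0.05, 0.12, 0.45, 0.02, 0.20, -0.10, 0.35, 0.08, 0.15]
ses     = [0.08,  0.07, 0.09, 0.10, 0.08, 0.07,  0.09, 0.10, 0.08, 0.09]

# Fixed-effect weights ignore cross-country heterogeneity.
w_fe = [1 / s**2 for s in ses]
mean_fe = sum(w * e for w, e in zip(w_fe, effects)) / sum(w_fe)

# DerSimonian-Laird estimate of between-study variance (tau^2).
q = sum(w * (e - mean_fe)**2 for w, e in zip(w_fe, effects))
df = len(effects) - 1
c = sum(w_fe) - sum(w**2 for w in w_fe) / sum(w_fe)
tau2 = max(0.0, (q - df) / c)

# Random-effects weights add tau^2 to each study's sampling variance,
# so the pooled standard error grows with the disagreement across studies.
w_re = [1 / (s**2 + tau2) for s in ses]
mean_re = sum(w * e for w, e in zip(w_re, effects)) / sum(w_re)
se_re = math.sqrt(1 / sum(w_re))

print(f"pooled effect = {mean_re:.3f}, SE = {se_re:.3f}, tau^2 = {tau2:.3f}")
```

With estimates this spread out, the pooled effect comes with a standard error several times larger than any single study's, which is exactly the scenario where the meta-analysis can't tell you which intervention "works" best.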
In any case, I think we'll learn a lot about poverty dynamics over time, about how people respond to incentives, and about a huge range of other questions. Most of these findings won't come from the randomization itself, but from analyzing unusual patterns of response to a program, or why some people respond differently than others. That is, they will come from observational analysis. This knowledge will be hugely valuable.
BE: The most useful RCT results are those that confirm or reject a theory of human behavior.
Maybe. They will certainly be important. I would not be surprised if the most useful results are the ones we didn’t expect: the weird behavior; the result that runs in the opposite direction of what we predicted. The inductive analysis that follows could open up huge areas of behavioral research. This is my (inexpert) sense of how the experimental psychology literature revolutionized our understanding of human behavior.
BE: But RCTs are usually less relevant for understanding overall economic development.
I would change this to "areas of overall macroeconomic development". I think we will learn a lot about the fundamentals of how people climb out of poverty, the determinants of entrepreneurship, the roots of social and political participation, and responses to incentives. The big questions about economic development are not all macro-level. Indeed, the "big" micro questions have received less attention because, until recently, there was an absence of data.
A completely different criticism of RCTs: they're typically designed for academic publications first and policy change second, yet are usually sold on the policy-change argument. I've argued elsewhere how RCTs could be better managed to deliver policy change without sacrificing academic excellence. Institutes like IPA and J-PAL, the World Bank, and 3ie have made big strides in this area over the past few years, but I think they would be the first to say that there is still more to do.
One problem is that too few aid organizations are thinking strategically about how to make RCTs (and research in general) inform their strategy and operations. They’ve mostly outsourced that thinking to people whose incentives and priorities are important, but different. That is also starting to change, and I’m very optimistic about the next five years of studies.
4 Responses
Isn't the type of aid being delivered in an RCT also really important? It's one thing to randomize who gets, say, access to some kind of new agricultural technology and quite another to make it clear to a group of people that they won't be getting lifesaving medication or a mosquito net. Granted, those are oversimplifications and obviously we don't test medications. And as you point out, the alternative isn't much better.
Thank you so much for putting so well the sense that I think most people have. No, it isn't perfect, but we can't wait for perfect–and in the end, being more transparent and collecting more data shouldn't be criticized when the real counterfactual is just as imperfect, with a lower potential for good.