Randomized evaluations: The handbook(s)

Everything you need to know in one volume, by JPAL’s Rachel Glennerster and Kudzai Takavarasha.

Basically this is a how-to guide for practitioners, or for researchers who are new to RCTs and want a checklist to avoid common pitfalls. I certainly wish I’d had it a few years ago.

For veterans it’s a useful reminder and checklist. And a teaching tool. I’ll use it for undergraduate and graduate classes.

For the more technically-minded, good pairings are Mostly Harmless Econometrics (by Angrist and Pischke) and Field Experiments (by Gerber and Green). Both are required reading for any serious field empiricist. Deaton’s Analysis of Household Surveys is also free online, or you can buy a paper copy.

I wish there were a good handbook like this on data collection in developing countries. Any reader suggestions? I bought several copies of an old Casley and Lury book, now out of print and out of date.

Update: Was just alerted to Overseas Research: A Practical Guide, which I have not read but looks promising for new researchers.

5 Responses

  1. Chris – your readers may also be interested in “Field Trials of Health Interventions in Developing Countries: A Toolbox” by Smith and Morrow. As the title implies it’s written specifically with health in mind, but I think a lot of it applies more broadly. It’s also not specifically geared at RCTs, but I believe most of the examples focus on community-randomized trials. Lots of practical stuff in there — we used it for a class on study design and execution in developing countries in the international health department at Hopkins:
    http://www.amazon.com/gp/product/0333640586/ref=as_li_ss_tl?ie=UTF8&camp=1789&creative=390957&creativeASIN=0333640586&linkCode=as2&tag=bretkell-20

  2. Chris:

    Based on my experience described above, adjusting for baseline covariates is not routine in the JPAL/IPA crowd. Or, at least, it was not routine a few years ago.

    Just to be clear: my message is not just to adjust for background variables (that’s obvious to all, I think!) but to go to the trouble of measuring background variables. That was where the trouble came in: the person I talked with did not have any good background variables kicking around, and I was suggesting that it could be worth the effort to put in some work to measure and construct such variables. So the advice I’d like to see is, at the design stage, to record good pre-test variables (as is done, of course, in education research). It’s a design question not just an analysis question, but I didn’t really make that point clear in my comment above (or maybe in my discussion with the poverty researcher a few years ago).

  3. Alas, the book is in my New York office and I am in Florida, and I don’t know offhand. But I think adjusting for baseline covariates is routine in the JPAL/IPA crowd from which this book emerges.

    Some people are purists and want to see unadjusted mean differences, I suppose because they are suspicious of monkey business, but most of us just do this in a supplementary table.

    David McKenzie has a nice article making your point about covariates and efficiency for the field experiments crowd, as well as some thoughts on the trade-offs between baselines and further endlines, and the relative efficiency of difference-in-differences estimates versus simply controlling for the baseline level of the dependent variable (a quick simulation of that comparison appears after the comments).

    Personally I’ve found the efficiency gains from baseline covariates to be very small; they seldom affect my estimates, except in one case when I happened to have chance but significant imbalance at baseline.

  4. Chris:

    Do they recommend that researchers record pre-treatment variables? I’m just wondering, for the following reason: A few years ago I went to a talk given by a prominent advocate of randomized trials for studying social interventions. The results of the particular study being discussed were not quite statistically significant (and I credit the speaker for presenting such results rather than sweeping them under the rug). Afterward I suggested regressing on some pre-treatment variables on the theory that this would reduce residual error, which was obviously a concern in this example. The speaker patiently replied that, no, it was a randomized study so there was no need to adjust for anything. To which I patiently replied that, yes, I knew all about randomized studies, but if you adjust for relevant pre-treatment variables you can get big gains in efficiency. The speaker then started to come up with excuses why no pre-treatment variables were available. These excuses might have been legitimate but to me it sounded like rationalization, that the speaker wanted to avoid doing any more work.

    Anyway, the point of this story is that randomization is great, but I wouldn’t want people to think that it makes statistical analysis irrelevant or that it’s a license to throw away information. So I hope that this new textbook makes it clear that you can randomize and adjust for pre-treatment variables too.
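To make the covariate-adjustment point in this thread concrete, here is a minimal simulation sketch. It is not from the handbook or from McKenzie’s article; the effect size (0.20), the baseline–endline autocorrelation (0.7), and all variable names are made up for illustration, and it assumes numpy and statsmodels are installed. It compares three estimators of the same treatment effect in a simulated RCT with a baseline measure of the outcome: the unadjusted difference in means, a regression that controls for the baseline outcome (ANCOVA), and difference-in-differences.

```python
# Illustrative simulation only: compares unadjusted, ANCOVA, and
# difference-in-differences estimates of a randomized treatment effect.
# All parameter values are assumptions, not taken from any real study.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n, true_effect = 1000, 0.20

# Baseline outcome, random assignment, and an endline outcome that is
# imperfectly correlated with baseline (autocorrelation of 0.7).
baseline = rng.normal(0, 1, n)
treat = rng.integers(0, 2, n).astype(float)
endline = 0.7 * baseline + true_effect * treat + rng.normal(0, 1, n)

def ols(y, regressors):
    """OLS with a constant; returns the treatment coefficient and its s.e."""
    X = sm.add_constant(np.column_stack(regressors))
    fit = sm.OLS(y, X).fit()
    return fit.params[1], fit.bse[1]  # treatment is the first regressor

# 1. Unadjusted difference in means (regress endline on treatment only).
b1, se1 = ols(endline, [treat])

# 2. ANCOVA: control for the baseline level of the outcome.
b2, se2 = ols(endline, [treat, baseline])

# 3. Difference-in-differences: the outcome is the change from baseline.
b3, se3 = ols(endline - baseline, [treat])

print(f"Unadjusted:    {b1:.3f} (se {se1:.3f})")
print(f"ANCOVA:        {b2:.3f} (se {se2:.3f})")
print(f"Diff-in-diff:  {b3:.3f} (se {se3:.3f})")
```

On most draws the ANCOVA standard error is the smallest of the three and the difference-in-differences error falls in between, which is consistent with the McKenzie argument referenced above: when the outcome is imperfectly autocorrelated, controlling for the baseline level tends to beat differencing it out, and measuring baseline variables at the design stage is what makes either option possible.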
