We tend to associate randomized trials with new drugs for blood cholesterol and baldness, not with school textbooks, mosquito bed nets, and police patrols. But a growing number of development agencies are beginning to change our minds: they are assessing the impact of humanitarian aid and development programs with randomized controlled trials.
I am going to assume this fact is already familiar. For convenience, this talk will presume a great deal: that you appreciate the case for evidence-based programming; that you understand the nature of a randomized trial; and that you believe the ethical and political concerns with randomization are important but in many cases can be overcome.
My goal is not to make the case for randomized evaluation. Rather, I want to challenge the conventional wisdom. In particular, I want us to be more ambitious, more aware of the needs of implementers, and more forward-thinking in our approach to evaluation. Much of what is being done in evaluation today is very good, but I will argue we can do better. The hard work and ingenuity of researchers and implementers around the world have pushed us to a point where we are ready for a revision of evaluation practice: an impact evaluation 2.0.
On Thursday morning I spoke to evaluation specialists and the heads of social development at DFID, Britain's Department for International Development. Read the full text of my comments here.