
Impact Evaluation 2.0

We tend to associate randomized trials with new drugs for blood cholesterol and baldness, not with school textbooks, mosquito bed nets, and police patrols. But a growing number of development agencies are beginning to change our minds; they are assessing the impact of humanitarian aid and development programs with the randomized controlled trial.

I am going to assume that this fact is already a familiar one. For convenience, this talk will actually presume a great deal: that you appreciate the case for evidence-based programming; that you understand the nature of a randomized trial; and that you believe the ethical and political concerns with randomization are important but in many cases can be overcome.

My goal is not to make the case for randomized evaluation. Rather, I want to challenge the conventional wisdom. In particular, I want us to be more ambitious, more aware of the needs of implementers, and more forward-thinking in our approach to evaluation. Much of what is being done in evaluation today is very good, but I will argue we can do better. The hard work and ingenuity of researchers and implementers the world round have pushed us to a point where we are ready for a revision of evaluation practice: an impact evaluation 2.0.

Thursday morning I spoke to evaluation specialists and the heads of social development at DFID, Britain’s Department for International Development. Read the full text of my comments here.

4 Responses

  1. I realize it has been almost a year since you posted this, but I happened to read it just now and wanted to share a few comments.

    – I especially liked your emphasis on the value of qualitative evaluation, and Impact Evaluation 2.0’s focus on processes and management rather than on different interventions altogether.

    – However, you also argued that Impact Evaluation 2.0 focuses on understanding causal relationships, whereas IE 1.0 does not?

    – Secondly, in your example of changing different components of microfinance, how different is it from conventional IE 1.0? Does the 1.0 version not include such programs?

    Thanks,

    najim

  2. I strongly agree with your comments on evaluation and the limitations of randomised experiments. Yes, they provide good information, but no, they have serious limitations. This is the case (a) for interpretation and (b) in trying to use the results from them as the basis for large-scale non-experiments and for generalisation. Labour economics has gone through this debate since the 1980s.

    Also, conventional methods of evaluation can be extended to try to provide an explanatory narrative linking policy interventions with outcomes.

    For a recent example concerning the impact of infrastructure regulators (e.g. in electricity/energy and telecoms) on outcomes, see Brown, Stern and Tenenbaum, “Handbook for Evaluating Infrastructure Regulatory Systems,” World Bank, 2006. This and related work suggest using analytical narratives, i.e., carefully structured comparative case studies.
