Impact Evaluation 3.0?

Three years ago I gave a talk at DFID called Impact Evaluation 2.0.

At the time my blog had a readership of about one (thanks, Mom!) and I never expected the talk would get around much, which is why I gave it such an arrogant title.

To my enduring surprise, some people actually read it. I have seen it circulated at conferences on impact evaluation, and to my horror someone was actually going to quote it in an academic article. (I persuaded them otherwise.)

The talk is in need of some serious revision. A paper is in the works. That probably means 2012.

My point in 2008 was to talk about how impact evaluations could better serve the needs of policymakers and accelerate learning.

Frankly, the benefits of the simple randomized control trial have been (in my opinion) overestimated. But with the right design and approach, they hold even more potential than has been promised or realized.

I’ve learned this the hard way.

Many of my colleagues have more experience than I do, and have learned these lessons already. What I have to say is not new to them. But I don’t think the lessons are widely recognized just yet.

So, when asked to speak to DFID again yesterday (at a conference on evaluating governance programs), I decided to update it a little. They had read my 2.0 musings, and so the talk was an attempt to draw out what more I’ve learned in the three years since.

The short answer: policymakers and donors — don’t do M&E, do R&D. It’s not about the method. Randomized trials are a means to an end. Use them, but wisely and strategically. And don’t outsource your learning agenda to academics.

Probably I should actually wait until I’ve published my field experiments before I shoot my mouth off any further. But as readers of this blog may know: why stop now?

So, I bring you slides and (abbreviated) speaking notes for… “Impact Evaluation 3.0?”

Comments and criticisms welcome.

Update: Incidentally, I should also point people to others singing a similar (but more harmonious) tune: