Learning from experiments that didn’t happen

The authors attempted to run randomized experiments evaluating the impact of seven matching grant programs in six African countries, but in every case were unable to complete an experimental evaluation. A standing critique of the randomized-experiment literature is publication bias: only experiments with "interesting" results get published. The hope here is to mitigate that bias by learning from the experiments that never happened.

The paper describes the three main proximate reasons the evaluations failed: continued project delays, politicians unwilling to allow random assignment, and low program take-up. It then delves into the underlying causes of each.

Paper here.

Not earth-shaking, but important to do, and it is unfortunately unusual for authors to take the time to write up results like these.

Kudos to them: Francisco Campos, Aidan Coville, Ana Fernandes, Markus Goldstein, and David McKenzie, all at the World Bank.