Michael Clemens says yes, in a new CGD blog post:
there was no fundamental reason why the selection of treatment villages for the MVP could not have been randomized. There was certainly a large pool of candidate villages, and the people running the MVP are some of the most capable scientists on earth, so they are very familiar with these methods and why they matter.
But treatment selection was not random, and it may be too late to evaluate the initial 13 MVs scientifically. It would be very easy, however, to scientifically evaluate the next wave.
My take: yes, evaluate away, but we probably won’t learn much that is useful from a simple randomized controlled trial. I’ve written about this before:
even if we looked at control villages, and saw an impact, what would we learn from it? “A gazillion dollars in aid and lots of government attention produces good outcomes.” Should this be shocking?
We wouldn’t be testing the fundamental premises: the theory of the big push; that high levels of aid, simultaneously attacking many sectors and bottlenecks, are needed to spur development; that there are positive interactions and externalities from multiple interventions.
The alternative hypothesis is that development is a gradual process, that marginal returns to aid may be high at low levels, and that we can also have a big impact with smaller, sector-specific interventions.
To test the big push and all these externalities, we’d need to measure marginal returns to many single interventions as well as these interventions in combination (to get at the externalities). I’m not sure the sample size exists that could do it.
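The sample-size worry can be made concrete with a back-of-envelope power calculation. Below is a minimal sketch in Python, using the standard normal-approximation formula for a two-sample comparison; the standardized effect size of 0.2 and unit variance are illustrative assumptions, not figures from the MVP. In a 2x2 factorial design (intervention A alone, B alone, both, neither), an interaction is a difference of differences, so its standard error is twice that of a main effect, and detecting an interaction of the same magnitude requires roughly four times the sample.

```python
from statistics import NormalDist

def n_per_group(delta, sigma=1.0, alpha=0.05, power=0.8):
    """Approximate sample size per group for detecting a standardized
    mean difference delta in a two-sample comparison (normal approx.)."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)
    z_b = NormalDist().inv_cdf(power)
    return 2 * ((z_a + z_b) * sigma / delta) ** 2

# Main effect in a 2x2 factorial: a difference of group means.
main = n_per_group(delta=0.2)

# A-x-B interaction: a difference of differences, so its standard
# error is doubled -- equivalent to halving the detectable effect.
interaction = n_per_group(delta=0.2 / 2)
```

With these illustrative numbers, a main effect needs on the order of 400 villages per group, while the interaction needs about four times that, and the blow-up gets worse with every additional intervention whose cross-effects you want to identify, which is the heart of the sample-size problem.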
We may (*gasp*) have to resort to non-random, even non-quantitative evaluation. Surely this is going on. I’d say the real question to the MVs is this: where’s the peer-reviewed evidence so far?
See my full post here.