Has the randomized trial movement put the auditors in charge of the R&D department?


My main problem with RCTs is that they make us think about interventions, policies, and organizations in the wrong way. As opposed to the two or three designs that get tested slowly by RCTs (like putting tablets or flipcharts in schools), most social interventions have millions of design possibilities and outcomes depend on complex combinations between them. This leads to what the complexity scientist Stuart Kauffman calls a “rugged fitness landscape.”

Getting the right combination of parameters is critical. This requires that organizations implement evolutionary strategies that are based on trying things out and learning quickly about performance through rapid feedback loops, as suggested by Matt Andrews, Lant Pritchett and Michael Woolcock at Harvard’s Center for International Development.

RCTs may be appropriate for clinical drug trials. But for a remarkably broad array of policy areas, the RCT movement has had an impact equivalent to putting auditors in charge of the R&D department. That is the wrong way to design things that work. Only by creating organizations that learn how to learn, as so-called lean manufacturing has done for industry, can we accelerate progress.

That’s Harvard’s Ricardo Hausmann writing in Project Syndicate.
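
(An aside on the “rugged fitness landscape” line, for readers who haven’t run into Kauffman’s term: below is a small toy sketch, mine rather than Hausmann’s, of an NK-style landscape in Python. The choice of 12 binary design choices, the K values, and the helper names make_nk_fitness and count_local_optima are all illustrative assumptions, not anything from the article.)

```python
# Toy NK-model sketch (illustrative assumptions only; see note above).
import itertools
import random


def make_nk_fitness(n, k, seed=0):
    """Return a fitness function over n binary design choices, where each
    choice's payoff depends on itself plus k other choices."""
    rng = random.Random(seed)
    # For each component i, pick the k other components it interacts with.
    neighbors = [rng.sample([j for j in range(n) if j != i], k) for i in range(n)]
    # Random payoff table: one value per (component, local configuration).
    tables = [
        {cfg: rng.random() for cfg in itertools.product((0, 1), repeat=k + 1)}
        for _ in range(n)
    ]

    def fitness(design):
        total = 0.0
        for i in range(n):
            cfg = (design[i],) + tuple(design[j] for j in neighbors[i])
            total += tables[i][cfg]
        return total / n

    return fitness


def count_local_optima(n, k, seed=0):
    """Count designs that no single-component change can improve."""
    fitness = make_nk_fitness(n, k, seed)
    optima = 0
    for design in itertools.product((0, 1), repeat=n):
        f = fitness(design)
        has_better_neighbor = any(
            fitness(design[:i] + (1 - design[i],) + design[i + 1:]) > f
            for i in range(n)
        )
        if not has_better_neighbor:
            optima += 1
    return optima


if __name__ == "__main__":
    n = 12  # twelve binary design choices -> 4,096 possible programs
    for k in (0, 2, 6):
        print(f"K={k}: {count_local_optima(n, k)} local optima out of {2 ** n} designs")
```

When K = 0 the design choices don’t interact and the toy landscape has a single peak, so changing one thing at a time climbs straight to the best design. As K grows, local optima multiply, and a program that beats all of its one-change-away variants can still be far from the best combination. That is the intuition behind Hausmann’s point about complex combinations of design choices.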

I had the following reactions:

  • Absolutely, organizations should be innovating through rigorous trial and error. And the case needs to be made, since many organizations don’t know how to do this.
  • But let’s be honest: most governments and NGOs did not have R&D departments that got hijacked by randomized trials. Most organizations I know were not doing much in the way of systematic or rigorous research of any kind. Outside one or two donors and development banks, the usual research result was a mediocre consulting report rigged to look good.
  • In fact, most organizations I know have spent the majority of their budgets on programs with no evidence whatsoever. In the realm of poverty alleviation, for example, it turns out that two of the favorites, vocational training and microfinance, have almost no effect on poverty.
  • This goes to show that, without a market test, some kind of auditing or other mechanism is probably needed, especially for the money-wasting behemoths of programs that are still so common.
  • Sometimes the answer will be large-scale randomized trials. The way I see it, trial-and-error-based innovation and clinical trials are complements, not substitutes. Most of the successful studies I’ve run have followed a period of relatively informal trial and error.
  • There are a few radicals in academia and aid who say everything should have a randomized trial, but I think the smart ones don’t really mean it, and the others I don’t take seriously. They are also the exception. If you look at the research agenda of most of the so-called randomistas, experiments are only a fraction of their work.
  • In political science, the generation before me fought (and still fights) the methodological war. My generation mostly gets on with doing both qualitative and quantitative research more harmoniously. I feel the same way about the randomista debate. People like me do a little observational work, a little forecasting, a little qualitative work, some randomized trials, and I’m even starting to do some trial-and-error style work with police in Latin America. I don’t think I’m the exception.
  • If anything, the surge of randomized trials has paved the way for rigorous trial and error. I’ve seen this at my wife’s organization, the International Rescue Committee. Eight years of randomized trials showed their organization and their donors that some of their biggest investments were not making a difference in the lives of poor people. This has built a case for going back to the drawing board on community development or violence prevention, and now they are starting an R&D lab that looks very similar to Hausmann’s vision. They can do this because expanding a research department to manage randomized trials brought in the people, skills, and evidence base to make a case for innovation.
  • There are some structural problems in academic research that make this hard. Organizations like Innovations for Poverty Action and the Poverty Action Lab have drawn bright red lines around randomized trials, and most of the time don’t facilitate other kinds of research. But I can see adaptive and rigorous innovation fitting in.
  • (Updated) Some people have said “oh but there are too many randomized trials and too much emphasis.” This is the nature of new research technologies. People overdo them at first, since the opportunities are so large. Not so long ago everyone ran cross-country regressions, or wrote a little theoretical game. These are still useful, but they’ve receded as new methods appear. So, this too will pass. Randomized trials will join the pantheon of mediocre methods at our disposal. (The saddest part is that, to the aid industry, and to much of social science, a randomized trial is “new”. Scientists are aghast at this.)

My view: we can push rigorous trial and error up without pushing other approaches to learning down.

106 Responses

  1. This response seems to miss, or perhaps obscure, the point. In my understanding, Hausmann is suggesting that development organizations take a Toyota-style approach to innovation, in which front-line workers have authority to adapt, make suggestions, and eventually change the way the organization works. In this case, the power to innovate lies on the front lines, among implementers.

    In contrast, Blattman seems to start from the premise that high-level managers, or academics, are the ones authorized to have ideas, and that these ideas are then transmitted to the fieldworkers who implement them. Thanks to rigorous testing, the best ideas can be disseminated. Power is centralized, and held by the proper authorities.

    So the debate is not about methods or rigor, it is about authority to innovate and power to decide.

  2. I’ve heard variants of this prediction, “Randomized trials will join the pantheon of mediocre methods at our disposal,” from lots of smart people. I want A) a metric by which it could be judged to be true or false in 10 years, and B) to bet against it.