With a global email field experiment using 1,419 microfinance institutions (MFIs) as subjects, we test the effects of scientific findings about microfinance on organizations’ willingness to learn more about MFI effectiveness and to pursue an offered partnership to conduct randomized evaluations of their programs.
In the positive treatment, subjects were randomly assigned to receive a summary of a study by prominent authors finding that microcredit is effective. The negative treatment provided information on research – by the same authors, using a very similar design – reporting the ineffectiveness of microcredit. We compare both conditions to a control in which no studies were cited.
The positive treatment elicited twice as many responses as the negative treatment – and significantly more acceptances of our invitation to consider partnering on an evaluation of their program – suggesting significant confirmation bias among microfinance institutions. The randomization revolution therefore faces real challenges in overcoming development organizations’ apparent aversion to learning.
A new paper by Brigham, Findley, Matthias, Petrey and Nielson.
To get a general sense of magnitude: take-up of positive messages is about 10%, versus 5% for negative ones. (Both figures are actually higher than I would have expected.)
I wouldn’t have singled out randomized trials. One thing I have noticed in many NGOs: bad news gets buried. The number of consultant evaluations that get toned down or shelved when the news is bad is atrocious. Rigorous trials have one big advantage: they are big, expensive, and external enough that they are hard to bury. Does this mean that the evidence from trials is the hardest to bury? I would think so.
24 Responses
For the past two decades, I led a microfinance support organization—Freedom from Hunger—which has a deserved reputation for commitment to rigorous research on the impacts of microfinance as practiced by partner MFIs (including RCTs in successful collaboration with academic researchers). The research was always conducted as part of a global learning agenda that often resulted in the kind of “updating” sought by the authors of this randomized trial. I cite these credentials to underscore my credibility in commenting on the authors’ interpretation of their research results.
I have no problem with the methodology or the internal validity of the results. I do question the value of the effort, which seems to be an elaborate demonstration of the obvious, even of the authors’ own “confirmation bias.” But my real problem is with the conclusions drawn, which are almost comical in their naiveté about the realities of the researcher–practitioner tension, especially in the world of microfinance. I would not have responded positively to the email, and not because of the presumed aversion to learning.
Consider the context when these emails were sent in late 2011 (Odell was quite right about the contextualization deficit in RCT reports): the microfinance community was keenly aware of the negative press generated by the now-famous trio of RCTs that were reported with much fanfare in 2009–10. Much as the authors of those reports denied having concluded that microfinance doesn’t work, their protests seemed disingenuous. One of the papers was titled “The Miracle of Microfinance?” That looks like a straw man set up for a determined skeptic to knock down – a PR blunder that obscured the real study results.

Moreover, the ensuing enthusiasm for randomized trials downplayed the costs – not only to the researchers, but also to the hosting practitioners. The practitioners and the study subjects themselves must endure operational disruptions that are not trivial. Practitioners committed to learning (not as many as we would prefer, but we knew that already, didn’t we?) have questions that are not the same as those the researchers are interested in, and the researchers are in control. Even for questions of interest to the practitioner, the RCT-generated answers are seldom useful, because the RCT approach is not well suited to revealing why things are the way they are.

A practitioner who has committed millions of dollars to building a microfinance institution has a right to be exasperated with a researcher who tells them that the null hypothesis is not refuted, with the clear implication that their work is worthless. What are they supposed to do with this information? How are they supposed to “learn” from such results?

Finally, consider the impact of all these negative yet mostly accurate perceptions of the academic researchers doing RCTs with MFIs. By late 2011, the brand characteristic of the randomistas that had become most salient for MFI leaders was “arrogant presumption.”
With this context, it is no surprise that the positive responses to the email invitation were relatively few. It is reasonable to assume that the average reaction of those who bothered to open the email (opening rates are low for any email from a stranger) varied from suspicion to “I can’t be bothered.” Such negative responses are even more likely when the message is essentially negative about microfinance. Think about it. Some academic researchers are offering to help you learn about the effectiveness of your program. You are interested because you have lots of questions and you seek answers to improve your program performance, both for your clients and for your MFI. Then you consider how likely it is that these unknown academics will really help you learn. You think back on what you’ve heard about similar research and researchers and their public reports. You think about the tepid message in the email, especially if you got the one that is explicitly negative about microfinance effectiveness. Are they likely to be helpful enough to justify the cost and aggravation of having these folks from New York City (think of the brand issues there!) camped in your operational midst for months?
Come on! What has this got to do with “confirmation bias” or “aversion to learning?” Perhaps more researchers should put more energy into learning how to work with practitioners rather than belittling them with this kind of arrogant presumption.
So NGOs resemble other organizations that like to ignore data that doesn’t fit with their plans or paradigms (World Bank/IMF: “SAPs are great… damn the results!”; European governments: “Austerity is the way… damn the results!”; Charlie Brown and Lucy: “This time I’ll kick the football… aaaarggh!”). Glad to hear NGO folks are just as human as the rest of us.
I’m not that surprised by these findings, as the incentives are stacked against organizations being open about negative findings, especially in the context of the donor environment for both government and individual donors. As OECD-Evalnet states, donors can do much more to incentivize a culture of learning. See here for something I wrote a while ago (as an insider) on “Creating a demand for knowledge,” which looks at some of the internal reasons why bad news is not always welcome: http://kmonadollaraday.wordpress.com/2011/03/21/do-we-need-to-create-a-greater-demand-for-knowledge/
Donors also play a role in creating incentives for NGOs and other implementing organizations. Donors could do more to support a culture of learning (that doesn’t readily bury negative findings) by rewarding high quality evaluation evidence and functional results monitoring systems. They can (and do?) reward organizations that have systems capable of producing credible evidence about effectiveness and impact, “regardless” of the results reported – as long as suitable follow-up actions are taken to correct course.
RT @cblatts: Do development organizations like to hear contrary findings? http://t.co/M2DLoLNDe4
Every time I read questioning of how specific organizations should react to information, presuming they should do so differently from anyone else, I have to stop and ponder. In this case I have to ask: what makes the authors of the study believe that heads of MFIs would react to information differently than heads of corporations, heads of governments or, dare I say it, heads of families? The idea that good news is divulged much more than bad news should not be a surprise; what surprises me is that researchers keep testing MFIs and NGOs against what I would call the “angelic yardstick.”

MFI and NGO leaders are, one would expect, driven and moderately to highly voluntarist people, and most of them (I would venture, a fantastically high majority of them) see successes, be they small or big, happening on a regular basis. Otherwise they would stop working altogether and move to a different activity. I know that the flip side of the “angelic yardstick” is that NGO and MFI management, not seeing any success, turns a blind eye, preferring their own job security and income over the effectiveness of what they do. The truth is that, in general, good management is aware of its weaknesses (and seeks to hide them while, if it can, trying to overcome them) and of its strengths (boasting about them as much as possible). So I would agree that management, whether because of a voluntarist bias towards seeing more of the beautiful than the ugly in their work or because it is strategically advisable to do so, will always under-divulge negative reports. They are less likely to believe them and less willing to make them public.

Having worked in many fields – corporations, an NGO, and academia – in all of them actively participating in management reflections or benefiting from a fair amount of internal transparency, I find the management of information to be pretty similar in all of these. To expect MFIs to act according to academic balance… well, I find that to be expecting them to act according to what better suits researchers, not what is more likely to suit them. This does not preclude, however, that MFI management should pay attention to and keep track of every evaluation in its field of operations, and adjust accordingly. It may, however, not be the case that the best course of action is to divulge that what they do, actually, doesn’t work. It’s almost like asking a researcher to openly acknowledge that her/his research actually doesn’t change a thing. They truly do not/cannot believe it to be true.
@BP – broadly agreed that you’d likely see the same confirmation bias across both types of study. I think it’s a strange choice too, and am maybe less comfortable with the sexing it up…
I’m not quite convinced about the “negative” treatment used in this study. The versions of the e-mail used in the positive and negative treatments both refer to results from real papers. But the positive version (which begins with the statement “Academic research suggests that microfinance is effective”) sounds vaguely like it could be a summary of the modestly positive effects found in randomized evaluations of microcredit programs. If we assume that at least some of the MFI executives who received these e-mails are aware of those recent evaluation results, then they may have seen the negative version (which opens with “Academic research suggests that microfinance is ineffective”) as an unjustified generalization. That could have affected the respondents’ confidence in the authors of the e-mail, and so potentially have influenced their decision on whether or not to respond.
@ Brett: I doubt very highly that MFIs take a stand on social science methodology and care about RCT versus non-RCT. So you’re right that take-up of positive would be higher in either case – we wouldn’t learn much from the additional division of the sample into RCT/non-RCT. I think Chris’ suggestion was just that the implication is not bad for the randomization revolution so much as it is bad for MFIs and social science in general (i.e. development orgs are unwilling to learn from evidence). Highlighting RCTs is a strange choice, but I assume they did this to sex up the paper a little, which I completely understand.
Cool study on NGOs’ aversion to learning RT @cblatts: Do development organizations like to hear contrary findings? http://t.co/YAohqQlSdh
Right, why not do a factorial design where you’re testing positive vs. negative and RCT vs. non-RCT results? I imagine take-up of positive would be higher than negative in both cases…
@cblatts Maybe this is what divides a stoic/ideological org vs. a flexible/pragmatic org. The instrument of the org shouldn’t define its aim
@cblatts I’m assuming this is rhetorical?