Development economics: shaped by the data not the question?

By restricting themselves to the econometric analysis of survey data, development economists are boxed into a Cartesian trap: the questions they ask are constrained by the limitations inherent in the process by which quantitative data from closed-ended questions in surveys are collected.

A related criticism (to which we return later) is that many kinds of econometric analysis fail to examine what actually happens during the process of project implementation (the “black box”) and consequently are unable to determine the extent to which failure to achieve intended impacts is due to “design failure” or to “implementation failure”.

In other words, their research questions are being shaped by their data instead of their data by the questions.

That is Bamberger, Rao and Woolcock on why we should be using qualitative methods inside quantitative studies, especially randomized control trials.

If you’ve ever read a quantitative paper by a qualitative researcher, you were probably horrified by the technique and scornful of the ignorance. Now put on the other hat.

For most of us survey researchers, rigorous qualitative research means asking the taxi driver about local politics. Miscellaneous interviews and field notes on the program are improvements on the process, but in rigor they’re the ethnographic equivalent of non-random sampling, gross measurement error, and endogenous inference.

I’d also argue we need to go further than simply documenting the process of the program, or the particular context. If we want to understand why an entrepreneur succeeds, an AIDS information message sinks in, or a commitment device works, we have a lot to learn from close, regular, rigorous observation. This usually means visiting a respondent many times, building trust, and spending long periods in observation rather than simply asking questions.

The trouble is, short of marrying a qualitative methods specialist (a tactic some of us have been known to employ), what’s a development economist to do?

The long answer may be a paper Jeannie and I write in the future. For six of our randomized trials we have full-time local qualitative teams.

The short answer, however, is get thee a co-author. And a local team of researchers with a talent for ethnographic work. The transcripts and analysis they produce aren’t a perfect substitute for my own time in the field, but they are far more insightful than I ever expected.

In the meantime, here’s a qualitative research syllabus from my Yale colleague, Libby Wood. Books Jeannie buys our qual teams in the field: see here and here, and (more boring) here.

Other reader suggestions?

8 Responses

  1. Very interesting discussion. I’m excited that some contemporary economists are discovering qualitative methods, even if as spousal benefits! I’m a biochemist who stumbled into global health, combining epidemiology and medical anthropology for my doctoral work almost 15 years ago, studying women’s gynecological morbidity in India. I’ve written about the use of qual methods as stand-alone methods, but more importantly in combination with quantitative methods. See this book (2003, so a bit dated, but it tells you that many of us have been combining qual and quant methods for a long time now!) in which I have a chapter on using qualitative methods in gynecological morbidity research:
    Investigating Reproductive Tract Infections and Other Gynaecological Disorders: A Multidisciplinary Research Approach
    http://www.amazon.com/Investigating-Reproductive-Infections-Gynaecological-Disorders/dp/0521818125/ref=sr_1_1?s=books&ie=UTF8&qid=1281238237&sr=1-1#reader_0521818125
    (Sorry, I couldn’t find a shorter link!)
    A couple of things I’ve learned along my journey as a global health researcher:
    1) Qualitative methods are not just important: they are necessary to generate and confirm testable hypotheses, develop precise and accurate quantitative tools, and explain quantitative results. A multi-disciplinary approach is crucial for inquiry.
    2) Qualitative methods reveal so much about the WHY of the results we count and quantify in quantitative work.
    3) Qualitative methods are used poorly by many quantitative researchers, and get a bad rap for this reason. For example, many throw in a few focus groups to get qualitative data and think that the job is done. I’ve seen so much of this in global health research. Part of it is because quantitative researchers think that they can just DO qual methods, since it seems easy to interview a few people and conduct a few focus groups (wrong!); because funders of studies require it (but don’t really track how these methods are incorporated for high-quality results); and because it is cool to include quotes from informants (often as a string of quotes with no analysis of what these represent). Unfortunately, many researchers not only collect poor-quality data, but also lack the training to systematically analyse qualitative data. You are very fortunate to have Jeannie on your research team for RCTs. I think we will learn a lot more about the intervention’s impact than from an RCT alone. I look forward to reading the paper that both of you write!

  2. As I never get tired of saying, this is simply a movement back towards what economics used to be. Keynes is rigorous; Marx is rigorous; Smith is rigorous: all drew on extensive qualitative evidence on how the economy works.

    How to get qualitative insights? Study a bit more broadly than just economics and statistics. Any anthropologist or historian is compelled to read widely in other disciplines and will thus learn something of their methods. It never failed to shock me how little my fellow economists in my degree and master’s courses had ever read outside of economics.

  3. Creswell, boring? You go too far, sir!

    Seriously, it’s increasingly obvious that mixed methods are the best way to do this work. And I’m increasingly perplexed as to why our forebears spent so much time and effort fighting between quant and qual approaches when clearly both have something to offer. Our questions are ultimately “does it work and why?” But there are so many different ways for a program or policy to be “successful.” It’s worth figuring out ways to do the things you’ve recommended to get a more complete answer.

  4. Chris, you might want to check out this paper by Betsy Levy Paluck on the topic of qualitative work and field experiments.

  5. In short: YES.

    So how do we get better qualitative insights? I think if you approach this problem at the data-aggregator level, you’re already screwed. We need higher standards for primary data, and that starts with junking anything that reduces complex, chaotic outcomes to a good-to-bad ranking. If the data doesn’t explain what the actual problem is, how do you know what needs to be fixed?

    The good news is, this is not as difficult as it sounds. In my sector, dealing with institutions, the Global Integrity Report has been doing this for years, and TI’s National Integrity Systems did it before that. Find the people who know what’s going on, and have them offer a short explanation for tightly defined, repeatable questions. You can still quantify it, but the numbers become a summary and an entry point to the narrative, never the whole story. The political economy analysis in DFID’s Drivers of Change series presents another useful approach.

    The need for thick data is a major theme of our guide to governance metrics:
    http://commons.globalintegrity.org/2008/09/users-guide-to-measuring-corruption.html

    Chris — I enjoy your blog, as always. Thanks!

    Jonathan at Global Integrity

  6. Chris,

    I am a quantitative guy so my perspective may be off, but my impression is that qualitative evidence plays a big role in formulating hypotheses and developing explanations in randomized evaluations. A few months ago I saw Roland Fryer present his work on monetary incentives in schools. His idea — let’s pay students to get them to achieve, and let’s see what kind of payment works best — is reasonable enough, and the ambition of the project is extremely exciting. Many of his results suggested that paying kids (middle and high school students) for performance on tests or grades did not lead to higher test scores or grades; paying second graders to read, however, did improve test scores (for non-Hispanics). The most interesting part of the talk came when he speculated on why subsidizing inputs turned out to be more effective than subsidizing outcomes: his qualitative team, which interviewed lots of students, reported that the middle and high school students just didn’t know how to improve their test scores. When asked, “what would you do differently,” they responded “read the question,” and mentioned other test-taking strategies, but never suggested study techniques. The paper is here: http://www.economics.harvard.edu/faculty/fryer/files/Incentives_ALL_7-8-10.pdf; the qualitative work plays a big part in the discussion.

    More generally, many of the practitioners I’ve talked to tell stories like this. “We did some experiment, we expected X, but got Y. When we asked subjects about it, they said…” and often what they say forms the basis for the next experiment. As you say, that’s not the best form of qualitative study, of course, but it does indicate an openness towards the methods.

    1. I agree 100%.

      What is striking is how much emphasis we put on causal identification and other statistical issues, and then how imprecise and casual we are about the process, the reasons, or the mechanism. The contrast is glaring.

      It’s also problematic because, often unconsciously, economists take the (bad) qualitative insight and interpretation very seriously. We pretend to be married to rigor, but in fact we enjoy and use and repeat a good story as much as the quantitative results, even when one is rigorous and the other is arbitrary. Then we refuse to admit it influences our thinking.

      That’s an unfair caricature, but not that unfair.
