By restricting themselves to the econometric analysis of survey data, development economists are boxed into a Cartesian trap: the questions they ask are constrained by the limitations inherent in the process by which quantitative data from closed-ended survey questions are collected.
A related criticism (to which we return later) is that many kinds of econometric analysis fail to examine what actually happens during the process of project implementation (the “black box”) and consequently are unable to determine the extent to which failure to achieve intended impacts is due to “design failure” or to “implementation failure”.
In other words, their research questions are being shaped by their data instead of their data by the questions.
That is Bamberger, Rao and Woolcock on why we should be using qualitative methods inside quantitative studies, especially randomized control trials.
If you’ve ever read a quantitative paper by a qualitative researcher, you were probably horrified by the technique and scornful of the ignorance. Now put on the other hat.
For most of us survey researchers, rigorous qualitative research means asking the taxi driver about local politics. Miscellaneous interviews and field notes on the program are improvements on the process, but in rigor they’re the ethnographic equivalent of non-random sampling, gross measurement error, and endogenous inference.
I’d also argue we need to go further than simply documenting the process of the program, or the particular context. If we want to understand why an entrepreneur succeeds, an AIDS information message sinks in, or a commitment device works, we have a lot to learn from close, regular, rigorous observation. This usually means visiting a respondent many times, building trust, and spending long periods in observation rather than simply asking questions.
The trouble is, short of marrying a qualitative methods specialist (a tactic some of us have been known to employ), what’s a development economist to do?
The long answer may be a paper Jeannie and I write in the future. For six of our randomized trials we have full-time local qualitative teams.
The short answer, however, is get thee a co-author. And a local team of researchers with a talent for ethnographic work. The transcripts and analysis they produce aren’t a perfect substitute for my own time in the field, but they are far more insightful than I ever expected.
Other reader suggestions?