Chris Blattman

Academic research: Use with caution

That’s the recommendation from Holden at GiveWell, after a long look at the microlending debate between Mark Pitt and his critics David Roodman and Jonathan Morduch.

Holden gives excellent advice. An excerpt:

Never put too much weight on a single study. If nothing else, the issue of publication bias makes this an important guideline…

Strive to understand the details of a study before counting it as evidence. Many “headline claims” in studies rely on heavy doses of assumption and extrapolation…

If a study’s assumptions, extrapolations and calculations are too complex to be easily understood, this is a strike against the study. Complexity leaves more room for errors and judgment calls, and means it’s less likely that meaningful critiques have had the chance to emerge…

If a study does not disclose the full details of its data and calculations, this is another strike against it – and this phenomenon is more common than one might think…

Context is key. We often see charities or their supporters citing a single study as “proof” of a strong statement (about, for example, the effectiveness of a program).

I could not have said it better. The post is worth reading in full.

Publication and confirmation bias are horrifically rampant. And data are seldom available. I am struggling at the moment to obtain four-year-old data for a replication myself.

Two changes I would consider:

1. Journals should require submission of replication data and code files with final paper submissions, for posting on the journal site. (The Journal of Conflict Resolution is one of the few major political science or economics journals I know that does so faithfully.)

2. PhD field and method courses ought to encourage replication projects as term assignments. (Along with encouragement in diplomacy, something new scholars are slow to learn, to their detriment.)

Other suggestions for the profession?

8 Responses

  1. I really like suggestion 2. The only thing I would add is that such replications should be posted to a journal or database of replications so that outsiders have some sense of whether a given paper has had its analysis replicated or not.

  2. One of the most useful papers I read in grad school was Gary King’s “Publication, Publication”:

    http://gking.harvard.edu/files/abs/paperspub-abs.shtml

    It is a how-to guide on writing a publishable paper, starting with replicating the findings of an existing article.

    I have been trying myself to get access to data of a published paper (from a journal whose author instructions include: “Authors whose manuscripts are accepted for publication are required to make all information in datasets used in the article freely available to researchers, on request.”), with absolute silence from the author. I wish there were some incentive to get the author to reply… lack of response makes me even more skeptical of the findings.

  3. Many program evaluations use administrative data that can’t be released for replication even when anonymized because the data-sharing agreements don’t authorize the researchers to release it.

  4. “On the other hand, we’re economists and we need incentives.”

    The problem exists in medical sciences as well, for the same reason—no reward for publishing replications.

  5. I just finished the second year of the Penn State economics PhD program. This last semester we had to do a big empirical methods project. One option was to replicate an existing empirical study. A number of my classmates did this, and almost everyone got the same results as in the original studies.

  6. The last thing a profession like economics, which exists to support existing power structures no matter what, wants to do is Actual Science.

  7. Sunlight is the best disinfectant, so just getting economists to replicate others’ work would go a long way.

    I believe it’s AER policy to ask for code and data, and some researchers, to their great credit, are making all their stuff available on their website (Nathan Nunn is one example).

    On the other hand, we’re economists and we need incentives. I think right now only the Journal of Applied Econometrics is publishing ‘narrow’ replications; giving more outlets (and more credit) for replication would go a long way.

    Also see this article by Hamermesh on replication in economics: https://webspace.utexas.edu/hamermes/www/CJE82007.pdf

  8. Another suggestion I can think of is adding some kind of check for data dredging, to guard against false positives in the results.
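
    The commenter does not say which check they have in mind; a common candidate is a multiple-testing correction applied to the many specifications a paper reports. Below is a minimal sketch, assuming the Benjamini–Hochberg false discovery rate procedure and hypothetical p-values; the function name and example numbers are my own illustration, not anything proposed in the post or comments.

    ```python
    import numpy as np

    def benjamini_hochberg(p_values, alpha=0.05):
        """Mark which hypotheses survive a Benjamini-Hochberg FDR correction."""
        p = np.asarray(p_values, dtype=float)
        m = p.size
        order = np.argsort(p)                          # sort p-values ascending
        thresholds = alpha * np.arange(1, m + 1) / m   # BH threshold for each rank
        passed = p[order] <= thresholds
        # Largest rank k whose p-value clears its threshold; reject the k smallest.
        k = np.max(np.nonzero(passed)[0]) + 1 if passed.any() else 0
        rejected = np.zeros(m, dtype=bool)
        rejected[order[:k]] = True
        return rejected

    # Hypothetical example: ten regressions, several nominally "significant" at 0.05
    p_vals = [0.001, 0.008, 0.039, 0.041, 0.042, 0.06, 0.074, 0.205, 0.212, 0.9]
    print(benjamini_hochberg(p_vals, alpha=0.05))  # only the strongest results survive
    ```

    The point of the sketch is simply that results which look significant one at a time can evaporate once the full set of tested specifications is taken into account.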
