How to save referees from awful papers and save authors from awful referees

Brendan Nyhan has an idea for how to improve the review process where causal inference is involved:

Why not try to shift the focus of reviews in a more valuable direction? I propose that journals try to nudge reviewers to focus on areas where they can most effectively improve the scientific quality of the manuscript under consideration using checklists, which are being adopted in medicine after widespread use in aviation and other fields.

Let’s see if you can guess my favorite checklist.

Here are some items from one he suggests:

  • Does the author provide their questionnaire and any other materials necessary to replicate the study in an appendix?
  • Does the author use causal language to describe a correlational finding?
  • Does the author specify the assumptions necessary to interpret their findings as causal?

And here are some items from a second:

  • Did you request that a control variable be included in a statistical model without specifying how it would confound the author’s proposed causal inference?
  • Did you request any sample restrictions or control variables that would induce post-treatment bias?
  • Did you request a citation to or discussion of an article without explaining why it is essential to the author’s argument?
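
That second item names the most technical pitfall on either list, so here is a minimal simulation of my own (not from Nyhan's checklist or this post) of why a referee's request to "control for" a variable measured after treatment can bias an otherwise clean experimental estimate. The variable names and effect sizes are invented for illustration.

```python
# Illustrative sketch of post-treatment bias: conditioning on a variable that
# is itself affected by treatment distorts the estimated treatment effect.
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

treat = rng.binomial(1, 0.5, n).astype(float)   # randomized treatment
u = rng.normal(size=n)                          # unobserved factor that also drives the outcome
post = treat + u + rng.normal(size=n)           # variable measured *after* treatment
y = 2.0 * treat + u + rng.normal(size=n)        # true causal effect of treat on y is 2.0

def treat_coefficient(extra_controls):
    """OLS of y on an intercept, treat, and any extra controls; returns the treat coefficient."""
    X = np.column_stack([np.ones(n), treat] + extra_controls)
    return np.linalg.lstsq(X, y, rcond=None)[0][1]

print("No controls:             %.2f" % treat_coefficient([]))      # ~2.0, unbiased
print("With post-treatment var: %.2f" % treat_coefficient([post]))  # ~1.5, biased downward
```

Adding the post-treatment variable pulls the estimate well below the true effect because conditioning on it links the randomized treatment to the unobserved factor u; the same logic applies to sample restrictions defined by post-treatment outcomes.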

It seems to me that articles are so heterogeneous that it would be hard to come up with a checklist that works for most papers without being cumbersome for the referee. But it could be worth a try. Even if limited to quantitative causal inference papers, it would be a step forward.

The first checklist could simply be a manuscript submission guide, or a checklist for authors to complete before they submit; authors have stronger incentives than referees to answer it. Anyways, I applaud experimentation along these lines.

For more, here are a few older links:

24 Responses

  1. Conversely, for those authors obsessed with ‘causal inference’, here is an important question … Once you have identified what you take to be the relevant causal effect, did you simply stop and declare victory? Or did you offer a theoretical argument, complete with reasonably specified mechanisms, for how X causes Y (instead of just estimating the difference between Y and Y’)? I’d be really, really happy if journals stopped publishing self-satisfied but more or less wholly unpersuasive papers that fail to even gesture toward undertaking the latter tasks. I regularly recommend reject when reading such manuscripts.
