Chris Blattman

Science fails us again (or, how consistent is the peer review process?)

At least in psychology, not very.

we selected 12 already published research articles by investigators from prestigious and highly productive American psychology departments, one article from each of 12 highly regarded and widely read American psychology journals with high rejection rates (80%) and nonblind refereeing practices.

With fictitious names and institutions substituted for the original ones (e.g., Tri-Valley Center for Human Potential), the altered manuscripts were formally resubmitted to the journals that had originally refereed and published them 18 to 32 months earlier. Of the sample of 38 editors and reviewers, only three (8%) detected the resubmissions. This result allowed nine of the 12 articles to continue through the review process to receive an actual evaluation: eight of the nine were rejected. Sixteen of the 18 referees (89%) recommended against publication and the editors concurred. The grounds for rejection were in many cases described as “serious methodological flaws.”

The full article is in Behavioral and Brain Sciences. I don’t see an ungated copy, but please put one in the comments if you do.

[Edit: In my Monday haste, I neglected to notice this is a 1982 study.]

I’m not surprised this happens. I am very surprised at the magnitudes. We all ought to keep this in mind when we bow down to the great god of science.
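For a rough sense of the magnitudes, here is a back-of-envelope sketch in Python. It assumes, purely for illustration, that each of the nine resubmissions that reached full review faced the quoted 80% base rejection rate independently; the study itself makes no such claim. Even under the base rate alone, eight rejections out of nine turns out to be unremarkable. What the base rate cannot explain is that these were the very articles the same journals had accepted 18 to 32 months earlier.

```python
# Back-of-envelope check: if each of the 9 resubmissions independently
# faced the journals' quoted 80% base rejection rate (an illustrative
# assumption, not a claim from the study), how surprising is 8 of 9?
from math import comb

p_reject = 0.80  # quoted base rejection rate of the journals
n = 9            # resubmissions that received a full evaluation
k = 8            # resubmissions actually rejected

# P(at least 8 of 9 rejections) under the base rate alone
p_at_least_k = sum(comb(n, i) * p_reject**i * (1 - p_reject)**(n - i)
                   for i in range(k, n + 1))
print(f"P(>= {k} of {n} rejected | p = {p_reject}) = {p_at_least_k:.3f}")
# -> 0.436: the rejection count alone is consistent with the base rate;
#    the anomaly is the reversal on previously accepted articles.
```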

I would love to see this study replicated in economics, political science, or the life sciences, if only because I have a low opinion of the psychology review process (having gone through it myself and read a lot of the journals).

It actually wouldn’t surprise me if the median psych paper is methodologically flawed, if only because they love to publish studies (like this one) with sample sizes of 18.
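A quick illustration of how little a sample of 18 pins down: a Wilson 95% confidence interval for the reported 16-of-18 (89%) referee rejection rate, sketched below (the normal-approximation quantile z = 1.96 is my choice, not anything from the study), runs from roughly 67% to 97%.

```python
# Wilson 95% confidence interval for the study's 16-of-18 (89%) referee
# rejection rate, to show how imprecise an estimate from n = 18 is.
from math import sqrt

k, n, z = 16, 18, 1.96  # rejections, referees, 95% normal quantile
p = k / n

center = (p + z**2 / (2 * n)) / (1 + z**2 / n)
half = (z / (1 + z**2 / n)) * sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
print(f"point estimate {p:.0%}, 95% CI ({center - half:.0%}, {center + half:.0%})")
# -> point estimate 89%, 95% CI (67%, 97%)
```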

23 Responses

  1. Am I wrong, or does this imply that the initial reviewers knew who had submitted the paper (and their institution)? That would explain why they gave a very lenient review to a supposedly prestigious paper, whereas on seeing an unknown name they became very scrupulous and stringent.

  2. How do you figure this is a failure of “science”? In any case, there was a cool essay somewhere (I can’t find it now) about a college student who plagiarized Pulitzer-esque writing for a term paper and received a bad grade. And who submits a paper makes a huge difference.

    It’s not the same, and not academic, but it probably plays on the same instinct. Apparently some econ journals use the “reputation of the author” as a heuristic.

  3. This isn’t evidence against science; this is evidence that psychology is largely unscientific. Honestly, I spent much of my time as a graduate student trying to point out the flaws in the APA style: many of the studies we were given were largely anecdotal, with no real checks on biases, just a lot of verbose bluster, inadequate literature reviews, and bogus statistical exercises meant to justify studies that consisted of “observing 100 subjects” in order to confirm predetermined conclusion X. But that’s not the same as science, and you really can’t use one to debunk the other. This is basically a straw man against science. What’s next? Acupuncture studies suck, so everyone is free to keep denying well-established science?
