Science fails us again (or, how consistent is the peer review process?)

At least in psychology, not very.

We selected 12 already published research articles by investigators from prestigious and highly productive American psychology departments, one article from each of 12 highly regarded and widely read American psychology journals with high rejection rates (80%) and nonblind refereeing practices.

With fictitious names and institutions substituted for the original ones (e.g., Tri-Valley Center for Human Potential), the altered manuscripts were formally resubmitted to the journals that had originally refereed and published them 18 to 32 months earlier. Of the sample of 38 editors and reviewers, only three (8%) detected the resubmissions. This result allowed nine of the 12 articles to continue through the review process to receive an actual evaluation: eight of the nine were rejected. Sixteen of the 18 referees (89%) recommended against publication and the editors concurred. The grounds for rejection were in many cases described as “serious methodological flaws.”

The full article is in Behavioral and Brain Sciences. I don’t see an ungated copy; please put one in the comments if you do.

[Edit: In my Monday haste, I neglected to notice this is a 1982 study.]

I’m not surprised this happens. I am very surprised at the magnitudes. We all ought to keep this in mind when we bow down to the great god of science.

I would love to see this study replicated in economics, political science, or the life sciences, if only because I already have a low opinion of the psychology review process (having gone through it myself, and having read a lot of the journals).

It actually wouldn’t surprise me if the median psych paper is methodologically flawed, if only because the field loves to publish studies (like this one) with sample sizes of 18.