What’s wrong with journal publishing and peer review?

What happens to the comments of peer reviewers of rejected papers? Practically nobody hears about them. Peer reviewers are unpaid consultants: they receive no credit for their reviews, their time is wasted, and their comments are discarded, while the papers they showed to be flawed eventually get published, get cited, and shape the scientific literature.

Three public health professors chronicle the many, many unpleasant facts in academic publishing. Their brutal gaze falls on medicine most of all, but my brief exposure suggests the social sciences are not so dissimilar.

Our old friends the BMJ (see Monday’s post) come out badly. Not as bad, however, as our peaceful, fish-eating neighbors to the north:

An analysis of papers from Norway (a country with overall high-quality research) showed that 36% of the citations received within a 3-year window are self-citations.

Via Marginal Revolution. Worth reading in full.

9 thoughts on “What’s wrong with journal publishing and peer review?”

  1. The BMJ’s ability to pick up errors in its peer review process is fairly abysmal, as the BMJ’s own editor and colleagues convincingly demonstrated in a research project on the journal’s own peer review practice.

    The article’s key finding: BMJ reviewers detected fewer than one third of the ‘major methodological errors’ deliberately inserted into papers sent out for review in the trial.

    That is not so good. What is worse is that it is extraordinarily difficult to get rebuttals published in the journal, even when you can demonstrate ‘major methodological errors’.

    See: http://jrsm.rsmjournals.com/cgi/reprint/101/10/507

    Andrew Mack

    PS. This is the abstract of the article:

    Objective: to analyse data from a trial and report the frequencies with which major and minor errors are detected at a general medical journal, the types of errors missed, and the impact of training on error detection.

    607 peer reviewers at the BMJ were randomized to two intervention groups receiving different types of training (face-to-face training or a self-taught package) and a control group. Each reviewer was sent the same three test papers over the study period, each of which had nine major and five minor methodological errors inserted.

    The number of major errors detected varied over the three papers. The interventions had small effects. At baseline (Paper 1), reviewers found an average of 2.58 of the nine major errors, with no notable difference between the groups. The mean number of errors reported was similar for the second and third papers: 2.71 and 3.0, respectively. Biased randomization was the error detected most frequently in all three papers; over 60% of the reviewers who rejected the papers identified this error. Reviewers who did not reject the papers found fewer errors, and the proportion finding biased randomization was less than 40% for each paper.

    Editors should not assume that reviewers will detect most major errors, particularly those concerned with the context of the study. Short training packages have only a slight impact on improving error detection.

  2. I say this all the time, but why can’t disciplines “shake things up” by setting up some sort of practitioner-nominated best paper (and best paper within a field) award every year, with online voting? Screw Nobel and AER/QJE honchos. Let economists elect and vote on what they think is the best paper they’ve read all year. Each PhD holder gets one vote, and nobody’s vote matters more than another’s. It’s like the SAG Awards instead of the Oscars. We could even have subcategories like “best written by a non-tenured prof”, “best written, but maybe not best technique”, and “most clever idea, though maybe not well written”. And find a way to prevent people from voting for themselves 100 times… I know SSRN is sorta the closest thing, but there can be multiple downloads from different computers, assigning the paper to students to read, etc.
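    The one-vote-per-PhD-holder rule the commenter proposes boils down to deduplicating ballots by a verified voter identity before tallying. A minimal sketch of that tallying step (the function name, the use of ORCID-style IDs, and the sample ballots are all hypothetical illustrations, not anything from the comment):

    ```python
    from collections import Counter

    def tally_votes(ballots):
        """Count best-paper votes, keeping only the first ballot per voter.

        `ballots` is a list of (voter_id, paper_id) pairs. In practice
        voter_id would be a verified credential (say, an ORCID checked
        against a PhD registry) -- that verification, not this tally,
        is what stops one person from voting 100 times.
        """
        seen = set()
        counts = Counter()
        for voter_id, paper_id in ballots:
            if voter_id in seen:
                continue  # duplicate ballot from the same voter: ignored
            seen.add(voter_id)
            counts[paper_id] += 1
        return counts

    # Hypothetical ballots: one voter tries to vote twice.
    ballots = [
        ("0000-0001", "paper-A"),
        ("0000-0002", "paper-B"),
        ("0000-0001", "paper-A"),  # second ballot from 0000-0001, discarded
        ("0000-0003", "paper-A"),
    ]
    print(tally_votes(ballots).most_common(1))  # [('paper-A', 2)]
    ```

    The hard part, as the commenter notes, is not the counting but the identity check; download counts (SSRN-style) have no such check, which is exactly why they are gameable.
    
    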