Chris Blattman


Whatever doesn’t kill your paper makes it stronger

A new study examines the submission and citation history of 80,000 papers published in 923 biological science journals.

Roughly three-quarters of all articles were initially targeted to the journal that would eventually publish them, indicating that authors were generally efficient at targeting their research and limiting the risk of rejection. Surprisingly, however, articles that were rejected by one journal and resubmitted to another were significantly more cited than “first-intent” articles published the same year in the same journal.

“We think the most likely explanation is that inputs from editors and peer reviewers, and the greater amount of time spent working on resubmissions, makes papers better and improves the citation impact of the final product.”

If the number of resubmissions is a guide to quality, my papers are better than I think.

Source. Hat tip.

13 Responses

  1. I don’t have access to the original article, but isn’t there a much simpler explanation? Imagine papers of quality level >2 are accepted at top journals, and >1 at second-tier journals. Authors only know their paper’s quality within a range of width 1. All papers have quality uniformly drawn from [0,3], and you submit to the best journal you have any chance of being published in, so you try the top journal whenever your range includes a value above 2, i.e., whenever true quality exceeds 1.5. Papers first submitted to the top journal but rejected, then published second-tier, have quality uniform on [1.5,2], while first-intent second-tier papers that get accepted have quality uniform on [1,1.5]. That is, the papers first submitted to good journals are better than those first submitted to worse journals, even conditional on first rejection.
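A minimal simulation makes the commenter’s selection story concrete. The thresholds (top accepts quality above 2, second tier above 1), the width-1 uncertainty, and the submission rule are all assumptions taken from the comment, not from the underlying study; the point is only that resubmitted papers landing in the second-tier journal average higher quality than first-intent ones, with no editing effect at all.

```python
import random

random.seed(0)

TOP, SECOND = 2.0, 1.0  # assumed acceptance thresholds from the toy model
N = 100_000

first_intent, resubmitted = [], []
for _ in range(N):
    q = random.uniform(0, 3)
    # The author observes q only within a width-1 interval, so they try the
    # top journal whenever the paper might clear its bar (q + 0.5 > TOP).
    aims_top = q + 0.5 > TOP
    if aims_top and SECOND < q <= TOP:
        resubmitted.append(q)    # rejected at top, published second-tier
    elif not aims_top and q > SECOND:
        first_intent.append(q)   # first-intent second-tier acceptance

mean = lambda xs: sum(xs) / len(xs)
print(f"first-intent mean quality: {mean(first_intent):.2f}")
print(f"resubmitted mean quality:  {mean(resubmitted):.2f}")
```

The resubmitted papers cluster on (1.5, 2] and the first-intent ones on (1, 1.5], so the citation gap emerges purely from selection on unobserved quality.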
