“Please reject me”

James Fowler makes an interesting pitch on the POLMETH listserv.

I frequently advocate for increasing the fraction of papers that get desk-rejected without review.

This not only reduces the number of referees that editors need, but it also reduces the cycle time from initial writing to publication for papers that would be a better fit at a different journal.

…My colleagues who worry about increasing the fraction of desk-rejects note that one issue is that some authors will perceive the practice as being unfair — we need to give everyone an equal shot at review and we need to ensure that scholarship does not come to be dominated by a small number of players at specific departments. Those are reasonable concerns.

So why don’t we do this instead: give authors the ability to *opt-in* to a higher desk-reject threshold (say, 50% get rejected without review). In other words, an author can ask an editor to reject the paper quickly if he or she does not think it will succeed.

As someone who doesn’t have any name recognition at journals (trust me, most journal editors have never heard of this blog) I love the idea of desk rejects. Nothing is worse for a junior scholar (or any scholar) than to have an article sit in limbo at a top journal for 9 months, only to be rejected with a single letter where the referee did not seem to even read the paper (don’t worry, I won’t name journal names, JPE.) Oops.

Anyways, I purposefully send my papers first to journals that are known to desk reject. This presumably works in their interest, because they get right of first refusal.

I’m less worried than Fowler’s colleagues about bias towards the big shots. Several econ journals have started attaching author names to the paper. I seem to recall a study (anyone know where I can find it?) showing that no-name authors get at least as good a chance as big-name authors. And I don’t see why editors would act any differently than referees.

I think the big losers here would be journal editors, who now face even more work. I’m presuming here that desk rejecting takes more of an editor’s time than sending a paper out for referee reports (and chasing said referees).

I would say desk rejects, without opt-ins. Who could figure out the equilibrium of that game?

Reader thoughts? Might be useful to state your rough position (junior or senior, oft-published or not).

10 thoughts on “Please reject me”

  1. There’s a hilarious study by Peters and Ceci (1982) in which the authors selected twelve articles that had been written by researchers at prestigious institutions and recently published in prestigious peer-reviewed psychology journals with non-blind refereeing practices, and simply re-submitted these articles to the same journals that had published them. The only change they made to the articles was to substitute fictitious authors’ names and institutions for the real ones. Only three of the resubmissions were detected; eight of the remaining nine were reviewed and rejected, usually for ‘serious methodological flaws’. This seems to suggest that there’s some value in blind refereeing after all.

  2. Completely agree, but desk-rejects have to be justified with some care – though I feel that’s still less work than going through the entire process. I got a very quick & thoughtful desk reject from BJPS once, and that certainly increased the chance that I’ll submit there in the future.

  3. I agree, but the best solution to concerns about big-shot bias is to maintain author anonymity for the crucial decision of whether to send out for review or reject. As soon as the editor pushes the button for one or the other (triggering an irreversible process), she sees the authors’ names (so she can select reviewers, etc.). Since most journals use web-based submission, this would be very easy, and the AEA’s claim that double-blind review increases admin costs is pretty lame.

    [junior lecturer, no reputation, not so oft-published yet, about to be submitting a few!]

  4. This isn’t an econ journal, and it’s not about papers with big-shot authors, but here’s an example where one journal switched to double-blind review & the proportion of authors with female names increased compared to other similar-field journals.

    Double-blind review favours increased representation of female authors
    Trends in Ecology & Evolution
    Volume 23, Issue 1, January 2008, Pages 4-6
    http://www.sciencedirect.com/science/article/pii/S0169534707002704

    I see no reason to assume that journal editors are any less biased than reviewers, and no reason why the name of a big shot wouldn’t influence editors as well as reviewers.

    More broadly, this seems like a solution that doesn’t address the problem. The problem is that it is culturally acceptable in econ for a reviewer to sit on a manuscript for 8 months. In every science journal I’ve reviewed for, I start getting reminder emails after 2 weeks that get increasingly pushy, and the decision is made with or without my review after a month. I’ve seen a desk rejection in econ take over a month.

    At least in science, the big difference between a desk rejection & a review is you can get real feedback with a review. To create a system where younger researchers with less submission experience are less likely to get substantial 3rd party feedback on their research seems fairly damaging to me.

  5. Reading about these stark differences in practice (and transparency, and fairness..) between econ and science publications prompts me to encourage my daughter to study the latter.

  6. Journals rely on unpaid external reviewers to fulfil what is arguably their most important function (helping people filter good papers from the mass of others on the internet). Reviewers de-prioritize the reviews, so the process takes years, and researchers end up with a choice between reviewed work and recent work.

    Doesn’t this tell us the whole system is seriously flawed? How about a voluntary system of blind review for work published on the internet, via a clever website that would help manage the process of pre-review (someone choosing who would be a good reviewer), review, and revision? If a few prestigious departments in each field signed up, that could start the system off with the credibility it would need…

  7. There seems to be an excess demand for publishing relative to reviewing; maybe economists need to be more willing to review, and less willing to get published.

    In a similar vein, and as I believe you’ve argued elsewhere on this blog (Chris), maybe economists should also be more willing to duplicate older studies, and less willing to publish original research.

  8. I believe the paper you are referring to is Rebecca Blank’s 1991 AER piece, “The Effects of Double-Blind versus Single-Blind Reviewing: Experimental Evidence from The American Economic Review.” The AER ran a randomized trial of single-blind vs. double-blind reviewing (alternating every other paper). Blank finds that blind reviewing does not affect papers from Top 5 schools or from schools ranked 50+; papers from schools ranked 6–50 do worse under blind reviewing.