Chris Blattman


The avalanche of bad research

Only 45 percent of the articles published in the 4,500 top scientific journals were cited within the first five years after publication. In recent years, the figure seems to have dropped further.

…As a result, instead of contributing to knowledge in various disciplines, the increasing number of low-cited publications only adds to the bulk of words and numbers to be reviewed. Even if read, many articles that are not cited by anyone would seem to contain little useful information. The avalanche of ignored research has a profoundly damaging effect on the enterprise as a whole.

Not only does the uncited work itself require years of field and library or laboratory research. It also requires colleagues to read it and provide feedback, as well as reviewers to evaluate it formally for publication. Then, once it is published, it joins the multitudes of other, related publications that researchers must read and evaluate for relevance to their own work. Reviewer time and energy requirements multiply by the year. The impact strikes at the heart of academe.

That is from a commentary in the Chronicle of Higher Education.

I share the loathing for terrible work, and increasingly obscure and specialized journals, which publish the work of an insular cabal. The great tragedy is not the production of the work, but the initiation of so many new students into mediocrity.

Even so, do great novelists bemoan the rash of bad novels, and the system of incentives that produces them? Perhaps. Both strike me as the result of a richer society where more and more people can turn their attention to writing, academic or otherwise. There will always need to be outlets. Why gnash your teeth so?

The greatest cost is reviewer time. Possibly one in three of the articles I am asked to review is horrendous. But I’m a junior faculty member, and so likely to get more of these than most. They are quick to spot, and so, while they eat up a couple of hours, they seldom do more damage than that.

There are benefits to an intellectual market with low barriers to entry. A few hours a month isn’t a terrible price to pay to consume the results.

9 Responses

  1. Nice post. I think research that, on average, gets cited more often probably has something useful to teach us. That does not mean it has to be the best-quality research, either (though one would hope that good-quality research receives due credit). I don’t think it should surprise us that weaker research gets published and cited. Sometimes we have more to learn from mistaken work than from airtight arguments with pristine empirics. Personally, I’ve learned at least as much, if not more, from research that helps clarify a flawed line of reasoning or empirical strategy. The intellectual scrutiny and criticism that often follows is surely a public good for all researchers.

  2. Burnside & Dollar (1997) was an empirically weak paper scrutinised to the hilt. It is also one of the most widely cited papers on the subject of aid effectiveness.
    Are widely cited papers always superior to those that are not cited so widely?

  3. And once we have to care about Lucas, or even pretend that “capital shouldn’t be taxed if no new capital is being created” is a Brilliant Observation and Touchstone of the Field since 1994/5/6, we have to spend time on that street, because everyone else does, no matter where the car keys are.

  4. The problem is that it is hard to draw the line between “bad research” and “good research”.
    What’s the difference between a paper with 0 cites and a paper with 5 cites?
    And pushing it further, 50 cites?

    The point is that, in the long run, we care only about the top 3 articles published in a given year. Think Akerlof, Lucas, etc.
    Research is super-elitist…

  5. The problem is also one of knowledge accumulation. Political scientists prefer being new and catchy rather than derivative, which is what studies that build on existing work are frequently labeled (our friends in the natural sciences are deeply confused by this accusation). The result is a series of smaller, more obscure journals rather than better mainstream journals.

    I don’t think novelists complain as much about bad novels, since those make the better ones stand out, and novelists are also not wasting hours reviewing their peers, unlike academics (the editors should be angrier than the writers).

  6. I guess you are calling for writing research that will be cited, like writing a blog for the Google search index. I think it must be possible to optimize your research to improve its citation frequency, if not its content.

  7. In keeping with the previous comments, I’ll mention that I’ve heard no fewer than three of my senior colleagues advise students and junior faculty not to “waste time” reading published literature. At least in empirical microeconomics, the fact that we don’t cite each other might not mean the work is bad; it might mean that we reward each other for talking, and punish each other for listening.

  8. I wouldn’t assume that non-citation is *entirely* about bad research. Some blame should go to the arcane and protracted procedures for obtaining the full text of some journals’ articles. Supporting your point, though, I do think there may be an issue with the types of publications valued by some disciplines; I’ve seen lab experimentalists publish their newly generated data without significant review of the existing literature (thus limiting citations).

  9. Good post. Ignoring more than half of new research is hardly going to help new research stay current, or allow unconventional ideas and findings to get the exposure they deserve.
