Ioannidis says most published research findings are false. This is plausible in his field of medicine, where it is easy to imagine that there are more than 800 false hypotheses out of 1000. In medicine, there is hardly any theory to exclude a hypothesis from being tested. Want to avoid colon cancer? Let’s see if an apple a day keeps the doctor away. No? What about a serving of bananas? Let’s try vitamin C, and don’t forget red wine. Studies in medicine also have notoriously small sample sizes. Lots of studies that make the NYTimes involve fewer than 50 people – that reduces the power to detect a true effect and raises the probability that the typical published finding is false.
Alex Tabarrok breaks down the Ioannidis argument and applies it to economics. Economics may do better, he says, because our sample sizes are larger and there is more theory to guide what we do.
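The Ioannidis arithmetic is easy to sketch. Here is a back-of-the-envelope version in Python using the illustrative numbers above (1000 hypotheses tested, 800 of them false); the power and significance values are assumptions chosen to show the effect, not figures from either post:

```python
def ppv(n_hypotheses, n_true, power, alpha):
    """Share of statistically significant findings that are actually true
    (the 'positive predictive value' in Ioannidis's argument)."""
    n_false = n_hypotheses - n_true
    true_positives = n_true * power      # true effects that reach significance
    false_positives = n_false * alpha    # false effects that reach significance
    return true_positives / (true_positives + false_positives)

# A well-powered field: 200 of 1000 hypotheses true, 80% power, 5% significance.
print(round(ppv(1000, 200, power=0.80, alpha=0.05), 2))  # 0.8

# An underpowered field (think small medical samples): power drops to 20%,
# and half of all "significant" findings are now false.
print(round(ppv(1000, 200, power=0.20, alpha=0.05), 2))  # 0.5
```

With even fewer true hypotheses in the pool, the significant findings are mostly false positives, which is the heart of the claim.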
Agreed, but I am not sure theory is used often enough. The number of theory-less papers that cross my desk, searching for cross-national correlates of poverty or conflict, makes me despondent. The number of famous papers that fall apart when you replicate the results with three more years of new data furthers my skepticism. I don’t know if I believe a single cross-national result any longer.
Alex’s bit is worth reading in full, two or three times, to wrap your brain around it.
4 Responses
Thank you for your ideas. I think all new starts in any field of science tend to be wrong, because before coming to the real result the scientists can face a lot of mistakes, ups and downs. Some great ideas about improving academic writing: http://g3dev.info/blogs/post/2576
Oops, sorry, jumped the gun and didn’t notice that Tabarrok had mentioned the paper.
Also, crshalizi at Three-Toed Sloth runs with the idea of publication bias: http://bactra.org/weblog/698.html
Doesn’t the following also cover the same area?
Brad De Long’s “Are All Economic Hypotheses False?”, Journal of Political Economy, 1992
I don’t think the models help this in economics at all. It’s much easier to get the result you want by reparameterizing your model or choosing a new distribution or recasting your PDE or the like than it is to get more data to back up a scientific hypothesis. I think this can only increase the false discovery rate of economic papers, and I’d venture that it VASTLY increases the false discovery rate relative to pretty much any science.
This is not to say I disagree with Ioannidis’ general point, just that I’d find it hard to believe that the problem isn’t orders of magnitude worse in economics than in most other disciplines that the argument could apply to.