Chris Blattman


IPA’s weekly links


Guest post by Jeff Mosenkis of Innovations for Poverty Action.


The links are back from vacation. We have a backlog of links to catch up on over the next few weeks, so here we go:

  • Rachel Meager has public speaking tips for economists.
  • If you want to catch up on a Twitter conversation (including me, Chris, and a bunch of other people) responding to the Cuddy article on what the replication fights in psych mean for econ, there's a 168-slide Storify here.
    • I wondered if econ is happily driving along at 65 mph, waving at psych as it heads toward its own cliff, because I think every field is unaware of its own blind spots. For econ, I thought the problem wouldn't be small samples but a lack of attention to survey questions and what measurement tools are actually capturing (in my experience, other social science fields sweat these details far more than economists do).
  • But a few days later I was proven wrong. Stanford statistician John Ioannidis (known for the paper arguing that most medical studies are probably wrong) and colleagues came out with a paper showing that most empirical econ studies are underpowered (using 159 meta-analysis data sets of 6,900 studies). In plain language, that means they in fact didn't have big enough samples to support the effects they claim:

nearly 80% of the reported effects in these empirical economics literatures are exaggerated; typically, by a factor of two and with one-third inflated by a factor of four or more.

The paper’s part of a section on reproducibility in econ (all papers ungated), but there are some slides here in more accessible language. (h/t Lee Crawfurd)
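A quick simulation sketches the mechanism behind that quote (this is illustrative, not from the paper; the true effect, sample size, and simulation count below are hypothetical): when a study is underpowered, the estimates that clear the significance bar are, on average, inflated versions of the true effect, so significant results from low-powered literatures tend to exaggerate.

```python
import numpy as np

rng = np.random.default_rng(0)

true_effect = 0.2   # hypothetical true standardized effect
n = 50              # per-arm sample size -> low power for this effect
sims = 20_000       # number of simulated two-arm studies

# Each study estimates a difference in means of unit-variance outcomes,
# so the estimate is roughly Normal(true_effect, sqrt(2/n)).
se = np.sqrt(2 / n)
est = rng.normal(true_effect, se, size=sims)
z = est / se

significant = np.abs(z) > 1.96
power = significant.mean()
# Average exaggeration among the studies that "found" an effect
exaggeration = np.abs(est[significant]).mean() / true_effect

print(f"power: {power:.2f}")
print(f"mean exaggeration among significant results: {exaggeration:.1f}x")
```

With power under 20%, the significant estimates come out roughly two and a half times the true effect, in the same ballpark as the "factor of two" in the quote above.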

  • If misery loves company, a Nature survey of 1,500+ physical scientists found that more than 70% had tried and failed to reproduce another researcher’s experiments, and more than 50% had failed to reproduce their own experiments. Of the group that had tried to publish replications, successful replications were more frequently published than unsuccessful ones.
  • But some good news: there's a new journal devoted just to publishing replications in empirical economics.
  • Some non-academic jobs:
  • For the academic-types, graduate students can once again blog their job market paper on the Development Impact Blog.
  • But a good thread for current and future graduate students – a reminder not to pin your sense of self-worth on your academic career. Too much is out of your hands. (h/t Raul Pacheco-Vega)

4 Responses

  1. OK, I was going to say that these were scary links, perfect for Halloween, and then Julian had to come and make it less scary.

  2. Ioannidis is qualitatively right but quantitatively wrong for multiple reasons, starting with the fact that he doesn’t know the true effect size any more than the individual authors do, so he can’t possibly state with certainty what % are exaggerated. However let’s focus on power: suppose unknown quantity X is only policy relevant if it is at least 0.1 std dev, so ten papers all choose sample sizes to be powered to detect an effect of that magnitude. Suppose the true effect is in fact 0.05 std dev, and the ten papers are completely unbiased and find a range of estimates hovering around that number. Then science has done its job perfectly (and the authors’ choice of power was exactly what it should have been, or at least eminently justifiable) but Ioannidis’s approach will claim that they were all underpowered since they didn’t have the sample size to reliably detect the observed X = 0.05 SD.
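The commenter's scenario can be made concrete with a short sketch (the 0.1 and 0.05 SD figures and the ten studies come from the comment; the code itself is illustrative). The studies choose their sample size honestly, for 80% power at the smallest policy-relevant effect, yet a retrospective power calculation against the smaller true effect flags them all as underpowered:

```python
import math
import random
from statistics import NormalDist

nd = NormalDist()
mde = 0.10            # smallest policy-relevant effect (from the comment)
true = 0.05           # true effect, unknown to the authors
alpha, target_power = 0.05, 0.80

# Per-arm n for 80% power at the MDE (two-sample test, unit-variance outcome)
z_a = nd.inv_cdf(1 - alpha / 2)
z_b = nd.inv_cdf(target_power)
n = math.ceil(2 * (z_a + z_b) ** 2 / mde ** 2)
se = math.sqrt(2 / n)

# Ten unbiased studies, whose estimates hover around the true 0.05 SD effect
random.seed(1)
estimates = [random.gauss(true, se) for _ in range(10)]

# Retrospective power against the true 0.05 SD effect
retro_power = (1 - nd.cdf(z_a - true / se)) + nd.cdf(-z_a - true / se)

print(f"n per arm: {n} (designed for {target_power:.0%} power at {mde} SD)")
print(f"power against the true {true} SD effect: {retro_power:.0%}")
```

Every study here is unbiased and defensibly designed, yet power against the true 0.05 SD effect comes out under 30%, so a retrospective check using the 80% convention would label all ten "underpowered", which is the commenter's point.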