Chris Blattman

Where have all the randomized control trial results gone?

Writing in the New Yorker, Jonah Lehrer highlights the slipperiness of empiricism:

all sorts of well-established, multiply confirmed findings have started to look increasingly uncertain. It’s as if our facts were losing their truth: claims that have been enshrined in textbooks are suddenly unprovable.

…In the field of medicine, the phenomenon seems extremely widespread, affecting not only antipsychotics but also therapies ranging from cardiac stents to Vitamin E and antidepressants: Davis has a forthcoming analysis demonstrating that the efficacy of antidepressants has gone down as much as threefold in recent decades.

The culprit? Not biology. Not adaptation to drugs. Not even prescription to less afflicted patients. Rather, it’s scientists themselves.

Journals reward statistical significance, and too many academics massage or select results until the magical two asterisks are reached.

But more worrisome is that much of the problem might be more unconscious: a profession-wide tendency to pay attention to, pursue, write up, publish, and cite unusually large and statistically significant findings.
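To see how much distortion that filter alone can produce, here is a minimal simulation; every number in it is an illustrative assumption, not an estimate from any real literature. It runs many underpowered studies of a modest true effect and lets only the statistically significant ones into the "published" record:

```python
# A minimal sketch of the selection mechanism described above. All
# numbers are illustrative assumptions, not estimates from any study:
# run many small trials of a modest true effect, "publish" only those
# reaching p < 0.05, and compare published estimates to the truth.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
true_effect, n, trials = 0.2, 30, 10_000   # assumed: d = 0.2, 30 per arm

published = []
for _ in range(trials):
    treat = rng.normal(true_effect, 1, n)
    control = rng.normal(0, 1, n)
    t, p = stats.ttest_ind(treat, control)
    if p < 0.05 and t > 0:                 # the journal filter
        published.append(treat.mean() - control.mean())

print(f"true effect:           {true_effect}")
print(f"mean published effect: {np.mean(published):.2f}")   # ~0.6, about 3x
```

Under these assumed parameters, the published estimates average roughly three times the true effect, even though every individual study was run honestly. That is the flavor of inflation a decline effect would later correct.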

Social science is well behind natural science and medicine in registering experiments and replicating results. For me, it's sobering to see that, even having accomplished those feats, the harder sciences are still watching so many of their cherished results disappear.

Social science suffers less from medicine's problem of small sample sizes and gazillions of small experiments. There are limitations we share, however. Take the classic Ioannidis piece, Why Most Published Research Findings Are False:

a research finding is less likely to be true when the studies conducted in a field are smaller; when effect sizes are smaller; when there is a greater number and lesser preselection of tested relationships; where there is greater flexibility in designs, definitions, outcomes, and analytical modes; when there is greater financial and other interest and prejudice; and when more teams are involved in a scientific field in chase of statistical significance.

Interest piqued? Alex Tabarrok on Ioannidis here.
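For the arithmetic behind that claim, Ioannidis works with the positive predictive value of a claimed finding: if R is the prior odds that a tested relationship is real, alpha the significance level, and 1 − beta the power, then PPV = (1 − beta)R / (R − beta·R + alpha). A quick sketch (the example values are my assumptions, not his):

```python
# Positive predictive value of a claimed research finding, following
# Ioannidis (2005). The example inputs below are illustrative guesses.
def ppv(R: float, alpha: float = 0.05, power: float = 0.8) -> float:
    """Probability that a statistically significant finding is true."""
    beta = 1 - power
    return (1 - beta) * R / (R - beta * R + alpha)

print(f"{ppv(R=1.0):.2f}")              # 1:1 prior odds, well powered -> 0.94
print(f"{ppv(R=0.1):.2f}")              # exploratory field            -> 0.62
print(f"{ppv(R=0.1, power=0.2):.2f}")   # exploratory and underpowered -> 0.29
```

Low prior odds and low power, exactly the conditions in the quoted passage, are enough to push PPV below one half, which is the literal sense of the paper's title.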

Responses

  1. Chris, in my neck of the empirical woods (psychology/health science), the standard has slowly been shifting away from statistical significance, which of course only implies that a result is reliable, not necessarily meaningful, and toward effect size, which addresses the latter more directly. Nevertheless, it is disappointing to see study after study in the health sciences employ enormous n's or plug in a huge number of often arbitrary variables, both of which virtually guarantee a statistically significant (but not necessarily meaningful) result just because of the way the math works out; the sketch after these responses makes the point concrete. Although our access to information has increased dramatically, the quality is not always there. Given the ease of access, it is more essential than ever to be a critical consumer of the information being fed to us, even by so-called reputable sources. This should apply to the population at large, not just us "academics".

  2. Insightful post. I want to point out, as Esther and Michael might, that the passage you cite from Ioannidis applies word-for-word to all empirical methods, whether rigorous or not. In fact, despite the title, the whole post applies to all empirical methods. These are not special limitations of randomized trials.
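To make the first commenter's point about enormous n's concrete, here is a short sketch with made-up numbers: a true effect too small to matter becomes statistically significant once the sample is large enough, while the effect size stays negligible.

```python
# Statistical significance vs. effect size as n grows. The effect and
# sample sizes are made-up illustrations, not data from any study.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
true_effect = 0.01                       # practically meaningless difference

for n in (100, 10_000, 1_000_000):
    a = rng.normal(true_effect, 1, n)
    b = rng.normal(0, 1, n)
    t, p = stats.ttest_ind(a, b)
    d = (a.mean() - b.mean()) / np.sqrt((a.var(ddof=1) + b.var(ddof=1)) / 2)
    print(f"n={n:>9,}  p={p:.3g}  Cohen's d={d:.3f}")
# At n = 1,000,000 per group, p is routinely far below 0.05 while d ~ 0.01.
```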
