Dear journalists: Please stop writing about “scientific research” with an absurdly small number of subjects

What got missed in many of the media reports was that the research was incredibly limited. The study ran for only nine days, and involved 43 children — and, importantly, no comparison (or “control”) group.

That is Julia Belluz of Vox explaining why you should be skeptical about the childhood obesity study that got so much attention last week. An author claimed “We reversed their metabolic disease in just 10 days, even while eating processed food, by just removing the added sugar and substituting starch, and without changing calories or weight.”

This is not the place for a lecture on statistical significance, but consider this: it’s hard to reliably show that men are on average heavier than women without a sample size of 100 or more. A less obvious effect should presumably face a higher burden of proof. There is no hard rule of thumb, but if you must have one, look for a sample size of several hundred. And always look for a credible control group.
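To see why even an “obvious” difference needs a decent sample, here is a quick power simulation. The means and standard deviations below are rough illustrative assumptions (in kg), not figures from any of the studies discussed:

```python
# Back-of-envelope power simulation: how often does a sample of n men
# and n women show a statistically significant weight difference?
# The means/SDs are rough illustrative assumptions, not data from the post.
import math
import random

random.seed(0)

def significant(n, mean_m=88.0, mean_w=75.0, sd=18.0, trials=2000):
    """Fraction of simulated studies where a two-sample z-test
    rejects 'no difference' at the 5% level."""
    hits = 0
    for _ in range(trials):
        men = [random.gauss(mean_m, sd) for _ in range(n)]
        women = [random.gauss(mean_w, sd) for _ in range(n)]
        diff = sum(men) / n - sum(women) / n
        se = sd * math.sqrt(2.0 / n)  # known-SD approximation
        if abs(diff) / se > 1.96:
            hits += 1
    return hits / trials

for n in (10, 25, 50):
    print(n, "per group:", round(significant(n), 2))
```

With these assumptions, a study of 10 per group detects the difference only a minority of the time, while 50 per group (100 subjects total) detects it almost always — which is roughly the intuition behind the “100 or more” remark above.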

In the “adequate sample, questionable control group” department: many of you probably heard last week that red meat causes cancer, and that it might be as bad as smoking. Anahad O’Connor at the New York Times takes a closer look and finds the results wildly exaggerated:

Smoking raises a person’s lifetime risk of developing lung cancer by a staggering 2,500 percent. Meanwhile, two daily strips of bacon, based on the associations identified by the W.H.O., would translate to about a 6 percent lifetime risk for colon cancer, up from the 5 percent risk for people who don’t enjoy bacon or other processed meats.
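The gap between relative and absolute risk in that quote is worth making explicit. A quick sketch, using only the figures from the passage above:

```python
# Absolute vs. relative risk, using the lifetime colon-cancer figures
# quoted above: ~5% baseline, ~6% for daily processed-meat eaters.
baseline = 0.05
with_bacon = 0.06

absolute_increase = with_bacon - baseline               # 1 percentage point
relative_increase = (with_bacon - baseline) / baseline  # 20% relative

print(f"Absolute increase: {absolute_increase:.0%} of lifetime risk")
print(f"Relative increase: {relative_increase:.0%}")
# A headline can truthfully say "20% higher risk" while the absolute
# change is one percentage point -- same data, very different impression.
```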

I am reminded of Neuroskeptic’s 9 circles of scientific hell, to which I would like to see a new circle added for “sample size of 32”.

118 thoughts on “Dear journalists: Please stop writing about “scientific research” with an absurdly small number of subjects”

  1. Chris,

    In the US there are about 130k cases of colorectal cancer per year. If bacon raises the risk from 5 to 6% a year, that’s a 20% increase in cases, or about 25k additional cases per year.

    Meanwhile, lung cancer incidence is about 200k/year. If we go with the 2500% figure, let’s assume that basically all lung cancer is caused by smoking.

    So smoking causes about 8x as much cancer as a bit of bacon. Throw in the salami and Big Macs and I’d say we could have ourselves a real runner up, but either way 8x is a far cry from the 5->6% vs 2500%(!) nonsense. If there’s a bottom level of journalistic hell for lying with statistics, I’d reserve a prime spot for Mr O’Connor.

  2. I should have said “from 5 to 6% lifetime” above.

    I have to say, using the tiny baseline lung cancer rate to come up with the 2500% increase figure, and then comparing that number directly to an incidence rate… egregious and galling is what it is.

  3. On the other hand, it continues to amaze me how often the context of a story or article is misplaced. Consider the various types of study: a concept study (a small sample looking only for a change or effect, preparatory to further data-gathering and analysis), a descriptive study, an experimental study, a meta-analysis. Much negative reaction occurs when a study of one type is judged by the standards of another, and much discomfort is avoided by recognizing what type of article you are reading. Looking back over the history of science, a wide range of data reporting has taken place. Today, in my opinion, we can begin to tease out the subgroups that react differentially to interventions, which raises the questions of “why,” “how,” “why not,” “when,” “under what conditions,” and so on. Scientists are curious, and when answers in a study are not clear-cut we can follow up in ways that might provide some interesting answers. Ranting leads nowhere positive, though some people are just negative, hostile, oppositional, or otherwise reactive by nature. Sometimes we discover interesting phenomena when we take a closer look at findings. Small data can be helpful and can be reported; single-subject designs exist, and they are both valid and reliable given the proper measures. So spare yourself the righteous indignation and apply the correct context, even when obvious errors exist (just classify the study as a lesson in ignorance).

  4. A major distinction among article types is between indicative and definitive articles. Journalists would be well advised to present “possible” effects that can be followed up with larger and more controlled studies for “evidence.” Remember, scientists today never “prove” anything; they merely find, or fail to find, evidence. The evidence can be analyzed in different ways over time, and new variables may be exposed. The big picture of science is a changing, dynamic mosaic in which individual tiles can change as a result of new findings. Try and try again to put conscious and subconscious biases aside and BE OBJECTIVE (if you can).
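The first commenter’s back-of-envelope comparison is easy to check. The sketch below uses only the commenter’s round numbers (~130k US colorectal and ~200k US lung cancer cases per year, a 5%-to-6% lifetime-risk bump, and the assumption that essentially all lung cancer is smoking-related), not independent data:

```python
# Reproducing the back-of-envelope arithmetic from comment #1 above.
# All inputs are the commenter's round numbers, not independent data.
colorectal_cases = 130_000  # US cases per year
lung_cases = 200_000        # US cases per year (assume ~all from smoking)

# A lifetime-risk bump from 5% to 6% is a 20% relative increase, so
# roughly 20% of colorectal cases would be attributable to processed meat.
extra_colorectal = colorectal_cases * (0.06 - 0.05) / 0.05  # ~26,000

ratio = lung_cases / extra_colorectal
print(f"Extra colorectal cases/yr: {extra_colorectal:,.0f}")
print(f"Smoking causes ~{ratio:.0f}x as much cancer as processed meat")
```

That recovers the commenter’s “about 8x” figure — a real difference, but nothing like the 5-to-6% versus 2,500% framing.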