Chris Blattman

What should social science learn from the faked Science study on gay marriage?

Not much.

If you haven’t heard about the article, the apparent fraud, and the aftermath, follow the links. The duped coauthor, Don Green, speaks to New York Magazine here. It’s a great interview.

My view: Asking what social science should learn is like asking how we reform corporate governance after Apple’s accountant steals $2 million. After the audit firm caught him.

Actually, paying attention to such fraud is a harmful distraction. This kind of blatant fraud is rare. What concerns me is that every other paper, as far as I’m concerned, massages its data until it fits a nice story. Those that don’t are less likely to get published in the best journals. This is true of everything from ethnography to experiments.

According to my “drama queen” rule, the journal Science is worse than some. The rule is simple: if a journal issues press releases and embargoes work for the biggest news splash, take it less seriously. Thankfully, political science and economics journals do not do this.

Some other rules of thumb I use: Real data never look perfect. Large results are usually wrong. And scholars who have one big splashy result after another have a huge file drawer full of papers with null results. Discount their work.

But saying we shouldn’t learn much from this episode doesn’t mean we learn nothing at all. Here are a few points I take away:

  • The production of knowledge is changing, with teams of researchers on bigger projects. We all have to trust our coauthors not to make mistakes or be sloppy. Probably we all trust a little too much, and are too lazy in checking each other’s work. Especially work by our least experienced coauthors.
    • Social science probably needs to move to a slightly lower “trust equilibrium” to do good work, even if that means fewer projects and findings.
    • This balance is going to be trickiest for the senior scholars who foster dozens of studies and students. The people who do this are delivering a huge good to the world by apprenticing so many people, and producing so many great social scientists. But it comes with risks. I don’t know the right balance.
  • Fields and methods that are more transparent will get bitten by more discoveries of malfeasance. Do not penalize them for this, or you mess up incentives even more.
    • I have heard snide remarks about field experiments over this incident. The virtue of experimental work is that there are strong norms of describing replicable methods, and sharing data and code. Arguably this makes it easier to discover problems than with ethnography or observational data. Indeed it did, in just weeks.
    • The conversation worth having is how norms of data sharing and replicability can be extended to all kinds of empirical work.
  • This episode reinforces what I tell my students: your reputation for careful, conscientious work is everything in this business.
    • Don’t undermine it by hiding your study’s weaknesses, massaging results, or keeping null results in the file drawer. Even if the journals penalize the current paper, your reputation will be enhanced, because people notice.

In the interests of full disclosure, I have some biases: I run lots of field experiments; foster grad student coauthors; consider Don Green a friend, colleague and mentor; and (last but not least) stole $2 million from Apple but didn’t get caught.

23 Responses

  1. “What concerns me is that every other paper, as far as I’m concerned, massages its data until it fits a nice story. … This is true of everything from ethnography to experiments.”

    I’d like to know what kind of “massaging” you mean, and how you can possibly claim that 1/2 of all scientific papers have been “massaged”. Since that would include half of my own papers, I’m personally quite interested in your answer to my question!

  2. As one of Chris’s coauthors (I’ve got my eye on you now, Blattman!), I’m not sure the division of labor is quite that stark. I don’t play with the raw data myself, but… I have access to it; I was in the field watching when some of it was collected; I worked closely with the field managers and survey enumerators; I work with Chris’s RAs who run the analysis along with him; etc etc. Fraud would be extremely difficult here, so I agree with Chris that the majority of the problems in the field lie elsewhere.

  3. This was my interpretation of your last sentence:
    Hypothetical ~ Actual
    Stole and got caught ~ Faked data and got caught
    Blattman stole and did not get caught ~ Blattman faked data and did not get caught…

  4. @3rdmoment: I think we agree. The strict division of labor is common, but it is definitely not risk-minimizing, and so maybe not optimal. It’s done so often that I won’t judge too harshly. Probably a different balance has to be struck, though I’m not yet sure what’s optimal.

  5. Thanks for the reply (and for the original post, which is great).

    While I understand the division of labor, I think there is a big difference between “one author did the data analysis” and “only one author was even permitted to see the data.”

    And while I haven’t worked with this particular kind of data, it’s my experience that cleaning up raw data in social science presents lots and lots of opportunities for mistakes, sloppiness, and faulty or arbitrary assumptions, which may or may not turn out to be important to the conclusions. While all this stuff should be well documented, there is really no substitute for spending some time with the raw data if you want to know what’s going on, and a second pair of eyes would often be a good idea. So I’m not convinced that such a sharp division of labor is optimal, even ignoring the possibility of fraud.

    If it is true that the IRB process is a practical obstacle that reduces this kind of reasonable collaboration, that seems like a problem to me.

  6. @3rd moment: It’s a good question. It’s actually not all that unusual. I’m usually the empirical guy on my projects, so I almost always have a hand in the raw data, but many of my coauthors do not, especially if their contribution is to theory or design or some substantive aspect of the study. There are one or two studies where I am not involved in the raw data, however, because my contribution is conceptual or something else. It wouldn’t be useful to have coauthors if there weren’t a division of labor, even when that division is stark.
