What should social science learn from the faked Science study on gay marriage?

Not much.

If you haven’t heard about the article, the apparent fraud, and the aftermath, follow the links. The duped coauthor, Don Green, speaks to New York Magazine here. It’s a great interview.

My view: Asking what social science should learn is like asking how to reform corporate governance after Apple’s accountant steals $2 million. After the audit firm caught him.

Actually, paying attention to such fraud is a harmful distraction. This kind of blatant fraud is rare. What concerns me is that, as far as I can tell, every other paper massages its data until it fits a nice story. Those that don’t are less likely to get published in the best journals. This is true of everything from ethnography to experiments.

According to my “drama queen” rule, the journal Science is worse than some. The rule is simple: if a journal issues press releases and embargoes work to get the biggest news splash, take it less seriously. Thankfully, political science and economics journals do not do this.

Some other rules of thumb I use: Real data never look perfect. Large results are usually wrong. And scholars who produce splashy result after splashy result probably have a huge file drawer full of papers with null results. Discount their work.

But saying we shouldn’t learn much from this episode doesn’t mean we learn nothing at all. Here are a few points I take away:

  • The production of knowledge is changing, with larger teams of researchers on bigger projects. We all have to trust our coauthors not to make mistakes or be sloppy. Probably we all trust a little too much, and are too lazy about checking each other’s work. Especially work by our least experienced coauthors.
    • Social science probably needs to move to a slightly lower “trust equilibrium” to do good work, even if that means fewer projects and findings.
    • This balance is going to be trickiest for the senior scholars who foster dozens of studies and students. The people who do this deliver a huge good to the world by apprenticing so many researchers and producing so many great social scientists. But it comes with risks. I don’t know the right balance.
  • Fields and methods that are more transparent will get bitten by more discoveries of malfeasance. Do not penalize them for this, or you mess up incentives even more.
    • I have heard snide remarks about field experiments over this incident. The virtue of experimental work is that there are strong norms of describing replicable methods, and sharing data and code. Arguably this makes it easier to discover problems than with ethnography or observational data. Indeed it did, in just weeks.
    • The conversation worth having is how norms of data sharing and replicability can be extended to all kinds of empirical work.
  • This episode reinforces what I tell my students: your reputation for careful, conscientious work is everything in this business.
    • Don’t undermine it by hiding your study’s weaknesses, massaging results, or keeping null results in the file drawer. Even if the journals penalize the current paper, your reputation will be enhanced, because people notice.

In the interests of full disclosure, I have some biases: I run lots of field experiments; foster grad student coauthors; consider Don Green a friend, colleague and mentor; and (last but not least) stole $2 million from Apple but didn’t get caught.