IPA’s weekly links

Guest post by Jeff Mosenkis of Innovations for Poverty Action.

  • Economist Deirdre McCloskey (formerly Donald) has an essay in the WSJ about transitioning publicly to being a woman at the age of 53, after already being established in her career. The moment when she knew she’d been accepted as a woman by her colleagues:

In early 1996 I was standing around with a half-dozen other economists at tolerant Erasmus University in Holland, talking about economics, as economists tend to do. I was the only woman. I made a point. The men ignored it. Two minutes later, George made the identical point. They all grew excited: “George, that’s a great point! You’ll get it into the American Economic Review! A Nobel can’t be far behind!”

(h/t Dina Pomeranz)

  • Radiolab has a new podcast about the stories behind important Supreme Court cases, More Perfect, produced by my old colleague Suzie Lechtenberg, a gifted radio producer. The first episode is about the death penalty and tracks down the guy based out of a London driving school on whom all U.S. executions depended for a time. (Web, iTunes)

“According to the UN, of the 60 largest troop-contributing countries, only 14 have not reported cases of sexual abuse committed by their forces in the past five years.”

Tom Murphy may have summarized the problem best in this headline. But even if officials don’t care about their troops’ actions (and it seems they don’t), these reports give recalcitrant local governments a convenient excuse to keep the U.N. out.

  • ICYMI most of the new Handbook of Field Experiments is available online.
  • There’s a new Stata cheatsheet on programming, and here are some for R (which recently passed SAS in academic publication popularity).
  • The GRIM (Granularity-Related Inconsistency of Means) test is an easy way to check whether a researcher dropped data from summary statistics without reporting it (or made a mistake). It works for means of Likert scales or any question that requires a whole-number response (say, a seven-point scale where the only options are 1, 2, 3, 4, 5, 6, or 7, no 3.2’s). If researchers report a sample size of 350 with a mean of 4.7 on a seven-point scale, it just checks whether it’s possible to get a mean of 4.7 from 350 whole-number responses. When two researchers looked through 260 psych papers, they found that half contained means that didn’t pass the test, and found mistakes in all of the corresponding data sets they were able to get. (h/t Stephanie Wykstra)
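The check itself is just a few lines of arithmetic; here’s a minimal sketch in Python (the function name and interface are my own illustration, not code from the GRIM paper):

```python
import math

def grim_consistent(mean, n, decimals=2):
    """Can a mean reported to `decimals` places arise from n whole-number responses?

    With n integer responses, the true mean must equal T/n for some integer
    total T. A mean rounded to `decimals` places implies the true mean lies
    within half a unit of its last decimal place, so the report passes GRIM
    only if some integer total T falls inside that window.
    """
    half_unit = 0.5 * 10 ** -decimals
    lo = (mean - half_unit) * n  # smallest total consistent with the report
    hi = (mean + half_unit) * n  # largest total consistent with the report
    return math.ceil(lo) <= math.floor(hi)
```

For the example in the bullet, `grim_consistent(4.7, 350, decimals=1)` passes, since a total of 1645 gives a mean of exactly 4.7; by contrast, a mean of 3.17 reported from only 10 responses would fail, because the only achievable means are multiples of 0.1.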
  • Here’s a good explanation of what the Venezuelan government did to send the country into a rapid tailspin.