Not only are most scientific results false, they are also pointless?

A student of Dan Ariely’s was experimentally varying begging tactics, when the unexpected happened:

There was another beggar on the street – a professional beggar – who approached young Daniel and said, “Look kid, you don’t know what you’re doing. Let me teach you.” And so he did.

This beggar took our concept of effort and human contact to the next level, walking right up to people and offering his hand for them to shake. With this dramatic gesture, people had a very hard time refusing him or pretending that they did not see him. Apparently, the social forces of a handshake are simply too strong and too deeply ingrained to resist – and many people gave in and shook his hand.

Of course, once they shook his hand, they would also look him in the eyes; the beggar succeeded at breaking the social barrier and was able to get many people to give him money.

Full story here. Well worth reading.

Not to generalize from one experience too much, but it does make one wonder how many of our surveys and non-experimental papers and field experiments are completely detached from the way the real world works. But in most cases the pro never walks by to steer us straight.

6 thoughts on “Not only are most scientific results false, they are also pointless?”

  1. What’s surprising, to me at least, is that the researcher didn’t think to go out and interview beggars *first* to figure out what methods people actually use, and then vary those (or at least include them in the various methods surveyed). Instead he just assumed he knew what people were doing. Actually, maybe academics assuming they know what people are doing without actually asking them isn’t at all surprising. Sadly.

    I guess this is a perfect example of where qualitative and quantitative researchers can usefully collaborate.

  2. Agreed, and Ariely and others generally do a good deal of this. But at least in economics, and to a large extent political science, there is almost never any substantive discussion of the qualitative investigation into the basic efficacy of an intervention: what was done to suggest it works the way we think it works, and why. If we find a result, we need that context to interpret the change, and if we don’t find a result we need it even more.

  3. I’m really surprised you can get this past an IRB – not only is it deception (which, best I can tell, isn’t resolved), you’re also imposing direct costs on the research subjects, who think they’re giving money to a poor person but are instead donating to a charity. I’m not happy about that.
    (and it’s not like there are any huge benefits to the research to outweigh this).

  4. The title of this post is highly sensationalist given its content. Your concluding paragraph undervalues academics and institutions who are collaborating directly with implementing partners (often on “lowly” program evaluations).

    I like that you try to take a contrarian view and question assumptions, but I usually like your humility even more. It has been lacking recently.

  5. I loved this post – it’s very Chris. Short, interesting, informative, insightful.

    (obviously i disagree about the lack of humility – perhaps that’s because i know your tone of voice?)

  6. You’re highlighting a major shortcoming of how the social sciences are often practiced. Surveys and experiments should be the stage of analysis that follows theory and initial testing/reviews (“discovery,” perhaps?), as in biology and pharmaceutical testing.