

The economics and psychology of cheating

In standard economics, cheating is supposedly a straight cost-benefit analysis. People look at the odds of getting caught and the associated punishment, and then cheat when it makes sense to do so. However, in our experiments we find that people do not act strictly according to this model; they cheat only to the extent that they can continue to feel good about themselves and rationalise their actions. You can call it a Personal Fudge Factor, a limit up to which human beings comfortably cheat without feeling bad about it.

That is Dan Ariely blogging at Farnam Street. Read in full, especially for the ways this Fudge Factor can expand or compress.
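
A toy way to see the contrast Ariely is drawing, purely as an illustration: the function names, parameters, and numbers below are my own assumptions, not anything from his experiments. Under the standard cost-benefit model, with no chance of getting caught people should cheat as much as possible; under the fudge-factor account, cheating stops at a fixed self-image threshold regardless of detection odds.

```python
# Illustrative sketch only (not Ariely's actual model); all values hypothetical.

def rational_cheating(gain_per_unit, p_caught, punishment, max_units):
    """Becker-style cost-benefit: cheat maximally whenever the expected
    gain per unit exceeds the expected punishment per unit."""
    expected_cost = p_caught * punishment
    return max_units if gain_per_unit > expected_cost else 0

def fudge_factor_cheating(fudge_factor, max_units):
    """Fudge-factor account: cheat only up to the level at which one can
    still feel good about oneself, regardless of detection odds."""
    return min(fudge_factor, max_units)

# With no enforcement at all (p_caught = 0), the standard model predicts
# maximal cheating; the fudge-factor model predicts a small, fixed amount.
print(rational_cheating(gain_per_unit=1.0, p_caught=0.0, punishment=10.0, max_units=20))  # 20
print(fudge_factor_cheating(fudge_factor=3, max_units=20))                                # 3
```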

7 Responses

  1. 1. From the author’s comments it seems this experiment was done on people in developed countries, so its external validity for those in developing settings might be small to nil: the moral economies of ‘cheating’ (even that word itself might not be the right one) could be totally different for populations outside the developed-country social milieu. Despite this, the author does extrapolate to other countries: “These challenges might be even more pronounced in a developing country like India where the rules are not always that clear and conflicts of interest proliferate.” This, though, perhaps betrays a misunderstanding of the complex moral worlds experienced by those in India, for instance, complete with symbols signifying barriers or opportunities and feedback loops providing bits of information on which the actor or social group can base its next decision. Hence, one could draw precisely the opposite conclusion from the author’s: that the Western subject experiences a destabilizing decline of symbolic efficiency (feedback loops get short-circuited, or blurred by too much information), and hence ‘rules’ themselves become fuzzy and malleable. My point is not that Americans are more likely to ‘cheat’; rather, my meager and unoriginal point is that we should be very skeptical of the assumption that experiments done in a lab on people conditioned by their own society’s myriad recursive information flows can be projected onto others who have lived in very different spheres.

    2. It is also problematic to present these ‘losses’ as recoverable: an economist might argue that there are likely general equilibrium effects to cleaning up this ‘cheating’; perhaps the system finds a stable equilibrium precisely because it allows this kind of cheating to persist, and there are enormous costs to upending the current bargains. Imagine an oversight system that polices the literally thousands of ways we all violate “the Law” every day, from the mundane (a card that prevents speeding on the highway by clocking when we get on and off) to the significant (meaningful oversight of the financial industry). In the former case, we can imagine social upheaval as people who are already pushed to social limits are fined or prevented from getting places they feel they need to go; in the latter, we can imagine massive capital flight as bankers find the havens that won’t reach into their bulging pockets. The problems seem larger than simply saying, “we need more police”.

    3. As such, perhaps there is a problem with using the word ‘cheating’: in the lab setting there is obviously some dissonance experienced by the subjects around violating projected norms (evidenced by the differential outcome after priming), but politics and life are lived in social milieus where actors make many decisions rapidly and may not have the time or opportunity to absorb the specific morality (the Ten Commandments) of the experimenters.

  2. Some have called this neutralization; Ariely should read Sutherland (1949) and especially “Other People’s Money” by Cressey (1953) before making up new notions for known phenomena.

  3. There are plenty of experiments of this kind showing such “virtues” as willingness to cooperate and strong reciprocity, that is, willingness to incur a cost to “punish” other people. In the literature on the economics of social interactions, it’s common to model this as a payoff to the player who cooperates (i.e. doesn’t cheat) in a game, say the prisoner’s dilemma; a sketch of that modeling move appears after the comments.

    I guess this literature followed the rise of the concept of trust and other cultural factors in development in the ’90s. There are lots of papers with cool experiments; see some of Samuel Bowles’s work or Henrich et al. (2001). Many theoretical models were developed to explain that behaviour, building on Akerlof’s social models from the ’80s. These were then mixed with models of cultural transmission (like Bisin and Verdier 2001) to see how traits like trust would evolve over time, and how they could explain things like the persistence of institutions (that is, why it is difficult for poor countries to adopt more successful institutions).

    The one I’m reading right now is “The Scope of Cooperation” by Guido Tabellini (2008), which contains many of those features. It builds on the finding, shown in many experiments, that people are more willing to cooperate with “closer” people, and goes on to show that a society can be trapped in a bad equilibrium (in terms of cooperation and institutions).

    I only mention this because it might be worthwhile for those who believe culture matters for development.

  4. Hmm, I wonder whether that also applies to rioting, i.e. people do it because they think they can hide in a crowd. Or something.
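
For readers curious about the modeling move mentioned in comment 3, here is a minimal, hypothetical sketch; the payoff numbers and function names are my assumptions, not taken from any of the cited papers. The idea: adding a psychological payoff w for cooperating in a one-shot prisoner’s dilemma can flip defection from dominant to dominated.

```python
# Minimal sketch of "cooperation as a payoff" in a prisoner's dilemma.
# All numbers are hypothetical, chosen only to satisfy T > R > P > S.

# Material payoffs to the row player: temptation, reward, punishment, sucker.
T, R, P, S = 5, 3, 1, 0

def best_response(opponent_cooperates: bool, w: float) -> str:
    """Row player's best response when cooperating yields an extra
    psychological payoff w (self-image, warm glow, reciprocity norm)."""
    if opponent_cooperates:
        return "cooperate" if R + w > T else "defect"
    return "cooperate" if S + w > P else "defect"

# Without the moral payoff (w = 0), defection is dominant.
print(best_response(True, 0), best_response(False, 0))    # defect defect
# A large enough w makes cooperation dominant, so mutual cooperation
# becomes an equilibrium.
w = 2.5
print(best_response(True, w), best_response(False, w))    # cooperate cooperate
```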
