Why is other-regarding behavior so often misguided? I study a new explanation grounded in the idea that altruists want to think they are helping. Frictions arise because perception and reality can diverge ex post, especially when helping remotely (as, for example, in international development projects). Among other things, the model helps explain why donors have a limited interest in learning about effectiveness, why charities market based on need rather than effectiveness, and why beneficiaries may not be able to do better than accept this situation. For policy-makers, the model implies a generic tradeoff between the quantity and quality of generosity.

An interesting effort to answer in theory what is observable in practice. It would explain such puzzles as why programs with good intentions are continued long after they are proven not to achieve the outcomes sought (such as Head Start), or even to injure those they were intended to benefit (rent control, affirmative action, hate speech legislation, etc.).
A further elaboration from the paper, which is worth reading.
Other-regarding behavior poses a challenge for social scientists. On the one hand, some people are remarkably generous. Americans give about 2% of GDP to charity each year, for example. This suggests that they care deeply about helping others. Yet in many cases generous people are also quite poorly informed about how to help effectively. For example, only 3% of charitable givers even claim to have done any research comparing the effectiveness of alternatives. This pattern is in fact so common that it is embodied in colloquial language, where “well-intentioned” is a euphemism for “poorly informed.” Yet if people really are well-intentioned, why don’t they become well-informed?
The predominant interpretation in the literature has been that funders want to be effective, but struggle to learn how because of market failures. Information about effectiveness is a public good (Duflo and Kremer, 2003; Levine, 2006; Ravallion, 2008; Krasteva and Yildirim, 2011), and communication from practitioners to funders is often distorted by strategic considerations (Pritchett, 2002; Duflo and Kremer, 2003; Levine, 2006). Addressing such market failures was
one stated purpose for creating many of the institutions that today produce and disseminate effectiveness research – the Center for Global Development, the Jameel Poverty Action Lab, Innovations for Poverty Action, and the Center for Effective Global Action, among others.
This paper examines an alternative (and complementary) interpretation: funders do not want to be more effective. Instead, they want to think that they are effective. To underscore how distinct these concepts can be, consider donating to a charity that feeds malnourished African children. This induces agreeable thoughts of children eating nutritious meals. Now
suppose you learn that the charity is ineffective – perhaps an exposé reveals that management committed serious fraud. Presumably this reduces your satisfaction. What is more interesting is that, if you had not learned of the fraud, you would have continued to experience “warm glow” (Andreoni, 1989) thinking about your impact even though in reality no such impact existed. Put bluntly, your altruistic preferences cannot literally be over children’s outcomes; these occur on another continent, outside of your experience.
I formalize this idea in a model of a single benefactor whose actions affect a beneficiary. The state of the world is uncertain, so that the benefactor does not know ex ante how his decision will affect the beneficiary ex post. The unusual feature of the model is that this uncertainty persists ex post with positive probability. For example, a donor may never learn whether the charity he gave to is honest. As a result the benefactor faces ex post ambiguity: he may observe information that is insufficient to reveal the state and have to interpret it. This is an interesting problem precisely because he has no way of learning the correct interpretation over time, even if the game repeats, since the true state remains unobserved. I therefore examine the case in which the benefactor interprets ambiguity in the way that maximizes his expected utility. I find that he optimally holds empirically correct beliefs about observable quantities, but interprets ambiguity optimistically. For example, a donor correctly forecasts the probability that he will learn about a scandal involving his chosen charity. On learning of no scandals, however, the same donor assumes that “no news is good news” and views the charity as definitely honest.
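The “no news is good news” belief pattern can be made concrete with a toy calculation. The numbers below are purely illustrative, not from the paper: assume a donor holds a prior that the charity is honest, and that fraud, if present, is exposed with some probability. The donor forecasts the observable event (a scandal) correctly, but where a Bayesian would only partially update after observing no scandal, the model's optimistic benefactor resolves the remaining ambiguity entirely in his own favor:

```python
# Hypothetical numbers for illustration only (not taken from the paper).
prior_honest = 0.7            # donor's prior that the charity is honest
p_expose_if_dishonest = 0.5   # chance fraud is exposed, if it exists

# The donor correctly forecasts the observable quantity: the
# probability of learning about a scandal.
p_scandal = (1 - prior_honest) * p_expose_if_dishonest  # 0.15
p_no_news = 1 - p_scandal                               # 0.85

# A Bayesian observing "no scandal" updates only partway, since a
# dishonest charity can also escape exposure:
bayes_posterior_honest = prior_honest / p_no_news       # ~0.824

# The optimistic benefactor instead interprets the ambiguous
# "no news" as definitive: the charity is surely honest.
optimistic_posterior_honest = 1.0

print(f"P(scandal) forecast:        {p_scandal:.3f}")
print(f"Bayesian posterior honest:  {bayes_posterior_honest:.3f}")
print(f"Optimistic posterior:       {optimistic_posterior_honest:.3f}")
```

The gap between the Bayesian posterior and the optimistic one is exactly the ex post ambiguity the benefactor is free to interpret, since the true state is never revealed.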