One of the things I believe is critical, and that we currently do somewhat poorly, is to integrate information across fields of study as well as between past and present. We rush forward and rarely integrate. I am constantly discovering a key study in an area to which I have devoted a great deal of thought and research. There will be something absolutely pertinent, full of great information, and it sits off in some remotely affiliated field of study with almost no natural connection to the topic, or it was published twenty years ago. Or both.
That is what has happened to Bryan Caplan. Caplan has just published The Myth of the Rational Voter, and the paper to which he alludes, Mental Contamination and Mental Correction: Unwanted Influences on Judgments and Evaluations by Timothy D. Wilson and Nancy Brekke, is highly pertinent. But he only came across it post-publication. And it is from twenty years ago.
Better late than never. There is so much cognitive gold out there waiting to be discovered.
The second point of interest is the argument advanced by Wilson and Brekke. One of my recurrent themes is the bane of what I have been calling cognitive pollution: acquired knowledge that has never been examined and yet is wrong or misunderstood in some material way. Left unexamined, that knowledge simply complicates all later decisions that might depend on it in some fashion.
Wilson and Brekke use the term mental contamination for a slightly different phenomenon. Here is a descriptive passage from their paper.
As noted by Gilbert (1991, 1993), there is a long tradition in philosophy and psychology, originating with Descartes, that assumes that belief formation is a two-step process: First people comprehend a proposition (e.g., "Jason is dishonest") and then freely decide whether to accept it as true (e.g., whether it fits with other information they know about Jason). Thus, there is no danger in encountering potentially false information because people can always weed out the truth from the fiction, discarding those propositions that do not hold up under scrutiny. Gilbert (1991, 1993) argued persuasively, however, that human belief formation operates much more like a system advocated by Spinoza. According to this view, people initially accept as true every proposition they comprehend and then decide whether to "unbelieve" it or not. Thus, in the example just provided, people assume that Jason is dishonest as soon as they hear this proposition, reversing this opinion if it is inconsistent with the facts.

This jumps out at me in part because it reflects a common experience I have had arguing with people convinced of a particular proposition which they have never examined and which is simply factually untrue, and who, when confronted with a discussion that forces examination, simply throw up their hands. Just yesterday I had someone tell me, paraphrasing, "Your data might be right but I don't have the energy or motivation to argue. We'll have to agree to disagree."
Under many circumstances, the Cartesian and Spinozan systems end up at the same state of belief (e.g., that Jason is honest because, on reflection, people know that there is no evidence that he is dishonest). Because the second, verification stage requires mental effort, however, there are conditions under which the two systems result in very different states of belief. If people are tired or otherwise occupied, they may never move beyond the first stage of the process. In the Cartesian system, the person would remain in a state of suspended belief (e.g., "Is Jason dishonest? I will reserve judgment until I have time to analyze the facts"). In the Spinozan system, the person remains in the initial stage of acceptance, believing the initial proposition. Gilbert has provided evidence, in several intriguing experiments, for the Spinozan view: When people's cognitive capacity is taxed, they have difficulty rejecting false propositions (see Gilbert, 1991, 1993).
In that particular conversation, the argument had been advanced that "I have always attributed this to the fact that men coming home from the war discovered women quite competently taking their place in factories and workplaces. This meant that women had to be shamed into a retreat from the larger world. Women were, by and large, so glad to have their men back home that they were complicit in the shaming and the retreat." To which I responded with BLS data showing that women in fact had not retreated from the workforce: there had been a multi-decadal increase in the female labor force participation rate (LFPR) from 1900 through the 1990s, and the LFPR had increased during the decade of the 1940s, including the war years. There was a two-decade drop after the war, from 1946 to 1960, for one age cohort (20-24), reflecting the baby boom, but all other age cohorts increased their LFPR year by year through the nineties, and even the 20-24 cohort resumed its increasing LFPR from 1960 onwards. So the central argument that women had retreated from the labor force was demonstrably wrong. But as quoted above, that didn't matter, because the refutation contradicted a preferred, but unexamined, belief.
Here is the model Wilson and Brekke present for mental contamination and mental correction. If the default mode is the acceptance of a proposition without examination, if it requires cognitive effort to examine and assess the accepted proposition, and if the proposition is pleasingly consistent with other existing unexamined assumptions, then, given the number of steps and the effort involved, it is easy to see why there is so much cognitive pollution out there.