This paper presents 59 new studies (N = 72,310) which focus primarily on the “bat and ball problem.” It documents our attempts to understand the determinants of the erroneous intuition, our exploration of ways to stimulate reflection, and our discovery that the erroneous intuition often survives whatever further reflection can be induced. Our investigation helps inform conceptions of dual process models, as “system 1” processes often appear to override or corrupt “system 2” processes. Many choose to uphold their intuition, even when directly confronted with simple arithmetic that contradicts it – especially if the intuition is approximately correct.
Sort of calls into question all the nuance of nudging.
There is a good discussion at The Bat, the Ball, and the Hopeless by Alex Tabarrok at Marginal Revolution.
The bat and ball question is an age-old puzzle and an insight into how the human brain processes information. For those not familiar with it: a bat and a ball cost $1.10 in total, and the bat costs $1.00 more than the ball.
The question is: how much does the ball cost?
The great majority answer that the ball costs 10 cents. In reality, with a little reflection, it becomes obvious that the ball costs 5 cents: if it cost 10 cents, the bat would cost $1.10 and the total would be $1.20.
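The arithmetic behind both answers can be written out as a tiny sketch (the variable names and the `check` helper are mine, not from the paper):

```python
def check(ball, total=1.10, difference=1.00):
    """Does this ball price satisfy both constraints?

    Constraint 1: ball + bat == total
    Constraint 2: bat == ball + difference
    """
    bat = ball + difference
    # Compare with a tolerance to sidestep binary floating-point rounding.
    return abs(ball + bat - total) < 1e-9

# Solving the two constraints directly: 2 * ball + difference = total.
ball = (1.10 - 1.00) / 2

print(round(ball, 2))   # 0.05
print(check(0.10))      # the intuitive answer: False (total would be $1.20)
print(check(0.05))      # the correct answer: True
```

The same formula shows why the scaled-up $110 version of the problem has the answer $5 rather than $10: (110 - 100) / 2 = 5.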
For most questions you face four challenges:
Define the problem - What is the formula you are solving?
Measure the problem - What are the metrics and measures, and how confident are you in them?
Frame the problem - What is the best way of communicating the problem?
Contextualize the problem - What are the appropriate measures of context and adjacency?
For the past two decades there has been, within the halls of academe, a push to demonstrate how fallible human reason, logic, and empiricism can be. It is an argument congruent with a Platonic worldview, a world of central planners and philosopher kings making decisions on behalf of people inadequately prepared to make their own decisions. While dressed as a kind and well-motivated worldview, it is indisputably authoritarian.
Certainly the human brain and human psychology have chinks in their epistemic armor. But the problem, in my opinion, is not nearly as bad as it is made out to be because academics often fail to explore the issues of framing and contextualization.
From Tabarrok's post:
In a paper in Cognition, Meyer and Frederick test multiple versions of the bat and ball and related problems to try to uncover where people's intuitions go wrong. The two most remarkable versions are shown below:

A bat and a ball cost $110 in total.
The bat costs $100 more than the ball.
How much does the ball cost?
Before responding, consider whether the answer could be $5.
$_____

A bat and a ball cost $110 in total.
The bat costs $100 more than the ball.
How much does the ball cost?
The answer is $5.
Please enter the number 5 in the blank below.
$_____

Remarkably, even when told to consider $5, most people continue to answer $10. Even more shockingly, most people get the answer right when they are explicitly told the answer and instructed to enter it, yet 23% still get the answer wrong! Wow.
The authors themselves elaborate.
…this “hinted” procedure serves to partition respondents into three groups: the reflective (who reject the common intuitive error and solve the problem on the first try), the careless (who answer 10, but revise to 5 when told they are wrong), and the hopeless (who are unable or unwilling to compute the correct response, even after being told that 10 is incorrect)…many respondents maintain the erroneous response in the face of facts that plainly falsify it, even after their attention has been directed to those facts….the remarkable durability of that error paints a more pessimistic picture of human reasoning than we were initially inclined to accept; those whose thoughts most require additional deliberation benefit little from whatever additional deliberation can be induced.
Certainly, this is a striking finding. When given a blatant hint as to the correct answer, most people still get it wrong. And when given the answer, and even instructed specifically to provide the given answer, 23% get it wrong.
Something else seems to be going on rather than simple illogic and innumeracy.
Scott Alexander had a great post some years ago, which I discussed in Teen lizardmen. His observation was that, when looking at polling results, you have to allow for the fact that about 4% of the answers will be absurd, counter to logic or evidence. The particular terminology arose from a survey which found that 4% of respondents believed that lizardmen are running the world (and 7% weren't sure).
I think a lot of the interesting twists and turns are in the measuring, framing, and contextualizing of these kinds of results.
When given the correct answer and instructed to use it, how could 23% get it wrong? Think about the context. The experiment is obviously a psychological test of some sort. Do psychological tests lie to and deceive their participants? Sure. All the time.
Might some participants take this into account? Might they see this as a test of their conviction in the right answer? Sure. Might as many as 23% believe that the testers are manipulating them into changing an obviously (to them) correct answer? That seems plausible.
Interesting study.