The cognitive reflection test (CRT) is a task designed to measure a person's tendency to override an incorrect "gut" response and engage in further reflection to find a correct answer. It was first described in 2005 by the psychologist Shane Frederick. The CRT has a moderate positive correlation with measures of intelligence, such as IQ tests, and it correlates highly with various measures of mental heuristics. Quite interesting.
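For readers who haven't seen the test, the three original items each pit a tempting intuitive answer against the correct one. Here's a minimal Python sketch of those items and a toy scoring helper (the wording is paraphrased and the `score` function is mine, not anything from the paper):

```python
# The three items of the original CRT (Frederick, 2005), paraphrased.
# Each pairs the common "intuitive" wrong answer with the reflective correct one.
CRT_ITEMS = [
    {
        "question": "A bat and a ball cost $1.10 in total. The bat costs $1.00 "
                    "more than the ball. How much does the ball cost (in cents)?",
        "intuitive": 10,
        "correct": 5,
    },
    {
        "question": "If it takes 5 machines 5 minutes to make 5 widgets, how long "
                    "would it take 100 machines to make 100 widgets (in minutes)?",
        "intuitive": 100,
        "correct": 5,
    },
    {
        "question": "A patch of lily pads doubles in size every day. If it covers the "
                    "lake in 48 days, how long to cover half the lake (in days)?",
        "intuitive": 24,
        "correct": 47,
    },
]

def score(answers):
    """CRT score = number of items answered correctly (0-3)."""
    return sum(a == item["correct"] for a, item in zip(answers, CRT_ITEMS))

# Sanity check on the bat-and-ball arithmetic: ball + (ball + 100) == 110  =>  ball == 5.
assert score([5, 5, 47]) == 3      # fully reflective respondent
assert score([10, 100, 24]) == 0   # fully intuitive respondent
```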
Later research showed that the CRT is a multifaceted construct: many respondents arrive at the correct answer as their first response, while others fail to solve the test even after reflecting on their intuitive first answer. It has also been argued that suppressing the first answer is not the only factor behind successful performance on the CRT: numeracy and reflectivity both account for performance.
A new paper from last month, Cognitive Reflection is a Stable Trait by Michael Stagnaro, Gordon Pennycook, and David G. Rand, indicates that the CRT is likely a stable and predictive tool.
The original paper is Cognitive Reflection and Decision Making by Shane Frederick. Some extracts from his findings follow.
One of the questions the paper explores is the relationship between CRT scores, IQ, and patience (time discounting).
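Before the quote, it helps to spell out what an "implied discount rate" is: if someone is indifferent between a smaller-sooner and a larger-later reward, the annual rate r satisfying later = sooner × (1 + r)^t is the rate their choice implies. A small Python sketch with an illustrative item (the dollar amounts are my example, not necessarily one of Frederick's):

```python
# Hypothetical intertemporal-choice item:
# "Would you rather have $3,400 this month or $3,800 next month?"
# Indifference between the two implies  later = sooner * (1 + r)**t  for an annual rate r.

def implied_annual_rate(sooner, later, years):
    """Annual discount rate at which the two options are equally attractive."""
    return (later / sooner) ** (1 / years) - 1

r = implied_annual_rate(sooner=3400, later=3800, years=1 / 12)
print(f"Implied annual discount rate: {r:.0%}")   # ~280% per year

# Preferring the smaller-sooner reward means your personal discount rate is even higher
# than this; the high-CRT group's choices implied *lower* rates, i.e. more patience.
```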
Those who scored higher on the CRT were generally more "patient"; their decisions implied lower discount rates. For short-term choices between monetary rewards, the high CRT group was much more inclined to choose the later larger reward (see items a and b). However, for choices involving longer horizons (items c through h), temporal preferences were weakly related or unrelated to CRT scores.

He also looks at the relationship between high CRT scores and risk preferences.
To assess the relation between CRT and risk preferences, I included several measures of risk preferences in my questionnaires, including choices between a certain gain (or loss) and some probability of a larger gain (or loss). For some items, expected value was maximized by choosing the gamble, and for some it was maximized by choosing the certain outcome.
The results are shown in Table 3a. In the domain of gains, the high CRT group was more willing to gamble, particularly when the gamble had higher expected value (top panel), but, notably, even when it did not (middle panel). If all five items from the middle panel of Table 3a are aggregated, the high CRT group gambled significantly more often than the low CRT group (31 percent versus 19 percent; χ² = 8.82; p < 0.01). This suggests that the correlation between cognitive ability and risk taking in gains is not due solely to a greater disposition to compute expected value or to adopt that as the choice criterion. For items involving losses (lower panel), the high CRT group was less risk seeking; they were more willing to accept a sure loss to avoid playing a gamble with lower (more negative) expected value.
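To make "expected value was maximized by choosing the gamble" concrete, here is a small sketch with made-up payoffs in the style of Table 3a (the actual items and amounts are in the paper):

```python
# Hypothetical choice items in the style of Table 3a (payoffs are made up, not the paper's).

def expected_value(prob, payoff):
    """EV of a simple 'prob chance of payoff, otherwise nothing' gamble."""
    return prob * payoff

# Gains, gamble EV-superior: a sure $100 vs. a 75% chance of $200.
print("sure 100 vs gamble EV", expected_value(0.75, 200))    # 150.0 -> EV favors the gamble

# Gains, gamble EV-inferior: a sure $100 vs. a 25% chance of $200.
print("sure 100 vs gamble EV", expected_value(0.25, 200))    # 50.0 -> EV favors the sure thing

# Losses: a sure -$100 vs. a 75% chance of -$200.
print("sure -100 vs gamble EV", expected_value(0.75, -200))  # -150.0 -> EV favors the sure loss

# Frederick reports that high-CRT respondents gambled more in gains even when the
# gamble's EV was lower, but were more willing to accept the sure loss when the
# gamble's EV was worse (more negative).
```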
The discussion of gender differences is interesting and suggestive. As with career outcomes and the high end of the IQ distribution (not average IQs), men make up roughly two-thirds of the high CRT scorers. Since IQ and CRT are moderately but not tightly correlated, this isn't simply a replication of the IQ distribution pattern.
Men scored significantly higher than women on the CRT, as shown in Table 6. The difference is not likely due to a biased sampling procedure, because there were no significant sex differences for any other cognitive measure, except SAT math scores, for which there was a modest difference corresponding to national averages. Nor can it be readily attributed to differences in the attention or effort expended on the survey, since women scored slightly higher on the Wonderlic test, which was given under identical circumstances (included as part of a 45-minute survey that recruited respondents were paid to complete).

The patterns are strong and striking but their interpretation is challenging.
It appears, instead, that these items measure something that men have more of. That something may be mathematical ability or interest, since the CRT items have mathematical content, and men generally score higher than women on math tests (Benbow and Stanley, 1980; Halpern, 1986; Hyde, Fennema and Lamon, 1990; Hedges and Nowell, 1995). However, men score higher than women on the CRT, even controlling for SAT math scores. Furthermore, even if one focuses only on respondents who gave the wrong answers, men and women differ. Women's mistakes tend to be of the intuitive variety, whereas men make a wider variety of errors. For example, the women who miss the "widgets" problem nearly always give the erroneous intuitive answer "100," whereas a modest fraction of the men give unexpected wrong answers, such as "20" or "500" or "1." For every CRT item (and several other similar items used in a longer variant of the test) the ratio of "intuitive" mistakes to "other" mistakes is higher for women than for men. Thus, the data suggest that men are more likely to reflect on their answers and less inclined to go with their intuitive responses.
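The "ratio of intuitive mistakes to other mistakes" comparison is simple to picture as an analysis step. A sketch using the widgets item, with invented wrong answers purely for illustration (nothing here is the paper's data):

```python
from collections import Counter

# For the "widgets" item, the intuitive wrong answer is 100 (the correct answer is 5).
INTUITIVE = 100

def classify_errors(wrong_answers):
    """Split wrong answers into the intuitive error vs. everything else."""
    counts = Counter("intuitive" if a == INTUITIVE else "other" for a in wrong_answers)
    return counts["intuitive"], counts["other"]

# Made-up wrong answers, chosen only to mirror the qualitative pattern described above.
women_wrong = [100, 100, 100, 100, 20]
men_wrong = [100, 100, 20, 500, 1]

for label, wrong in [("women", women_wrong), ("men", men_wrong)]:
    intuitive, other = classify_errors(wrong)
    print(f"{label}: {intuitive} intuitive vs {other} other errors")

# Frederick's claim is that this intuitive-to-other ratio is higher for women on every
# CRT item, which he reads as men reflecting more (and erring more variously) when wrong.
```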
A novel perspective, strong data, and striking outcomes that are challenging to interpret: that's an interesting paper.