Monday, March 23, 2015

If we look only at individual cognitive biases, we'll be tempted to infer that stupidity is everywhere

I have long argued that every system needs variance in order to evolve. Epistemological systems need variance in cognitive capabilities, in ideology, in class, in culture, etc. There are, I believe, a number of good reasons for this need for variance. In When Biases Collide, Chris Dillow suggests a less-than-obvious reason that I suspect has merit.

This is an example of how cognitive biases can cancel out to produce an accurate opinion

[snip]

A new paper by Thomas Eisenbach and Martin Schmalz gives us another example. Overconfidence, they say, might be used as a commitment device.

This is because many of us have time-inconsistent risk preferences: we don't worry about future risks until they are imminent, when we panic. For example, you might sign up for a charity parachute jump but then panic on the day. And actors and musicians feel stage fright just before they perform even though they chose to enter professions where they knew they'd have to go on stage. For retail investors, such preferences can be expensive. They can cause them to have heavy equity exposure in normal times, only to get cold feet when volatility increases, thus causing them to sell when prices are temporarily depressed.

Overconfidence, they say, can solve these problems. The investor who is overconfident about his abilities might think when shares fall "the market's being stupid; it'll come round to my way of thinking soon". This might be irrational overconfidence, but it saves him from the temptation to sell at the bottom. Similarly, the mediocre actor can overcome stage fright by telling himself that he's going to deliver a great performance.

I suspect that a lot of what we call rational behaviour is in fact the cancelling out of biases. This might occur within particular individuals, as in Mr Pearson's case. Or it might occur within groups. Maybe one reason why the stock market is (sometimes? often?) efficient isn't that all its participants are rational but rather that those who over-react offset those that under-react; experiments show that stupid traders can produce rational markets. There's (sometimes) wisdom in crowds and (often?) benefits to cognitive or ecological diversity because irrationalities can net out. This is why we often prefer committees to individual decision-making.

There's a political implication here. If we look only at individual cognitive biases, we'll be tempted to infer that stupidity is everywhere; after all, the list of such biases is a long one. This, though, is a mistake. We need the hard evidence of systematic error before we can infer that biases matter. And sometimes, this is lacking.

There has been a rash of books in the past decade decrying our cognitive biases, logical fallacies, empirical irrationalities, and persistent beliefs in factually disproven claims. Looking at the cumulative evidence in these books, you can only conclude that humanity has no future and little hope of progress, however progress might be defined. And yet progress we do.

My resolution to this paradox has been that our exercise of fallacies, biases, and erroneous beliefs is constrained by situational circumstances. For example, we might demonstrate a reliable risk aversion under routine circumstances yet consciously counteract that aversion under special circumstances.

Dillow offers another perspective that I find intriguing: that our plethora of biases, errors, and fallacies might balance one another out in the long run.
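To make that balancing-out intuition concrete, here is a minimal sketch of my own (a toy model, not anything from Dillow or from Eisenbach and Schmalz, and every name and number in it is an illustrative assumption): half the traders over-react to a piece of news and half under-react, yet the crowd's average lands close to the rational response even though no individual trader is unbiased.

```python
import random

random.seed(1)

TRUE_VALUE = 100.0   # the "fundamental" value traders are trying to price
NEWS = 5.0           # new information that should rationally move the price by +5
N_TRADERS = 10_000

estimates = []
for i in range(N_TRADERS):
    if i % 2 == 0:
        # over-reactors: exaggerate the news by a random factor between 1 and 2
        reaction = NEWS * random.uniform(1.0, 2.0)
    else:
        # under-reactors: dampen the news by a random factor between 0 and 1
        reaction = NEWS * random.uniform(0.0, 1.0)
    estimates.append(TRUE_VALUE + reaction)

consensus = sum(estimates) / len(estimates)
print(f"rational price:  {TRUE_VALUE + NEWS:.2f}")   # 105.00
print(f"crowd consensus: {consensus:.2f}")           # close to 105, though no single trader is unbiased
```

The point of the toy is only that opposite biases, in roughly equal strength, net out in the aggregate; it says nothing about whether real markets meet that symmetry condition.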

I discussed this long ago somewhere on this blog in terms of the programming attributes of heuristics and aphorisms. We have an array of risk-aversion sayings (a bird in the hand is worth two in the bush; better safe than sorry; look before you leap), but at the same time we have an array of aphorisms that encourage risk-taking (carpe diem; the gods help those who help themselves; better to ask forgiveness than permission; better to have loved and lost than never to have loved at all; etc.). My argument then was 1) that the richness of a language in such aphorisms likely has some impact, and 2) that the impact of such aphorisms (seen as cultural coding) was likely a product of the net deployment of such heuristics under particular circumstances, as in the sketch below.
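For what it's worth, here is a rough sketch of what I mean by the "programming attributes" of aphorisms, written as a toy model of my own devising (the situational flags and the selection rule are invented purely for illustration): the culture supplies a library of competing heuristics, and the circumstances determine which one gets deployed.

```python
# Toy model (my own framing, not from the original post): a culture supplies
# competing heuristics, and the situation decides which one gets deployed.

RISK_AVERSE = [
    "a bird in the hand is worth two in the bush",
    "better safe than sorry",
    "look before you leap",
]
RISK_SEEKING = [
    "carpe diem",
    "the gods help those who help themselves",
    "better to ask forgiveness than permission",
]

def deploy_heuristic(downside_is_severe: bool, opportunity_is_rare: bool) -> str:
    """Return the saying a given situation activates.

    The two flags are invented stand-ins for 'situational circumstances';
    real cultural coding is obviously far messier than a two-flag lookup.
    """
    if downside_is_severe:
        return RISK_AVERSE[0]    # high-downside situations call up caution
    if opportunity_is_rare:
        return RISK_SEEKING[0]   # rare upside calls up boldness
    return RISK_AVERSE[1]        # default to the routine, cautious setting

print(deploy_heuristic(downside_is_severe=False, opportunity_is_rare=True))  # carpe diem
```

The behaviour we observe, on this view, is not the library itself but the net result of which entries the circumstances keep calling up.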
