Sunday, November 18, 2012

The best we can do is a system that fails a little bit better

As usual, a thoroughly insightful article by Megan McArdle, Chasing the Tails.
Over the last few years, I have been having a lot of earnest conversations with people in the financial industry, and people who cover it, about the extent to which the crisis was produced, and/or worsened, by the attempt to quantify, or at least numerify, the exact amount of risk in the system so that it could be hedged and regulated away. "Value at risk" and its near kin produced the illusion of safety, as Taleb (to whom Davies is responding) has been screaming for years. Worse, it produced a systematically biased illusion of safety; everyone was making pretty much the same mistake. That mistake was to take the sort of risk that is safe 99.995% of the time--but catastrophic when everyone's bets went south at once. We may have minimized the number of bank failures in normal times only by increasing the risk of a single, catastrophic event that took the whole system down.
I agree. As we extract risk and error from any system, we increase short-term tactical efficiency at the expense of long-term adaptability. We typically reduce the large number of minor tremors and instead store up geological energy for a few large earthquakes. The aggregate volume of adjustment is the same; the only difference is whether we take it in small increments or in large lumps. And sometimes, of course, we don't get to choose.
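
To make the arithmetic concrete, here is a minimal Monte Carlo sketch of the point (my own illustration, not McArdle's; the bank count, correlation, and loss threshold are assumptions chosen purely for the example). Each bank in isolation looks roughly 99.995% safe per year; the only thing that changes between the two runs is whether the banks' bets are independent or driven by a shared shock.

```python
import numpy as np

rng = np.random.default_rng(0)
N_BANKS, N_YEARS = 100, 100_000
THRESHOLD = 3.89  # a standard-normal loss this large occurs ~0.005% of the time

def simulate(correlation):
    """Return (average failures per year, worst single year).

    Each bank's loss mixes a shared market shock with an idiosyncratic one.
    The marginal failure probability per bank is ~0.005% either way, so every
    bank looks "99.995% safe" when assessed on its own.
    """
    shared = rng.standard_normal((N_YEARS, 1))
    idiosyncratic = rng.standard_normal((N_YEARS, N_BANKS))
    loss = np.sqrt(correlation) * shared + np.sqrt(1 - correlation) * idiosyncratic
    failures = (loss > THRESHOLD).sum(axis=1)
    return failures.mean(), failures.max()

print("independent bets:", simulate(0.0))  # tiny average, worst year loses a handful of banks at most
print("correlated bets: ", simulate(0.9))  # same tiny average, worst year takes down a large fraction of the system
```

The average failure rate is the same in both runs; what changes is whether the adjustment arrives as many small tremors or as one large earthquake.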

This paragraph also taps into one of my other bugbears: our tendency to mistake precision for accuracy (a good discussion of the distinction can be found in Chapter 8 of Samuel Arbesman's The Half-Life of Facts). 99.995% is a reasonably precise measure of risk, but as experience has demonstrated, it rested on faulty knowledge and models and was therefore highly inaccurate.
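
A toy numerical illustration of precision versus accuracy (my own sketch, not Arbesman's; the fat-tailed "reality" and the six-sigma threshold are assumptions for the example): fit a thin-tailed normal model to fat-tailed returns and the model will quote a tail probability to as many decimal places as you like, yet the number is orders of magnitude away from what the data actually do.

```python
import math
import numpy as np

rng = np.random.default_rng(1)

# "Reality": fat-tailed returns (Student's t with 3 degrees of freedom).
returns = rng.standard_t(df=3, size=1_000_000)

# A normal model fitted to the same data.
mu, sigma = returns.mean(), returns.std()
threshold = mu - 6 * sigma

# Tail probability of a six-sigma loss under the fitted normal model.
modelled = 0.5 * math.erfc(6 / math.sqrt(2))
# Frequency of losses that bad in the data itself.
empirical = (returns < threshold).mean()

print(f"model's tail probability:   {modelled:.2e}")   # ~1e-9: impressively precise
print(f"empirical tail probability: {empirical:.2e}")  # orders of magnitude larger: precisely wrong
```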

McArdle's comment
As one fixed-income manager mordantly noted, "The most dangerous thing in the world is a nominally risk free asset."
calls to mind that quote from Douglas Adams's Hitchhiker's Guide to the Galaxy series:
The major difference between a thing that might go wrong and a thing that cannot possibly go wrong is that when a thing that cannot possibly go wrong goes wrong it usually turns out to be impossible to get at and repair.
This conclusion is, I think, correct, but it is one that everyone wishes to avoid:
And yet it seems that we could move more in that direction. Reduce the leverage in the system, and hope that means fewer events like 2008.

Notice I said "fewer", not "none". Neither markets nor government are perfectible; the best we're going to get is ones that work pretty well most of the time. In 2005, everyone--homebuyers, bankers, regulators, legislators--was making essentially the same mistake. And while it's more comfortable to believe that this was malevolent, the more prosaic truth is probably that sometimes large groups of people get stuff badly wrong. We can't plan our way to a risk free system. The best we can do is a system that fails a little bit better.
Our core portfolio of knowledge is reasonably good in the near term. In order to maximize productivity (get richer), we are always seeking to drive out variation and error. However, the more efficient we become at producing income in the short term, the more we put our accumulated wealth at risk in the long term. In the far future, and at the outer boundaries of our knowledge, there is far greater uncertainty and variability, and we have to be much more adaptable to surprises and exogenous shocks. Near-term tactical efficiency demands stasis, predictability, and stability, which is counter to what we need in the strategic long term.

Unconsciously, we are always stumbling around trying to find some optimum balance between the stability that cultivates short-term efficiency and the adaptability that prepares us for long-term effectiveness. These two goals are rarely clearly perceived, and the mechanisms that mediate them are at best poorly comprehended.

There is another lesson in the recent financial crisis. We explain short-term failures in narrative fashion, hewing closely to acceptable archetypes: "The financial crisis was caused by greedy and morally corrupt bankers whose actions were exacerbated by lethargic and ignorant regulators," or some such.

The truth of the future is different. We approach the veil of our ignorance through data and measurement, and we learn new things in increments; we do not comprehend the whole truth. The Human System is multivariate, complex, chaotic, non-linear, homeostatic, self-correcting, contextually sensitive, dynamic, and heterogeneous, and it is riddled with tipping points and hidden feedback loops. It is natural that the more distant future should be less clear than the stark present; there is simply too much going on. Make an accurate prediction and you make out like a bandit, but very few people usefully predict anything very far in the future.
