Friday, June 5, 2020

But humility is a scarce commodity.

An excellent article making many of the points I have been discussing in recent years. From Covid vs. Climate Modeling: Cloudy With a Chance of Politics by Eric Felten.

Models can be useful when we have a deep understanding of causal mechanisms, clean and comprehensive data, and a comprehension of component sensitivities. They can then give us precise and accurate information that is useful in decision-making. Without those components, all models give us are precise forecasts which are not accurate. And outside the modeling community (and even sometimes within it), people get distracted by the precision and equate precision with accuracy. A fatal error.
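To make that distinction concrete, here is a toy sketch in Python (all numbers invented): a model with a structural error produces forecasts that are tightly clustered, and therefore look precise, while remaining consistently far from the truth.

```python
# Toy illustration: a structurally biased model yields tightly clustered
# (precise) forecasts that are consistently wrong (inaccurate).
import random

random.seed(42)

TRUE_VALUE = 10.0   # the quantity being forecast
MODEL_BIAS = 3.0    # structural error from a misunderstood causal mechanism
RUN_NOISE = 0.05    # tiny run-to-run variation, so the output looks precise

runs = [TRUE_VALUE + MODEL_BIAS + random.gauss(0.0, RUN_NOISE)
        for _ in range(1000)]

mean = sum(runs) / len(runs)
spread = (sum((x - mean) ** 2 for x in runs) / len(runs)) ** 0.5

print(f"forecast: {mean:.2f} +/- {spread:.2f}")     # ~13.00 +/- 0.05: precise
print(f"error vs. truth: {mean - TRUE_VALUE:.2f}")  # ~3.00: not accurate
```

The ensemble spread (0.05) says nothing about the bias (3.0); judging a model by the tightness of its forecasts measures only the former.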

The challenge is that our rapidly evolving IT capabilities have allowed us to tackle increasingly complex forecasting problems even though our incomplete understanding of the causal mechanisms, the absence of clean data, and our incomprehension of component sensitivities should preclude such modeling.

The most difficult systems to model are those which are complex, chaotic, constantly evolving, and loosely coupled: health, disease, economics, climate, voting patterns, and the like.

It is not that we should not attempt to forecast. It is that the value of modeling complex, chaotic, constantly evolving, loosely coupled systems lies in the discussion, not the forecast. The models will be wrong. But occasionally we obtain insights that make them useful.

Climate modeling is still in the development phase. We are still dealing with patchy, inconsistent, and unreliable data. We still wrestle with integrating regional climate models into global climate models. And fundamentally, the cycle of adjusting the models and acquiring new data is much shorter than the time required to verify the accuracy of the forecasts.

Some key passages in the article:
"The key message,” Hulme tells RCI, is not to “mistake model-land for the real world. They are two separate places.” All models are wrong, he says, but some are useful. “Models are far better as tools to help us think with than they are as truth oracles. We must not think that models have some privileged access to ‘the future.’ That would likely lead to some very poor decisions.”

[snip]

If it is hard to model a single phenomenon, it is exponentially more difficult when a given model contains submodels, each with its own uncertainties. “Each time you add a new submodel, you are adding new degrees of freedom to the system with new feedbacks,” says Judith Curry, former chair of the School of Earth and Atmospheric Sciences at the Georgia Institute of Technology. “Then when you couple the new submodel to the larger model, you add additional degrees of freedom to each variable that the new submodel connects with.” In other words, with every submodel added the possibility of error compounds, multiplying the chance that the main model veers off target. “This issue,” Curry says, “remains at the heart of many of the problems and uncertainties in global climate models.”

[snip]

To make those adjustments, some climate modelers follow theories; some use observations; some just make a “back-of-the-envelope estimate.” But it isn’t done randomly: Hourdin et al. write, “[S]ome models are explicitly, or implicitly, tuned to better match the 20th century warming.”
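Curry’s compounding point above is easy to see with a toy calculation (my illustration, not from the article): if each of n coupled submodels contributes even a modest independent multiplicative error, the spread of the combined output grows steadily with n.

```python
# Toy calculation: the spread of a product of n independent error factors
# grows with n. Invented numbers; this is not a climate model.
import random

random.seed(0)

def combined_spread(n_submodels, per_model_error=0.10, trials=10000):
    """Standard deviation of the product of n independent error factors,
    each drawn uniformly from [1 - e, 1 + e]."""
    e = per_model_error
    outcomes = []
    for _ in range(trials):
        product = 1.0
        for _ in range(n_submodels):
            product *= random.uniform(1 - e, 1 + e)
        outcomes.append(product)
    mean = sum(outcomes) / trials
    return (sum((x - mean) ** 2 for x in outcomes) / trials) ** 0.5

for n in (1, 5, 10, 20):
    print(f"{n:2d} submodels -> spread of combined output ~ {combined_spread(n):.3f}")
```

And this toy version assumes the errors are independent; the feedbacks between submodels that Curry describes can make matters worse still.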
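The tuning Hourdin et al. describe is, mechanically, parameter fitting: adjust a free knob until the model reproduces a known record. The catch, sketched below with invented data and model forms, is that structurally different models can each be tuned to the same history and still imply very different futures.

```python
# Two structurally different toy "models", each with one free scale
# parameter tuned by least squares to the same invented historical record.
# Both hindcast reasonably well; their extrapolations diverge.
import math

history = [(t, 0.01 * t) for t in range(100)]  # invented "observed" record

def fit_scale(model, data):
    """Least-squares scale a minimizing sum((y - a * model(t))^2)."""
    num = sum(y * model(t) for t, y in data)
    den = sum(model(t) ** 2 for t, _ in data)
    return num / den

def linear(t):
    return float(t)

def saturating(t):
    return 1.0 - math.exp(-t / 200.0)

a = fit_scale(linear, history)
b = fit_scale(saturating, history)

for t in (50, 99, 200, 400):  # similar over the record, diverging beyond it
    print(f"t={t:3d}: linear={a * linear(t):5.2f}  "
          f"saturating={b * saturating(t):5.2f}")
```

A good fit to the 20th century, in other words, is evidence of successful tuning, not necessarily of a correct mechanism.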

Whether it’s epidemiology, climate, or economics, says Sally Cripps of the University of Sydney, modelers need to “acknowledge and explain the uncertainty” in their enterprise. “The data science community in particular needs a little more humility,” she says. “It needs to hose down claims about Big Data being a crystal ball, and instead use the data to understand what we don’t know. That is the way forward.”
But humility is a scarce commodity when your livelihood depends on bold pronouncements.
