Sunday, December 6, 2015

You can get it right, or you can make it intuitive, but it’s all but impossible to do both.

Two separate points stand out in Not Even Scientists Can Easily Explain P-values by Christie Aschwanden. The first is this:
What I learned by asking all these very smart people to explain p-values is that I was on a fool’s errand. Try to distill the p-value down to an intuitive concept and it loses all its nuances and complexity, said science journalist Regina Nuzzo, a statistics professor at Gallaudet University. “Then people get it wrong, and this is why statisticians are upset and scientists are confused.” You can get it right, or you can make it intuitive, but it’s all but impossible to do both.
This sounds right. Only on rare occasions have I had to work with p-values as a critical issue. Each time I have to go back to the original definitions, find the nuanced meaning, and then wrestle with applying that nuance to a topic that has to be communicated to a larger audience. A fool's errand indeed.
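To pin the definition down for myself, the cleanest route I know is a simulation. A minimal sketch, in Python with made-up numbers: the p-value is the probability, assuming the null hypothesis is true, of a test statistic at least as extreme as the one observed. Nothing more.

```python
import numpy as np

rng = np.random.default_rng(0)

observed = 2.1  # hypothetical observed z-statistic (an invented number)

# Distribution of the statistic in a world where the null is true.
null_draws = rng.standard_normal(100_000)

# The p-value: the fraction of null-world outcomes at least as
# extreme (two-sided) as what we actually observed.
p_value = np.mean(np.abs(null_draws) >= abs(observed))
print(f"p ≈ {p_value:.3f}")  # ~0.036 for z = 2.1

# Note what this is NOT: not the probability the null is true, and
# not the probability the result is a fluke. It conditions on the null.
```

That conditioning is exactly the nuance that evaporates when the concept is made intuitive.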

"You can get it right, or you can make it intuitive, but it’s all but impossible to do both" is exactly right. And it is true across a much broader range of issues than p-values. When dealing with evidence of racial or gender bias, for example, the range of confounding variables is typically large and the effect sizes are small. In many cases, the right answer is that there is no evidence of discrimination. That is not to say that there is no discrimination. But there is no provable discrimination once you correctly account for the effect sizes and the confounding variables. The correct answer is not the intuitive answer, and it is hard to reconcile that with audiences that are often much more accustomed to rhetorical argument than to rational argument.
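A small simulation makes the point sharper than prose can. In the sketch below (the effect size and sample sizes are illustrative assumptions, not estimates from any real study), the effect is genuinely there, yet most studies of this size correctly report no significant evidence of it:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
effect, n, trials = 0.1, 100, 2_000  # small true effect, modest samples

rejections = 0
for _ in range(trials):
    a = rng.normal(0.0, 1.0, n)     # comparison group
    b = rng.normal(effect, 1.0, n)  # group with a genuine small effect
    _, p = stats.ttest_ind(a, b)    # two-sample t-test
    rejections += p < 0.05

# Power is low at this effect size: the honest finding in most of
# these studies is "no evidence," even though the effect is real.
print(f"effect detected in {rejections / trials:.0%} of simulated studies")
```

Roughly one study in ten finds the effect. "No evidence of discrimination" and "no discrimination" are different claims, and the gap between them is exactly this.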

Earlier in her piece, Aschwanden says:
Last week, I attended the inaugural METRICS conference at Stanford, which brought together some of the world’s leading experts on meta-science, or the study of studies. I figured that if anyone could explain p-values in plain English, these folks could. I was wrong.
I first came across the concept of meta-studies perhaps 35-40 years ago, in both economics and the social sciences. At the time I was knee-deep in technical studies on several issues, and I was interested in the concept of meta-studies: aggregating many small studies to shed light on an issue by increasing the effective sample size. From Wikipedia:
The aim in meta-analysis then is to use approaches from statistics to derive a pooled estimate closest to the unknown common truth based on how this error is perceived. In essence, all existing methods yield a weighted average from the results of the individual studies and what differs is the manner in which these weights are allocated and also the manner in which the uncertainty is computed around the point estimate thus generated.
Conceptually this made sense, but I wrestled with the practical execution. There seemed to me simply too many variables, and too many uncontrolled variables, between studies to usefully aggregate them in a meta-analysis. P-values were an indicative but minor concern compared to the larger issue of whether like studies were being aggregated rather than merely similar ones.
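To make the mechanics concrete, here is a minimal sketch of the inverse-variance weighted average the Wikipedia passage describes, in its simplest (fixed-effect) form, with invented per-study numbers. Cochran's Q at the end is the standard check on exactly the worry above: whether the studies look like estimates of one common truth at all.

```python
import numpy as np

# Invented per-study effect estimates and their standard errors.
effects = np.array([0.30, 0.10, 0.25, -0.05])
ses = np.array([0.15, 0.10, 0.20, 0.12])

weights = 1.0 / ses**2  # precision weights: tighter studies count more

# Fixed-effect pooled estimate and its standard error.
pooled = np.sum(weights * effects) / np.sum(weights)
pooled_se = np.sqrt(1.0 / np.sum(weights))

# Cochran's Q: large values suggest the studies are not estimating
# a single common effect, i.e. "similar" rather than "like" studies.
Q = np.sum(weights * (effects - pooled) ** 2)

print(f"pooled = {pooled:.3f} ± {pooled_se:.3f} (SE), Q = {Q:.2f}")
```

The weighted average itself is trivial; everything contentious lives in whether those studies belonged in the same pool to begin with.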

I have had little cause in the subsequent decades to revise my opinion. These are smart people doing interesting research, but I remain extremely guarded about the validity of any conclusion arising from a meta-study.

Given the increasing evidence of how sloppily studies are conducted in the social sciences, that skepticism appears to have been well warranted.
