He knows what he is talking about. He is the expert on fact-based marriage counseling.
Or not. From The Seven Principles for Making Marriage Work, a book review by Scott Alexander.
Another case of a self-appointed expert who is at best mistaken and at worst a con artist, yet who is widely accepted by those in a position to know better. The core claim by John Gottman that Alexander examines:
After years of research…I am now able to predict whether a couple will stay happily together or lose their way. I can make this prediction after listening to the couple interact in our Love Lab for as little as five minutes! My accuracy rate in these predictions averages 91 percent over three separate studies. In other words, in 91 percent of the cases where I have predicted that a couple’s marriage would eventually fail or succeed, time has proven me right. These predictions are not based on my intuition or preconceived notions of what marriage “should” be, but on the data I’ve accumulated over years of study.

91% is impressively high and impressively precise. But is it impressively accurate?
Richard Heyman published the definitive paper on this in 2001, The Hazards Of Predicting Divorce Without Crossvalidation (kudos to Laurie Abraham of Slate, the only one of the journalists covering Gottman to find and mention this, and my source for some of the following). Heyman notes that Gottman doesn’t predict divorce at all. He postdicts it. He gets 100 (or however many) couples, sees how many divorced, and then finds a set of factors that explain what happened.

Despite Gottman’s claims about his knowledge, research, and expertise, it’s all nonsense, and it has all the appearances that Gottman knows it is nonsense.
Confused about the difference between prediction and postdiction? It’s a confusing concept, but let me give an example, loosely based on this Wikipedia article. The following rule accurately matches the results of every US presidential election since 1932: the incumbent party will win the election if and only if the Washington Redskins won their last home game before the election – unless the incumbent is black or the challenger attended a Central European boarding school, in which case it will lose.
In common language, we might say that this rule “predicts” the last 22 presidential elections, in the sense that knowing the rule and the Redskins’ record, we can generate the presidential winners. But really it doesn’t predict anything – there’s no reason to think any future presidential elections will follow the rule. It’s just somebody looking to see what things coincidentally matched information that we already have. This is properly called postdiction – finding rules that describe things we already know.
Postdictive ability often implies predictive ability. If I read over hospital records and find that only immunodeficient people caught a certain virus, I might conclude I’ve found a natural law – the virus only infects immunodeficient people – and predict that the pattern will continue in the future.
But this isn’t always true. Sometimes, especially when you’re using small datasets with lots of variables, you get rules that postdict very well, not because they describe natural laws, but just by coincidence. It’s coincidence that the Redskins’ win-loss record matches presidential elections, and with n = 22 datapoints, you’re almost certain to get some coincidences like that.
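To see how easy it is to stumble on a rule like the Redskins one, here is a small simulation (mine, not from Alexander's post; the number of candidate rules is an arbitrary illustration): generate 22 coin-flip outcomes, then search a pool of equally random binary "rules" for one that happens to match them.

```python
import random

random.seed(0)

N_ELECTIONS = 22            # one outcome per election, like the Redskins rule
N_CANDIDATE_RULES = 10_000  # random binary "signals" to search through

# True outcomes: 22 coin flips (did the incumbent party win?)
outcomes = [random.randint(0, 1) for _ in range(N_ELECTIONS)]

# Each candidate rule is just 22 more coin flips, with no causal connection.
best_match = 0
for _ in range(N_CANDIDATE_RULES):
    rule = [random.randint(0, 1) for _ in range(N_ELECTIONS)]
    matches = sum(r == o for r, o in zip(rule, outcomes))
    # A rule that is wrong every time is just as "useful" -- invert it.
    best_match = max(best_match, matches, N_ELECTIONS - matches)

print(f"Best coincidental rule matches {best_match}/{N_ELECTIONS} elections")
# With thousands of random rules and only 22 datapoints, a near-perfect
# match is almost guaranteed -- pure coincidence, zero predictive power.
```

Nothing in the pool of rules knows anything about elections, yet something in it will "explain" them anyway. That is postdiction.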
Even an honest attempt to use plausible variables to postdict a large dataset will give you a prediction rule that’s a combination of real natural law and spurious coincidence. So you’re not allowed to claim a certain specific level of predictive ability until you’ve used your rule to predict out-of-training-data events. Gottman didn’t do this.
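Here is a sketch of what out-of-training-data testing catches (my illustration, not Heyman's or Gottman's actual analysis; scikit-learn, the decision tree, and the pure-noise features are all stand-ins I'm assuming for clarity): fit a flexible model to data where the outcome is unrelated to the features, and compare in-sample accuracy to held-out accuracy.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

# 100 "couples", 20 noise features; outcomes are unrelated to the features.
X = rng.normal(size=(100, 20))
y = rng.integers(0, 2, size=100)

# Hold out half the data BEFORE fitting -- the step Gottman skipped.
X_train, X_test = X[:50], X[50:]
y_train, y_test = y[:50], y[50:]

model = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)

print(f"Training accuracy: {model.score(X_train, y_train):.0%}")  # ~100%
print(f"Held-out accuracy: {model.score(X_test, y_test):.0%}")    # ~50%, chance
# The in-sample number is the one that looks like "91%"; only the
# held-out number says anything about real predictive ability.
```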
In his paper, Heyman creates a divorce prediction algorithm out of basic demographic data: husband and wife’s education level, employment status, etc. He is able to achieve 90% predictive success on the training data – nearly identical to Gottman’s 91% – without any of Gottman’s hard work. No making the couples spend days in a laboratory and counting up how many times they use I-statements. No monitoring their blood pressure as they gaze into each other’s eyes. Heyman never met any of his couples at all, let alone analyzed their interaction patterns. And he did just as well as Gottman did at predicting divorce (technically he predicted low scores on a measure of marital stability; his dataset did not include divorce outcomes).
Then he applied his prediction rule to out-of-sample couples. Accuracy dropped to 70%. There is no reason to think Gottman’s accuracy wouldn’t drop just as much. But 70% is around the accuracy you get if you predict that nobody will divorce. It’s little better than chance, and all of Gottman’s claims to be a master divorce predictor are totally baseless.
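The arithmetic behind that baseline is worth spelling out (the ~30% divorce base rate is what the 70% figure implies, not a number stated in the post):

```python
# If "predict that nobody will divorce" is right 70% of the time,
# then roughly 30% of the couples in the sample divorced.
divorce_rate = 0.30          # base rate implied by the 70% figure
lazy_accuracy = 1 - divorce_rate
print(f"Always predicting 'no divorce': {lazy_accuracy:.0%} accurate")
# A 70%-accurate "predictor" therefore does no better than this
# zero-information baseline.
```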
Most critically, regardless of what Gottman claims and what he thinks his research shows, when his work is examined and tested by independent third parties there is a big gap between claim and evidence.
What happens when people who aren’t Gottman evaluate the Gottman method? A large government-funded multicenter study testing a Gottman curriculum as well as several others found no effect of any of them on marital outcomes; control couples actually stayed together slightly more than ones who got marriage counseling. The Gottman curriculum seemed to do worst of the three curricula studied, although there were no statistical tests performed to prove it. I have no explanation for this.

Alexander is usually pretty good at sorting wheat from chaff, and he takes a lot of time and effort to discover what is true versus what is simply cognitive pollution.