From “Researchers Found Puberty Blockers And Hormones Didn’t Improve Trans Kids’ Mental Health At Their Clinic. Then They Published A Study Claiming The Opposite,” a critique of Tordoff et al. (2022) by Jesse Singal.
It drives me crazy when a scientific paper reports a significant positive impact but does not reveal the effect size. "It caused great improvement" can cover everything from a 1% improvement to a five-fold improvement. Just tell us how big the improvement was.
And they don't. For many researchers, that's too big an ask. When there is no effect size, it is an enormous tell that the research is rubbish, that the improvement wasn't especially material, or both. It is a red flag, just like a weak methodology description, a small sample size, a failure to control for other variables, or a refusal to share the data.
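To make the effect-size point concrete, here is a minimal sketch with made-up numbers (nothing below comes from the study): the same "significant improvement" headline can sit on top of a trivially small standardized effect or a very large one, and a statistic like Cohen's d is one common way to report which it is.

```python
# Hypothetical illustration only: how two "improvements" that both clear a
# significance threshold can have wildly different effect sizes (Cohen's d).
import math
import statistics


def cohens_d(before, after):
    """Standardized mean difference between two samples, using the pooled SD."""
    n1, n2 = len(before), len(after)
    s1, s2 = statistics.variance(before), statistics.variance(after)
    pooled_sd = math.sqrt(((n1 - 1) * s1 + (n2 - 1) * s2) / (n1 + n2 - 2))
    return (statistics.mean(after) - statistics.mean(before)) / pooled_sd


# Made-up depression scores (lower is better) before and after a treatment.
baseline = [50, 52, 48, 51, 49]
tiny_change = cohens_d(baseline, [50, 51, 48, 51, 49])   # scores barely move
large_change = cohens_d(baseline, [42, 44, 40, 43, 41])  # scores drop a lot

print(f"small effect:  d = {tiny_change:.2f}")   # close to zero
print(f"large effect:  d = {large_change:.2f}")  # large negative (improvement)
```

A paper that reports only "depression improved significantly" gives the reader no way to tell which of these two situations it is describing.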
In this case, most of these weaknesses are on display, in research on an issue that is undeservedly topical.
The study was propelled into the national discourse by a big PR push on the part of UW–Seattle. It was successful — Diana Tordoff discussed her and her colleagues’ findings on Science Friday, a very popular weekly public radio science show, not long after the study was published.

All the publicity materials the university released tell a very straightforward, exciting story: The kids in this study who accessed puberty blockers or hormones (henceforth GAM, for “gender-affirming medicine”) had better mental health outcomes at the end of the study than they did at its beginning.

The headline of the emailed version of the press release, for example, reads, “Gender-affirming care dramatically reduces depression for transgender teens, study finds.” The first sentence reads, “UW Medicine researchers recently found that gender-affirming care for transgender and nonbinary adolescents caused rates of depression to plummet.” All of this is straightforwardly causal language, with “dramatically reduces” and “caused rates… to plummet” clearly communicating improvement over time.
But the sample size was small, there were no controls for confounding variables, the methodological description was weak, the data were not shared, and the claimed effect size was exaggerated.
And no one in the University of Washington PR department, no one at JAMA Network Open, and no one at NPR or any of the other mainstream outlets that interviewed the researchers bothered to actually check the findings.
Singal does. And he is not impressed. Despite the claims, those receiving the treatment did not, in fact, improve. The researchers were able to create the impression of such findings only through a treatment of statistics so abusive that it seems to rule out mere incompetence and verge on corruption.
Singal's whole piece is surprisingly accessible as he walks through the statistical nuances, but the upshot is that it is hard to see this as research with any rigor or any probability of being accurate. There are also many other reasons to regard the finding as improbable, which Singal goes into as well.
Read the whole thing.