Monday, August 29, 2011

We often turn out to be wrong, even with giant, classic papers

"Studies of studies show that we get things wrong", by Ben Goldacre.
In 2005, John Ioannidis gathered together all the major clinical research papers published in three prominent medical journals between 1990 and 2003: specifically, he took the "citation classics", the 49 studies that had been cited more than 1,000 times by subsequent academic papers.

Then he checked whether their findings had stood the test of time, conducting a systematic literature search so that he was consistent in tracking down the subsequent data. Of his 49 citation classics, 45 had found that an intervention was effective, but in the time that had passed, only about half of those findings had been positively replicated. Seven studies (16%) were flatly contradicted by subsequent research, and for a further seven, follow-up work found that the benefits originally identified were real, but more modest than first thought.
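As a quick sanity check on those figures, here is a minimal sketch in Python; the counts are the ones quoted above, the category labels are my own shorthand, and "half" is treated as an approximation rather than an exact number from the paper.

    # Figures quoted above from Ioannidis' 2005 review of 49 "citation classics".
    positive = 45                 # studies reporting an effective intervention
    contradicted = 7              # later flatly contradicted
    more_modest = 7               # benefits confirmed, but smaller than first reported
    replicated = positive // 2    # "only about half ... positively replicated" (approximate)

    for label, n in [("flatly contradicted", contradicted),
                     ("more modest than first thought", more_modest),
                     ("positively replicated (approx.)", replicated)]:
        print(f"{label}: {n}/{positive} = {n / positive:.0%}")
    # flatly contradicted: 7/45 = 16%  -- the 16% figure quoted in the text

The only exact check this buys you is that 7 out of 45 does indeed round to the quoted 16%; the remaining studies are not broken down further in the column summarised here.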

This looks like a reasonably healthy state of affairs: there are probably true tales of dodgy peer reviewers delaying publication of findings they don't like, but overall, claims are routinely shown to be wrong in academic journals. The other side of the coin should not be neglected, though: we often turn out to be wrong, even with giant, classic papers. So it pays to be cautious with dramatic new findings; if you blink you might miss a refutation, and there is never an excuse to stop monitoring outcomes.
