From Science Fictions by Stuart Ritchie. Page 148.
At the centre of the arsenic-life story was NASA’s press release. Many people don’t realise that scientific press releases aren’t written only by press officers and PR agents: the scientists themselves are heavily involved. Indeed, sometimes they draft the entire release. The scenario where an innocent researcher is minding their own business when the media suddenly seizes on one of their findings and blows it out of proportion is not at all the norm. The main problem with press releases isn’t that they routinely report earth-shaking findings that turn out to be erroneous. Instead, it’s that they puff up the results of often perfectly serviceable scientific papers, making them seem more important, ground-breaking, or relevant to people’s lives than they really are. A 2014 study led by researchers at Cardiff University looked through hundreds of press releases for health-related scientific studies, matching them to the studies they were describing and to the eventual news stories they produced. It found that the press releases commonly engaged in three kinds of hype.
The first was unwarranted advice: press releases gave recommendations for ways readers should change their behaviour – for example, telling them to engage in a specific kind of exercise – that were more simplistic or direct than the results of the study could support. This was found in 40 per cent of the audited press releases. Another variety of hype was the cross-species leap. As we’ve seen previously, lots of preclinical medical research is done using non-human animals like rats and mice – a practice known as translational research, or animal modelling. The idea is that the basic principles of how, say, the brain or the gut or the heart work can be studied in the animal ‘model’ and then, with lots of work, the findings will eventually translate to humans, helping us design better treatments. Yet there are a lot of steps between making a discovery in mice (or in cells in a dish, or in computer simulations) and its being relevant to humans. There’s a whole cycle of development, validation and trials that must occur first, a painstaking process that can take decades. The vast majority of results from mice, somewhere around 90 per cent, don’t end up translating to human beings.
Animal researchers, of course, are well aware of this. Nevertheless, the Cardiff team found that it didn’t stop them from hyping up their press releases to imply, or even claim explicitly, that their initial-stage animal-based results had important human implications: 36 per cent of press releases did this. News stories about health research, in turn, frequently bury the admission that the study they’re describing wasn’t done in humans somewhere in the eighth or ninth paragraph. The psychophysiologist James Heathers has set up a novelty Twitter account that exists solely to retweet misleading news headlines from translational studies, such as ‘Scientists Develop Jab that Stops Craving for Junk Food’ or ‘Compounds in Carrots Reverse Alzheimer’s-Like Symptoms’ with a simple but accurate addition: ‘… IN MICE’.
The third kind of hype found by the Cardiff team was possibly the most embarrassing. Everyone, especially scientists, is supposed to know that correlation is not causation. This basic insight is taught in every elementary statistics course and is a perennial feature of public debates about science, education, economics and more. When scientists look at an observational dataset, where data have been gathered without any randomised experimental intervention – say, a study charting the growth in children’s vocabulary as they get older – they’re generally just looking at correlations. That’s nothing to be ashamed of: there’s a lot we can learn about how things relate to each other in the world and building up an accurate picture of patterns of correlation is an essential foundation for understanding systems like the brain or society. We need to be awfully careful about how we interpret those correlations, however. If we find that drinking more coffee is correlated with having a higher IQ (which, by the way, it is), we can’t conclude that ‘coffee raises your IQ’. The causal arrow could just as easily point in the opposite direction, with being smarter making you drink more coffee. Alternatively, a third factor – such as being from a more affluent socioeconomic class, which might make you healthier and thus give you a higher IQ, as well as leading you to drink more coffee because it’s fashionable in your social circles – could be causing both of the others. These points are straightforward and boringly well-rehearsed, yet 33 per cent of the press releases in the Cardiff study threw causal caution to the wind and made it sound as if their observational, correlational results came from a randomised experiment that could reveal what caused what.
Hype in press releases was linked to hype in the news. The Cardiff researchers found that if the press release exaggerated the claim first, similar exaggeration in the media was 6.5 times more likely for advice claims, 20 times more likely for causal claims, and a whopping 56 times more likely for translational claims. (When the press release was more circumspect, journalists only exaggerated a small amount.) And although this itself was merely a correlational study, the Cardiff team followed up in 2019 with an impressive randomised trial. They worked with university press offices to modify randomly selected press releases by adding unwarranted causal statements into them, and compared their effects to releases that were more aligned with the evidence. As went the press releases, so went the headlines: exaggeration caused exaggeration. Another trial from 2019 filled in the next part of the story: hyped health news stories really did make readers more likely to believe a treatment was beneficial.