Monday, November 23, 2020

In every single one of the original papers, for every single one of the experiments reported, there wasn’t enough information provided for researchers to know how to re-run the experiment.

 From Science Fictions: How Fraud, Bias, Negligence, and Hype Undermine the Search for Truth by Stuart Ritchie.  Page 37.

Around the time that the replication crisis was brewing in psychology, scientists at the biotechnology company Amgen attempted to replicate fifty-three landmark ‘preclinical’ cancer studies that had been published in top scientific journals (preclinical studies are those done at an early stage of drug development, perhaps in mice or in human cells in vitro).  A mere six of the replication attempts (that is, just 11 per cent) were successful. The results from similar attempts at another firm, Bayer, weren’t much more impressive, at around 20 per cent.  This lack of a firm underpinning in preclinical research might be among the reasons why the results from trials of cancer drugs are so often disappointing – by one estimation, only 3.4 per cent of such drugs make it all the way from initial preclinical studies to use in humans.  

Just like in psychology, these revelations made cancer researchers wonder about the wider state of their field.  In 2013 they formed an organised, collaborative attempt to replicate fifty-one important preclinical cancer studies in independent labs.  Those studies included claims that a particular type of bacterium might be linked to tumour growth in colorectal cancer, and that some mutations found in leukaemia were related to the activity of a specific enzyme. But before the replicators could even begin, they hit a snag. In every single one of the original papers, for every single one of the experiments reported, there wasn’t enough information provided for researchers to know how to re-run the experiment.  Technical aspects of the studies – such as the specific densities of cells that were used, or other aspects of the measurements and analyses – simply weren’t included. Replication attempts ran aground, prompting voluminous correspondence with the original scientists, who often had to dig out their old lab books and contact former members of their groups who’d moved on to other jobs, to find the specific details of their studies.  Some were reluctant to collaborate: 45 per cent were rated by the replicators as either ‘minimally’ or ‘not at all’ helpful.  Perhaps they were worried that the replicators might not be competent, or that failures to replicate their results could mean their future work wouldn’t get funded.

Later, a more comprehensive study took a random sample of 268 biomedical papers, including clinical trials, and found that all but one of them failed to report their full protocol, meaning that, once again, you’d need additional details beyond the paper even to try to replicate the study.  Another analysis found that 54 per cent of biomedical studies didn’t even fully describe what kind of animals, chemicals or cells they used in their experiment.  Let’s take a moment to think about how odd this is. If a paper only provides a superficial description of a study, with necessary details only appearing after months of emailing with the authors (or possibly being lost forever), what was the point of writing it in the first place? Going back at least to Robert Boyle in the seventeenth century, recall that the original, fundamental motivation for scientists reporting all the specifics of their studies was so that others could scrutinise, and try to replicate, their research. The papers here failed that elementary test, just as the journals that published them also failed to perform their basic critical function.

 
