Sunday, November 22, 2020

A re-analysis of sixty-seven studies could only reproduce the results from twenty-two of them using the same datasets

From Science Fictions: How Fraud, Bias, Negligence, and Hype Undermine the Search for Truth by Stuart Ritchie, page 35.

Here’s something that’s perhaps even more alarming. You’d think that if you obtained the exact same dataset as was used in a published study, you’d be able to derive the exact same results that the study reported. Unfortunately, in many subjects, researchers have had terrible difficulty with this seemingly straightforward task. This is a problem sometimes described as a question of reproducibility, as opposed to replicability (the latter term being usually reserved to mean studies that ask the same questions of different data). How is it possible that some results can’t even be reproduced? Sometimes it’s due to errors in the original study. Other times, the original scientists weren’t clear enough with reporting their analysis: they took twists and turns with the statistics that weren’t declared in their scientific paper, and thus their exact steps can’t be retraced by independent researchers. When new scientists run the statistics in their own way, the results come out differently. Those studies are like a cookbook including mouth-watering photographs of meals but providing only the patchiest details of the ingredients and recipe needed to produce them.

In macroeconomics (research on, for example, tax policies and how they affect countries’ economic growth), a re-analysis of sixty-seven studies could only reproduce the results from twenty-two of them using the same datasets, and the level of success improved only modestly after the researchers appealed to the original authors for help.41 In geoscience, researchers had at least minor problems getting the same results in thirty-seven out of thirty-nine different studies they surveyed.  And when machine-learning researchers analysed a set of papers about ‘recommendation algorithms’ – the kind of computer programs used by websites such as Amazon or Netflix to suggest what you might want to buy or watch next, based on what people like you have chosen in the past – they could reproduce only seven out of the eighteen studies on the topic that had been recently presented at prestigious computer science conferences.  These papers are the real-life version of the classic Sidney Harris cartoon.
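To put the three counts quoted above on a common footing, here is a short illustrative calculation of each field's reproduction success rate (for geoscience, "success" is taken to mean the two studies out of thirty-nine that had no problems at all; the field labels and groupings are my own, not the book's):

```python
# Reproduction counts reported in the excerpt above: (reproduced, total).
surveys = {
    "macroeconomics": (22, 67),
    "geoscience (no problems at all)": (39 - 37, 39),
    "recommendation algorithms": (7, 18),
}

for field, (reproduced, total) in surveys.items():
    rate = reproduced / total
    print(f"{field}: {reproduced}/{total} = {rate:.0%}")
```

Even the best of these, the macroeconomics and machine-learning surveys, land well under half, which is what makes the cookbook analogy so apt.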
