Neuroskeptic (see original for links):
Back in 2016, psychologist Susan Fiske caused much consternation with a draft article which branded certain (unnamed) bloggers as “bullies” and “destructo-critics” who “destroy lives” through “methodological terrorism.”

For a researcher who is concerned about being criticized for weak research methods, it is kind of interesting to see her response take shape via weak research methods.
Fiske’s post (which later appeared in a more moderate version) was seen as pushback against bloggers who criticized the robustness and validity of published psychology papers. According to Fiske, this criticism often spilled over into personal attacks on certain individuals. Much debate ensued.
Now, Fiske is the senior author of the new study, which was carried out to examine the content and impact of 41 blogs that have posted on psychology methods, and, in particular, to find out which individual researchers were being mentioned (presumably, criticized) by name.
Another target of Fiske's original criticisms was Hilda Bastian. Bastian notes:
It’s tempting to let my “self-appointed data police” side loose. In some ways, the study has more relevance to a debate about weaknesses in methods in psychological science than it does to science blogging. It’s a small, disparate English-language-biased sample of unknown representativeness, with loads of exploratory analyses run on it. (There were 41 blogs, with 11,539 posts, of which 73% came from 2 blogs.) Important questions about power are raised, but far too much is made of analyses by gender and career stage for such a small and biased sample. And they studied social media, but not Twitter.

Nothing like doubling down on weak research methodologies.
But when your postmodernist ideology is so weak, anything but weak methodology would be incompatible with it.
Back to Neuroskeptic.
The included blogs (listed in the supplementary material) were a fairly comprehensive list, as far as I can see. My blog has the second largest number of posts out of all the blogs included (1,180), but this pales in comparison with Andrew Gelman’s 7,211, although that is a multi-author blog. All posts were downloaded and subjected to text mining analysis. Data was collected in April 2017.

And here is the associated graph.
The results about the bloggers’ ‘targets’ were fairly unsurprising to me. It turned out that, out of a list of 38 researchers who were nominated as potential targets, the most often mentioned name was Daryl Bem (of precognition fame), followed by Diederik Stapel (fraud), and then Brian Wansink and Jens Förster (data ‘abnormalities’).
These results seem inconsistent with the idea that bloggers were especially targeting female researchers, which had been one of the bones of contention in the 2016 debate. As the paper says:
Equal numbers of men and women were nominated, but nominated men were mentioned in posts more often
Looks like there is a power law at work. Also looks like the bloggers are appropriately going after the most egregious examples of bad research (as measured in terms of the number of resignations or retractions). Finally, it looks like gender has nothing to do with being a target. Having bad research methodologies is what drives blog attention.
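The power-law observation is easy to check informally: if mention counts follow a power law, count is roughly proportional to rank raised to a negative exponent, so the points fall on a straight line in log-log space. A minimal sketch, using hypothetical mention counts (the study's actual per-researcher numbers are not reproduced here) and a plain least-squares fit on the log-log pairs:

```python
import math

# Hypothetical mention counts per researcher, sorted descending.
# Illustrative numbers only -- not the study's actual data.
mentions = [900, 450, 300, 225, 180, 150, 129, 113, 100, 90]

# Under a power law, count ~ C * rank^(-alpha), so log(count)
# is linear in log(rank). Estimate the slope by ordinary least
# squares on the (log rank, log count) pairs.
xs = [math.log(rank) for rank in range(1, len(mentions) + 1)]
ys = [math.log(count) for count in mentions]

n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
         / sum((x - mean_x) ** 2 for x in xs))

alpha = -slope  # power-law exponent estimate
print(f"estimated exponent alpha = {alpha:.2f}")
```

With these made-up counts (roughly 900/rank), the fitted exponent comes out near 1, the signature of a classic rank-size power law; a curved log-log plot would instead argue against the power-law reading.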
It's a start, even with the bad methodology. No telling what the results might be had she used a good methodology.