My prior two posts reviewed every study I could find addressing the issue of gender bias in peer-reviewed science. . . . The key findings from that second essay are that there was far more evidence of egalitarian or pro-female bias than there is of pro-male bias.

That is technically accurate, but it seems to me somewhat misleading. Here it is in numbers.
I think there is a subtle over-emphasis in Jussim's formulation of "far more evidence", but that criticism is a matter of taste.
Out of eighteen studies, only four found a pro-male bias in peer-reviewed science, the preferred narrative of the Mandarin Class. In other words, only 22% of the research found a pro-male bias. That is true.
But I think it is probably more relevant that 44% (8/18) of the studies found no bias at all in peer-reviewed science.
Yet another way of reporting this is that, of the studies which find bias in peer-reviewed science, 50% more find a pro-female bias than a pro-male bias (6 versus 4).
Finally, yet another way of reporting this is that, of the studies which find bias in peer-reviewed science, only 40% find it to be a pro-male bias. I think this final version is probably more meaningful than "there was far more evidence of egalitarian or pro-female bias than there is of pro-male bias."
A slightly different version would be "Of all the papers investigating bias, the evidence is mixed, but overall there is little bias, and where there is, the evidence indicates a pro-female bias." I suspect that is the most effective of all.
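For concreteness, here is a minimal sketch in Python of the arithmetic behind those framings. The counts (4 pro-male, 6 pro-female, 8 no-bias, 18 total) are the ones tallied above; the variable names are my own.

```python
# Counts of the eighteen studies, as tallied above.
pro_male = 4
pro_female = 6
no_bias = 8
total = pro_male + pro_female + no_bias  # 18

# Framing against all eighteen studies.
print(f"Pro-male bias: {pro_male / total:.0%} of all studies")  # 22%
print(f"No bias:       {no_bias / total:.0%} of all studies")   # 44%

# Framing against only the studies that found any bias at all.
biased = pro_male + pro_female  # 10
print(f"Of the biased-only studies, pro-male: {pro_male / biased:.0%}")                # 40%
print(f"Pro-female excess over pro-male: {(pro_female - pro_male) / pro_male:.0%}")    # 50%
```

The same ten numbers support every formulation; the rhetoric turns entirely on which denominator you choose.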
But this is like an exercise in translating poetry between languages. All the formulations above, including Jussim's original, are technically accurate. It is a matter of taste as to which is most effective at conveying the nuance of the findings.
But the real meat of the article is not about the eighteen papers but rather about which papers are trumpeted (number of citations) and which are ignored.
The underlying data are pretty noisy.
Two things should be immediately and vividly clear from this. The studies showing peer review is unbiased or favors women:

1. tend to be based on MUCH larger samples than studies showing biases favoring men (mean/median sample sizes of 11385.67 & 2311.5 versus 825.5 & 182.5);

2. tend to be cited at much lower rates than studies showing biases favoring men (mean/median yearly citation rates of 26.83 & 9 versus 91.75 & 51.5).

The overall correlation between citations/year and sample size is -.36 (smaller studies are cited more frequently). Fraley & Vazire (2014) used this type of information to characterize the quality of journals. If used here to characterize the quality of individual articles, by this standard, the quality of articles showing bias favoring men is considerably weaker than that of articles showing peer review is unbiased or favors women.

In other words, the more likely the research is to be true (i.e. much larger sample sizes), the more likely it is to find that peer-reviewed science is unbiased or favors women AND the less likely it is to be publicized.
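To make the contrast concrete, here is a small sketch that turns the reported means and medians into ratios. It uses only the summary statistics quoted above, not the raw per-study data, and the variable names are mine.

```python
# Summary statistics as reported above (not the raw per-study data).
unbiased_or_pro_female = {"mean_n": 11385.67, "median_n": 2311.5,
                          "mean_cites_per_year": 26.83, "median_cites_per_year": 9}
pro_male = {"mean_n": 825.5, "median_n": 182.5,
            "mean_cites_per_year": 91.75, "median_cites_per_year": 51.5}

# How much larger the samples are in the unbiased / pro-female studies.
print(f"Mean sample size ratio:   {unbiased_or_pro_female['mean_n'] / pro_male['mean_n']:.1f}x")     # ~13.8x
print(f"Median sample size ratio: {unbiased_or_pro_female['median_n'] / pro_male['median_n']:.1f}x") # ~12.7x

# How much more often the pro-male-bias studies are cited.
print(f"Mean citation-rate ratio:   "
      f"{pro_male['mean_cites_per_year'] / unbiased_or_pro_female['mean_cites_per_year']:.1f}x")     # ~3.4x
print(f"Median citation-rate ratio: "
      f"{pro_male['median_cites_per_year'] / unbiased_or_pro_female['median_cites_per_year']:.1f}x") # ~5.7x
```

Roughly a tenfold difference in sample size running one way, and a several-fold difference in citations running the other.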
And that's the danger of the Mandarin Class. They are wrong and they don't want to know that they are wrong.
Of course Iowahawk knew this years ago.
Journalism is about covering important stories. With a pillow, until they stop moving.
— David Burge (@iowahawkblog) May 9, 2013