Measuring Censorship in Science Is Challenging. Stopping It Is Harder Still
By Musa al-Gharbi and Nicole Barbaro
December 14, 2023
In a new paper for the Proceedings of the National Academy of Sciences, we, alongside colleagues from a diverse range of fields, investigate the prevalence and extent of censorship and self-censorship in science.
Measuring censorship in science is difficult. It’s fundamentally about capturing studies that were never published, statements that were never made, possibilities that went unexplored and debates that never ended up happening. However, social scientists have come up with some ways to quantify the extent of censorship in science and research.
For instance, statistical tests can evaluate “publication bias” – whether papers with findings tilting a specific way were systematically excluded from publication. Sometimes editors or reviewers reject findings that don’t point in the preferred direction with the preferred magnitude. Other times, scholars “file drawer” their own papers that fail to deliver statistically significant results in the “correct” direction, because they assume (often rightly) that the study would be unable to find a home in a respectable journal, or because publishing the findings would come at a high reputational cost. Either way, the scientific literature ends up distorted, because evidence that cuts in the “wrong” direction is systematically suppressed.
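To make the idea concrete, here is a minimal sketch (not from the paper; the simulation setup is our own illustration) of one common publication-bias diagnostic, Egger's regression test, run on simulated data where studies pointing the "wrong" way are file-drawered:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Simulate 200 studies of a true null effect with varying precision.
n_studies = 200
se = rng.uniform(0.05, 0.5, n_studies)   # each study's standard error
effect = rng.normal(0.0, se)             # observed effects (true effect = 0)

# File-drawer bias: only studies whose z-score points the "right" way
# (z > 0) get published.
published = effect / se > 0
eff_pub, se_pub = effect[published], se[published]

# Egger's regression test: regress each study's z-score on its precision
# (1/SE). With no suppression the intercept is near zero; cutting off one
# tail of the evidence pushes it away from zero.
res = stats.linregress(1.0 / se_pub, eff_pub / se_pub)
t_stat = res.intercept / res.intercept_stderr
p_intercept = 2 * stats.t.sf(abs(t_stat), df=len(eff_pub) - 2)
print(f"Egger intercept: {res.intercept:.2f}, p = {p_intercept:.4g}")
```

Because the true effect here is zero, any clearly nonzero intercept is a statistical fingerprint of the suppression, even though none of the unpublished studies are ever observed directly.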
Audit studies can provide further insight. Scholars submit otherwise-identical papers but change details that should not matter (like the author’s name or institutional affiliation), or reverse the direction of the findings (leaving all else the same), to test whether acceptance rates or reviewer comments vary systematically based on who the author is or what they found. Other studies collect data on all papers submitted to particular journals in specific fields to look for patterns in whose work gets accepted or rejected and why. This can reveal whether editors or reviewers apply standards inconsistently in ways that shut out certain perspectives.
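The analysis step of such an audit is usually simple: compare acceptance rates across the experimentally varied conditions. A sketch with hypothetical, made-up counts (the numbers below are for illustration only, not from any actual audit):

```python
from scipy.stats import chi2_contingency

# Hypothetical audit outcomes: the same paper submitted under two
# different author profiles. Counts are invented for illustration.
#                 accepted  rejected
outcomes = [[45, 55],   # author profile A
            [28, 72]]   # author profile B

chi2, p, dof, expected = chi2_contingency(outcomes)
print(f"chi2 = {chi2:.2f}, p = {p:.4f}")
```

A small p-value indicates that the gap in acceptance rates between the two profiles is unlikely to be chance, which, since the papers were identical, points to the reviewers treating the authors differently.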
https://wattsupwiththat.com/2023/12/29/measuring-censorship-in-science-is-challenging-stopping-it-is-harder-still/