
In one example of scientists censoring scientific research, papers perceived as harmful may be held to a higher standard of acceptance by a journal’s editors.

Photo illustration by Justin Morrison/Inside Higher Ed | Getty Images | Rawpixel

A new paper points to an unexpected source for scientific censorship: scientists themselves.

According to the paper, published in the journal Proceedings of the National Academy of Sciences as a “perspective” piece, scientists commonly censor scientific findings for “prosocial” reasons, such as the fear that those findings could have harmful impacts, especially on marginalized groups.

That censorship can take many forms, including professors calling for the dismissal of peers who study controversial topics and ethics boards rejecting research proposals that investigate discrimination against white men more frequently than proposals investigating discrimination against other groups. Scientists also regularly censor themselves, the authors wrote, citing a survey of faculty at four-year institutions in which 25 percent reported that they were either “very” or “extremely” likely to self-censor in their academic publications.

The paper lists 39 authors, including lead author Cory Clark, director of the Adversarial Collaboration Project at the University of Pennsylvania and a behavioral scientist by trade. It draws on past research into academic censorship, as well as data from nonprofits like the Foundation for Individual Rights and Expression, which has studied instances in which researchers are targeted or attacked for their pedagogy or scholarship.

Titled “Prosocial motives underlie scientific censorship by scientists: A perspective and research agenda,” the paper highlights a type of censorship much subtler than that perpetrated, sometimes with malicious intent, by government bodies and large institutions. The authors note that it’s impossible to fully quantify instances of scientific censorship, because a work that is successfully censored will never be available to the public. Instead, they aim to draw attention to the censorship that scientists themselves commit, in an effort to change the systems that allow it to go unchecked.

“A lot of what you hear is just anecdotes from scholars who feel as though their work has been treated unfairly. But you can never know why a particular paper was rejected or whether it actually was given an unfair evaluation by scientific journals,” Clark said.

One solution the paper offers is for journals to be more transparent by publishing reviews and editorial decision letters online, with names redacted if necessary.

“Right now, the norm is for the whole peer-review process to only [be] seen internally by the reviewers and the editor on the paper, and then the authors who receive those evaluations. As a consequence of that, no scientists have access to all those data on how papers are evaluated in the peer-review process,” Clark said. “I think that opening that up would provide a lot of really useful data that scholars could analyze to test whether there are kind of double standards and how certain papers are treated.”

Clark’s interest in the topic—particularly the biases that impact scientific decision-making—dates to 2012.

In 2020, Clark herself, along with a group of her co-authors on the new paper, requested and was granted the retraction of a research paper published in Psychological Science after it received negative feedback. That paper, which investigated ties among religiosity, crime and IQ, argued that religiosity was negatively correlated with violent crime, except in nations with higher average IQs, which tended to be predominantly white, according to the data used. The paper was criticized for feeding into the racist narrative that nonwhite people have lower IQs; however, the authors ultimately said they retracted it because of problems with the underlying IQ and crime data.

The Need for Transparency

Ivan Oransky, co-founder of Retraction Watch, a website that tracks the retraction of academic articles, applauded Clark and her co-authors for suggesting ways to increase transparency in the academic publication process, which also include auditing for bias within academic journals and publicizing information about retracted articles. He hopes that such measures will help academe gain a better understanding of how often a scholar is truly being censored versus simply submitting a paper that is not up to par.

“In a lot of these discussions, I think part of the problem when you’re dealing with subjects that make some people uncomfortable is that there is a reflex, and sometimes not a well-founded reflex, to say that any criticism or certainly any retraction or strong condemnation, is censorship, when, in fact, it may be that there are just deep problems with the paper,” Oransky said.

John Slattery, director of the Grefenstette Center for Ethics in Science, Technology and Law at Duquesne University, said he appreciated the paper’s suggestions for improved transparency but also questioned whether journals have the necessary funds to rework their entire editorial processes.

“I think the overall paper is a really impressive addition to the scholarly research around censorship in general. It offers a number of really tangible suggestions, [but] I don’t know how well they’ll be received on the whole,” he said. “It’s sort of similar to the discussion around open science and opening research practices … it requires a lot of structural change on the back end of how a journal operates on a day-to-day basis.”

He also questioned another of the paper’s suggestions, which calls on the scientific community to investigate how harmful research papers actually are.

“Although concerns about potential future harms are a common justification for scientific censorship, few studies have examined the veracity of harm concerns,” the paper stated. “How likely, extensive, and imminent is the harm? Do experts agree on the likelihood and range of magnitudes? Do scholars from different identity or ideological groups hold different harm estimates? Some evidence suggests that harmful outcomes of research are systematically overestimated and helpful outcomes systematically underestimated.”

But that line of questioning, he said, overlooks many well-known historical examples of prominent scientific journals promoting and publicizing dangerous ideas, such as eugenics, within the past century and a half.

While he doesn’t object to further research on how scholarship can cause harm, the negative impact that such scholarship has historically had on disabled, Indigenous, Black and brown communities is clear, he said.

“There are thousands and thousands of examples of scientific articles published in good scientific journals that lead to real tangible harm,” he said. “It’s never really a bad thing to say, ‘Let’s try to specify harms that are going to various communities.’”
