The narrative of the reproducibility crisis has come to dominate scientific debate in recent years, with about 90 percent of respondents to a 2016 Nature survey agreeing that such a crisis existed, and more than 60 percent blaming it on selective reporting and pressures to publish.
These responses were driven by studies that found, for example, that researchers had failed to replicate the findings of 47 out of 53 landmark cancer papers, and that the results of fewer than half of prominent psychology and economics papers could be replicated.
However, a review of more than 40 recent studies on reproducibility has led Daniele Fanelli, a fellow in methodology at the London School of Economics and Political Science, to conclude that, although misconduct and questionable research methods do occur at "relatively small" frequencies, there is "no evidence" that the problem is growing.
Writing in Proceedings of the National Academy of Sciences, Fanelli points out that some recent replication studies have produced higher rates of reproducibility and says it is unfair to set more store in the results of early exploratory studies than in papers that build on previous studies and are therefore more reliable.
Reproducibility also appears to vary widely by subfield, methodology and the expertise of the researchers attempting to replicate findings, he says.
The number of yearly findings of scientific misconduct issued by the U.S. Office of Research Integrity has not increased, nor has the proportion of all investigations resulting in such a finding, based on data for 1994 to 2011, Fanelli says. And, he adds, although the number of retractions being issued by journals has risen, the number of retractions per retracting journal has not.
Fanelli questions whether pressure to publish can be blamed, highlighting that researchers who publish very frequently and in journals with high impact factors are less likely to produce papers that are retracted.
He concludes that science “cannot be said to be undergoing a ‘reproducibility crisis,’ at least not in the sense that it is no longer reliable due to a pervasive and growing problem with findings that are fabricated, falsified, biased, underpowered, selected, and irreproducible. While these problems certainly exist and need to be tackled, evidence does not suggest that they undermine the scientific enterprise as a whole.”
Fanelli told Times Higher Education that improving “how we conduct and communicate research in the 21st century is an absolute priority, [but] we don’t need to believe that there is a crisis to justify these efforts.”
“If the belief is incorrect, then we should revise it as soon as possible. If we don’t, then we risk misdirecting our efforts, ironically producing distorted and wasteful evidence in meta-research itself,” he said.
Fanelli’s arguments have sparked debate among scientists.
Christopher Chambers, professor of cognitive neuroscience at Cardiff University, said that he chooses to “steer away” from the term “crisis.” It “is emotional and polarizing, and so leads to distracting and frankly rather pointless arguments, like this one, about what to call it, rather than solving the problem,” he said.
Nevertheless, Chambers continued, the majority of life and social sciences studies were “not replicable,” and fixing this should be a priority. “Reproducibility isn’t optional; it’s central to the scientific method. If we abandon reproducibility, we abandon science,” he said.
Marcus Munafo, professor of biological psychology at the University of Bristol, said that whether the problem of reproducibility was worse than in the past was “difficult to determine, and not necessarily that relevant.” But he agreed that there were important issues to address.
“Much of the problem stems from the incentive structures that we work within -- the things that are good for scientists, like getting published, particularly in certain journals, might not be the things that are good for science,” he said. “While I wouldn’t describe where we are as a crisis, I certainly think there’s considerable scope for improvement.”
Malcolm MacLeod, professor of neurology and translational neuroscience at the University of Edinburgh, said that scientists should be wary of complacency, however. “The crisis terminology came about at a time when researchers were urging people to take notice of what was going wrong in science,” he said. “To lose that completely would be a mistake.”