Failure to Replicate
The word “replication” has, of late, set many a psychologist’s teeth on edge. Experimental psychology is weathering a credibility crisis, with a flurry of fraud allegations and retracted papers. Marc Hauser, an evolutionary psychologist at Harvard University, left academe amid charges of scientific misconduct. Daniel Kahneman, a Nobel Prize-winning psychologist at Princeton University, entered the fray in 2012 with a sharply worded email to his colleagues studying social priming. He warned of a “train wreck looming” that researchers would avoid only if they focused more diligently on replicating findings. And the journal Social Psychology devoted its most recent issue to replication – and failed to replicate a number of high-profile findings in social psychology.
Yet psychologists are not the worst offenders when it comes to replication, it turns out. That distinction might belong to education researchers, according to an article published today in the journal Educational Researcher.
Rarity of Replication
Only 0.13 percent of education articles published in the field’s top 100 journals are replications, write Matthew Makel, a gifted-education research specialist at Duke University, and Jonathan Plucker, a professor of educational psychology and cognitive science at Indiana University. In psychology, by contrast, 1.07 percent of studies in the field’s top 100 journals are replications, a 2012 study found.
Makel and Plucker searched the entire publication history of the top 100 education journals – ranked according to five-year impact factors – for the term replicat*. They found that 221 of 164,589 total articles replicated a previous study. Just 28.5 percent were direct replications rather than conceptual replications. (Only direct replications, which repeat an experiment’s procedure, can disconfirm or bolster a previous study. Conceptual replications, on the other hand, use different methods to test the same hypothesis.)
What’s more, 48.2 percent of the replications were performed by the same research team that had produced the original study. Attempts to replicate an experiment failed more often if there was no author overlap. When the same authors who published the original study published a replication in the same journal, 88.7 percent of replications succeeded. (The figure dropped to 70.6 percent when the same authors published in a different journal.) By contrast, replications conducted by new authors succeeded 54 percent of the time.
Replications might be appearing in journals outside the top 100, but these outlets attract scant scholarly notice. And some replications may not declare themselves as such, so they would not show up in such a search. These masked replications, however, fail to “serve their full duty as a replication,” Makel said in an interview.
Replications are an essential part of validating scientific knowledge. They control for sampling errors and weed out fraud. A replication might show, for instance, that an educational intervention’s effects are less pronounced than a previous study contended.
So why do so few replications appear in education journals? The article, “Facts Are More Important Than Novelty: Replication in the Education Sciences,” argues that education journals routinely prize studies that yield novel and exciting results over studies that corroborate – or disconfirm – previous findings. Conducting replications, the researchers write, “is largely viewed in the social science research community as lacking prestige, originality, or excitement.”
Researchers may fear that doing replications will not get them published, promoted or even hired. Nor will replications win them research grants, they worry. A replication that succeeds merely bolsters something we already know. A replication that fails, on the other hand, does not on its own invalidate a previous finding.
Makel and Plucker, however, say that replication matters greatly. What’s at stake, they say, is education’s standing as a discipline. Dismissing replication, they write, “indicates a value of novelty over truth … and a serious misunderstanding of both science and creativity.”
Legitimizing a Discipline
“A lot of people have made much of the difference between the natural sciences and the social sciences,” Makel said. “I do not associate science with a content area. I associate science with a process. I believe that a great many researchers in the education field would view themselves as doing science.”
An understanding of education research as a science is fairly new, said Plucker, his co-author.
Education is “definitely not at the top of hierarchy, even within the social sciences,” Plucker said. “I don’t think it’s traditionally well-respected.”
But the year 2002, when the Bush administration and Congress created the Institute of Education Sciences as the Education Department’s research arm, was a turning point for the field, he said. The last decade or so has seen tremendous reform in K-12 education – and with it, calls for research to guide public policy. In addition, education researchers have more complete data systems. They can follow a student through an entire K-12 education.
“Just the quality, especially now, of the work being done by early-career people – I just think it’s light-years advanced from where we were even seven or eight years ago,” Plucker said. “I think a lot of people now see it as a true social science.”
Replication, he said, is the next step the discipline must take in order to better legitimize itself in the eyes of other researchers and the public.
“We have better data, we have better data systems,” he said. “Now that we have those things, we really need a culture of replication and data-sharing to move us to the next level and keep this positive trajectory. If we don’t have it, can the trajectory continue? Yeah, but it’ll be a lot harder. One major fraud allegation can knock you off that razor’s edge.”
The article begins and ends with reference to the astronomer Carl Sagan’s adage that science requires a balance between openness to new ideas and “ruthless skeptical scrutiny of all ideas, old and new.” The homage might signal the aspirations the authors have for their discipline.
“When I talk to my friends in the natural sciences, they’re just baffled by how this is even a question or a controversy in psychology and education,” Makel said. “Replication is such a normal part of the process for them.”
The current “culture” of education research, he said, “puts a lot of emphasis on novelty. Whereas we’re saying, no, if we want to be respected as a scientific field we need to put more emphasis on fact.”
Neighboring fields in the social sciences – psychology, sociology, criminology – also suffer from a dearth of replications. But whereas psychology has weathered a number of fraud cases, the world of education research has not seen a single fraud accusation in years, Plucker said. That is remarkable, considering that conservative estimates place the number of educational researchers at 50,000 in the U.S. and 100,000 worldwide, according to the American Educational Research Association.
Education research’s spotless record is no badge of honor, he said. It’s the result of scholars not checking each other’s work.
“I would love to believe that every single person doing education research around the world has ethics that are as pure as the driven snow,” Plucker said. “[But] the law of averages tells us there’s something out there.”
Without replication, he said, “we’ll never know which research sits on a foundation of stone and which sits on a foundation of sand.”
Replication can lead to bruised feelings, however. The failure of a group of researchers to replicate a study by Simone Schnall, a psychologist at the University of Cambridge, led to dueling blog posts. Brent Donnellan, a professor of psychology at Michigan State University, wrote that he and his team, in attempting to replicate Schnall’s study, “encountered an epic fail.” Schnall shot back with a blog post of her own, writing that the stream of requests for her data made her feel “like a criminal suspect who has no right to a defense.” Commenting on her post, Daniel Gilbert, a psychologist at Harvard, wrote: “Simone Schnall is Rosa Parks – a powerless woman who has decided to risk everything to call out the bullies … The replication police need to apologize.”
Plucker said that “there’s always going to be hurt feelings.” Researchers just need to handle replication requests professionally and fairly. And if replication were more common, a request for data would not feel like an attack, Makel added.
What to Do?
Researchers act in the way the research community wants them to, Makel said. So the trick is to change community norms. The authors point to a few ideas.
Journals could revise editorial policies to explicitly welcome the submission of replication studies. Journals could also reserve a portion of their page space for replication research. And the government could require that research submitted to the Education Department’s What Works Clearinghouse (a review of research on “what works” in education) be directly replicated, preferably by an independent research team.
Funding, too, will play a role. “If funders set aside dollars for replications specifically … researchers would apply for that money and do that research,” Makel said.
Not every study needs to be replicated, though. “The vast majority of education research does not need to be replicated,” Makel said. “The things that need to be replicated are the things that are having an influence on society. If we can’t confirm our own results, then we lose the public trust and any credibility we hope to have on influencing policy.”
Plucker said he’d like replications to make up 10 percent of all published research – though he admits that’s unrealistic, at least for now. “The amount we have now rounds down to zero pretty easily,” he said. “We need to get it at least in the 3 to 5 percent range.”