Killing Peer Review

July 19, 2011

When a cadre of international scientific research powerhouses announced last month that they were teaming up to create a top-shelf, peer-reviewed free journal in the medical and life sciences fields, some called it a "triumph of open access" — proof that the tide was turning in favor of a once-radical movement aimed at cutting through the traditional oligarchies and turning scholarly publishing on its head.

But to Joe Pickrell, a doctoral student in biology at the University of Chicago, the idea was not groundbreaking enough. It will not do merely to lower the barriers to viewing scholarly articles, he thought; academe must lower the barriers to reviewing them.

As one might expect from an advocate of modern publishing, Pickrell took to the blogosphere. "Left unanswered … is a more fundamental question: why do we publish scientific articles in peer-reviewed journals to begin with?" the Chicago grad student wrote on Genomes Unzipped, a genetics blog he shares with other young academics. "… Cutting journals out of scientific publishing to a large extent would be unconditionally a good thing," he continued, "and that the only thing keeping this from happening is the absence of a 'killer app.' "

Pickrell went on to describe, in general terms, the features this journal-killing app would require. It would bypass the formal peer review process, taking pre-publication papers and allowing a community of users (scholars and experts, most likely) to vote papers up or down — much like social bookmarking sites such as Reddit do for articles in the popular press. The idea would be to let readers decide which articles deserve top billing, rather than ceding that task to a tiny cloister of journal editors and their hand-picked reviewers. Papers with good feedback would shoot to the top of the list. And if scholars do want proxies to help them decide whether an article is worthy of their trust and attention, they could turn to the recommendations of their friends and colleagues.

Reader comments started flowing in. Some cheered Pickrell’s post as a sort of manifesto for a rising generation of scholars. "I think this system of academic publication will continue to gain support as more people from our generation (the ones that grew up using community-oriented sites like Wikipedia, Reddit, etc.) further infiltrate academia," wrote Michael Alcorn, another biology Ph.D. candidate at Chicago. Others had not only had the same thought as Pickrell — they had actually built mock-up sites based on the principles he had described.

Still, skeptics wanted to know: In such a wild west of scholarly publishing, who would check facts? Pickrell’s answer is the same as Wikipedia’s: everybody. "I think the system could be totally self-regulating with a big enough community," he said in an interview with Inside Higher Ed. The most popular articles would receive the most attention, but they would also receive the most scrutiny. Errors would be unlikely to escape a critical mass of expert readers. Mechanisms could be put in place to report errors and redact articles. (Think Wikipedia, but with original research and a specialized corps of volunteer editors.)

The challenge, Pickrell said, would be getting scholars to actively take part in the site — which, busy as they tend to be with activities that are more likely to impress a tenure committee, might be difficult. After all, one of the top journals in the life sciences, Nature, was unsuccessful in converting reader interest into actual feedback when it experimented with open peer review in 2006. One can imagine how difficult it would be to build a successful, publicly reviewed journal from scratch.

Ken Van Haren does not have to imagine. As a doctoral student at Duke University last year, Van Haren briefly tried to start a revolution on the cheap. “Despite being the ones who brought the tech revolution about, [scientific researchers] haven’t really used it for their own good,” he said. So Van Haren created a website where scholars could post and vote up articles that had not been peer-reviewed. But he had no marketing budget and precious little time to spend outside his studies, and the site soon stagnated.

Still, Pickrell’s blog post tapped into a latent backlash against the traditional model that has given rise to pushes for open peer review in other fields. Last summer, the highly esteemed literary journal Shakespeare Quarterly opened its articles to public comment prior to publication. That experiment was reportedly more successful than Nature’s 2006 foray, and the journal is planning a second trial. Earlier this year, the Andrew W. Mellon Foundation gave MediaCommons and the New York University Press $50,000 to conduct the most formal study to date of open peer review.

These recent explorations are oriented to the humanities, but the sciences may be particularly suited to the open peer review model, said Kathleen Fitzpatrick, a professor of media studies at Pomona College and director of scholarly communication for the Modern Language Association. Scientific progress often outpaces traditional publishing, she said; working papers and other pre-publication drafts are already posted online, and some win credence (and citations) before they are formally reviewed and published.

"Because scientists have been accustomed to the open circulation of pre-publication, pre-review papers, they’re primed for thinking about new models of what constitutes peer review," Fitzpatrick said.

Ironically, the skeptics of open recommendation engines and critics of traditional journals share a concern: that articles might be given visibility for reasons other than quality. In the case of a website that would put a premium on social Web-era metrics such as “likes” or “diggs” or “retweets” — or even a more sophisticated rating system — the community of reviewers might be larger, but the criticism might be shallower. “You don’t want these things to just become popularity contests,” said Mohamed Noor, an evolutionary biologist at Duke.

Then again, Noor said, it seems a shame to cede editorial authority to only two reviewers when modern technology enables the consultation of thousands more. “I think the ideal,” he said, “would be a combination of both.”

Noor believes such a compromise already exists in Faculty of 1000. Started nearly a decade ago by former BioMed Central publisher Vitek Tracz, Faculty of 1000 does not circumvent the normal peer-review process so much as create public layers of feedback and assessment on top of it. (Its tagline is "post-publication peer review.") The site relies on a volunteer corps of faculty, which now exceeds 10,000, to flag articles they believe are important. They and subsequent readers can give the article a score of 6, 8, or 10.

All the faculty reviewers for Faculty of 1000 are subject-matter experts — a Wikipedia-style free-for-all it is not. But, in an effort to fight silo-ism, the site allows scholars to rate papers outside of their specialty. And it recently opened a separate repository, called F1000 Posters, which also allows its members to vote up conference posters and slide presentations that have not yet been published or peer-reviewed.

To the extent that it does not blow up the initial gateway of traditional peer review, Faculty of 1000 is not the “killer app” Pickrell was calling for in his blog post. But it might be an important step toward using social technology to help the cream rise to the top in scholarly publishing.

“People read papers and they discuss them,” Pickrell told Inside Higher Ed, but “they don’t necessarily discuss them online. And I think eventually they will… The issue is going to be getting people involved, and that’s going to be less and less of an issue as time goes on.”


