Removing the Blindfold
It's time to admit that peer review doesn't work the way people think it does, writes Katrina Gulliver.
What follows is a true story.
A junior scholar had been waiting months for a response on an article she had submitted to a good journal. One day she happened to be visiting a colleague’s office, as the colleague was bemoaning being hassled by an editor, having missed the deadline to “review this damn paper.” The title was visible on the colleague’s computer screen. “But that’s my article!” the junior scholar cried. There followed a moment of rather awkward silence, followed by some nervous laughter. The colleague, shamefaced about his tardiness as a reviewer, hastily dispatched a friendly critique of the piece to the editor.
If the colleague hadn’t realized the article was written by someone he knew, he probably would have put it off even longer. In an ideal world, the review process would work flawlessly; unfortunately, it is carried out by humans.
We’ve all received scathing reviews of our pieces by anonymous reviewers. (Or at least I have. Perhaps the gentle reader has only ever received fulsome praise for his or her scholarly efforts, and if that is you, possibly you should stop reading here.)
But for those academic mere mortals still reading, we all know the harsh review, which often contains unfair criticism. (Exhibit A: “The author of this article did not make reference to Smith’s groundbreaking research in the field” -- never mind that Smith’s research has yet to be published, and there is no chance, none whatever, that the author of this significant piece is the person writing the review). Or the more usual reviewer disagreement: Referee 1 says the article has too much brown and not enough purple, Referee 2 says it has too much purple and not enough brown, and Referee 3 (to whom it has been sent to break this deadlock of opinion) says that this interesting article on feudal Japan doesn’t include enough about Richard Nixon. The ideal behind blind review places the reviewer as impartial Justice, but it is much easier to swing a sword than look at a scale when you’re blindfolded.
After a particularly blistering referee’s report (I find these best read with a bloody mary in hand; the reader’s experiences may vary), I’m sure I’m not the only one who has fantasized about kicking that referee’s shins at a conference. Of course I don’t know whom to kick. The distressing thing is there’s a good chance they know who I am.
Somewhere back in the mists of academic idealism, there was a point where scholars’ work was unknown until they presented it for publication. But now that we all leave trails of our research all over the web, the idea behind “blind” reviews seems quite naive. Googling a title will often yield a conference program, or a researcher’s departmental website. How many academics are so pure in their approach that they would AVOID looking up the topic of the paper under review? After all, it may be relevant to catch up on other literature on the topic in order to situate your review of the article in question.
For those of us who work in broad areas, it’s still the case that we will be asked to review (and be reviewed by) people completely unknown to us. Part of the theory behind blind review is to avoid the conflicts of refereeing the work of friends (or enemies). But those in small subfields can already guess pretty closely who wrote an article they are asked to review. How many of us wouldn’t be kinder in a review of a piece we knew was written by a friend?
Which brings me to the issue of workshopping papers in public. I’ve heard people wonder whether doing so damages peer review. To which I would respond, no more than the Internet has damaged it already. With two articles of mine, I tried an experiment: posting my drafts on Google docs. I then posted links on Twitter and asked for anyone who was willing to comment.
(I realize that in STEM fields, posting paper drafts on ArXiv and other repositories for comment is more common, but in the humanities we don’t have this type of culture. We simply informally ask friends for comments.)
Getting colleagues from around the world to comment on my work made it stronger. And rather than feeling guilty about buttonholing the same few overworked friends to look at an article draft, the infinite generosity of my Twitter followers gave me volunteers. And they wrote constructive, useful things.
Some time ago, Daniel Lemire (a computer science professor at the Université du Québec) made the argument that blind review should be eliminated because work should be evaluated as part of a scholar’s broader career.
I’m not sure I agree with that, not least because I have my suspicions this already happens to the benefit of some Silverbacks, who manage to get pieces published that, had they landed on the editor’s desk as the work of an unknown Ph.D. student, would have been eighty-sixed in short order. However, I think it’s right to wonder how the current situation is actually operating (as opposed to how it “should”).
Lemire points to some interesting research suggesting that blind review, rather than helping those outside the academy get published (which in theory it should, since supposedly the work itself is being judged rather than the author), in fact works against them. Blind peer review is the standard by which we mark the quality and rigor of our scholarship. I do believe research needs impartial vetting, but I’m not sure the current system should be it.
[Wondering about what happened to my friend’s article, mentioned at the start? It was not accepted by the journal, as the other referee had written a much harsher assessment.]