We often judge information by the company it keeps. A story in The New York Times is more likely to be taken seriously than one published in a small-town paper. A university press’s reputation is built on the strength of its list. It’s the principle underlying the “impact factor,” as flawed as that measure is for assessing the worth of any particular paper published in a “high IF” journal. An article in Science has a lot of clout because it’s published in Science, a journal near the top when it comes to prestige. But it’s not perfect, and it has drawn a lot of criticism for an article it published recently. Its author, John Bohannon, went to a great deal of trouble to establish that many journals today are accepting papers with little sign of peer review, or with peer review so ineffectual that it fails to catch errors that should be obvious.
Bohannon has come in for quite a lot of criticism for his article, which he wrote while wearing his journalist hat, though he has a PhD in molecular biology to go with extensive credentials as a journalist. (Both Science and Nature combine well-respected scientific research with news and opinion pieces in an effort to keep scientists abreast of significant new research and of current reporting and commentary on the world in which scientists work.) A major criticism is that he focused exclusively on open access journals and drew conclusions about them without establishing a control group. The problem was compounded when Science exaggerated the study’s meaning in a press release. Sal Robinson, at the Melville House book blog, adds a pointed critique: by representing himself as a group of authors from an African country and running his text through Google Translate to make the English suitably substandard, Bohannon made it seem the problem is at least partly the fault of third-world authors getting in over their heads, and partly the fault of publishers who take advantage of ill-prepared would-be scientists with sketchy credentials. Gunther Eysenbach, an open access journal editor who refused to pass the article along to reviewers but whose refusal doesn’t appear in the study’s data, wonders whether the author left out inconvenient data. Michael Eisen, who describes himself as “a strong proponent of open science,” responded to the sting with a blog post about how Science itself was once fooled by the author of a badly flawed paper, which it accepted without detecting serious errors. Eisen argues that the problem is with peer review itself, not with open access publishing. Blaming the problem on open access, he writes, “is like saying that the problem with the international finance system is that it enables Nigerian wire transfer scams.” Mike Taylor rounds up the critiques and adds some of his own at Sauropod Vertebra Picture of the Week.
It is undeniably true that a lot of silly scam operations are apparently profiting from the way we measure the value of researchers in units of publications. One could argue that niche journals with tiny potential readerships and very little impact on the advancement of science proliferate only because they rake in money for publishers as filler for the “big deals” that eat library budgets, while soaking up some of the excess supply of authors desperate to publish. It’s not surprising, given the relative ease with which websites can be created and the unwitting assistance of witless and desperate authors, that scammers will find ways to make money from people foolish enough to fall for their nonsense. Even the obviously questionable offers from Mrs. Sese-Seko and her son Basher must find some gullible business partners, or why else would they continue to show up in my inbox?
This is hardly news to librarians. Wayne Bivens-Tatum wonders what all the fuss is about: everyone knows there are junky journals out there, and many of the open access journals targeted by Bohannon’s sting rejected the article. At Duke, Kevin Smith suggests we thank Bohannon for pointing out that simply labeling a journal “peer reviewed” accomplishes absolutely nothing, and that perhaps the time has come to rethink peer review in an era when it could be open. The Library Loon speculates about ways the library profession and scholars could work toward a solution, including a plan submitted by a commenter (the Digital Drake, who adds to my suspicion that the smartest librarians have feathers) to create a well-backed, responsive rating system and forum operating a bit like Writer Beware, which is backed by a professional organization and maintained by some compulsively well-organized and strong editors. I totally want this to happen!
Publishing scams are certainly not limited to open access, or even to scientific and scholarly publishing. Maura Smale has a fascinating post at ACRLog about a student’s confusion when he checked the library catalog for a book he’d found on Amazon. The library didn’t have it. It never would. It was one of a series of books put out by a “publisher” that simply assembles Wikipedia articles into trade paperbacks with attractive, if completely irrelevant, flowers on the covers. The only people hurt by this scheme are consumers who don’t realize they could find the contents on Wikipedia for free – and, of course, anyone whose time is wasted sorting useful sources of information from the growing amount of chaff. O brave new world!
Experimental tests of publishers as gatekeepers are nothing new. In 1982, in Behavioral and Brain Sciences, Douglas Peters and Stephen Ceci reported on how well papers previously accepted for publication fared when submitted again to the same journals, this time with fictitious authors whose biographical information placed them at non-prestigious institutions. In only three cases were the articles recognized as previously published. One article was accepted. The remaining eight were rejected for lacking quality, proper methodology, or significance. The journal published dozens of responses from people in various fields, some taking issue with Peters and Ceci’s research design or findings, others reporting similar problems with the peer review system. One of the most entertaining responses described how a man painstakingly retyped the manuscript of a novel that had won the National Book Award and submitted it to 14 publishers, all of whom rejected it. So did 13 literary agents. So did the novel’s original publisher when he sent them a query and sample chapter. Nobody seemed to recognize the book. And then there is the famous Sokal hoax, in which a physicist, impatient with what he considered a nonsensical if fashionable approach to understanding the world favored by the postmodern set, got a paper riddled with elementary errors accepted by the journal Social Text (for a special issue on science; at the time the journal did not practice peer review, though it subsequently adopted it).
This is why I always cringe when I hear instructors tell first-year students to “use peer reviewed sources,” as if that single criterion were enough to sort the good from the bad (and as if non-experts will make good choices among the thousands of articles they’ll turn up in a search, hardly any of which will be comprehensible to them). I cringe when I hear librarians say “internet sources can’t be trusted, which is why you want to use library sources” – as if libraries and their licensed databases weren’t full of rubbish in need of vetting. This is why I encourage students to go beyond checklists when they evaluate sources. It’s not how information is dressed or the company it keeps that matters; it’s what it has to say, how it arrived at its conclusions, and whether the work was carried out ethically and with an open mind.
In an era when so much is published, when there are so many less-than-idealistic reasons to publish, when the way we share knowledge and ideas is in flux, we need to help our students and our colleagues think critically about what really matters and how to build those values into the way we comprehend and contribute to the record of knowledge.