Academic Publishing and Zombies

Media studies professor says traditional journals and presses need to become more open and dynamic if they want to avoid becoming dead on their feet.
September 30, 2011

Zombies are a good way to get people’s attention.

Just ask Seth Grahame-Smith, author of Pride and Prejudice and Zombies. His mash-up of the 19th-century Jane Austen classic with the 20th-century pulp-horror trope became an unlikely bestseller in 2009.

So it is strategic that Kathleen Fitzpatrick, director of scholarly communication at the Modern Language Association and a professor of media studies at Pomona College, invokes the living dead early to illustrate her argument in Planned Obsolescence: Publishing, Technology, and the Future of the Academy (NYU Press). The scholarly press book, she writes, “is no longer a viable mode of communication … [yet] it is, in many fields, still required in order to get tenure. If anything, the scholarly monograph isn’t dead; it is undead.”

No doubt Fitzpatrick is not the first reader, or author, to imagine a scholarly monograph as a monster that devours human brains. But while the zombie metaphor provides a nice hook, Fitzpatrick makes her point more subtly by citing Grahame-Smith’s satirical reimagining of Jane Austen. The current debate over the future of scholarly communications has less to do with the preponderance of zombie monographs than with the potential of authorial mash-ups and the pride and prejudice of stodgy academics.

Here are two ideas Fitzpatrick proposes to kill for good: that peer review is necessary for maintaining the credibility of scholarly research and the institutions that support it; and that publishing activity in peer-reviewed journals is the best gauge of a junior professor’s contribution to knowledge in her field.

Although peer review is often portrayed as an institution that arose with the scientific method, Fitzpatrick suggests the roots of peer review were “more related to censorship than to quality control,” serving mainly to concentrate academic authority in the hands of journal editors and, later, their expert reviewers.

While this system created an effective supply-side filter, it was also susceptible to bias, as Douglas Peters and Stephen Ceci demonstrated in a 1982 experiment. Peters, of the University of North Dakota, and Ceci, of Cornell University, took already-published articles in 12 esteemed psychology journals and resubmitted them, changing only the authors’ names and affiliations and the phrasing of the opening paragraphs. Three of the 12 articles were caught by journal editors as duplicates. Of the nine that were not, one was published. The other eight were rejected, most on methodological grounds.

In the days when ink was permanent, printing was expensive, and redressing the flaws of a shoddy published article was tedious, prepublication vetting by a cloister of gatekeepers made more sense, Fitzpatrick argues. These days, technology makes it possible to tap a larger crowd of academics to assess the merits of individual articles. Instead of assigning a few cops to guard the door, Fitzpatrick argues that journals should throw the door open to all comers, then deputize their readers to usher sound articles to a pedestal and banish bad ones to the margins. Scholarly journals would serve their constituencies better “by allowing everything through the gate, and by designing a post-publication peer review process that focuses on how a scholarly text should be received,” she writes, “rather than whether it should be out there in the first place.”

Today’s technology also enables more dynamic forms of annotation, Fitzpatrick says. Authors and readers need not parry over ideas in a string of static, disjointed essays when they can do so in the digital margins of the original text, she writes. Instead of ideas evolving over the course of dozens of discrete texts from multiple authors, multiple authors could converge on a single text, which would evolve in stride with the ideas the original writer had set down — a text that is “living,” as it were, rather than entombed in ink.

The way to make this work, Fitzpatrick says, is to change the currency of scholarly communications from paper to credit. Instead of rewarding faculty for getting a lot of paper published, universities should consider how helpful tenure candidates have been in reviewing other people’s articles and helping those authors refine their ideas, she says. Journals could help out with this by creating “trust metrics” that give more weight to academics who consistently provide constructive feedback. They could also encourage frequent, thoughtful reviews by making them prerequisites for publishing one’s own work — thus attracting the sort of critical mass of reviewers that Fitzpatrick argues is necessary for successful peer-to-peer review (and which some previous high-profile experiments with the model failed to get).
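Fitzpatrick does not specify how such a trust metric would be computed. As a purely illustrative sketch — every name, field, and weight below is invented for this article, not drawn from the book — a journal platform might weight each reviewer’s ratings by how often the community has judged that reviewer’s past comments constructive:

```python
from dataclasses import dataclass

@dataclass
class Reviewer:
    """Hypothetical reviewer record; the fields are illustrative only."""
    name: str
    helpful_votes: int = 0    # past comments the community rated constructive
    total_comments: int = 0

    def trust_score(self) -> float:
        """Fraction of this reviewer's comments judged constructive.
        Laplace smoothing keeps brand-new reviewers from scoring 0 or 1."""
        return (self.helpful_votes + 1) / (self.total_comments + 2)

def weighted_reception(reviews: list[tuple["Reviewer", float]]) -> float:
    """Aggregate post-publication ratings of an article, weighting each
    rating by the reviewer's trust score instead of counting votes equally."""
    total_weight = sum(r.trust_score() for r, _ in reviews)
    return sum(r.trust_score() * rating for r, rating in reviews) / total_weight

# A consistently helpful veteran counts for more than an unknown newcomer,
# so the article's aggregate score lands nearer the veteran's rating.
veteran = Reviewer("a", helpful_votes=45, total_comments=50)
newcomer = Reviewer("b")
score = weighted_reception([(veteran, 4.0), (newcomer, 1.0)])
```

The design choice this sketch makes visible is the one Fitzpatrick’s proposal depends on: reputation earned by reviewing feeds back into how much one’s future reviews count.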

Under such a system, faculty members could glide to tenure on the wings of their reputations as positive contributors to the advancement of knowledge in their field — a metric the current “publish-or-perish” model does not adequately represent, Fitzpatrick says. “Little in graduate school or on the tenure track inculcates helpfulness,” she writes, “and in fact much militates against it.”

Accordingly, professorial culture is infected by pride in individual achievements and prejudice against publishing models that would de-emphasize them. But to the extent that individual academics continue in their lust for “power and prestige” by vying for exclusive spots in elite journals, they should not be surprised to find themselves as irrelevant and moribund — indeed, zombie-like — as print monographs have already become, warns Fitzpatrick.

“If we enjoy the privileges that obtain from upholding a closed system of discourse sufficiently that we’re unwilling to subject it to critical scrutiny, we may also need to accept the fact that the mainstream of public intellectual life will continue, in the main, to ignore our work,” she says. “Public funds will, in that case, be put to uses that seem more immediately pressing than our support.”

Inside Higher Ed caught up with Fitzpatrick to discuss Planned Obsolescence, which is available online (with reviewer annotations) and is scheduled for release in zombie-book form in November. Helpful critiques in the comments section are welcomed and make an eye-catching addition to any tenure portfolio:

Q: One of the most striking messages of this book is that in order for academe to truly realize the opportunities presented by modern channels of scholarly communications, academics will have to humble themselves. You say a publishing process that focuses on developing ideas and texts collaboratively, rather than flattering individuals with solitary bylines, would (a) be more honest, and (b) dovetail nicely with Web 2.0 platforms that enable distributed authorship. But humbling academics is no simple task. There are a handful of entrenched systems of thought and practice that stand in the way of this not just in the academy, but in the broader culture. Is there a hierarchy of obstacles here? Which needs to fall first?

A: I don’t think academics need to be "humbled," but would instead say that we need to reconfigure our priorities and understand that insofar as we have operated as individuals, it has always been by building on the work of others, and by putting our work into circulation such that it can be built upon. Some fields of course already operate in predominantly collaborative ways, as do most successful online projects, but the humanities in general have a deeply ingrained belief in the primacy of the individual voice. If we are going to take full advantage of the new ways of working that digital technologies make available, scholars will have to consider the possibility that we can accomplish more collectively than we can alone. This is not to say that the individual voice will be wholly subsumed within that of the Borg. Instead, it is meant to suggest that that voice will very often be found in more direct dialogue with other scholars, an interconnectedness that will make clear that, in fact, the individual voice that we so value has never been alone.

As you note, shifting our focus from the individual to the collaborative will require us to get past some fairly entrenched assumptions. We in the humanities will need to think differently about “credit,” so that collaborative work will count in hiring, tenure, and promotion processes. We’ll also need to develop new means of citing the contributions that our colleagues make to our work as it develops. But even more than our processes, we need to change our mindset: we need to understand ourselves as working toward collective goals; we need to value work done on behalf of a community as much as we do work that serves ourselves.

Q: You argue for an open, post-publication review process, where every paper is allowed through the gate and the academic vanguard – acting as a crowd, not a cloister – focuses on "how a scholarly text should be received rather than whether it should be out there in the first place." Whenever this sort of thing is proposed, doomsday prophets warn against a bum rush from pseudoscientists and historical revisionists who would blight the scholarly garden with "voodoo and quackery." Why are they wrong?

A: They’re wrong in large part because, as they say in open-source software circles, many eyes make all bugs shallow: the more people reviewing scholarly work, the stronger that work is likely to become. Understand of course that I’m not talking about crowd-sourcing of the sort used by, say, Wikipedia, in which “just anyone” can contribute to and edit the project. (Though I note that participation by “just anyone” has resulted in an encyclopedia that is not only far vaster than could ever have been produced within a closed system of expert authoring and review, but also one that studies have shown to be of quality equal to or better than traditionally produced projects.) Instead, what I’m talking about in such an open, post-publication review process is non-anonymous discussion by a community of scholars working together on collective issues. In a system like this one – what Katherine Rowe has referred to as "our-crowd sourcing" – participants are able to assess not just the publications that scholars in the community produce but also the comments on those publications, according to standards that the community itself sets and maintains.

Q: You sharply criticize the current tenure system, in which faculty are evaluated on their productivity by the reductive metrics of how many papers they have managed to get published, and in what journals. You advocate for a system that instead assesses scholars based on how helpful they have been in reviewing and refining ideas that have been set down by others. Because those discussions are now preserved in the amber of the Web, the raw data describing a candidate's activities as an "academic citizen" might be increasingly available to tenure committees. Does the sort of computational analysis being developed in the digital humanities offer any hope for "scoring" a professor's civic contributions? What might be the limitations and potential hazards of such a system?

A: I’d like us to think about assessment of a scholar’s participation in his or her community of practice as being as important as the production of his or her own work – not replacing publication with reviewing as the core locus of scholarly engagement, but instead understanding the two as in close relationship with each other. If tenure and promotion reviews are supposed to evaluate not the quantity of work a scholar has done but rather the impact that the scholar is having on the field, we need to understand that impact often comes in our responses to the work that others do, not just in our own original contributions.

That said, I do think that some forms of computational analysis might be useful in helping us to think about how a scholar engages with his or her community. Obviously, we don’t want just to count things like hits or downloads or comments. Popularity is not the point, and simple metrics like these are easily manipulable. On the other hand, a number of scholars in fields like digital humanities and internet research have done significant work on network analysis; surely we can put the kinds of robust data gathering, analysis, and – most important – interpretation into practice in thinking about how our own discourse networks operate, how ideas move, how influence functions. These would not be simple measurements, but instead frameworks for understanding how a scholar is influencing the development of a field.
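Fitzpatrick names no particular algorithm here. One standard network-analysis technique that could plausibly be adapted to a scholarly discourse network is eigenvector-style centrality — the idea behind PageRank: influence accrues to scholars whose work draws responses from other influential scholars, not merely to those with the most raw comments. A minimal sketch, on an invented toy network:

```python
def influence(graph: dict[str, list[str]], damping: float = 0.85,
              iterations: int = 50) -> dict[str, float]:
    """PageRank-style centrality over a discourse network.
    graph maps each scholar to the scholars whose work they respond to,
    so influence flows from the responder to the person responded to."""
    nodes = list(graph)
    n = len(nodes)
    rank = {v: 1.0 / n for v in nodes}
    for _ in range(iterations):
        new = {v: (1 - damping) / n for v in nodes}
        for responder, targets in graph.items():
            if targets:
                share = damping * rank[responder] / len(targets)
                for t in targets:
                    new[t] += share
            else:  # scholar who responds to no one: spread rank evenly
                for v in nodes:
                    new[v] += damping * rank[responder] / n
        rank = new
    return rank

# Hypothetical network: scholars "a" and "b" both engage with "c"'s work,
# so "c" emerges as the most influential node.
net = {"a": ["c"], "b": ["c"], "c": []}
scores = influence(net)
```

As the interview stresses, such a number would be a starting point for interpretation, not a verdict: the framework shows how ideas move, and the reading still has to be done by people.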

Beyond such computational measures, however, we still of course need to read the work. In fact, I suggest that we need to place more emphasis on such reading, in no small part because, in a post-publication review environment, we need not only to read the work being evaluated, but also to read the responses other scholars have had to the work. We have to stop displacing our judgment onto other entities, like journals and presses, and instead do the difficult work of evaluation ourselves.

Q: There are several points in the book where you talk about why, viewed through a historical lens, some of the shifts you are advocating are in fact less radical than they might sound. Can you talk about how your proposed overhauls fit into the long arc of scholarly publishing?

A: I was once on a conference panel with Bob Stein, the founder of the Institute for the Future of the Book, and in response to a question about the radical changes coming in a post-Gutenberg universe, he said "you have to understand: the last 500 years have been very unusual." And it’s true: the conventions that have developed alongside print have not been there all along, and they color our assumptions about academic publishing – that the author is an individual solely responsible for the text; that the text is original, unique, and stable; that the publisher grants the text’s imprimatur. Historians of the book have explored the degree to which these assumptions are not the inevitable result of the technologies of printing and binding, but are instead the product of social, economic, and intellectual institutions that have helped to maintain the ideologies that govern contemporary culture and its assumptions about knowledge. Needless to say, one of those institutions is the university.

These ideologies aren’t universal, and they aren’t without histories. There have been moments in western culture when “originality,” which is now highly prized, was instead seen as suspect, just as there have been moments when the collective voice was valued over the individual’s. If we instead assume that our work makes a contribution to the collective advancement of knowledge, we might begin to recognize a fundamental mismatch between that collective purpose and our individualist systems of credit and reward. And we might start to think about whether there’s something to be learned from the models of communication that we’ve left behind.

Q: As part of your process for publishing Planned Obsolescence, you put your advocacy to the test, posting early drafts of at least one chapter, and then the entire book, for discussion on CommentPress, and then incorporating the feedback into the finished work. How did this experiment affect the final version and your thinking about open review and collaborative authorship in general?

A: The experiment had a profound impact on the final version of the text. I had two excellent traditional peer reviews of the manuscript, which gave me very engaged advice about rethinking certain aspects of the text. But because of the dialogic nature of the online review, I was able to ask commenters to expand on some of their thoughts, to respond directly to their concerns, and – perhaps most important – to see commenters discuss the text with one another, at times hashing out different interpretations of the ideas in question. And since I knew who those reviewers were, I had a social context for their comments, leading me to take them that much more seriously.

But the open review process wasn’t perfect; I also discovered several things about it that need revision. For instance, we need to develop a way of interpreting silence in open reviews. The absence of comments could mean that everybody agrees with you; it could mean that nobody read the piece; and it could mean that the piece is so insanely wrong that no one wants to embarrass you (or create problems for themselves) by saying so. Figuring out what silence means, or how to get past those silences, will be key for open review.

Q: As the director of scholarly communication for the Modern Language Association, you would appear to wield more clout in the academic publishing world than others who have proposed a shock to the system. Yet the association still uses the peer-review model for its journal, PMLA. Are there any plans at MLA to put your suggestions into action there?

A: I was hired by the MLA in part because of the arguments I've been making – because of the executive council's desire for the organization to develop in new directions while remaining aware of the history and importance of the systems that are currently in place. We're exploring a range of options that will facilitate communication among our members in both innovative and traditional ways, and I'm honored to be able to assist in the process.

Q: Imagine three people who read this work and agree with you. One is a junior faculty member, one is a provost, and one is a university librarian. What can each do, beginning today, to help turn the barge in a more enlightened direction?

A: Good question! To the junior faculty member: don’t wait until after you have achieved the safety of tenure to take a chance on a new way of working; it might not be easy, but effort spent educating your senior colleagues about innovative modes of scholarly production will be effort well-spent. Seek out mentors and supporters, both within your institution and outside it, build a strong community of practice, get your work into open circulation, document the effects that work is having – and then teach the folks who are going to evaluate your work how to read it, and how to read the evidence of its effectiveness.

To the university librarian: help fight for the acceptance of new modes of scholarly communication by collaborating on new digital publishing projects with the university press, by creating structures to support faculty experimentation with new modes of production and dissemination, and by helping gather the data about usage and response that will help faculty members demonstrate the influence of their work. Getting involved with faculty projects at the outset will also help you to create a plan for their preservation. I’d also encourage the university librarian to open a frank conversation with faculty about the costs of resources and the limitations placed on their access; getting scholars engaged with questions about the sustainability of scholarly communication will require building awareness about the difficult choices facing libraries in an era of budget constriction and escalating journal costs.

Finally, to the provost: understand that scholarly communication is a core responsibility of the university – so fundamental to the university mission, in fact, that it must be thought of as part of the institution’s infrastructure, not as a revenue center. And every university must develop some kind of plan for scholarly communication. If you leave disseminating the work of your faculty exclusively to corporate publishers, corporations will profit from it at your institution’s expense. Instead, invest in the structures that will get your faculty’s work into broader circulation – not least because those structures will help you make clear to the concerned public why the university continues to matter today.

