Presses

The Shift Away From Print

For most scholarly journals, the transition away from the print format and to an exclusive reliance on the electronic version seems all but inevitable, driven by user preferences for electronic journals and concerns about collecting the same information in two formats. But this shift away from print, in the absence of strategic planning by a higher proportion of libraries and publishers, may endanger the viability of certain journals and even the journal literature more broadly -- while not even reducing costs in the ways that have long been assumed. 

Although the opportunities before us are significant, a smooth transition away from print and to electronic versions of journals requires concerted action, most of it undertaken individually by libraries and publishers. 

In reaching this conclusion, we rely largely on a series of studies, of both publishers and libraries, in which we examined some of the incentives for a transition and some of the opportunities and challenges that present themselves. Complete findings of our library study, on which we partnered with Don King and Ann Okerson, were published as The Nonsubscription Side of Periodicals. We also recently completed a study of the operations of 10 journal publishers, in conjunction with Mary Waltham, an independent publishing consultant. 

Taken together, these studies suggest that an electronic-only environment would be more cost-effective than print-only for most journals, with cost savings for both libraries and publishers. But this systemwide perspective must also be balanced against a more textured examination of libraries and publishers.

On the publisher side, the transition to online journals has been facilitated by some of the largest publishers, commercial and nonprofit. These publishers have already invested in and embraced a dual-format mode of publishing; they have diversified their revenue streams with separately identifiable income from both print and, increasingly, electronic formats. Although the decreasing number of print subscriptions may have a negative impact on revenues, these publishers’ pricing has evolved alongside the economies of online-only delivery to mitigate the effects of print cancellations on the bottom line.

The trend has been to adopt value-based pricing that recognizes the convenience of a single license serving an entire campus (rather than multiple subscriptions), with price varying by institutional size, intensity of research activity, and/or number of online users. By “flipping” their pricing to be driven primarily by the electronic version, with print effectively an add-on, these publishers have been able to manage the inevitable decline of their print business without sacrificing net earnings. They are today largely agnostic to format and, when faced with price complaints, are now positioned to recommend that libraries consider canceling their print subscriptions in favor of electronic-only access.

Other journal publishers, especially smaller nonprofit scholarly societies in the humanities and social sciences and some university presses, are only beginning to make this transition. Even when they publish electronic versions in addition to print, these publishers have generally been slower to reconceive their business models to accommodate a dual-format environment that might rapidly become electronic-only. Their business models depend on revenues received from print, in some cases with significant contributions from advertising, and are often unable to accommodate significant print cancellations in favor of electronic access. 

Until recently, this has perhaps not been unreasonable, as demand for electronic journals has been slower to build in the humanities and some social science disciplines. But the business models of these publishers are now not sufficiently durable to sustain the journals business in the event that libraries move aggressively away from the print format. 

Many American academic libraries have sought to provide journals in both print and electronic formats for the past 5 to 10 years. The advantages of the electronic format have been clear, so electronic versions were licensed as rapidly as possible, but it has taken time for some faculty members to grow comfortable with an exclusive dependence on the electronic format. In addition, librarians were concerned about the absence of an acceptable electronic-archiving solution, given that their cancellation of print editions would prevent higher education from depending on print as the archival format.

In the past year or two, the movement away from print by users in higher education has expanded and accelerated. No longer is widespread migration away from print restricted to early adopters like Drexel and Suffolk Universities; it has become the norm at a broad range of academic institutions, from liberal arts colleges to the largest research universities. Ongoing budget shortfalls in academe have probably been the underlying motivation. The strategic pricing models offered by some of the largest publishers, which offer a price reduction for the cancellation of print, have provided a financial incentive for libraries to contemplate completing the transition. 

Faced with resource constraints, librarians have been required to make hard choices, electing not to purchase the print version but only to license electronic access to many journals -- a step more easily made in light of growing faculty acceptance of the electronic format. Consequently, especially in the sciences, but increasingly even in the humanities, library demand for print has begun to fall. As demand for print journals continues to decline and economies of scale of print collections are lost, there is likely to be a tipping point at which continued collecting of print no longer makes sense and libraries begin to rely only upon journals that are available electronically.  
As this tipping point approaches, at unknown speed, libraries and publishers need to evaluate how they can best manage it. We offer several specific recommendations.

  • First, for those publishers that have not yet developed a strategy for an electronic-only journals environment and the transition to it, the future is now. Today’s dual-format system can only be managed effectively with a rigorous accounting of costs and revenues and of how they break down between the print and electronic formats. Because some costs are incurred irrespective of format and are difficult to allocate, this accounting is complicated. It is also, however, critical, allowing publishers to understand the performance of each format as currently priced and, as a result, to project how the transition to an electronic-only environment would affect them. Publishers that do not immediately undertake these analyses and, if necessary, adjust their business models accordingly may suffer dramatically as the transition accelerates and libraries reach a tipping point.
  • Second, in this transition, libraries and higher education more broadly should consider how they can support the publishers that are faced with a difficult transition. A disconcerting number of nonprofit publishers, especially the scholarly societies and university presses that have the greatest presence in the humanities and social sciences, have a particularly complicated transition to make. University presses and scholarly societies have traditionally been strong allies of academic libraries. They may have priced their electronic journals generously (and unrealistically). Consequently, a business model revamped to accommodate the transition may often result in a significant price increase for the electronic format. In cases where price increases are not predatory but rather adjustments for earlier unrealistic prices, libraries should act with empathy. If libraries cancel journals based on large percentage price increases (even when, measured in dollars, the increases are trivial), they may unintentionally punish lower-priced publishers struggling to make the transition as efficiently as possible.
  • Third, this same set of publishers is particularly vulnerable, because their strategic planning must take place in the absence of the working capital and the economies of scale on which larger publishers have relied. As a result, some humanities journals published by small societies are not yet even available electronically. The community needs collaborative solutions like Project Muse or HighWire (initiatives that provide the infrastructure to create and distribute electronic journals) for the scholarly societies that publish the smaller journals in the humanities and social sciences. But if such solutions are not developed or cannot succeed in relatively short order on a broader scale, the alternative may be the replacement of many of these journals with blogs, repositories, or other less formal distribution models.
  • Fourth, although libraries today face difficult questions about whether and when to proceed with electronic-only access to traditionally print journals, they should try to manage this transition strategically and, in doing so, deserve support from all members of the higher education community. It has been unusual thus far for libraries to undertake a strategic, all-encompassing format review process, since it is often far more politically palatable to cancel print versions as a tactical retreat in the face of budgetary pressures. But a chaotic retreat from print will almost certainly not allow libraries to realize the maximum potential cost savings, whereas a managed strategic format review can permit far more effective planning and cost savings.

Beyond a focus on local costs and benefits, there are a number of broader issues that many libraries will want to consider in such a strategic format review. The widespread migration from print to electronic seems likely to eliminate library ownership of new accessions, with licensing taking the place of purchase. In cases where ownership led to certain expectations or practices, these will have to be rethought in a licensing-only environment.
From our perspective, the safeguarding of materials for future generations is among the most pressing practices deserving reconsideration. Questions about the necessity of developing or deploying electronic archiving solutions, and the adequacy of the existing solutions, deserve serious consideration by all libraries contemplating a migration away from print resources. In addition, the transition to electronic journals begins to raise questions about how to ensure the preservation of existing print collections. Many observers have concluded that a paper repository framework is the optimal solution, but although individual repositories have been created at the University of California, the Five Colleges, and elsewhere, the organizational work to develop a comprehensive framework for them has yet to begin.

The implications of licensing for archiving, and the questions surrounding the future of existing print collections, can both be addressed as part of any library’s strategic planning for the transition to an electronic-only environment -- but all too often they are forgotten under the pressure of the budgetary axe.

These challenges appear to us to be some of the most urgent facing libraries and publishers in the nearly inevitable transition to an electronic-only journals environment. Both libraries and publishers should proceed under the assumption that the transition may take place fairly rapidly, as either side may reach a tipping point when it is no longer cost-effective to publish or purchase any print versions. It is not impossible for this transition to occur gracefully, but to do so will require the concerted efforts of individual libraries and individual publishers.

Author/s: 
Eileen Gifford Fenton and Roger C. Schonfeld
Author's email: 
info@insidehighered.com

Eileen Gifford Fenton is executive director of Portico, whose mission is to preserve scholarly literature published in electronic form and to ensure that these materials remain accessible. Portico was launched by JSTOR and is being incubated by Ithaka, with support from the Andrew W. Mellon Foundation. Roger C. Schonfeld is coordinator of research for Ithaka, a nonprofit organization formed to accelerate the productive uses of information technologies for the benefit of academia. He is the author of JSTOR: A History (Princeton University Press, 2003). 

Literature to Infinity

Graphs, Maps, Trees: Abstract Models for a Literary History is a weird and stimulating little book by Franco Moretti, a professor of English and comparative literature at Stanford University. It was published a few months ago by Verso. But observation suggests that its argument, or rather its notoriety, now has much wider circulation than the book itself. That isn’t, I think, a good thing, though it is certainly the way of the world.

In a few months, Princeton University Press will bring out the first volume of The Novel: History, Geography, and Culture -- a set of papers edited by Moretti, based on the research program that he sketches in Graphs, Maps, Trees. (The Princeton edition of The Novel is a much-abridged translation of a work running to five volumes in Italian.) Perhaps that will redefine how Moretti’s work is understood. But for now, its reputation is a hostage to somewhat lazy journalistic caricature -- one mouthed, sometimes, even by people in literature departments.

What happened, it seems, is this: About two years ago, a prominent American newspaper devoted an article to Moretti’s work, announcing that he had launched a new wave of academic fashion by ignoring the content of novels and, instead, just counting them. Once, critics had practiced “close reading.” Moretti proposed what he called “distant reading.” Instead of looking at masterpieces, he and his students were preparing gigantic tables of data about how many books were published in the 19th century.

Harold Bloom, when reached for comment, gave one of those deep sighs for which he is so famous. (Imagine Zero Mostel playing a very weary Goethe.) And all over the country, people began smacking their foreheads in exaggerated gestures of astonishment. “Those wacky academics!” you could almost hear them say. “Counting novels! Whoever heard of such a thing? What’ll those professors think of next -- weighing them?”

In the meantime, it seems, Moretti and his students have been working their way across 19th century British literature with an adding machine -- tabulating shelf after shelf of Victorian novels, most of them utterly forgotten even while the Queen herself was alive. There is something almost urban legend-like about the whole enterprise. It has the quality of a cautionary tale about the dangers of pursuing graduate study in literature: You start out with a love of Dickens, but end up turning into Mr. Gradgrind.

That, anyway, is how Moretti’s “distant reading” looks ... well, from a distance. But things take on a somewhat different character if you actually spend some time with Moretti’s work itself.

As it happens, he has been publishing in English for quite some while: His collection of essays called Signs Taken for Wonders: On the Sociology of Literary Forms (Verso, 1983) was, for a long time, the only book I’d ever read by a contemporary Italian cultural theorist not named Umberto Eco. (It has recently been reissued as volume seven in Verso’s new Radical Thinkers series.) The papers in that volume include analyses of Restoration tragedy, of Balzac’s fiction, and of Joyce’s Ulysses.

In short, then, don’t believe the hype -- the man is more than a bean-counter. There is even an anecdote circulating about how, during a lecture on “distant reading,” Moretti let slip a reference that he could only have known via close familiarity with an obscure 19th century novel. When questioned later -- so the story goes -- Moretti made some excuse for having accidentally read it. (Chances are this is an apocryphal story. It sounds like a reversal of David Lodge’s famous game of “intellectual strip-poker” called Humiliation.)

And yet it is quite literally true that Moretti and his followers are turning literary history into graphs and tables. So what’s really going on with Moretti’s work? Why are his students counting novels? Is there anything about “distant reading” that would be of interest to people who don’t, say, need to finish a dissertation on 19th century literature sometime soon? And the part, earlier, about how the next step would be to weigh the books -- that was a joke, right?

To address these and many other puzzling matters, I have prepared the following Brief Guide to Avoid Saying Anything Too Dumb About Franco Moretti.

He is doing literary history, not literary analysis. In other words, Moretti is not asking “What does [insert name of famous author or novel here] mean?” but rather, “How has literature changed over time? And are there patterns to how it has changed?” These are very different lines of inquiry, obviously. Moretti’s hunch is that it might be possible to think in a new way about what counts as “evidence” in cultural history.  

Yes, in crunching numbers, he is messing with your head. The idea of using statistical methods to understand the long-term development of literary trends runs against some deeply entrenched patterns of thought. It violates the old idea that the natural sciences are engaged in the explanation of mathematically describable phenomena, while the humanities are devoted to the interpretation of meanings embedded in documents and cultural artifacts.

Many people in the humanities are now used to seeing diagrams and charts analyzing the structure of a given text. But there is something disconcerting about a work of literary history filled with quantitative tables and statistical graphs. In presenting literary history this way, Moretti is not just being provocative. He’s trying to get you to “think outside the text,” so to speak.

Moretti is taking the long view.... A basic point of reference for his “distant reading” is the work of Fernand Braudel and the Annales school of historians who traced the very long-term development of social and economic trends. Instead of chronicling events and the doings of individuals (the ebb and flow of history), Braudel and company looked at tendencies taking shape over decades or centuries. With his tables and graphs showing the number (and variety) of novels offered to the reading public over the years, Moretti is trying to chart the longue durée of literary history, much as Braudel did the centuries-long development of the Mediterranean.

Some of the results are fascinating, even to the layperson’s eye. One of Moretti’s graphs shows the emergence of the market for novels in Britain, Japan, Italy, Spain, and Nigeria between about 1700 and 2000. In each case, the number of new novels produced per year grows -- not at a smooth, gradual pace, but with the wild upward surge one might expect of a lab rat’s increasing interest in a liquid cocaine drip.

“Five countries, three continents, over two centuries apart,” writes Moretti, “and it’s the same pattern ... in twenty years or so, the graph leaps from five [to] ten new titles per year, which means one new novel every month or so, to one new novel per week. And at that point, the horizon of novel-reading changes. As long as only a handful of new titles are published each year, I mean, novels remain unreliable products, that disappear for long stretches of time, and cannot really command the loyalty of the reading public; they are commodities, yes, but commodities still waiting for a fully developed market.”

But as that market emerges and consolidates itself -- with at least one new title per week becoming available -- the novel becomes “the great capitalist oxymoron of the regular novelty: the unexpected that is produced with such efficiency and punctuality that readers become unable to do without it.”

And then the niches emerge: The subgenres of fiction that appeal to a specific readership. On another table, Moretti shows the life-span of about four dozen varieties of fiction that scholars have identified as emerging in British fiction between 1740 and 1900. The first few genres appearing in the late 18th century (for example, the courtship novel, the picaresque, the “Oriental tale,” and the epistolary novel) tend to thrive for long periods. Then something happens: After about 1810, new genres tend to emerge, rise, and decline in waves that last about 25 years each.

“Instead of changing all the time and a little at a time,” as Moretti puts it, “the system stands still for decades, and is then ‘punctuated’ by brief bursts of invention: forms change once, rapidly, across the board, and then repeat themselves for two [to] three decades....”

Genres as distinct as the “romantic farrago,” the “silver-fork novel,” and the “conversion novel” all appear and fade at about the same time -- to be replaced by a different constellation of new forms. It can’t, argues Moretti, just be a matter of novelists all being inspired at the same time. (Or running out of steam all at once.) The changes reflect “a sudden, total change of their ecosystem.”

Moretti is a cultural Darwinist, or something like one. Anyway, he is offering an alternative to what we might call the “intelligent design” model of literary history, in which various masterpieces are the almost sacramental representatives of some Higher Power. (Call that Power what you will -- individual genius, “the literary imagination,” society, Western Civilization, etc.) Instead, the works and the genres that survive are, in effect, literary mutations that possess qualities that somehow permit them to adapt to changes in the social ecosystem.

Sherlock Holmes, for example, was not the only detective in Victorian popular literature, nor even the first. So why is it that we still read his adventures, and not those of his competitors? Moretti and his team looked at the work of Conan Doyle’s rivals. While clues and deductions were scattered around in their texts, the authors were often a bit off about how they were connected. (A detective might notice the clues, then end up solving the mystery through a psychic experience, for example.)

Clearly the idea of solving a crime by gathering clues and decoding their relationship was in the air. It was Conan Doyle’s breakthrough to create a character whose “amazing powers” were, effectively, just an extremely acute version of the rational powers shared by the reader. But the distinctiveness of that adaptation only comes into view by looking at hundreds of other texts in the literary ecosystem.

This is the tip of the tip of the iceberg. Moretti’s project is not limited by the frontiers of any given national literature. He takes seriously Goethe’s idea that all literature is now world literature. In theory, anyway, it would be possible to create a gigantic database tracking global literary history.

This would require enormous computational power, of course, along with an army of graduate students. (Most of them getting very, very annoyed as they keypunched data about Icelandic magazine fiction of the 1920s into their laptops.)

My own feeling is that life is much too short for that. But perhaps a case can be made for the heuristic value of imagining that kind of vast overview of how cultural forms spread and mutate over time. Only in part is Moretti’s work a matter of counting and classifying particular works. Ultimately, it’s about how literature is as much a part of the infrastructure of ordinary life as the grocery store or Netscape. And like them, it is caught up in economic and ecological processes that do not respect local boundaries.

That, anyway, is an introduction to some aspects of Moretti’s work. I’ve just learned that Jonathan Goodwin, a Brittain Postdoctoral Fellow at Georgia Tech, is organizing an online symposium on Moretti that will start next week at The Valve.

Goodwin reports that there is a chance Moretti himself may join the fray. In the interim, I will be trying to untangle some thoughts on whether his “distant reading” might owe something to the (resolutely uncybernetic) literary theory of Georg Lukacs. And one of the participants will be Cosma Shalizi, a visiting assistant professor of statistics at Carnegie Mellon University.

It probably wouldn’t do much good to invite Harold Bloom into the conversation. He is doubtless busy reciting Paradise Lost from memory, and thinking about Moretti would not be good for his health. Besides, all the sighing would be a distraction.

Author/s: 
Scott McLemee
Author's email: 
scott.mclemee@insidehighered.com

Plain Talk About Plain Speech

I can’t remember when I snapped. Was it the faculty seminar in which the instructor used the phrase “the objectivity, for it is not yet a subjectivity” to refer to a baby? Maybe it was the conference in which the presenter spoke of the need to “historicize” racism, rambled through 40 minutes of impenetrable jargon to set up “new taxonomies” to “code” newspapers and reached the less-than-startling conclusion that five papers from the 1820s “situated African-Americans within pejorative tropes.” Could it have been the time I evaluated a Fulbright applicant who filled an entire page with familiar words, yet I couldn’t comprehend a single thing she was trying to tell me? Perhaps it was when I edited a piece from a Marxist scholar who wouldn’t know a proletarian if one bit him in the keister. Or maybe it just evolved from day-to-day dealings with undergraduates hungry for basic knowledge, hold the purple prose.

At some point, I lost it. I began ranting in the faculty lounge. I hurled the Journal of American History/Mystery across the library, muttered in the shower, and sent befuddled e-mails to colleagues. I’m fine now. Once I unburdened myself, I found I was not alone; lots of fellow academics agree that their colleagues couldn’t write intelligible explanations of how to draw water from the tap. From this was born the Society for Intellectual Clarity (SIC). We intend to launch a new journal, SIC PUPPY (Professors United in Plain Prose Yearnings), as soon as we find someone whose writing is convoluted enough to draft our grant application. (We’re told we should seek recruits among National Science Foundation recipients.)

Until the seed money comes in our journal is purely conceptual, but upon start-up SIC PUPPY will enact the following guidelines for submissions.

  • Titles: Brevity is a virtue. Titles with colons are discouraged. Any title with a colon, semi-colon, and a comma will be rejected on principle. We accept no responsibility for doodles and exclamatory obscenities scrawled on the returned text, even if you do enclose a self-addressed stamped envelope.
  • Style: If any manuscript causes one of our editors to respond to a late-night TV ad promising to train applicants for “an exciting career in long-distance trucking,” the author of said manuscript will be deemed a boring twit and his or her work will be returned. See above for doodle disclaimers.
  • Audience: Hey, would it kill you to write something an undergrad might actually read? If so, please apply for permanent residency in Bora Bora.
  • Terminology: If any author desires to invent a new term to describe any part of the research, refer to Greta Garbo’s advice on desire in the film Ninotchka: “Suppress it.” There are 171,476 active words in the English language and the authors of SIC PUPPY are confident that at least one of them would be adequate.
  • Nouns and Verbs: Among those 171,476 words are some that are designated as nouns and others clearly meant as verbs. Do not confuse the two. SIC PUPPY refuses to conference with anyone about this. We have prioritized our objectives.
  • Thesis: We insist that you have one. If you don’t have anything to say, kindly refrain from demonstrating so. We do not care what Bakhtin, Derrida, Jameson, Marx, Freud, or Foucault have to say about your subject or any other. We’ve read them; we know what they think.
  • Academic Catfights: The only person who gives a squanker’s farley about literature reviews and historiography is your thesis adviser. We request that you get on with the article and reduce arcane debates to footnotes. The latter should be typed in three-point Wingdings font.
  • Editing for Smugness: If your article was originally a conference paper and, if at any time, you looked up from your text and smiled at your own cleverness, please delete this section and enroll in a remedial humility course.
  • No Silly Theories: SIC PUPPY does not care if a particular theory is in vogue; we will not consider silly ones. For example, bodies are bodies, not “texts” and dogs are dogs; they do not “signify” their “dogginess” through “signifier” barks. While we’re on the subject, we at SIC PUPPY have combed scientific journals to confirm that time machines do not exist. We thus insist that human beings can be postpartum or postmortem, but not postmodern.
  • Privileging Meaning: We believe that sometimes you’ve got to call it like it is, even if that entails using a label or category. We know that some of you think we shouldn’t privilege any meaning over another. To this we say, “We’re the editors, not you, and we intend to use our privileged positions of power to label those who reject categories ‘ninnies.’ So there!”
  • Citations: We insist that you use the Chicago Manual of Style for all citations. Not because we love it, but because it annoys us no end to see parentheses in the middle of text we’re trying to read. Why we read a theory on ellipses (Bakhtin, 1934) just last night describing how English authors (Wilde, 1905; Shaw, 1924) sought to embed Chartist messages (S. Webb, 1891) into....
  • Complaints: In the course of preparing a journal it is inevitable that typos will appear, that medieval French words will go to print with an accent aigu where an accent grave should have been, and that edits will be made to what you were sure was perfect prose (but wasn’t). Do not call the editors to complain that we’ve humiliated you before your peers and have ruined your academic career. SIC PUPPY will not waste time telling you to get a life; we will direct your call to the following pre-recorded message: “Thhhhhwwwwwwwpt!”
  • Satire and Irony: To paraphrase the folksinger Charlie King, serious people are ruining our world. If you do not understand satire, or confuse irony with cynicism, go away.  Try therapy ... gin ... a warm bath ... anything! Except teaching or writing.  
Author/s: 
Rob Weir
Author's email: 
info@insidehighered.com

Robert E. Weir is a former senior Fulbright scholar who teaches at Smith College and the University of Massachusetts.

Legal Jams

Over the past few days, as perhaps you have heard, it has become more or less impossible to get hold of a copy of "Ready to Die" (1994) -- the classic (and prophetically named) debut album by the Notorious B.I.G., a gangster rapper killed in a shooting in 1997.

Well, perhaps "impossible" is overstating things. But expensive, anyway. Secondhand copies of the CD, recently selling for $6 each on Amazon, now fetch $40; and the price is bound only to go up from there. "Ready to Die" was withdrawn last week after a jury found that one of the tracks incorporated an unlicensed sample from a song originally recorded in 1992 by the Ohio Players -- the band best remembered for "Love Roller Coaster," a disco hit of the late 1970s. (Also, for an album cover featuring a naked woman covered in honey.)

Learning about the court case, I was, admittedly, shocked: Who knew the Ohio Players were still around? The Washington Post called them "funk dignitaries." Somehow that honorific phrase conjures an image of them playing gigs for the American Association of Retired Persons. They will be splitting a settlement of $4.2 million with their lawyers, which probably means a few more years on the road for the band.     

Apart from that, the whole matter came very close to being what, in the journalistic world, is called a "dog bites man" story -- a piece of news that is not really news at all. Digital technology now makes it very easy for one musician to copy and modify some appealing element from another musician's recording. Now lawyers hover over new records, listening for any legally actionable borrowing. Such cases are usually settled out of court -- for undisclosed, but often enormous, sums. The most remarkable thing about the "Ready to Die" case is that it ever got to trial.

More interesting than the legal-sideshow aspect, I think, is the question of how artists deal with the situation. Imitation, allusion, parody, borrowing stray bits of melody or texture -- all of this is fundamental to creativity. The line between mimicry and transformation is not absolute. And the range of electronic tools now available to musicians makes it blurrier all the time.

Using a laptop computer, it would be possible to recreate the timbre of Jimi Hendrix's guitar from the opening bars of "Voodoo Chile (Slight Return)" in order to color my own, rather less inspired riffs. This might not be a good idea. But neither would it be plagiarism, exactly. It's just an expedited version of the normal process by which the wealth of musical vocabulary gets passed around.

That, at least, would be my best argument if the Hendrix estate were to send a cease-and-desist letter. As it probably would. An absorbing new book by Joanna Demers, Steal This Music: How Intellectual Property Law Affects Musical Creativity, published by the University of Georgia Press, is full of cases of overzealous efforts to protect musical property. Some would count as implausible satire if they hadn't actually happened: There was, for example, the legal action taken to keep children from singing "This Land is Your Land" at summer camp.

Demers, an assistant professor of music history and literature at the University of Southern California, shows how the framework of legal control over music as intellectual property has developed in the United States. It began with copyright for scores, expanded to cover mechanical reproduction (originally, via player-piano rolls), and now includes protection for a famous performer's distinctive qualities as an icon. Today, the art (or whatever it is) of the Elvis impersonator is a regulated activity -- subject to the demands of Elvis Presley Enterprises, Inc., which exercises control over "not only his physical appearance and stage mannerisms but also the very quality of his voice," as Demers notes. "Impersonators who want to exhibit their vocal resemblance to Elvis can legally do so only after paying EPE."

What the King would say about this is anybody's guess. But as Demers reminds us, it probably wouldn't make any difference in any case: It is normally the corporate "content provider," not the artist, who now has discretion in taking legal action. The process of borrowing and modifying (whether of folk music by classical composers or Bootsy Collins bass-lines by hip-hop producers) is intrinsic to making music. But it is now increasingly under the influence of people who never touch an instrument.

It is impressive that so trim a book can give the reader so broad a sense of how musical creativity is being affected by the present intellectual property regime. The author's note indicates that Demers, apart from her academic duties, serves as "a freelance forensic musicologist" -- one of those professional sub-niches that didn't exist until fairly recently. Intrigued by this, I asked her about it.

The term "is definitely over the top," she admits, "but I can't take credit for it. It just refers to music specialists who assess borrowings and appropriations, sometimes in the courtroom but most often before any lawsuits are filed." The American Musicological Society provides a referral list of forensic consultants, which is where potential clients find her.

She's been at it for three years -- a period coinciding with her first full-time academic post. "As far as I know," she says, "I don't get any credit at USC for this type of work. I'm judged pretty much solely on research and teaching plus a bit of committee work. I do have a few colleagues at USC who've also done this sort of work. It's a nice source of extra revenue from time to time, but as far as I know, there are only two or three folks around the world who could survive doing this alone full-time."

Demers is selective about taking on freelance cases. "Some are legit," she says, "while others are sketchy, so I try to be choosy about which cases I'll take on." At one point, she was contacted "by a person who was putting together a lawsuit against a well-known singer/songwriter for plagiarizing one of his songs. His approach was to begin by telling me how serious the theft was, but he wanted me to commit to working for him before showing me the two songs. Needless to say, we ended up not working together. Most cases, though, are preemptive in the sense that a producer or label wants to ensure that materials are 'infringement free' before releasing them."

There is an interesting tension -- if not, necessarily, a conflict -- between her scholarship and her forensic work. "The challenge for me in consulting," as Demers puts it, "is that I have to give advice based on what copyright law currently states. I don't agree with many aspects of that law, but my opinion can't get in the way of warning a client that s/he may be committing an actionable infringement."

In reading her book, I was struck by the sense that Demers was also describing something interesting and salutary. All the super-vigilant policing of musical property by corporations seems to have had an unintended consequence -- namely, the consolidation of a digital underground of musicians who ignore the restrictions and just do what they feel they must. The tools for sampling, altering, and otherwise playing with recorded sound get cheaper and easier to use all the time. Likewise, the means for circulating sound files proliferate faster than anyone can monitor.

As a geezer old enough to remember listening to Talking Heads on eight-track tape, I am by no means up to speed on how to plug into such networks. But the very idea of it is appealing. It seems as if the very danger of a cease-and-desist order might become part of the creative process. I asked Demers if she thought that sounded plausible.  

"Yes, exactly," she answered. "I don't want to come out and condone breaking the law, because even in circumstances where one could argue that something truly creative is happening, the borrower risks some pretty serious consequences if caught. But yes, this has definitely cemented the distinctions between 'mainstream' and 'underground or independent' in a way that actually bodes better for the underground than the mainstream. Major labels just aren't going to be attractive destinations for new electronica and hip-hop talent if this continues. And if there is a relatively low risk of getting caught, there are always going to be young musicians willing to break the law."

The alternative to guerilla recording and distribution is for musicians to control their own intellectual property -- for one thing, by holding onto their copyrights, though that is usually the first thing you lose by signing with a major label. "What I like to tell undergrads passing through USC," says Demers, "is that the era of mega-millions-earning stars is really coming to a close, and they can't expect to make large sums of money through music. What they should aim to do is not lose money, and there are several clever ways to avoid this, like choosing a label that allows the artist to retain control over the copyrights."

One problem is that artists often lack a sense of their options. "The situation is better than it used to be," Demers says, "but still, most artists are naive about how licensing works. They come with ideas to the studio and then realize that they must take out a loan in order to license their materials.  Labels don't license samples; artists do. And if a lawsuit develops, most of the time, the label cuts the artist loose and says, 'It's your problem.' "

There is an alternative, at least for musicians whose work incorporates recontextualized sound fragments from other sources. "The simple way around this," she continues, is for an artist who uses sampling to connect up with "the millions (there are that many) who are willing to let their work be sampled cheaply or for free."

But as Steal This Music suggests, the problem runs deeper than the restrictions on "sampladelia." Had the Copyright Term Extension Act (CTEA) of 1998 been enacted 50 years earlier, you have to doubt that anyone would have dared to invent rock and roll. The real burden for correcting the situation, as Demers told me, falls on the public.

"I am pretty confident that content providers will continue to lobby for extending the copyright term," she says, "The CTEA passed because of the pressure that Disney and Time Warner put on Congress, and was abetted by the fact that the public was largely silent. But we're at a different point than we were in the late 1990s, and organizations like Public Knowledge and Creative Commons and scholars like Lawrence Lessig have done a good job of spreading the word about what extending copyrights does to creativity.  Next time Congress has a copyright extension bill in front of it, I hope that voters will get busy writing letters."

Author/s: 
Scott McLemee
Author's email: 
scott.mclemee@insidehighered.com

A Wiki Situation

To wiki or not to wiki? That is the question.

Whether ‘tis nobler to plunge in and write a few Wikipedia entries on subjects regarding which one has some expertise; and also, p'raps, to revise some of the weaker articles already available there...

Or rather, taking arms against a sea of mediocrity, to mock the whole concept of an open-source, online encyclopedia -- that bastard spawn of “American Idol” and a sixth grader’s report copied word-for-word from the World Book....

Hamlet, of course, was nothing if not ambivalent -- and my attitude towards how to deal with Wikipedia is comparably indecisive. Six years into its existence, Wikipedia now holds something in the neighborhood of 2 million entries, in various languages, ranging in length from one sentence to thousands of words.

They are prepared and edited by an ad hoc community of contributors. There is no definitive iteration of a Wikipedia article: It can be added to, revised, or completely rewritten by anyone who cares to take the time.

Strictly speaking, not all wiki pages are Wikipedia entries. As this useful item explains, a wiki is a generic term applying to a Web page format that is more or less open to interaction and revision. In some cases, access to the page is limited to the members of a wiki community. With Wikipedia, only a very modest level of control is exercised by administrators. The result is a wiki-based reference tool that is open to writers putting forward truth, falsehood, and all the shades of gray in between.

In other words, each entry is just as trustworthy as whoever last worked on it. And because items are unsigned, the very notion of accountability is digitized out of existence.

Yet Wikipedia now seems even more unavoidable than it is unreliable. Do a search for any given subject, and chances are good that one or more Wikipedia articles will be among the top results you get back.

Nor is use of Wikipedia limited to people who lack other information resources. My own experience is probably more common than anyone would care to admit. I have a personal library of several thousand volumes (including a range of both generalist and specialist reference books) and live in a city that is home to at least three universities with open-stack collections. And that’s not counting access to the Library of Congress.

The expression “data out the wazoo” may apply. Still, rare is the week when I don’t glance over at least half a dozen articles from Wikipedia. (As someone once said about the comic strip “Nancy,” reading it usually takes less time than deciding not to do so.)

Basic cognitive literacy includes the ability to evaluate the strengths and the limitations of any source of information. Wikipedia is usually worth consulting simply for the references at the end of an article -- often with links to other online resources. Wikipedia is by no means a definitive reference work, but it’s not necessarily the worst place to start.

Not that everyone uses it that way, of course. Consider a recent discussion between a reference librarian and a staff member working for an important policy-making arm of the U.S. government. The librarian asked what information sources the staffer relied on most often for her work. Without hesitation, she answered: “Google and Wikipedia.” In fact, she seldom used anything else.

Coming from a junior-high student, this would be disappointing. From someone in a position of power, it is well beyond worrisome. But what is there to do about it? Apart, that is, from indulging in Menckenesque ruminations about the mule-like stupidity of the American booboisie?

Sure, we want our students, readers, and fellow citizens to become more astute in their use of the available tools for learning about the world. (Hope springs eternal!) But what is to be done in the meantime?

Given the situation at hand, what is the responsibility of people who do have some level of competence? Is there some obligation to prepare adequate Wikipedia entries?

Or is that a waste of time and effort? If so, what’s the alternative? Or is there one? Luddism is sometimes a temptation – but, as solutions go, not so practical.

I throw these questions out without having yet formulated a cohesive (let alone cogent) answer to any of them. At one level, it is a matter for personal judgment. An economic matter, even. You have to decide whether improving this one element of public life is a good use of your resources.

At the same time, it’s worth keeping in mind that Wikipedia is not just one more new gizmo arriving on the scene.  It is not just another way to shrink the American attention span that much closer to the duration of a subatomic particle. How you relate to it (whether you chip in, or rail against it) is even, arguably, a matter of long-term historical consequence. For in a way, Wikipedia is now 70 years old.

It was in 1936 that H.G. Wells, during a lecture in London, began presenting the case for what he called a “world encyclopedia” – an international project to synthesize and make readily available the latest scientific and scholarly work in all fields. Copies would be made available all over the planet. To keep pace with the constant growth of knowledge, it would be revised and updated constantly. (An essay on the same theme that Wells published the following year is available online.)

A project on this scale would be too vast for publication in the old-fashioned format of the printed book. Besides, whole sections of the work would be rewritten frequently. And so Wells came up with an elegant solution. The world encyclopedia would be published and distributed using a technological development little-known to his readers: microfilm.

Okay, so there was that slight gap between the Wellsian conception and the Wikipedian consummation. But the ambition is quite similar -- the creation of  “the largest encyclopedia in history, both in terms of breadth and depth” (as the FAQ describes Wikipedia’s goal).

Yet there are differences that go beyond the delivery system. Wells believed in expertise. He had a firm faith in the value of exact knowledge, and saw an important role for the highly educated in creating the future. Indeed, that is something of an understatement: Wells had a penchant for creating utopian scenarios in which the best and the brightest organized themselves to take the reins of progress and guide human evolution to a new level.

Sometimes that vision took more or less salutary forms. After the first World War, he coined a once-famous saying that our future was a race between education and disaster. In other moods, he was prone to imagining the benefits of quasi-dictatorial rule by the gifted. What makes Wells a fascinating writer, rather than just a somewhat scary one, is that he also had a streak of fierce pessimism about whether his projections would work out. His final book, published a few months before his death in 1946, was a depressing little volume called The Mind at the End of Its Tether, which was a study in pure worry.

The title Wells gave to his encyclopedia project is revealing: when he pulled his various essays on the topic together into a book, he called it World Brain. The researchers and writers he imagined pooling their resources would be the faculty of a kind of super-university, with the globe as its campus. But it would do even more than that. The cooperative effort would effectively mean that humanity became a single gigantic organism -- with a brain to match.

You don’t find any of Wells’s meritocracy at work in Wikipedia. There is no benchmark for quality. It is an intellectual equivalent of the Wild West, without the cows or the gold.

And yet, strangely enough, you find imagery very similar to that of Wells’s “world brain” emerging in some of the more enthusiastic claims for Wikipedia. As the computer scientist Jaron Lanier noted in a recent essay, there is now an emergent sensibility he calls “a new online collectivism” – one for which “something like a distinct kin to human consciousness is either about to appear any minute, or has already appeared.” (Lanier offers a sharp criticism of this outlook. See also the thoughtful responses to his essay assembled by John Brockman.)

From the “online collectivist” perspective, the failings of any given Wikipedia entry are insignificant. “A core belief in the wiki world,” writes Lanier, “is that whatever problems exist in the wiki will be incrementally corrected as the process unfolds.”

The problem being, of course, that it does not always work out that way. In 2004, Robert McHenry, the former editor-in-chief of the Encyclopedia Britannica, pointed out that, even after 150 edits, the Wikipedia entry on Alexander Hamilton would earn a high school student a C at best.

“The earlier versions of the article,” he noted, “are better written over all, with fewer murky passages and sophomoric summaries.... The article has, in fact, been edited into mediocrity.”

It is not simply proof of the old adage that too many cooks will spoil the broth. “However closely a Wikipedia article may at some point in its life attain to reliability,” as McHenry puts it, “it is forever open to the uninformed or semiliterate meddler.”

The advantage of Wikipedia’s extreme openness is that people are able to produce fantastically thorough entries on topics far off the beaten path. The wiki format creates the necessary conditions for nerd utopia. As a fan of the new “reimagined” "Battlestar Galactica," I cannot overstate my awe at the fan-generated Web site devoted to the show. Participants have created a sort of mini-encyclopedia covering all aspects of the program, with a degree of thoroughness and attention to accuracy matched by few entries at Wikipedia proper.

At the same time, Wikipedia is not necessarily less reliable than more prestigious reference works. A study appearing in the journal Nature found that Wikipedia entries on scientific topics were about as accurate as corresponding articles in the Encyclopedia Britannica.

And in any case, the preparation of reference works often resembles a sausage factory more than it does a research facility. As the British writer Joseph McCabe pointed out more than 50 years ago in a critique of the Columbia Encyclopedia, the usual procedure is less meritocratic than one might suppose. “A number of real experts are paid handsomely to write and sign lengthy articles on subjects of which they are masters,” noted McCabe, “and the bulk of the work is copied from earlier encyclopedias by a large number of penny-a-liners.”

Nobody writing for Wikipedia is “paid handsomely,” of course. For that matter, nobody is making a penny a line. The problems with it are admitted even by fans like David Shariatmadari, whose recent article on Wikipedia ended with an appeal to potential encyclopedists “to get your ideas together, get registered, and contribute.”

Well, okay ... maybe. I’ll think about it at least. There’s still something appealing about Wells’s vision of bringing people together “into a more and more conscious co-operating unity and a growing sense of their own dignity” – through a “common medium of expression” capable of “informing without pressure or propaganda, directing without tyranny.”

If only we could do this without all the semi-mystical globaloney (then and now) about the World Brain. It would also be encouraging if there were a way around certain problems -- if, say, one could be sure that different dates wouldn’t be given for the year that Alexander Hamilton ended his term as Secretary of the Treasury.

Author/s: 
Scott McLemee
Author's email: 
scott.mclemee@insidehighered.com

Pressing On

Late last week the Association of American University Presses held its annual meeting in New Orleans, or in what was left of it. Attendance is usually around 700 when the conference is held in an East Coast city. This time, just over 500 people attended, representing more than 80 presses -- a normal turnout, in other words, justifying the organizers’ difficult decision last fall not to change the location.

Inside the Sheraton Hotel itself, each day was a normal visit to Conference Land -- that well-appointed and smoothly functioning world where academic or business people (or both, in this case) can focus on the issues that bring them together. Stepping just outside, you were in the French Quarter. It wasn’t hit especially hard by last year’s catastrophic weather event. But there were empty buildings and boarded-up windows; the tourist-trap souvenir outlets offered a range of Katrina- and FEMA-themed apparel, with “Fixed Everything My Ass” being perhaps the most genteel message on sale.

The streets were not empty, but the place felt devitalized, even so. Only when you went outside the Quarter did the full extent of the remaining damage to the city really begin to sink in. On Thursday morning -- as the first wave of conference goers began to register -- a bus chartered by the association took a couple dozen of us around for a tour led by Michael Mitzell-Nelson and Greta Gladney (a professor and a graduate student, respectively, at the University of New Orleans). If the Quarter was bruised, the Ninth Ward was mangled.

It was overwhelming -- too much to take in. More imagery and testimony are available from the Hurricane Digital Memory Bank, a project sponsored by the University of New Orleans and the Center for History and New Media at George Mason University.

So you came back a little unsettled at the prospect of discussing business as usual. Then again, the prevailing idea at this year’s AAUP was that business has changed, and that university presses are rushing to catch up. The announced theme of the year’s program was “Transformational Publishing” -- with that titular buzzword covering the myriad ways that digital technologies affect the way we read now.

It was a far cry from the dismal slogan making the rounds at the AAUP meeting three years ago: “Flat is the new ‘up.’ ” In other words: If sales haven’t actually gone down, you are doing as well as can be expected. The cumulative effect of increasing production costs, budget cuts, and reduced library sales was a crisis in scholarly publishing. The lists of new titles got shorter, and staffs grew leaner; in a few cases, presses closed up shop.

I asked Peter Givler, the association’s executive director, if anyone was still using the old catch phrase. “Right now it looks like up is the new up,” he said. “It’s been a modest improvement, and we’re hearing from our members that there’s been a large return of books this spring. But it’s not like the slump that started in 2001.”

Cautious optimism, then, not irrational exuberance. While the word “digital” and its variants appeared in the title of many a session, it is clear that new media can be both a blessing and a curse. On the one hand, the association has been able to increase the visibility of its members’ output through the Books for Understanding Web site, which offers a convenient and reliable guide to academic titles on topics of public interest. (See, for example, this page on New Orleans.) At the same time, the market for university-press titles used in courses has been undercut by the ready availability of secondhand books online.

And then there’s Google Book Search. The AAUP has not joined the Authors Guild’s class-action suit against Google for digitizing copyrighted materials. But university presses belong to the class of those with an interest in the case -- so the organization has incurred legal expenses while monitoring developments on behalf of its members. One got the definite impression that the other shoe may yet drop in this matter. During the business meeting, Givler indicated that the association would be undertaking a major action soon that would place additional demands on the organization's resources. I tried to find out more, but evidently its Board of Directors is playing its cards close to the vest for now.

With new obligations to meet, the board requested a 4 percent increase in membership dues. This was approved during the business meeting on Thursday. (Three members voting by proxy were opposed to it, but no criticism was expressed from the floor during the meeting itself.)

Proposals for longer-term changes in the organization’s structure and mission were codified in its new Strategic Plan (the first update of the document since 1999). A working draft was distributed for discussion at the conference; the final version will be approved by the board in October.

This document -- not now available online -- conveys a very clear sense of the opportunities now open to university presses. (For “opportunities,” read also “stresses and strains.”)

It’s not just that technological developments are shaping how books get printed, publicized, and sold -- or even how we do research. A variety of new forms of scholarly publishing are emerging -- some of which make an end run around traditional university presses. “Societies, libraries, and other scholarly groups are now more likely to undertake publishing ventures themselves,” the proposal notes, “although they often lack the editing, marketing, and business skills found in abundance in university presses.”

Full membership in AAUP is restricted to presses that meet certain criteria, including “a faculty board that certifies the scholarly quality of the publications; a minimum number of publications per year; a minimum number of university-employed staff including a full-time director; and a statement of support from the parent organization.” But an ever larger number of learned publications -- print, digital, or whatever -- are issued by academic or professional enterprises that don’t follow this well-established model.

Indeed, if you hang around younger scholars long enough, it is only a matter of time before someone suggests that the old model might be jettisoned entirely. Why spend two years waiting for your monograph to appear from Miskatonic University Press when it might be made available in a fraction of that time through some combination of new media, peer review, and print-on-demand? No one broached such utopian ideas at AAUP (where, of course, they would be viewed as dystopian). But they certainly do get mooted. Sometimes synergy is not your friend.

The organization’s new strategic plan calls for reaching out to “nonprofit scholarly publishers and organizations whose interests and goals are compatible with AAUP” -- in part, by revising the membership categories and increasing the range of benefits. New members would be recruited through an introductory membership offer “open to small nonprofit publishers.”

These changes, if approved, will go into effect in July 2007. Apart from increasing the size of the association, they would bring in revenue -- thereby funding publicity, outreach, and professional-education programs. (One of the projects listed as “contemplated” is creation of “a ‘basic book camp’ to orient new and junior staff to working at a scholarly press.” I do like the sound of that.)

For the longer term, the intent is clearly to shore up the role of the university press’s established standards in an environment that seems increasingly prone to blowing them away.

“University presses,” the AAUP plan stresses, “are well positioned to be among the leaders in the academic community who help universities through a confusing and expensive new world. They can enhance the ability of scholars to research, add value to, and share their work with the broadest possible audiences, and they can help to develop intellectual property policies and behaviors sensible to all.”

Of course, not every discussion at the meeting was geared to the huge challenges of the not-too-distant future. Late Friday afternoon, I went to an interesting session called “Smoke, Mirrors, and Duct Tape: Nurturing a Small Press at a Major University.” It was a chance to discuss the problems that go with being a retro-style academic imprint at an institution where, say, people assume you are the campus print shop. (Or, worse, that you have some moral obligation to publish the memoirs of emeritus faculty.)

It was the rare case of an hour I spent in New Orleans without hearing any variation on the word “digital.” After getting home, I contacted one of the participants, Joseph Parsons, an acquisitions editor for the University of Iowa Press, to ask if that was just an oversight. Had academic digitality hit Iowa?

"We routinely deal with electronic files, of course," he responded, "but the books we produce have been of the old-fashioned paper and ink variety.... When we contract with authors, we typically include digital rights as part of the standard agreement, but we haven't published anything suitable for an electronic book reader."

He went on to mention, however, that print-on-demand was a ubiquitous and very reasonable option for small press runs. It was surprising that he made the point -- for a couple of reasons. POD now seems like an almost antique form of "new media," in the age of Web 2.0. I don't recall hearing it discussed in New Orleans, for example, except in passing. At the same time, it clearly fit into the plans of an old-school university press with a catalog emphasizing literature and some of the less trend-obsessed quadrants of the humanities. It seemed like a reasonable compromise between sticking with what you already know and making a leap into the digital divide.

Anyway, I'm just glad to think there will continue to be books, at least for a while. As a matter of fact, while in New Orleans, I even bought a few. They were second-hand, admittedly, but it seemed as if the shop owner needed the business more than any of the university presses did.

Scott McLemee (scott.mclemee@insidehighered.com)

Public Access

The following is based on my talk at the session on "Publicity in the Digital Age” at last month’s conference of the Association of American University Presses. For a report on the meeting itself, please check here.

For someone whose best waking hours are devoted to the printed page, it can be difficult to think of digital media as anything but a distraction, at best -- if not, in fact, a violation of the proper use of the eyeballs and brain. People who have made careers in print and ink often have a vested interest in thinking this way. The very word "blog" seems to elicit an almost Pavlovian reaction in editors, writers, and academics over a certain age -- not drooling in hunger, but snarling in self-defense.

I share some of this conditioning, having spent the past two decades contributing reviews and essays to various magazines and newspapers. Of late, however, I've learned to move between what Marshall McLuhan called "the Gutenberg galaxy" (the cultural universe created by movable type) and "the broadband flatland" (as we might dub the uncharted frontier landscape of digital media).

Over the past 18 months, I've published about 200,000 words that have appeared strictly online, while also contributing to print publications. It feels as if the difference between them means less and less.

Yet with regard to making university-press books known to the public, it appears that the old gap remains deep and wide. On the one hand, there may now be more opportunities than ever to connect readers with the books that will interest them. (That includes not just new titles, but books from the backlist.)

So much for the good news. The bad news is that, for the most part, it isn’t happening.

There are important exceptions. Colleen Lanick, who handles publicity for MIT Press, recently had an informative and encouraging article in the AAUP newsletter discussing how some university presses have set up blogs to promote their titles.

But my strong impression -- confirmed by a series of interviews in early June -- is that very few people at university presses have made the transition to full engagement with the developing digital public sphere.

Consider something I learned while talking to a couple of people who run fairly high-visibility venues in the world of what we might call "general interest academic blogs." One is Ralph Luker, the founder of Cliopatria -- a group blog devoted to history, which has been online since 2003. Cliopatria recently announced that it has been visited by 400,000 distinct readers so far.

The other person was Alfredo Perez, profiled in my column last year. His site Political Theory Daily Review is not, strictly speaking, a blog. It provides a running digest of scholarly papers and serious journalism covering a variety of fields of the humanities and social sciences. The site gets around 2,000 visitors a day, though that probably understates its influence. "Aggregator" sites like PTDR have a way of quietly affecting what gets noticed and discussed elsewhere online.

I asked Ralph and Alfredo how often they receive books from university presses "over the transom" -- that is, strictly on the publisher's initiative, in hopes that their sites would help make a title known. As a reviewer, I get several books that way each week, usually in a pre-publication edition that costs the publisher relatively little to produce.

My guess had been that Ralph and Alfredo examined at least a few forthcoming books this way each month. It only stood to reason -- but it made sense to ask.

Both of them replied that it had happened just three or four times -- in as many years. They also confirmed what several other people have indicated in conversation: A few academic presses are willing to send a review copy to a blogger who asks for it. But most won’t. Often, publicists just ignore the request entirely.

That might sound like someone keeping an eye on the bottom line -- though it certainly doesn’t cost much to send a courteous e-mail message in reply to a query. In any case, it is a matter of being penny wise but pound foolish.

That realization hit home while I was interviewing Scott Eric Kaufman, a graduate student in English at the University of California at Irvine. He participates in a group blog on literary studies called The Valve and he also has a personal blog.

The Valve -- which gets around 10,000 visitors a day -- has established a fairly amicable modus vivendi with Columbia University Press, which has provided examination copies of several recent titles to Valve members who wanted to devote symposia to the books. Kaufman told me it was not a matter of anyone at the blog having especially cozy relations with anyone at the press. It's as simple as the fact that the publicity department at Columbia will actually answer their requests.

Kaufman also told me about his experience in writing an essay on a recent novel. He provided a link to the Amazon page for it. The bookseller gave Kaufman a small credit for each copy purchased by someone following his link. He estimates that he sold about 75 copies of the book.

Now, for a trade press (able to issue large print runs and to benefit from economies of scale), selling 75 copies of a given title in a few days would be a pleasant enough development. But it would hardly make or break anyone’s budget.

By contrast, scholarly publishers usually produce much smaller editions, even in paperback. The impact of even modestly increased sales would be much larger.

Providing bloggers with finished hardbacks could prove an expensive proposition, of course. But the prepublication galleys -- which I sometimes get in spiral binding, like a course packet -- would often be just as serviceable.

It would also help if more publishers were inclined to make extracts from their new books available online. For his daily roundup at Political Theory Daily Review, Alfredo Perez is always on the lookout for chapters of scholarly books to which he can link. "Very few presses do it, as far as I can tell," he told me.

He also finds that signing up for e-mail notifications of new books from university presses rarely pays off: "They don't send out updates very often," he says, "and sometimes they don't do it at all." Academic publishers are now more likely to put their catalogs online than they were a few years ago. But most seem not to have made the additional commitment of resources necessary to get the word out about their books.

Meanwhile, some commercial houses are starting to treat bloggers as just another part of the mass media. Wendi Kaufmann, who covers literary happenings around Washington at her blog The Happy Booker, hears from trade presses regularly. Other literary bloggers have told me the same thing, as have some academic bloggers.

At least one internationally known publisher considers it worthwhile to send out dozens of its titles to The Happy Booker, in hopes that she'll give the spotlight to at least one -- the same treatment given to the reviews editor for a large newspaper. "I get a box of books from Penguin every two weeks," she told me.

For any publisher or author trying to get some traction in this landscape, the situation can be confusing. It might be helpful to frame things in terms of what I’ve come to call "the price paradox." In short, the cost of making books known in the digital public sphere is both very small and extremely labor-intensive.

On the one hand, the monetary outlay involved in making content available online is relatively low. The cost of starting a blog, for example, is quite small -- in some cases approaching zero. And the potential audience is very large.

On the other hand, the expense of actually reaching that audience cannot be calculated in terms of simple bookkeeping. It involves significant investments of cultural capital. Time must be spent learning about the existing array of blogs, online journals, podcasts, etc.

As Richard A. Lanham indicates in his recent book The Economics of Attention: Style and Substance in the Age of Information (University of Chicago Press), the most valuable thing in an "information economy" is not information, which is abundant. Rather, it is attention. Attention is a scarce resource: the supply is limited and difficult to renew. That, in turn, makes it important to be able to tap whatever pools of attention already exist. And doing so effectively requires some exploration.

Perhaps I should quit with the implicit metaphor here before this discussion turns into one big analogy to the film Syriana.... Instead, it's time to consider the practical implications. What does all of this mean to someone at a university press who is trying to get out word about a new title?

For one thing, the emerging situation requires doing some research to find out if a given blog or Web publication is likely to take an interest in the book. And the research involved might not be a one-time thing. Having a more or less standard list of journals to which to send review copies in any given field was appropriate at one point. But somewhat more flexibility is necessary now.

At the very least, it is worthwhile to spend some time learning to use blog search engines -- and also to get a feel for how various sites link up to one another. Google Blog Search is particularly helpful for making an initial survey of which blogs might be relevant to a specific topic. Technorati indicates how many links a given blog has received from other sites. It also lets you examine and follow those links -- perhaps the quickest way to learn how the conversational terrain is structured.

And when a blogger asks for a review copy, these tools would help a publicist reach an informed decision about whether sending one is a good use of resources.

Perhaps the biggest obstacle to learning to move between the print and the digital domains comes from a certain unstated but powerful assumption. It could be called the ham-radio hypothesis. (Having inadvertently offended the Esperanto people a while back, I want to make clear to any ham-radio enthusiasts that the following is not meant as an insult.)

In short, there is still a tendency to think of bloggers, podcasters, etc. as some distinct group that operates apart from the worlds of academia, publishing, or offline culture. To treat them, in effect, as ham-radio operators -- people who possess a certain technical knowhow, and who talk mainly to each other.

The reality is very different. The relationship between online communities and other kinds of social or professional networks is a complicated topic. Scholarly careers will be made exploring this matter.

But it is fair to say that the ability to produce and distribute content online is less and less like being able to talk on shortwave frequencies -- and more and more like the skills involved in driving, or reading a map. You can get along without these skills, but that leaves you dependent on the people who do possess them.

UPDATE. A reader asks if there is a central index of academic blogs. There is no completely comprehensive list, but one valuable resource is the directory provided by Crooked Timber.

Scott McLemee (scott.mclemee@insidehighered.com)

Scott McLemee writes Intellectual Affairs each week. Suggestions and ideas for future columns are welcome.

Aggregate This!

“You’re either part of the solution,” as Eldridge Cleaver put it in 1968, “or part of the problem.” It was the one Black Panther slogan that appealed to Richard Nixon. He repeated it four years later, while running for re-election. A catchy saying, then. But also a risky one, in regard to the tempting of fate. (There is always a chance that you are just making the problem worse, simply by assuming you are solving it.)

Over the past few columns, I’ve pointed to some opportunities and difficulties created by emerging forms of digital publishing. In particular, the item from last week -- the one suggesting that university presses might benefit from working out a modus vivendi with academic bloggers -- has generated interest and discussion. The space available online for the discussion of new books is, for all practical purposes, boundless. Meanwhile, the traditional forms of mass media pay ever less attention to books. The avenues for making a new title known to the public get slimmer all the time. Literally slimmer, in some cases. Recently the San Francisco Chronicle cut its review section from eight pages to four, hardly an unusual development nowadays.

But will urging university presses to think more seriously about blogs (and other new media forms) really offer a solution? Or does it just compound the problem? Hearing from readers over the past week, I’ve started to wonder.

Many presses have very compact publicity departments -- often enough, a single person. The work includes preparing each season’s catalog, sending out review copies, and working the display booth at conferences.

“So now,” the weary cry goes up, “we have to look at blogs too? Just how are we supposed to find the right one for a given book? There seem to be thousands of them. And that’s just counting the ones with pictures of the professors’ cats.”

Fair enough. Life is too short, and bloggers too numerous. And let’s not even get into podcasting or digital video....

The great strength of emergent media forms is also their great weakness. I mean, of course, the extreme decentralization that now characterizes “the broadband flatland.” It is now relatively easy to produce and distribute content. But it also proves a challenge to find one’s way around in a zone that is somehow expanding, crowded, and borderless, all at once.

With such difficulties in mind, then, I want to propose a kind of public-works project. The time has come to create a map. In fact, it is hard to imagine things can continue much longer without one.

At the very least, we need a Web site giving users some idea what landmarks already exist in the digital space of academe. This would take time to create, of course. More than that, it would require a lot of good will.

But the benefits would be immediate -- not just for university presses and academic bloggers, but for librarians, students, and researchers within academe and without.

My grasp of the technology involved is extremely limited. So the following proposal is offered -- with all due humility -- to the attention of people capable of judging how practical it might be. For it ever to get off the ground, a catchy name would be required. For now, let’s call it the Aggregator Academica, or AggAcad for short.

Assuming a few people are interested, it might be possible to start building AggAcad rather soon. I imagine it going through two major rounds of improvement after that. Here’s the blueprint.

AggAcad 1.0 would resemble the phonebook for a very small town -- with one column of business numbers and another of personal. It would provide a rather bare-bones set of links, in two broad categories.

There would be an online directory of academic publishers, similar to the one now provided by the Association of American University Presses. But it would also have links to the Web sites of other scholarly imprints, whether from commercial publishers or professional organizations.

The other component of the start-up site would be an academic blogroll -- perhaps an updated version of the one now available at Crooked Timber, divided broadly by disciplines.

My assumption is that the initial group would be ad hoc, and assemble itself from a few people from each side of town. They would need to work out criteria for each list: the terms for deciding what links to include, and what to exclude. (Perhaps it is naive to place much trust in the power of collegiality. But it might be worth risking a little naiveté.)

The lists would be updated periodically. Meanwhile, the AggAcad team would need to go hunting for the storage space and the grant money required for the next stage of development.

AggAcad 2.0 would provide not just directories but content from and about scholarly publishing. As academic presses make more material available online -- sample chapters, interviews with authors, etc. --  the site would point readers to it. (This aspect of the site might be run by RSS or similar feeds.) Likewise, visitors to the site would learn of the more substantial reviews in online publications, including symposia on new books held by academic bloggers.
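(For readers with the technical chops I lack: here is a rough sketch, purely for illustration, of how that feed-driven layer might work, assuming each participating press or blog already publishes an ordinary RSS or Atom feed. The feed addresses below are invented placeholders, and the Python feedparser library is only one plausible tool for the job, not a requirement of the proposal.)

import feedparser

# Invented placeholder feeds -- not real addresses.
FEEDS = {
    "Example University Press": "https://press.example.edu/new-titles.rss",
    "Example Group Blog": "https://blog.example.org/reviews/atom.xml",
}

def collect_items(feeds):
    """Gather the latest entries from every feed into one flat list of items."""
    items = []
    for source, url in feeds.items():
        parsed = feedparser.parse(url)
        for entry in parsed.entries:
            items.append({
                "source": source,
                "title": entry.get("title", "(untitled)"),
                "link": entry.get("link", ""),
            })
    return items

if __name__ == "__main__":
    for item in collect_items(FEEDS):
        print(item["source"], "|", item["title"], "|", item["link"])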

At some point, the whole site might be made searchable. (We can call this the 2.5 version.) A reader could type in “Rawls bioethics” and be given links to pertinent books, podcasts, blog entries etc. that have been referred to at the site. The total number of results would be smaller than that returned via Google -- but probably also richer in substance, per hit.
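(Again a sketch only, with invented sample data: the searchable "2.5" layer could start as nothing fancier than an inverted index over the titles of items the site has already gathered, so that a query such as "Rawls bioethics" returns every item whose title contains both words. Anything more ambitious would be up to the people actually building it.)

import re
from collections import defaultdict

# Invented sample items, standing in for whatever AggAcad has aggregated.
ITEMS = [
    {"title": "Rawls on Bioethics and Public Reason", "link": "https://example.edu/rawls-bioethics"},
    {"title": "A Blog Symposium on a New Book About Gaming", "link": "https://example.org/gaming-symposium"},
]

def tokenize(text):
    """Lowercase a string and split it into plain alphanumeric words."""
    return re.findall(r"[a-z0-9]+", text.lower())

def build_index(items):
    """Map each word to the positions of the items whose titles contain it."""
    index = defaultdict(set)
    for position, item in enumerate(items):
        for word in tokenize(item["title"]):
            index[word].add(position)
    return index

def search(query, index, items):
    """Return the items whose titles contain every word of the query."""
    words = tokenize(query)
    if not words:
        return []
    hits = set.intersection(*(index.get(word, set()) for word in words))
    return [items[position] for position in sorted(hits)]

index = build_index(ITEMS)
for item in search("Rawls bioethics", index, ITEMS):
    print(item["title"], "--", item["link"])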

As AggAcad became more useful over time, it would presumably attract scholars and publishers who valued the site. Working on it might begin to count as professional service.
 
AggAcad 3.0 would incorporate elements of Digg -- the Web site that allows readers in the site’s community to recommend links and vote on how interesting or useful they prove. For an introduction to “the digg effect,” check out this Wikipedia article.

By this stage, AggAcad would provide something like a hub to the far-flung academic blogosphere (or whatever we are calling it within a few years). Individuals would still be able to generate and publish content as they see fit. The advantages of decentralization would continue. But the site might foster more connections than now seem possible.

Information about new scholarly books could circulate in new ways. It would begin to have some influence on how the media covered academic issues. And -- who knows? -- the quality of public discussion might even rise a little bit.

Assuming any of it is possible, of course. I sketch this idea with the hope that people better placed to make that judgment might take the idea up ... or tear it to shreds. Is it a solution? Or just part of the problem? Hard to say. But of this much I am certain: Thanks to AggAcad, there is finally an expression even uglier than “blog.”

Scott McLemee (scott.mclemee@insidehighered.com)

Scott McLemee writes Intellectual Affairs each week. Suggestions and ideas for future columns are welcome.

A Moralist of the Mind

The billboard shows a sleek new automobile, the price tag no doubt considerable, though nowhere in sight. Instead, the agency that created the ad has run a single line of text with the image. It isn’t just a catchphrase; it’s a grab at profundity. “A strong want,” the new Lexus motto proclaims, “is a justifiable need.”

The first time I saw the ad, my jaw dropped. Now it just clenches in disgust. (If absolute moral stupidity ever required a slogan, then “A strong want is a justifiable need” would do the trick.) And once the irritation is past, there is the realization that Philip Rieff was probably right when he speculated that a new character had arrived on the scene in Western culture: “psychological man.”

Rieff, who died on July 1, was for decades a somewhat legendary professor of sociology at the University of Pennsylvania. To echo a point made elsewhere, I think the power of his influence greatly exceeded the reach of his reputation. Rieff didn’t want a large readership. He wrote in knotty apothegms -- developing a set of terms that resembled sociological jargon less than the private language of some brilliant but eccentric rabbi. With his later texts (including My Life Among the Deathworks, just published by the University of Virginia Press) you do not so much read Rieff as sit at his feet.

But in his first book, Freud: The Mind of the Moralist (1959) -- his dissertation from the University of Chicago, as rewritten with the help of his first wife, Susan Sontag -- the knack for aphorisms had not yet hardened into a tic. He was still addressing a broad audience of educated readers, not disciples. And it was in the final pages of that volume that he sketched the concept of “psychological man.”

According to Rieff’s careful reading, the founder of psychoanalysis was no subversive champion of the id against bourgeois society. Rather, his Freud comes to resemble other Victorian sages who tried to create inner order as the established patterns of authority were dissolving. But along the way, Freud also helped foster a new system of values – one toward which Rieff would show deep and growing ambivalence.

The new “character ideal” that Rieff saw emerging in Freud’s wake was no longer inspired by religious faith, or a strong sense of civic responsibility. Psychological man would not even need to cultivate the sort of self-interested self-control practiced by his immediate ancestor, homo economicus. (Think of Benjamin Franklin, making himself wealthy and wise by careful planning.) Psychological man need not fret over material security -- being, after all, reasonably comfortable in an affluent society. His energies would turn inward, toward the care and maintenance of the self.

Rieff returned to the future of psychological man in his second book, The Triumph of the Therapeutic. Its final sentence verges on a prophetic statement, then carefully backs away:

“That a sense of well-being has become the end, rather than the by-product of striving after some communal end,” wrote Rieff, “announces a fundamental change in the entire cast of our culture -- toward a human condition about which there will be nothing further to say in terms of the old style of hope and despair.”

It can be strange to read some of the earliest discussions of Rieff’s work, for there was occasionally a tendency to regard him as cheerleading “the triumph of the therapeutic.” This was wide of the mark. Eventually Rieff did find things to say about this cultural transformation “in the old style of despair.”

He became a cultural reactionary. I mean that term as a description, rather than a denunciation. He saw culture as a system of restraints (what he termed “interdicts”) that prevented the individual from being swamped by the excessive range of potential human desires and behaviors. Thrown into “the abyss of possibility,” man “becomes not human but demonic.” So Rieff put it in reviewing Hannah Arendt’s The Origins of Totalitarianism.

As the historian Christopher Lasch once put it, Rieff belonged to “the party of the superego.” (Lasch translated many of Rieff’s insights about psychological man into a neo-Marxist analysis of the “culture of narcissism” emerging in advanced capitalist society.) And it was the duty of any teacher worthy of the name to play the role of superego to the hilt. “Authority untaught,” Rieff declared in the early 1970s, “is the condition in which a culture commits suicide.”

His later writings are, in effect, a series of coroner’s reports. “We professionals of the reading discipline,” he stated in My Life Among the Deathworks, “we are the real police. As teaching agents of sacred order, and inescapably within it, the moral demands we must teach, if we are teachers, are those eternal truths by which all social orders endure.” And Rieff made it pretty clear that he did not think this was happening.

There are plenty of conservative publicists in America now. There are not many conservative thinkers proper -- thinkers worthy of the name. Rieff, for all his crotchety obliqueness, was one of them. (By the way, the ratio of philosophers to propagandists is hardly any better on the left.)

In scrutinizing the logic of contemporary culture, Rieff indirectly revealed some of the dark secrets of U.S. politics -- which has been dominated by the right wing for at least a quarter century now. The therapeutic has triumphed in the red states as well as the blue. Any reference to how Ronald Reagan and George W. Bush proved themselves as great leaders by “helping America feel good about itself” confirms that psychological man is often happy to vote Republican.

But more than that, Rieff is of lasting interest for upholding an exorbitant standard of seriousness. The Feeling Intellect, a collection of his essays published by the University of Chicago Press in 1990, is rather awe-inspiring in the range and intensity of its erudition -- though you do have to look past the strangely cultish introduction by one of the author’s devotees, Jonathan B. Imber, a professor of sociology at Wellesley College.

And his early polemic in the culture wars, Fellow Teachers, is some kind of cranky masterpiece. (It is now out of print.) One passage in particular has left a strong impression, lingering in my mind like the voice of a testy grandfather telling me to get off the Internet.  

“Our sacred world must remain the book,” he says. “No, not the book: the page.... To get inside a page of Haydn, of Freud, of Weber, of James: only so can our students be possessed by an idea of what it means to study.... Then, at least, they may acquire a becoming modesty about becoming ‘problem-solvers,’ dictating reality. Such disciplines would teach us, as teachers, that it would be better to spend three days imprisoned by a sentence than any length of time handing over ready-made ideas.”

Reading this again, I feel guilty of a thousand sins. Which is, of course, the intent. There are qualities and opinions in Rieff’s work that are difficult to admire. But studying him has at least one good effect. It teaches you to think about the difference between a strong want and a justifiable need -- and to keep a safe distance from anything tending to blur that distinction.

Scott McLemee (scott.mclemee@insidehighered.com)

If:book, Then What?

Digital publishing has been a hot topic for some time, but it has received a good deal of attention of late, thanks to a series of recent developments. This year’s meeting of the Association of American University Presses, for example, devoted a panel to the subject. Meanwhile, Rice University has just announced plans to launch the first all-digital university press. In a slightly different (though related) context, rumors abound that the next generation of Apple’s immensely popular iPod will possess the ability to download, store, and read book content.

Clearly, the movement toward digital content delivery is gaining steam. And, as such, it is not surprising to read that the technology’s more vocal enthusiasts are forecasting nothing short of a revolution in academic research, teaching, reading, writing, and publishing once it becomes ubiquitous.

Over at if:book, the collective blog of the "Institute for the Future of the Book," commentators have had a great deal to say about the immense transformations that digital delivery and online publishing will effect on the academy and academics.

Particularly instructive is the institute’s "MediaCommons," a "project-in-progress" aimed at "exploring the future of electronic scholarly publishing and its many implications, including the development of alternate modes of peer review and the possibilities for networked interaction amongst authors and texts." In support of this goal, the if:book collective spent a good deal of time this past spring meeting, brainstorming, and discussing the possibilities of a "new model of academic publishing." They even "wrote a bunch of manifestos" (apparently, the irony of resorting to such a 19th-century device as the "manifesto" was lost on them). Still, when one filters out the soul-deadening jargon about "authentic learning opportunities," "self-reflexivity," "mediated environments," etc. that permeates their posts, it’s clear that the blog’s authors and readers are thinking creatively and earnestly (although rather pretentiously) about the potential of the digital age to transform academic writing.

To this end, if:book is making considerable noise about McKenzie Wark's GAM3R 7H30RY, a "monograph" (their scare quotes, not mine) hosted by the institute that goes beyond even the relatively newfangled notion of the e-book toward a new über-standard in digital publishing: the "networked book." Wark’s in-progress project (an “exploration” of whether computer games may “serve as allegories for the world we live in”) is being undertaken entirely online, enabling interested readers (and more than a few gamers) to post continuous live commentary as Wark uploads drafts to the Web. Such an approach, if:book contributor Kathleen Fitzpatrick has announced, creates an “openness and interconnection” that will "allow us to make the process of scholarly work just as visible and valuable as its product; readers will be able to follow the development of an idea from its germination in a blog, through its drafting as an article, to its revisions, and authors will be able to work in dialogue with those readers, generating discussion and obtaining feedback on work-in-progress at many different stages. Because such discussions will take place in the open, and because the enormous time lags of the current modes of academic publishing will be greatly lessened, this ongoing discourse among authors and readers will no doubt result in the generation of many new ideas, leading to more exciting new work."

In the end, transparency, interconnectedness, and immediacy will emerge strengthened by the new digital regime.

Then again, there are obvious downsides to such an approach. GAM3R 7H30R1S7 Wark has already received nearly 400 comments. That’s fine as far as it goes. But the time devoted to responding to those commentators (learned, not-so-learned, and dumb-as-a-post) is time not spent on other, more profitable scholarly pursuits. In any event, one suspects that this is not a model that would transfer well to, say, scholars writing about neoplatonic epistemology or the symbolic meanings of Malawi's Chongoni rock art.

Still, projects like MediaCommons and GAM3R 7H30RY raise an important question: Will digital content delivery and the emergence of e-books and “networked books” bring about a revolution in the way that scholars research, write, and communicate their ideas?

Perhaps.

But, then again, perhaps not.

I’m not entirely sold on the claims being made by the most fervent advocates of digital delivery. As is often the case when a technology is still in its infancy, enthusiasts tend to exaggerate its ultimate impact in transforming culture and society. Frequently, proponents fail to contemplate (because it is often impossible to foresee) the obstacles and unintended consequences that inevitably surface as efforts are made to popularize a favored device among the masses (trans-oceanic dirigible tours or flying cars, anyone?). It strikes me that, at present, the transformative potential of digital publishing in academe is being oversold and, in many cases, misunderstood.

Just as digital publishing and new technological delivery systems will make possible the broader dissemination of academic writing, so, too, will they make possible the broader dissemination of non-academic texts and visual content. Purveyors of the types of academic projects esteemed by if:book will continue to face stiff competition for attention and audiences should “iReaders” become as popular as iPods. If historians of science and technology have learned anything, it’s that new technologies have the capacity to change the world for good or for ill. Or not at all. (I am prepared to bet a great deal of money that the development of an iReader, for example, will prove much less of a boon to academics than to purveyors of porn and self-help guides.)

Similarly, the emphasis that contributors to if:book seem to place on the “transparency” of scholarship and “immediacy” of publication made possible by digital delivery misses a very important point. There is much value to be found in not releasing one’s ideas to peers and public while those ideas are still half-baked. In many respects, the instantaneous delivery of “new media” writing is at odds with the solitude, meditation, and patience that are the hallmarks of traditional scholarship. Perhaps this is less true in if:book’s favored field (media studies), but it is manifestly not so for such disciplines as history, philosophy, and the like. Nor should it be. One can build a convincing case that, in the current age of instant analysis, self-absorbed “experts,” and ubiquitous 24/7 live blog feeds, the last thing that the academy needs is to embrace transparency and immediacy.

This is not to say that the effects of the digital revolution will not be profound, only that they are likely to be different from what enthusiasts currently believe. As yet, very few scholarly monographs have been "born digital." While it's clear that, given the ongoing economic pressures faced by academic publishers, the movement toward digital delivery will continue (if for no other reason than that it may cut costs for cash-strapped university presses), how this will all play out (for good, for ill, or for naught) is not currently clear. It will be clear eventually, but only after it has already taken place.

I am not a Luddite. I am not opposed to the efforts of if:book enthusiasts to consider and to explore the potential benefits that digital content delivery may bring to academic research and writing. If:anything, I am in favor of the growth of electronic publishing. After all, my own monograph is being published as part of the History E-Book Project.

Still, digital disciples would do well to temper their exuberance. They should at least begin to consider the many ways in which a move to all digital content delivery will adversely affect the academy and academic researchers.

Besides, the “book book,” that old-fashioned delivery system consisting of wood pulp, ink, and glue has proven to be a remarkably resilient and rather useful technology itself. It is not going to disappear anytime soon (or, perhaps, ever). Moreover, its perceived “limitations” may, in fact, turn out to be real strengths when it comes to preserving the contemplative attitude, dispassionate study, and patient reflection that are essential to lasting scholarship.

Scott W. Palmer (info@insidehighered.com)

Scott W. Palmer, a historian of Russian culture and technology, is an associate professor at Western Illinois University. He blogs in the Avia-Corner, at Dictatorship of the Air.
