Publishing

Piled Higher and Deeper

Rick Perlstein, a friend from the days of Lingua Franca, is now working on a book about Richard Nixon. Last year, he published a series of in-depth articles about the Republican Party and the American conservative movement. (Those are not quite the same thing, though that distinction only becomes salient from time to time.) In short, Perlstein has had occasion to think about honesty and dissimulation -- and about the broad, swampy territory in between, where politicians finesse the difference. As do artists and used-car salesmen....

It’s the job of historians to map that territory. But philosophers wander there, too. “What is truth?” as Nietzsche once asked. “A mobile army of metaphors, metonymies, anthropomorphisms. Truths are illusions of which one has forgotten that they are illusions.” Kind of a Cheneyo-Rumsfeldian ring to that thought. It comes from an essay called “On Truth and Lie in an Extra-Moral Sense,” which does, too, come to think of it.  

So anyway, about a week ago, Rick pointed out a recent discussion of how the Bush Administration is dealing with critics who accuse it of fudging the intelligence that suggested Saddam Hussein had weapons of mass destruction. The link went to a comment by Joshua Micah Marshall, who is a liberal Democrat of the more temperate sort, not prone to hyperventilation. 

“Garden variety lying is knowing it’s Y and saying it’s X,” he wrote, giving Lyndon Johnson on the Gulf of Tonkin as an example. The present executive branch, he continued, shows “a much deeper indifference to factual information in itself.”

Rick posed an interesting question: “Isn't Josh Marshall here describing as the Administration's methodology exactly what that Princeton philosophy prof defines as ‘bullshit’?” That prof being, of course, Harry Frankfurt, whose short and best-selling treatise On Bullshit will probably cover everyone’s Christmas bonus at Princeton University Press this year. 

In February, The New York Times beat us by a day or so with its article on the book, which daintily avoided giving its title. But "Intellectual Affairs" first took a close look, not just at Frankfurt’s text -- noting that it remained essentially unchanged since its original publication as a scholarly paper in the 1980s -- but at the philosophical critique of it presented in G.A. Cohen’s essay “Deeper into Bullshit.” 

Since then, the call for papers for another volume of meditations on the theme of bull has appeared. Truly, we are living in a golden age.

The gist of Frankfurt’s argument, as you may recall, is that pitching BS is a very different form of activity from merely telling a lie. And Marshall’s comments do somewhat echo the philosopher’s point. Frankfurt would agree that “garden variety lying” is saying one thing when you know another to be true. The liar operates within a domain that acknowledges the difference between accuracy and untruth. The bullshitter, in Frankfurt’s analysis, does not. In a sense, then, the other feature of Marshall’s statement would seem to fit. Bullshit involves something like “indifference to factual information in itself.”

So does it follow, then, that in characterizing the Bush team’s state of mind three years ago, during the run-up to the war, we must choose between the options of incompetence, dishonesty, and bullshit?
Please understand that I frame it in such terms, not from any political motive, but purely in the interest of conceptual rigor. 

That said.... It seems to me that this range of terms is inadequate. One may agree that Bush et al. are profoundly indifferent to verifiable truth without concluding that the Frankfurt category necessarily applies.

Per G. A. Cohen’s analysis in “Deeper into Bullshit,” we must stress that Frankfurt’s model rests on a particular understanding of the consciousness of the liar. The mind of the bullshitter is defined by contrast to this state. For the liar, (1) the contrast between truth and untruth is clearly discerned, and (2) that difference would be grasped by the person to whom the liar speaks. But the liar’s intentionality also includes (3) some specific and lucidly grasped advantage over the listener made possible by the act of lying.

By contrast, the bullshitter is vague on (1) and radically unconcerned with (2). There is more work to be done on the elements of relationship and efficacy indicated by (3). We lack a carefully argued account of bullshit’s effect on the bullshitee.

There is, however, another possible state of consciousness not adequately described by Frankfurt’s paper. What might be called “the true believer” is someone possessing an intense concern with truth.

But it is a Higher Truth, which the listener may not (indeed, probably cannot) grasp. The true believer is speaking a truth that somehow exceeds the understanding of the person hearing it.

During the Moscow Trials of the late 1930s, Stalin’s prosecutor lodged numerous charges against the accused that were, by normal standards, absurd. In many cases, the “evidence” could be shown to be false. But so much worse for the facts, at least from the vantage point of the true believer. If you’ve ever known someone who got involved in EST or a multi-level marketing business, the same general principle applies. In each case, it is not quite accurate to say that the true believers are lying. Nor are they bullshitting, in the strictest sense, for they maintain a certain fidelity to the Higher Truth.

Similarly, it did not matter three years ago whether or not any evidence existed to link Saddam and Osama. To anyone possessing the Higher Truth, it was obvious that Iraq must be a training ground for Al Qaeda. And guess what? It is now. So why argue about it?

On a less world-historical scale, I see something interesting and apropos in Academe, the magazine of the American Association of University Professors. In the latest issue, David Horowitz makes clear that he is not a liar just because he told a national television audience something that he knew was not true. 

(This item was brought to my attention by a friend who teaches in a state undergoing one of Horowitz’s ideological rectification campaigns. My guess is that he’d rather not be thanked by name.)

Here’s the story so far: In February, while the Ward Churchill debate was heating up, Horowitz appeared on Bill O’Reilly’s program. It came up that Horowitz, like Churchill, had been invited to lecture at Hamilton College at some point. But he was not, he said, “a speaker paid by and invited by the faculty.” 

As we all know, university faculties are hotbeds of left-wing extremism. (Especially the business schools and engineering departments. And reports of how hotel-management students are forced to read speeches by Pol Pot are positively blood-curdling.) Anyway, whenever Horowitz appears on campus, it’s because some plucky youngsters invite him. He was at Hamilton because he had been asked by “the conservative kids.”

That came as a surprise to Maurice Isserman, a left-of-center historian who teaches at Hamilton College. When I saw him at a conference a few years ago, he seemed to have a little gray in his hair, and his last book, The Other American: The Life of Michael Harrington, was a biography of the founder of the Democratic Socialists of America. No doubt he’s been called all sorts of things over the years, but “conservative kid” is not one of them. And when Horowitz spoke at Hamilton a few years ago, it was as a guest lecturer in Isserman’s class on the 1960s. 

As Isserman put it in the September/October issue of Academe: “Contrary to the impression he gave on ‘The O’Reilly Factor,’ Horowitz was, in fact, an official guest of Hamilton College in fall 2002, invited by a faculty member, introduced at his talk by the dean of the faculty, and generously compensated for his time.”

I will leave to you the pleasure and edification of watching Horowitz explain himself in the latest issue of Academe. But in short, he could not tell the truth because that would have been a lie, so he had to say something untrue in order to speak a Higher Truth. 

My apologies for the pretzel-like twistiness of that paraphrase. It is all so much clearer in the original Newspeak: Thoughtcrime is doubleplus ungood.

Author: Scott McLemee (scott.mclemee@insidehighered.com)

The Shift Away From Print

For most scholarly journals, the transition away from the print format and to an exclusive reliance on the electronic version seems all but inevitable, driven by user preferences for electronic journals and concerns about collecting the same information in two formats. But this shift away from print, in the absence of strategic planning by a higher proportion of libraries and publishers, may endanger the viability of certain journals and even the journal literature more broadly -- while not even reducing costs in the ways that have long been assumed. 

Although the opportunities before us are significant, a smooth transition away from print and to electronic versions of journals requires concerted action, most of it undertaken individually by libraries and publishers.

In reaching this conclusion, we rely largely on a series of studies, of both publishers and libraries, in which we examined some of the incentives for a transition and some of the opportunities and challenges that present themselves. Complete findings of our library study, on which we partnered with Don King and Ann Okerson, were published as The Nonsubscription Side of Periodicals. We also recently completed a study of the operations of 10 journal publishers, in conjunction with Mary Waltham, an independent publishing consultant. 

Taken together, these studies suggest that an electronic-only environment would be more cost-effective than print-only for most journals, with cost savings for both libraries and publishers. But this systemwide perspective must also be balanced against a more textured examination of libraries and publishers.

On the publisher side, the transition to online journals has been facilitated by some of the largest publishers, commercial and nonprofit. These publishers have already invested in and embraced a dual-format mode of publishing; they have diversified their revenue streams with separately identifiable income from both print and, increasingly, electronic formats. Although the decreasing number of print subscriptions may have a negative impact on revenues, these publishers’ pricing has evolved alongside the economies of online-only delivery to mitigate the effects of print cancellations on the bottom line.

The trend has been to adopt value-based pricing that recognizes the convenience of a single license serving an entire campus (rather than multiple subscriptions), with price varying by institutional size, intensity of research activity, and/or number of online users. By “flipping” their pricing to be driven primarily by the electronic version, with print effectively an add-on, these publishers have been able to manage the inevitable decline of their print business without sacrificing net earnings. They are today largely agnostic to format and, when faced with price complaints, are now positioned to recommend that libraries consider canceling their print subscriptions in favor of electronic-only access.

Other journal publishers, especially smaller nonprofit scholarly societies in the humanities and social sciences and some university presses, are only beginning to make this transition. Even when they publish electronic versions in addition to print, these publishers have generally been slower to reconceive their business models to accommodate a dual-format environment that might rapidly become electronic-only. Their business models depend on revenues received from print, in some cases with significant contributions from advertising, and are often unable to accommodate significant print cancellations in favor of electronic access. 

Until recently, this has perhaps not been unreasonable, as demand for electronic journals has been slower to build in the humanities and some social science disciplines. But the business models of these publishers are now not sufficiently durable to sustain the journals business in the event that libraries move aggressively away from the print format. 

Many American academic libraries have sought to provide journals in both print and electronic formats for the past 5 to 10 years. The advantages of the electronic format have been clear, so these were licensed as rapidly as possible, but it has taken time for some faculty members to grow comfortable with an exclusive dependence on the electronic format. In addition, librarians were concerned about the absence of an acceptable electronic-archiving solution, given that their cancellation of print editions would prevent higher education from depending on print as the archival format.

In the past year or two, the movement away from print by users in higher education has expanded and accelerated. No longer is widespread migration away from print restricted to early adopters like Drexel and Suffolk Universities; it has become the norm at a broad range of academic institutions, from liberal arts colleges to the largest research universities. Ongoing budget shortfalls in academe have probably been the underlying motivation. The strategic pricing models offered by some of the largest publishers, which offer a price reduction for the cancellation of print, have provided a financial incentive for libraries to contemplate completing the transition. 

Faced with resource constraints, librarians have been required to make hard choices, electing not to purchase the print version but only to license electronic access to many journals -- a step more easily made in light of growing faculty acceptance of the electronic format. Consequently, especially in the sciences, but increasingly even in the humanities, library demand for print has begun to fall. As demand for print journals continues to decline and economies of scale of print collections are lost, there is likely to be a tipping point at which continued collecting of print no longer makes sense and libraries begin to rely only upon journals that are available electronically.  
As this tipping point approaches, at unknown speed, libraries and publishers need to evaluate how they can best manage it. We offer several specific recommendations.

  • First, for those publishers that have not yet developed a strategy for an electronic-only journals environment and the transition to it, the future is now. Today’s dual-format system can only be managed effectively with a rigorous accounting of costs and revenues, and of how these break down between the print and electronic formats. Because some costs incurred irrespective of format are difficult to allocate, this accounting is complicated. It is also, however, critical, allowing publishers to understand the performance of each format as currently priced and, as a result, to project how the transition to an electronic-only environment would affect them. Publishers that do not immediately undertake these analyses and, if necessary, adjust their business models accordingly may suffer dramatically as the transition accelerates and libraries reach a tipping point.
  • Second, in this transition, libraries and higher education more broadly should consider how they can support the publishers that are faced with a difficult transition. A disconcerting number of nonprofit publishers, especially the scholarly societies and university presses that have the greatest presence in the humanities and social sciences, have a particularly complicated transition to make. University presses and scholarly societies have traditionally been strong allies of academic libraries, and they may have priced their electronic journals generously (and unrealistically). Consequently, a business model revamped to accommodate the transition may often result in a significant price increase for the electronic format. In cases where price increases are not predatory but rather adjustments for earlier unrealistic prices, libraries should act with empathy. If libraries cancel journals based on large percentage price increases (even when, measured in dollars, the increases are trivial), they may unintentionally punish lower-priced publishers struggling to make the transition as efficiently as possible.
  • Third, this same set of publishers is particularly vulnerable, because their strategic planning must take place without the working capital and the economies of scale on which larger publishers have relied. As a result, some humanities journals published by small societies are not yet even available electronically. The community needs collaborative solutions like Project Muse or HighWire (initiatives that provide the infrastructure to create and distribute electronic journals) for the scholarly societies that publish the smaller journals in the humanities and social sciences. But if such solutions are not developed or cannot succeed on a broader scale in relatively short order, the alternative may be the replacement of many of these journals with blogs, repositories, or other less formal distribution models.
  • Fourth, although libraries today face difficult questions about whether and when to proceed with electronic-only access to traditionally print journals, they should try to manage this transition strategically and, in doing so, deserve support from all members of the higher education community. It has been unusual thus far for libraries to undertake a strategic, all-encompassing format review process, since it is often far more politically palatable to cancel print versions as a tactical retreat in the face of budgetary pressures. But a chaotic retreat from print will almost certainly not allow libraries to realize the maximum potential cost savings, whereas a managed strategic format review can permit far more effective planning and cost savings.

Beyond a focus on local costs and benefits, there are a number of broader issues that many libraries will want to consider in such a strategic format review. The widespread migration from print to electronic seems likely to eliminate library ownership of new accessions, with licensing taking the place of purchase. In cases where ownership led to certain expectations or practices, these will have to be rethought in a licensing-only environment.
From our perspective, the safeguarding of materials for future generations is among the most pressing practices deserving reconsideration. Questions about the necessity of developing or deploying electronic archiving solutions, and the adequacy of the existing solutions, deserve serious consideration by all libraries contemplating a migration away from print resources. In addition, the transition to electronic journals begins to raise questions about how to ensure the preservation of existing print collections. Many observers have concluded that a paper repository framework is the optimal solution, but although individual repositories have been created at the University of California, the Five Colleges, and elsewhere, the organizational work to develop a comprehensive framework for them has yet to begin.

The implications of licensing for archiving, and the fate of existing print collections, can be addressed as part of any library’s strategic planning for the transition to an electronic-only environment -- but all too often they are forgotten under the pressure of the budgetary axe.

These challenges appear to us to be some of the most urgent facing libraries and publishers in the nearly inevitable transition to an electronic-only journals environment. Both libraries and publishers should proceed under the assumption that the transition may take place fairly rapidly, as either side may reach a tipping point when it is no longer cost-effective to publish or purchase any print versions. It is not impossible for this transition to occur gracefully, but to do so will require the concerted efforts of individual libraries and individual publishers.

Authors: Eileen Gifford Fenton and Roger C. Schonfeld (info@insidehighered.com)

Eileen Gifford Fenton is executive director of Portico, whose mission is to preserve scholarly literature published in electronic form and to ensure that these materials remain accessible. Portico was launched by JSTOR and is being incubated by Ithaka, with support from the Andrew W. Mellon Foundation. Roger C. Schonfeld is coordinator of research for Ithaka, a nonprofit organization formed to accelerate the productive uses of information technologies for the benefit of academia. He is the author of JSTOR: A History (Princeton University Press, 2003). 

Aiming the Can(n)on

If we could retire for good one old expression from the Culture Wars, I’d like to nominate "the literary canon." Is there anything new to say about it? Has even the most gung-ho Culture Warrior seized a new bit of territory within recent memory? It looks as if all the positions have been occupied, and the battles fought to a dull standstill.

On the one side, Bill O’Reilly and his ilk passionately love Shakespeare. Or rather, they at least enjoy the idea that somebody else will be forced to read him. And on the other side, the fierce struggle to “open the canon” usually looks like an effort to break down an unlocked door.

Checking the entry in New Keywords: A Revised Vocabulary of Culture and Society -- a reference work recently published by Blackwell -- I learn that the canon is, by definition, always something open to revision. Which would, of course, come as a really big surprise to many generations of rabbis, priests, and imams.

But perhaps that underscores the real problem here. The term "canon" rests on an analogy between an established set of cultural masterpieces, on the one hand, and the authoritative body of scriptures, on the other hand. And the problem with this comparison is that, deep down, it is almost impossible to take seriously. "Canon" is not so much a concept as a dead metaphor -- or rather, perhaps, a stillborn one.

If you are a full-fledged resident of secular modernity (that is, somebody accustomed to the existence of a deep moat of separation between sacred and worldly institutions) then the strongest original sense of “the canon” is just barely imaginable.

And if you have rejected secular modernity altogether -- if you believe that God once broke into human affairs long enough to make perfectly clear what He has in mind for us -- then the notion of secular literary works as having some vaguely comparable degree of authority must seem absurd. Or blasphemous.

Once in a great while, a writer or thinker reframes things so that the expression seems to come back to life. The late Northrop Frye, for example, took seriously William Blake’s aphorism calling the Bible "the Great Code of Art." Frye worked out a theory of literature that, in effect, saw the entire DNA of Western literature as contained in Judeo-Christian scripture. And then there is the example of Adonis, the great Lebanese author, who has pointed to the challenge of creating poetry in Arabic. How can you obey the modernist imperative to "make it new" in a language deeply marked by the moment in time it was used to record the commands of God?

But Frye and Adonis are exceptions. Usually, when we talk about "the canon," it is without any strong sense of a complicated relationship between literature and authority. Between words and the Word.

Instead, the debates are really over the allocation of resources -- and the economy of prestige within academic institutions. To say that a given literary figure is "part of the canon" actually means any number of profitable investments have been made in the study of that author. Conversely, to "question the canon" is a strategic move with consequences for the bottom line. (As in, "Do we really need to hire a Miltonist?")

But that means we’ll never get rid of that expression “the literary canon” -- if only because it sounds more dignified than “the literary spreadsheet.”

Is that too cynical? Can’t we assume that works defined as canonical possess some quality that places them above the give-and-take of institutional horse trading?

As a roundabout way of thinking about such questions, let me point your attention to a seemingly unrelated item that appeared in The Washington Post over the weekend.

It seems that there has recently been an intense discussion on an Internet bulletin board in China devoted to the work of Lu Xun, an author who lived from 1881 to 1936. The exchanges concerned one text in particular, his essay “In Memory of Ms. Liu Hezhen” -- a work unfortunately not available online in English, so far as I can tell.

The essay appeared in April 1926, a few weeks after government troops opened fire on a demonstration, killing 40 students. One of them, a 22-year-old woman named Liu Hezhen, had been a devoted reader of Lu Xun’s literary magazine The Wilderness and attended his lectures on Chinese literature at National Beijing Women's Normal University.

She was, Lu wrote, “a student of mine. At least, I used to think of her as one.... She, as a young Chinese woman who has dedicated her life to the nation, is no longer a student of a person like me, who still lingers on superfluously in this world.” (All quotations are from the translation appearing in Women in Republican China: A Sourcebook, edited by Hua R. Lan and Vanessa L. Fong, published by M.E. Sharpe in 1999.)

It is a moving essay, and there is now a substantial body of scholarly commentary on it. But as the Post article reported, the sudden interest in Lu Xun’s essay suggests that people are using it “as a pretext to discuss a more current and politically sensitive event -- the Dec. 6 police shooting of rural protesters in the southern town of Dongzhou in Guangdong province.” Despite the official news blackout and the Chinese government’s efforts to censor the Internet, it seems that information about the Dongzhou massacre is spreading.

This development raises complex questions about the role that new media play in developing countries, and under authoritarian regimes. This being the age of high tech, people always want to discuss it -- and, of course, we’d damned well better.

But to be honest, I found it a lot more interesting that people were using Lu Xun’s essay as a reference point. It points to questions about the relationship between literary power and political authority. That Chinese citizens are using the Web and instant messaging to execute an end-run around official censorship is certainly interesting and important. But so is the classic author they are rereading while so engaged. 

It is hard to overstate the role that Lu Xun has played in Chinese culture over most of the past century. His martyred student Liu Hezhen was only one of thousands of young readers inspired by his work in the 1920s. He did not join the Communist Party, but drew close to it in the years before his death in 1936. And after the revolutionaries came to power in 1949, Lu was “canonized” in the strongest sense possible for a completely secular regime.

At the height of the Cultural Revolution (when, as a friend who lived through it once told me, the morning class in elementary school was math, and the afternoon was Mao), the selected quotations of Lu Xun were available in a little red book, just as the Great Helmsman’s were. And even after Mao’s own legacy was quietly downplayed in later decades, the field of “Lu Xun studies” continued as a basic part of Chinese scholarly life.

The novelist Ha Jin, professor of English at Boston University, gives some sense of the author’s continuing prominence in his introduction to a recent edition of Lu’s short stories. “Hundreds of books have been written on his life and writings,” he notes, “and several officially funded journals have been devoted to him. There are even papers on his real estate contracts, the aesthetics of the designs of his books, the rents he paid, and his favorite Japanese bookstores. Novels have appeared based on different periods and aspects of his life, not to mention movies, operas, and TV shows adapted from his fiction.”

All of this might look like evidence for the simplest model of how a literary canon is formed: An author gives voice to the ideology of the powers-that-be -- whether dead white property-owning European males, or revolutionary communist Chinese bureaucrats, or whatever. And those powers then return the favor by making the author a “classic.” All very clearcut, yes?

Actually, no. It happens that Lu Xun gained his prominence, not as an ideologue, but as a writer of great power -- a figure embodying both moral authority and a capacity for literary innovation.

His earliest work was written in the classic or high style of literary language. He gave an important course of lectures on the history of Chinese fiction, and was a master practitioner of the “eight-legged essay” (a very formal structure once used in civil-service exams for the Imperial bureaucracy).

But at some point in his 30s, Lu Xun had a creative breakthrough. He published a series of classic short stories combining sophisticated fictional technique with colloquial language. I don’t know Chinese, and must rely on the accounts of those who do. But even scholars disgusted by the official Maoist cult around Lu Xun admire his profound effect on the literary resources of the language. For example, in his book Lu Xun and Evolution (SUNY Press, 1998), James Reeve Pusey writes that the author “ ‘found himself’ in the creation of a new language, a highly literary, iconoclastically erudite, powerfully subtle vernacular that no one has since used with such mastery.”

And some of his power comes through even in translation. One of Lu Xun’s classic stories is “Diary of a Madman,” in which the everyday corruption and brutality of village life is seen as refracted through the mind of someone sinking ever deeper into paranoia. The narrator becomes convinced that the people around him practice cannibalism. His only hope, he confides to his diary, is that a few young people haven’t tasted human flesh. The final line of the story reads: “Save the children....”

Around the time government troops were shooting down students in 1926, Lu was drifting away from fiction. He instead concentrated on writing what were called zagan (“sundry thoughts”) or zawen (“miscellaneous writings”) -- short, topical prose compositions on whatever caught his attention. The state of his country worried him, and he poured his anger into hundreds of short pieces.

Not everyone liked this phase of his work. Have a look at the following bitter comment from 1931, by a critic who disliked Lu Xun’s later writings: “Zagan compositions, limited to a paltry thousand words, can naturally be done in one sweep of the brush. You catch at a thought, and in the time it takes to smoke a cigarette your thousand words are produced....There is just one formula for zagan compositions: either heated abuse or cold sarcasm. If you can append a word or two of cold sarcasm to the heated abuse, or insert some heated abuse amidst the cold sarcasm, that is all to the good.”

In short, Lu Xun invented the blog entry. (I’m sure that somewhat anachronistic thought has already occurred to people in China, who are discussing recent events via commentary on his work.)

His topics were as ephemeral as any newspaper article. But there is enough wordplay, historical allusion, metaphorical resonance, and heartfelt passion to make them something more than that. A whole scholarly industry is devoted to analyzing these essays. Indeed, by the late 1980s, the field of Lu Xun studies had become so “professionalized” (as that favorite expression of the MLA has it) that one young scholar was worried that it had become completely disconnected from anything of interest to the average reader.

So Lu Xun remains, by any definition, part of the Chinese literary canon, to use that word once again. (And if you see the revolutionary ideologies of the 20th century as continuing the old Gnostic heresy of “immanentizing the eschaton” -- as one school of conservative thinkers does -- then I suppose even the quasi-scriptural overtones might also apply.)

But does that mean China would be democratizing only if Lu Xun lost his place? Or to put it more broadly: Are cultural and social power necessarily related? Don’t literary authority and political regime tend to be mutually reinforcing?

Those are open questions. But I can’t help thinking of another question -- one that someone reportedly asked Mao in the late 1950s. What would Lu Xun’s role be if he were still alive? Mao answered that Lu would either remain quiet or go to jail. (And this from the man who canonized him.)

Rereading “In Memory of Ms. Liu Hezhen” this week, I was struck in particular by the second of the essay’s seven parts. The translation is a little stiff, but the passage is worth quoting in full:

“A real hero should dare to face the tragedy of life and look unwaveringly at bloodshed. It is at once sorrowful and joyful! But the Creator has determined for the sake of the ordinary people to let time heal all the wounds and to leave behind only slight traces of blood and sorrow. It is in these traces of blood and sorrow that people get a humble life and manage to keep this woeful world going. When shall we see the light at the end of such a tunnel, I do not know.”

Imagine how much has been written about that passage over the past couple of weeks. And think of all the questions it must raise -- about the past, about the future.

If I were a Chinese official with some interest in the long-term welfare of my own hide, then I might have a strong interest, right about now, in “opening up the canon.” (Or abolishing it.) Perhaps literature is an unreliable way of shoring up the established order and transmitting stabilizing cultural values. It might be a good idea to discourage the reading of Lu Xun, and get people to watch "Fear Factor" instead.

Author: Scott McLemee (scott.mclemee@insidehighered.com)

Another View on 'The Access Principle'

A recent article featured a series of questions and answers with John Willinsky, whose new book argues that whenever possible, scholarly information should be disseminated online and free. Here's how I would answer those same questions.

Q: Can you define “the access principle”?

A: While Willinsky considers knowledge, especially scholarly knowledge, a public good, most university administrators and faculty don't, at least not yet. Many universities provide support for their university presses, a commendable practice that also sustains the whole infrastructure of promotion and tenure, not only at universities with university presses but at all other universities and colleges. If all universities and colleges contributed to a fiscal pool to encourage publication, the system of scholarship could support itself under many different economic models, and the grip of commercial publishers could be weakened over time.

Q: Many publishers argue that journals and materials for which one must pay are somehow by definition of higher quality than various open models. How do you respond?

A: Distribution costs have indeed been minimized over time with technology such as e-books and online access systems, but acquisition and editorial costs have not declined. Since PLoS journals are freely available, citations should be higher. Thus, the high impact factor. However, the PLoS biology journal is mirroring a field that is suited to quick and accessible publication. Further, the journal is highly subsidized, and PLoS is counting on creating a large number of journals to minimize the per-article cost that authors pay upfront. Over time, this model might bear fruit, but the SPARC project didn't succeed in denting commercial publication or pricing and doesn't even look to publish new journals anymore.

Q: What economic models could allow those who publish journals to embrace the access principle? How can costs be covered?

A: Open access is supported by Purdue University Press. In fact, the press would give away all scholarship if it could find a financial backer to allow this. However, the reality is that costs need to be recovered. True, open access is making some inroads by allowing authors to self-archive in their institutions' digital repositories, but easy retrieval is not available. In some fields, a six-month delay renders findings of historical value only. The most apparent hurdle, however, is the archiving needs of academic libraries, which make paper copies (at least on an annual basis) a must. Open access has clear advantages in some fields and in some forms (journals), but not all information is suited to this paradigm of distribution.

Q: To what extent do open models of information relate not only to price, but to speed of information distribution? How does this relate to your goals?

A: A primary function of a university press is to "authorize" information by having material peer-reviewed. Technology has allowed for online editorial management systems and formatted and quick publication. Still, a major problem that libraries face as overseers of digital repositories is the fact that "it's built, but no one will contribute." The acquisition function is best suited to an organization, like a university press, that knows how to acquire and format data.

Q: An open access principle also means that members of the public who might never pay for a journal can read an article when they want. Would this change the nature of scholarship?

A: Open access should help the interdisciplinarity of scholarship as long as searching systems are robust. Information must be bundled and incorporated into the university's prime information center -- the library. Otherwise, even open access publications will be underexposed.

Q: The quest for tenure is a huge motivating factor for young faculty members. Do you think the biases of the tenure system (in favor of more established journals) hinder the spread of the access principle?

A: Easy availability obviously leads to more citations. This is true with most pieces of information, good or bad. Willinsky also cites the cream of the journal crop when he gives examples. Tenure committees will undoubtedly still evaluate individual cases on the quality of research as well as the quantity. Simply put, established journals have voices in particular fields due to longevity and the members of their editorial boards. Faculty attitudes must change to affect tenure in a serious way.

Q: How do you think scholarly publishing will be different five years from now?

A: The impact of open access can indeed be astounding. The digital enterprise will continue to move more publications to other platforms. However, if incentives (recognition, tenure, income, etc.) don't reward those who want their research to find these new environments, the move will be slower. Further, the advent of digital repositories housing not only digitized documents but, more importantly, data sets will bolster research once the problems of submission are solved. Scholarly communication and distribution are far more diverse than Willinsky acknowledges. The sciences and other professional fields, given their information needs, are bound to find the digital environment most suitable. Other areas will use digital resources but might not be as driven to move major serial or monographic publications to an e-only space.

Q: In light of the themes of your book, are you taking steps to make it available online and free?

A: Willinsky's answer is informative. The open access revolution is in its infancy. His publisher still needs to cover costs.

 

Author: Thomas Bacher (info@insidehighered.com)

Thomas Bacher is director of Purdue University Press.

Literature to Infinity

Graphs, Maps, Trees: Abstract Models for a Literary History is a weird and stimulating little book by Franco Moretti, a professor of English and comparative literature at Stanford University. It was published a few months ago by Verso. But observation suggests that its argument, or rather its notoriety, now has much wider circulation than the book itself. That isn’t, I think, a good thing, though it is certainly the way of the world.

In a few months, Princeton University Press will bring out the first volume of The Novel: History, Geography, and Culture -- a set of papers edited by Moretti, based on the research program that he sketches in Graphs, Maps, Trees. (The Princeton edition of The Novel is a much-abridged translation of a work running to five volumes in Italian.) Perhaps that will redefine how Moretti’s work is understood. But for now, its reputation is a hostage to somewhat lazy journalistic caricature -- one mouthed, sometimes, even by people in literature departments.

What happened, it seems, is this: About two years ago, a prominent American newspaper devoted an article to Moretti’s work, announcing that he had launched a new wave of academic fashion by ignoring the content of novels and, instead, just counting them. Once, critics had practiced “close reading.” Moretti proposed what he called “distant reading.” Instead of looking at masterpieces, he and his students were preparing gigantic tables of data about how many books were published in the 19th century.

Harold Bloom, when reached for comment, gave one of those deep sighs for which he is so famous. (Imagine Zero Mostel playing a very weary Goethe.) And all over the country, people began smacking their foreheads in exaggerated gestures of astonishment. “Those wacky academics!” you could almost hear them say. “Counting novels! Whoever heard of such a thing? What’ll those professors think of next -- weighing them?”

In the meantime, it seems, Moretti and his students have been working their way across 19th century British literature with an adding machine -- tabulating shelf after shelf of Victorian novels, most of them utterly forgotten even while the Queen herself was alive. There is something almost urban legend-like about the whole enterprise. It has the quality of a cautionary tale about the dangers of pursuing graduate study in literature: You start out with a love of Dickens, but end up turning into Mr. Gradgrind.

That, anyway, is how Moretti’s “distant reading” looks ... well, from a distance. But things take on a somewhat different character if you actually spend some time with Moretti’s work itself.

As it happens, he has been publishing in English for quite some while: His collection of essays called Signs Taken for Wonders: On the Sociology of Literary Forms (Verso, 1983) was, for a long time, the only book I’d ever read by a contemporary Italian cultural theorist not named Umberto Eco. (It has recently been reissued as volume seven in Verso’s new Radical Thinkers series.) The papers in that volume include analyses of Restoration tragedy, of Balzac’s fiction, and of Joyce’s Ulysses.

In short, then, don’t believe the hype -- the man is more than a bean-counter. There is even an anecdote circulating about how, during a lecture on “distant reading,” Moretti let slip a reference that he could only have known via close familiarity with an obscure 19th century novel. When questioned later -- so the story goes -- Moretti made some excuse for having accidentally read it. (Chances are this is an apocryphal story. It sounds like a reversal of David Lodge’s famous game of “intellectual strip-poker” called Humiliation.)

And yet it is quite literally true that Moretti and his followers are turning literary history into graphs and tables. So what’s really going on with Moretti’s work? Why are his students counting novels? Is there anything about “distant reading” that would be of interest to people who don’t, say, need to finish a dissertation on 19th century literature sometime soon? And the part, earlier, about how the next step would be to weigh the books -- that was a joke, right?

To address these and many other puzzling matters, I have prepared the following Brief Guide to Avoid Saying Anything Too Dumb About Franco Moretti.

He is doing literary history, not literary analysis. In other words, Moretti is not asking “What does [insert name of famous author or novel here] mean?” but rather, “How has literature changed over time? And are there patterns to how it has changed?” These are very different lines of inquiry, obviously. Moretti’s hunch is that it might be possible to think in a new way about what counts as “evidence” in cultural history.  

Yes, in crunching numbers, he is messing with your head. The idea of using statistical methods to understand the long-term development of literary trends runs against some deeply entrenched patterns of thought. It violates the old idea that the natural sciences are engaged in the explanation of mathematically describable phenomena, while the humanities are devoted to the interpretation of meanings embedded in documents and cultural artifacts.

Many people in the humanities are now used to seeing diagrams and charts analyzing the structure of a given text. But there is something disconcerting about a work of literary history filled with quantitative tables and statistical graphs. In filling his book with them, Moretti is not just being provocative. He’s trying to get you to “think outside the text,” so to speak.

Moretti is taking the long view.... A basic point of reference for his “distant reading” is the work of Fernand Braudel and the Annales school of historians who traced the very long-term development of social and economic trends. Instead of chronicling events and the doings of individuals (the ebb and flow of history), Braudel and company looked at tendencies taking shape over decades or centuries. With his tables and graphs showing the number (and variety) of novels offered to the reading public over the years, Moretti is trying to chart the longue durée of literary history, much as Braudel did the centuries-long development of the Mediterranean.

Some of the results are fascinating, even to the layperson’s eye. One of Moretti’s graphs shows the emergence of the market for novels in Britain, Japan, Italy, Spain, and Nigeria between about 1700 and 2000. In each case, the number of new novels produced per year grows -- not at the smooth, gradual pace one might expect, but with the wild upward surge one might expect of a lab rat’s increasing interest in a liquid cocaine drip.

“Five countries, three continents, over two centuries apart,” writes Moretti, “and it’s the same pattern ... in twenty years or so, the graph leaps from five [to] ten new titles per year, which means one new novel every month or so, to one new novel per week. And at that point, the horizon of novel-reading changes. As long as only a handful of new titles are published each year, I mean, novels remain unreliable products, that disappear for long stretches of time, and cannot really command the loyalty of the reading public; they are commodities, yes, but commodities still waiting for a fully developed market.”

But as that market emerges and consolidates itself -- with at least one new title per week becoming available -- the novel becomes “the great capitalist oxymoron of the regular novelty: the unexpected that is produced with such efficiency and punctuality that readers become unable to do without it.”

And then the niches emerge: The subgenres of fiction that appeal to a specific readership. On another table, Moretti shows the life-span of about four dozen varieties of fiction that scholars have identified as emerging in British fiction between 1740 and 1900. The first few genres appearing in the late 18th century (for example, the courtship novel, the picaresque, the “Oriental tale,” and the epistolary novel) tend to thrive for long periods. Then something happens: After about 1810, new genres tend to emerge, rise, and decline in waves that last about 25 years each.

“Instead of changing all the time and a little at a time,” as Moretti puts it, “the system stands still for decades, and is then ‘punctuated’ by brief bursts of invention: forms change once, rapidly, across the board, and then repeat themselves for two [to] three decades....”

Genres as distinct as the “romantic farrago,” the “silver-fork novel,” and the “conversion novel” all appear and fade at about the same time -- to be replaced by a different constellation of new forms. It can’t, argues Moretti, just be a matter of novelists all being inspired at the same time. (Or running out of steam all at once.) The changes reflect “a sudden, total change of their ecosystem.”

Moretti is a cultural Darwinist, or something like one. Anyway, he is offering an alternative to what we might call the “intelligent design” model of literary history, in which various masterpieces are the almost sacramental representatives of some Higher Power. (Call that Power what you will -– individual genius, “the literary imagination,” society, Western Civilization, etc.) Instead, the works and the genres that survive are, in effect, literary mutations that possess qualities that somehow permit them to adapt to changes in the social ecosystem.

Sherlock Holmes, for example, was not the only detective in Victorian popular literature, nor even the first. So why is it that we still read his adventures, and not those of his competitors? Moretti and his team looked at the work of Conan Doyle’s rivals. While clues and deductions were scattered around in their texts, the authors were often a bit off about how they were connected. (A detective might notice the clues, then end up solving the mystery through a psychic experience, for example.)

Clearly the idea of solving a crime by gathering clues and decoding their relationship was in the air. It was Conan Doyle’s breakthrough to create a character whose “amazing powers” were, effectively, just an extremely acute version of the rational powers shared by the reader. But the distinctiveness of that adaptation only comes into view by looking at hundreds of other texts in the literary ecosystem.

This is the tip of the tip of the iceberg. Moretti’s project is not limited by the frontiers of any given national literature. He takes seriously Goethe’s idea that all literature is now world literature. In theory, anyway, it would be possible to create a gigantic database tracking global literary history.

This would require enormous computational power, of course, along with an army of graduate students. (Most of them getting very, very annoyed as they keypunched data about Icelandic magazine fiction of the 1920s into their laptops.)

My own feeling is that life is much too short for that. But perhaps a case can be made for the heuristic value of imagining that kind of vast overview of how cultural forms spread and mutate over time. Only in part is Moretti’s work a matter of counting and classifying particular works. Ultimately, it’s about how literature is as much a part of the infrastructure of ordinary life as the grocery store or Netscape. And like them, it is caught up in economic and ecological processes that do not respect local boundaries.

That, anyway, is an introduction to some aspects of Moretti’s work. I’ve just learned that Jonathan Goodwin, a Brittain Postdoctoral Fellow at Georgia Tech, is organizing an online symposium on Moretti that will start next week at The Valve.

Goodwin reports that there is a chance Moretti himself may join the fray. In the interim, I will be trying to untangle some thoughts on whether his “distant reading” might owe something to the (resolutely uncybernetic) literary theory of Georg Lukacs. And one of the participants will be Cosma Shalizi, a visiting assistant professor of statistics at Carnegie Mellon University.

It probably wouldn’t do much good to invite Harold Bloom into the conversation. He is doubtless busy reciting Paradise Lost from memory, and thinking about Moretti would not be good for his health. Besides, all the sighing would be a distraction.

Author: Scott McLemee (scott.mclemee@insidehighered.com)

Notes from the Underground

Normally my social calendar is slightly less crowded than that of Raskolnikov in Crime and Punishment. (He, at least, went out to see the pawnbroker.) But late last month, in an unprecedented burst of gregariousness, I had a couple of memorable visits with scholars who had come to town -- small, impromptu get-togethers that were not just lively but, in a way, remarkable.

The first occurred just before Christmas, and it included (besides your feuilletonist reporter) a political scientist, a statistician, and a philosopher. The next gathering, also for lunch, took place a week later, during the convention of the Modern Language Association. Looking around the table, I drew up a quick census. One guest worked on British novels of the Victorian era. Another writes about contemporary postcolonial fiction and poetry. We had two Americanists, but of somewhat different specialist species; besides, one was a tenured professor, while the other is just starting his dissertation. And, finally, there was, once again, a philosopher. (Actually it was the same philosopher, visiting from Singapore and in town for a while.)

If the range of disciplines or specialties was unusual, so too was the degree of conviviality. Most of us had never met in person before -- though you’d never have known that from the flow of the conversation, which never seemed to slow down for very long. Shared interests and familiar arguments (some of them pretty esoteric) kept coming up. So did news about an electronic publishing initiative some of the participants were trying to get started. On at least one occasion at each meal, someone had to pull out a notebook to have someone else jot down an interesting citation to look up later.

In each case, the members of the ad hoc symposium were academic bloggers who had gotten to know one another online. That explained the conversational dynamics -- the sense, which was vivid and unmistakable, of continuing discussions in person that hadn’t started upon arriving at the restaurant, and wouldn’t end once everyone had dispersed.

The whole experience was too easygoing to call impressive, exactly. But later -- contemplating matters back at my hovel, over a slice of black bread and a bowl of cold cabbage soup -- I couldn’t help thinking that something very interesting had taken place. Something having little to do with blogging, as such. Something that runs against the grain of how academic life in the United States has developed over the past two hundred years.

At least that’s my impression from having read Thomas Bender’s book Intellect and Public Life: Essays on the Social History of Academic Intellectuals in the United States, published by Johns Hopkins University Press in 1993. That was back when even knowing how to create a Web page would raise eyebrows in some departments. (Imagine the warnings that Ivan Tribble might have issued, at the time.)

But the specific paper I’m thinking of -- reprinted as the first chapter -- is even older. It’s called “The Cultures of Intellectual Life: The City and the Professions,” and Bender first presented it as a lecture in 1977. (He is currently professor of history at New York University.)

Although he does not exactly put it this way, Bender’s topic is how scholars learn to say “we.” An intellectual historian, he writes, is engaged in studying “an exceedingly complex interaction between speakers and hearers, writers and readers.”  And the framework for that “dynamic interplay” has itself changed over time. Recognizing this is the first step towards understanding that the familiar patterns of cultural life – including those that prevail in academe – aren’t set in stone. (It’s easy to give lip service to this principle. Actually thinking through its implications, though, not so much.)

The history of American intellectual life, as Bender outlines it, involved a transition from civic professionalism (which prevailed in the 18th and early 19th centuries) to disciplinary professionalism (increasingly dominant after about 1850).

“Early American professionals,” he writes, “were essentially community oriented. Entry to the professions was usually through local elite sponsorship, and professionals won public trust within this established social context rather than through certification.” One’s prestige and authority were very strongly linked to a sense of belonging to the educated class of a given city.

Bender gives as an example the career of Samuel Bard, the New York doctor who championed building a hospital to improve the quality of medical instruction available from King’s College (as Columbia University was known back in the 1770s). Bard had studied in Edinburgh and wanted New York to develop institutions of similar caliber; he also took the lead in creating a major library and two learned societies.

“These efforts in civic improvement were the product of the combined energies of the educated and the powerful in the city,” writes Bender, “and they integrated and gave shape to its intellectual life.”

Nor was this phenomenon restricted to major cities in the East. Visiting the United States in the early 1840s, the British geologist Charles Lyell noted that doctors, lawyers, scientists, and merchants with literary interests in Cincinnati “form[ed] a society of a superior kind.” Likewise, William Dean Howells recalled how, at his father’s printing office in a small Ohio town, the educated sort dropped in “to stand with their back to our stove and challenge opinion concerning Holmes and Poe, Irving and Macaulay....”

In short, a great deal of one’s sense of cultural “belonging” was bound up with community institutions -- whether that meant a formally established local society for the advancement of learning, or an ad hoc discussion circle warming its collective backside near a stove.

But a deep structural change was already taking shape. The German model of the research university came into ever greater prominence, especially in the decades following the Civil War. The founding of Johns Hopkins University in 1876 defined the shape of things to come. “The original faculty of philosophy,” notes Bender, “included no Baltimoreans, and no major appointments in the medical school went to members of the local medical community.” William Welch, the first dean of the Johns Hopkins School of Medicine, “identified with his profession in a new way; it was a branch of science -- a discipline -- not a civic role.”

Under the old regime, the doctors, lawyers, scientists, and literary authors of a given city might feel reasonably comfortable in sharing the first-person plural. But life began to change as, in Bender’s words, “people of ideas were inducted, increasingly through the emerging university system, into the restricted worlds of specialized discourse.” If you said “we,” it probably referred to the community of other geologists, poets, or small-claims litigators.

“Knowledge and competence increasingly developed out of the internal dynamics of esoteric disciplines rather than within the context of shared perceptions of public needs,” writes Bender. “This is not to say that professionalized disciplines or the modern service professions that imitated them became socially irresponsible. But their contributions to society began to flow from their own self-definitions rather than from a reciprocal engagement with general public discourse.”

Now, there is a definite note of sadness in Bender’s narrative -- as there always tends to be in accounts of the shift from Gemeinschaft to Gesellschaft. Yet it is also clear that the transformation from civic to disciplinary professionalism was necessary.

“The new disciplines offered relatively precise subject matter and procedures,” Bender concedes, “at a time when both were greatly confused. The new professionalism also promised guarantees of competence -- certification -- in an era when criteria of intellectual authority were vague and professional performance was unreliable.”

But in the epilogue to Intellect and Public Life, Bender suggests that the process eventually went too far. “The risk now is precisely the opposite,” he writes. “Academe is threatened by the twin dangers of fossilization and scholasticism (of three types: tedium, high tech, and radical chic). The agenda for the next decade, at least as I see it, ought to be the opening up of the disciplines, the ventilating of professional communities that have come to share too much and that have become too self-referential.”

He wrote that in 1993. We are now more than a decade downstream. I don’t know that anyone else at the lunchtime gatherings last month had Thomas Bender’s analysis in mind. But it has been interesting to think about those meetings with reference to his categories.

The people around the table, each time, didn’t share a civic identity: We weren’t all from the same city, or even from the same country. Nor was it a matter of sharing the same disciplinary background – though no effort was made to be “interdisciplinary” in any very deliberate way, either. At the same time, I should make clear that the conversations were pretty definitely academic: “How long before hundreds of people in literary studies start trying to master set theory, now that Alain Badiou is being translated?” rather than, “Who do you think is going to win American Idol?”

Of course, two casual gatherings for lunch do not a profound cultural shift make. But it was hard not to think something interesting had just transpired: A new sort of collegiality, stretching across both geographic and professional distances, fostered by online communication but not confined to it.

The discussions were fueled by the scholarly interests of the participants. But there was a built-in expectation that you would be willing to explain your references to someone who didn’t share them. And none of it seems at all likely to win the interest (let alone the approval) of academic bureaucrats.

Surely other people must be discovering and creating this sort of thing -- this experience of communitas. Or is that merely a dream? 

It is not a matter of turning back the clock -- of undoing the division of labor that has created specialization. That really would be a dream.

But as Bender puts it, cultural life is shaped by “patterns of interaction” that develop over long periods of time. For younger scholars, anyway, the routine give-and-take of online communication (along with the relative ease of linking to documents that support a point or amplify a nuance) may become part of the deep grammar of how they think and argue. And if enough of them become accustomed to discussing their research with people working in other disciplines, who knows what could happen?

“What our contemporary culture wants,” as Bender put it in 1993, “is the combination of theoretical abstraction and historical concreteness, technical precision and civic give-and-take, data and rhetoric.” We aren’t there, of course, or anywhere near it. But sometimes it does seem as if there might yet be grounds for optimism.

Author: Scott McLemee (scott.mclemee@insidehighered.com)

Stolen Words

What I remember about that morning was that the black cloud was already overhead when I woke up. It followed me around. My wife wondered what all the sighing was about.

"Today I ruin this guy’s life," I told her. "His department told me he has office hours around noon. He’s going to pick up the phone, and when he does, I’m going to have to ask him questions that will be humiliating. It's going to ruin his life.” 

“You shouldn’t look at it that way,” she said. “He did it to himself.” 

True enough. I had a dossier of material showing that the professor in question had engaged in plagiarism -- quite a lot of it, actually, and in the very book that had gotten him tenure. 

One author he plagiarized from had assembled a document in what seems to be the classic format for such cases. The left-hand column contained paragraphs of her work. The one on the right was from his book, published several years later, copied more or less word for word, with the occasional minuscule tweak of phrase or punctuation -- but without so much as a faint gesture of acknowledgment in the text, the footnotes, or the bibliography.

As I found through some digging, she was not the only author he had expropriated. (It is a safe generalization that plagiarists are always serial offenders.) With the other aggrieved parties, he had come to some kind of quiet agreement -- while the university he worked for remained none the wiser. That was about to change.

I would give him a chance to explain himself, of course. But really there was not much he could say. Plagiarism is one offense where simply presenting the evidence often amounts to conviction.

To be honest, researching the story had involved a certain amount of aggressive glee on my part. There is a special pleasure that comes from establishing an airtight case. (Besides, the superego is a bit of a sadist.) But now, with the prospect of actually talking to the guy looming, it was surprising to feel contempt give way to pity. His luck had run out. In a couple of days, he would be notorious. It felt as if I were serving as his judge, jury, and executioner -- not to mention the court stenographer. Oddly enough, I felt guilty.

Besides, the psychology of the serial plagiarist is so puzzling as to be a fairly absorbing mystery. So I’d discovered a few years earlier from Norman Fruman’s book Coleridge, the Damaged Archangel (Braziller, 1971).

The poet had not simply borrowed a thought or image here and there. Some of the occasional borrowings in his verse might be discounted as, well, poetic license. After all this time, the fact that Coleridge extracted large parts of his theory of imagination from the work of German philosophers seems more interesting than it is shocking. (The notion of intertextuality can be used to excuse a variety of sins.)

But when you learn that most of Coleridge’s prose writings were also copied from other writers -- often from Grub Street hacks of his day -- then it seems that something very odd is going on. And the more you love his poetry, the harder it is to know what to think of his kleptomania. Should you be indignant? Or just perplexed?

As for the 21st-century professor... he was no tortured Romantic genius. He did sound mortified when I called, and deeply regretful. He also managed to blame his graduate student assistant, who, he asserted, was somehow the one really at fault. (Just as the two-column format is the standard way of documenting plagiarism, so, it seems, the grad-student assistant is the standard scapegoat, at least among light-fingered academics.)

That half-hearted acceptance of responsibility on his part did the trick. My ambivalence vanished. A week or so later, the university announced that he had resigned from his position. I felt neither pride nor guilt -- only the mild curiosity appropriate to something that's now really none of your business.

But the topic of plagiarism itself keeps returning. One professor after another gets caught in the act. The journalists and popular writers are just as prolific with other people's words. And as for the topic of student plagiarism, forget it -- who has time to keep up?

It was not that surprising, last fall, to come across the call for papers for a new scholarly journal called Plagiary: Cross-Disciplinary Studies in Plagiarism, Fabrication, and Falsification. I made a mental note to check its Web site again -- and saw that it began publishing this month.

One study is already available at the site: an analysis of how the federal Office of Research Integrity handled 19 cases of plagiarism involving research supported by the U.S. Public Health Service. Another paper, scheduled for publication shortly, will review media coverage of the Google Library Project. Several other articles are now working their way through peer review, according to the journal’s founder, John P. Lesko, an assistant professor of English at Saginaw Valley State University, and will be published throughout the year in open-access form. There will also be an annual print edition of Plagiary. The entire project has the support of the Scholarly Publishing Office of the University of Michigan.

In a telephone interview, Lesko told me that research into plagiarism is central to his own scholarship. His dissertation, titled “The Dynamics of Derivative Writing,” was accepted by the University of Edinburgh in 2000; extracts from it appear at his Web site Famous Plagiarists, which he says now gets between 5,000 and 6,000 visitors per month.

While the journal Plagiary has a link to Famous Plagiarists, and vice versa, Lesko insists that they are separate entities -- the former scholarly and professional, the latter his personal project. And that distinction is a good thing, too. Famous Plagiarists tends to hit a note of stridency such that, when Lesko quotes Camille Paglia denouncing the poststructuralists as “cunning hypocrites whose tortured syntax and encrustations of jargon concealed the moral culpability of their and their parents' generations in Nazi France,” she seems almost calm and even-tempered by contrast.

“It seems that both Foucault and Barthes’ contempt for the Author was expressed in some rather plagiaristic utterances,” he writes, “a parroting of the Nietzschean ‘God is dead’ assertion.” That might strike some people as confusing allusion with theft. But Lesko is vehement about how the theorists have served as enablers for the plagiarists, as well as receivers of hot cargo.

“After all,” he writes, “a plagiarist -- so often with the help of collaborators and sympathizers -- steals the very livelihood of a text’s real author, thus relegating that author to obscurity for as long as the plagiarist’s name usurps a text, rather than the author being recognized as the text's originator. Plagiarism of an author condemns that author to death as a text’s rightfully acknowledged creator...”  (The claim that Barthes and Foucault were involved in diminishing the reputation of Nietzsche has not, I believe, ever been made before.)

To a degree, his frustration is understandable. In some quarters, it is common to recite -- as though it were an established truth, rather than an extrapolation from one of Foucault’s essays -- the idea that plagiarism is a “historically constructed” category of fairly recent vintage: something that came into being around the 18th century, when a capitalistically organized publishing industry found it necessary to foster the concept of literary property.

A very interesting argument, to be sure -- though not one that holds up under much scrutiny.

The term “plagiarism” in its current sense is about two thousand years old. It was coined by the Roman poet Martial, who complained that a rival was biting his dope rhymes. (I translate freely.) Until he applied the word in that context, plagiarius had meant someone who kidnapped slaves. Clearly some notion of literary property was already implicit in Martial’s figure of speech, which dates to the first century A.D.

At around the same time, Jewish scholars were putting together the text of that gigantic colloquium known as the Talmud, which contains a passage exhorting readers to be scrupulous about attributing their sources. (And in keeping with that principle, let me acknowledge pilfering from the erudition of Stuart P. Green, a professor of law at Louisiana State University at Baton Rouge, whose fascinating paper "Plagiarism, Norms, and the Limits of Theft Law: Some Observations on the Use of Criminal Sanctions in Enforcing Intellectual Property Rights" appeared in the Hastings Law Journal in 2002.)

In other words, notions of plagiarism and of authorial integrity are very much older than, say, the Romantic cult of the absolute originality of the creative genius. (You know -- that idea Coleridge ripped off from Kant.)

At the same time, scholarship on plagiarism should probably consist of something more than making strong cases against perpetrators of intellectual thievery. That has its place, of course. But how do you understand it when artists and writers make plagiarism a deliberate and unambiguous policy? I’m thinking of Kathy Acker’s novels, for example. Or the essayist and movie maker Guy Debord’s proclamation in the 1960s: “Plagiarism is necessary. Progress demands it.” (Which he, in turn, had copied from the avant-garde writer Lautreamont, who had died almost a century earlier.)

Why, given the potential for humiliation, do plagiarists run the risk? Are people doing it more, now?  Or is it, rather, now just a matter of more people getting caught?

Given Lesko’s evident passion on the topic of plagiarism as a moral transgression -- embodied most strikingly, perhaps, in his color-coded War on Plagiarism Threat Level Analysis -- I had to wonder if the doors of Plagiary would be open to scholars not sharing his perspective.

Was it worth the while of, say, a Foucauldian to offer him a paper? 

“It may be that I’m a bit more conservative than some scholars,” he conceded. But he pointed out that manuscripts submitted to Plagiary undergo a double-blind review process. They are examined by three reviewers -- most of them, but not all, from the journal’s editorial board.

There is no ideological or theoretical litmus test, and he’s actively seeking contributions from people you might not expect. “I’m willing to consider articles from plagiarists,” he said. 

That’s certainly throwing the door wide open. You would probably want to vet their work pretty carefully, though. 

Author: Scott McLemee (scott.mclemee@insidehighered.com)

Everything to Everyone

Whether we’re aware of it or not, the doctrine of “fair use” built into copyright law is one of the most important protections available to scholars, librarians, and students. Every time you quote from someone else’s work, every time you photocopy an article for a student, every time you read a passage aloud to your class, you are technically in violation of copyright.

The reason that an army of publishers and FBI agents isn’t smashing down your office door is that U.S. jurisprudence has long understood that a totalizing approach to copyright would be disastrous. Fair use is the only way we as individuals can carry on what is fundamentally a collective endeavor -- scholarship -- in an information ecology that otherwise lives and dies by the intensely individualizing force of the marketplace.

But fair use has been carrying a heavy load lately, and it’s starting to show its limitations. Over the last few decades, and especially amid the recent “copyright wars,” a powerful new philosophy has emerged: Rather than seeing copyright as a careful balance between the interests of private owners and the public, powerful content industries have argued that robustly protecting private interests is always the best way to serve the public. It’s the trickle-down theory of knowledge: Give the power to the producers and get out of the way, and it will eventually get to everyone who needs it. And digital technologies have handed copyright owners further power to regulate the use of their work, and to commodify information in ways never before imagined.

While most of us in higher education are little content industries ourselves, we should not be seduced into forgetting our role first and foremost as the keepers, distributors, and developers of our society’s body of public knowledge. We must fight for the promise copyright made to the public: All these economic rights are only in the service of intellectual progress. However, our whole rhetorical arsenal in this battle seems to consist of trotting out fair use, i.e., the right to violate copyright for progressive reasons. Technical copy protection? Don’t forget about fair use. Restricting peer-to-peer networks? Don’t forget about fair use. Suing our students for downloading? Don’t forget about fair use. Automatic permission systems in educational courseware? Don’t forget about fair use. It’s no wonder the poor statute can barely stand, considering how often it is invoked as a defense and criticized as folly.

This dependence on fair use to somehow safeguard all of the myriad “public interest” elements of copyright’s balance risks crushing it altogether -- nowhere more than in the pending battle around Google Book Search.

For those who don’t know, the search engine giant recently announced its aspiration to digitize every book ever printed. To do this, it partnered with the university libraries of Stanford, Harvard, Michigan, and Oxford, and with the New York Public Library. Together they have already begun the process of digitizing works whose copyright protection has run out -- right now, those published before 1923. These books would be full-text searchable and could be read in their entirety online, for free. For more recent books still protected under copyright, Google intends to digitize and make them searchable as well; however, the text returned in response to the search query would only be a short excerpt around the located word or phrase. Publishers who don’t want their work to appear at all can opt out of the system. Links will lead users to vendors where the book in question can be purchased.

To be clear, Google’s project does require making copies of numerous copyrighted books -- unauthorized copies, at that. Google says this copying is a fair use. And in lawsuits brought in September and October of 2005, the Authors Guild and the Association of American Publishers argue that it is a violation of their rights, and an attempt to unfairly capitalize on their work.

Unlike the battles around digital music that have occupied the courts’ attention of late, this case will be of vital importance for the academic community. What is at stake is the possibility of a digital database of all written knowledge, and the question of who gets to produce it and under what conditions. Some think this is the Library of Alexandria finally realized; others think it’s risky to have just one company running the stacks. But the case will live or die not on the question of the value of such a database to users, but on the narrower legal question of whether Google has the right to scan the books to begin with.

Perhaps this case will settle -- Google certainly has the funds to do so if it chooses. If it does get heard by the courts, what is of greatest importance, I believe, is how well the doctrine of fair use can carry the weight of this particular dispute. Lawrence Lessig has argued that fair use is being stretched thin because copying is so fundamental to the digital environment; uses that never even rang copyright’s bell, because they now require a copy to be made in the process, find themselves under legal scrutiny. I believe this is true. But fair use has already been pulled in too many directions, well before the Internet stretched it to its breaking point.

Fair use has a long history in U.S. courts as a handy way for judges to stave off copyright claims when the use in question is socially valuable. At first, it was a way to protect small amounts of copying for the sake of criticism; as Justice Story noted in Folsom v. Marsh (1841), “no one can doubt that a reviewer may fairly cite largely from the original work, if his design be really and truly to use the passages for the purposes of fair and reasonable criticism. On the other hand, it is as clear, that if he thus cites the most important parts of the work, with a view, not to criticize, but to supersede the use of the original work, and substitute the review for it, such a use will be deemed in law a piracy.”

As such, one of the important criteria used by the courts to judge a use fair has been whether the new work is “transformative,” rather than merely replacing the old. The most famous such case is Campbell v. Acuff-Rose (1994), in which a surprisingly culturally savvy Supreme Court found that 2 Live Crew’s sampling of the Roy Orbison classic “Oh, Pretty Woman” was a kind of parody, however crude, and should be protected as fair -- it “adds something new, with a further purpose or different character, altering the first with new expression, meaning, or message.”

However, when fair use was finally codified in 1976, the primary motivation was not to protect criticism or parody but to accommodate the increasing use of the Xerox machine, particularly in education. University libraries did not want to risk liability when they made copies of journals and book chapters for faculty and students, and aggressively lobbied Congress for some legal protection to do so. When fair use became law, it included the four factors that had developed through court precedent, but also specified “multiple copies for classroom use” alongside parody, criticism, journalism, and scholarship as the likely contexts for the use to be considered fair.

Making multiple copies of an article for use in the classroom does not claim to produce a new work, in the way that sampling Orbison’s tune does. The value of the use is not that it is “transformative,” but that it is “distributive.” Now fair use is saddled with two aspirations. If the first understands that new work often needs to lean on and borrow from existing work, the second understands that the market mechanisms and distribution technologies that circulate work do not always achieve the scope and access we would like, or that other socially valuable activities require.

The courts have since used fair use in this “distributive” sense: allowing cable TV to retransmit copyrighted broadcasts to audiences who could not otherwise receive them; prohibiting Kinko’s from producing course packets without paying a fee, while leaving open the possibility that universities could do so as long as they do not enjoy direct commercial gain; and, most notably, in Sony v. Universal (1984), granting VCR manufacturers immunity from copyright penalties even though some VCR users make unauthorized copies of protected movies. The court argued that users have the right to record shows in order to watch them at other times, and that this in fact “enlarges the television viewing audience” -- even the beloved Mr. Rogers testified that he wanted public school teachers to be able to tape his show and show it in class the next day. Again, these fair uses are not transformative, but distributive.

Is Google’s book search project fair use? This was the question vigorously debated, but by no means settled, at the recent “Battle over Books” debate at the New York Public Library and in the blog-off that followed. Most copyright watchers agree that, if the case makes it to court, the legal answer will come down to a battle of precedents. (See, for example, Jonathan Band’s “The Google Print Library: A Copyright Analysis.”) Google will come out on top if the court sees the case as akin to Kelly v. Arriba Soft (2003), which allowed an image search engine to copy images from the Web so as to make thumbnail versions available in response to user queries.

The publishers and authors will likely triumph if the court turns to UMG Recordings et al. v. MP3.com (2000), in which MP3.com was found to be infringing when it made single copies of 400,000 CDs in order to stock a digital locker from which users could stream music they could prove they already owned. Google needs fair use to accommodate an activity that is neither “transformative” in the classic sense nor “distributive” in the Sony sense. Neither precedent managed that either; the solutions in those cases were work-arounds, forcing the square pegs of searching and streaming into the oddly shaped hole fair use offers them.

Let’s give fair use a break by sending in a legislative relief pitcher, one that can better allow for the role search engines play in facilitating the circulation of digital information. If fair use has been protecting both “transformative” and “distributive” uses, today we need a statute that can cover the kind of “indexing” uses that Google is after.

If we recognize that the Internet offers us the chance to make much more of our society’s culture and knowledge available to more people, and we recognize that to make this massive resource most useful requires ways to navigate and search it, and we further recognize that search engines like Google need to make copies of that work in order to make it searchable, then we have a genuine and reasonable public interest in ensuring that they and others can do so. At the same time, we should also ensure that doing so doesn’t undercut the possibility of selling these works; ideally, it should help their sales.

The publishers’ concern is not that Google shouldn’t make books searchable, but that Google should have to pay a fee to do so. Such a fee would compensate them not for lost sales, but for what they might have earned had they provided this search function themselves. So let’s imagine that they do just that; HarperCollins has already announced that it will develop a digital database of its books, following the lead of academic journal publishers like Sage. We could decide that this is a reasonable exploitation of one’s copyright, and forbid Google from building a library.

What this is likely to produce is a bunch of different, publisher-specific archives, all searchable under different criteria in different ways, all with different rules for how much text you can view, under what conditions -- and at what price. Smaller publishers will be less able to afford any of this, so once again we will be incidentally privileging those represented by the larger publishers, when what we want is for all work to be as available as possible.

And all publishers will be in a position to exclude some of their works from public view, for whatever idiosyncratic (or, more likely, financial) reasons they fancy. Perhaps someone would develop a meta-search that could query many of these archives simultaneously and return the results together -- in all likelihood, it would be Google. But this does not solve the systemic problem posed by letting publishers also govern access to their content.

What I think we’re after is something more straightforward, but nearly impossible to achieve. In this dream scenario, every author would make his or her work available in a digital form that is searchable but cannot be redistributed, in a widely compatible format, marked with the same kinds of metadata. We wouldn’t need Google Book Search, because these book “footprints” would all be online and available for searches just as Web sites are. But this is certainly an unreasonable and prohibitive request to make of authors, at least right now. For all intents and purposes, this is what Google seems willing to provide for us, with the promise of some ad revenue in return. As a less than perfect version of that ideal, it’s quite good.

Waiting for fair use to shield this expanding range of uses is slowing the innovation in information, knowledge, and culture that the Internet seems ready to facilitate. And every time we wait, we risk a court setting a retrograde precedent that cements digital culture into place for good. We need a new statute that acknowledges and accommodates the common-sense recognition that search is good, that it requires incidental copying, and that it should not be left to individual, competing publishers to decide whether their work becomes part of the public trust.

In a moment when we are handing content owners much more control not only over the use of their work but over access to it, we need to make a parallel commitment to ensuring and expanding access of a different kind: to an aggregate collection of all things thought and written that can be easily explored. And we need to let fair use protect the activities it’s designed to protect, instead of letting it fray as it stands in as the only protection against a locked and licensed digital world.

Author: Tarleton Gillespie (info@insidehighered.com)

Tarleton Gillespie is an assistant professor in the Department of Communication at Cornell University, and a Fellow with the Stanford Law School Center for Internet and Society.

Portrait of the Scholar as a Young Novelist

I knew my life was about to change when a colleague at a recent scholarly conference came up to me at the reception and told me with some bemusement that a fellow academic, whom I did not know, had asked her, “Is Jenny White a lesbian?” After many years of scholarly research, writing and teaching, I had written a novel, a mystery set in 1886 Istanbul that, along with several murders, featured a lesbian relationship. I noted with a bitter smile that no one had (yet) asked, “Is Jenny a murderer?”

Clearly fiction is assumed to be your life, while scholarship operates at a respectable remove. The novel was still two months from publication, but the buzz had already infiltrated my scholarly environment. My colleagues at a recent faculty meeting made lighthearted suggestions that we combine a planned forensic anthropology concentration with a course on mystery writing, and that I endow a chair. Leaving aside the gross overestimation of a novelist’s income, I noted with some anxiety the notoriety and loss of privacy that appears to accompany literary, as opposed to scholarly, production.

Indeed, having spent almost two decades writing grant proposals, doing field research under sometimes difficult conditions in Turkey and Germany, writing two books and many articles, and developing a reputation as a scholar to be taken seriously, I am disconcerted to find that an (as yet unpublished) novel has overtaken all of that effort in the time it takes for a few words to be whispered in the halls of a conference hotel.

Fellow airplane passengers whose eyes glaze over when I tell them I’m a social anthropologist fall right out of their seats with excitement when I mention I’ve written a novel. They want to know where they can find it and if I’d sign it. I admit to great pride in my literary creation (and an embarrassing lust for sales). I did, after all, spend a lot of time researching the historical setting and writing and rewriting obsessively.

But I can’t help but feel sorry for my poor orphaned scholarly books, beneficiaries of so many more years of work and sacrifice, eclipsed by their glamorous new sibling. This, it turns out, is but one of the dilemmas of my new life as scholar turned novelist.

There is the guilt about money. It seems unseemly to want best-seller status after so many years of meager royalties but scholarly glory. It occurs to me, not for the first time, that academics are some of the few people in our knowledge economy expected to make available the intellectual fruit of decades of labor for a pittance, or for free -- to publishers, journalists, and others who ask you (indeed, give you the honor of spending hours or days of your time) to evaluate manuscripts, give information, or travel across the country to give a talk. I regularly remind commercial textbook publishers that their offer of a $150 “honorarium” for reading a 500-page manuscript and writing an extensive review is inappropriate for a money-making enterprise.

At first, I gloried in the additional income from the novel, crowing about the amount to my friends and colleagues, dazzled by the low five-figure sum (which gives you some idea of my basement-level baseline). When the novel rights sold in nine other countries and the publisher commissioned a sequel, I became more circumspect. It seems unscholarly to revel publicly in income, although permissible to complain about it privately. Serious scholars should look like they work hard for little reward, or risk being seen as popular pundits, sellouts, writers with wide but less than high-brow audiences. (How else would they be earning all that money?) Suddenly, being a private scholar, rather than a public celebrity (the writer herself as a commodity), seems a safer and more comfortable place. Too late.

There is the anxiety about what in academese is called identity politics. Forget about a non-lesbian author writing about lesbians. What about a serious scholar of Muslim societies writing an Orientalist book full of harems and eunuchs? The fact that I tried to turn the usual expectations on their heads and write a sophisticated book means nothing to publishers who revel in Orientalism as a fantasy that sells.

The American version has a gorgeous harem scene on the cover. I was allowed to work with the artist to get some semblance of historical accuracy (the first sketch reminded me of a woman with a dishcloth on her head sitting in an antique store), but not to nix the harem theme. The British publisher sent me a proposed blurb that began “A white woman washes up on the Bosphorus....” The Turks, negotiating to join the European Union, would be very surprised to find they are not “white.” There was also a mention of “colonials” even though Turkey was the colonizer -- the Ottoman Empire. I wielded my red pencil firmly. But my pencil will be defenseless against what I imagine to be serious scholars waiting in the wings to excoriate me for pandering to the hot imaginings of the Orientalist West. To them I suggest a plain brown wrapper.

She doth protest too much, some of you might be thinking. Let me interject here some of the satisfactions of novel writing not to be found in scholarly work. For one thing, you can make stuff up. That is incredibly relaxing. After I wrestled down my scholarly reflexes (everything has to be entirely accurate; you can’t legitimately extrapolate culture backwards in time), the floodgates of pure invention opened and I allowed myself the company of increasingly interesting and genial characters. At times, I felt like a human ouija board, channeling their stories. This has taken on a new dimension as early readers of the novel have begun speaking about the characters as if they were real people with real lives.

The first reviews also have come out, prompting my agent to recommend that I “harden” myself, although so far the reviews have been fairly positive. After all these years of grant proposals and journal submissions, I could wallpaper a room with rejections and have developed a rhinoceros hide, yet I still want reviewers to like my characters Kamil and Jaanan and Sybil, and I feel for them when they’ve been misunderstood.  

Another perk of fiction writing is the boutique editing -- an agent and then an editor who go over drafts word for word, numerous times, in addition to proofreaders who minutely comb at least two sets of proofs. This is unimaginable luxury for those of us who publish with university and scholarly presses that more and more do no proofreading at all and sometimes, for good measure, screw up the clean text you send them. It’s fun. I admit it.

But then there are the readings. This is quite a change from scholarly talks, of which I have given more than I care to remember and which, I’ve been told, I do quite well. At a novel reading, you really are supposed to read from the text. The first time I tried this out on a friend in my living room, he fell asleep.

This was not promising. First of all, it is hard to pick a part of the text that is full of action. The most exciting parts are at the end, but reading those means giving away the plot. And what about those different voices? I tried a deep-throated male voice and a trilling female one, but felt like I was on “Sesame Street.” Someone suggested I pitch my voice low, someone else that I vary the tone. It all came out ridiculous. My first reading is in February, so I’ve decided to take the (for me) unusual step of not preparing. My plan, if you can dignify it with that term, is to have a drink beforehand and then ham it up. I think.

I’ve developed an unwholesome, masochistic fascination with the sorts of questions novelists are asked during their readings. “How did you come to write this novel?” I don’t honestly know. It kind of wrote itself. “Why did you write a mystery?” It’s a mystery to me. “How much of this is drawn from your life?” Nothing that I can discern. Everything. I’m not used to being asked questions. I am the ethnographer, and the control over the flow of information has until now been in my hands. I know everything about my “informants” and they know about me only what I’m willing to share. I don’t much like being on the other side. But authors are commodities and their lives are part of the package that sells books. My publicist (yes, the press has assigned someone to sell me) wants me to set up a jennywhite.net Web site. My editor tells me not to worry. “They’re just fans!” I worry about getting even more e-mails than I already have to answer every day. I worry about the invasion of my personal life, my privacy. I just plain worry.

And there is that most important and revealing of questions: What to wear? I had expected some exoticism in the world of novelists, only to find that, while they dress interestingly, it is without the “Sex and the City” flair I had come to expect from watching, well, “Sex and the City.” A perk of the fiction writer’s world is literary events, of which I’ve been to, well, one. But it was in a very posh apartment overlooking the Boston Common. I wore an antique kimono over a black cat-woman outfit that to me conjured up “literary” and “novelist.”

I needn’t have worried. It seems that literary people dress much the way professors do, with perhaps more dresses and fewer beards. And they tend to spend their time gossiping about the trade and about other people, which made me feel right at home. I noticed women wearing 1950s vintage dresses, which look good only on youngsters who don’t remember the 1950s -- or even the 1970s. I suppose I could go for the ageless diva look.

I lust for the flair I haven’t had the courage to display at the university, first as an untenured faculty member (given the advice, “keep your head down”), then as a newly tenured faculty member too busy to think about clothes, much less to shop. Female faculty wear solemn, formal clothing to establish authority in the classroom, something our male colleagues seem able to accomplish with some extra facial hair. Twice this semester, I’ve caught myself wearing a sweater inside out -- not a promising start for my diva metamorphosis, but the sure sign of a serious scholar. My kind, or perhaps somnolent, students said nothing to me. (I can only imagine what they said to each other after class.)

Lest you think it superficial to dwell with such earnestness on dress, let me reassure you that what comes out of the closet is a serious matter. In graduate school, an earnest fellow student, passing me in the hall while I was in conversation with someone about her new apartment, threw down this gauntlet without missing a stride: “You’ll never be an intellectual if you talk about things like wallpaper.” Or clothes. Ever the radical, I practice anti-establishment accessorizing -- bright scarves, exotic jewelry, colorful shoes (immediately chewed up by the scholarly brick walkways) -- turning an otherwise severe outfit into a whisper of defiance.

So as a newly fledged novelist, I have great hopes to break out of my cowardly academic persona, as well as great anxieties. But what should I wear? Why can’t I earn a scholarly award with one book and a pair of Manolo Blahnik shoes with the other and still be me? Watch for those Manolos in the classroom next year. Let’s hope I put them on the right feet.

Author: Jenny White (info@insidehighered.com)

Jenny White is associate professor of anthropology at Boston University. Her first novel, The Sultan’s Seal, is being released this month by W.W. Norton.

The Heart Has Its Reasons

Perhaps it’s best to have waited until after Valentine’s Day to think about love. The holiday, after all, has less to do with passion than with sentimentality -- that is, a fixed matrix of sighs and signs, an established and tightly run order of feelings and expressions. That is all pleasant enough. But still, it seems kind of feeble by contrast with the reality of love, which is complicated, and which can mess you up.

The distinction is not semantic. And no, I did not improvise it as some kind of roundabout excuse for forgetting the holiday. (You do not sustain a happy marriage for a dozen years without knowing to pump a few dollars into the sentimental economy in a timely fashion.)

There are times when the usual romantic phrases and symbols prove exactly right for expressing what you mean. The stock of them is, as Roland Barthes puts it in A Lover’s Discourse, like a perpetual calendar. The standard words are clichés, perhaps. But they are meaningful clichés, and nobody has sounded out their overtones with anything like Barthes’s finesse.

Still, the repertoire of romantic discourse has its limits. “The lover speaks in bundles of sentences but does not integrate these sentences on a higher level, into a work,” writes Barthes. “His is a horizontal discourse: no transcendence, no deliverance, no novel (though a great deal of the fictive).”

Well, okay, yes -- that is all true of the early days of a relationship. When you are both horizontal, the discourse between you tends to be, as well. Once you begin to build a life together, however, a certain amount of verticality, if not transcendence, imposes itself; and the nuances of what Barthes called the “lover’s discourse” are not so much lost as transformed. Even the silences are enriched. I try to keep quiet on Sunday while my wife is reading the Times, for example. There can be a kind of intimacy involved in keeping out of the other’s way.

For an account of love in this other sense, I’d recommend Harry Frankfurt’s The Reasons of Love, first published by Princeton University Press in 2004 and released in paperback just this year. The front cover announces that Frankfurt, a professor emeritus of philosophy at Princeton, is “author of the best-selling On Bullshit.”

Like the book that established the Frankfurt brand in the widest cultural marketplace, Reasons is a dry and elegant little treatise -- somehow resembling the various “manuals for reflection” from Roman or Renaissance times more than it does most contemporary academic philosophy. It consists of papers originally delivered as the Romanell-Phi Beta Kappa Lectures at Princeton in 2000, then presented again the following year as the Shearman Lectures at University College London.

The ease and accessibility of Frankfurt’s manner are somewhat misleading. There is actually an enormous amount going on within the book’s hundred pages. Despite its unassuming tone, Reasons is a late installment of Frankfurt’s work on questions of moral philosophy in general, and free will in particular. In a footnote, he points out that precision can be risky, citing a comment attributed to Niels Bohr: “He is said to have cautioned that one should never speak more clearly than one can think.” (With plenty of academic books, of course, the author faces no such danger.)

It is the second of his three lectures (titled “On Love, and Its Reasons”) that seems to me to fill in all the gaps left in Barthes’s account. Frankfurt sets his argument up so that it can apply to love of any kind -- the love of one’s family, homeland, or ideological cause, quite as much as one’s romantic partner. Indeed, the latter kind of love tends to have an admixture of messy and “vividly distracting elements” (as he terms them) that can hinder exact definition of the concept. But if the shoe fits....

For all his lucidity, Frankfurt is very alert to the paradoxical nature of love. It is not really the case that we love something because it possesses a certain quality or value. “The truly essential relationship between love and the value of the beloved,” he notes, “goes in the opposite direction. It is not necessarily as a result of recognizing their value and of being captivated by it that we love things. Rather, what we love necessarily acquires value for us because we love it.”

In that respect, Frankfurt’s understanding of love seems to follow the same lines as the thinking of a philosopher one would otherwise never confuse with him -- namely, Slavoj Žižek. For as Žižek once pointed out, if our regard for another person could be strictly reduced to a list of exactly what we found admirable or valuable about them, then the word “love” wouldn’t really apply to what we feel. And even the faults of the beloved are, for the person in love, not valid objections to feeling love. (They may drive you crazy. But the fact that they do is, in its way, a dimension of love.)

So the value of the beloved, as Frankfurt argues, is an effect of love -- not the cause. And when we love someone, we want the best for that person. In other words, we regard the beloved as an end, not as a means. “The lover desires that his beloved flourish and not be harmed,” writes Frankfurt, “and he does not desire this just for the sake of promoting some other goal.... For the lover, the condition of his beloved is important in itself, apart from any bearing it might have on other matters.”

If this sounds a little bit like the categorical imperative .... well,  that’s about half right, if just barely. Kant tells us that ethical conduct requires treating other people as ends, not as means. But that imperative is universal -- and as Frankfurt says, the feeling of love is inescapably specific. “The significance to the lover of what he loves,” he writes, “is not generic; it is ineluctably particular.”

This is where things get complicated. We don’t have a lot of say or sway in regard to love. It is not that love is blind, or that passion is irrational. Sure, that too. But while the capacity to love belongs, as Frankfurt puts it, “to our most intimate and most fundamental nature,” the demands it places on each person are not subject to personal decision making.

“We cannot help it that the direction of our personal reasoning is in fact governed by the specific final ends that our love has defined for us,” writes Frankfurt. “... Whether it would be better for us to love differently is a question that we are unable to take seriously. For us, as a practical matter, the issue cannot effectively arise.”

What makes this philosophically interesting, I take it, is that love blurs the distinction between selfishness and selflessness -- between treating the beloved as an end in itself, on the one hand, and the fact that the beloved is my beloved, in particular, on the other.

Quite a bit of ink has been spilled, over time, regarding the question of whether or not it is possible, or desirable, to establish universal principles that could be applied without reference to the local or personal interests of moral agents. “The ambition to provide an exhaustively rational warrant for the way we conduct our lives is misconceived,” says Frankfurt. But that doesn’t mean that the alternative to “the pan-rationalist fantasy” is imagining human beings to be totally capricious, completely self-inventing, or intractably self-absorbed.

Nor does it mean that, like the song says, “All you need is love.” Love simplifies nothing. At the same time, it makes life interesting -- and possible.

“The fact that we cannot help loving,” as Frankfurt puts it, “and that we therefore cannot help being guided by the interests of what we love, helps to ensure that we neither flounder aimlessly nor hold ourselves back from definitive adherence to a meaningful practical course.”

Okay, so Harry Frankfurt is not the most lyrical of philosophers. Still, he has his moments. Roland Barthes wrote that the lover’s discourse consists of stray sentences -- never adding up to a coherent work, let alone anything with a structure, like a novel. But when Frankfurt says that love ensures that “we neither flounder aimlessly nor hold ourselves back from definitive adherence to a meaningful practical course,” it does seem to gesture toward a story.

A recognizable story. A familiar story. (One that includes the line, “Before we met...”) A story I am living, as perhaps you are, too.

Author/s: 
Scott McLemee
Author's email: 
scott.mclemee@insidehighered.com
