Some months ago I started asking friends, colleagues from my teaching days, researchers in higher education, faculty members of various ages and ranks, deans, provosts and presidents, and focus groups of students: “What’s the status of the Big Questions on your campus?” Quite deliberately I avoided defining “Big Questions,” but I gave as examples such questions as “Who am I? Where do I come from? What am I going to do with my life? What are my values? Is there such a thing as evil? What does it mean to be human? How can I understand suffering and death? What obligations do I have to other people? What does it mean to be a citizen in a democracy? What makes work, or a life, meaningful and satisfying?” In other words, I wanted to know what was happening to questions of meaning and value that traditionally have been close to the heart of a liberal education.
Some of what I found puzzled me. People pointed out quite properly that some Big Questions were alive and well in academia today. These included some questions about the origin of the universe, the emergence of life, the nature of consciousness, and others that have been raised by the scientific breakthroughs of the past few decades.
In the humanities and related social sciences the situation was rather different. Some friends reminded me that not all big questions were in eclipse. Over the past generation faculty members have paid great attention to questions of racial, ethnic, gender, and sexual identity. Curricular structures, professional patterns, and the like continue to be transformed by this set of questions. Professors, as well as students, care about these questions, and as a result write, teach, and learn about them with passion.
But there was wide agreement that other big questions, the ones about meaning, value, moral and civic responsibility, were in eclipse. To be sure, some individual faculty members addressed them, and when they did, students responded powerfully. In fact, in a recent Teagle-sponsored meeting on a related topic, participants kept using words such as “hungry,” “thirsty,” and “parched” to describe students’ eagerness to find ways in the curriculum, or outside it, to address these questions. But the old curricular structures that put these questions front and center have over the years often faded or been dismantled, including core curricula, great books programs, surveys “from Plato to NATO,” and general education requirements of various sorts. Only rarely have new structures emerged to replace them.
I am puzzled why. To be sure, these Big Questions are hot potatoes. Sensitivities are high. And faculty members always have the excuse that they have other more pressing things to do. Over two years ago, in an article entitled “Aim Low,” Stanley Fish attacked some of the gurus of higher education (notably, Ernest Boyer) and their insistence that college education should “go beyond the developing of intellectual and technical skills and … mastery of a scholarly domain. It should include the competence to act in the world and the judgment to do so wisely” (Chronicle of Higher Education, May 16, 2003). Fish hasn’t been the only one to point out that calls to “fashion” moral and civic-minded citizens, or to “go beyond” academic competency, assume that students now routinely achieve such mastery of intellectual and scholarly skills. We all know that’s far from the case.
Minimalist approaches -- ones that limit teaching to what another friend calls “sectoral knowledge” -- are alluring. But if you are committed to a liberal education, it’s hard just to aim low and leave it at that. The fact that American university students need to develop basic competencies provides an excuse, not a reason, for avoiding the Big Questions. Students also need to be challenged, provoked, and helped to explore the issues they will inevitably face as citizens and as individuals. Why have we been so reluctant to develop the structures, in the curriculum or beyond it, that provide students with the intellectual tools they need to grapple thoughtfully over the course of a lifetime with these questions?
I see four possible reasons:
1. Faculty members are scared away by the straw man Stanley Fish and others have set up. Despite accusations of liberal bias and “brainwashing” no faculty member I know wants to “mold,” “fashion” or “proselytize” students. But that’s not what exploring the Big Questions is all about. Along with all the paraphernalia college students bring with them these days are Big Questions, often poorly formulated and approached with no clue that anyone in the history of humankind has ever had anything useful to say about any of them. There’s no need to answer those questions for students, or to try to fashion them into noble people or virtuous citizens for the republic. There is, however, every reason to help students develop the vocabularies, the metaphors, the exempla, the historical perspective, the patterns of analysis and argument that let them over time answer them for themselves.
2. A second possible reason is that faculty are put off by the feeling that they are not “experts” in these matters. In a culture that quite properly values professional expertise, forays beyond one’s field of competence are understandably suspect. But one does not have to be a moral philosopher to raise the Big Questions and show some of the ways smart people in the past have struggled with them. I won’t pontificate about other fields, but in my own field -- classics and ancient history -- the Big Questions come bubbling up between the floorboards of any text I have ever taught. I don’t have to be a specialist in philosophy or political science to see that Thucydides has something to say about power and morality, or the Odyssey about being a father and a husband. A classicist’s job, as I see it, is to challenge students to think about what’s implicit in a text, help them make it explicit, and use that understanding to think with.
3. Or is it that engaging with these “Big Questions,” or anything resembling them, is the third rail of a professional career? Senior colleagues don’t encourage it; professional journals don’t publish it; deans don’t reward it; and a half dozen disgruntled students might sink your tenure case with their teaching evaluations. You learn early on in an academic career not to touch the third rail. If this is right, do we need to rewire the whole reward system of academia?
4. Or, is a former student of mine, now teaching at a fine women’s college, correct when she says that on her campus “It tends to be that … those who talk about morality and the big questions come from such an entrenched far right position … that the rest of us … run for cover.”
Some of the above? All of the above? None of the above? You tell me, but let’s not shrug our shoulders and walk away from the topic until we’ve dealt with one more issue: What happens if, for whatever reason, faculty members run for the hills when the Big Questions, including the ones about morality and civic responsibility, arise? Is this not to lose focus on what matters most in an education intended to last for a lifetime? In running away, do we not then leave the field to ideologues and others we cannot trust, and create a vacuum that may be filled by proselytizers, propagandists, or the unspoken but powerful manipulations of consumer culture? Does this not sever one of the roots that has over the centuries kept liberal education alive and flourishing? But, most serious of all, will we at each Commencement say farewell to another class of students knowing that for all they have learned, they are ill equipped to lead an examined life? And if we do, can we claim to be surprised and without responsibility if a few decades later these same graduates abuse the positions of power and trust in our corporate and civic life to which they have ascended?
W. Robert Connor
W. Robert Connor is president of the Teagle Foundation, which is dedicated to strengthening liberal education. More on the foundation's “Big Questions” project may be found on its Web site. This essay is based on remarks Connor recently made at a meeting of the Middle Atlantic Chapters of Phi Beta Kappa, at the University of Pennsylvania.
Normally my social calendar is slightly less crowded than that of Raskolnikov in Crime and Punishment. (He, at least, went out to see the pawnbroker.) But late last month, in an unprecedented burst of gregariousness, I had a couple of memorable visits with scholars who had come to town – small, impromptu get-togethers that were not just lively but, in a way, remarkable.
The first occurred just before Christmas, and it included (besides your feuilletonist reporter) a political scientist, a statistician, and a philosopher. The next gathering, also for lunch, took place a week later, during the convention of the Modern Language Association. Looking around the table, I drew up a quick census. One guest worked on British novels of the Victorian era. Another writes about contemporary postcolonial fiction and poetry. We had two Americanists, but of somewhat different specialist species; besides, one was a tenured professor, while the other is just starting his dissertation. And, finally, there was, once again, a philosopher. (Actually it was the same philosopher, visiting from Singapore and in town for a while.)
If the range of disciplines or specialties was unusual, so too was the degree of conviviality. Most of us had never met in person before -- though you’d never have known that from the flow of the conversation, which never seemed to slow down for very long. Shared interests and familiar arguments (some of them pretty esoteric) kept coming up. So did news about an electronic publishing initiative some of the participants were trying to get started. On at least one occasion at either meal, someone had to pull out a notebook to have someone else jot down an interesting citation to look up later.
In each case, the members of the ad hoc symposium were academic bloggers who had gotten to know one another online. That explained the conversational dynamics -- the sense, which was vivid and unmistakable, of continuing discussions in person that hadn’t started upon arriving at the restaurant, and wouldn’t end once everyone had dispersed.
The whole experience was too easygoing to call impressive, exactly. But later -- contemplating matters back at my hovel, over a slice of black bread and a bowl of cold cabbage soup -- I couldn’t help thinking that something very interesting had taken place. Something having little to do with blogging, as such. Something that runs against the grain of how academic life in the United States has developed over the past two hundred years.
At least that’s my impression from having read Thomas Bender’s book Intellect and Public Life: Essays on the Social History of Academic Intellectuals in the United States, published by Johns Hopkins University Press in 1993. That was back when even knowing how to create a Web page would raise eyebrows in some departments. (Imagine the warnings that Ivan Tribble might have issued, at the time.)
But the specific paper I’m thinking of – reprinted as the first chapter – is even older. It’s called “The Cultures of Intellectual Life: The City and the Professions,” and Bender first presented it as a lecture in 1977. (He is currently professor of history at New York University.)
Although he does not exactly put it this way, Bender’s topic is how scholars learn to say “we.” An intellectual historian, he writes, is engaged in studying “an exceedingly complex interaction between speakers and hearers, writers and readers.” And the framework for that “dynamic interplay” has itself changed over time. Recognizing this is the first step towards understanding that the familiar patterns of cultural life – including those that prevail in academe – aren’t set in stone. (It’s easy to give lip service to this principle. Actually thinking through its implications, though, not so much.)
The history of American intellectual life, as Bender outlines it, involved a transition from civic professionalism (which prevailed in the 18th and early 19th centuries) to disciplinary professionalism (increasingly dominant after about 1850).
“Early American professionals,” he writes, “were essentially community oriented. Entry to the professions was usually through local elite sponsorship, and professionals won public trust within this established social context rather than through certification.” One’s prestige and authority were very strongly linked to a sense of belonging to the educated class of a given city.
Bender gives as an example the career of Samuel Bard, the New York doctor who championed building a hospital to improve the quality of medical instruction available from King’s College, as Columbia University was known back in the 1770s. Bard had studied in Edinburgh and wanted New York to develop institutions of similar caliber; he also took the lead in creating a major library and two learned societies.
“These efforts in civic improvement were the product of the combined energies of the educated and the powerful in the city,” writes Bender, “and they integrated and gave shape to its intellectual life.”
Nor was this phenomenon restricted to major cities in the East. Visiting the United States in the early 1840s, the British geologist Charles Lyell noted that doctors, lawyers, scientists, and merchants with literary interests in Cincinnati “form[ed] a society of a superior kind.” Likewise, William Dean Howells recalled how, at his father’s printing office in a small Ohio town, the educated sort dropped in “to stand with their back to our stove and challenge opinion concerning Holmes and Poe, Irving and Macaulay....”
In short, a great deal of one’s sense of cultural “belonging” was bound up with community institutions -- whether that meant a formally established local society for the advancement of learning, or an ad hoc discussion circle warming its collective backside near a stove.
But a deep structural change was already taking shape. The German model of the research university came into ever greater prominence, especially in the decades following the Civil War. The founding of Johns Hopkins University in 1876 defined the shape of things to come. “The original faculty of philosophy,” notes Bender, “included no Baltimoreans, and no major appointments in the medical school went to members of the local medical community.” William Welch, the first dean of the Johns Hopkins School of Medicine, “identified with his profession in a new way; it was a branch of science -- a discipline -- not a civic role.”
Under the old regime, the doctors, lawyers, scientists, and literary authors of a given city might feel reasonably comfortable in sharing the first-person plural. But life began to change as, in Bender’s words, “people of ideas were inducted, increasingly through the emerging university system, into the restricted worlds of specialized discourse.” If you said “we,” it probably referred to the community of other geologists, poets, or small-claims litigators.
“Knowledge and competence increasingly developed out of the internal dynamics of esoteric disciplines rather than within the context of shared perceptions of public needs,” writes Bender. “This is not to say that professionalized disciplines or the modern service professions that imitated them became socially irresponsible. But their contributions to society began to flow from their own self-definitions rather than from a reciprocal engagement with general public discourse.”
Now, there is a definite note of sadness in Bender’s narrative – as there always tends to be in accounts of the shift from Gemeinschaft to Gesellschaft. Yet it is also clear that the transformation from civic to disciplinary professionalism was necessary.
“The new disciplines offered relatively precise subject matter and procedures,” Bender concedes, “at a time when both were greatly confused. The new professionalism also promised guarantees of competence -- certification -- in an era when criteria of intellectual authority were vague and professional performance was unreliable.”
But in the epilogue to Intellect and Public Life, Bender suggests that the process eventually went too far. “The risk now is precisely the opposite,” he writes. “Academe is threatened by the twin dangers of fossilization and scholasticism (of three types: tedium, high tech, and radical chic). The agenda for the next decade, at least as I see it, ought to be the opening up of the disciplines, the ventilating of professional communities that have come to share too much and that have become too self-referential.”
He wrote that in 1993. We are now more than a decade downstream. I don’t know that anyone else at the lunchtime gatherings last month had Thomas Bender’s analysis in mind. But it has been interesting to think about those meetings with reference to his categories.
The people around the table, each time, didn’t share a civic identity: We weren’t all from the same city, or even from the same country. Nor was it a matter of sharing the same disciplinary background – though no effort was made to be “interdisciplinary” in any very deliberate way, either. At the same time, I should make clear that the conversations were pretty definitely academic: “How long before hundreds of people in literary studies start trying to master set theory, now that Alain Badiou is being translated?” rather than, “Who do you think is going to win American Idol?”
Of course, two casual gatherings for lunch does not a profound cultural shift make. But it was hard not to think something interesting had just transpired: A new sort of collegiality, stretching across both geographic and professional distances, fostered by online communication but not confined to it.
The discussions were fueled by the scholarly interests of the participants. But there was a built-in expectation that you would be willing to explain your references to someone who didn’t share them. And none of it seems at all likely to win the interest (let alone the approval) of academic bureaucrats.
Surely other people must be discovering and creating this sort of thing -- this experience of communitas. Or is that merely a dream?
It is not a matter of turning back the clock -- of undoing the division of labor that has created specialization. That really would be a dream.
But as Bender puts it, cultural life is shaped by “patterns of interaction” that develop over long periods of time. For younger scholars, anyway, the routine give-and-take of online communication (along with the relative ease of linking to documents that support a point or amplify a nuance) may become part of the deep grammar of how they think and argue. And if enough of them become accustomed to discussing their research with people working in other disciplines, who knows what could happen?
“What our contemporary culture wants,” as Bender put it in 1993, “is the combination of theoretical abstraction and historical concreteness, technical precision and civic give-and-take, data and rhetoric.” We aren’t there, of course, or anywhere near it. But sometimes it does seem as if there might yet be grounds for optimism.
Perhaps it’s best to have waited until after Valentine’s Day to think about love. The holiday, after all, has less to do with passion than with sentimentality -- that is, a fixed matrix of sighs and signs, an established and tightly run order of feelings and expressions. That is all pleasant enough. But still, it seems kind of feeble by contrast with the reality of love, which is complicated, and which can mess you up.
The distinction is not semantic. And no, I did not improvise it as some kind of roundabout excuse for forgetting the holiday. (You do not sustain a happy marriage for a dozen years without knowing to pump a few dollars into the sentimental economy in a timely fashion.)
There are times when the usual romantic phrases and symbols prove exactly right for expressing what you mean. The stock of them is, as Roland Barthes puts it in A Lover’s Discourse, like a perpetual calendar. The standard words are clichés, perhaps. But they are meaningful clichés, and nobody has sounded out their overtones with anything like Barthes’s finesse.
Still, the repertoire of romantic discourse has its limits. “The lover speaks in bundles of sentences but does not integrate these sentences on a higher level, into a work,” writes Barthes. “His is a horizontal discourse: no transcendence, no deliverance, no novel (though a great deal of the fictive).”
Well, okay, yes -- that is all true of the early days of a relationship. When you are both horizontal, the discourse between you tends to be, as well. Once you begin to build a life together, however, a certain amount of verticality, if not transcendence, imposes itself; and the nuances of what Barthes called the “lover’s discourse” are not so much lost as transformed. Even the silences are enriched. I try to keep quiet on Sunday while my wife is reading the Times, for example. There can be a kind of intimacy involved in keeping out of the other’s way.
For an account of love in this other sense, I’d recommend Harry Frankfurt’s The Reasons of Love, first published by Princeton University Press in 2004 and released in paperback just this year. The front cover announces that Frankfurt, a professor emeritus of philosophy at Princeton, is “author of the best-selling On Bullshit.”
Like the book that established the Frankfurt brand in the widest cultural marketplace, Reasons is a dry and elegant little treatise – somehow resembling the various “manuals for reflection” from Roman or Renaissance times more than it does most contemporary academic philosophy. It consists of papers originally delivered as the Romanell-Phi Beta Kappa Lectures at Princeton in 2000, then presented again the following year as the Shearman Lectures at University College London.
The ease and accessibility of Frankfurt’s manner are somewhat misleading. There is actually an enormous amount going on within the book’s hundred pages. Despite its unassuming tone, Reasons is a late installment of Frankfurt’s work on questions of moral philosophy in general, and free will in particular. In a footnote, he points out that precision can be risky, citing a comment attributed to Niels Bohr: “He is said to have cautioned that one should never speak more clearly than one can think.” (With plenty of academic books, of course, the author faces no such danger.)
It is the second of his three lectures (titled “On Love, and Its Reasons”) that seems to me to fill in all the gaps left in Barthes’s account. Frankfurt sets his argument up so that it can apply to love of any kind -- the love of one’s family, homeland, or ideological cause, quite as much as one’s romantic partner. Indeed, the latter kind of love tends to have an admixture of messy and “vividly distracting elements” (as he terms them) that can hinder exact definition of the concept. But if the shoe fits....
For all his lucidity, Frankfurt is very alert to the paradoxical nature of love. It is not really the case that we love something because it possesses a certain quality or value. “The truly essential relationship between love and the value of the beloved,” he notes, “goes in the opposite direction. It is not necessarily as a result of recognizing their value and of being captivated by it that we love things. Rather, what we love necessarily acquires value for us because we love it.”
In that respect, Frankfurt’s understanding of love seems to follow the same lines as the thinking of a philosopher one would otherwise never confuse with him -- namely, Slavoj Žižek. For as Žižek once pointed out, if our regard for another person could be strictly reduced to a list of exactly what we found admirable or valuable about them, then the word “love” wouldn’t really apply to what we feel. And even the faults of the beloved are, for the person in love, not valid objections to feeling love. (They may drive you crazy. But the fact that they do is, in its way, a dimension of love.)
So the value of the beloved, as Frankfurt argues, is an effect of love -- not the cause. And when we love someone, we want the best for that person. In other words, we regard the beloved as an end, not as a means. “The lover desires that his beloved flourish and not be harmed,” writes Frankfurt, “and he does not desire this just for the sake of promoting some other goal.... For the lover, the condition of his beloved is important in itself, apart from any bearing it might have on other matters.”
If this sounds a little bit like the categorical imperative .... well, that’s about half right, if just barely. Kant tells us that ethical conduct requires treating other people as ends, not as means. But that imperative is universal -- and as Frankfurt says, the feeling of love is inescapably specific. “The significance to the lover of what he loves,” he writes, “is not generic; it is ineluctably particular.”
This is where things get complicated. We don’t have a lot of say or sway in regard to love. It is not just that love is blind, or that passion is irrational -- sure, that too. But while the capacity to love belongs, as Frankfurt puts it, “to our most intimate and most fundamental nature,” the demands it places on each person are not subject to personal decision making.
“We cannot help it that the direction of our personal reasoning is in fact governed by the specific final ends that our love has defined for us,” writes Frankfurt. “... Whether it would be better for us to love differently is a question that we are unable to take seriously. For us, as a practical matter, the issue cannot effectively arise.”
What makes this philosophically interesting, I take it, is that love blurs the distinction between selfishness and selflessness – between treating the beloved as an end in itself, on the one hand, and the fact that the beloved is my beloved, in particular, on the other.
Quite a bit of ink has been spilled, over time, regarding the question of whether or not it is possible, or desirable, to establish universal principles that could be applied without reference to the local or personal interests of moral agents. “The ambition to provide an exhaustively rational warrant for the way we conduct our lives is misconceived,” says Frankfurt. But that doesn’t mean that the alternative to “the pan-rationalist fantasy” is imagining human beings to be totally capricious, completely self-inventing, or intractably self-absorbed.
Nor does it mean that, like the song says, “All you need is love.” Love simplifies nothing. At the same time, it makes life interesting -- and possible.
“The fact that we cannot help loving,” as Frankfurt puts it, “and that we therefore cannot help being guided by the interests of what we love, helps to ensure that we neither flounder aimlessly nor hold ourselves back from definitive adherence to a meaningful practical course.”
Okay, so Harry Frankfurt is not the most lyrical of philosophers. Still, he has his moments. Roland Barthes wrote that the lover’s discourse consists of stray sentences -- never adding up to a coherent work, let alone anything with a structure, like a novel. But when Frankfurt says that love ensures that “we neither flounder aimlessly nor hold ourselves back from definitive adherence to a meaningful practical course,” it does seem to gesture toward a story.
A recognizable story. A familiar story. (One that includes the line, “Before we met...”) A story I am living, as perhaps you are, too.
During the early decades of the 20th century, a newspaper called The Avery Boomer served the 200 or so citizens of Avery, Iowa. It was irregular in frequency, and in other ways as well. Each issue was written and typeset by one Axel Peterson, a Swedish immigrant who described himself as "lame and crippled up," and who had to make time for his journalistic labors while growing potatoes. A member of the Socialist Party, he had once gained some notoriety within it for proposing that America’s radicals take over Mexico to show how they would run things. Peterson was well-read. He developed a number of interesting and unusual scientific theories -- also, it appears, certain distinctive ideas about punctuation.
Peterson regarded himself, as he put it, as "a Social Scientist ... developing Avery as a Social Experiment Station" through his newspaper. He sought to improve the minds and morals of the townspeople. This was not pure altruism. Several of them owed Peterson money; by reforming the town, he hoped to get it back.
But he also wanted citizens to understand that Darwin's theory of evolution was a continuation of Christ's work. He encouraged readers to accelerate the cause of social progress by constantly asking themselves a simple question: "What would Jesus do?"
I discovered the incomparable Peterson recently while doing research among some obscure pamphlets published around 1925. So it was a jolt to find that staple bit of contemporary evangelical Christian pop-culture -- sometimes reduced to an acronym and printed on bracelets -- in such an unusual context. But no accident, as it turns out: Peterson was a fan of the Rev. Charles M. Sheldon’s novel In His Steps (1896), which is credited as the source of the whole phenomenon, although he cannot have anticipated its mass-marketing a century later.
Like my wild potato-growing Darwinian socialist editor, Sheldon thought that asking WWJD? would have social consequences. It would make the person asking it “identify himself with the great causes of Humanity in some personal way that would call for self-denial and suffering,” as one character in the novel puts it.
Not so coincidentally, Garry Wills takes a skeptical look at WWJD in the opening pages of his new book, What Jesus Meant, published by Viking. He takes it as a variety of spiritual kitsch -- an aspect of the fundamentalist and Republican counterculture, centered around suburban mega-churches offering a premium on individual salvation.
In any case, says Wills, the question is misleading and perhaps dangerous. The gospels aren’t a record of exemplary moments; the actions of Jesus are not meant as a template. “He is a divine mystery walking among men,” writes Wills. “The only way we can directly imitate him is to act as if we were gods ourselves -- yet that is the very thing he forbids.”
Wills, a professor emeritus of history at Northwestern University, was on the way to becoming a Jesuit when he left the seminary, almost 50 years ago, to begin writing for William F. Buckley at The National Review. At the time, that opinion magazine had a very impressive roster of conservative literary talent; its contributors included Joan Didion, Hugh Kenner, John Leonard, and Evelyn Waugh. (The mental firepower there has fallen off a good bit in the meantime.) Wills came to support the civil rights movement and oppose the Vietnam war, which made for a certain amount of tension; he parted ways with Buckley’s journal in the early 1970s. The story is told in his Confessions of a Conservative (1979) – a fascinating memoir, intercalated with what is, for the nonspecialist anyway, an alarmingly close analysis of St. Augustine’s City of God.
Today -- many books and countless articles later -- Wills is usually described as a liberal in both politics and theology, though that characterization might not hold up under scrutiny. His outlook is sui generis, like that of some vastly more learned Axel Peterson.
His short book on Jesus is a case in point. You pick it up expecting (well, I did, anyway) that Wills might be at least somewhat sympathetic to the efforts of the Jesus Seminar to identify the core teachings of the historical Jesus. Over the years, scholars associated with the seminar cut away more and more of the events and sayings attributed to Jesus in the four gospels, arguing that they were additions, superimposed on the record later.
After all this winnowing, there remained a handful of teachings -- turn the other cheek, be a good Samaritan, love your enemies, have faith in God -- that seemed anodyne, if not actually bland. This is Jesus as groovy rabbi, urging everybody to just be nice. Which, under the circumstances, often seems to be the limit of moral ambition available to the liberal imagination.
Wills draws a firm line between his approach and that of the Jesus Seminar. He has no interest in the scholarly quest for “the historical Jesus,” which he calls a variation of fundamentalism: “It believes in the literal sense of the Bible,” writes Wills, “it just reduces the Bible to what it can take as literal quotations from Jesus.” Picking and choosing among the parts of the textual record is anathema to him: “The only Jesus we have,” writes Wills, “is the Jesus of faith. If you reject the faith, there is no reason to trust anything the Gospels say.”
He comes very close to the position put forward by C.S. Lewis, that evangelical-Christian favorite. “A man who was merely a man and said the sort of things Jesus said,” as Lewis put it, “would not be a great moral teacher. He would either be a lunatic -- on a level with the man who says he is a poached egg -- or else he would be the Devil of Hell. You must make your choice. Either this man was, and is, the Son of God; or else a madman or something worse.”
That’s a pretty stark range of alternatives. For now I’ll just dodge the question and run the risk of an eternity in weasel hell. Taking it as a given that Jesus is what the Christian scriptures say he claimed to be -- “the only-begotten Son of the Father” -- Wills somehow never succumbs to the dullest consequence of piety, the idea that Jesus is easy to understand. “What he signified is always more challenging than we expect,” he writes, “more outrageous, more egregious.”
He was, as the expression goes, transgressive. He “preferred the company of the lowly and despised the rich and powerful. He crossed lines of ritual purity to deal with the unclean -- with lepers, the possessed, the insane, with prostitutes and adulterers and collaborators with Rome. (Was he subtly mocking ritual purification when he filled the waters with wine?) He was called a bastard and was rejected by his own brothers and the rest of his family.”
Some of that alienation had come following his encounter with John the Baptist -- as strange a figure as any in ancient literature: “a wild man, raggedly clad in animal skins, who denounces those coming near to him as ‘vipers’ offspring.’” Wills writes that the effect on his family must have been dismaying: “They would have felt what families feel today when their sons or daughters join a ‘cult.’”
What emerges from the gospels, as Wills tells it, is a figure so abject as to embody a kind of permanent challenge to any established authority or code of propriety. (What would Jesus do? Hang out on skid row, that’s what.) His last action on earth is to tell a criminal being executed next to him that they will be together in paradise.
Wills says that he intends his book to be a work of devotion, not of scholarship. But the latter is not lacking. He just keeps it subdued. Irritated by the tendency for renderings of Christian scripture to have an elevated and elegant tone, Wills, a classicist by training, makes his own translations. He conveys the crude vigor of New Testament Greek, which has about as much in common with that of Plato as the prose of Mickey Spillane does with James Joyce. (As Nietzsche once put it: “It was cunning of God to learn Greek when He wished to speak to man, and not to learn it better.”)
Stripping away any trace of King James Version brocade, Wills leaves the reader with Jesus’s words in something close to the rough eloquence of the public square. “I say to all you who can hear me: Love your foes, help those who hate you, praise those who curse you, pray for those who abuse you. To one who punches your cheek, offer the other cheek. To one seizing your cloak, do not refuse the tunic under it. Whoever asks, give to him. Whoever seizes, do not resist. Exactly how you wish to be treated, in that way treat others.... Your great reward will be that you are the children of the Highest One, who also favors ingrates and scoundrels.”
A bit of sarcasm, perhaps, there at the end -- which is something I don’t remember from Sunday school, though admittedly it has been a while. The strangeness of Jesus comes through clearly; it is a message that stands all “normal” values on their head. And it gives added force to another remark by Nietzsche: “In truth, there was only one Christian, and he died on the cross.”
For better and for worse, the American reception of contemporary French thought has often followed a script that frames everything in terms of generational shifts. Lately, that has usually meant baby-boomer narcissism -- as if the youngsters of '68 don't have enough cultural mirrors already. Someone like Bernard-Henri Lévy, the roving playboy philosopher, lends himself to such branding without reserve. Most of his thinking is adequately summed up by a thumbnail biography -- something like, "BHL was a young Maoist radical in 1968, but then he denounced totalitarianism, and started wearing his shirts unbuttoned, and the French left has never recovered."
Nor are American academics altogether immune to such prepackaged blendings of theory and lifestyle. Hey, you -- the Foucauldian with the leather jacket that doesn't fit anymore.... Yeah, well, you're complicit too.
But there are thinkers who don't really follow the standard scripts very well, and Pierre Rosanvallon is one of them. Democracy Past and Future, the selection of his writings just published by Columbia University Press, provides a long overdue introduction to a figure who defies both sound bites and the familiar academic division of labor. Born in 1948, he spent much of the 1970s as a sort of thinker-in-residence for a major trade union, the Confédération Française Démocratique du Travail, for which he organized seminars and conferences seeking to create a non-Marxist "second left" within the Socialist Party. He emerged as a theoretical voice of the autogestion (self-management) movement. His continuing work on the problem of democracy was honored in 2001 when he became a professor at the Collège de France, where Rosanvallon lectures on the field he calls "the philosophical history of the political."
Rosanvallon has written about the welfare state. Still, he isn't really engaged in political science. He closely studies classical works in political philosophy -- but in a way that doesn't quite seem like intellectual history, since he's trying to use the ideas as much as analyze them. He has published a study of the emergence of universal suffrage that draws on social history. Yet his overall project -- that of defining the essence of democracy -- is quite distinct from that of most social historians. At the same time (and making things all the more complicated) he doesn't do the kind of normative political philosophy one now associates with John Rawls or Jürgen Habermas.
Intrigued by a short intellectual autobiography that Rosanvallon presented at a conference a few years ago, I was glad to see the Columbia volume, which offers a thoughtful cross-section of texts from the past three decades. The editor, Samuel Moyn, is an assistant professor of history at Columbia. He answered my questions on Rosanvallon by e-mail.
Q: Rosanvallon is of the same generation as BHL. They sometimes get lumped together. Is that inevitable? Is it misleading?
A: They are really figures of a different caliber and significance, though you are right to suggest that they lived through the same pivotal moment. Even when he first emerged, Bernard-Henri Lévy faced doubts that he mattered, and a suspicion that he had fabricated his own success through media savvy. One famous thinker asked whether the "new philosophy" that BHL championed was either new or philosophy; and Cornelius Castoriadis attacked BHL and others as "diversionists." Yet BHL drew on some of the same figures Rosanvallon did -- Claude Lefort for example -- in formulating his critique of Stalinist totalitarianism. But Lefort, like Castoriadis and Rosanvallon himself, regretted the trivialization that BHL's meteoric rise to prominence involved.
So the issue is what the reduction of the era to the "new philosophy" risks missing. In retrospect, there is a great tragedy in the fact that BHL and others constructed the "antitotalitarian moment" (as that pivotal era in the late 1970s is called) in a way that gave the impression that a sententious "ethics" and moral vigilance were the simple solution to the failures of utopian politics. And of course BHL managed to convince some people -- though chiefly in this country, if the reception of his recent book is any evidence -- that he incarnated the very "French intellectual" whose past excesses he often denounced.
In the process, other visions of the past and future of the left were ignored. The reception was garbled -- but it is always possible to undo old mistakes. I see the philosophy of democracy Rosanvallon is developing as neither specifically French nor of a past era. At the same time, the goal is not to substitute a true philosopher for a false guru. The point is to use foreign thinkers who are challenging to come to grips with homegrown difficulties.
Q: Rosanvallon's work doesn't fit very well into some of the familiar disciplinary grids. One advantage of being at the Collège de France is that you get to name your own field, which he calls "the philosophical history of the political." But where would he belong in terms of the academic terrain here?
A: You're right. It's plausible to see him as a trespasser across the various disciplinary boundaries. If that fact makes his work of potential interest to a great many people -- in philosophy, politics, sociology, and history -- it also means that readers might have to struggle to see that the protocols of their own disciplines may not exhaust all possible ways of studying their questions.
But it is not as if there have not been significant interventions in the past -- from Max Weber for example, or Michel Foucault in living memory -- that were recognized as doing something relevant to lots of different existing inquiries. In fact, that point suggests that it may miss the point to try to locate such figures on disciplinary maps that are ordinarily so useful. If I had to sum up briefly what Rosanvallon is doing as an intellectual project, I would say that the tradition of which he's a part -- which includes his teacher Lefort as well as some colleagues like Marcel Gauchet and others -- is trying to replace Marxism with a convincing alternative social theory.
Most people write about Marxism as a political program, and of course any alternative to it will also have programmatic implications. But Marxism exercised such appeal because it was also an explanatory theory, one that claimed, by fusing the disciplines, to make a chaotic modern history -- and perhaps history as a whole -- intelligible. Its collapse, as Lefort's own teacher Maurice Merleau-Ponty clearly saw, threatened to leave confusion in its wake, unless some alternative to it is available. (Recall Merleau-Ponty's famous proclamation: "Marxism is not a philosophy of history; it is the philosophy of history, and to renounce it is to dig the grave of reason in history.")
Rosanvallon seems to move about the disciplines because, along with others in the same school, he has been trying to put together a total social theory that would integrate all the aspects of experience into a convincing story. They call the new overall framework they propose "the political," and Rosanvallon personally has focused on making sense of democratic modernity in all its facets. Almost no one I know about in the Anglo-American world has taken up so ambitious and forbidding a transdisciplinary task, but it is a highly important project.
Q: As the title of your collection neatly sums up, Rosanvallon's definitive preoccupation is democracy. But he's not just giving two cheers for it, or drawing up calls for more of it. Nor is his approach, so far as I can tell, either descriptive or prescriptive. So what does that leave for a philosopher to do?
A: At the core of his conception of democracy, there is a definitive problem: The new modern sovereign (the "people" who now rule) is impossible to identify or locate with any assurance. Democracy is undoubtedly a liberatory event -- a happy tale of the death of kings. But it must also face the sadly intractable problem of what it means to replace them.
Of course, the history of political theory contains many proposals for discovering the general will. Yet empirical political scientists have long insisted that "the people" do not preexist the procedures chosen for knowing their will. In other words, "the people" is not a naturally occurring object. Rosanvallon's work is, in one way or another, always about this central modern paradox: If, as the U.S. Constitution for instance says, "We the people" are now in charge, it is nevertheless true that we the people have never existed together in one place, living at one time, speaking with one voice. Who, then, is to finally say who "we" are?
The point may seem either abstract or trivial. But the power of Rosanvallon's work comes from his documentation of the ways -- sometimes blatant and sometimes subtle -- that much of the course and many of the dilemmas of modern history can be read through the lens of this paradox. For example, the large options in politics can also be understood as rival answers to the impossible quandary or permanent enigma of the new ruler's identity. Individual politicians claim special access to the popular will either because they might somehow channel what everyone wants or because they think that a rational elite possesses ways of knowing what the elusive sovereign would or should want. Democracy has also been the story, of course, of competing interpretations of what processes or devices are most likely to lead to results approximating the sovereign will.
Recently, Rosanvallon has begun to add to this central story by suggesting that there have always been -- and increasingly now are -- lots of ways outside electoral representation that the people can manifest their will, during the same era that the very idea that there exists a coherent people with a single will has entered a profound crisis.
One of the more potent implications of Rosanvallon's premise that there is no right answer to the question of the people's identity is that political study has to be conceptual but also historical. Basic concepts like the people might suggest a range of possible ways for the sovereign will to be interpreted, but only historical study can uncover the rich variety of actual responses to the difficulty.
The point, Rosanvallon thinks, is especially relevant to political theorists, who often believe they can, simply by thinking hard about what democracy must mean, finally emerge with its true model, whether based on a hypothetical contract, an ideal of deliberation, or something else. But the premise also means that democracy's most basic question is not going to go away, even if there are better and worse responses.
Q: Now to consider the relationship between Rosanvallon's work and political reality "on the ground" right now. Let's start with a domestic topic: the debate over immigration. Or more accurately, the debate over the status of people who are now part of the U.S. economy, but are effectively outside the polity. I'm not asking "what would Rosanvallon do?" here, but rather wondering: Does his work shed any light on the situation? What kinds of questions or points would Rosanvallonists (assuming there are any) be likely to raise in the discussion?
A: It's fair to ask how such an approach might help in analyzing contemporary problems. But his approach always insists on restoring the burning issues of the day to a long historical perspective, and on relating them to democracy's foundational difficulties. Without pretending to guess what Rosanvallon might say about America's recent debate, I might offer a couple of suggestions about how his analysis might begin.
The controversy over immigrants is so passionate, this approach might begin by arguing, not simply because of economic and logistical concerns but also because it reopens (though it was never closed!) the question of the identity of the people in a democracy. The challenge immigrants pose, after all, is not one of inclusion simply in a cultural sense, as Samuel Huntington recently contended, but also and more deeply in a conceptual sense.
In a fascinating chapter of his longest work, on the history of suffrage, Rosanvallon takes up the history of French colonialism, including its immigrant aftermath. There he connects different historical experiences of immigrant inclusion to the conceptual question of what the criteria for exclusion are, arguing that if democracies do not come to a clear resolution about who is inside and outside their polity, they will vacillate between two unsatisfactory syndromes. One is the "liberal" response of taking mere presence on the ground as a proxy for citizenship, falsely converting a political problem into one of future social integration. The other is the "conservative" response of conceptualizing exclusion, having failed to resolve its meaning politically, in the false terms of cultural, religious, or even racial heterogeneity. Both responses avoid the real issue of the political boundaries of the people.
But Rosanvallon's more recent work allows for another way of looking at the immigration debate. In a new book coming out in French in the fall entitled "Counterdemocracy," whose findings are sketched in a preliminary and summary fashion in the fascinating postscript to the English-language collection, Rosanvallon tries to understand the proliferation of ways that popular expression occurs outside the classical parliamentary conception of representation. There, he notes that immigration is one of several issues around which historically "the people" have manifested their search for extraparliamentary voice.
For Rosanvallon, the point here is not so much to condemn populist backlash, as if it would help much simply to decry the breakdown of congressional lawmaking under pressure. Rather, one might have to begin by contemplating the historical emergence of a new form of democracy -- what he calls unpolitical democracy -- that often crystallizes today around such a hot-button topic as the status of immigrants. This reframing doesn't solve the problem but might help see that its details turn out to be implicated in a general transformation of how democracy works.
Q: OK, now on to foreign policy. In some circles, the invasion of Iraq was justified as antitotalitarianism in action, and as the first stage of a process of building democracy. (Such are the beauty and inspiration of high ideals.) Does Rosanvallon's work lend itself to support for "regime change" via military means? Has he written anything about "nation building"?
A: This is a very important question. I write in my introduction to the collection about the contemporary uses of antitotalitarianism, and I do so mainly to criticize the recent drift in uses of that concept.
Of course, when the critique of totalitarianism activated a generation, it was the Soviet Union above all that drew their fire. But their critique was always understood to have its most salient implications for the imagination of reform at home, and especially for the renewal of the left. This is what has changed recently, in works of those "liberal hawks," like Peter Beinart and Paul Berman, who made themselves apologists for the invasion of Iraq in the name of antitotalitarian values. Not only did they eviscerate the theoretical substance on which the earlier critique of totalitarianism drew -- from the work of philosophers like Hannah Arendt and Claude Lefort among others -- but they wholly externalized the totalitarian threat so that their critique of it no longer had any connection to a democratic program. It became purely a rhetoric for the overthrow of enemies rather than a program for the creation or reform of democracies. In the updated approach, what democracy is does not count as a problem.
It is clear that this ideological development, with all of its real-world consequences, has spelled the end of the antitotalitarian coalition that came together across borders, uniting the European left (Eastern and Western) with American liberalism, thirty years ago. That the attempt to update and externalize that project had failed became obvious even before the Iraq adventure came to grief -- the project garnered too few allies internationally.
Now it is perfectly true that the dissolution of this consensus leaves open the problem of how democrats should think about foreign policy, once spreading it evangelistically has been unmasked as delusional or imperialistic. A few passages in the collection suggest that Rosanvallon thinks the way to democratize the world is through democratization of existing democracies -- the reinvigoration of troubled democracies is prior to the project of their externalization and duplication. Clearly this response will not satisfy anyone who believes that the main problem in the world is democracy's failure to take root everywhere, rather than its profound difficulties where it already is. But clarifying the history and present of democracy inside is of undoubted relevance to its future outside.
Q: There are some very striking passages in the book that discuss the seeming eclipse of the political now. More is involved than the withdrawal from civic participation into a privatized existence. (At the same time, that's certainly part of it.) Does Rosanvallon provide an account of how this hollowing-out of democracy has come to pass? Can it be reversed? And would its reversal necessarily be a good thing?
A: One of the most typical responses to the apparent rise of political apathy in recent decades has been nostalgia for some prior society -- classical republics or early America are often cited -- that is supposed to have featured robust civic engagement. The fashion of "republicanism" in political theory, from Arendt to Michael Sandel or Quentin Skinner, is a good example. But Rosanvallon observes that the deep explanation for what is happening is a collapse of the model of democracy based on a powerful will.
The suggestion here is that the will of the people is not simply hard to locate or identify; its very existence as the foundation of democratic politics has become hard to credit anymore. The challenge is to respond by taking this transformation as the starting point of the analysis. And there appears to be no return to what has been lost.
But in his new work, anticipated in the postscript, Rosanvallon shows that the diagnosis may be faulty anyway. What is really happening, he suggests, is not apathy towards or retreat from politics in a simple sense, but the rise of new forms of democracy -- or counterdemocracy -- outside the familiar model of participation and involvement. New forms seeking expression have multiplied, through an explosion of devices, even if they may seem an affront to politics as it has ordinarily been conceptualized.
Rosanvallon's current theory is devoted to the project of putting the multiplication of representative mechanisms -- ones that do not fit on existing diagrams of power -- into one picture. But the goal, he says, is not just to make sense of them but also to find a way for analysis to lead to reform. As one of Rosanvallon's countrymen and predecessors, Alexis de Tocqueville, might have put it: Democracy still requires a new political science, one that can take it by the hand and help to sanctify its striving.
For further reading: Professor Moyn is co-author (with Andrew Jainchill of the University of California at Berkeley) of an extensive analysis of the sources and inner tensions of Rosanvallon's thought on democracy, available online. And in an essay appearing on the Open Democracy Web site in 2004, Rosanvallon reflected on globalization, terrorism, and the war in Iraq.
The table sits at the front of the bookshop, near the door. That way it will get maximum exposure as people come and go. "If you enjoyed The Da Vinci Code," the sign over it says, "you might also like..." The store is part of a national chain, meaning there are hundreds of these tables around the country. Thousands, even.
And yet the display, however eye-catching, is by no means a triumph of mass-marketing genius. The bookseller is denying itself a chance to appeal to an enormous pool of consumer dollars. I'm referring to all the people who haven’t read Dan Brown’s globe-bestriding best-seller -- and have no intention of seeing the new movie -- yet are already sick to death of the whole phenomenon.
"If you never want to hear about The Da Vinci Code again," the sign could say, "you might like...."
The book’s historical thesis (if that is the word for it) has become the cultural equivalent of e-mail spam. You just can’t keep it out. The premise sounds more preposterous than thrilling: Leonardo da Vinci was the head of a secret society (with connections to the Knights Templar) that guarded the hidden knowledge that Mary Magdalene fled Jerusalem, carrying Jesus’s child, and settled in France....
All of this is packaged as a contribution to the revival of feminine spirituality. Which is, in itself, enough to make the jaw drop, at least for anyone with a clue about the actual roots of this little bit of esoteric hokum.
Fantasies about the divine bloodlines of certain aristocratic families are a staple of the extreme right wing in Europe. (The adherents usually also possess "secret knowledge" about Jewish bankers.) And anyone contending that the Knights Templar were a major factor behind the scenes of world history will turn out to be a simpleton, a lunatic, or some blend of the two -- unless, of course, it’s Umberto Eco goofing on the whole thing, as he did in Foucault’s Pendulum.
It's not that Dan Brown is writing crypto-fascist novels. He just has really bad taste in crackpot theories. (Unlike Eco, who has good taste in crackpot theories.)
And Leonardo doesn’t need the publicity -- whereas my man Athanasius Kircher, the brilliant and altogether improbable Jesuit polymath, does.
Everybody has heard of the Italian painter and inventor. As universal geniuses go, he is definitely on the A list. Yet we Kircher enthusiasts feel duty-bound to point out that Leonardo started a lot more projects than he ever finished -- and that some of his bright ideas wouldn’t have worked.
Sure, Leonardo studied birds in order to design a flying machine. But if you built it and jumped off the side of a mountain, they’d be scraping you off the bottom of the valley. Of course very few people could have painted "Mona Lisa." But hell, anybody can come up with a device permitting you to plunge to your death while waving your arms.
Why should he get all the press, while Athanasius Kircher remains in relative obscurity? He has just as much claim to the title of universal genius. Born in Germany in 1602, he was the son of a gentleman-scholar with an impressive library (most of it destroyed during the Thirty Years’ War). By the time Kircher entered the Jesuit order at the age of 16, he had already become as broadly informed as someone twice his age.
He joined the faculty of the Collegio Romano in 1634 as Professor of Mathematics. But by no means is that title a good indicator of his range of scholarly accomplishments. He studied everything. Thanks to his access to the network of Jesuit scholars, Kircher kept in touch with the latest discoveries taking place in the most far-flung parts of the world. And a constant stream of learned visitors to Rome came to see his museum at the Collegio Romano, where Kircher exhibited curious items such as fossils and stuffed wildlife alongside his own inventions.
Leonardo kept most of his more interesting thoughts hidden in notebooks. By contrast, Kircher was all about voluminous publication. His work appeared in dozens of lavishly illustrated folios, the publication of which was often funded by wealthy and powerful figures. The word "generalist" is much too feeble for someone like Kircher. He prepared dictionaries, studied the effects of earthquakes, theorized about musical acoustics, and engineered various robot-like devices that startled tourists with their lifelike motions.
He was also enthusiastic about the microscope. In a book published in 1646, Kircher mentioned having discovered “wonders....in the verminous blood of those sick with fever, and numberless other facts not known or understood by a single physician.” He speculated that very small animals “with a vast number and variety of motions, colors, and almost invisible parts” might float up from “the putrid vapors” emitted by sick people or corpses.
There has long been a scholarly debate over whether or not Kircher deserves recognition as the inventor of the germ theory of disease. True, he seems not to have had a very clear notion of what was involved in experimentation (then a new idea). And he threw off his idea about the very tiny animals almost in passing, rather than developing it in a rigorous manner. But then again, Kircher was a busy guy. He managed to stay on the good side of three popes, while some of his colleagues in the sciences had trouble keeping the good will of even one.

Among Kircher’s passions was the study of ancient Egypt. As a young man, he read an account of the hieroglyphics that presented the idea that they were decorative inscriptions -- the equivalent of stone wallpaper, perhaps. (After all, they looked like tiny pictures.) This struck him as unlikely. Kircher suspected the hieroglyphics were actually a language of some kind, setting himself the task of figuring out how to read it.
And he made great progress in this project -- albeit in the wrong direction. He decided that the symbols were somehow related to the writing system of the Chinese, which he did know how to read, more or less. (Drawing on correspondence from his missionary colleagues abroad, Kircher prepared the first book on Chinese vocabulary published in Europe.)
Only in the 19th century was Jean-François Champollion able to solve the mystery, thanks to the discovery of the Rosetta Stone. But the French scholar gave the old Jesuit his due for his pioneering (if misguided) work. In presenting his speculations, Kircher had also provided reliable transcriptions of the hieroglyphic texts. They were valuable even if his guesses about their meaning were off.
Always at the back of Kircher’s mind, I suspect, was the story from Genesis about the Tower of Babel. (It was the subject of one of his books.) As a good Jesuit, he was doubtless confident of belonging to the one true faith -- but at the same time, he noticed parallels between the Bible and religious stories from around the world. There were various trinities of deities, for example. As a gifted philologist, he noticed the similarities among different languages.
So it stood to reason that the seeming multiplicity of cultures was actually rather superficial. At most, it reflected the confusion of tongues following God’s expressed displeasure about that big architectural project. Deep down, even the pagan and barbarous peoples of the world had some rough approximation of the true faith.
That sounds ecumenical and cosmopolitan enough. It was also something like a blueprint for conquest: Missionaries would presumably use this basic similarity as a way to "correct" the beliefs of those they were proselytizing.
But I suspect there is another level of meaning to his musings. Kircher’s research pointed to the fundamental unity of the world. The various scholarly disciplines were, in effect, so many fragments of the Tower of Babel. He was trying to piece them together. (A risky venture, given the precedent.)
He was not content merely to speculate. Kircher tried to make a practical application of his theories by creating a "universal polygraphy" -- that is, a system of writing that would permit communication across linguistic barriers. It wasn’t an artificial language like Esperanto, exactly, but rather something like very low-tech translation software. It would allow you to break a sentence in one language down into units, which were to be represented by symbols. Then someone who knew a different language could decode the message.
Both parties needed access to the key -- basically, a set of tables giving the meaning of Kircher’s "polygraphic" symbols. And the technique would place a premium on simple, clear expression. In any case, it would certainly make international communication faster and easier.
Unless (that is) the key were kept secret. Here, Kircher seems to have had a brilliant afterthought. The same tool allowing for speedy, transparent exchange could (with some minor adjustments) also be used to conceal the meaning of a message from prying eyes. He took this insight one step further -- working out a technique for embedding a secret message in what might otherwise look like a banal letter. Only the recipient -- provided he knew how to crack the code -- would be able to extract its hidden meaning.
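In modern terms, Kircher's scheme amounts to a shared substitution table: both parties hold the key, encoding replaces known concept-words with symbols, and decoding reverses the lookup. The sketch below is purely illustrative -- the table, symbols, and message are invented, and Kircher's actual tables were vastly larger and organized by concept rather than by word:

```python
# Toy sketch of a "universal polygraphy": sender and recipient share a key
# (a table mapping concept-words to symbols). Everything here is invented
# for illustration; it is not drawn from Kircher's actual tables.
KEY = {
    "greetings": "†1",
    "friend": "†2",
    "send": "†3",
    "book": "†4",
}
# Reverse lookup table, built from the same shared key.
DECODE = {symbol: concept for concept, symbol in KEY.items()}

def encode(words):
    """Replace each known concept-word with its symbol; pass others through."""
    return " ".join(KEY.get(w, w) for w in words)

def decode(text):
    """Reverse the substitution using the shared key."""
    return [DECODE.get(token, token) for token in text.split()]

message = ["greetings", "friend", "send", "book"]
ciphertext = encode(message)
assert decode(ciphertext) == message
```

As long as the key circulates openly, this is a translation aid; keep the key secret, as Kircher realized, and the very same table becomes a cipher.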
Even before his death in 1680, there were those who mocked Athanasius Kircher for his vanity, for his gullibility (he practiced alchemy), and for the tendency of his books to wander around their subjects in a rather garrulous and self-indulgent manner. Nor did the passing of time and fashion treat him well. By the 18th century, scholars knew that the path to exact knowledge involved specialization. The wild and woolly encyclopedism of Athanasius Kircher was definitely a thing of the past.
Some of the disdain may have been envy. Kircher was the embodiment of untamed curiosity, and it is pretty obvious that he was having a very good time. Even granting detractors all their points, it is hard not to be somewhat in awe of the man. Someone who could invent microbiology, multiculturalism, and encryption technology (and in the 17th century no less) at least deserves to be on a T-shirt.
But no! All anybody wants to talk about is da Vinci. (Or rather, a bogus story about him that is the hermeneutic equivalent of putting "The Last Supper" on black velvet.)
Well, if you can’t beat 'em.... Maybe it's time for a trashy historical thriller that will give Kircher his due. So here goes:
After reading this column, Tom Hanks rushes off to the Vatican archives and finds proof that Kircher used his "universal polygraphy" to embed secret messages in the artwork for his gorgeously illustrated books.
But that’s not all. By cracking the code, he finds a cure for the avian flu. Kircher had recognized this as a long-term menace, based on a comment by a Jesuit missionary. (We learn all this in flashbacks. I see Philip Seymour Hoffman as Athanasius Kircher.)
Well, it's a start, anyway. And fair warning to Dan Brown. Help yourself to this plot and I will see you in court. It might be a terrible idea, but clearly that hasn't stopped you before.
Tomorrow night at a church in London, there will be a gathering of several hundred people to celebrate the launch of "The Euston Manifesto" -- a short document in which one sector of the British and American left declares itself to be in favor of pluralist and secular democracy, and against blowing people up for the glory of Allah.
The Eustonians also support open-source software. (I have read the document a few times now but am still not sure how that one got in there. It seems like an afterthought.)
More to the point, the Eustonians promise not to ask too many questions -- nor any really embarrassing ones -- about how we got into Iraq. The important thing, now, is that it all end well. Which is to say, that the occupation help build a new Iraq: a place of secular, pluralist democracy, where people do not blow each other up for the glory of Allah.
Suppose that a civic-minded person -- a secular humanist, let's say, and one fond of Linux -- takes a closer look at the manifesto. Such a reader will expect the document to discuss the question of means and ends. This might be addressed on the ethical plane, at some level of abstraction. Or it might be handled with a wonky attention to policy detail. In any case, the presumed reader (who is nothing if not well-meaning) will certainly want to know how Eustonian principles are to be realized in the real world. In the case of Iraq, for example, there is the problem of getting from the absolutely disastrous status quo to the brilliant future the manifesto hails.
Many of the signatories of the manifesto are, or until recently were, some variety or other of Marxist. Its main author, for example, is Norman Geras, a professor emeritus of government at the University of Manchester. His work includes Literature of Revolution, a volume of astute essays on Leon Trotsky and Rosa Luxemburg. (Full disclosure: Geras and I once belonged to the same worldwide revolutionary socialist organization, the United Secretariat of the Fourth International, and probably both choke up a little when singing “The Red Flag”).
Surely, then, the Euston Manifesto will bear at least some resemblance to the one written by a certain unemployed German doctor of philosophy in 1848? That is, it can be expected to provide a long-term strategic conception of how the world reached its current situation (“The history of all hitherto existing society is a history of class struggles”). And it will identify the forces in society that have emerged to transform it (“Workers of the world unite!”). And from this rigorous conceptual structure, the document can then deduce some appropriate shorter-term tactics. In The Communist Manifesto, for example, Marx and Engels pointed to universal suffrage and a progressive income tax as mighty strides toward the destruction of capitalism.
OK, so the proposals might not work out as planned.... Hindsight is 20-20. But a manifesto -- to be worth anyone’s time, let alone signature -- will, of course, be concrete. At the event in London tomorrow night, the comrades will rally. Surely they would never settle for broad and bland appeals to high ideals, rendered in language slightly less inspiring than the Cub Scout oath?
Well, judge for yourself. “The Euston Manifesto” was actually unveiled in April, when it was first published online. It has an official Web site. The inspiration for it had come during a meeting at a pub near the Euston stop on the London Underground. (Hence the name.) The document has been debated and denounced at great and redundant length in the left-wing blogosphere. So the fact that the event this week in London is being described by the Eustonians as a “launch” is puzzling, at least at first. But when you realize what a rhetorical drubbing the manifesto has taken, the need for a public gathering is easier to understand. The Eustonians want to show that their heads are bloody but unbowed, etc.
The most cogent arguments against the manifesto have already been made. In April, Marc Mulholland, a historian who is a fellow at Oxford University, presented a series of pointed criticisms at his blog that seemed to take the Eustonian principles more seriously than the manifesto itself did. “Why should we expect pluralist states to foster the spread of democratic government?” he asked. “How can we audit their contribution to this universal ideal? What mechanisms ensure the coincidence of state real politick and liberal internationalism?”
And D.D. Guttenplan -- the London correspondent for “The Nation” and producer of a documentary called Edward Said: The Last Interview -- weighed in with an article in The Guardian accusing the Eustonians of, in effect, staging a historical reenactment of battle scenes from the Cold War.
In passing, Guttenplan wrote of the manifesto that “every word in it is a lie” – a bit of hyperbole with historical overtones probably lost on his British readers. (In a memorable denunciation -- and one that prompted a lawsuit -- of sometime Communist sympathizer Lillian Hellman’s work, Mary McCarthy said: “Every word she writes is a lie, including ‘and’ and ‘the.’”) Guttenplan tells me that he now considers his remark “a bit intemperate” yet still calls the manifesto “that bastard child of senescent sociology and the laptop bombardiers.”
Mulholland performed a kind of immanent critique of the Eustonians’ liberal-humanitarian proclamations. That is, he held their rhetoric up against their concepts -- and found the manifesto wanting no matter how you looked at it.
For Guttenplan, the manifesto makes more sense as a case of political bait-and-switch. “The political glue holding these folks together,” he told me, “was a kind of Zionism that dare not speak its name, in which anti-Semitism was the only racism worth getting excited about, and opposition to any kind of practical pressure on Israel or its UK supporters/defenders the only program that got these folks up from their laptops. Personally I find that both sneaky and, as my late mother would say, bad for the Jews.” (Complex irony alert! Guttenplan himself is Jewish.)
The liberal-internationalist case for military intervention in Iraq has recently been hashed out at length -- and in all of its disconcertingly belated moral passion and geopolitical irrelevance -- by the contributors to A Matter of Principle: Humanitarian Arguments for War in Iraq, published last year by the University of California Press. The editor of that volume, Thomas Cushman, is a professor of sociology at Wellesley College, and a member of the editorial board of the online journal Democratiya -- as is Norman Geras, who drafted the Euston Manifesto.
Many of the contributions to the book and the journal are intelligently argued. They are worth the attention even -- and perhaps especially -- of someone opposed to the war. For a whole wing of the left, of course, to admit that one’s opponents might be capable of arguments (rather than rationalizations) is already a sign of apostasy. But I’ll take my chances. After all, you can listen to Noam Chomsky blame every problem in the world on American corporations only so many times. It’s good to stretch your mental legs every so often, and go wandering off to see how people think on the other side of the barricades.
That said, reading the Euston Manifesto has proven remarkably unrewarding -- even downright irritating. It is not a matter of profound disagreements. (I am, broadly speaking, in favor of pluralist and secular democracy, and against blowing people up for the glory of Allah.) But the Eustonians seem to be issuing blank moral checks for whatever excellent adventures George Bush and Tony Blair decide to undertake.
They call for supporting the reconstruction of Iraq “rather than picking through the rubble of the arguments over intervention.” The systematic campaign of disinformation and phony diplomacy engineered over the course of two years preceding the invasion, then, is to be forgotten. It’s hard to imagine a more explicit call for intellectual irresponsibility. Or, for that matter, a less adequate metaphorical image. Anyone upset by “the rubble of the arguments over intervention” is definitely facing the wrong crater.
The Eustonians seem also perfectly indifferent to the cumulative damage being done to the very fiber of democracy itself. This summer’s issue of Bookforum contains a few poems by Guantanamo Bay detainees -- part of a much larger body confiscated by the military. As a lawyer for the detainees notes, a poem containing the line “Forgive me, my dear wife” was immediately classified as an attempt to communicate with the outside.
It is hard to imagine that this sort of thing really advances the Global War on Terror, or whatever we’re calling it now. But it is not without consequences. It destroys what it pretends to protect.
As I was musing over all of this, a friend pointed out a conspicuous absence from the list of signatories to the manifesto: Todd Gitlin, a professor of sociology and journalism at Columbia University. His book The Intellectuals and the Flag, published earlier this year by Columbia University Press, defends the idea of left-wing American patriotism with a frank interest “in the necessary task of defeating the jihadist enemy.”
This would seem to put him in the Eustonian camp, yet he did not endorse the manifesto. Why not? I contacted him by e-mail to ask. “I recognize a shoddy piece of intellectual patchwork when I see one,” Gitlin responded.
He cites a passage referring to the overthrow of Saddam Hussein as “a liberation of the Iraqi people.” A fine thing, to be sure. The sight of a humiliated dictator is good for the soul. “But the resulting carnage is scarcely worthy of the term ‘liberation,’” Gitlin told me. “I'm leery of the euphemism.”
Humanitarian interventionism needs an element of realist calculation. “The duty of ‘intervention and rescue’ when a state commits appalling atrocities,” he continued, “must be tempered by a hard-headed assessment of what is attainable and what are the reasonably foreseeable results of intervention. The document is cavalier about the ease of riding to the rescue. So while I support the lion's and lioness's share of the document's principles, I find it disturbingly, well, utopian. It lacks a sense of the tragic. I have not foregone the forced innocence of the anti-American left only to sign up with another variety of rigid, forced innocence.”
But in the final analysis, there was something else bothersome about the manifesto -- something I couldn’t quite put a finger on, for a while. A vague dissatisfaction, a feeling of blurry inconsequentiality....
Then it suddenly came into focus: The manifesto did not seem like the product of a real movement, nor the founding document of a new organization – nor anything, really, but a proclamation of dissatisfaction by people in an Internet-based transatlantic social network.
I dropped Norman Geras a line, asking about the virtuality of the phenomenon. Aren’t the Eustonians doomed to a kind of perpetual and constitutive blogginess?
“It's true that the manifesto is not seen by us as the rallying point for a particular organization,” Geras wrote back. “But it is seen as a rallying point nonetheless - as a focus for debate on the liberal-left, and for initiatives that might follow from that. The focus for debate part has already happened: there's been an enormous response to the manifesto and not only on the internet, but with significant press coverage as well. The venue for the launch meeting had to be changed because we ran out of tickets so fast for the original venue. So this isn't just a ‘virtual’ affair.”
The question from Lenin’s pamphlet comes up: What is to be done? “I'm not going to try to predict where or how far it will go,” says Geras. “One step at a time. But we already have more than 1,500 signatories and that means a lot of people in touch with us and interested in what the manifesto is saying. After the launch, we'll see what we want to do next in the way of forums, conferences, campaigns.”
Perhaps frustration with the document is misplaced? Something better might yet emerge -- once well-meaning people see the limits of the good intentions they have endorsed. You never know. But for now, with only the text to go by, it is hard to shake a suspicion that the Euston Manifesto owes less to Marx than to MySpace.
The hurried patron spying Why Truth Matters (Continuum) on the new arrivals shelf of a library may assume that it is yet another denunciation of the Republicans. New books defending the “reality-based community” are already thick on the ground – and the publishers' fall catalogs swarm with fresh contributions to the cause. Last month, at BookExpo America (the annual trade show for the publishing industry), I saw an especially economical new contribution to the genre: a volume attributed to G.W. Bush under the title Whoops, I Was Wrong. The pages were completely blank.
Such books change nobody’s mind, of course. The market for them is purely a function of how much enthusiasm the choir feels for the sermon being addressed to it. As it turns out, Why Truth Matters has nothing to do with the G.O.P., and everything to do with what is sometimes called the postmodern academic left -– home to cross-dressing Nietzschean dupes of the Sokal hoax.
Or so one gathers from the muttering of various shell-shocked Culture Warriors. Like screeds against the neocons, the diatribes contra pomo now tend to be light on data, and heavy on the indignation. (The choir does love indignation.)
Fortunately, Why Truth Matters, by Ophelia Benson and Jeremy Stangroom, is something different. As polemics go, it is short and adequately pugnacious. Yet the authors do not paint their target with too broad a brush. At heart, they are old-fashioned logical empiricists -– or, perhaps, followers of Samuel Johnson, who, upon hearing of Bishop Berkeley’s contention that the objective world does not exist, refuted the argument by kicking a rock. Still, Benson and Stangroom do recognize that there are numerous varieties of contemporary suspicion regarding the concept of truth.
They bend over backwards in search of every plausible good intention behind postmodern epistemic skepticism. And then they kick the rock.
The authors run a Web site of news and commentary, Butterflies and Wheels. And both are editors of The Philosophers’ Magazine, a quarterly journal. In the spirit of full disclosure, it bears mentioning that I write a column for the latter publication.
A fact in no way disposing me, however, to overlook a striking gap in the book’s otherwise excellent index: the lack of any entry for “truth, definition of.” When I contacted Ophelia Benson recently for an e-mail interview, that seemed like the place to start.
Q: What is truth? Is there more than one kind? If not, why not?
A: I'll just refer you to jesting Pilate, and let it go at that!
Q: Well, the gripe about jesting Pilate is that "he would not stay for the answer." Whereas I am actually going to stick around and press the point. Your book pays tribute to the human capacity for finding truth, and warns against cultural forces tending to undermine or destroy it. So what's the bottom-line criterion you have in mind for defining truth?
A: It all depends, as pedants always say, on what you mean by "truth." Sure, in a sense, there is more than one kind. There is emotional truth, for instance, which is ungainsayable and rather pointless to dispute. It is also possible and not necessarily silly to talk about somewhat fuzzy-bordered kinds such as literary truth, aesthetic truth, the truth of experience, and the like.
The kind of truth we are concerned with in the book is the fairly workaday, empirical variety that is (or should be) the goal of academic disciplines such as history and the sciences. We are concerned with pretty routine sorts of factual claim that can be either supported or rejected on the basis of evidence, and with arguments that cast doubt on that very way of proceeding.
Q: Is anybody really making a serious dent in this notion of truth? You hear all the time that the universities are full of postmodernists who think that scientific knowledge is just a Eurocentric fad, and therefore people could flap their wings and fly to the moon if they wanted. And yet you never actually see anyone doing that. At least I haven't, and I go to MLA every year.
A: Of course, there is no shortage of wild claims about what people get up to in universities. Such things make good column fodder, good talk show fodder, good gossip fodder, not to mention another round of the ever-popular game of "Who's Most Anti-Intellectual?" But there are people making some serious dents in this notion of truth and of scientific knowledge, yes. That's essentially the subject matter of Why Truth Matters: the specifics of what claims are being made, in what disciplines, using what arguments.
There are people who argue seriously that, as Sandra Harding puts it, the idea that scientific "knowers" are in principle interchangeable means that "white, Western, economically privileged, heterosexual men can produce knowledge at least as good as anyone else can" and that this appears to be an antidemocratic consequence. Harding's books are still, despite much criticism, widely assigned. There are social constructionists in sociology and philosophy of science who view social context as fully explanatory of the formation of scientific belief and knowledge, while excluding the role of evidence.
There are Afrocentric historians who make factual claims that contradict existing historical evidence, such as the claim that Aristotle stole his philosophy from the library at Alexandria when, as Mary Lefkowitz points out, that library was not built until after Aristotle's death. Lefkowitz was shocked to get no support from her colleagues when she pointed out factual errors of this kind, and even more shocked when the dean of her college (Wellesley) told her that "each of us had a different but equally valid view of history." And so on (there's a lot of the "so on" in the book).
That sort of thing of course filters out into the rest of the world, not surprisingly: People go to university and emerge having picked up the kind of thought Lefkowitz's dean had picked up; such thoughts get into newspaper columns and magazine articles; and the rest of us munch them down with our cornflakes.
We don't quite think we could fly to the moon if we tried hard enough, but we may well think there's something a bit sinister and elitist about scientific knowledge, we may well think that oppressed and marginalized groups should be allowed their own "equally valid" view of history by way of compensation, we may well think "there's no such thing as truth, really."
Q: Your book describes and responds to a considerable range of forms of thought: old-fashioned Pyrrhonian skepticism, "standpoint" epistemology, sociology of knowledge, neopragmatism, pomo, etc. Presumably not all questions about the possibility of a bedrock notion of truth are created equal. What kinds have a strong claim to serious consideration?
A: Actually, much of the range of thought we look at doesn't necessarily ask meta-questions about truth. A lot of it is more like second level or borrowed skepticism or relativism about truth, not argued so much as referenced, or simply assumed; waved at rather than defended. The truth relativism is not itself the point, it's rather a tool for the purpose of making truth-claims that are not supported by evidence or that contradict the evidence. Skepticism and relativism about truth in this context function as a kind of veil or smokescreen to obscure the way ideology shapes the truth-claims that are being made.
As a result much of the activity on the subject takes place on this more humdrum quotidian level, in between metaquestions and bedrock notions of truth, where one can ask if this map is accurate or not, if this bus schedule tells us where and when the bus really does go, if this history text contains falsifications or not, if the charges against this scholar or that tobacco company are based on sound evidence or not.
Meta-questions about truth of course do have a strong claim to serious consideration. Maybe we are brains in vats; maybe we all are, without realizing it, Keanu Reeves; there is no way to establish with certainty that we're not; thus questions on the subject do have a claim to consideration, however unresolvable they are. (At the same time, however unresolvable they are, it is noticeable that on the mundane level of this particular possible world, no one really does take them seriously; no one really does seriously doubt that fire burns or that axes chop.)
Intermediate level questions can also be serious, searching, and worth exploring. Standpoint epistemology is reasonable enough in fields where standpoints are part of the subject matter: histories of experience, of subjective views and mentalities, of oppression, for example, surely need at least to consider the subjective stance of the inquirer. Sociology of knowledge is an essential tool of inquiry into the way interests and institutions can shape research programs and findings, provided it doesn't, as a matter of principle, exclude the causative role of evidence. In short there are, to borrow a distinction of Susan Haack's, sober and inebriated versions of questions about the possibility of truth.
Q: Arguably even the most extremist forms of skepticism can have some beneficial effects -- if only indirectly, by raising the bar for what counts as a true or valid statement. (That's one thumbnail version of intellectual history, anyway: no Sextus Empiricus would mean no Descartes.) Is there any sense in which "epistemic relativism" might have some positive effect, after all?
A: Oh, sure. In fact I think it would be extremely hard to argue the opposite. And the ways in which it could have positive effects seem obvious enough. There's Mill's point about the need for contrary arguments in order to know the grounds of one's own views, for one. Our most warranted beliefs, as he says, have no safeguard to rest on other than a standing invitation to everyone to refute them.
If we know only our own side of the case, we don't know much. Matt Ridley made a related point in a comment on the Kitzmiller Intelligent Design trial for Butterflies and Wheels: "My concern ... is about scientific consensus. In this case I find it absolutely right that the overwhelming nature of the consensus should count against creationism. But there have been plenty of other times when I have been on the other side of the argument and seen what Madison called the despotism of the majority as a bad argument.... I agree with the scientific consensus sometimes but not always, but I do not do so because it is a consensus. Science does not work that way or Newton, Harvey, Darwin and Wegener would all have been voted into oblivion."
Another way epistemic relativism may be of value is that it is one source (one of many) of insight into what it is that some people dislike and distrust about science and reason. In a way it's a silly argument to say that science is elitist or undemocratic, since it is of course the paradigmatic case of the career open to talent. But in another way it isn't silly at all, because as Michael Young pointed out in the '50s, meritocracy has some harsh side-effects, such as erosion of the sense of self-worth of the putative less talented. Epistemic relativism may function partly as a reminder of that.
The arguments of epistemic relativism may be unconvincing, but some of the unhappiness that prompts the arguments may be more worth taking seriously. However one then has to weigh those effects against the effects of pervasive suspicion of science and reason, and one grows pale with fear. At a time when there are so many theocrats and refugees from the reality-based community on the loose, epistemic relativism doesn't seem to need more encouragement than it already has.
Why do narratives of decline have such perennial appeal in the liberal arts, especially in the humanities? Why is it that, year after year, meeting after meeting, we hear laments about the good old days and predictions of ever worse days to come? Why is such talk especially common in elite institutions where, by many indicators, liberal education is doing quite well, thank you very much? I think I know why. The opportunity is just too ripe for the prophets of doom and gloom to pass up.
There is a certain warmth and comfort in being inside the “last bastion of the liberal arts,” as B.A. Scott characterized prestigious colleges and research universities in his collection of essays The Liberal Arts in a Time of Crisis (New York: Praeger, 1990). The weather outside may be frightful, but inside the elite institutions, if not “delightful,” it’s perfectly tolerable, and likely to remain so until retirement time.
Narratives of decline have also been very useful to philanthropy, but in a negative way. As Tyler Cowen recently noted in The New York Times, “many donors … wish to be a part of large and successful organizations -- the ‘winning team’ so to speak.” They are not eager to pour out their funds in order to fill a moat or build a wall protecting some isolated “last bastion.” Narratives of decline provide a powerful reason not to reach for the checkbook. Most of us in the foundation world, like most other people, prefer to back winners rather than losers. Since there are plenty of potential winners out there, in areas of pressing need, foundation dollars have tended to flow away from higher education in general, and from liberal education in particular.
But at the campus level there’s another reason for the appeal of the narrative of decline, a genuinely insidious one. If something goes wrong the narrative of decline of the liberal arts always provides an excuse. If course enrollments decline, well, it’s just part of the trend. If students don’t like the course, well, the younger generation just doesn’t appreciate such material. If the department loses majors, again, how can it hope to swim upstream when the cultural currents are so strong? Believe in a narrative of decline and you’re home free; you never have to take responsibility, individual or collective, for anything having to do with liberal education.
There’s just one problem. The narrative of decline is about one generation out of date and applies now only in very limited circumstances. It’s true that in 1890, degrees in the liberal arts and sciences accounted for about 75 percent of all bachelor’s degrees awarded; today the number is about 39 percent, as Patricia J. Gumport and John D. Jennings noted in “Toward the Development of Liberal Arts Indicators” (American Academy of Arts and Sciences, 2005). But most of that decline had taken place by 1956, when the liberal arts and sciences had 40 percent of the degrees.
Since then the numbers have gone up and down, rising to 50 percent by 1970, falling to 33 percent by 1990, and then rising close to the 1956 levels by 2001, the last year for which the data have been analyzed. Anecdotal evidence, and some statistics, suggest that the numbers continue to rise, especially in Research I universities.
For example, in the same AAA&S report (“Tracking Changes in the Humanities”) from which these figures have been derived, Donald Summer examines the University of Washington (“Prospects for the Humanities as Public Research Universities Privatize their Finances”) and finds that majors in the humanities have been increasing over the last few years and course demand is strong.
The stability of liberal education over the past half century seems to me an amazing story, far more compelling than a narrative of decline, especially when one recognizes the astonishing changes that have taken place over that time: the vast increase in numbers of students enrolled in colleges and universities, major demographic changes, the establishment of new institutions, the proliferation of knowledge, the emergence of important new disciplines, often in the applied sciences and engineering, and, especially in recent years, the financial pressures that have pushed many institutions into offering majors designed to prepare students for entry level jobs in parks and recreation, criminal justice, and now homeland security studies. And, underlying many of these changes, transformations of the American economy.
The Other, Untold Story
How, given all these changes, and many others too, have the traditional disciplines of the arts and sciences done as well as they have? That would be an interesting chapter in the history of American higher education. More pressing, however, is the consideration of one important consequence of narratives of decline of the liberal arts.
This is the “last bastion” mentality, signs of which are constantly in evidence when liberal education is under discussion. If liberal education can survive only within the protective walls of elite institutions, it doesn’t really make sense to worry about other places. Graduate programs, then, will send the message that success means teaching at a well-heeled college or university, without any hint that with some creativity and determination liberal education can flourish in less prestigious places, and that teaching there can be as satisfying as it is demanding.
Here’s one example of what I mean. In 2000, as part of a larger initiative to strengthen undergraduate liberal education, Grand Valley State University, a growing regional public institution in western Michigan, decided to establish a classics department. Through committed teaching, imaginative curriculum design, and strong support from the administration, the department has grown to six tenured and tenure-track positions with about 50 majors on the books at any given moment. Most of these are first-generation college students from blue-collar backgrounds who had no intention of majoring in classics when they arrived at Grand Valley State, but many have an interest in mythology or in ancient history that has filtered down through popular culture and high school curricula. The department taps into this interest through entry-level service courses, which are taught by regular faculty members, not part-timers or graduate students.
That’s a very American story, but the story of liberal education is increasingly a global one as well. New liberal arts colleges and universities are springing up in many countries, especially those of the former Soviet Union.
I don’t mean that the spread of liberal education comes easily, in the United States or elsewhere. It’s swimming upstream. Cultural values, economic anxieties, and all too often institutional practices (staffing levels, salaries, leave policies and research facilities) all exert their downward pressure. It takes determination and devotion to press ahead. And those who do rarely get the recognition or credit they deserve.
But breaking out of the protective bastion of the elite institutions is vital for the continued flourishing of liberal education. One doesn’t have to read a lot of military history to know what happens to last bastions. They get surrounded; they eventually capitulate, often because those inside the walls squabble among themselves rather than devising an effective breakout strategy. We can see that squabbling at work every time humanists treat with contempt the quantitative methods of their scientific colleagues and when scientists contend that the reason we are producing so few scientists is that too many students are majoring in other fields of the liberal arts.
The last bastion mentality discourages breakout strategies. Even talking to colleagues in business or environmental studies can be seen as collaborating with the enemy rather than as a step toward broadening and enriching the education of students majoring in these fields. The last bastion mentality, like the widespread narratives of decline, injects the insidious language of purity into our thinking about student learning, hinting that any move beyond the cordon sanitaire is somehow foul or polluting and likely to result in the corruption of high academic standards.
All right, what if one takes this professed concern for high standards seriously? What standards, exactly, do we really care about and wish to see maintained? If it’s a high level of student engagement and learning, then let’s say so, and be forthright in the claim that liberal education is reaching that standard, or at least can reach that standard if given half a chance. That entails, of course, backing up the claim with some systematic form of assessment.
That provides one way to break out of the last bastion mentality. One reason that liberal education remains so vital is that when properly presented it contributes so much to personal and cognitive growth. The subject matter of the liberal arts and sciences provides some of the best ways of helping students achieve goals such as analytical thinking, clarity of written and oral expression, problem solving, and alertness to moral complexity, unexpected consequences and cultural difference. These goals command wide assent outside academia, not least among employers concerned about the quality of their work forces. They are, moreover, readily attainable through liberal education provided proper attention is paid to “transference.” “High standards” in liberal education require progress toward these cognitive capacities.
Is it not time, then, for those concerned with the vitality of liberal education to abandon the defensive strategies that derive from the last bastion mentality, and adopt a new and much more forthright stance? Liberal education cares about high standards of student engagement and learning, and it cares about them for all students regardless of their social status or the institution in which they are enrolled.
There is, of course, a corollary. Liberal education can’t just make the claim that it is committed to such standards, still less insist that others demonstrate their effectiveness in reaching them, unless those of us in the various fields of the arts and sciences are willing to put ourselves on the line. In today’s climate we have to be prepared to back up the claim that we are meeting those standards. Ways to make such assessments are now at hand, still incomplete and imperfect, but good enough to provide an opportunity for the liberal arts and sciences to show what they can do.
That story, I am convinced, is far more compelling than any narrative of decline.
George Scialabba is an essayist and critic working at Harvard University who has just published a volume of selected pieces under the title Divided Mind, issued by a small press in Boston called Arrowsmith. The publisher does not have a Web site. You cannot, as yet, get Divided Mind through Amazon, though it is said to be available in a few Cambridge bookstores. This may be the future of underground publishing: Small editions, zero publicity, and you have to know the secret password to get a copy. (I'll give contact information for the press at the end of this column, for anyone willing to put a check in the mail the old-fashioned way.)
In any case, it is about time someone brought out a collection of Scialabba's work. That it's only happening now (15 years after the National Book Critics Circle gave him its first award for excellence in reviewing) is a sign that things are not quite right in the world of belles lettres. He writes in what William Hazlitt -- the patron saint of generalist essayists -- called "the familiar style," and he is sometimes disarmingly explicit about the difficulties, even the pain, he experiences in trying to resolve cultural contradictions. That is no way to create the aura of mystery and mastery so crucial for awesome intellectual authority.
Scialabba has his admirers, even so, and one of the pleasant surprises of Divided Mind is the set of comments on the back. "I am one of the many readers who stay on the lookout for George Scialabba's byline," writes Richard Rorty. "He cuts to the core of the ethical and political dilemmas he discusses." The novelist Norman Rush lauds Scialabba's prose itself for "bring[ing] the review-essay to a high state of development, incorporating elements of memoir and skillfully deploying the wide range of literary and historical references he commands." And there is a blurb from Christopher Hitchens praising his "eloquence and modesty" -- though perhaps that is just a gesture of relief that Scialabba has not reprinted his candid reassessment of Hitch, post-9/11.
One passage early in the collection gives a roll call of exemplary figures practicing a certain kind of writing. It includes Randolph Bourne, Bertrand Russell, George Orwell, and Maurice Merleau-Ponty, among others. "Their primary training and frame of reference," Scialabba writes, "were the humanities, usually literature or philosophy, and they habitually, even if only implicitly, employed values and ideals derived from the humanities to criticize contemporary politics.... Their 'specialty' lay not in unearthing generally unavailable facts, but in penetrating especially deeply into the shared culture, in grasping and articulating its contemporary moral/political relevance with special originality and force."
The interesting thing about this passage -- aside from its apt self-portrait of the author -- is the uncertain meaning of that slashmark in the phrase "contemporary moral/political relevance." Does it serve as the equivalent of an equals sign? I doubt that. But it suggests that the relationship is both close and problematic.
We sometimes say that a dog "worries" a bone, meaning he chews it with persistent attention; and in that sense, Divided Mind is a worried book, gnawing with a passion on the "moral/political" problems that go with holding an egalitarian outlook. Scialabba is a man of the left. If you can imagine a blend of Richard Rorty's skeptical pragmatism and Noam Chomsky's geopolitical worldview -- and it's a bit of a stretch to reconcile them, though somehow he does this -- then you have a reasonable sense of Scialabba's own politics. In short, it is the belief that life would be better, both in the United States and elsewhere, with more economic equality, a stronger sense of the common good, and the end of that narcissistic entitlement fostered by the American military-industrial complex.
A certain amount of gloominess goes with holding these principles without believing that History is on the long march to their fulfillment. But there is another complicating element in Divided Mind. It is summed up in a passage from the Spanish philosopher José Ortega y Gasset's The Revolt of the Masses, from 1930 -- though you might find the same thought formulated by a dozen other conservative thinkers.
"The most radical division it is possible to make of humanity," Ortega y Gasset declares, "is that which splits it into two classes of creatures: those who make great demands on themselves, piling up difficulties and duties; and those who demand nothing special of themselves, but for whom to live is to be every moment what they already are, without imposing on themselves any effort toward perfection; mere buoys that float on the waves."
Something in Ortega y Gasset's statement must have struck a chord with Scialabba. He quotes it in two essays. "Is this a valid distinction?" he asks. "Yes, I believe it is...." But the idea bothers him; it stimulates none of the usual self-congratulatory pleasures of snobbery. The division of humanity into two categories -- the noble and "the masses" -- lends itself to anti-democratic sentiments, if not the most violently reactionary sort of politics.
At the very least, it undermines the will to make egalitarian changes. Yet it is also very hard to gainsay the truth of it. How, then, to resolve the tension? Divided Mind is a series of efforts -- provisional, personal, and ultimately unfinished -- to work out an answer.
At this point it bears mentioning that Scialabba's reflections do not follow the protocols of any particular academic discipline. He took his undergraduate degree at Harvard (Class of 1969) and has read his way through a canon or two; but his thinking is not, as the saying now goes, "professionalized." He is a writer who works at Harvard -- but not in the way that statement would normally suggest.
"After spells as a substitute teacher and Welfare Department social worker," he told me recently in an e-mail exchange, "I was, for 25 years, the manager or superintendent of a mid-sized academic office building, which housed Harvard's Center for International Affairs and several regional (East Asian, Russian, Latin American, Middle Eastern, etc.) research centers. I gave directions to visitors, scheduled the seminar rooms, got offices painted, carpets installed, shelves built, windows washed, keys made, bills paid. I flirted with graduate students and staff assistants, schmoozed with junior faculty, and saw, heard, overheard, and occasionally got to know a lot of famous and near-famous academics."
As day jobs go, it was conducive to writing. "I had a typewriter and a copy machine," he says, "a good library nearby, and didn't come home every night tired or fretting about office politics." When the "homely mid-sized edifice" was replaced with "a vast, two-building complex housing the political science and history departments as well," the daily grind changed as well: "I'm now part of a large staff, and most of my days are spent staring at a flickering screen."
More pertinent to understanding what drives him as a writer, I think, are certain facts about his background that the reader glimpses in various brief references throughout his essays. The son of working-class Italian-American parents, he was once a member of the ascetic and conservative Roman Catholic group Opus Dei. In adolescence, he thought he might have a religious vocation. The intelligence of his critical writings is now unmistakably secular and modernist. He shows no sign of nostalgia for the faith now lost to him. But the extreme dislocation implied in leaving one life for another gives an additional resonance to the title of his collection of essays.
"For several hundred years," he told me, "a small minority of Italian/French/Spanish adolescent peasant or working-class boys -- usually the sternly repressed or (like me) libido-deficient ones -- have been devout, well-behaved, studious. Depending on their abilities and on what sort of priest they're most in contact with, they join a diocese or a religious order. Among the latter, the bright ones become Jesuits; the more modestly gifted or mystically inclined become Franciscans. I grew up among Franciscans and at first planned to become one, but I just couldn't resist going to college -- intellectual concupiscence, I guess."
Instead, he was drawn into Opus Dei -- a group trying, as he puts it, "to make a new kind of religious vocation possible, combining the traditional virtues and spiritual exercises with a professional or business career."
He recalls being "tremendously enthusiastic for the first couple of years, trying very hard, though fruitlessly, to recruit my fellow Catholic undergraduates at Harvard in the late 1960s. It was a strain, being a divine secret agent and trying at the same time to survive academically before the blessed advent of grade inflation. But the reward -- an eternity of happiness in heaven!"
The group permitted him to read secular authors, the better to understand and condemn their heresies.
"Then," he says, "Satan went to work on me. As I studied European history and thought, my conviction gradually grew that the Church had, for the most part, been on the wrong side. Catholic philosophy was wrong; Catholic politics were authoritarian.... On one occasion, just after I had read Dostoevsky's parable of the Grand Inquisitor, I was rebuked for my intellectual waywardness by a priestly superior with, I fancied, a striking physical resemblance to the terrifying prelate in Ivan's fable. The hair stood up on the back of my neck."
The departure was painful. The new world he discovered on the other side of his crossing "wasn't in the slightest degree an original discovery," he says. "I simply bought the now-traditional narrative of modernity, hook, line and sinker. I still do, pretty much." But he was not quite ready to plunge without reserve into the counterculture of the time -- sex, drugs, rock and roll.
"I was, to an unusual degree, living in my head rather than my body," he says about the 1970s. "I had emerged from Opus Dei with virtually no friends, a conscious tendency to identify my life course with the trajectory of modernity, and an unconscious need to be a saint, apostle, missionary. And I had inherited from my working-class Italian family no middle-class expectations, ambitions, social skills, ego structures."
Instead, he says, "I read a lot and seethed with indignation at all forms of irrational authority or even conventional respectability. So I didn't take any constructive steps, like becoming a revolutionary or a radical academic.... In those days, it wasn't quite so weird not to be ascending some career ladder."
So he settled into a job that left him with time to think and write. And to deal with the possibility of eternal damnation -- something that can occasionally bedevil one part of the mind, even while the secular and modernist half retains its disbelief.
Somewhere in my study is a hefty folder containing, if not George Scialabba's complete oeuvre, then at least the bulk of it. After several years of reading and admiring his essays, I can testify that Divided Mind is a well-edited selection covering many of his abiding concerns. It ought to be of interest to anyone drawn to the "fourth genre," as the essay is sometimes called. (The other three -- poetry, drama, and fiction -- get all the glory.)
As noted, the publisher seems to be avoiding crass commercialism (not to mention convenience to the reader) by keeping Divided Mind out of the usual online bookselling venues. You can order it from the address below for $13, however. That price includes shipping and handling.