The Truth? You Can't Handle the Truth!

The hurried patron spying Why Truth Matters (Continuum) on the new arrivals shelf of a library may assume that it is yet another denunciation of the Republicans. New books defending the “reality-based community” are already thick on the ground – and the publishers' fall catalogs swarm with fresh contributions to the cause. Last month, at BookExpo America (the annual trade show for the publishing industry), I saw an especially economical new contribution to the genre: a volume attributed to G.W. Bush under the title Whoops, I Was Wrong. The pages were completely blank.

Such books change nobody’s mind, of course. The market for them is purely a function of how much enthusiasm the choir feels for the sermon being addressed to it. As it turns out, Why Truth Matters has nothing to do with the G.O.P., and everything to do with what is sometimes called the postmodern academic left -- home to cross-dressing Nietzschean dupes of the Sokal hoax.

Or so one gathers from the muttering of various shell-shocked Culture Warriors. Like screeds against the neocons, the diatribes contra pomo now tend to be light on data, and heavy on the indignation. (The choir does love indignation.)

Fortunately, Why Truth Matters, by Ophelia Benson and Jeremy Stangroom, is something different. As polemics go, it is short and adequately pugnacious. Yet the authors do not paint their target with too broad a brush. At heart, they are old-fashioned logical empiricists -- or, perhaps, followers of Samuel Johnson, who, upon hearing of Bishop Berkeley’s contention that the objective world does not exist, refuted the argument by kicking a rock. Still, Benson and Stangroom do recognize that there are numerous varieties of contemporary suspicion regarding the concept of truth.

They bend over backwards in search of every plausible good intention behind postmodern epistemic skepticism. And then they kick the rock.

The authors run a Web site of news and commentary, Butterflies and Wheels. And both are editors of The Philosophers’ Magazine, a quarterly journal. In the spirit of full disclosure, it bears mentioning that I write a column for the latter publication.

That fact in no way disposes me, however, to overlook a striking gap in the book’s otherwise excellent index: the lack of any entry for “truth, definition of.” When I contacted Ophelia Benson recently for an e-mail interview, that seemed like the place to start.

Q: What is truth? Is there more than one kind? If not, why not?

A: I'll just refer you to jesting Pilate, and let it go at that!

Q: Well, the gripe about jesting Pilate is that "he would not stay for the answer." Whereas I am actually going to stick around and press the point. Your book pays tribute to the human capacity for finding truth, and warns against cultural forces tending to undermine or destroy it. So what's the bottom-line criterion you have in mind for defining truth?  

A: It all depends, as pedants always say, on what you mean by "truth." Sure, in a sense, there is more than one kind. There is emotional truth, for instance, which is ungainsayable and rather pointless to dispute. It is also possible and not necessarily silly to talk about somewhat fuzzy-bordered kinds such as literary truth, aesthetic truth, the truth of experience, and the like.

The kind of truth we are concerned with in the book is the fairly workaday, empirical variety that is (or should be) the goal of academic disciplines such as history and the sciences. We are concerned with pretty routine sorts of factual claim that can be either supported or rejected on the basis of evidence, and with arguments that cast doubt on that very way of proceeding.

Q: Is anybody really making a serious dent in this notion of truth? You hear all the time that the universities are full of postmodernists who think that scientific knowledge is just a Eurocentric fad, and therefore people could flap their wings and fly to the moon if they wanted. And yet you never actually see anyone doing that. At least I haven't, and I go to MLA every year.

A: Of course, there is no shortage of wild claims about what people get up to in universities. Such things make good column fodder, good talk show fodder, good gossip fodder, not to mention another round of the ever-popular game of "Who's Most Anti-Intellectual?" But there are people making some serious dents in this notion of truth and of scientific knowledge, yes. That's essentially the subject matter of Why Truth Matters: the specifics of what claims are being made, in what disciplines, using what arguments.
There are people who argue seriously that, as Sandra Harding puts it, the idea that scientific "knowers" are in principle interchangeable means that "white, Western, economically privileged, heterosexual men can produce knowledge at least as good as anyone else can" and that this appears to be an antidemocratic consequence. Harding's books are still, despite much criticism, widely assigned. There are social constructionists in sociology and philosophy of science who view social context as fully explanatory of the formation of scientific belief and knowledge, while excluding the role of evidence.

There are Afrocentric historians who make factual claims that contradict existing historical evidence, such as the claim that Aristotle stole his philosophy from the library at Alexandria when, as Mary Lefkowitz points out, that library was not built until after Aristotle's death. Lefkowitz was shocked to get no support from her colleagues when she pointed out factual errors of this kind, and even more shocked when the dean of her college (Wellesley) told her that "each of us had a different but equally valid view of history." And so on (there's a lot of the "so on" in the book).
That sort of thing of course filters out into the rest of the world, not surprisingly: People go to university and emerge having picked up the kind of thought Lefkowitz's dean had picked up; such thoughts get into newspaper columns and magazine articles; and the rest of us munch them down with our cornflakes.

We don't quite think we could fly to the moon if we tried hard enough, but we may well think there's something a bit sinister and elitist about scientific knowledge, we may well think that oppressed and marginalized groups should be allowed their own "equally valid" view of history by way of compensation, we may well think "there's no such thing as truth, really."

Q: Your book describes and responds to a considerable range of forms of thought: old-fashioned Pyrrhonian skepticism, "standpoint" epistemology, sociology of knowledge, neopragmatism, pomo, etc. Presumably not all questions about the possibility of a bedrock notion of truth are created equal. What kinds have a strong claim to serious consideration?

A: Actually, much of the range of thought we look at doesn't necessarily ask meta-questions about truth. A lot of it is more like second level or borrowed skepticism or relativism about truth, not argued so much as referenced, or simply assumed; waved at rather than defended. The truth relativism is not itself the point, it's rather a tool for the purpose of making truth-claims that are not supported by evidence or that contradict the evidence. Skepticism and relativism about truth in this context function as a kind of veil or smokescreen to obscure the way ideology shapes the truth-claims that are being made.

As a result, much of the activity on the subject takes place on this more humdrum, quotidian level, in between metaquestions and bedrock notions of truth, where one can ask if this map is accurate or not, if this bus schedule tells us where and when the bus really does go, if this history text contains falsifications or not, if the charges against this scholar or that tobacco company are based on sound evidence or not.

Meta-questions about truth of course do have a strong claim to serious consideration. Maybe we are brains in vats; maybe we all are, without realizing it, Keanu Reeves; there is no way to establish with certainty that we're not; thus questions on the subject do have a claim to consideration, however unresolvable they are. (At the same time, however unresolvable they are, it is noticeable that on the mundane level of this particular possible world, no one really does take them seriously; no one really does seriously doubt that fire burns or that axes chop.)

Intermediate level questions can also be serious, searching, and worth exploring. Standpoint epistemology is reasonable enough in fields where standpoints are part of the subject matter: histories of experience, of subjective views and mentalities, of oppression, for example, surely need at least to consider the subjective stance of the inquirer. Sociology of knowledge is an essential tool of inquiry into the way interests and institutions can shape research programs and findings, provided it doesn't, as a matter of principle, exclude the causative role of evidence. In short there are, to borrow a distinction of Susan Haack's, sober and inebriated versions of questions about the possibility of truth.

Q: Arguably even the most extremist forms of skepticism can have some beneficial effects -- if only indirectly, by raising the bar for what counts as a true or valid statement. (That's one thumbnail version of intellectual history, anyway: no Sextus Empiricus would mean no Descartes.) Is there any sense in which "epistemic relativism" might have some positive effect, after all?

A: Oh, sure. In fact I think it would be extremely hard to argue the opposite. And the ways in which it could have positive effects seem obvious enough. There's Mill's point about the need for contrary arguments in order to know the grounds of one's own views, for one. Our most warranted beliefs, as he says, have no safeguard to rest on other than a standing invitation to everyone to refute them.

If we know only our own side of the case, we don't know much. Matt Ridley made a related point in a comment on the Kitzmiller Intelligent Design trial for Butterflies and Wheels: "My concern ... is about scientific consensus. In this case I find it absolutely right that the overwhelming nature of the consensus should count against creationism. But there have been plenty of other times when I have been on the other side of the argument and seen what Madison called the despotism of the majority as a bad argument.... I agree with the scientific consensus sometimes but not always, but I do not do so because it is a consensus. Science does not work that way or Newton, Harvey, Darwin and Wegener would all have been voted into oblivion."

Another way epistemic relativism may be of value is that it is one source (one of many) of insight into what it is that some people dislike and distrust about science and reason. In a way it's a silly argument to say that science is elitist or undemocratic, since it is of course the paradigmatic case of the career open to talent. But in another way it isn't silly at all, because as Michael Young pointed out in the '50s, meritocracy has some harsh side-effects, such as erosion of the sense of self-worth of the putatively less talented. Epistemic relativism may function partly as a reminder of that.

The arguments of epistemic relativism may be unconvincing, but some of the unhappiness that prompts the arguments may be more worth taking seriously. However one then has to weigh those effects against the effects of pervasive suspicion of science and reason, and one grows pale with fear. At a time when there are so many theocrats and refugees from the reality-based community on the loose, epistemic relativism doesn't seem to need more encouragement than it already has.

Scott McLemee

Last Bastion of Liberal Education?

Why do narratives of decline have such perennial appeal in the liberal arts, especially in the humanities? Why is it that, year after year, meeting after meeting, we hear laments about the good old days and predictions of ever worse days to come? Why is such talk especially common in elite institutions where, by many indicators, liberal education is doing quite well, thank you very much? I think I know why. The opportunity is just too ripe for the prophets of doom and gloom to pass up.

There is a certain warmth and comfort in being inside the “last bastion of the liberal arts,” as B.A. Scott characterized prestigious colleges and research universities in his collection of essays The Liberal Arts in a Time of Crisis (New York: Praeger, 1990). The weather outside may be frightful, but inside the elite institutions, if not “delightful,” it’s perfectly tolerable, and likely to remain so until retirement time.

Narratives of decline have also been very useful to philanthropy, but in a negative way. As Tyler Cowen recently noted in The New York Times, “many donors … wish to be a part of large and successful organizations -- the ‘winning team’ so to speak.” They are not eager to pour out their funds in order to fill a moat or build a wall protecting some isolated “last bastion.” Narratives of decline provide a powerful reason not to reach for the checkbook. Most of us in the foundation world, like most other people, prefer to back winners rather than losers. Since there are plenty of potential winners out there, in areas of pressing need, foundation dollars have tended to flow away from higher education in general, and from liberal education in particular.

But at the campus level there’s another reason for the appeal of the narrative of decline, a genuinely insidious one. If something goes wrong the narrative of decline of the liberal arts always provides an excuse. If course enrollments decline, well, it’s just part of the trend.  If students don’t like the course, well, the younger generation just doesn’t appreciate such material. If the department loses majors, again, how can it hope to swim upstream when the cultural currents are so strong?  Believe in a narrative of decline and you’re home free; you never have to take responsibility, individual or collective, for anything having to do with liberal education.  

There’s just one problem. The narrative of decline is about one generation out of date and applies now only in very limited circumstances. It’s true that in 1890, degrees in the liberal arts and sciences accounted for about 75 percent of all bachelor’s degrees awarded; today the number is about 39 percent, as Patricia J. Gumport and  John D. Jennings noted in “Toward the Development of Liberal Arts Indicators” (American Academy of Arts and Sciences, 2005). But most of that decline had taken place by 1956, when the liberal arts and sciences had 40 percent of the degrees. 

Since then the numbers have gone up and down, rising to 50 percent by 1970, falling to 33 percent by 1990, and then rising close to the 1956 levels by 2001, the last year for which the data have been analyzed. Anecdotal evidence, and some statistics, suggest that the numbers continue to rise, especially in  Research I universities.  

For example, in the same AAA&S report (“Tracking Changes in the Humanities”) from which these figures have been derived, Donald Summer examines the University of Washington (“Prospects for the Humanities as Public Research Universities Privatize their Finances”) and finds that majors in the humanities have been increasing over the last few years and course demand is strong.

The stability of liberal education over the past half century seems to me an amazing story, far more compelling than a narrative of decline, especially when one recognizes the astonishing changes that have taken place over that time: the vast increase in numbers of students enrolled in colleges and universities,  major demographic changes, the establishment of new institutions, the proliferation of knowledge, the emergence of important new disciplines, often in the applied sciences and engineering, and, especially in recent years, the financial pressures that have pushed many institutions into offering majors designed to prepare students for entry level jobs in parks and recreation, criminal justice, and now homeland security studies. And, underlying many of these changes, transformations of the American economy.    

The Other, Untold Story

How, given all these changes, and many others too, have the traditional disciplines of the arts and sciences done as well as they have? That would be an interesting chapter in the history of American higher education. More pressing, however, is the consideration of one important consequence of narratives of decline of the liberal arts.

This is the “last bastion” mentality, signs of which are constantly in evidence when liberal education is under discussion. If liberal education can survive only within the protective walls of elite institutions, it doesn’t really make sense to worry about other places. Graduate programs, then, will send the message that success means teaching at a well-heeled college or university, without any hint that with some creativity and determination liberal education can flourish in less prestigious places, and that teaching there can be as satisfying as it is demanding.

Here’s one example of what I mean. In 2000, as part of a larger initiative to strengthen undergraduate liberal education,  Grand Valley State University, a growing regional public institution in western Michigan, decided to establish a classics department. Through committed teaching, imaginative curriculum design, and with strong support from the administration, the department has grown to six tenured and tenure track positions with about 50 majors on the books at any given moment. Most of these are first-generation college students from blue-collar backgrounds who had no intention of majoring in classics when they arrived at Grand Valley State, but many have an interest in mythology or in ancient history that has filtered down through popular culture and high school curricula. The department taps into this interest through entry-level service courses, which are taught by regular faculty members, not part timers or graduate students.

That’s a very American story, but the story of liberal education is increasingly a global one as well.  New colleges and universities in the liberal arts are springing up in many countries, especially those of the former Soviet Union.

I don’t mean that the spread of liberal education comes easily, in the United States or elsewhere. It’s swimming upstream. Cultural values, economic anxieties, and all too often institutional practices (staffing levels, salaries, leave policies and research facilities) all exert their downward pressure. It takes determination and devotion to press ahead. And those who do rarely get the recognition or credit they deserve.

But breaking out of the protective bastion of the elite institutions is vital for the continued flourishing of liberal education. One doesn’t have to read a lot of military history to know what happens to last bastions. They get surrounded; they eventually capitulate, often because those inside the walls squabble among themselves rather than devising an effective breakout strategy. We can see that squabbling at work every time humanists treat with contempt the quantitative methods of their scientific colleagues and when scientists contend that the reason we are producing so few scientists is that too many students are majoring in other fields of the liberal arts.  

The last bastion mentality discourages breakout strategies. Even talking to colleagues in business or environmental studies can be seen as collaborating with the enemy rather than as a step toward broadening and enriching the education of students majoring in these fields. The last bastion mentality, like the widespread narratives of decline, injects the insidious language of purity into our thinking about student learning, hinting that any move  beyond the cordon sanitaire is somehow foul or polluting and likely to result in the corruption of high academic standards.   

All right, what if one takes this professed concern for high standards seriously? What standards, exactly, do we really care about and wish to see maintained? If it’s a high level of student engagement and learning, then let’s say so, and be forthright in the claim that liberal education is reaching that standard, or at least can reach that standard if given half a chance. That entails, of course, backing up the claim with some systematic form of assessment.

That provides one way to break out of the last bastion mentality. One reason that liberal education remains so vital  is that when properly presented it contributes so much to personal and cognitive growth. The subject matter of the liberal arts and sciences provides some of the best ways of helping students achieve goals such as analytical thinking, clarity of written and oral expression,  problem solving, and alertness to moral complexity, unexpected consequences and cultural difference. These goals command wide assent outside academia, not least among employers concerned about the quality of their work forces. They are, moreover, readily attainable  through liberal education provided proper attention is paid to “transference.”  “High standards” in liberal education require progress toward these cognitive capacities.

Is it not time, then, for those concerned with the vitality of liberal education to abandon the defensive strategies that derive from the last bastion mentality, and adopt a new and much more forthright stance? Liberal education cares about high standards of student engagement and learning, and it cares about them for all students regardless of their social status or the institution in which they are enrolled.

There is, of course, a corollary. Liberal education can’t just make the claim that it is committed to such standards, still less insist that others demonstrate their effectiveness in reaching them, unless those of us in the various fields of the arts and sciences are willing to put ourselves on the line. In today’s climate  we have to be prepared to back up the claim that we are meeting those standards. Ways to make such assessments are now at hand, still incomplete and imperfect, but good enough to provide an opportunity for the liberal arts and sciences to show what they can do.

That story, I am convinced, is far more compelling than any narrative of decline.

W. Robert Connor

W. Robert Connor is president of the Teagle Foundation and blogs frequently about liberal education.

Divided Mind

George Scialabba is an essayist and critic working at Harvard University who has just published a volume of selected pieces under the title Divided Mind, issued by a small press in Boston called Arrowsmith. The publisher does not have a Web site. You cannot, as yet, get Divided Mind through Amazon, though it is said to be available in a few Cambridge bookstores. This may be the future of underground publishing: Small editions, zero publicity, and you have to know the secret password to get a copy. (I'll give contact information for the press at the end of this column, for anyone willing to put a check in the mail the old-fashioned way.)

In any case, it is about time someone brought out a collection of Scialabba's work. That it's only happening now (15 years after the National Book Critics Circle gave him its first award for excellence in reviewing) is a sign that things are not quite right in the world of belles lettres. He writes in what William Hazlitt -- the patron saint of generalist essayists -- called "the familiar style," and he is sometimes disarmingly explicit about the difficulties, even the pain, he experiences in trying to resolve cultural contradictions. That is no way to create the aura of mystery and mastery so crucial for awesome intellectual authority.

Scialabba has his admirers, even so, and one of the pleasant surprises of Divided Mind is the set of comments on the back. "I am one of the many readers who stay on the lookout for George Scialabba's byline," writes Richard Rorty. "He cuts to the core of the ethical and political dilemmas he discusses." The novelist Norman Rush lauds Scialabba's prose itself for "bring[ing] the review-essay to a high state of development, incorporating elements of memoir and skillfully deploying the wide range of literary and historical references he commands." And there is a blurb from Christopher Hitchens praising his "eloquence and modesty" --  though perhaps that is just a gesture of relief that Scialabba has not reprinted his candid reassessment of Hitch,  post-9/11.

One passage early in the collection gives a roll call of exemplary figures practicing a certain kind of writing. It includes Randolph Bourne, Bertrand Russell, George Orwell, and Maurice Merleau-Ponty, among others. "Their primary training and frame of reference," Scialabba writes, "were the humanities, usually literature or philosophy, and they habitually, even if only implicitly, employed values and ideals derived from the humanities to criticize contemporary politics.... Their 'specialty' lay not in unearthing generally unavailable facts, but in penetrating especially deeply into the shared culture, in grasping and articulating its contemporary moral/political relevance with special originality and force."

The interesting thing about this passage -- aside from its apt self-portrait of the author -- is the uncertain meaning of that slashmark in the phrase "contemporary moral/political relevance." Does it serve as the equivalent of an equals sign? I doubt that. But it suggests that the relationship is both close and problematic.

We sometimes say that a dog "worries" a bone, meaning he chews it with persistent attention; and in that sense, Divided Mind is a worried book, gnawing with a passion on the "moral/political" problems that go with holding an egalitarian outlook. Scialabba is a man of the left. If you can imagine a blend of Richard Rorty's skeptical pragmatism and Noam Chomsky's geopolitical worldview -- and it's a bit of a stretch to reconcile them, though somehow he does this -- then you have a reasonable sense of Scialabba's own politics. In short, it is the belief that life would be better, both in the United States and elsewhere, with more economic equality, a stronger sense of the common good, and the end of that narcissistic entitlement fostered by the American military-industrial complex.

A certain amount of gloominess goes with holding these principles without believing that History is on the long march to their fulfillment. But there is another complicating element in Divided Mind. It is summed up in a passage from the Spanish philosopher José Ortega y Gasset's The Revolt of the Masses, from 1930 -- though you might find the same thought formulated by a dozen other conservative thinkers.

"The most radical division it is possible to make of humanity," Ortega y Gasset declares, "is that which splits it into two classes of creatures: those who make great demands on themselves, piling up difficulties and duties; and those who demand nothing special of themselves, but for whom to live is to be every moment what they already are, without imposing on themselves any effort toward perfection; mere buoys that float on the waves."

Something in Ortega y Gasset's statement must have struck a chord with Scialabba. He quotes it in two essays. "Is this a valid distinction?" he asks. "Yes, I believe it is...." But the idea bothers him; it stimulates none of the usual self-congratulatory pleasures of snobbery. The division of humanity into two categories -- the noble and "the masses" -- lends itself to anti-democratic sentiments, if not the most violently reactionary sort of politics.

At the very least, it undermines the will to make egalitarian changes. Yet it is also very hard to gainsay the truth of it. How, then, to resolve the tension? Divided Mind is a series of efforts -- provisional, personal, and ultimately unfinished -- to work out an answer.

At this point it bears mentioning that Scialabba's reflections do not follow the protocols of any particular academic discipline. He took his undergraduate degree at Harvard (Class of 1969) and has read his way through a canon or two; but his thinking is not, as the saying now goes, "professionalized." He is a writer who works at Harvard -- but not in the way that statement would normally suggest.

"After spells as a substitute teacher and Welfare Department social worker," he told me recently in an e-mail exchange, "I was, for 25 years, the manager or superintendent of a mid-sized academic office building, which housed Harvard's Center for International Affairs and several regional (East Asian, Russian, Latin American, Middle Eastern, etc) research centers. I gave directions to visitors, scheduled the seminar rooms, got offices painted, carpets installed, shelves built, windows washed, keys made, bills paid. I flirted with graduate students and staff assistants, schmoozed with junior faculty, and saw, heard, overheard, and occasionally got to know a lot of famous and near-famous academics."

As day jobs go, it was conducive to writing. "I had a typewriter and a copy machine," he says, "a good library nearby, and didn't come home every night tired or fretting about office politics." When the "homely mid-sized edifice" was replaced with "a vast, two-building complex housing the political science and history departments as well," the daily grind changed as well: "I'm now part of a large staff, and most of my days are spent staring at a flickering screen."

More pertinent to understanding what drives him as a writer, I think, are certain facts about his background that the reader glimpses in various brief references throughout his essays. The son of working-class Italian-American parents, he was once a member of the ascetic and conservative Roman Catholic group Opus Dei. In adolescence, he thought he might have a religious vocation. The intelligence at work in his critical writings is now unmistakably secular and modernist. He shows no sign of nostalgia for the faith now lost to him. But the extreme dislocation implied in leaving one life for another gives an additional resonance to the title of his collection of essays.

"For several hundred years," he told me, "a small minority of Italian/French/Spanish adolescent peasant or working-class boys -- usually the sternly repressed or (like me) libido-deficient ones -- have been devout, well-behaved, studious. Depending on their abilities and on what sort of priest they're most in contact with, they join a diocese or a religious order. Among the latter, the bright ones become Jesuits; the more modestly gifted or mystically inclined become Franciscans. I grew up among Franciscans and at first planned to become one, but I just couldn't resist going to college -- intellectual concupiscence, I guess."

Instead, he was drawn into Opus Dei -- a group trying, as he puts it, "to make a new kind of religious vocation possible, combining the traditional virtues and spiritual exercises with a professional or business career."

He recalls being "tremendously enthusiastic for the first couple of years, trying very hard, though fruitlessly, to recruit my fellow Catholic undergraduates at Harvard in the late 1960s. It was a strain, being a divine secret agent and trying at the same time to survive academically before the blessed advent of grade inflation. But the reward -- an eternity of happiness in heaven!"

The group permitted him to read secular authors, the better to understand and condemn their heresies.

"Then," he says, "Satan went to work on me. As I studied European history and thought, my conviction gradually grew that the Church had, for the most part, been on the wrong side. Catholic philosophy was wrong; Catholic politics were authoritarian....On one occasion, just after I had read Dostoevsky's parable of the Grand Inquisitor, I was rebuked for my intellectual waywardness by a priestly superior with, I fancied, a striking physical resemblance to the terrifying prelate in Ivan's fable. The hair stood up on the back of my neck."

The departure was painful. The new world he discovered on the other side of his crossing "wasn't in the slightest degree an original discovery," he says. "I simply bought the now-traditional narrative of modernity, hook, line and sinker. I still do, pretty much." But he was not quite ready to plunge without reserve into the counterculture of the time -- sex, drugs, rock and roll.

"I was, to an unusual degree, living in my head rather than my body," he says about the 1970s. "I had emerged from Opus Dei with virtually no friends, a conscious tendency to identify my life course with the trajectory of modernity, and an unconscious need to be a saint, apostle, missionary. And I had inherited from my working-class Italian family no middle-class expectations, ambitions, social skills, ego structures."

Instead, he says, "I read a lot and seethed with indignation at all forms of irrational authority or even conventional respectability. So I didn't take any constructive steps, like becoming a revolutionary or a radical academic.... In those days, it wasn't quite so weird not to be ascending some career ladder."

So he settled into a job that left him with time to think and write. And to deal with the possibility of eternal damnation -- something that can occasionally bedevil one part of the mind, even while the secular and modernist half retains its disbelief.

Somewhere in my study is a hefty folder containing, if not George Scialabba's complete oeuvre, then at least the bulk of it. After several years of reading and admiring his essays, I can testify that Divided Mind is a well-edited selection covering many of his abiding concerns. It ought to be of interest to anyone interested in the "fourth genre," as the essay is sometimes called. (The other three -- poetry, drama, and fiction -- get all the glory.)

As noted, the publisher seems to be avoiding crass commercialism (not to mention convenience to the reader) by keeping Divided Mind out of the usual online bookselling venues. You can order it from the address below for $13, however. That price includes shipping and handling.

11 Chestnut Street
Medford, MA  02155

Scott McLemee

Scott McLemee writes Intellectual Affairs each week. Suggestions and ideas for future columns are welcome.

Remember Baudrillard

A few days ago, I tried the thought experiment of pretending never to have read anything by Jean Baudrillard – instead trying to form an impression based only on media coverage following his death last week. And there was a lot more of it than I might have expected. The gist being that, to begin with, he was a major postmodernist thinker. Everyone agrees about that much, usually without attempting to define the term, which is probably for the best. It also seems that he invented virtual reality, or at least predicted it. He may have had something to do with YouTube as well, though his role in that regard is more ambiguous. But the really important thing is that he inspired the "Matrix" movie franchise.

A segment on National Public Radio included a short clip from the soundtrack in which Laurence Fishburne’s character Morpheus intones the Baudrillard catchphrase, “Welcome to the desert of the real.” The cover of Simulacra and Simulation -- in some ways his quintessential theoretical text, first published in a complete English translation by the University of Michigan Press in 1994 -- is shown in the first film. Furthermore, the Wachowski brothers, who wrote and directed the trilogy, made the book required reading for all the actors, including Keanu Reeves. (It is tempting to make a joke at this point, but we will all be better people for it if I don’t.)

There was more to Baudrillard than his role as the Marshall McLuhan of cyberculture. And yet I can’t really blame harried reporters for emphasizing the most blockbuster-ish dimensions of his influence. "The Matrix" was entertainment, not an educational filmstrip, and Baudrillard himself said that its take on his work “stemmed mostly from misunderstandings.” But its computer-generated imagery and narrative convolutions actually did a pretty decent job of conveying the feel, if not the argument, of Baudrillard’s work.

As he put it in an essay included in The Illusion of the End (Stanford University Press, 1994): “The acceleration of modernity, of technology, events and media, of all exchanges – economic, political, sexual – has propelled us to ‘escape velocity,’ with the result that we have flown free of the referential sphere of the real and of history.” You used to need digitalized special effects to project that notion. But I get the feeling of being “flown free of the referential sphere of the real and of history” a lot nowadays, especially while watching certain cable news programs.

Some of the coverage of Baudrillard’s death was baffled but vaguely respectful. Other commentary has been more hostile – though not always that much more deeply informed. A case in point would be an article by Canadian pundit Robert Fulford that appeared in The National Post on Saturday. A lazy diatribe, it feels like something kept in a drawer for the occasion of any French thinker’s death – with a few spots left blank, for details to be filled in per Google.

A tip-off to the generic nature of the piece is the line: “Strange as it seems, in the 1970s much of the Western world was ready to embrace him.” Here, Fulford can count on the prefab implication of a reference to that decade as a time of New Left-over radicalism and  countercultural indulgence. In fact Baudrillard was little known outside France until the 1980s, and even then he had a very small audience until late in the decade. The strong mood coming from most of Baudrillard’s work is that of bitter disappointment that oppositional social movements of earlier years had been neutralized – absorbed into academic bureaucracy and consumer society, with no reason to think that they would revive.

And if we are going to play the game of periodization-by-decade, well, it is perhaps worth mentioning that “much of the Western world was ready to embrace him" only after several years of watching Ronald Reagan -- a man whose anecdotes routinely confused his roles in motion pictures with actual experiences from his own life -- in a position of great power. The distinction between reality and simulation had been worn away quite a bit, by that point. Some of Baudrillard’s crazier flights of rhetoric were starting to sound more and more like apt descriptions of the actual.

Even then, it was by no means a matter of his work persuading university professors “that novels and poems had become irrelevant as subject matter for teaching and research,” as the macro setting for culture-war boilerplate on Fulford’s computer puts it.

Enthusiasm for Baudrillard’s work initially came from artists, writers, and sundry ne’er-do-wells in the cultural underground. The post-apocalyptic tone of his sentences, the science-fictionish quality of his concepts, resonated in ways that at least some people found creatively stimulating, whether or not they grasped his theories. (True confession: While still in my teens, I started writing a novel that opened with an epigraph from one of his books, simply because it sounded cool.)

Baudrillard’s work played no role whatever in the debates over “the canon” to which Fulford alludes. But he was, in a different sense, the most literary of theorists. He translated Bertolt Brecht, among other German authors, into French. Some of his earliest writings were critical articles on the fiction of William Styron and Italo Calvino. In 1978, he published a volume of poems. And a large portion of his output clearly belongs to the literary tradition of the aphorism and the “fragment” (not an unfinished work, but a very dense and compact form of essay). These are things you notice if you actually read Baudrillard, rather than striking po-faced postures of concern about how literature should be “subject matter for teaching and research.”

Besides, it is simply untrue to say that Baudrillard’s reception among American academics was one of uncritical adulation. If there was a protracted lag between the appearance of his first books in the 1960s and the dawn of interest in his work among scholars here in the 1980s, that was not simply a matter of the delay in translation. For one thing, it was hard to know what to make of Baudrillard, and a lot of the initial reception was quite skeptical.

In the mid-1960s, he became a professor of sociology at the University of Paris at Nanterre, but the relationship of his work to the canon of social theory (let alone empirical research) is quite oblique. It’s also difficult to fit him into the history of philosophy as a discipline. Some of his work sounds like Marxist cultural theory, such as the material recently translated in Utopia Deferred: Writings for ‘Utopie’ 1967-1978 -- a collection distributed by MIT Press, a publisher known, not so coincidentally, for its books on avant-garde art. Still, there is plenty in Baudrillard’s work to irritate any Marxist (he grew profoundly cynical about the idea of social change, let alone socialism). And he delighted in baiting feminists with statements equating femininity with appearance, falsehood, and seduction.

Baudrillard was, in short, a provocateur. After a while that was all he was – or so it seemed to me, anyway. The rage of indignant editorialists notwithstanding, a lot of the response to Baudrillardisme amounted to treating him as a stimulating but dubious thinker: not so much a theorist as a prose-poet. A balanced and well-informed critical assessment of his work comes from Douglas Kellner, a professor of philosophy at UCLA, who wrote Jean Baudrillard: From Marxism to Postmodernism and Beyond (Stanford University Press, 1989), the first critical book on him in English. Kellner has provided me with the manuscript of a forthcoming essay on Baudrillard, which I quote here with permission.

“So far,” he writes, “no Baudrillardian school has emerged. His influence has been largely at the margins of a diverse number of disciplines ranging from social theory to philosophy to art history, thus it is difficult to gauge his impact on the mainstream of philosophy, or any specific academic discipline.”

At this point I’d interject that his questionable position within the disciplinary matrix (so to speak) tends to reinforce Baudrillard’s status as a minor literary figure, rather than an academic superstar. Kellner goes on to note that Baudrillard “ultimately goes beyond conventional philosophy and theory altogether into a new sphere and mode of writing that provides occasionally biting insights into contemporary social phenomena and provocative critiques of contemporary and classical thought. Yet he now appears in retrospect as a completely idiosyncratic thinker who went his own way....”

Not that Baudrillard exactly suffered for going his own way, however. A self-portrait of the postmodern intellectual as global jet-setter emerges in the five volumes of his notebook jottings published under the title “Cool Memories.” You get the sense that he spent a lot of time catching planes to far-flung speaking engagements – not to mention seeing various unnamed women out the door, once they had been given a practicum in the theories worked out in his book De la Séduction.

Many of the writings that appeared during the last two decades of his life simply recycled ideas from his early work. But celebrity is a full-time job.

One offer he did turn down was the chance to do a cameo in one of the Matrix sequels. (Instead, it was Cornel West who did his star turn onscreen as a gnomic philosophical figure.) Still, the appearance of Simulacra and Simulation in the first film greatly increased the book’s distribution, if not comprehension of its themes.

According to Mike Kehoe, the sales manager for the University of Michigan Press, which published the English translation, sales doubled in the year following “The Matrix.” The book had often been assigned in university courses. But those sales, too, jumped following the release of the film.

Rather than indulging my own half-baked quasi-Baudrillardian speculations about how his theories of media metaphysics were reabsorbed by the culture industry, I decided to bring the week’s musings to a close by finding out more about how the book itself ended up on screen.

“It wasn’t the usual sort of product placement,” LeAnn Fields, a senior executive editor for the press, told me by phone. “That is, we didn’t pay them. It was the other way around. The movie makers contacted us for permission. But they reserved the right to redesign the cover for it when it appeared onscreen.”

The familiar Michigan edition is a paperback with burgundy letters on a mostly white cover. “But in the film,” said Fields, “it becomes a dark green hardcover book. We were quite surprised by that, but I guess it’s understandable since it serves as a prop and a plot device, as much as anything.” (If memory serves, some kind of cyber-gizmo is concealed in it by Keanu Reeves.)

I asked Fields if the press had considered bringing out a special version of the book, simulating its simulation in a deluxe hardback edition. “No,” she said with a laugh, “I don’t think we ever considered that. Maybe we should have, though.”

Recommended Reading: Mark Poster's edition of Baudrillard's Selected Writings, originally published by Stanford University Press in 1988, is now available as a PDF document. The single best short overview of Baudrillard's work is Douglas Kellner's entry on him for the Stanford Encyclopedia of Philosophy. There is an International Journal of Baudrillard Studies that publishes both commentary on his work and translations of some of his shorter recent writings.

Scott McLemee

Requiem for a Heavyweight

Word that Richard Rorty was on his deathbed – that he had pancreatic cancer, the same disease that killed Jacques Derrida almost three years ago – reached me last month via someone who more or less made me swear not to say anything about it in public. The promise was easy enough to keep. But the news made reading various recent books by and about Rorty an awfully complicated enterprise. The interviews in Take Care of Freedom and Truth Will Take Care of Itself (Stanford University Press, 2006) and the fourth volume of Rorty’s collected papers, Philosophy as Cultural Politics (Cambridge University Press, 2007) are so bracingly quick-witted that it was very hard to think of them as his final books.

But the experience was not as lugubrious as it may sound. I found myself laughing aloud, and more than once, at Rorty’s consistent indifference to certain pieties and protocols. He was prone to outrageous statements delivered with a deadpan matter-of-factness that could be quite breathtaking. The man had chutzpah.

It’s a “desirable situation,” he told an interviewer, “not to have to worry about whether you are writing philosophy or literature. But, in American academic culture, that’s not possible, because you have to worry about what department you are in.”

The last volume of his collected papers contains a piece called “Grandeur, Profundity, and Finitude.” It opens with a statement sweeping enough to merit that title: “Philosophy occupies an important place in culture only when things seem to be falling apart – when long-held and widely cherished beliefs are threatened. At such periods, intellectuals reinterpret the past in terms of an imagined future. They offer suggestions about what can be preserved and what must be discarded.”

Then, a few lines later, a paradoxical note of rude modesty interrupts all the grandeur and profundity. “In the course of the 20th century," writes Rorty, "there were no crises that called forth new philosophical ideas.”

It's not that the century was peaceful or crisis-free, by any means. But philosophers had less to do with responding to troubles than they once did. And that, for Rorty, is a good thing, or at least not a bad one – a sign that we are becoming less intoxicated by philosophy itself, more able to face crises at the level (social, economic, political, etc.) at which they actually present themselves. We may yet be able to accept, he writes, “that each generation will solve old problems only by creating new ones, that our descendants will look back on much that we have done with incredulous contempt, and that progress towards greater justice and freedom is neither inevitable nor impossible.”

Nothing in such statements is new, of course. They are the old familiar Rorty themes. The final books aren’t groundbreaking. But neither was there anything routine or merely contrarian about the way Rorty continued to challenge the boundaries within the humanities, or the frontier between theoretical discussion and public conversation. It is hard to imagine anyone taking his place.

An unexpected and unintentional sign of his influence recently came my way in the form of an old essay from The Journal of American History. It was there that David A. Hollinger, now chair of the department of history at the University of California at Berkeley, published a long essay called “The Problem of Pragmatism in American History.”

It appeared in 1980. And as of that year, Hollinger declared, it was obvious that “‘pragmatism’ is a concept most American historians have proved that they can get along without. Some non-historians may continue to believe that pragmatism is a distinctive contribution of America to modern civilization and somehow emblematic of America, but few scholarly energies are devoted to the exploration or even the assertion of this belief.”

Almost as an afterthought, Hollinger did mention that Richard Rorty had recently addressed the work of John Dewey from a “vividly contemporary” angle. But this seemed to be a marginal exception to the rule. “If pragmatism has a future,” concluded Hollinger in 1980, “it will probably look very different from the past, and the two may not even share a name.”

Seldom has a comment about the contemporary state of the humanities ever been overtaken by events so quickly and so thoroughly. Rorty’s Philosophy and the Mirror of Nature (Princeton University Press, 1979) had just been published, and he was finishing the last of the essays to appear in Consequences of Pragmatism (University of Minnesota Press, 1982).

It is not that the revival was purely Rorty's doing; some version of it might have unfolded even without his efforts. In such matters, the pendulum does tend to swing.

But Rorty's suggestion that John Dewey, Martin Heidegger, and Ludwig Wittgenstein were the three major philosophers of the century, and should be discussed together -- this was counterintuitive, to put it mildly. It created excitement that blazed across disciplinary boundaries, and even carried pragmatism out of the provinces and into international conversation. I'm not sure how long Hollinger's point that pragmatism was disappearing from textbooks on American intellectual history held true. But scholarship on the original pragmatists was growing within a few years, and anyone trying to catch up with the historiography now will soon find his or her eyeballs sorely tested.

In 1998, Morris Dickstein, a senior fellow at the City University of New York Graduate Center, edited a collection of papers called The Revival of Pragmatism: New Essays on Social Thought, Law, and Culture (Duke University Press) -- one of the contributors to it being, no surprise, Richard Rorty. “I’m really grieved,” he told me on Monday. "Rorty evolved from a philosopher into a mensch.... His respect for his critics, without yielding much ground to them, went well with his complete lack of pretension as a person.”

In an e-mail note, he offered an overview of Rorty that was sympathetic though not uncritical.

“To my mind," Dickstein wrote, "he was the only intellectual who gave postmodern relativism a plausible cast, and he was certainly the only one who combined it with Dissent-style social democratic politics. He admired Derrida and Davidson, Irving Howe and Harold Bloom, and told philosophers to start reading literary criticism. His turn from analytic philosophy to his own brand of pragmatism was a seminal moment in modern cultural discourse, especially because his neopragmatism was rooted in the 'linguistic turn' of analytic philosophy. His role in the Dewey revival was tremendously influential even though Dewey scholars universally felt that it was his own construction. His influence on younger intellectuals like Louis Menand and David Bromwich was very great and, to his credit, he earned the undying enmity of hard leftists who made him a bugaboo."

The philosopher "had a blind side when it came to religion," continued Dickstein, "and he tended to think of science as yet another religion, with its faith in empirical objectivity. But it's impossible to write about issues of truth or objectivity today without somehow bouncing off his work, as Simon Blackburn and Bernard Williams both did in their very good books on the subject. I liked him personally: he was generous with his time and always civil with opponents.”

A recent essay discussing Rorty challenges the idea that Rorty “had a blind side when it came to religion.” Writing in Dissent, Casey Nelson Blake, a professor of history and American studies at Columbia University, notes that Rorty “in recent years stepped back from his early atheist pronouncements, describing his current position as ‘anti-clerical,’ and he has begun to explore, with increasing sympathy and insight, the social Christianity that his grandfather Walter Rauschenbusch championed a century ago.”

Blake quotes a comment by Rorty from The Future of Religion, an exchange with the Catholic philosopher Gianni Vattimo that Columbia University Press published in 2005. (It comes out in paperback this summer.)

“My sense of the holy,” wrote Rorty, “insofar as I have one, is bound up with the hope that someday, any millennium now, my remote descendants will live in a global civilization in which love is pretty much the only law. In such a society, communication would be domination-free, class and caste would be unknown, hierarchy would be a matter of temporary pragmatic convenience, and power would be entirely at the disposal of the free agreement of a literate and well-educated electorate.”

I'm not sure whether that counts as a religious vision, by most standards. But it certainly qualifies as something that requires a lot of faith.

Two items of great interest came to my attention too late to include in this column. One is the final interview with Rorty, conducted by Danny Postel just before the philosopher's death. The other is a tribute to Rorty by Jürgen Habermas.

Scott McLemee

If Not Religion, What?

In a variety of arenas, from politics to high schools, from colleges to the military, Americans argue as though the proper face-to-face discussion in our society ought to be between religion and science. This is a misunderstanding of the taxonomy of thought. Religion and science are in different families on different tracks: science deals with is vs. isn’t and religion, to the extent that it relates to daily life, deals with should vs. shouldn’t.

These are fundamentally different trains. They may hoot at each other in passing, and many people attempt to switch them onto the same track (mainly in order to damage science), but this is an act of the desperate, not the thoughtful.

It is true that a portion of religious hooting has to do with is vs. isn’t questions, in the arena of creationism and its ancillary arguments. However, this set of arguments, important as it might be for some religious people, is not important to a great many (especially outside certain Protestant variants), while the moral goals and effects of religious belief are a far more common and widespread concern among many faiths. I was raised in Quaker meeting, where we had a saying: Be too busy following the good example of Jesus to argue about his metaphysical nature.

Until recently, most scientists didn’t bother trying to fight with religion; for the most part they ignored it or practiced their own faiths. However, in recent years Carl Sagan, Richard Dawkins, Daniel Dennett and Sam Harris have decided to enter the ring and fight religion face to face. The results have been mixed. I have read books by all of these authors on this subject, as well as the interesting 2007 blog exchange between Harris and Andrew Sullivan, one of the best writers active today and a practicing Catholic, and it is clear that a great deal of energy is being expended firing heavy ordnance into black holes with no likelihood of much effect.

The problem that the scientific horsemen face is that theirs is the language of is/isn’t. Their opponents (mostly Christians but by implication observant Jews and Muslims as well) don’t use the word “is” to mean the same thing. To a religious person, God is and that’s where the discussion begins. To a nonreligious scientist, God may or may not be, and that is where the discussion begins.

The two sides, postulating only two for the moment, are each on spiral staircases, but the stairs wind around each other and never connect: this is the DNA of unmeeting thoughts. Only shouting across the gap happens, and the filters of meaning are not aligned. That is why I don’t put much faith, you’ll pardon the expression, in this flying wedge of scientific lancers to change very many minds.

Dennett’s approach is quite different from the others at a basic level; he views religious people as lab rats and wants to study why they squeak the way they do. That way of looking at the issue seems insulting at first but is more honest and practical in that it doesn’t really try to change minds that are not likely to change.

But these arguments are the wrong ones at a very basic level, especially for our schools and the colleges that train our teachers. The contrapuntal force to religion, that force which is in the same family, if a different genus, speaks the same language in different patterns regarding the same issues. It is not science, it is philosophy. That is what our teachers need to understand, and this distinction is the one in which education colleges should train them.

Those of us who acknowledge the factual world of science as genuine and reject the idea of basing moral and “should” questions in the teachings of religion are left seeking an alternate source for sound guidance. Our own judgment based in experience is a strong basic source. The most likely source, the ‘respectable’ source with sound academic underpinnings that can refine, inform and burnish our judgment, is philosophy in its more formal sense.

The word “philosophy” conjures in many minds the image of dense, dismal texts written by oil lamp with made-up words in foreign languages, and far beyond mortal ken. In fact, many writers on philosophy are quite capable of writing like human beings; some of their books are noted below.

When we introduce more religious studies into our K-12 schools, as we must if people are ever to understand each other’s lives, the family of learning into which they must go also contains philosophy. It is this conversation, between the varieties of religious outlooks and their moral conclusions, and the same questions discussed by major philosophers, that needs to happen.

Philosophy is not all a dense, opaque slurry of incomprehensible language. Some excellent basic books are available that any reasonably willing reader can comprehend and enjoy. Simon Blackburn’s Think, Robert Solomon and Kathleen Higgins’ A Passion for Wisdom and Erik Wielenberg’s Value and Virtue in a Godless Universe are some recent examples.

An older text providing a readable commentary on related issues is John Jay Chapman’s Religion and Letters, still in print in his Collected Works but hard to find in the original single volume. Chapman wrote of changes in our school system:

“It is familiarity with greatness that we need—an early and first-hand acquaintance with the thinkers of the world, whether their mode of thought was music or marble or canvas or language. Their meaning is not easy to come at, but in so far as it reaches us it will transform us. A strange thing has occurred in America. I am not sure that it has ever occurred before. The teachers wish to make learning easy. They desire to prepare and peptonize and sweeten the food. Their little books are soft biscuits for weak teeth, easy reading on great subjects, but these books are filled with a pervading error: they contain a subtle perversion of education. Learning is not easy, but hard: culture is severe.”

This, published in 1910, is remarkably relevant to education at all levels today. The idea that philosophy is too hard for high school students, which I doubt, simply means that we need to expect more of students all through K-12. Many of them would thank us.

Paul Kurtz’s Affirmations and my brother John Contreras’s Gathering Joy are interesting “guidebooks” that in effect apply philosophical themes in an informal way to people’s real lives. There are also somewhat more academic books that integrate what amount to philosophical views into daily life such as Michael Lynch’s True to Life: Why Truth Matters, physicist Alan Lightman’s A Sense of The Mysterious and the theologian John O’Donohue’s Beauty: The Invisible Embrace.

Some of these are denser than others and not all are suited for public schools, but the ideas they discuss are often the same ideas discussed in the context of religions, and sometimes with similar language. It is this great weave of concepts that our students should be exposed to, the continuum of philosophical thought blended with the best that different religions have to offer.

The shoulds and shouldn’ts that are most important to the future of our society need to be discussed in colleges, schools and homes, and the way to accomplish this is to bring religions and philosophies back to life as the yin and yang of right and wrong. That is the great conversation that we are not having.

Alan Contreras

Alan L. Contreras has been administrator of the Oregon Office of Degree Authorization, a unit of the Oregon Student Assistance Commission, since 1999. His views do not necessarily represent those of the commission.

Becoming Richard Rorty

In the late 1940s, as Richard Rorty was finishing his undergraduate studies and considering a future as a professional philosopher, his parents began to worry about him. This is not surprising. Parents worry; and the parents of philosophers, perhaps especially. But just why Rorty's parents worried – well now, that part is surprising.

They were prominent left-wing journalists. His father, James, also had some minor reputation as a poet; and his mother, Winifred, had done important work on the sociology of race relations, besides trying her hand at fiction. In a letter, James Rorty speculated that going straight into graduate work might be something Richard would later regret. His son would do well to take some time “to discover yourself, possibly through a renewed attempt to release your own creative need: through writing, possibly through poetry....”

In short, becoming an academic philosopher sounded too practical.

Not to go overboard and claim that this is the defining moment of the philosopher’s life (Rosebud!). But surely it is the kind of experience that must somehow mark one’s deepest sense of priorities. How does that inner sense of self then shape a thinker’s work?

Neil Gross’s book Richard Rorty: The Making of an American Philosopher, to be published next month by University of Chicago Press, is not exactly a biography of its subject, who died last year. Rather, it is a study of how institutional forces shape an intellectual’s sense of personal identity, and vice versa. (Gross is currently in transit from Harvard University to the University of British Columbia, where as of this summer he will be an associate professor of sociology.)

Influenced by recent work in sociological theory – but with one eye constantly on the archive of personal correspondence, unpublished writings, and departmental memoranda – Gross reconstructs how Rorty’s interests and intellectual commitments developed within the disciplinary matrix of academic philosophy. He takes the story up through the transformative and sui generis work of Rorty’s middle years, Philosophy and the Mirror of Nature (1979) and Consequences of Pragmatism (1982).

This includes a look at Rorty’s complicated and unhappy relationship with his colleagues at Princeton University in the 1970s. “I find it a bit terrifying,” he wrote in a letter at the time, “that we keep turning out Ph.D.'s who quite seriously conceive of philosophy as a discipline in which one does not read anything written before 1970, except for the purposes of passing odd examinations.” Nor did it help that Rorty felt other professors were taking his ex-wife’s side in their divorce. (What’s the difference between departmental gossip and cultural history? In this case, about 30 years.)

Gross has written the most readable of monographs; and the chapter titled “The Theory of Intellectual Self-Concept” should be of interest even to scholars who aren’t especially concerned with Rorty’s long interdisciplinary shadow. I interviewed Gross recently by e-mail, just before he headed off to Canada. The transcript of our discussion follows.

Q: You identify your work on Richard Rorty not as a biography, or even as a work of intellectual history, but rather as an empirical case study in "the new sociology of ideas." What is that? What tools does a sociologist bring to the job that an intellectual historian wouldn't?

A: Sociology is a diverse field, but if I had to offer a generalization, I'd say that most sociologists these days aim to identify the often hidden social mechanisms, or cascading causal processes, that help to explain interesting, important, or counterintuitive outcomes or events in the social world. How and why do some movements for social change succeed in realizing their goals when others fail to get off the ground? Why isn't there more social mobility? What exactly is the connection between neighborhood poverty and crime? Few sociologists think anymore that universal, law-like answers to such questions can be found, but they do think it possible to isolate the role played by more or less general mechanisms.

Sociologists of ideas are interested in identifying the hidden social processes that can help explain the content of intellectuals' ideas and account for patterns in the dissemination of those ideas. My book attempts to make a theoretical contribution to this subfield. I challenge the approaches taken by two of the leading figures in the area -- Pierre Bourdieu and Randall Collins -- and propose a new approach. I think that the best sociological theory, however, has strong empirical grounding, so I decided to develop my theoretical contribution and illustrate its value by deeply immersing myself in an empirical case: the development of the main lines of Richard Rorty's philosophy.

This entailed doing the same kind of work an intellectual historian would do: digging through archives, reading through Rorty's correspondence and unpublished manuscripts (to which he granted access), and of course trying to get a grasp on the diversity of Rorty's intellectual output for the period in question. This work is reflected in the first half of my book, which reads like an intellectual biography.

But the book isn't intended as a biography, and in the second half I try to show that thinking about Rorty's life and career in terms of the hidden social mechanisms at play offers unique explanatory leverage. I love intellectual history, but many intellectual historians are allergic to any effort at generalization. One of my aims in this book is to show them that they needn't be. The old sociology of knowledge may have been terribly reductive -- ideas are an expression of class interests or reflective of dominant cultural tendencies, etc., etc. -- but the sociology of ideas today offers much more fine-grained theoretical tools.

I only cover Rorty's life up until 1982 because by then most of the main lines of his philosophy had already been developed. After that point, he becomes for the sociologist of ideas a different kind of empirical case: an intellectual superstar and bête noire of American philosophy. It would be fascinating to write about the social processes involved with this, but that was too much for one book.

Q: This might seem like a chicken-or-egg question.... Did an interest in Rorty lead you toward this sociological approach, or vice versa?

A: When I was a graduate student in the 1990s I read quite a bit of Rorty's work, and found it both interesting and frustrating. But my interest in the sociology of ideas developed independently. For me, Rorty is just a case, and I remain completely agnostic in the book about the value of his philosophy.

Q: But isn't there something already a little bit pragmatism-minded about analyzing a philosopher's work in sociological terms?

A: It's certainly the case that there are affinities between pragmatism and the sociology of knowledge. But I'm not trying to advance any kind of philosophical theory of knowledge, pragmatist or otherwise. I believe, like every other sociologist of ideas, that intellectuals are social actors and that their thought is systematically shaped by their social experiences. Whether that has any philosophical implications is best left to philosophers to figure out.

I do think that the classical pragmatist philosophers Charles Peirce, William James, John Dewey, and George Herbert Mead had it right in their account of human social action, as Hans Joas has persuasively argued. Some of their insights do make their way into my analysis.

Q: A common account of Rorty's career has him starting out as an analytic philosopher who then undertakes a kind of "turn to pragmatism" in the 1970s, thereby reviving interest in a whole current of American philosophy that had become a preserve of specialists. Your telling is different. What is the biggest misconception embedded in that more familiar thumbnail version?

A: Rorty didn't start out as an analytic philosopher. His master's thesis at Chicago was on Whitehead's metaphysics, and while his dissertation at Yale on potentiality was appreciative in part of analytic contributions, one of its major aims was to show how much value there might be in dialogue between analytic and non-analytic approaches. As Bruce Kuklick has shown, dialogue between various philosophical traditions, and pluralism, were watchwords of the Yale department, and Rorty was quite taken with these metaphilosophical ideals.

Rorty only became seriously committed to the analytic enterprise after graduate school while teaching at Wellesley, his first job. This conversion was directly related to his interest in moving up in the academic hierarchy to an assistant professorship in a top ranked graduate program. At nearly all such programs at the time, analytic philosophy had come to rule the roost. This was very much the case at Princeton, which hired him away from Wellesley, and his commitment to analytic philosophy solidified even more during the years when he sought tenure there.

But the conventional account is flawed in another way as well. It turns out that Rorty read a lot of pragmatism at Yale -- Peirce in particular -- and one of the things that characterized his earliest analytic contributions was a consistent interest in pointing out convergences and overlaps between pragmatism and certain recent developments in analytic thought. So when he finally started calling himself a pragmatist later in his career, it was in many respects a return to a tradition with which he had been familiar from the start, however much he might have come to interpret it differently than specialists in American philosophy would.

Q: You argue for the value of understanding what you call "the intellectual self-concept." Would you explain that idea? What does it permit us to grasp about Rorty that we might not otherwise?

A: As I've already suggested, my goal in this book was not simply to write a biography of Rorty, but also to make a theoretical contribution to the sociology of ideas. Surprising as it might sound to some, the leading figures in this area today -- to my mind Pierre Bourdieu and Randall Collins -- have tended to depict intellectuals as strategic actors who develop their ideas and make career plans and choices with an eye toward accumulating intellectual status and prestige. That kind of depiction naturally raises the ire of those who see intellectual pursuits as more lofty endeavors -- it was not for nothing that Bourdieu described his study, Homo Academicus, as a "book for burning."

I argue that intellectuals do in fact behave strategically much of the time, but that another important factor influencing their lines of activity is the specific "intellectual self-concept" to which they come to cleave. By this I mean the highly specific narratives of intellectual selfhood that knowledge producers may carry around with them -- narratives that characterize them as intellectuals of such and such a type.

In Rorty's case, one of the intellectual self-concepts that came to be terribly important to him was that of a "leftist American patriot." I argue that intellectual self-concepts, thus understood, are important in at least two respects: they may influence the kinds of strategic choices thinkers make (for example, shaping the nature of professional ambition), and they may also directly influence lines of intellectual activity. The growing salience to Rorty of his self-understood identity as a leftist American patriot, for example, was one of the factors that led him back toward pragmatism in the late 1970s and beyond -- or so I claim.

I develop in the book an account of how the intellectual self-concepts of thinkers form and change over the life course. Rorty took on the leftist American patriot self-concept pretty directly from his parents, and it became reactivated in the 1970s in response to political and cultural developments and also their deaths. My argument is that the sociology of ideas would do well to incorporate the notion of intellectual self-concept into its theoretical toolkit.

But I must say that my ambitions extend beyond this. Bourdieu and Collins are not just sociologists of ideas, but general sociological theorists who happened to have applied their models to intellectual life. Implicit in my respectful criticisms of them is a call to supplement and revise their general models as well, and to fold notions of identity and subjectivity back into sociological theory -- conceptualized in the specific way I lay out, which eclectically draws on Anglo-American social psychology, theories of narrative identity, the ego psychology of Erikson, and other sources.

Q: The philosopher's father, James Rorty, is reasonably well-known to cultural historians as one of the left-wing anti-Communist public intellectuals of the mid-20th century. Your account of his life is interesting, but I found a lot of it rather familiar. By contrast, the chapter on Richard Rorty's mother was a revelation. Winifred Rorty was clearly a remarkable person, and the question of her influence on her son seems very rich. What was it like to rediscover someone whose career might otherwise be completely forgotten?

A: It's well known that Rorty's mother, Winifred, was the daughter of social gospel theologian Walter Rauschenbusch. What's less well known is that she was a research assistant to the sociologist Robert Park at the University of Chicago. Winifred never entered academe -- she didn't formally enroll as a graduate student at Chicago, and in any event the opportunities for women on the academic labor market at the time were severely limited. Instead, after she left Chicago she worked, like her husband James, as a freelance writer and journalist. Her specialties were race riots and fashion. Very late in her life she wrote a biography of Park.

I ended up devoting one chapter each to Winifred and James because their influence on their son was profound, but also because theirs were fascinating stories that hadn't really been told before. Certainly there is no shortage of scholarship on the New York intellectuals -- a group of which they were loosely a part -- but both led remarkable and distinctive intellectual and writerly lives.

In the case of Rorty's mother I didn't set out to write about someone whose career might otherwise be forgotten, but I can say that it was a great pleasure to immerse myself in her papers and writings. Too often intellectual historians and sociologists of ideas alike focus their attention on the most prominent and "successful" thinkers, but feminist historians, among others, have helpfully reminded us that the stories of those whose careers have been stymied or blocked by discrimination or other factors can be every bit as rich and worth recovering.

Q: Suppose someone were persuaded to pursue research into Rorty's life and work after 1982, working from within the approach you call the "new sociology of ideas." What questions and problems concerning that period would you most want to see studied? What manner of archival resources or other documentary material would be most important for understanding the later phase of Rorty's career?

A: There are lots of questions about this period in Rorty's life that are worth pursuing, but I think one of the most important would be to figure out why Rorty struck a chord with so many people and was vehemently hated by others, and what role exactly his scholarship played in the more general revival of interest in classical American pragmatism that has taken place over the past twenty years or so. My book focuses primarily on the development of ideas, whereas this would be a question of diffusion and reception. I don't think it's possible to give an answer to the question without doing a lot of careful empirical research.

One would want to know about the state of the various intellectual fields in which Rorty's work was received; about the self-concepts and strategic concerns of those who responded to him positively or negatively; about the role of intellectual brokers who helped to champion Rorty and translate his ideas into particular disciplinary idioms; about the availability of resources for pragmatist scholarship; about the role played by scholarly organizations, such as the Society for the Advancement of American Philosophy, in doing the kind of organizational work necessary to lay the groundwork for an intellectual revival; and so on. Here again one might use Rorty as a window into a more general social phenomenon: the emergence of what Scott Frickel and I have called "scientific/intellectual movements," in this case a movement aimed at reviving an intellectual tradition that had long been seen as moribund.

Q: Rorty gave you access to his papers. The notes to your book cite e-mail exchanges you had with him. Any personal impressions that stick with you, beyond what you've had to say in the monographic format?

A: Although Dick and I never formed a friendship, he wrote to me not long after his diagnosis to tell me about it, and to suggest that if I had any unanswered factual questions about his life, I might want to consider asking them of him sooner rather than later.

Some might see this as reflecting a concern to manage his reputation, but he read drafts of the book and -- without commenting on the plausibility of my thesis -- never asked me to change a thing. I think what it shows instead is that he was an incredibly generous, kind, and decent man, even in his final hours; he didn't want to leave a young scholar in the lurch.

Whatever one thinks of Rorty's philosophy, those are qualities all intellectuals could stand to emulate, and live by even in the midst of intense disagreement.

Scott McLemee

The Playboy Philosopher

When introduced to American audiences from the podium or by TV interviewers, Bernard-Henri Lévy is always called a philosopher -- a label that says less about the substance of his work than the efficiency of modern public-relations techniques. Like Sartre, he is a graduate of the École Normale Supérieure. Unlike Sartre, he was formidably good-looking in his prime, and is aging gracefully. His haircuts are as thoughtful as his books are stylish. And in the spirit of Andy Warhol and Paris Hilton, Lévy has always grasped -- more profoundly, or at least more profitably, than any mere philosopher could -- an important truth: the media must constantly be fed.

Ten years ago, Pierre Bourdieu coined a term for certain French intellectuals whose writings counted for less than their TV appearances. He called them “les fast-thinkers.” Everyone knew who the sociologist had in mind as the prototype of this phenomenon. Long before the American public got used to hearing references to J-Lo and K-Fed, the French press had dubbed him BHL. His books, movies, TV appearances, political interventions, and romances have been a staple of the French media for more than three decades. But only in the past five years has he become as much a fixture in the U.S. media as in the French.

His latest opuscule -- called in translation Left in Dark Times -- has just appeared from Random House. Writing about it elsewhere, I failed to note something peculiar about this development. How is it that a volume of afterthoughts on last year’s French presidential election should appear -- in such short order, no less -- from a major commercial publisher in the United States?

It seems counterintuitive, and a matter for concern. Clearly it is time to reinvest in America’s fast-thinking infrastructure. Dependence on foreign sources of ideological methane is just too risky. Besides, as a couple of my far-flung correspondents have recently pointed out, the recent embrace of BHL by the American media is raising questions about just how gullible we really are.

Lauren Elkin, a Ph.D. candidate in English at CUNY Graduate Center and the Université de Paris VII, says that the very occasional links to BHL items on her blog tend to bring out the worst in her readers. One mention can be reliably predicted to yield 10 gripes.

“In Paris, it's just the done thing to bash BHL,” she tells me. “Recently I featured an awesome graphic that went along with a BHL piece on Sarah Palin in New York magazine -- an image of Palin getting bopped on the head with a baguette -- and I included a link to the NY mag article, because hey, I re-used their graphic, I owed them a link. The comments that followed amounted to taking the baguette and turning it on BHL!” (Well, at least it wasn’t a cream pie.)

Usually the expressions of exasperation are “all in good fun,” says Elkin. But one item at her blog -- linking to a BHL piece on Simone de Beauvoir -- provoked an exceptionally pompous display of aggravation from a French journalist.

“You and your fellow Americans,” he wrote, “should realize that BHL is not a philosopher but a clown and a buffoon. You want real French philosophy, read Derrida, Foucault, Badiou, Baudrillard, if you are a right winger, read Aron, but please forget about this pompous arrogant shmuck BHL and his unending and shameless self-promotion. As a Frenchman, I am ashamed of BHL.”

The notion that silly Americans are somehow responsible for Lévy’s prominence is a bit rich. By my estimate, his career has spanned more than a third of a century -- yet BHL, Inc., has had a fully staffed U.S. office for barely half a decade. (Note to Wikipedians: This is a figure of speech. No actual office exists, so far as I know.) And it is the work of a long, ill-spent day at the library to try to track down any discussion of his work by American intellectuals who take Lévy seriously as a philosopher. Our culture has its faults. This is not one of them.

“What really got me, as you can probably guess,” says Elkin, “was the ‘you Americans’ bit and the implication that as such we could not possibly tell Derrida from Aron, much less evaluate BHL for ourselves.” All the more galling, perhaps, given that Elkin has never concerned herself with BHL’s books. “I've been too busy reading Derrida and Foucault, so pat me on the head,” she told her blog’s interlocutor.

Given her own neglect of the playboy’s philosophy, Elkin says she “really can't comment on whether the bashing is appropriate.” But she suspects the strong feelings Lévy’s work provokes are a cultural phenomenon. “The French disdain for BHL is reflective of an inherent distaste for blatant self-promotion; as for the non-French who read my blog and write in with these comments, hating on BHL is as good a way as any to fit in.”

In an incisive review published a couple of years ago, Doug Ireland cited a critical analysis of BHL’s oeuvre, characterizing him as “a philosopher who’s never taught the subject in any university, a journalist who creates a cocktail mingling the true, the possible, and the totally false, a patch-work filmmaker, a writer without a real literary oeuvre....”

Yet Lévy swims in the main currents of European culture, and does not sink. If anything, he belongs on the short list of the world’s best-known intellectuals. How is that possible?

It seemed like a good question to pose to Arthur Goldhammer, a canny observer of French politics and culture who chairs the seminar for visiting scholars at the Center for European Studies at Harvard University. He responded to my inquiry with an e-mail note -- albeit one that amounted to a judicious essay on the mystery of BHL.

“How does he pull it off?” wrote Goldhammer. “First, it must be recognized that he's not a total fraud. Though a wretched scholar, he is neither stupid nor uneducated. His rhetoric, at least in French, has some of the old Normalien brilliance and flair. He had the wit to recognize before anyone else that a classic French role, that of the universal intellectual as moral conscience of the age, had become a media staple, creating a demand that a clever entrepreneur could exploit. He understood that it was no longer necessary first to prove one's mettle in some field of literature, art, or thought. I think that someone once said of Zsa Zsa Gabor that she was ‘famous for being famous.’ Lévy realized that one could be famous for being righteous, and that celebrity itself could establish a prima facie claim to righteousness.”

Righteous or not, BHL is certainly timely. His denunciations of Communism in the late 1970s were hardly original. But they appeared as the radical spirit of May ‘68 was exhausting itself -- and just before the Soviet invasion of Afghanistan and the Chinese party’s own denunciations of late-period Maoism. BHL developed a knack for showing up in war zones and sending out urgent dispatches. Last month he did a toe-touch in Georgia following the Russian invasion -- filing an article that was impassioned, if, it seems, imaginative.

“He chooses his causes shrewdly,” continues Goldhammer. “He may not have been the first to divine the waning of revolutionary radicalism, but he made himself revisionism's publicist. He has a knack for placing himself at the center of any scene and for depicting his presence as if it were what rendered the scene important.... His critics keep him constantly in the limelight and actually amplify his voice, and why should a ‘philosopher’ of universal range stoop to respond to ‘pedants’ who trouble the clarity of his vision with murky points of detail?”

And so he has acquired a sort of power that survives all debunking. If the topic of BHL comes up at “a typical dinner party of Parisian intellectuals,” says Goldhammer, seven of the guests will be sarcastic. “But the eighth, enticed by the allure of making a brilliant defense of a lost cause, a venerable French oratorical tradition, will launch into an elaborate defense beginning, ‘Say what you will about the man, and I wouldn't contradict a word of it, but still you must admit that for the Chechens (or Bosnians or Georgians or boat people or insert your favorite cause here), he has not been without effect.’

“The French love their litotes,” Goldhammer continues (rhetoric lesson here), “and of course no one can say that BHL has been without effect, that he has probably done more good for someone somewhere than most of us, so the revilers are reduced to sheepish silence for fear of appearing heartless.”

The role of the intellectual as famous, full-time spokesman for the Universal is well-established in France. It began with Voltaire and culminated in Sartre, its last great exemplar. (Not that other philosophers have not emerged in the meantime, of course, but none has occupied quite the same position.) From time to time, Lévy has mourned the passing of this grand tradition, while hinting, not too subtly, that it lives on in him. Clearly there is a steady French market for his line in historical reenactments of intellectual engagement.

It seems surprising, though, to find the BHL brand suddenly being imported to these shores after years of neglect -- particularly during a decade when Francophobia has become a national sport.

But like the song says, there’s a thin line between love and hate. Lévy has capitalized on American ambivalence towards France -- the potential of fascination to move from “-phobia” to “-philia” -- by performing a certain role. He is, in effect, the simulacrum of Sartre, minus the anti-imperialism and neo-Marxism.

“Lévy plays on both registers,” explains Goldhammer. “At the height of anti-French feeling in the U.S., in the period just before the Iraq War, he positioned himself as a philo-American. He made himself the avenger of Daniel Pearl. Arrogant he might be, airily infuriating in just the right way to confirm the philistine's loathing of the abstract and abstruse that philosophy is taken to embody, and yet there he was, pouring scorn on "Islamofascism" and touring the country with the New Yorker reader's nonpareil Francophile, Adam Gopnik.... Lévy chose his moment well. He insinuated himself into the American subconscious by playing against type.”

This is savvy. Also, convenient for journalists. BHL has now become “the respectable media's go-to guy whenever a French opinion is needed.” Goldhammer cites a recent article in The New York Times in which Lévy, like the presidents of Pakistan and Chile, was quoted “as an exemplar of what ‘the world’ wants to know from the next American president.” Get in the right Rolodex, it seems, and you are the embodiment of cosmopolitanism itself.

“To those familiar with the sad nullity of Lévy's work,” says Goldhammer, “this is infuriating, but to protest is only to perpetuate the folly. His celebrity is a bubble that must be allowed to burst, but we can be sure that when it does, no crisis will ensue.”

Scott McLemee

'Examined Life'

Wandering around the Lyceum with an entourage, Aristotle would hold forth on his conception of the universe: one in which God is the Unmoved Mover, while all else shuttles between the potential and the actual. Part of what we know about Aristotle’s thought comes via notes from those lectures. (You picture a student scribbling furiously as the philosopher pauses to dislodge a stone from his sandal.)

This picture does not square with the usual notion of intellectual activity, which is a cross between Descartes’s self-portrait (the cogito talking to itself in a warm room) and Rodin’s nude dude. But there is a counter-tradition in philosophy -- one which takes thought to be, in essence, shambolic.

“A sedentary life is the real sin against the Holy Spirit,” says Nietzsche, blaspheming tongue not entirely in cheek. “Only those thoughts that come by walking have any value.” And more recently, Martha Nussbaum has insisted that running is an organic part of the philosopher’s professional ethos: “Lawyers tend to be tennis and squash players -- maybe it's the competitive element -- but philosophers tend to be runners, perhaps because of the loner, contemplative quality."

All of this by way of introduction to "Examined Life," the latest documentary by Astra Taylor, whose Žižek! now turns up on the Sundance Channel from time to time. Taylor’s camera follows nine thinkers of various disciplinary extractions -- here’s a list -- as they walk on the street, ride in the backseat of a car, paddle around the pond in New York’s Central Park, and haul luggage around an international airport. They speak for about 10 minutes each -- sometimes in dialogue with Taylor or one another, sometimes in peripatetic soliloquy.

The trailer for "Examined Life" is now up on YouTube, though viewers should be warned against trying to form an impression of the film from it. "Examined Life" is more than an anthology of short lectures by famous talking heads. Taylor's intelligence as a documentarian extends to both content and form. The film is put together with a subtlety and wit that two minutes of highlights cannot capture. And she has not only scouted interesting or appropriate settings for her subjects (Anthony Appiah discussing cosmopolitanism in an airport, Slavoj Žižek challenging liberal environmentalism in a trash dump) but found common themes and points of implicit conflict among them.

But then Taylor takes another step. What might seem like a gimmick (the “philosopher-in-the-street” interview format, as I called it when blogging about the trailer last week) becomes a way to reflect on questions of context, meaning, and mobility. She does not explicitly mention Aristotle and Nietzsche, but the allusions are there, even so. Confirmation of this comes in her introduction to a book that The New Press will publish this June, based on interviews that Taylor did for the film. There, she cites another inspiration for her approach: Rousseau’s Reveries of a Solitary Walker.

One of the figures onscreen is her sister Sunaura Taylor, an artist and writer -- shown zipping through downtown San Francisco in her wheelchair with the queer theorist Judith Butler. They discuss what it means for a disabled person to “go for a walk” (and to insist on using that language even when it involves a motor). I don’t dare try to paraphrase the exchange. The segment, which comes near the end of "Examined Life," is beautiful, fascinating, and transformative. It changes the context of all that has gone before in the film, and leaves the everyday world looking strange and new.

Photo: Zeitgeist Films
Sunaura Taylor (left) and Judith Butler, in "Examined Life"

A couple of years ago, Tamara Chaplin, an assistant professor of history at the University of Illinois at Urbana-Champaign, published an absorbing book called Turning on the Mind: French Philosophers on Television (University of Chicago Press). It analyzed more than half a century of efforts to put abstract thought on screen. For the United States, no such monograph is necessary or, indeed, possible. The subject could be covered in a treatise the size of a take-out menu for a Chinese restaurant.

In short, Astra Taylor seems to be inventing her own genre of documentary film -- which means she is making it up as she goes along. After pestering her for an early DVD of "Examined Life," I followed up with a string of questions by e-mail about how she conceived the idea and put together the finished product.

When approaching potential participants, she described the project as “a feature length film consisting of a series of short contemplative 'walks' with world-renowned thinkers from various branches of philosophy." The formal challenge was to avoid an overly didactic approach. Getting thinkers out into public space was only part of this; it was also a matter of mode of address.

“When I first conceived the project,” says Taylor, “it was very clear to me that I wanted to try to make viewers feel like they were being engaged directly, or that they were part of a conversation even if there's only one person speaking on screen. So while half the subjects are doing direct address to the camera, the other half are actually talking to me (or in the case of Judith Butler and Sunaura Taylor, to each other). I didn't want the audience to feel lectured at, but this was difficult since the movie is monologue driven. The way the movie is directed and edited tries to make some space for viewers to insert themselves, both into the discourse and the environment.”

How did she decide who should appear on screen? “I looked for subjects whose work I value,” she responded, “who have made some sort of effort to speak to an audience outside of the academy, who focus on ethical issues, who seemed like they may enjoy the experience. The final requirement was absolutely essential. If the act of filming isn't fun, isn't a pleasure of some kind, the finished project will feel burdensome, stagnant. Slavoj Žižek, being such a movie buff, certainly brought his cinematic enthusiasm to the making of Žižek! and that was truly invaluable. I was pleasantly surprised by the energy, playfulness, and sense of spectacle of everyone who appears in "Examined Life".... It was important to me to achieve a certain diversity, not only in terms of intellectual outlook but also in regards to race, gender, age, ability, et cetera. But at a certain point it was just an intuitive sense that the cast made sense and that they would bounce well off one another.”

Kwame Anthony Appiah, in "Examined Life"

Everyone approached expressed a willingness to participate, but things did not always work out. The Marxist cultural critic Terry Eagleton was busy, and far away. Charles Taylor broke his arm. (I resist the temptation to ask if he didn’t just sprain it from trying to pick up a stack of his own ever-longer books.)

Taylor filmed “between 90 minutes and four hours of talking footage for each philosopher," she says, "shot over one or two days.” It then took “about two weeks to get a rough cut of each individual walk,” followed by a couple of months of work to shape the larger film. That meant “sequencing and refining, trying to tease out and highlight recurring themes, and also to figure out some sort of ‘narrative arc’ in a movie that lacks plot or chronology. How to make viewers feel they've been on a journey when there's really no beginning, middle, or end to the tale?”

The result feels like a cinematic essay, instead of an educational filmstrip. It is the product of a sustained engagement with the figures onscreen, an effort to elucidate what they think and how they argue.

“I always had a bunch of prepared questions or talking points that I thought would guarantee usable material,” Taylor says. “Occasionally we worked out the brief argument we wanted to make in advance, though just as often the interview was free-floating, jumping from topic to topic, the central idea to be discovered in the editing room. Obviously a lot of material didn't make it into the final movie, which is why I decided to do the companion book.”

The project, she writes in the introduction to that volume, “doesn’t wrap everything up or pretend to provide a definitive answer to the difficult issues addressed in it; after all, if our answers were incontrovertible, we wouldn’t need philosophy.... If this effort inspires some people to pause and ponder how they come to hold the beliefs they do, to question the ethical assumptions and preconceptions they take for granted, to reconsider their responsibilities to others, or to see a problem in a new way, I’ll be content.”

(A list of playdates for "Examined Life" is available online.)

Scott McLemee

The Relevance of the Humanities

The deepening economic crisis has triggered a new wave of budget cuts and hiring freezes at America’s universities. Retrenchment is today’s watchword. For scholars in the humanities, arts and social sciences, the economic downturn will only exacerbate existing funding shortages. Even in more prosperous times, funding for such research has been scaled back and scholars besieged by questions concerning the relevance of their enterprise, whether measured by social impact, economic value or other sometimes misapplied benchmarks of utility.

Public funding gravitates towards scientific and medical research, with its more readily appreciated and easily discerned social benefits. In Britain, the fiscal plight of the arts and humanities is so dire that the Institute of Ideas recently sponsored a debate at King’s College London that directly addressed the question, “Do the arts have to re-brand themselves as useful to justify public money?”

In addition to decrying the rising tide of philistinism, some scholars might also be tempted to agree with Stanley Fish, who infamously asserted that the humanities “cannot be justified except in relation to the pleasure they give to those who enjoy them.” Fish rejected the notion that the humanities can be validated by some standard external to them. He dismissed as wrong-headed “measures like increased economic productivity, or the fashioning of an informed citizenry, or the sharpening of moral perception, or the lessening of prejudice and discrimination.”

There is little doubt that the value of the humanities and social sciences far outstrips any simple measurement. As universities and national funding bodies face painful financial decisions and are forced to prioritize the allocation of scarce resources, however, scholars must guard against such complacency. Instead, I argue, scholars in the social sciences, arts, and humanities should consider seriously how the often underestimated value of their teaching and research could be further justified to the wider public through substantive contributions to today’s most pressing policy questions.

This present moment is a propitious one for reconsidering the function of academic scholarship in public life. The election of a new president brings with it an unprecedented opportunity for scholars in the humanities and social sciences. The meltdown of the financial markets has focused public attention on additional challenges of massive proportions, including the fading of American primacy and the swift rise of a polycentric world.

Confronting the palpable prospect of American decline will demand contributions from all sectors of society, including the universities, the nation’s greatest untapped resource. According to the Times Higher Education Supplement’s recently released rankings, the U.S. boasts 13 of the world’s top 20 universities, and 36 U.S. institutions figure in the global top 100. How can scholars in the arts, humanities and social sciences make a difference at this crucial historical juncture? How can they demonstrate the public benefits of their specialist research and accumulated learning?

A report published by the British Academy in September contains some valuable guidance. It argues that the collaboration between government and university researchers in the social sciences and humanities must be bolstered. The report, “Punching Our Weight: the Humanities and Social Sciences in Public Policy Making,” emphasizes how expanded contact between government and humanities and social science researchers could improve the effectiveness of public programs. It recommends “incentivizing high quality public policy engagement.” It suggests that universities and public funding bodies should “encourage, assess and reward” scholars who interact with government. The British Academy study further hints that university promotion criteria, funding priorities, and even research agendas should be driven, at least in part, by the major challenges facing government.

The British Academy report acknowledges that “there is a risk that pressure to develop simplistic measures will eventually lead to harmful distortions in the quality of research,” but contends that the potential benefits outweigh the risks.

The report mentions several specific areas where researchers in the social sciences and humanities can improve policy design, implementation, and assessment. These include the social and economic challenges posed by globalization; innovative comprehensive measurements of human well-being; understanding and predicting human behavior; overcoming barriers to cross-cultural communication; and historical perspectives on contemporary policy problems.

The British Academy report offers insights that the U.S. government and American scholars could appropriate. It is not farfetched to imagine government-university collaboration on a wide range of crucial issues, including public transport infrastructure, early childhood education, green design, civil war mediation, food security, ethnic strife, poverty alleviation, city planning, and immigration reform. A broader national conversation to address the underlying causes of the present crisis is sorely needed. By putting their well-honed powers of perception and analysis to work in the public interest, scholars can demonstrate that learning and research deserve the public funding and esteem that have been waning in recent decades.

The active collaboration of scholars with government will be anathema to those who conceive of the university as a bulwark against the ever encroaching, nefarious influence of the state. The call for expanded university-government collaboration may provoke distasteful memories of the enlistment of academe in the service of the Cold War and the Vietnam War, a relationship which produced unedifying intellectual output and dreadfully compromised scholarship.

To some degree, then, skepticism toward the sort of government-university collaboration advocated here is fully warranted by the specter of the past. Moreover, the few recent efforts by the federal government to engage with researchers in the social sciences and humanities have not exactly inspired confidence.

The Pentagon’s newly launched Minerva Initiative, to say nothing of the Army’s much-criticized Human Terrain System, has generated a storm of controversy, mainly from those researchers who fear that scholarship will be placed in the service of war and counter-insurgency in Iraq and Afghanistan, producing ideologically distorted work.

Certainly, the Minerva Initiative’s areas of funded research -- “Chinese military and technology studies, Iraqi and Terrorist perspective projects, religious and ideological studies," according to its Web site -- raise red flags for many university-based researchers. Yet I would argue that frustration with the Bush administration and its policies must not preclude a dispassionate analysis of the Minerva Initiative or block recognition of its enormous potential for fostering and deepening links between university research and public policy communities. The baby should not be thrown out with the bathwater. The Minerva Initiative, in much-reformed form, represents a model on which future university-government interaction might be built.

Cooperation between scholars in the social sciences and humanities and all of the government’s departments should be enhanced by expanding the channels of communication among them. The challenge is to establish a framework for engagement that poses a reduced threat to research ethics, eliminates selection bias in the applicant pool for funding, and maintains high scholarly standards. Were these barriers to effective collaboration overcome, it would be exhilarating to contemplate the proliferation of a series of “Minerva Initiatives” in various departments of the executive branch. Wouldn’t government policies and services -- in areas as different as environmental degradation, foreign aid effectiveness, health care delivery, math and science achievement in secondary schools, and drug policy -- improve dramatically were they able to harness the sharpest minds and cutting-edge research that America’s universities have to offer?

What concrete forms could such university-government collaboration take? There are several immediate steps that could be taken. First, it is important to build on existing robust linkages. The State Department and DoD already have policy planning teams that engage with scholars and academic scholarship. Expanding the budgets as well as scope of these offices could produce immediate benefits.

Second, the departments of the executive branch of the federal government, especially Health and Human Services, Education, Interior, Homeland Security, and Labor, should devise ways of harnessing academic research on the Minerva Initiative model. There must be a clear assessment of where research can lead to the production of more effective policies. Special care must be taken to ensure that scholarly standards are not compromised.

Third, universities, especially public universities, should incentivize academic engagement with pressing federal initiatives. It is reasonable to envision promotion criteria modified to reward such interaction, whether it takes the form of placements in federal agencies or the production of policy relevant, though still rigorous, scholarship. Fourth, university presidents of all institutions need to renew the perennial debate concerning the purpose of higher education in American public life. Curricula and institutional missions may need to align more closely with national priorities than they do today.

The public’s commitment to scholarship, with its robust tradition of analysis and investigation, must extend well beyond the short-term needs of the economy or exigencies imposed by military entanglements. Academic research and teaching in the humanities, arts and social sciences play a crucial role in sustaining a culture of open, informed debate that buttresses American democracy. The many-stranded national crisis, however, offers a golden opportunity for broad, meaningful civic engagement by America’s scholars and university teachers. The public benefits of engaging in the policy-making process are, potentially, vast.

Greater university-government cooperation could reaffirm and make visible the public importance of research in the humanities, arts and social sciences.

Not all academic disciplines lend themselves to such public engagement. It is hard to imagine scholars in comparative literature or art history participating with great frequency in such initiatives.

But for those scholars whose work can shed light on and contribute to the solution of massive public conundrums that the nation faces, the opportunity afforded by the election of a new president should not be squandered. Standing aloof is an unaffordable luxury for universities at the moment. The present conjuncture requires enhanced public engagement; the stakes are too high to stand aside.

Gabriel Paquette

Gabriel Paquette is a lecturer in the history department at Harvard University.

