Egghead Headshots

In the early 1970s, a French publisher issued a sort of photo album devoted to Jean-Paul Sartre, who was the most famous philosopher in the world. He had been for some time, so the photojournalistic dossier on him was quite full. The book is filled with pictures of him alongside equally famous figures from the world stage -- Camus and Castro, for example, and Simone de Beauvoir, of course. You also see him in the midst of dramatic events, as when he addressed an assembly of revolutionary students during May ’68. There are a few images of the philosopher in a less public capacity. As I recall, there is a baby portrait or two. Plus there are pictures of the Sartrean babes, who seemed to get younger as he got older.

The man was a philosophical action figure, to be sure. But my favorite pages in the book show him at his desk, with manuscripts piled up precariously nearby, or at a café table, scribbling away. Sartre once said that he felt like a machine while working on The Critique of Dialectical Reason, grinding out each day’s quota of concepts. And that’s what’s happening in those photographs of him with pen in hand and tobacco pipe in jaw -- tuning out everything else but the hard work of philosophizing. But who knows? A photograph cannot document thought. It’s entirely possible that Sartre was updating his schedule to accommodate a new girlfriend, rather than analyzing Stalinism.

The same brain did both -- a fact that lends itself to philosophical inquiry. Just where do you draw the line between task-oriented thinking and whatever it is philosophers do while they are “doing philosophy”? It is a conundrum.

In his new book Philosophers, from Oxford University Press, the New Yorker photographer Steve Pyke assembles a portrait gallery of contemporary thinkers. It embodies a conundrum or two of its own -- beginning with the title. In 1995, the British press Zelda Cheatle issued a collection of Pyke’s photographs that was also called Philosophers, which now fetches a high price from secondhand dealers. These are, it bears stressing, completely distinct books. All but one of the pictures in the new collection were taken over the past decade. Only two images from the earlier volume appear in the new one -- in the introductory pages, separate from the hundred portraits making up the main body of the book.

So we have, in other words, two volumes of the same kind, on the same subject, by the same author. They bear the same title. And yet they are not identical. A teachable moment in metaphysics? Yes, but one with practical implications for the used-book trade: a certain percentage of people trying to buy the older volume online will end up getting really, really irritated.

The book from Oxford is quite handsome. And its status as an aesthetic object is not a minor consideration. (For that matter, its aesthetics as a status object are also pretty demanding. It feels like you should get a nicer coffee table, just to have someplace to put it.) Without going so far as to say that Pyke represents philosophers as a subcategory of the beautiful people, he certainly renders them in beautiful black and white.

Ethnography forms no part of what he has in mind: his photographs do not show subjects going about their daily routines or occupying their usual niches. It’s difficult to think of Sartre without picturing him in certain settings – bars, cafés, lecture halls, etc. Furthermore, these places aren’t just elements of his biography; they figure into his work (the waiter in Being and Nothingness is an obvious example). Pyke’s philosophers, by contrast, hang in the void. Usually they are set against a solid black backdrop. The one conspicuous exception is the portrait of Michael Friedman, with an unreadable chalkboard diagram behind him. Their heads loom like planets in the depths of space. The camera registers the texture of skin and hair, the expression on the lips and in the eyes. Scarcely anything else enters the frame -- an earring, perhaps, or the neck of a sweater. Most of the subjects look right into the camera, or just to the side.

With Pyke, the thinker becomes, simply, a face. The effect is intimate, but also strangely abstract. The place and date of each photo session are indicated, but the book provides no biographical information about the subjects. I recognized about a quarter of them off the top of my head, such as Robert Brandom, David Chalmers, Patricia Churchland, Arthur Danto, Sydney Morgenbesser, and Richard Rorty. A couple are even on TV from time to time. Both Harry Frankfurt and Bernard-Henri Levy have been on "The Daily Show." That two or three pages could not be found to list a couple of books by each figure is puzzling, although most of the portraits are accompanied by very brief remarks by the subjects on the nature or motivation of their work.

“Philosophy is the way we have of reinventing ourselves,” says Sydney Morgenbesser. Ruth Millikan quotes Wilfrid Sellars from Science, Perception, and Reality: “The aim of philosophy, abstractly formulated, is to understand how things in the broadest possible sense of the term hang together in the broadest possible sense of the term.” Fortunately not everyone is so gnomic. The comments by Jerry Fodor seem the funniest: “To the best of my recollection, I became a philosopher because my parents wanted me to become a lawyer. It seems to me, in retrospect, that there was much to be said for their suggestion. On the other hand, many philosophers are quite good company; the arguments they use are generally better than the ones that lawyers use; and we do get to go to as many faculty meetings as we like at no extra charge.”

The ambivalence in Sally Haslanger’s statement felt more than vaguely familiar: “Given the amount of suffering and injustice in the world, I flip-flop between thinking that doing philosophy is a complete luxury and that it is an absolute necessity. The idea that it is something in between strikes me as a dodge. So I do it in the hope that it is a contribution, and with the fear that I’m just being self-indulgent. I suppose these are the moral risks life is made of.” That sounds quite a bit like Sartre, actually.

In the interview prefacing the collection, Pyke says that his intention is to make philosophers “seem more human, less of a mystery.” And that is where the true conundrum lies. Some philosophers look dyspeptic, while others have goofy smiles, but that isn’t what makes them human -- let alone philosophers. Making something “more human” precludes rendering it “less of a mystery,” since the human capacity for thought is itself an ever-deepening mystery.

Pyke thinks visually. A more interesting commentary on the figures in his portrait gallery might come indirectly, from the late Gilbert Ryle. An Oxford don and the author of The Concept of Mind, he gave a lecture that tried to sort out the relationship between deep cogitation and various other sorts of mental activity. To that end, he focused on the question of what that naked guy in Rodin's sculpture was doing -- and how it presumably differed from, say, a professor preparing to teach a class.

“The teacher has already mastered what he wants his students to master,” said Ryle. “He can guide them because he is on his own ground. But le Penseur is on ground unexplored by himself, and perhaps unexplored by anyone. He cannot guide himself through this jungle. He has to find his way without guidance from anyone who already knows it, if anyone does know it…. The teacher is a sighted leader of the blind, where le Penseur is a blind leader of the blind -- if indeed the very idea of his being or having a leader fits at all.”

That seems like a good description of what the subjects of Pyke's photographs spend their time doing. Not, of course, while the camera is turned on them. To judge by the expressions of some, their thoughts may have been something closer to, "Wow, I'm being photographed by someone from The New Yorker. How did that happen?"

Scott McLemee

Falling Into the Generation Gap

A few weeks ago, sitting over a cup of coffee, a writer in his twenties told me what it had been like to attend a fairly sedate university (I think he used the word "dull") that had a few old-time New Left activists on its faculty.

"If they thought you were interested in anything besides just your career," he said, "if you cared about ideas or issues, they got really excited. They sort of jumped on you."

Now, I expected this to be the prelude to a little tribute to his professors – how they had taken him seriously, opened his mind to an earlier generation’s experience, etc. But no.

"It was like they wanted to finish their youth through you, somehow," he said. "They needed your energy. They needed you to admire them. They were hungry for it. It felt like I had wandered into a crypt full of vampires. After a while, I just wanted to flee."

It was disconcerting to hear. My friend is not a conservative. And in any case, this was not the usual boilerplate about tenured radicals seeking to brainwash their students. He was not complaining about their ideas and outlook. This vivid appraisal of his teachers was not so much ideological as visceral. It tapped into an undercurrent of generational conflict that the endless "culture wars" seldom acknowledge.

You could sum it up neatly by saying that his professors, mostly in their fifties and sixties by now, had been part of the "Baby Boom," while he belonged to "Generation X."

Of course, there was a whole segment of the population that fell between those two big cultural bins -- people born at the end of the 1950s and the start of the 1960s. Our cohort never had a name, which is probably just as well. (For one thing, we’ve never really believed that we are a "we." And besides, the whole idea of a prepackaged identity based on what year you were born seems kind of tacky.)

One effect of living in this no-man’s-land between Boomers and Xers is a tendency to feel both fascinated and repulsed by moments when people really did have a strong sense of belonging to a generation. The ambivalence is confusing. But after a while it seems preferable to nostalgia -- because nostalgia is always rather simple-minded, if not dishonest.

The recent documentary The Weather Underground (a big hit with the young-activist/antiglobalization crowd) expressed doe-eyed sadness that the terrible Amerikan War Machine had forced young idealists to plant bombs. But it somehow never mentioned that group’s enthusiasm for the Charles Manson "family." (Instead of the two-fingered hippie peace sign, Weather members flashed a three-finger salute, in honor of the fork used to carve the word "war" into a victim’s stomach.) Robert McNamara and Henry Kissinger have a lot of things to answer for -- but that particular bit of insanity is not one of them.

Paul Berman, who was a member of Students for a Democratic Society at Columbia University during the strike of 1968, has been writing about the legacy of the 1960s for a long time. Sometimes he does so in interesting ways, as in parts of his book A Tale of Two Utopias; and sometimes he draws lessons from history that make an otherwise placid soul pull out his hair with irritation. He has tried to sort the positive aspects of the 1960s out from the negative -- claiming all the good for a revitalized liberalism, while treating the rest as symptoms of a lingering totalitarian mindset and/or psychological immaturity.

Whatever the merits of that analysis, it runs into trouble the minute Berman writes about world history -- which he always paints in broad strokes, using bright and simple colors. In his latest book, Terror and Liberalism, he summed up the last 300 years in terms that suggested Europe and the United States had grabbed their colonies in a fit of progress-minded enthusiasm. (Economic exploitation, by Berman’s account, had nothing to do with it, or not much.) Terror and Liberalism is a small book, and easy to throw.

His essay in the new issue of Bookforum is, to my mind, part of the thoughtful, reflective, valuable side of Berman’s work. In other words, I did not lose much hair reading it.

The essay has none of that quality my friend mentioned over coffee – the morbid hunger to feast off the fresh blood of a younger generation’s idealism. Berman has fond recollections of the Columbia strike. But that is not the same as being fond of the mentality that it fostered. "Nothing is more bovine than a student movement," he writes, "with the uneducated leading the anti-educated and mooing all the way."

The foil for Berman’s reflections is the sociologist Daniel Bell, who left Columbia in the wake of the strike. At the time, Bell’s book The End of Ideology was the bête noire of young radicals. (It was the kind of book that made people so furious that they refused to read it -- always the sign of the true-believer mentality in full effect.) But it was Bell’s writing on the history of the left in the United States that had the deepest effect on Berman’s own thinking.

Bell noticed, as Berman puts it, "a strange and repeated tendency on the part of the American Left to lose the thread of continuity from one generation to the next, such that each new generation feels impelled to reinvent the entire political tradition."

There is certainly something to this. It applies to Berman himself. After all, Terror and Liberalism is pretty much a jerry-rigged version of the Whig interpretation of history,  updated for duty in the War on Terror. And the memoiristic passages in his Bookforum essay are, in part, a record of his own effort to find "the thread of continuity from one generation to the next."

But something else may be implicit in Bell’s insight about the "strange and repeated tendency" to lose that thread. It is a puzzle for which I have no solution readily at hand. Namely: Why is this tendency limited to the left?

Why is it that young conservatives tend to know who Russell Kirk was, and what Hayek thought, and how Barry Goldwater’s defeat in 1964 prepared the way for Reagan’s victory in 1980? Karl Marx once wrote that "the tradition of all the dead generations weighs like a nightmare on the brain of the living." So how come the conservatives are so well-rested and energetic, while the left has all the bad dreams?

Scott McLemee

Scott McLemee writes Intellectual Affairs on Tuesdays and Thursdays.

The Consecrated Heretic

On newsstands now -- or at least the ones with a decent selection of foreign periodicals -- you can find a special number of Le Magazine littéraire devoted entirely to Jean-Paul Sartre. Last month was the 25th anniversary of the grand funeral procession in Paris that drew 50,000 people out into the spring rain to see him off. (It was the last great demonstration of the 1960s generation, as people said at the time.) And next month marks the centennial of his birth.

He was "the conscience of his times," the cover announces. That is certainly arguable. It tends to equate denunciation with ethical critique. The man who declared, in 1952, that Soviet citizens enjoyed perfect freedom to criticize their government should probably be Exhibit A for any demonstration that sometimes contrarianism is not enough.

But what is not in doubt -- to judge by the rest of the issue -- is that Sartre was the most-photographed philosopher in history.

I don’t mean that to be quite as sardonic as it probably sounds. One of Sartre’s definitive themes was the struggle between radical human freedom (in which each moment of one’s existence involves a choice) and the alienating experience of being defined by the gaze of the Other. It is striking to look at so many pictures of Sartre -- so many instances when the camera “caught” him, freezing him into an icon.   

It is a commonplace now to grumble that the golden age of the grand intellectual is gone, replaced by what, in the late 1990s, Pierre Bourdieu called les fast-thinkers, those well-spoken authorities who can be summoned to the television studio to "think faster than a speeding bullet." Nobody who struggles through the denser pages of Sartre’s work -- his efforts to create an "existential psychoanalysis," for example, and then to square it with the work of Marx -- will confuse him with, say, David Brooks.

The other posthumous role Sartre plays in the cultural imagination is that of the academic escapee. He was a philosopher who lived and worked "in the world," not in the university. Indeed, reading the letters he exchanged with Maurice Merleau-Ponty during their political falling-out, I was struck by a passage in which Sartre needles his ex-friend for lacking the nerve to take some distance from academic life. 

(That exchange can be found in The Debate between Sartre and Merleau-Ponty, published by Northwestern University Press, which also contains an important paper by Ronald Aronson. Aronson has also written the definitive book on another personal/political break, Camus and Sartre: The Story of a Friendship and the Quarrel that Ended It, which has just appeared in paperback from the University of Chicago Press.)

So the lingering afterimage of Sartre is now a double negation: Neither pundit nor professor, a mind that was free – or at least moving somewhere beyond both media and academe.

Not only did the philosopher escape their gravitational pull, but he was at war with the tendency of his own work to solidify into something inert. He rejected the Nobel Prize for Literature because, among other things, he said that he did not want to become an institution. But Sartre also spoke of thinking "against myself" -- of throwing himself into the struggle "to break some bones in my own head."

Interviewed in 1959 – while finishing the first volume of his Critique of Dialectical Reason, a work whose ungainly style owes something to the fact that Sartre popped amphetamines constantly while writing it – he said he was looking forward to having his ideas “deposited in little coffins.” He wanted to be done with the ideas that he had been pursuing; he wanted to feel empty again.

"A writer is fortunate if he can attain such a state," he told the interviewer. "For when one has nothing to say, one can say everything....What is primary is what I haven’t yet written – what I intend to write (not tomorrow, but the day after tomorrow) and what perhaps I will never write."

It is an incredibly appealing notion -- a dream of perfect freedom, and of constant renewal. And it is also something of an illusion. So one learned from Annie Cohen-Solal's Sartre: A Life, first published in France in 1985. (The translation has just been reissued by the New Press -- one of the few publications in English, at least so far, to mark this year’s double anniversary.)

Cohen-Solal’s portrait revealed a man who was both an incredibly well-polished product of the upper echelons of the French educational system and a canny player in the game of Parisian literary politics. As his work reached an American readership in the late 1940s, its arrival was shaped by a similar combination of academic and media forces. Ann Fulton’s book Apostles of Sartre: Existentialism in America 1945-1963 (Northwestern University Press, 1999) shows how his reception benefitted from both the support of professors in the French department at Yale University and publications about him in popular magazines.

He ascended to the position of "consecrated heretic" -- a culturally sanctioned gadfly, the embodiment of social criticism, in defiance of all conformism. That is the term (coined by Bourdieu) that Paul M. Cohen applies to Sartre in Freedom’s Moment: An Essay on the French Idea of Liberty from Rousseau to Foucault (University of Chicago Press, 1997).

The consecrated heretic has no authority beyond the power of his thought and his example. Anyone unimpressed or horrified by a particular consecrated heretic is likely to notice that the role also relieves him of any particular responsibility or accountability.

"Responsibility" was, of course, a decisive term in the Sartrean lexicon. Each of us is ultimately responsible for creating, from our freedom, whatever commitments give meaning to our lives. But on a less metaphysical plane of responsibility, Sartre’s example presents an especially knotty set of problems.

In his introduction to the new edition of Cohen-Solal’s biography, Cornel West offers a judicious evaluation of the thinker’s political legacy – his courage and incisiveness in denouncing racism and colonialism, the amoral and sometimes merely dishonest nature of some of his statements about Communism, and his surprising tendency to downplay the Palestinian question. Sartre was a passionate and very firm supporter of the right of Israel to exist as a state. 

So how do you come to a final judgment on such a figure? Was he the "conscience of his time" or a symptom of its disorders?

It’s not clear that a decision is possible or desirable. At least, not on any level beyond choosing either idol-worship or contempt. I’ve been reading Sartre my entire adult life (and then some) – and have gone to both extremes, along the way.

And now, in the midst of these anniversaries, what impresses me is less any given position he assumed than his willingness -- from time to time -- to think against himself. There is much to admire in his example, and much to avoid. But more than anything else, I keep going back to him in hopes of learning how to break a few bones in my head.

Scott McLemee

Remembering Ricoeur

Paul Ricoeur -- the philosopher whose writings on hermeneutics were the cornerstone of an ambitious rethinking of the relationship between the humanities and the social sciences -- died on Friday at the age of 92.

By the late 1960s, American academic presses had made him one of the first French thinkers of his generation with a substantial body of work available in English. Even as an octogenarian, he was more productive than many scholars half his age. Late last year, the University of Chicago Press published Memory, History, Forgetting -- an enormous study of the conditions of possibility for both historical writing and moral forgiveness. His book The Course of Recognition is due from Harvard University Press this fall. And Ricoeur himself provided the ideal survey of his life and philosophical development in Critique and Conviction, a lively set of interviews that Columbia University Press issued in 1998.

At the time of his death, he was professor emeritus at both the University of Paris and the University of Chicago. "The entire European humanist tradition is mourning one of its most talented spokesmen," said a statement from the office of Jean-Pierre Raffarin, the prime minister of France, released over the weekend.

And that leads to a conundrum. It is Tuesday already, and nobody in the American media has insulted Ricoeur yet. What's going on? Have our pundits lost their commitment to mocking European intellectuals and the pointy-headed professors who read them?

At first I thought it might be that people were still tired from abusing Derrida following his death last fall. But clearly that's not it.

Over at The National Interest,  for example, the neocons are already going after Jurgen Habermas, who is still very much alive, by the way. Despite being almost fanatically moderate -- his residual Marxism tempered by an admiration for the American pragmatist philosophers, combined with a hearty enthusiasm for constitutional democracy -- Habermas has criticized the invasion and occupation of Iraq. Besides, his political philosophy is grounded on the theory that the fundamental tendency of language is toward open and truthful communication. You can see where that would bother some people.

Yet Ricoeur, it seems, gave no one offense. With hindsight, that was a terrible oversight. A thinker receives significant publicity if (and only if) people speak his name in tones of apoplectic hysteria.

It was his own fault, really. Even more than Habermas, he was the soul of civility and calm intellectual labor. You will read many a page of Ricoeur before coming across any flash of apocalyptic rhetoric or stagy contrarianism. The closest thing to a bracing put-down in his writing is probably Ricoeur's definition of structuralism as "Kantianism without a transcendental subject." (Which is actually kind of funny. But not, you know, Slavoj-Zizek-obscene-joke funny.)

And while his work connected up with numerous other fields (sharing borders with linguistics, psychoanalysis, sociology, history, religious studies, literary theory, and law, just to give the short list), Ricoeur was in many ways an academic philosopher of a very traditional sort. His philosophy made no effort to jump out of its own skin -- to become, say, a form of avant-garde literature, or a conceptual weapon for guerrilla warfare. His work tends to be ambitious, edifying, cumulative, and ... well, just a little bit dull at times.

That is not meant as a criticism exactly. Many things in the world are exciting without being good for you. Some readers find that the work of Ricoeur's younger colleague Gilles Deleuze is rather like the philosophical equivalent of taking LSD. (I can attest that, yes, there are definite similarities.) But there are limits to how much of cultural life can be conducted as a rave.

Perhaps the most salutary aspect of Ricoeur's work is that it has precisely the opposite effect of either the polemical or the psychedelic modes of intellectual intervention. It synthesizes, rather than obliterates. It treats the process of interpretation as an act of opening to the possibility of communication with the entire human community -- not as the moment when Narcissus becomes fascinated by the image in the pool.

Before dealing with the substance of Ricoeur's work, however, I want first to grapple more with the uncanny silence since his death. In American public life, you're nobody until somebody hates you. Frankly, the absence of rancor now verges on the disrespectful.

A modest proposal, then. Just as the neocons are now denouncing the mild-mannered Habermas, this might be the time for leftist wingnuts to go to town on Paul Ricoeur. As ever, total ignorance is no obstacle. Here is a quick checklist of talking points ("shrieking points?") for anyone who wants to get the vitriol flowing.

(1) Paul Ricoeur was a Christian his entire life. Most of his work is secular philosophical analysis, but he did publish writings on the Bible, and even gave sermons. Despite all that unpleasantness between his Huguenot ancestors and the Roman Catholic Church in the 16th and 17th centuries, Ricoeur is known to have spent a fair bit of time in discussion with Pope John Paul II over the years. The pope cited his work, even. Ricoeur denied being a theologian. However, there are subtle echoes and parallels between his philosophical ideas and his religious beliefs.

(For maximum effect, imitate the Fox News style like so: "Some have said that Ricoeur was actually a fundamentalist who used philosophy to brainwash his students." Nobody actually thinks this, but if you repeat it enough, someone will.)

(2) Ricoeur never really joined the "theory counterculture." When he published a major work on Freud and philosophy in 1964, the Lacanians got mad at him for failing to mention their guru. He soon became "the designated enemy," as Francois Dosse writes in his History of Structuralism, targeted by the Marxist students around Louis Althusser. A few years later, Ricoeur was a candidate for a chair at the prestigious Collège de France -- and lost the position to Michel Foucault.

(This definitely raises questions about his commitment to destroying the phallogocentrism of dead white European males. Plus, now he is one.)

(3) In the mid-1960s, Ricoeur was a prominent advocate of  reforms in the overburdened French university system. In 1967, he became involved in the launching of a satellite campus in the Parisian suburb of Nanterre, where he soon became dean of the college of letters. Following the student uprising of May 1968, the campus turned into a scene of continuous warfare among radical factions, some armed with chains and iron bars. Ricoeur himself was physically attacked, and some faculty refused to risk coming on campus. Eventually, he requested that police patrol the campus. This only made things worse, and in March 1970 Ricoeur resigned his position and took a long leave of absence.

(In repeating this information, hint that Ricoeur was actually a man of the right. Ignore the fact that Ricoeur was a pacifist, a supporter of the left-wing journal Esprit, and an outspoken opponent of French policy during the Algerian war. Just stress that he called the cops.)

(4) Finally, the clincher. Last November, Ricoeur was named as one of the winners of the John W. Kluge Prize for Lifetime Achievement in the Humanities and Social Sciences, given by the Library of Congress. He split the award of $1 million with Jaroslav Pelikan, a scholar of religious history.

(Note how suspicious it is that he received this award right after Derrida died. Imply that it was the religious right's way of rewarding an intellectual lapdog. Added benefit: now you have an excuse not to read him.)

Well, that was depressing, even as an exercise in satire.

In the middle of writing it, I learned of Russell Arben Fox's "Thoughts on Ricoeur." It's the first really substantial blogospheric commentary on the philosopher's death to come my way -- and a sign that perhaps things are not so dire, after all.

As Fox notes, Ricoeur was one of those thinkers it proved easy to "save for later." Beyond a certain level of familiarity, it seemed hopeless to try to catch up with him, because he was just too prolific. And there was also the fact that his work seems to have gone through somewhere between three and five stages of development.

Be that as it may, I'll try on Thursday to give a thumbnail account of what was (and still is) at stake in his work. Or at least what I know of it.

In the meantime, you might have a look at Ricoeur's acceptance speech for the Kluge Award. Knowing that he wrote it at the age of 91 is a reminder of one benefit of Paul Ricoeur's work: Reading it keeps you humble.

Scott McLemee

In February, he wrote two columns about a scholarly conference on Derrida's legacy.

Locating Bourdieu

Pierre Bourdieu had a way of getting under one's skin. I don't mean his overtly polemical works -- the writings against globalization and neoliberalism that appeared toward the end of his life, for example. He was at his most incisive and needling in works that were much less easy for the general public to read. Bourdieu's sociological research kept mapping out the way power, prestige, and exclusion work in the spheres of politics, economics, and culture.

He was especially sharp (some thought brutal) in analyzing the French academic world. At the same time, he did very well in that system; very well indeed. He was critical of the way some scholars used expertise in one field to leverage themselves into positions of influence having no connection with their training or particular field of competence. It could make him sound like a scold. At the same time, it often felt like Bourdieu might be criticizing his own temptation to become an oracle.

In the course of my own untutored reading of Bourdieu over the years, there came a moment when the complexity of his arguments and the aggressiveness of his insights suddenly felt like manifestations of a personality that was angry on the surface, and terribly disappointed somewhere underneath. His tone registered an acute (even an excruciating) ambivalence toward intellectual life in general and the educational system in particular. 

Stray references in his work revealed glimpses of Bourdieu as a "scholarship boy" from a family that was both rural and lower-middle class. You learned that he had trained to be a philosopher in the best school in the country. Yet there was also the element of refusal in even his most theoretical work -- an almost indignant rejection of the role of Master Thinker (played to perfection in his youth by Jean-Paul Sartre) in the name of empirical sociological research.

There is now a fairly enormous secondary literature on Bourdieu in English. Of the half-dozen or so books on him that I've read in the past few years, one has made an especially strong impression, Deborah Reed-Danahay's recent study Locating Bourdieu (Indiana University Press, 2005). Without reducing his work to memoir, she nonetheless fleshes out the autobiographical overtones of Bourdieu's major concepts and research projects. (My only complaint about the book is that it wasn't published 10 years ago: Although it is a monograph on his work rather than an introductory survey, it would also be a very good place for the new reader of Bourdieu to start.)

Reed-Danahay is a professor of anthropology at the University of Texas at Arlington. She recently answered a series of questions by e-mail.

Q: Bourdieu published sociological analyses of the Algerian peasantry, the French academic system, the work of Martin Heidegger, and patterns of attendance at art galleries -- to give only a very incomplete list. Yet his work seems much more focused and coherent than a catalog of topics would suggest. Can you sum up the gist of his work, or rather how it all holds together?

A: Yes, I agree that, at first glance, Bourdieu's work covers a seemingly disparate series of studies. When I read Bourdieu's work on education in France after first being exposed to his Algerian peasant studies in my graduate work in anthropology, I wondered if this was the same person. But when the entire corpus is taken together, and when one carefully reads Bourdieu's many texts that returned to themes brought up earlier in his work, one can see several underlying themes and recurring intellectual questions.

One way to get a handle on his work is to realize that Bourdieu was interested in explaining social stratification, and the hierarchy of social values, in contemporary capitalist societies. He wanted to study systems of domination in a way that held some room for social agency but without a notion of complete individual freedom. Bourdieu often evoked Althusser as an example of a theorist who had too mechanical a view of internalized domination, while Sartre represented the opposite extreme of a philosopher who posited free will.  

Bourdieu believed that we are all constrained by our internalized dispositions (our habitus), deriving from the milieu in which we are socialized, which influence our world view, values, expectations for the future, and tastes. These attributes are part of the symbolic or cultural capital of a social group. 

In a stratified society, a higher value is associated with the symbolic capital of members of the dominant sectors than with that of the less dominant and "controlled" sectors of society. Thus people who go to museums and like abstract art, for instance, are expressing a form of symbolic capital that is more highly valued than that of someone who either rarely goes to museums or who doesn't like abstract art. The person feels that this is "just" a matter of taste, but this can have important consequences for children at school who have not been exposed to various forms of symbolic capital by their families.

Bourdieu studied both social processes (such as the French educational system, Algerian postcolonial economic dislocations, or the rural French marriage system), and individual figures and their social trajectories -- including Heidegger, Flaubert, and an Algerian worker. Bourdieu was trying to show how the choices these people made (and he often wrote of choices that were not really choices) were expressions of the articulation of habitus and the social field in which it is operating.

Q: Something about his career always seemed paradoxical. Sartre was always his worst-case reference, for example. But by the time of his death in 2002, Bourdieu was regarded as the person who had filled Sartre's shoes. Has your work given you a sense of how to resolve this seeming contradiction?

A: There is a lot of silence in Bourdieu's work on the ways in which he acquired power and prestige within the French academic system or about how he became the most famous public intellectual in France at the time of his death. He was more self-reflexive about earlier periods in his life. I have trouble defending Bourdieu in this contradiction about his stance toward the public intellectual, even though I applaud his political engagements. 

I think that Bourdieu felt he had more authority to speak to some of these issues than did other academics, given his social origins and empirical research in Algeria and France among the underclass. Bourdieu repeatedly positioned the sociologist (and one can only assume he meant himself here, too) as having a privileged perspective on the "reality" of systems of domination.

Bourdieu was very critical of Sartre for speaking out about the war in Algeria and for championing a sort of revolutionary spirit among Algerians. Bourdieu accused him of trying to be a "total intellectual" who could speak on any topic and who did not understand the situation in Algeria as profoundly as did the sociologist-ethnologist Bourdieu. When Bourdieu himself became more visible in his own political views (particularly in attacks against globalization and neo-liberalism), he does seem to have acted like the "journalist"-academics he lampooned in Homo Academicus. Nevertheless, when he was criticizing (in his essay On Television) what he saw as the necessity for "fast thinking" on television talk shows in France, where talking heads must quickly have something to say about anything, Bourdieu did (in his defense) refrain from pontificating about anything and everything.

There is still a huge controversy raging in France about Bourdieu's political engagements. His detractors vilify him for his attacks against other intellectuals and journalists while he became a public intellectual himself. His defenders have published a book of his political writings (Interventions, 1961-2001) seeking to show his long-standing commitments, and continue to guard his reputation beyond the grave.

I cannot help but think that Bourdieu's public combative persona, and his (in his own terms) refusals and ruptures, helped rather than thwarted his academic career. The degree to which this was calculated or (as he claimed) was the result of the "peasant" habitus he acquired growing up in southwestern France, is unknown.

Q: So much of his analysis of academic life is focused on the French university system that there is always a question of how well it could apply elsewhere. I'm curious about your thoughts on this. What's it been like to move between his concepts and models and your own experience as an American academic? 

A: I see two ways to answer your question. Certainly, in the specifics, French academia is very different. I have experienced that directly. My own American cultural values of independence (which may, I am aware, be a total illusion) conflict with those of many French academics. 

When I first arrived in France to do my dissertation fieldwork, I came with a grant that opened some doors to French academia, but I had little direct sponsorship by powerful patrons in the U.S. I was doing a project that had little to do with the work of my professors, none of whom had done research in France or Europe, and it was something that I had come up with on my own. This was surprising to the French, who were familiar with a patron-client system of professor/student relations. Most of the graduate students I met in France were involved in projects related to the work of their professors.

French academia, still centralized in Paris despite attempts at decentralization, is a much smaller universe than that of the vast American system. There is little room for remaining "outside" of various polemics there. I've learned, for instance, that some people whom I like and admire in France hated Bourdieu and that Bourdieu followers tend to be very fierce in their defense of him and want to promote their view of his work.

This is not to say that American academia doesn't have similar forces operating, but there are multiple points of value and hierarchy here. Whereas Bourdieu could say that philosophy dominated French academia during the mid-20th century, it is harder to pinpoint one single dominant intellectual framework in the United States.

I do, however, feel that Bourdieu's critique of academia as part of a larger project of the study of power (which he made very explicit in The State Nobility) is applicable beyond France. His work on academia provided us with a method of inquiry to look at the symbolic capital associated with academic advancement and, although the specific register of this will be different in different national contexts, the process may be similar. Just as Bourdieu did in France, for example, one could study how it is that elite universities here "select" students and professors.

Q: We have a memoir of Sartre's childhood in The Words. Is there anything comparable for Bourdieu?

A: Bourdieu produced self-referential writings that began to appear in the late 1990s, with "Impersonal Confessions" in Pascalian Meditations (1997), a section called "Sketch for a Self-Analysis" in his final lectures to the Collège de France, Science of Science and Reflexivity (2001), and then the stand-alone volume Esquisse pour une Auto-Analyse, published posthumously in 2004. [Unlike the other titles listed, this last volume is not yet available in English. -- S.M.]

A statement by Bourdieu that "this is not an autobiography" appears as an epigraph to the 2004 essay. I find his autobiographical writings interesting because they show us a bit about how he wanted to use his own methods of socio-analysis on himself and his own life, with a focus particularly on his formative years -- his childhood, his education, his introduction to academia, and his experiences in Algeria.

Bourdieu was uncomfortable with what he saw as narcissism in much autobiography, and also was theoretically uncomfortable with life stories that stressed the individual as hero without sufficient social analysis. He had earlier written an essay on the "biographical illusion" that elaborated on his biographical approach, but without self-reference. These essays are not, then, autobiographical in the conventional sense of a linear narrative of a life. Bourdieu felt that a truly scientific sociology depended on reflexivity on the part of the researcher, and by this he meant being able to analyze one's own position in the social field and one's own habitus.

At the same time, Bourdieu's auto-analysis was also a defensive move meant to preempt his critics. Bourdieu included a section on self-interpretation in his book on Heidegger, in which he referred to it as "the riposte of the author to those interpretations and interpreters who at once objectify and legitimize the author, by telling him what he is and thereby authorizing him to be what they say he is..." (101). As Bourdieu became increasingly a figure in the public eye and increasingly a figure of analysis and criticism, he wanted to explain himself and thus turned to self-interpretation and auto-analysis.

Q: In a lot of ways, Bourdieu seems like a corrosive thinker: someone who strips away illusions, rationalizations, the self-serving beliefs that institutions foster in their members. But can you identify a kernel of something positive or hopeful in his work -- especially in regard to education? I'd like to think there is one....

A: Bourdieu had little positive to say about how schools and universities operate, and he was very critical of them. The hopeful kernel here is that in understanding how they operate, how they inflict symbolic violence and perpetuate the illusions that enable systems of domination, we can improve educational institutions.

Bourdieu felt strongly that by de-mystifying the discourses and aura of authority surrounding education (especially its elite forms), we can learn something useful. The trick is how to turn this knowledge into power, and Bourdieu did not have any magical solutions for this. That is work still to be done.

Scott McLemee

The Corrosion of Ethics in Higher Education

In its 1966 declaration on professional ethics, the American Association of University Professors, the professoriate’s representative organization, states: 

"Professors, guided by a deep conviction of the worth and dignity of the advancement of knowledge, recognize the special responsibilities placed upon them.... They hold before them the best scholarly and ethical standards of their discipline.... They acknowledge significant academic or scholarly assistance from (their students)."

Notwithstanding such pronouncements, higher education recently has provided the public with a series of ethical solecisms, most spectacularly the University of Colorado professor Ward Churchill’s recidivistic plagiarism and duplicitous claim of Native American ancestry along with his denunciations of 9/11 victims. While plagiarism and fraud presumably remain exceptional, accusations and complaints of such wrongdoing increasingly come to light.

Some examples include Demas v. Levitsky at Cornell, where a doctoral student filed a legal complaint against her adviser’s failure to acknowledge her contribution to a grant proposal; Professor C. William Kauffman’s complaint against the University of Michigan for submitting a grant proposal without acknowledging his authorship; and charges of plagiarism against Louis W. Roberts, the now-retired classics chair at the State University of New York at Albany. Additional plagiarism complaints have been made against Eugene M. Tobin, former president of Hamilton College, and Richard L. Judd, former president of Central Connecticut State University.

In his book Academic Ethics, Neil Hamilton observes that most doctoral programs fail to educate students about academic ethics, so that knowledge of it is eroding. Lack of emphasis on ethics in graduate programs leads to skepticism about the necessity of learning about ethics and about how to teach it. Moreover, nihilist philosophies that have gained currency within the academy itself, such as Stanley Fish’s “antifoundationalism,” contribute to the neglect of ethics education.
For these reasons academics generally do not seriously consider how ethics education might be creatively revived. In reaction to the Enron corporate scandal, for instance, some business schools have tacked an ethics course onto an otherwise ethically vacuous M.B.A. program. While a step in the right direction, a single course in a program otherwise uninformed by ethics will do little to change the program’s culture, and may even engender cynicism among students.

Similarly, until recently, ethics education had been lacking throughout the American educational system. In response, ethicists such as Kevin Ryan and Karen Bohlin have advocated a radical renewal of ethics education in elementary schools. They claim that comprehensive ethics education can improve ethical standards. In Building Character in Schools, Ryan and Bohlin compare an elementary school to a polis, or Greek city state, and urge that ethics be fostered everywhere in the educational polis.

Teachers, they say, need to set standards and serve as ethical models for young students in a variety of ways and throughout the school. They find that manipulation and cheating tend to increase where academic achievement is prized but broader ethical values are not. They maintain that many aspects of school life, from the student cafeteria to the faculty lounge, ought to provide opportunities, among other things, to demonstrate concern for others. They also propose the use of vision statements that identify core virtues along with the implementation of this vision through appropriate involvement by staff and students.

We would argue that, like elementary schools, universities have an obligation to ethically nurture undergraduate and graduate students. Although the earliest years of life are most important for the formation of ethical habits, universities can influence ethics as well. Like the Greek polis, universities become ethical when they become communities of virtue that foster and demonstrate ethical excellence. Lack of commitment to teaching, lack of concern for student outcomes, false advertising about job opportunities open to graduates, and diploma-mill teaching practices are examples of institutional practices that corrode rather than nourish ethics on campuses.

Competency-based education, broadly considered, is increasingly of interest in business schools.  Under the competency-based approach (advocated, for example, by Rick Boyatzis of Case Western Reserve University, David Whetten of Brigham Young University, and Kim Cameron of the University of Michigan), students are exposed not only to theoretical concepts, but also to specific competencies that apply the theory. They are expected to learn how to apply in their lives the competencies learned in the classroom, for instance those relating to communication and motivating others. Important ethical competencies (or virtues) should be included and fostered alongside such competencies. Indeed, in applied programs such as business, each discipline and subject can readily be linked to ethical virtues. Any applied field, from traffic engineering to finance, can and should include ethical competencies as an integral part of each course. 

For example, one of us currently teaches a course on managerial skills, one portion of which focuses on stress management. The stress management portion includes a discussion of personal mission setting, which is interpreted as a form of stress management. The lecture emphasizes  how ethics can intersect with practical, real world decision making and how it can relate to competencies such as achievement orientation. In the context of this discussion, which is based on a perspective that originated with Aristotle, a tape is shown of Warren Buffett suggesting to M.B.A. students at the University of North Carolina that virtue is the most important element of personal success.

When giving this lecture, we have found that street-smart undergraduate business students at Brooklyn College and graduates in the evening Langone program of the Stern School of Business of New York University respond well to Buffett’s testimony, perhaps better than they would to Aristotle’s timeless discussions in Nicomachean Ethics.

Many academics will probably resist integration of ethical competencies into their course curriculums, and in recent years it has become fashionable to blame economists for such resistance.  For example, in his book Moral Dimension, Amitai Etzioni equates the neoclassical economic paradigm with disregard for ethics. Sumantra Ghoshal’s article “Bad Management Theories are Destroying Good Management Practices,” in Academy of Management Learning and Education Journal, blames ethical decay on the compensation and management practices that evolved from economic theory’s emphasis on incentives.

We disagree that economics has been all that influential. Instead, the problem is much more fundamental to the humanities and social sciences and has its root in philosophy. True, economics can exhibit nihilism. For example, the efficient markets hypothesis, which has influenced finance, holds that human knowledge is impotent in the face of efficient markets. This would imply that moral choice is impotent because all choice is so. But the efficient markets hypothesis is itself a reflection of a deeper and broader philosophical positivism that is now pandemic to the entire academy.
Over the past two centuries the assaults on the rational basis for morals have created an atmosphere that stymies interest in ethical education. In the 18th century, the philosopher David Hume wrote that one cannot derive an “ought” from an “is,” so that morals are emotional and cannot be proven true. Today’s academic luminaries have thoroughly imbibed this “emotivist” perspective. For example, Stanley Fish holds that even though academics do exhibit morality by condemning “cheating, academic fraud and plagiarism,” there is no universal morality beyond this kind of “local practice.” 

Whatever its outcome, the debate over the rational derivability of ethical laws from a set of clear and certain axioms that hold universally is of little significance in and of itself.  It will not determine whether ethics is more or less important in our lives; nor will it provide a disproof of relativism -- since defenders of relativism can still choose not to accept the validity of the derivation.

Yet ethics must still be lived -- even though the knowledge, competency, skill or talent that is needed to lead a moral life, a life of virtue, may not be derived from any clear and certain axioms. No derivation is needed, for instance, to establish the value of good interpersonal skills. Rather, civilization depends on competency, skill and talent as much as it depends on practical ethics. Ethical virtue does not require, nor is it sustained by, logical derivation; it becomes most manifest, perhaps, through its absence, as revealed in the anomie and social decline that ensue from its abandonment. Philosophy is beside the point.

Based on much evidence of such a breakdown, ethics education experts such as Thomas Lickona of SUNY's College at Cortland have concluded that to learn to act ethically, human beings need to be exposed to living models of ethical emotion, intention and habit. Far removed from such living models, college students today are incessantly exposed to varying degrees of nihilism: anti-ethical or disembodied, hyper-rational positions that Professor Fish calls “poststructuralist” and “antifoundationalist.” In contrast, there is scant emphasis in universities on ethical virtue as a pre-requisite for participation in a civilized world. Academics tend to ignore this ethical pre-requisite, preferring to pretend that doing so has no social repercussions.

They are disingenuous -- and wrong.

It is at the least counterintuitive to deny that the growing influence of nihilism within the academy is deeply, and causally, connected to increasing ethical breaches by academics (such as the cases of plagiarism and fraud that we cited earlier). Abstract theorizing about ethics has most assuredly affected academics’ professional behavior.

The academy’s influence on behavior extends, of course, far beyond its walls, for students carry the habits they have learned into society at large. The Enron scandal, for instance, had more roots in the academy than many academics have realized or would care to acknowledge. Kenneth Lay, Enron’s former chairman, holds a Ph.D. in economics from the University of Houston. Jeff Skilling, Enron’s former CEO, is a Harvard M.B.A. who had been a partner at the McKinsey consulting firm, one of the chief employers of top-tier M.B.A. graduates. According to Malcolm Gladwell in The New Yorker, Enron had followed McKinsey’s lead, habitually hiring the brightest M.B.A. graduates from leading business schools, most often from the Wharton School. Compared to most other firms, it had more aggressively placed these graduates in important decision-making posts. Thus, the crimes committed at Enron cannot be divorced from decision-making by the best and brightest of the newly minted M.B.A. graduates of the 1990s.

As we have seen, the 1966 AAUP statement implies the crucial importance of an ethical foundation to academic life. Yet ethics no longer occupies a central place in campus life, and universities are not always run ethically. With news of academic misdeeds (not to mention more spectacular academic scandals, such as the Churchill affair) continuing to unfold, the public rightly grows distrustful of universities.

It is time for the academy to heed the AAUP’s 1915 declaration, which warned that if the professoriate “should prove itself unwilling to purge its ranks of … the unworthy… it is certain that the task will be performed by others.” 

Must universities learn the practical value of ethical virtue by having it imposed from without?  Or is ethical revival possible from within? 

Candace de Russy and Mitchell Langbert

Candace de Russy is a trustee of the State University of New York and a Hudson Institute Adjunct Fellow. Mitchell Langbert is associate professor of business at Brooklyn College of the City University of New York.

Violence and the Sacred

Passing a Roman Catholic bookshop not long ago, I noticed a window display of books by and about Pope Benedict XVI, including a volume of interviews done back when he was Cardinal Joseph Ratzinger. The acquisitive urge was short-circuited by the fact that the store was closed. And in any case, I'll probably get an earful about his doctrines and policies soon enough from my mother-in-law. She's a Vatican II-type liberal who writes for a dissident Catholic newspaper, of the kind likely to be amused by the rumor that the new pontiff's "street name" is Joey Rats.

Eventually,  the right combination of free time and impulse book-buying will make it feasible to catch up with the pope's thinking straight from the source. But for now, it's interesting to see that the summer issue of New Perspectives Quarterly has an interview about Benedict XVI with the literary theorist René Girard, who is now professor emeritus in French at Stanford University.

The introduction to the interview describes him as a professor of anthropology --  a mistake, but an interesting one.
Beginning in the late 1950s, Girard published a series of analyses of Cervantes, Shakespeare, Dostoevsky, and Proust (among others) that foregrounded their preoccupation with desire, envy, and imitation. He found that there was a recurrent structure in their work: a scenario of what he called "triangular" or "mimetic" desire. Don Quixote offers a fairly simple example. The would-be knight feels no particular longing for Dulcinea. Rather, he has thrown himself into a passionate imitation of certain models of what a knight must do -- and she's as close to a damsel as circumstances allow.

Girard argued that, at some deep level, all of human desire is like that. We learn by imitation -- and one of the things we learn is what, and how, to desire. (Hence, I didn't so much want that book in the window for its own sake, but as a means to triumph in the struggle for the position my wife calls "Ma's favorite son-in-law.")

For the most part, we are blind to the mediated nature of desire. But the great writers, according to Girard, are more lucid about this. They reveal the inner logic of desire, including its tendency to spread -- and, in spreading, to generate conflict. When several hands reach for the same object, some of them are bound to end up making fists. So begins a cycle of terror and retaliation; for violence, too, is mimetic.

By the 1970s, Girard had turned all of this into a grand theory of human culture. He described a process in which the contagion-like spread of mimetic desire and violence leads to the threat of utter social disintegration. At which point, something important happens: the scapegoat emerges. All of the free-floating violence is discharged in an act of murder against an innocent person or group which is treated (amidst the delirium of impending collapse) as the source of the conflict.

A kind of order takes shape around this moment of sacrificial violence. Myths and rituals are part of the commemoration of the act by which mimetic desire and its terrible consequences were subdued. But they aren't subdued forever. The potential for a return of this contagion is built into the very core of what makes us human.

Girard's thinking has not changed much in the 30 years or so since he published Violence and the Sacred, which appeared in France in 1972 and in an English translation from Johns Hopkins University Press in 1977. He has restated his theory any number of times, drawing in material from the various social sciences as evidence. He has spelled out some of its theological implications -- which, in Girard's own telling anyway, are profoundly Christian. He wasn't a believer when he started thinking about mimetic desire, but became a Catholic somewhere along the way. (Girard's readers have a right to expect a detailed spiritual autobiography, at some point.)

It isn't necessary to share Girard's creed to find his work of interest -- though I must admit to some uncertainty, after all this time, about how to classify his system of thought. You can trace some of his ideas back to Hegel (desire for the desire of the other), or sideways to Georges Bataille and Kenneth Burke (who both wrote about scapegoating). But there's also something reminiscent of Middlemarch about the whole thing, as if Girard were trying to finish Edward Casaubon's "Key to All Mythologies."

Girard has a small academic following, organized as the Colloquium on Violence and Religion, which produces an interdisciplinary journal called Contagion: Journal of Violence, Mimesis, and Culture. And there's a useful annotated bibliography of works by and about him available online.

The interview in this summer's issue of New Perspectives Quarterly is interesting, not just for Girard's comments on the new head of his own church, but for his thoughts on the dangers of mimetic desire in a global marketplace. One counterintuitive element of Girard's theory is that scapegoating is not the product of difference. Rather, he holds that mimetic desire and the resulting cycle of conflict tend to reduce people to the same level. (The moment of savage violence against the scapegoat is an effort to create a difference, a structure, an order in the chaos of sameness.) That would be the dark side of Tom Friedman's peppy thesis about how the world is now "flat."

The interview is also striking for Girard's full-throated proclamation that Christianity, alone among religions, can face the truth about mimetic desire. In a smart and welcome move, the editors of the Quarterly have invited the comments of someone from another religious tradition with very definite ideas about the intimate relationship between desire and human misery, Pankaj Mishra, author of An End to Suffering: The Buddha in the World, published last year by Farrar, Straus and Giroux.

Scott McLemee


Will to Power

For some time now, I have been collecting notes on the interaction between academics and journalists. In theory, at least, this relationship ought to be mutually beneficial -- almost symbiotic. Scholars would provide sound information and authoritative commentary to reporters -- who would then, in turn, perform the useful service of disseminating knowledge more broadly. 

So much for the theory. The practice is not nearly that sweet, to judge by the water-cooler conversation of either party, which often tends toward the insulting. From the mass-media side, the most concise example is probably H.L. Mencken's passing reference to someone as "a professor, hence an embalmer." And within the groves of academe itself, the very word "journalistic" is normally used as a kind of put-down. 

There is a beautiful symmetry to the condescension. It's enough to make an outsider -- someone who belongs to neither tribe, but regularly visits each -- wonder if some deep process of mutual-definition-by-mutual-exclusion might be going on. And so indeed I shall argue, one day, in a treatise considering the matter from various historical, sociological, and psychoanalytic vantage points. (This promises to be a book of no ordinary tedium.)

A fresh clipping has been added to my research file in the past couple of days, since reading Brian Leiter's objection to a piece on Nietzsche appearing in last weekend's issue of The New York Times Book Review. The paper asked the novelist and sometime magazine writer William Vollmann to review a biography of Nietzsche, instead of, let's say, an American university professor possessing some expertise on the topic.

For example, the Times editors might well have gone to Leiter himself, a professor of philosophy at UT-Austin and the author of a book called Nietzsche on Morality, published three years ago by Routledge. And in a lot of ways, I can't help wishing that they had. It would have made for a review more informative, and less embarrassingly inept, than the one that ran in the paper of record. 

Vollmann's essay is almost breathtaking in its badness. It manages to drag the conversation about Nietzsche back about 60 years by posing the question of whether Nietzsche was an anti-Semite or a proto-Nazi. He was not, nor is this a matter any serious person has discussed in a very long time. (The role of his sister, Elisabeth Förster-Nietzsche, is an entirely different matter: Following his mental collapse, she managed to create a bizarre image of him as theorist of the Teutonic master-race, despite Nietzsche's frequent and almost irrepressible outbursts of disgust at the German national character.)

And while it is not too surprising that a review of a biography of a philosopher would tend to focus on, well, his life -- and even on his sex life, such as it was for the celibate Nietzsche -- it is still reasonable to expect maybe a paragraph or two about his ideas. Vollmann never gets around to that. Instead, he offers only the murkiest of panegyrics to Nietzsche's bravery and transgressive weirdness -- as if he were a contestant in some X Games of the mind, or maybe a prototype of Vollmann himself. (Full disclosure: I once reviewed, for the Times in fact, Vollmann's meditation on the ethics of violence -- a work of grand size, uncertain coherence, and sometimes baffling turgidity. That was six weeks of my life I will never get back.)

Leiter has, in short, good reason to object to the review. And there are grounds, too, for questioning how well the Times has served as (in his words) "a publication that aspires to provide intellectual uplift to its non-scholarly readers."

Indeed, you don't even have to be an academic to feel those reservations. Throughout the late 1990s and early 2000s, for example, many readers would spend their Saturday afternoons studying a weekly section of the Times called "Arts and Ideas," trying to figure out where the ideas were.
By a nice paradox, though, the coverage of ideas improved, at least somewhat, after the "Arts and Ideas" section disappeared. (See, for example, the symposium inspired earlier this summer by a Times essay on early American history.) 

So while reading Leiter's complaint with much sympathy, I also found some questions taking shape about its assumptions -- and about his way of pressing the point on Vollmann's competence.
For one thing, Leiter takes it as a given that the best discussion of a book on Nietzsche would come from a scholar -- preferably, it seems, a professor of philosophy. At this, however, certain alarm bells go off. 

The last occasion Leiter had to mention The New York Times was shortly after the death of Jacques Derrida. His objection was not to the paper's front-page obituary (a truly ignorant and poorly reported piece, by the way). Rather, Leiter was unhappy to find Derrida described as a philosopher. He assured his readers that Derrida was not one, and had never been taken the least bit seriously within the profession, at least here in the United States.

I read that with great interest, and with a sense of discovery. It meant that Richard Rorty isn't a philosopher, since he takes Derrida seriously. It also suggested that, say, DePaul University doesn't actually have a philosophy program, despite appearances to the contrary. (After all, so many of the "philosophy" professors there are interested in deconstruction and the like.) 

One would also have to deduce from Leiter's article that the Society for Phenomenology and Existential Philosophy is playing a very subtle joke on prospective members when it lists Derrida as one of the topics of interest they might choose to circle on the form they fill out to join the organization.

An alternative reading, of course, is that some people have a stringent and proprietary sense of what is "real" philosophy, and who counts as a philosopher. And by an interesting coincidence, such people once ruled Nietzsche out of consideration altogether. Until the past few decades, he was regarded as an essayist, an aphorist, a brilliant literary artist -- but by no means a serious philosopher. ("A stimulating thinker, but most unsound," as Jeeves tells Bertie Wooster, if memory serves.) The people who read Nietzsche in the United States a hundred years ago tended to be artists, anarchists, bohemians, and even (shudder) journalists. But not academic philosophers.

In short, it is not self-evident that the most suitable reviewer of a new book on Nietzsche would need to be a professor -- let alone one who had published a book or two on him. (Once, the very idea would have been almost hopelessly impractical, because there were few, if any.) Assigning a biography of Nietzsche to a novelist instead of a scholar is hardly the case of malfeasance that Leiter suggests. If anything, Nietzsche himself might have approved: The idea of professors discussing his work would really have given the old invalid reason to recuperate.

Vollmann throws off a quick reference to "the relevant aspects of Schopenhauer, Aristotle and others by whom Nietzsche was influenced and against whom he reacted." And at this, Leiter really moves in for the kill.

"As every serious student of Nietzsche knows," he writes, "Aristotle is notable for his almost total absence from the corpus. There are a mere handful of explicit references to Aristotle in Nietzsche's writings (even in the unpublished notebooks), and no extended discussion of the kind afforded Plato or Thales. And apart from some generally superficial speculations in the secondary literature about similarities between Aristotle's 'great-souled man' and Nietzsche's idea of the 'higher' or 'noble' man -- similarities nowhere remarked upon by Nietzsche himself -- there is no scholarship supporting the idea that Aristotle is a significant philosopher for Nietzsche in any respect."

Reading this, I felt a vague mental itch. It kept getting stronger, and would not go away. For the idea that Aristotle was an important influence on Nietzsche appears in the work of the late Walter Kaufmann -- the professor of philosophy at Princeton University who re-translated Nietzsche in the 1950s and '60s.

Kaufmann published an intellectual biography that destroyed some of the pernicious myths about Nietzsche. He made the case for the coherence and substance of Nietzsche's work, and was merciless in criticizing earlier misinterpretations. He has had the same treatment himself, of course, at the hands of later scholars. But it was Kaufmann, perhaps more than anyone else, who made it possible and even necessary for American professors of philosophy to take Nietzsche seriously.

So when Kaufmann wrote that Nietzsche's debt to Aristotle's ethics was "considerable" ... well, maybe Leiter was right. Perhaps Kaufmann was now just a case of someone making "superficial speculations in the secondary literature." But for a nonspecialist reviewer such as Vollmann to echo it did not quite seem like an indictable offense.

So I wrote to Leiter, asking about all of this. In replying, Leiter sounded especially put out that Vollmann had cited both Schopenhauer and Aristotle as influences. (For those watching this game without a scorecard: Nobody doubts the importance of Schopenhauer for Nietzsche.)

"To reference 'Schopenhauer and Aristotle' together as important philosophical figures for Nietzsche -- as Vollmann did -- is, indeed, preposterous," wrote Leiter in one message, "and indicative of the fact that Vollmann is obviously a tourist when it comes to reading Nietzsche. The strongest claim anyone has made (the one from Kaufmann) is that there is a kind of similarity between a notion in Aristotle and a notion in Nietzsche, but not even Kaufmann (1) showed that the similarity ran very deep; or (2) claimed that it arose from Aristotle's influence upon Nietzsche."

Well, actually, yes, Kaufmann did make precisely that second claim. (He also quoted Nietzsche saying, "I honor Aristotle and honor him most highly...") And there is no real ground for construing the phrase "Schopenhauer and Aristotle" to mean "similarly and in equal measure."

There are preposterous things in the writing of William Vollmann. But a stray reference to a possible intellectual influence on Nietzsche is by no means one of them. Nor, for that matter, is the novelist's willingness to venture into a lair protected by fearsome dragons of the professoriat. I wish Vollmann had read more Nietzsche, and more scholarship on him than the biography he reviewed. But whatever else you can say about the guy, he's not a pedant.

In fact, the whole situation leaves me wondering if the problem ought not be framed differently. There is, obviously, a difference between an article in a scholarly journal and one appearing in a publication ordinarily read during breakfast (or later, in, as the saying goes, "the smallest room in the house"). It need not be a difference in quality or intelligence. Newspapers could do well for themselves by finding more professors to write for them. And the latter would probably enjoy it, not in spite of the sense of slumming, but precisely because of it.

But does it follow that the best results would come from having philosophers review the philosophy books, historians review the history books, and so forth? 

The arguments for doing so are obvious enough. But just as obvious are the disadvantages: Most readers would derive little benefit from having intra-disciplinary disputes and niggling points of nuance spill over into the larger public arena.

It is probably a crazy dream, even something utopian, but here is the suggestion anyway. The Times Book Review (or some other such periodical) should from time to time give over an issue entirely to academic reviewers commenting on serious books -- but with everyone obliged to review outside their specialty. Hand a batch of novels to a sociologist. Give some books on Iraq to an ethicist. Ask a physicist to write about a favorite book from childhood. 

It might not be the best set of reviews ever published. But chances are it would be memorable -- and an education for everybody.

Scott McLemee

