I’ll play Marc Antony. I have not come to praise large conferences, but to bury them. It is my opinion that mega humanities conferences are way past their sell-by date. For senior faculty the only reason to go is to schmooze with old friends; for junior faculty they are an onerous duty, and for graduate students they are a rip-off for which professional organizations ought to be collectively ashamed.
First codicil: I speak exclusively of humanities conferences, as they are the only ones I know firsthand. Friends in computing and the sciences tell me that collaborative efforts arise from their conferences. I’m willing to believe them. Maybe it’s a cultural thing. Most humanities people find it so hard to collaborate that their wills stipulate that their notes go with them to the grave.
Second codicil: I have only myself to blame for recent travails. I didn't need to go to my unnamed conference, but I got it into my head that it would be fun. I was wrong. It serves me right for violating my principles.
Five years ago I concluded that humanities conferences were out of touch with the times and vowed to attend only smaller regional meetings with less cachet, but more satisfaction. But I didn’t listen to me. Instead I spent four days and a considerable wad of cash jostling among a throng of over three thousand. I returned home feeling more akin to Ponce de Leon, who sought the Fountain of Youth and found mostly dismal swampland. Sound harsh? See if any of these observations resonate with your own.
Problem One: Outmoded Presentations
We live in the communications age, but the memo apparently never circulated among those studying the liberal arts. For reasons arcane and mysterious, humanities scholars still read papers. That’s tedious enough at a small conference where one might attend six three-paper presentations. At my recent conference, sessions commenced at 8 a.m. and ran past 10 p.m. One could have conceivably attended 30 sessions and heard 90 or more papers, though the only ones with the stamina to attend more than six or seven sessions were either posturing or desperate.
I wanted my four-day sojourn to introduce me to new ideas, concepts, and teaching modules, but the reality of such a grueling schedule is that I was running on fumes by the end of day one. It would have helped if presenters took advantage of new technology, but things seldom got flashier than PowerPoint, a program that, alas, seems to encourage more reading. Let me reiterate something I’ve said for years: the death penalty should apply to those who read anything from a PowerPoint slide other than a direct quote. It's an academic conference, for crying out loud; assume your audience is reasonably proficient at reading! Seriously, does anyone need to fly across the country to listen to a paper? Why not do as science conferences have done for years: post papers online and gather to have a serious discussion of those papers?
The mind-numbing tedium of being read to for four days is exacerbated by the fact that many humanities scholars have little idea about the differences between hearing and reading. If you construct a paper that’s so highly nuanced that understanding it rests upon subtle turns of phrase or complicated linguistic shifts, do not look up from your paper with a wan smile indicating you are enamored of your own cleverness; go back to your room and rewrite the damn thing. Audience, clarity, and coherence are pretty much the Big Three for speech and composition, unless one's audience is the International Mindreaders' Society. By the way, is there something wrong with using a map, providing a chart, or summarizing a work that few in the room are likely to have read? And do bother to tell me why your paper matters.
I actually heard several very exciting papers, but most of the offerings were dreadful. Note to young scholars: stop relying on the Internet and check out journals that predate 1995 before you proclaim a “discovery.” And if you really want to stand out, work on your shtick. Guess which papers I remember? Yep -- those in which the presenter did more than read to me.
Critical note to young scholars: Want to turn off everyone in the room? Be the person who doesn’t think that the 20-minute limit applies to you. Nothing says "non-collegial" more clearly.
Problem Two: Expense
Another reason to rethink conferences is that they cost an arm and a leg to attend. I had partial funding from my university because I was presenting -- and no, I bloody well did not read my paper -- but I was still out of pocket for quite a hunk of cash. If you attend a humanities conference and want to stay anywhere near the actual site of the event, plan on $150 per night for lodging in a soulless franchise hotel with windowless conference rooms and quirky technology, $20 per day for Internet access, another $200 for conference fees, roughly $500 for airfare, at least $50 for taxis to and from the airport -- almost no U.S. city has a convenient shuttle service anymore -- and money for whatever you plan on eating.
Budget plenty for the latter if your conference is in what is glibly called a Destination City. That’s shorthand for a theme area marketing itself as unique, though it’s actually a slice of Generica surrounded by shops and restaurants identical to those found in suburban malls in every way except one: captive audiences equal higher prices. (One small example: the Starbucks inside the pedestrian precinct at my hotel charged a buck more per cup than the one on the street 100 yards away.) Do the math and you can see that you can easily drop a few grand on a megaconference. (That’s what some adjuncts are paid per course!)
An immediate cost-saving adjustment would be to confine conferences to airline hub cities such as New York, Chicago, Los Angeles, Atlanta, and Houston. The moment the conference is held in a (not my term) "second-tier" city, allot another few hundred dollars for "connecting flights," a term used by the airline industry because it sounds nicer than saying you’ll spend six hours waiting in a hub, after you’ve sprinted through the airport like Usain Bolt for your next flight, found the gate closed, and retreated to the rebooking counter.
Problem Three: Victimized Grad Students
I'm a parsimonious Scot who resents spending money on boring hotels and lousy food, but I can afford it when I have to. Grad students can’t. A major way in which megaconferences have changed in the past several decades is that there’s considerably less balance between senior scholars, junior colleagues, and graduate students. (Senior scholars used to accompany the latter two in a mentor capacity.) Now there is just a smattering of senior and junior scholars, and they’re often holed up in hotel suites conducting interviews. Whenever they can, search committee members flee the conference and rendezvous with old friends. They might attend a session or two. Unless they have to be there, there aren’t many junior colleagues in attendance at all because they're busy getting material into publication and they can meet presentation expectations at cheaper regional meetings, or save their dough and go to prestigious (-sounding) international gatherings.
So who’s left? Graduate students. Lots of graduate students. So many that conservationists would recommend culling the herd if they were wild mustangs. Grad students have always gone to conferences in hopes of making their mark, attracting attention, and meeting people who can help them advance. That was the way it was done -- 20 years ago. Now network opportunities are slimmer. Whom do they meet? Mostly other grad students, often those massed outside of interview rooms.
Of all the antiquated things about large conferences, the "cattle call" interview is the most perverse. These were barbaric back in the days in which there were jobs; now they’re simply cruel. At least a third of attendees at my conference were grad students from a single discipline: English. As has been discussed many times on this site, most of them shouldn't be in grad school in the first place. How many of the thousand-plus English grad students can realistically hope to get an academic job of any sort?
The Modern Language Association predicts that only 900 English jobs will come open for all of 2011. That’s 900 in all specialties of English, the bulk of which will be in writing and rhetoric, not Austen and Proust. Will a fifth of those at the conference get a job? The odds are long. It's probably more like half of that, and if we're talking about a good job, slice it in half once more. So why ask strapped grad students to attend expensive conferences for 15-minute preliminary interviews? Do a telephone interview, for heaven’s sake; it’s kinder on both grad students and search committees.
As I did as a grad student, many young hopefuls pooled resources and economized where they could, but the sad truth is that the vast majority of attendees spent a small fortune on a gamble whose odds aren't much better than those of a lottery ticket. Are associations playing the role of enabler to grad student delusions? Yes. Here’s another thought: Instead of holding a big conference, sponsor a teleconference. Charge a fee for uploads, but give speakers one-year access to the URL, which they can make available to potential employers. Use the association’s savings to lobby for more tenure-track faculty.
Problem Four: No-Shows
You spend lots of money, you sit through desultory talks, and head off to the one or two sessions that made you want to attend the conference in the first place. What do you find? It’s been canceled because only one of the presenters showed up, and that paper was combined with several others from sessions that suffered the same fate. Didn’t you see the 3x5 card tacked to the conference bulletin board?
As noted above, I’m in favor of putting large conferences to rest. But if we insist on having them, let’s at least make sure they’re as advertised. O.K., things do happen, but in most cases missing presenters are simply AWOL. I know it smacks of McCarthyism, but I’ve come to support the idea of a data bank of no-shows that employers, conference planners, and deans can check.
Problem Five: Urban Sprawl
What’s the point of a conference that’s so big it’s inaccessible? I walked between two different hotels to attend sessions and pored over a Britannica-sized program to locate them. Conference attendees were housed in four "official" hotels and untold numbers of others. With round-the-clock sessions and decentralization, the few networking opportunities that did exist were logistically difficult. It took me two entire days to find my old friends, let alone new folks I wanted to engage. I met two interesting people at the airport. I never saw them again.
In Praise of Small Conferences
There are other problems I’ll leave for now, including the gnawing suspicion that some big conferences have become sinecures for "insiders" who have become "players" within associations. Let’s just say that there is a serious disconnect between how the big conferences operate and what makes sense in the changing world of academe.
Teleconferences with real-time discussion groups and online forums would be one good starting point for reform; providing more resources for regional and local conferences would be another. Small gatherings have issues of their own -- no-shows, sparsely attended sessions, overreliance on volunteers -- but they compensate by offering intimacy, good value, face-to-face feedback, and easier opportunities to network. It's time to give these the cachet they deserve. The big conference is like a one-size-fits-all t-shirt; it simply doesn’t fit most people. I’m done. For real. Unless I get funding for an exotic overseas meeting. (Just kidding!)
Rob Weir, who writes the "Instant Mentor" column for Inside Higher Ed's Career Advice section, has published six books and numerous articles on social and cultural history, and has been cited for excellence in teaching on numerous occasions during his 20 years in the college classroom.
In the early 1970s, a French publisher issued a sort of photo album devoted to Jean-Paul Sartre, who was the most famous philosopher in the world. He had been for some while, so the photojournalistic dossier on him was quite thick. The book is full of pictures of him alongside equally famous figures from the world stage -- Camus and Castro, for example, and Simone de Beauvoir, of course. You also see him in the midst of dramatic events, as when he addressed an assembly of revolutionary students during May ’68. There are a few images of the philosopher in a less public capacity. As I recall, there is a baby portrait or two. Plus there were pictures of the Sartrean babes, who seemed to get younger as he got older.
The man was a philosophical action figure, to be sure. But my favorite pages in the book show him at his desk, with manuscripts piled up precariously nearby, or at a café table, scribbling away. Sartre once said that he felt like a machine while working on The Critique of Dialectical Reason, grinding out each day’s quota of concepts. And that’s what’s happening in those photographs of him with pen in hand and tobacco pipe in jaw -- tuning out everything else but the hard work of philosophizing. But who knows? A photograph cannot document thought. It’s entirely possible that Sartre was updating his schedule to accommodate a new girlfriend, rather than analyzing Stalinism.
The same brain did both -- a fact that lends itself to philosophical inquiry. Just where do you draw the line between task-oriented thinking and whatever it is philosophers do while they are “doing philosophy”? It is a conundrum.
In his new book Philosophers, from Oxford University Press, the New Yorker photographer Steve Pyke assembles a portrait gallery of contemporary thinkers. It embodies a conundrum or two of its own -- beginning with the title. In 1995, the British press Zelda Cheatle issued a collection of Pyke’s photographs that was also called Philosophers, which now fetches a high price from secondhand dealers. These are, it bears stressing, completely distinct books. All but one of the pictures in the new collection were taken over the past decade. Only two images from the earlier volume appear in the new one -- in the introductory pages, separate from the hundred portraits making up the main body of the book.
So we have, in other words, two volumes of the same kind, on the same subject, by the same author. They bear the same title. And yet they are not identical. A teachable moment in metaphysics? Yes, but one with practical implications for the used-book trade: a certain percentage of people trying to buy the older volume online will end up getting really, really irritated.
The book from Oxford is quite handsome. And its status as an aesthetic object is not a minor consideration. (For that matter, its aesthetics as a status object are also pretty demanding. It feels like you should get a nicer coffee table, just to have someplace to put it.) Without going so far as to say that Pyke represents philosophers as a subcategory of the beautiful people, he certainly renders them in beautiful black and white.
Ethnography forms no part of what he has in mind: his photographs do not show subjects going about their daily routines or occupying their usual niches. It’s difficult to think of Sartre without picturing him in certain settings – bars, cafés, lecture halls, etc. Furthermore, these places aren’t just elements of his biography; they figure into his work (the waiter in Being and Nothingness is an obvious example). Pyke’s philosophers, by contrast, hang in the void. Usually they are set against a solid black backdrop. The one conspicuous exception is the portrait of Michael Friedman, with an unreadable chalkboard diagram behind him. Their heads loom like planets in the depths of space. The camera registers the texture of skin and hair, the expression on the lips and in the eyes. Scarcely anything else enters the frame -- an earring, perhaps, or the neck of a sweater. Most of the subjects look right into the camera, or just to the side.
With Pyke, the thinker becomes, simply, a face. The effect is intimate, but also strangely abstract. The place and date of each photo session are indicated, but the book provides no biographical information about the subjects. I recognized about a quarter of them off the top of my head, such as Robert Brandom, David Chalmers, Patricia Churchland, Arthur Danto, Sydney Morgenbesser, Richard Rorty. A couple are even on TV from time to time. Both Harry Frankfurt and Bernard-Henri Levy have been on "The Daily Show." That two or three pages could not be found to list a couple of books by each figure is puzzling, although most of the portraits are accompanied by very brief remarks by the subjects on the nature or motivation of their work.
“Philosophy is the way we have of reinventing ourselves,” says Sydney Morgenbesser. Ruth Millikan quotes Wilfrid Sellars from Science, Perception, and Reality: “The aim of philosophy, abstractly formulated, is to understand how things in the broadest possible sense of the term hang together in the broadest possible sense of the term.” Fortunately not everyone is so gnomic. The comments by Jerry Fodor seem the funniest: “To the best of my recollection, I became a philosopher because my parents wanted me to become a lawyer. It seems to me, in retrospect, that there was much to be said for their suggestion. On the other hand, many philosophers are quite good company; the arguments they use are generally better than the ones that lawyers use; and we do get to go to as many faculty meetings as we like at no extra charge.”
The ambivalence in Sally Haslanger’s statement felt more than vaguely familiar: “Given the amount of suffering and injustice in the world, I flip-flop between thinking that doing philosophy is a complete luxury and that it is an absolute necessity. The idea that it is something in between strikes me as a dodge. So I do it in the hope that it is a contribution, and with the fear that I’m just being self-indulgent. I suppose these are the moral risks life is made of.” That sounds quite a bit like Sartre, actually.
In the interview prefacing the collection, Pyke says that his intention is to make philosophers “seem more human, less of a mystery.” And that is where the true conundrum lies. Some philosophers look dyspeptic, while others have goofy smiles, but that isn’t what makes them human -- let alone philosophers. Making something “more human” precludes rendering it “less of a mystery,” since the human capacity for thought is itself an ever-deepening mystery.
Pyke thinks visually. A more interesting commentary on the figures in his portrait gallery might come indirectly, from the late Gilbert Ryle. An Oxford don and the author of The Concept of Mind, he gave a lecture that tried to sort out the relationship between deep cogitation and various other sorts of mental activity. To that end, he focused on the question of what that naked guy in Rodin's sculpture was doing -- and how it presumably differed from, say, a professor preparing to teach a class.
“The teacher has already mastered what he wants his students to master,” said Ryle. “He can guide them because he is on his own ground. But le Penseur is on ground unexplored by himself, and perhaps unexplored by anyone. He cannot guide himself through this jungle. He has to find his way without guidance from anyone who already knows it, if anyone does know it…. The teacher is a sighted leader of the blind, where le Penseur is a blind leader of the blind -- if indeed the very idea of his being or having a leader fits at all.”
That seems like a good description of what the subjects of Pyke's photographs spend their time doing. Not, of course, while the camera is turned on them. To judge by the expressions of some, their thoughts may have been something closer to, "Wow, I'm being photographed by someone from The New Yorker. How did that happen?"
A few weeks ago, sitting over a cup of coffee, a writer in his twenties told me what it had been like to attend a fairly sedate university (I think he used the word "dull") that had a few old-time New Left activists on its faculty.
"If they thought you were interested in anything besides just your career," he said, "if you cared about ideas or issues, they got really excited. They sort of jumped on you."
Now, I expected this to be the prelude to a little tribute to his professors – how they had taken him seriously, opened his mind to an earlier generation’s experience, etc. But no.
"It was like they wanted to finish their youth through you, somehow," he said. "They needed your energy. They needed you to admire them. They were hungry for it. It felt like I had wandered into a crypt full of vampires. After a while, I just wanted to flee."
It was disconcerting to hear. My friend is not a conservative. And in any case, this was not the usual boilerplate about tenured radicals seeking to brainwash their students. He was not complaining about their ideas and outlook. This vivid appraisal of his teachers was not so much ideological as visceral. It tapped into an undercurrent of generational conflict that the endless "culture wars" seldom acknowledge.
You could sum it up neatly by saying that his professors, mostly in their fifties and sixties by now, had been part of the "Baby Boom," while he belonged to "Generation X."
Of course, there was a whole segment of the population that fell between those two big cultural bins -- people born at the end of the 1950s and the start of the 1960s. Our cohort never had a name, which is probably just as well. (For one thing, we’ve never really believed that we are a "we." And besides, the whole idea of a prepackaged identity based on what year you were born seems kind of tacky.)
One effect of living in this no-man’s-land between Boomers and Xers is a tendency to feel both fascinated and repulsed by moments when people really did have a strong sense of belonging to a generation. The ambivalence is confusing. But after a while it seems preferable to nostalgia -- because nostalgia is always rather simple-minded, if not dishonest.
The recent documentary The Weather Underground (a big hit with the young-activist/antiglobalization crowd) expressed doe-eyed sadness that the terrible Amerikan War Machine had forced young idealists to plant bombs. But it somehow never mentioned that group’s enthusiasm for the Charles Manson "family." (Instead of the two-fingered hippie peace sign, Weather members flashed a three-finger salute, in honor of the fork used to carve the word "war" into the stomach of one of the victims.) Robert McNamara and Henry Kissinger have a lot of things to answer for – but that particular bit of insanity is not one of them.
Paul Berman, who was a member of Students for a Democratic Society at Columbia University during the strike of 1968, has been writing about the legacy of the 1960s for a long time. Sometimes he does so in interesting ways, as in parts of his book A Tale of Two Utopias; and sometimes he draws lessons from history that make an otherwise placid soul pull out his hair with irritation. He has tried to sort the positive aspects of the 1960s out from the negative -- claiming all the good for a revitalized liberalism, while treating the rest as symptoms of a lingering totalitarian mindset and/or psychological immaturity.
Whatever the merits of that analysis, it runs into trouble the minute Berman writes about world history -- which he always paints in broad strokes, using bright and simple colors. In his latest book, Terror and Liberalism, he summed up the last 300 years in terms that suggested Europe and the United States had grabbed their colonies in a fit of progress-minded enthusiasm. (Economic exploitation, by Berman’s account, had nothing to do with it, or not much.) Terror and Liberalism is a small book, and easy to throw.
His essay in the new issue of Bookforum is, to my mind, part of the thoughtful, reflective, valuable side of Berman’s work. In other words, I did not lose much hair reading it.
The essay has none of that quality my friend mentioned over coffee – the morbid hunger to feast off the fresh blood of a younger generation’s idealism. Berman has fond recollections of the Columbia strike. But that is not the same as being fond of the mentality that it fostered. "Nothing is more bovine than a student movement," he writes, "with the uneducated leading the anti-educated and mooing all the way."
The foil for Berman’s reflections is the sociologist Daniel Bell, who left Columbia in the wake of the strike. At the time, Bell’s book The End of Ideology was the bête noire of young radicals. (It was the kind of book that made people so furious that they refused to read it – always the sign of the true-believer mentality in full effect.) But it was Bell’s writing on the history of the left in the United States that had the deepest effect on Berman’s own thinking.
Bell noticed, as Berman puts it, "a strange and repeated tendency on the part of the American Left to lose the thread of continuity from one generation to the next, such that each new generation feels impelled to reinvent the entire political tradition."
There is certainly something to this. It applies to Berman himself. After all, Terror and Liberalism is pretty much a jury-rigged version of the Whig interpretation of history, updated for duty in the War on Terror. And the memoiristic passages in his Bookforum essay are, in part, a record of his own effort to find "the thread of continuity from one generation to the next."
But something else may be implicit in Bell’s insight about the "strange and repeated tendency" to lose that thread. It is a puzzle for which I have no solution readily at hand. Namely: Why is this tendency limited to the left?
Why is it that young conservatives tend to know who Russell Kirk was, and what Hayek thought, and how Barry Goldwater’s defeat in 1964 prepared the way for Reagan’s victory in 1980? Karl Marx once wrote that "the tradition of all the dead generations weighs like a nightmare on the brain of the living." So how come the conservatives are so well-rested and energetic, while the left has all the bad dreams?
Scott McLemee writes Intellectual Affairs on Tuesdays and Thursdays.
On newsstands now -- or at least the ones with a decent selection of foreign periodicals -- you can find a special number of Le Magazine littéraire devoted entirely to Jean-Paul Sartre. Last month was the 25th anniversary of the grand funeral procession in Paris that drew 50,000 people out into the spring rain to see him off. (It was the last great demonstration of the 1960s generation, as people said at the time.) And next month marks the centennial of his birth.
He was "the conscience of his times," the cover announces. That is certainly arguable. It tends to equate denunciation with ethical critique. The man who declared, in 1952, that Soviet citizens enjoyed perfect freedom to criticize their government should probably be Exhibit A for any demonstration that sometimes contrarianism is not enough.
But what is not in doubt – to judge by the rest of the issue – is that Sartre was the most-photographed philosopher in history.
I don’t mean that to be quite as sardonic as it probably sounds. One of Sartre’s definitive themes was the struggle between radical human freedom (in which each moment of one’s existence involves a choice) and the alienating experience of being defined by the gaze of the Other. It is striking to look at so many pictures of Sartre -- so many instances when the camera “caught” him, freezing him into an icon.
It is a commonplace now to grumble that the golden age of the grand intellectual is gone, replaced by what, in the late 1990s, Pierre Bourdieu called les fast-thinkers, those well-spoken authorities who can be summoned to the television studio to "think faster than a speeding bullet." Nobody who struggles through the denser pages of Sartre’s work -- his efforts to create an "existential psychoanalysis," for example, and then to square it with the work of Marx -- will confuse him with, say, David Brooks.
The other posthumous role Sartre plays in the cultural imagination is that of the academic escapee. He was a philosopher who lived and worked "in the world," not in the university. Indeed, reading the letters he exchanged with Maurice Merleau-Ponty during their political falling-out, I was struck by a passage in which Sartre needles his ex-friend for lacking the nerve to take some distance from academic life.
So the lingering afterimage of Sartre is now a double negation: Neither pundit nor professor, a mind that was free – or at least moving somewhere beyond both media and academe.
Not only did the philosopher escape their gravitational pull, but he was at war with the tendency of his own work to solidify into something inert. He rejected the Nobel Prize for Literature because, among other things, he said that he did not want to become an institution. But Sartre also spoke of thinking "against myself" – of throwing himself into the struggle "to break some bones in my own head."
Interviewed in 1959 – while finishing the first volume of his Critique of Dialectical Reason, a work whose ungainly style owes something to the fact that Sartre popped amphetamines constantly while writing it – he said he was looking forward to having his ideas “deposited in little coffins.” He wanted to be done with the ideas that he had been pursuing; he wanted to feel empty again.
"A writer is fortunate if he can attain such a state," he told the interviewer. "For when one has nothing to say, one can say everything....What is primary is what I haven’t yet written – what I intend to write (not tomorrow, but the day after tomorrow) and what perhaps I will never write."
It is an incredibly appealing notion – a dream of perfect freedom, and of constant renewal. And it is also something of an illusion. So one learned from Annie Cohen-Solal's Sartre: A Life, first published in France in 1985. (The translation has just been reissued by the New Press – one of the few publications in English, at least so far, to mark this year’s double anniversary.)
Cohen-Solal’s portrait revealed a man who was both an incredibly well-polished product of the upper echelons of the French educational system and a canny player in the game of Parisian literary politics. As his work reached an American readership in the late 1940s, its arrival was shaped by a similar combination of academic and media forces. Ann Fulton’s book Apostles of Sartre: Existentialism in America 1945-1963 (Northwestern University Press, 1999) shows how his reception benefited from both the support of professors in the French department at Yale University and publications about him in popular magazines.
He ascended to the position of "consecrated heretic" -- a culturally sanctioned gadfly, the embodiment of social criticism, in defiance of all conformism. That is the term (coined by Bourdieu) that Paul M. Cohen applies to Sartre in Freedom’s Moment: An Essay on the French Idea of Liberty from Rousseau to Foucault (University of Chicago Press, 1997).
The consecrated heretic has no authority beyond the power of his thought and his example. Someone unimpressed or horrified by any particular consecrated heretic is likely to notice that he is also thereby relieved of any particular responsibility or accountability.
"Responsibility" was, of course, a decisive term in the Sartrean lexicon. Each of us is ultimately responsible for creating, from our freedom, whatever commitments give meaning to our lives. But on a less metaphysical plane of responsibility, Sartre’s example presents an especially knotty set of problems.
In his introduction to the new edition of Cohen-Solal’s biography, Cornel West offers a judicious evaluation of the thinker’s political legacy – his courage and incisiveness in denouncing racism and colonialism, the amoral and sometimes merely dishonest nature of some of his statements about Communism, and his surprising tendency to downplay the Palestinian question. Sartre was a passionate and very firm supporter of the right of Israel to exist as a state.
So how do you come to a final judgment on such a figure? Was he the "conscience of his time" or a symptom of its disorders?
It’s not clear that a decision is possible or desirable. At least, not on any level beyond choosing either idol-worship or contempt. I’ve been reading Sartre my entire adult life (and then some) – and have gone to both extremes, along the way.
And now, in the midst of these anniversaries, what impresses me is less any given position he assumed than his willingness -- from time to time -- to think against himself. There is much to admire in his example, and much to avoid. But more than anything else, I keep going back to him in hopes of learning how to break a few bones in my head.
Scott McLemee writes Intellectual Affairs on Tuesdays and Thursdays.
Paul Ricoeur -- the philosopher whose writings on hermeneutics were the cornerstone of an ambitious rethinking of the relationship between the humanities and the social sciences -- died on Friday at the age of 92.
By the late 1960s, American academic presses had made him one of the first French thinkers of his generation with a substantial body of work available in English. Even as an octogenarian, he was more productive than many scholars half his age. Late last year, the University of Chicago Press published Memory, History, Forgetting -- an enormous study of the conditions of possibility for both historical writing and moral forgiveness. His book The Course of Recognition is due from Harvard University Press this fall. And Ricoeur himself provided the ideal survey of his life and philosophical development in Critique and Commitment, a lively set of interviews that Columbia University Press issued in 1998.
At the time of his death, he was professor emeritus at both the University of Paris and the University of Chicago. "The entire European humanist tradition is mourning one of its most talented spokesmen," said a statement from the office of Jean-Pierre Raffarin, the prime minister of France, released over the weekend.
And that leads to a conundrum. It is Tuesday already, and nobody in the American media has insulted Ricoeur yet. What's going on? Have our pundits lost their commitment to mocking European intellectuals and the pointy-headed professors who read them?
At first I thought it might be that people were still tired from abusing Derrida following his death last fall. But clearly that's not it.
Over at The National Interest, for example, the neocons are already going after Jurgen Habermas, who is still very much alive, by the way. Despite being almost fanatically moderate -- his residual Marxism tempered by an admiration for the American pragmatist philosophers, combined with a hearty enthusiasm for constitutional democracy -- Habermas has criticized the invasion and occupation of Iraq. Besides, his political philosophy is grounded in the theory that the fundamental tendency of language is toward open and truthful communication. You can see where that would bother some people.
Yet Ricoeur, it seems, gave no one offense. With hindsight, that was a terrible oversight. A thinker receives significant publicity if (and only if) people speak his name in tones of apoplectic hysteria.
It was his own fault, really. Even more than Habermas, he was the soul of civility and calm intellectual labor. You will read many a page of Ricoeur before coming across any flash of apocalyptic rhetoric or stagy contrarianism. The closest thing to a bracing put-down in his writing is probably Ricoeur's definition of structuralism as "Kantianism without a transcendental subject." (Which is actually kind of funny. But not, you know, Slavoj-Zizek-obscene-joke funny.)
And while his work connected up with numerous other fields (sharing borders with linguistics, psychoanalysis, sociology, history, religious studies, literary theory, and law, just to give the short list), Ricoeur was in many ways an academic philosopher of a very traditional sort. His philosophy made no effort to jump out of its own skin -- to become, say, a form of avant-garde literature, or a conceptual weapon for guerrilla warfare. His work tends to be ambitious, edifying, cumulative, and ... well, just a little bit dull at times.
That is not meant as a criticism exactly. Many things in the world are exciting without being good for you. Some readers find that the work of Ricoeur's younger colleague Gilles Deleuze is rather like the philosophical equivalent of taking LSD. (I can attest that, yes, there are definite similarities.) But there are limits to how much of cultural life can be conducted as a rave.
Perhaps the most salutary aspect of Ricoeur's work is that it has precisely the opposite effect of either the polemical or the psychedelic modes of intellectual intervention. It synthesizes, rather than obliterates. It treats the process of interpretation as an act of opening to the possibility of communication with the entire human community -- not as the moment when Narcissus becomes fascinated by the image in the pool.
Before dealing with the substance of Ricoeur's work, however, I want first to grapple more with the uncanny silence since his death. In American public life, you're nobody until somebody hates you. Frankly, the absence of rancor now verges on the disrespectful.
A modest proposal, then. Just as the neocons are now denouncing the mild-mannered Habermas, this might be the time for leftist wingnuts to go to town on Paul Ricoeur. As ever, total ignorance is no obstacle. Here is a quick checklist of talking points ("shrieking points?") for anyone who wants to get the vitriol flowing.
(1) Paul Ricoeur was a Christian his entire life. Most of his work is secular philosophical analysis, but he did publish writings on the Bible, and even gave sermons. Despite all that unpleasantness between his Huguenot ancestors and the Roman Catholic Church in the 16th and 17th centuries, Ricoeur is known to have spent a fair bit of time in discussion with Pope John Paul II over the years. The pope cited his work, even. Ricoeur denied being a theologian. However, there are subtle echoes and parallels between his philosophical ideas and his religious beliefs.
(For maximum effect, imitate the Fox News style like so: "Some have said that Ricoeur was actually a fundamentalist who used philosophy to brainwash his students." Nobody actually thinks this, but if you repeat it enough, someone will.)
(2) Ricoeur never really joined the "theory counterculture." When he published a major work on Freud and philosophy in 1964, the Lacanians got mad at him for failing to mention their guru. He soon became "the designated enemy," as Francois Dosse writes in his History of Structuralism, targeted by the Marxist students around Louis Althusser. A few years later, Ricoeur was a candidate for a chair at the prestigious Collège de France -- and lost the position to Michel Foucault.
(This definitely raises questions about his commitment to destroying the phallogocentrism of dead white European males. Plus, now he is one.)
(3) In the mid-1960s, Ricoeur was a prominent advocate of reforms in the overburdened French university system. In 1967, he became involved in the launching of a satellite campus in the Parisian suburb of Nanterre, where he soon became dean of the college of letters. Following the student uprising of May 1968, the campus turned into a scene of continuous warfare among radical factions, some armed with chains and iron bars. Ricoeur himself was physically attacked, and some faculty refused to risk coming on campus. Eventually, he requested that police patrol the campus. This only made things worse, and in March 1970 Ricoeur resigned his position and took a long leave of absence.
(In repeating this information, hint that Ricoeur was actually a man of the right. Ignore the fact that Ricoeur was a pacifist, a supporter of the left-wing journal Esprit, and an outspoken opponent of French policy during the Algerian war. Just stress that he called the cops.)
(4) Finally, the clincher. Last November, Ricoeur was named as one of the winners of the John W. Kluge Prize for Lifetime Achievement in the Humanities and Social Sciences, given by the Library of Congress. He split the award of $1 million with Jaroslav Pelikan, a scholar of religious history.
(Note how suspicious it is that he received this award right after Derrida died. Imply that it was the religious right's way of rewarding an intellectual lapdog. Added benefit: now you have an excuse not to read him.)
Well, that was depressing, even as an exercise in satire.
In the middle of writing it, I learned of Russell Arben Fox's "Thoughts on Ricoeur." It's the first really substantial blogospheric commentary on the philosopher's death to come my way -- and a sign that perhaps things are not so dire, after all.
As Fox notes, Ricoeur was one of those thinkers it proved easy to "save for later." Beyond a certain level of familiarity, it seemed hopeless to try to catch up with him, because he was just too prolific. And there was also the fact that his work seems to have gone through somewhere between three and five stages of development.
Be that as it may, I'll try on Thursday to give a thumbnail account of what was (and still is) at stake in his work. Or at least what I know of it.
In the meantime, you might have a look at Ricoeur's acceptance speech for the Kluge Award. Knowing that he wrote it at the age of 91 is a reminder of one benefit of Paul Ricoeur's work: Reading it keeps you humble.
Scott McLemee writes Intellectual Affairs on Tuesdays and Thursdays. In February, he wrote two columns about a scholarly conference on Derrida's legacy.
Pierre Bourdieu had a way of getting under one's skin. I don't mean his overtly polemical works -- the writings against globalization and neoliberalism that appeared toward the end of his life, for example. He was at his most incisive and needling in works that were much less easy for the general public to read. Bourdieu's sociological research kept mapping out the way power, prestige, and exclusion work in the spheres of politics, economics, and culture.
He was especially sharp (some thought brutal) in analyzing the French academic world. At the same time, he did very well in that system; very well indeed. He was critical of the way some scholars used expertise in one field to leverage themselves into positions of influence having no connection with their training or particular field of competence. It could make him sound like a scold. At the same time, it often felt like Bourdieu might be criticizing his own temptation to become an oracle.
In the course of my own untutored reading of Bourdieu over the years, there came a moment when the complexity of his arguments and the aggressiveness of his insights suddenly felt like manifestations of a personality that was angry on the surface, and terribly disappointed somewhere underneath. His tone registered an acute (even an excruciating) ambivalence toward intellectual life in general and the educational system in particular.
Stray references in his work revealed glimpses of Bourdieu as a "scholarship boy" from a family that was both rural and lower-middle class. You learned that he had trained to be a philosopher in the best school in the country. Yet there was also the element of refusal in even his most theoretical work -- an almost indignant rejection of the role of Master Thinker (played to perfection in his youth by Jean-Paul Sartre) in the name of empirical sociological research.
There is now a fairly enormous secondary literature on Bourdieu in English. Of the half-dozen or so books on him that I've read in the past few years, one has made an especially strong impression, Deborah Reed-Danahay's recent study Locating Bourdieu (Indiana University Press, 2005). Without reducing his work to memoir, she nonetheless fleshes out the autobiographical overtones of Bourdieu's major concepts and research projects. (My only complaint about the book is that it wasn't published 10 years ago: Although it is a monograph on his work rather than an introductory survey, it would also be a very good place for the new reader of Bourdieu to start.)
Reed-Danahay is a professor of anthropology at the University of Texas at Arlington. She recently answered a series of questions by e-mail.

Q: Bourdieu published sociological analyses of the Algerian peasantry, the French academic system, the work of Martin Heidegger, and patterns of attendance at art galleries -- to give only a very incomplete list. Yet his work seems much more focused and coherent than a catalog of topics would suggest. Can you sum up the gist of his work, or rather how it all holds together?
A: Yes, I agree that, at first glance, Bourdieu's work seems to cover a disparate series of studies. When I read Bourdieu's work on education in France after first being exposed to his Algerian peasant studies in my graduate work in anthropology, I wondered if this was the same person. But when the entire corpus is taken together, and when one carefully reads Bourdieu's many texts that returned to themes brought up earlier in his work, one can see several underlying themes and recurring intellectual questions.
One way to get a handle on his work is to realize that Bourdieu was interested in explaining social stratification, and the hierarchy of social values, in contemporary capitalist societies. He wanted to study systems of domination in a way that held some room for social agency but without a notion of complete individual freedom. Bourdieu often evoked Althusser as an example of a theorist who had too mechanical a view of internalized domination, while Sartre represented the opposite extreme of a philosopher who posited free will.
Bourdieu believed that we are all constrained by our internalized dispositions (our habitus), deriving from the milieu in which we are socialized, which influence our world view, values, expectations for the future, and tastes. These attributes are part of the symbolic or cultural capital of a social group.
In a stratified society, a higher value is associated with the symbolic capital of members of the dominant sectors versus the less dominant and "controlled" sectors of society. People who go to museums and like abstract art, for instance, are expressing a form of symbolic capital that is more highly valued than that of someone who either rarely goes to museums or who doesn't like abstract art. Each person feels that this is "just" a matter of taste, but it can have important consequences for children at school who have not been exposed to various forms of symbolic capital by their families.
Bourdieu studied both social processes (such as the French educational system, Algerian postcolonial economic dislocations, or the rural French marriage system), and individual figures and their social trajectories -- including Heidegger, Flaubert, and an Algerian worker. Bourdieu was trying to show how the choices these people made (and he often wrote of choices that were not really choices) were expressions of the articulation of habitus and the social field in which it is operating.
Q: Something about his career always seemed paradoxical. Sartre was always his worst-case reference, for example. But by the time of his death in 2002, Bourdieu was regarded as the person who had filled Sartre's shoes. Has your work given you a sense of how to resolve this seeming contradiction?
A: There is a lot of silence in Bourdieu's work on the ways in which he acquired power and prestige within the French academic system or about how he became the most famous public intellectual in France at the time of his death. He was more self-reflexive about earlier periods in his life. I have trouble defending Bourdieu in this contradiction about his stance toward the public intellectual, even though I applaud his political engagements.
I think that Bourdieu felt he had more authority to speak to some of these issues than did other academics, given his social origins and empirical research in Algeria and France among the underclass. Bourdieu repeatedly positioned the sociologist (and one can only assume he meant himself here, too) as having a privileged perspective on the "reality" of systems of domination.
Bourdieu was very critical of Sartre for speaking out about the war in Algeria and for championing a sort of revolutionary spirit among Algerians. Bourdieu accused him of trying to be a "total intellectual" who could speak on any topic and who did not understand the situation in Algeria as profoundly as did the sociologist-ethnologist Bourdieu. When Bourdieu himself became more visible in his own political views (particularly in attacks against globalization and neo-liberalism), he does seem to have acted like the "journalist"-academics he lampooned in Homo Academicus. Nevertheless, when he was criticizing (in his essay On Television) what he saw as the necessity for "fast thinking" on television talk shows in France, where talking heads must quickly have something to say about anything, Bourdieu did (in his defense) refrain from pontificating about any and everything.
There is still a huge controversy raging in France about Bourdieu's political engagements. His detractors vilify him for his attacks against other intellectuals and journalists while he became a public intellectual himself. His defenders have published a book of his political writings (Interventions, 1961-2001) seeking to show his long-standing commitments, and continue to guard his reputation beyond the grave.
I cannot help but think that Bourdieu's public combative persona, and his (in his own terms) refusals and ruptures, helped rather than thwarted his academic career. The degree to which this was calculated or (as he claimed) was the result of the "peasant" habitus he acquired growing up in southwestern France, is unknown.
Q: So much of his analysis of academic life is focused on the French university system that there is always a question of how well it could apply elsewhere. I'm curious about your thoughts on this. What's it been like to move between his concepts and models and your own experience as an American academic?
A: I see two ways to answer your question. Certainly, in the specifics, French academia is very different. I have experienced that directly. My own American cultural values of independence (which may, I am aware, be a total illusion) conflict with those of many French academics.
When I first arrived in France to do my dissertation fieldwork, I came with a grant that opened some doors to French academia, but I had little direct sponsorship by powerful patrons in the U.S. I was doing a project that had little to do with the work of my professors, none of whom had done research in France or Europe, and it was something that I had come up with on my own. This was surprising to the French, who were familiar with a patron-client system of professor/student relations. Most of the graduate students I met in France were involved in projects related to the work of their professors.
French academia, still centralized in Paris despite attempts at decentralization, is a much smaller universe than that of the vast American system. There is little room for remaining "outside" of various polemics there. I've learned, for instance, that some people whom I like and admire in France hated Bourdieu and that Bourdieu followers tend to be very fierce in their defense of him and want to promote their view of his work.
This is not to say that American academia doesn't have similar forces operating, but there are multiple points of value and hierarchy here. Whereas Bourdieu could say that Philosophy dominated French academia during the mid-20th century, it is harder to pinpoint one single dominant intellectual framework here.
I do, however, feel that Bourdieu's critique of academia as part of a larger project of the study of power (which he made very explicit in The State Nobility) is applicable beyond France. His work on academia provided us with a method of inquiry to look at the symbolic capital associated with academic advancement and, although the specific register of this will be different in different national contexts, the process may be similar. Just as Bourdieu did in France, for example, one could study how it is that elite universities here "select" students and professors.
Q: We have a memoir of Sartre's childhood in The Words. Is there anything comparable for Bourdieu?
A: Bourdieu produced self-referential writings that began to appear in the late 1990s, with "Impersonal Confessions" in Pascalian Meditations (1997), a section called "Sketch for a Self-Analysis" in his final lectures to the Collège de France, Science of Science and Reflexivity (2001), and then the stand-alone volume Esquisse pour une Auto-Analyse, published posthumously in 2004. [Unlike the other titles listed, this last volume is not yet available in English. -- S.M.]
A statement by Bourdieu that "this is not an autobiography" appears as an epigraph to the 2004 essay. I find his autobiographical writings interesting because they show us a bit about how he wanted to use his own methods of socio-analysis on himself and his own life, with a focus particularly on his formative years -- his childhood, his education, his introduction to academia, and his experiences in Algeria.
Bourdieu was uncomfortable with what he saw as narcissism in much autobiography, and also was theoretically uncomfortable with life stories that stressed the individual as hero without sufficient social analysis. He had earlier written an essay on the "biographical illusion" that elaborated on his biographical approach, but without self-reference. These essays are not, then, autobiographical in the conventional sense of a linear narrative of a life. Bourdieu felt that a truly scientific sociology depended on reflexivity on the part of the researcher, and by this he meant being able to analyze one's own position in the social field and one's own habitus.
In part, however, Bourdieu's auto-analysis was a defensive move meant to preempt his critics. Bourdieu included a section on self-interpretation in his book on Heidegger, in which he referred to it as "the riposte of the author to those interpretations and interpreters who at once objectify and legitimize the author, by telling him what he is and thereby authorizing him to be what they say he is..." (101). As Bourdieu became increasingly a figure in the public eye and increasingly a figure of analysis and criticism, he wanted to explain himself and thus turned to self-interpretation and auto-analysis.
Q: In a lot of ways, Bourdieu seems like a corrosive thinker: someone who strips away illusions, rationalizations, the self-serving beliefs that institutions foster in their members. But can you identify a kernel of something positive or hopeful in his work -- especially in regard to education? I'd like to think there is one....
A: Bourdieu had little to say about how schools and universities operate that is positive, and he was very critical of them. The hopeful kernel here is that in understanding how they operate, how they inflict symbolic violence and perpetuate the illusions that enable systems of domination, we can improve educational institutions.
Bourdieu felt strongly that by de-mystifying the discourses and aura of authority surrounding education (especially its elite forms), we can learn something useful. The trick is how to turn this knowledge into power, and Bourdieu did not have any magical solutions for this. That is work still to be done.