Passing a Roman Catholic bookshop not long ago, I noticed a window display of books by and about Pope Benedict XVI, including a volume of interviews done back when he was Cardinal Joseph Ratzinger. The acquisitive urge was short-circuited by the fact that the store was closed. And in any case, I'll probably get an earful about his doctrines and policies soon enough from my mother-in-law. She's a Vatican II-type liberal who writes for a dissident Catholic newspaper, of the kind likely to be amused by the rumor that the new pontiff's "street name" is Joey Rats.
Eventually, the right combination of free time and impulse book-buying will make it feasible to catch up with the pope's thinking straight from the source. But for now, it's interesting to see that the summer issue of New Perspectives Quarterly has an interview about Benedict XVI with the literary theorist René Girard, who is now professor emeritus in French at Stanford University.
The introduction to the interview describes him as a professor of anthropology -- a mistake, but an interesting one.
Beginning in the late 1950s, Girard published a series of analyses of Cervantes, Shakespeare, Dostoevsky, and Proust (among others) that foregrounded their preoccupation with desire, envy, and imitation. He found that there was a recurrent structure in their work: a scenario of what he called "triangular" or "mimetic" desire. Don Quixote offers a fairly simple example. The would-be knight feels no particular longing for Dulcinea. Rather, he has thrown himself into a passionate imitation of certain models of what a knight must do -- and she's as close to a damsel as circumstances allow.
Girard argued that, at some deep level, all of human desire is like that. We learn by imitation -- and one of the things we learn is what, and how, to desire. (Hence, I didn't so much want that book in the window for its own sake, but as a means to triumph in the struggle for the position my wife calls "Ma's favorite son-in-law.")
For the most part, we are blind to the mediated nature of desire. But the great writers, according to Girard, are more lucid about this. They reveal the inner logic of desire, including its tendency to spread -- and, in spreading, to generate conflict. When several hands reach for the same object, some of them are bound to end up making fists. So begins a cycle of terror and retaliation; for violence, too, is mimetic.
By the 1970s, Girard had turned all of this into a grand theory of human culture. He described a process in which the contagion-like spread of mimetic desire and violence leads to the threat of utter social disintegration. At which point, something important happens: the scapegoat emerges. All of the free-floating violence is discharged in an act of murder against an innocent person or group which is treated (amidst the delirium of impending collapse) as the source of the conflict.
A kind of order takes shape around this moment of sacrificial violence. Myths and rituals are part of the commemoration of the act by which mimetic desire and its terrible consequences were subdued. But they aren't subdued forever. The potential for a return of this contagion is built into the very core of what makes us human.
Girard's thinking has not changed much in the 30 years or so since he published Violence and the Sacred, which appeared in France in 1972 and in an English translation from Johns Hopkins University Press in 1977. He has restated his theory any number of times, drawing in material from the various social sciences as evidence. He has spelled out some of its theological implications -- which, in Girard's own telling anyway, are profoundly Christian. He wasn't a believer when he started thinking about mimetic desire, but became a Catholic somewhere along the way. (Girard's readers have a right to expect a detailed spiritual autobiography, at some point.)
It isn't necessary to share Girard's creed to find his work of interest -- though I must admit to some uncertainty, after all this time, about how to classify his system of thought. You can trace some of his ideas back to Hegel (desire for the desire of the other), or sideways to Georges Bataille and Kenneth Burke (who both wrote about scapegoating). But there's also something reminiscent of Middlemarch about the whole thing, as if Girard were trying to finish Edward Casaubon's "Key to All Mythologies."
Girard has a small academic following, organized as the Colloquium on Violence and Religion, which produces an interdisciplinary journal called Contagion: Journal of Violence, Mimesis, and Culture. And there's a useful annotated bibliography of works by and about him available online.
The interview in this summer's issue of New Perspectives Quarterly is interesting, not just for Girard's comments on the new head of his own church, but for his thoughts on the dangers of mimetic desire in a global marketplace. One counterintuitive element of Girard's theory is that scapegoating is not the product of difference. Rather, he holds that mimetic desire and the resulting cycle of conflict tend to reduce people to the same level. (The moment of savage violence against the scapegoat is an effort to create a difference, a structure, an order in the chaos of sameness.) That would be the dark side of Tom Friedman's peppy thesis about how the world is now "flat."
The interview is also striking for Girard's full-throated proclamation that Christianity, alone among religions, can face the truth about mimetic desire. In a smart and welcome move, the editors of the Quarterly have invited the comments of someone from another religious tradition with very definite ideas about the intimate relationship between desire and human misery, Pankaj Mishra, author of An End to Suffering: The Buddha in the World, published last year by Farrar, Straus and Giroux.
For some time now, I have been collecting notes on the interaction between academics and journalists. In theory, at least, this relationship ought to be mutually beneficial -- almost symbiotic. Scholars would provide sound information and authoritative commentary to reporters -- who would then, in turn, perform the useful service of disseminating knowledge more broadly.
So much for the theory. The practice is not nearly that sweet, to judge by the water-cooler conversation of either party, which often tends toward the insulting. From the mass-media side, the most concise example is probably H.L. Mencken's passing reference to someone as "a professor, hence an embalmer." And within the groves of academe itself, the very word "journalistic" is normally used as a kind of put-down.
There is a beautiful symmetry to the condescension. It's enough to make an outsider -- someone who belongs to neither tribe, but regularly visits each -- wonder if some deep process of mutual-definition-by-mutual-exclusion might be going on. And so indeed I shall argue, one day, in a treatise considering the matter from various historical, sociological, and psychoanalytic vantage points. (This promises to be a book of no ordinary tedium.)
A fresh clipping has been added to my research file in the past couple of days, since reading Brian Leiter's objection to a piece on Nietzsche appearing in last weekend's issue of The New York Times Book Review. The paper asked the novelist and sometime magazine writer William Vollmann to review a biography of Nietzsche, instead of, let's say, an American university professor possessing some expertise on the topic.
For example, the Times editors might well have gone to Leiter himself, a professor of philosophy at UT-Austin and the author of a book called Nietzsche on Morality, published three years ago by Routledge. And in a lot of ways, I can't help wishing that they had. It would have made for a review more informative, and less embarrassingly inept, than the one that ran in the paper of record.
Vollmann's essay is almost breathtaking in its badness. It manages to drag the conversation about Nietzsche back about 60 years by posing the question of whether or not Nietzsche was an anti-Semite or a proto-Nazi. He was not, nor is this a matter any serious person has discussed in a very long time. (The role of his sister, Elisabeth Förster-Nietzsche, is an entirely different matter: Following his mental collapse, she managed to create a bizarre image of him as theorist of the Teutonic master-race, despite Nietzsche's frequent and almost irrepressible outbursts of disgust at the German national character.)
And while it is not too surprising that a review of a biography of a philosopher would tend to focus on, well, his life -- and even on his sex life, such as it was for the celibate Nietzsche -- it is still reasonable to expect maybe a paragraph or two about his ideas. Vollmann never gets around to that. Instead, he offers only the murkiest of panegyrics to Nietzsche's bravery and transgressive weirdness -- as if he were a contestant in some X Games of the mind, or maybe a prototype of Vollmann himself. (Full disclosure: I once reviewed, for the Times in fact, Vollmann's meditation on the ethics of violence -- a work of grand size, uncertain coherence, and sometimes baffling turgidity. That was six weeks of my life I will never get back.)
Leiter has, in short, good reason to object to the review. And there are grounds, too, for questioning how well the Times has served as (in his words) "a publication that aspires to provide intellectual uplift to its non-scholarly readers."
Indeed, you don't even have to be an academic to feel those reservations. Throughout the late 1990s and early 2000s, for example, many readers would spend their Saturday afternoons studying a weekly section of the Times called "Arts and Ideas," trying to figure out where the ideas were. By a nice paradox, though, the coverage of ideas improved, at least somewhat, after the "Arts and Ideas" section disappeared. (See, for example, the symposium inspired earlier this summer by a Times essay on early American history.)
So while reading Leiter's complaint with much sympathy, I also found some questions taking shape about its assumptions -- and about his way of pressing the point on Vollmann's competence. For one thing, Leiter takes it as a given that the best discussion of a book on Nietzsche would come from a scholar -- preferably, it seems, a professor of philosophy. At this, however, certain alarm bells go off.
The last occasion Leiter had to mention The New York Times was shortly after the death of Jacques Derrida. His objection was not to the paper's front-page obituary (a truly ignorant and poorly reported piece, by the way). Rather, Leiter was unhappy to find Derrida described as a philosopher. He assured his readers that Derrida was not one, and had never been taken the least bit seriously within the profession, at least here in the United States.
I read that with great interest, and with the sense of discovery. It meant that Richard Rorty isn't a philosopher, since he takes Derrida seriously. It also suggested that, say, DePaul University doesn't actually have a philosophy program, despite appearances to the contrary. (After all, so many of the "philosophy" professors there are interested in deconstruction and the like.)
One would also have to deduce from Leiter's article that the Society for Phenomenology and Existential Philosophy is playing a very subtle joke on prospective members when it lists Derrida as one of the topics of interest they might choose to circle on the form they fill out to join the organization.
An alternative reading, of course, is that some people have a stringent and proprietary sense of what is "real" philosophy, and who counts as a philosopher. And by an interesting coincidence, such people once ruled Nietzsche out of consideration altogether. Until the past few decades, he was regarded as an essayist, an aphorist, a brilliant literary artist -- but by no means a serious philosopher. ("A stimulating thinker, but most unsound," as Jeeves tells Bertie Wooster, if memory serves.) The people who read Nietzsche in the United States a hundred years ago tended to be artists, anarchists, bohemians, and even (shudder) journalists. But not academic philosophers.
In short, it is not self-evident that the most suitable reviewer of a new book on Nietzsche would need to be a professor -- let alone one who had published a book or two on him. (Once, the very idea would have been almost hopelessly impractical, because there were few, if any.) Assigning a biography of Nietzsche to a novelist instead of a scholar is hardly the case of malfeasance that Leiter suggests. If anything, Nietzsche himself might have approved: The idea of professors discussing his work would really have given the old invalid reason to recuperate.
Vollmann throws off a quick reference to "the relevant aspects of Schopenhauer, Aristotle and others by whom Nietzsche was influenced and against whom he reacted." And at this, Leiter really moves in for the kill.
"As every serious student of Nietzsche knows," he writes, "Aristotle is notable for his almost total absence from the corpus. There are a mere handful of explicit references to Aristotle in Nietzsche's writings (even in the unpublished notebooks), and no extended discussion of the kind afforded Plato or Thales. And apart from some generally superficial speculations in the secondary literature about similarities between Aristotle's 'great-souled man' and Nietzsche's idea of the 'higher' or 'noble' man -- similarities nowhere remarked upon by Nietzsche himself -- there is no scholarship supporting the idea that Aristotle is a significant philosopher for Nietzsche in any respect."
Reading this, I felt a vague mental itch. It kept getting stronger, and would not go away. For the idea that Aristotle was an important influence on Nietzsche appears in the work of the late Walter Kaufmann -- the professor of philosophy at Princeton University who re-translated Nietzsche in the 1950s and '60s.
Kaufmann published an intellectual biography that destroyed some of the pernicious myths about Nietzsche. He made the case for the coherence and substance of his work, and was merciless in criticizing earlier misinterpretations. He has had the same treatment himself, of course, at the hands of later scholars. But it was Kaufmann, perhaps more than anyone else, who made it possible and even necessary for American professors of philosophy to take Nietzsche seriously.
So when Kaufmann wrote that Nietzsche's debt to Aristotle's ethics was "considerable" ... well, maybe Leiter was right. Perhaps Kaufmann was now just a case of someone making "superficial speculations in the secondary literature." But for a nonspecialist reviewer such as Vollmann to echo it did not quite seem like an indictable offense.
So I wrote to Leiter, asking about all of this. In replying, Leiter sounded especially put out that Vollmann had cited both Schopenhauer and Aristotle as influences. (For those watching this game without a scorecard: Nobody doubts the importance of Schopenhauer for Nietzsche.)
"To reference 'Schopenhauer and Aristotle' together as important philosophical figures for Nietzsche -- as Vollmann did -- is, indeed, preposterous," wrote Leiter in one message, "and indicative of the fact that Vollmann is obviously a tourist when it comes to reading Nietzsche. The strongest claim anyone has made (the one from Kaufmann) is that there is a kind of similarity between a notion in Aristotle and a notion in Nietzsche, but not even Kaufmann (1) showed that the similarity ran very deep; or (2) claimed that it arose from Aristotle's influence upon Nietzsche."
Well, actually, yes, Kaufmann did make precisely that second claim. (He also quoted Nietzsche saying, "I honor Aristotle and honor him most highly...") And there is no real ground for construing the phrase "Schopenhauer and Aristotle" to mean "similarly and in equal measure."
There are preposterous things in the writing of William Vollmann. But a stray reference to a possible intellectual influence on Nietzsche is by no means one of them. Nor, for that matter, is the novelist's willingness to venture into a lair protected by fearsome dragons of the professoriat. I wish Vollmann had read more Nietzsche, and more scholarship on him than the biography he reviewed. But whatever else you can say about the guy, he's not a pedant.
In fact, the whole situation leaves me wondering if the problem ought not be framed differently. There is, obviously, a difference between an article in a scholarly journal and one appearing in a publication ordinarily read during breakfast (or later, in, as the saying goes, "the smallest room in the house"). It need not be a difference in quality or intelligence. Newspapers could do well for themselves by finding more professors to write for them. And the latter would probably enjoy it, not in spite of the sense of slumming, but precisely because of it.
But does it follow that the best results would come from having philosophers review the philosophy books, historians review the history books, and so forth?
The arguments for doing so are obvious enough. But just as obvious are the disadvantages: Most readers would derive little benefit from having intra-disciplinary disputes and niggling points of nuance spill over into the larger public arena.
It is probably a crazy dream, even something utopian, but here is the suggestion anyway. The Times Book Review (or some other such periodical) should from time to time give over an issue entirely to academic reviewers commenting on serious books -- but with everyone obliged to review outside their specialty. Hand a batch of novels to a sociologist. Give some books on Iraq to an ethicist. Ask a physicist to write about a favorite book from childhood.
It might not be the best set of reviews ever published. But chances are it would be memorable -- and an education for everybody.
My ambition to write a musical about the arrival of Lacanian theory in Tito-era Yugoslavia has always hinged on the zestiness of the intended title: Žižek! The music would be performed, of course, by Laibach, those lords of industrial-strength irony; and the moment of psychoanalytic breakthrough that Lacan called la Passe would be conveyed via an interpretative dance, to be performed by a high-stepping chorus of Slovenian Rockettes.
Alas, it was all a dream. (Either that, or a symptom.) The funding never came through, and now Astra Taylor has laid claim to the title for her documentary, shown recently at the Toronto Film Festival.
Žižek! is distributed by Zeitgeist, which also released the film Derrida. The company provided a screener DVD of Žižek! that I've now watched twice -- probably the minimum number of times necessary to appreciate the intelligence and style of Taylor's work. The director is 25 years old; this is her first documentary.
It's not just her willingness to let Slavoj Žižek be Slavoj Žižek -- responding bitterly to an orthodox deconstructionist in the audience at a lecture at Columbia University, for example, or revisiting some familiar elements of his early work on the theory of ideology. Nor is it even her willingness to risk trying to popularize the unpopularizable. The film ventures into an account of Žižek's claim of a parallel between Marx's concept of surplus value and Lacan's "objet petit a." (This is illustrated, you may be relieved to know, via a cartoon involving bottles of Coke.)
Beyond all that, Žižek! is very smart as a film. How it moves from scene to scene -- the playful, yet coherent and even intricate relationship between structure and substance -- rewards more than one viewing.
In an e-mail conversation with Taylor, I mentioned how surprising it was that Žižek! actually engaged with his theory. It would be much easier, after all, just to treat him as one wacky dude -- not that Žižek quite avoids typecasting himself.
"I wanted very much to make a film about ideas," she told me. "That said, I think the film betrays a certain fascination with Žižek's personality. He's got this excess of character and charisma that can't be restrained, even when we would try to do an interview about 'pure theory.'"
Žižek! isn't a biography. (For that, you're probably better off reading Robert Boynton's profile from Lingua Franca some years ago.) Taylor says she started work with only a hazy sense of what she wanted the documentary to do -- but with some definite ideas about things she wanted to avoid. "I didn't want to make a conventional biopic," she recalls, "tracing an individual's trajectory from childhood, complete with old photographs, etc. It's not even that I have anything against that form in particular, it just didn't seem the right approach for a film about Žižek."
Her other rule was to avoid pretentiousness. "Especially when dealing in theory, which has quite a bad name on this front, one has to be careful," she says. "I decided to veer towards the irreverent instead of the reverential. Granted, this is fairly easy when you're working with Slavoj Žižek."
Fair enough: This is the man who once explained the distinctions between German philosophy, English political economy, and the French Revolution by reference to each nation's toilet design. (Žižek runs through this analysis in the film; it also appeared last year in an article in The London Review of Books.)
Just to be on the safe side, Taylor also avoided having talking heads on screen "instructing the audience in what to think about Žižek or how to interpret his work." The viewer sees Žižek interact with people at public events, including both an enormous left-wing conference in Buenos Aires and a rather more tragically hip one in New York. But all explanations of his ideas come straight from the source.
In preparing to shoot the film, Taylor says she came up with a dozen pages of questions for Žižek, but only ended up asking two or three of them. Having interviewed him by phone a couple of years ago, I knew exactly what she meant. You pose a question. Žižek then takes it wherever he wants to go at the moment. The trip is usually interesting, but never short.
One of the funniest moments in Žižek! is a video clip from a broadcast of a political debate from 1990, when he ran for president of Yugoslavia as the candidate of the Liberal Democratic Party. At one point, an old Communist bureaucrat says, "Okay, Žižek, we all know your IQ is twice that of everybody else here put together. But please, please let somebody else talk!"
Taylor says she soon realized that her role was less that of interviewer than traffic director, "giving positive or negative feedback, telling him when to stop or when he'd said enough, and directing the flow of the conversation as opposed to conducting a straightforward interview with stops and starts."
She kept a log throughout the various shoots, "summing up everything he said in what would eventually be a one hundred page Excel spreadsheet. That way, I knew what subjects had been addressed, in what setting, and if the material was useful or needed to be reshot." About halfway through the production, she and Laura Hanna, the film's editor, assembled a rough cut.
"At that point," Taylor recalls, "I began to choose various passages for the animated sequences. I knew there needed to be some recurring themes and a broader theoretical argument to underpin the film.... But that makes it sound too easy and rational. The majority of choices were more intuitive, especially at the beginning when we were trying to cut down eighty hours of raw footage. When you're editing a film it is as much about what feels right, what flows, as what makes sense logically."
One really inspired moment came when Taylor learned of Jacques Lacan's appearance on French educational television in the early 1970s. She obtained a copy of the program and sat down with Žižek in his apartment to watch it.
The transcript of Lacan's enigmatic performance is available as the book Television: A Challenge to the Psychoanalytic Establishment (Norton, 1991). But to get the full effect, you really have to see Lacan in action: Self-consciously inscrutable, yet also suave, he utters short and gnomic sentences, looking for all the world like Count Dracula ready for a nap after a good meal.
The contrast with the stocky and plebeian Žižek (a bundle of energy and nervous tics) is remarkable; and so is the highly ambivalent way he responds to hearing his Master's voice. Žižek takes pride in being called a dogmatic Lacanian. But the video clearly bothers him.
"I think Žižek reacts to the footage on different registers at once," as Taylor puts it, "which is what makes the scene so interesting. He's obviously disturbed by Lacan's delivery, which seems very staged and pompous. Yet he attempts to salvage the situation by discussing how the very idea of a 'true self' is ideological or by arguing that the substance of Lacan's work should not be judged by his style."
The scene is also plenty meta. We are watching footage in which the most psychoanalytic of philosophers watches a video of the most philosophical of psychoanalysts. And yet somehow it does not feel the least bit contrived. If anything, there is something almost voyeuristically fascinating about it.
Taylor told me that the sequence "evokes what I see as one of the film's central themes: the predicament of the public intellectual today, and Žižek's strategies for coping with it."
Early in the documentary -- and again at the end -- he denounces the fascination with him as an individual, insisting that the only thing that matters is his theoretical work. He gives a list of what he regards as his four really important books: The Sublime Object of Ideology, For They Know Not What They Do, The Ticklish Subject, and a work now in progress that he has provisionally titled The Parallax View (a.k.a. the sequel to Ticklish).
There is a clear hint that his other and more popular books are negligible by contrast; he speaks of wanting to kill his doppelganger, the wild-and-crazy guy known for obscene jokes and pop-culture riffs.
"And yet," as Taylor notes, "Žižek, despite his frustrations, continues to put on a good show, albeit one quite different in demeanor from Lacan's." That is what makes the final images of Žižek! so interesting.
I don't want to give the surprise ending away. Suffice it to say that it involves a spiral staircase, and makes explicit reference to Vertigo, Alfred Hitchcock's great meditation on Freud's Beyond the Pleasure Principle. (Whether or not Hitchcock ever actually read Freud is sort of beside the point, here.) The scene also harkens back to earlier comments by Å½iÅ¾ek -- and yet it really comes out of left field.
Taylor says they improvised it at the very last moment of shooting. She calls the scene "fantastically head-scratching," and not just for the audience.
"Over the last few months," she says, "I have come up with all sorts of pseudo-theoretical justifications and interpretation of it, all the different layers of meaning and resonances with Žižek's work and life and the intersections of the two. But all of these, I must admit, were created after the fact (après coup, as Lacan would say)."
So what are her theories? "I feel like I would be ruining the fun if I elaborated on them," she told me. "That is, after all, precisely what people are supposed to debate over a beer after seeing the movie."
For more on Žižek! -- including information about its availability and a clip from the film -- check out its Web site.
Inspired less by Philip Seymour Hoffman’s impressive turn in Capote than by the condition of my checking account, I have been considering the idea of turning out a true-crime book -- a lurid potboiler, but one fortified with a little cultural history, albeit of an eccentric kind. The idea has been stewing for a couple of months now. My working title is BTK and the Beatnik. Here’s the pitch.
In late August, The New York Times ran a short profile of Colin Wilson, the last surviving member of the Angry Young Men -- a literary group that emerged in Britain during the mid-1950s at just about the time Allen Ginsberg and Jack Kerouac were interrupting the presumed consensus of Eisenhower-era America. Kingsley Amis, Philip Larkin, and John Osborne were the Angries’ most prominent novelist, poet, and playwright, respectively. And Colin Wilson -- whose quasi-existentialist credo The Outsider appeared in May 1956, when he was 24 -- was taken up by the press as the Angry thinker.
"The Outsider," wrote Wilson, "is a man who cannot live in the comfortable, insulated world of the bourgeoisie, accepting what he sees and touches as reality. He sees too deep and too much, and what he sees is essentially chaos." He developed this theme through a series of commentaries on literary and philosophical texts, all handled with a certain vigorous confidence that impressed the reviewers very much.
As, indeed, did Wilson’s personal story, which was a publicist’s dream come true. Born to a working-class family, he had quit school at 16, taken odd jobs while reading his way through modern European literature, and camped out in a public park to save on rent as he wrote and studied at the library of the British Museum. Wilson was not shy about proclaiming his own genius. For several months, the media lent him a bullhorn to do it. He posed for photographs with his sleeping bag, and otherwise complied with his fans' desire that he be something like a cross between Albert Camus and James Dean.
The backlash was not long in coming. It started with his second book, Religion and the Rebel, which got savage notices when it appeared in 1957, despite being virtually indistinguishable from The Outsider in subject and method. “We are tired of the boy Colin,” as one literary journalist is supposed to have said at the time.
Roundly abused, though no whit abashed, he kept on writing, and has published dozens of novels and works of nonfiction over the intervening decades. The Outsider has never gone out of print in English; it developed a solid following in Arabic translation, and appeared a few years ago in Chinese.
The piece in The Times came in the wake of his recent autobiography, Dreaming to Some Purpose -- a book revealing that Wilson is still very confident of his own place as perhaps the greatest writer of our time. This is a minority opinion. The reviews his memoir got in the British press were savage. So far it has not received much attention in the United States. Wilson has a cult following here; and the few scholarly monographs on his work tend to have that cult-following feel.
Perhaps the most forceful claim for his importance was made by Joyce Carol Oates, who provided an introduction to the American edition of his science-fiction novel The Philosopher’s Stone (1969). Oates hailed Wilson for "consciously attempting to imagine a new image for man ... freed of ambiguity, irony, and the self-conscious narrowness of the imagination we have inherited from 19th century Romanticism."
Her praise seems to me a bit overstated. But I have a certain fondness for that novel, having discovered it during Christmas break while in high school. It set me off on a fascination with Wilson's work that seems, with hindsight, perfectly understandable. Adolescence is a good time to read The Outsider. For that matter, Wilson himself was barely out of it when he wrote the book. Although now a septuagenarian, the author displays the keen egomania of someone a quarter his age.
Now, just as The Times was running its profile of the last Angry Young Man, the sentencing hearing for Dennis Rader, the confessed BTK killer, was underway in Kansas. News accounts mentioned, usually in passing, his claim that the string of sadistic murders he committed over the years was the result of something he called "Factor X." He did not elaborate on the nature of Factor X, though reporters did often note that the killer saw himself as demonically possessed. (He also referred to having been dropped on his head as a child, which may have been one of Rader’s cold-blooded little jokes.)
But in a television interview, Rader indicated that Factor X, while mysterious, was also something in his control. "I used it," he said.
A jolting remark -- at least to anyone familiar with Colin Wilson's work. Over the years, Wilson has developed a whole battery of concepts (or at least of neologisms) to spell out his hunch that the Outsider has access to levels of consciousness not available to more conformist souls. Something he dubbed "Faculty X" has long been central to Wilson’s improvised psychological theories, as well as to his fiction. (The Philosopher’s Stone, which Oates liked so much, is all about Faculty X.)
As Wilson describes it, Faculty X is the opposite of the normal, dulled state of consciousness. It is our potential to grasp, with an incandescent brilliance and intensity of focus, the actuality of the world, including the reality of other times and places. "Our preconceptions, our fixed ideas about ourselves," as Wilson puts it, "means that we remain unaware of this power." We trudge along, not engaging the full power of our mental abilities.
Most of us have had similar insights, often while recovering from a bad cold. But the contrast between mental states hit Wilson like a bolt of lightning. In his recent memoir, he writes, "The basic aim of human evolution, I decided, is to achieve Faculty X."
A few artists are able to summon Faculty X at will. But so, in rather less creative form, can psychopathic killers. For that is the stranger side of Colin Wilson’s work -- the part overlooked by The Times, for example, which repeated the standard Wilsonian claim that he is a philosopher of optimism.
Cheerful as that may sound, a very large part of his work over the years has consisted of books about serial murderers. They, too, are Outsiders -- in revolt against "a decadent, frivolous society" that gives them no outlet for the development of Faculty X. Such an individual "feels no compunction in committing acts of violence," as Wilson explains, "because he feels only contempt for society and its values."
These quotations are from his book Order of Assassins: The Psychology of Murder (1972), but they might have been drawn from any of dozens of other titles. Beginning with the Jack the Ripper-esque character Austin Nunne in his first novel, Ritual in the Dark (1960), Wilson has populated his fiction with an array of what can only be called existentialist serial killers.
In these novels, the narrator is usually an alienated intellectual who sounds ... well, quite a bit like Colin Wilson does in his nonfiction books. The narrator will spend a few hundred pages tracking down a panty-fetishist sex killer, or something of the kind -- often developing a strong sense of identification with, or at least respect for, the murderer. There may be a moment when he recoils from the senselessness of the crimes. But there is never (oddly enough) any really credible expression of sympathy for the victims.
The tendency to see the artist and the criminal as figures dwelling outside the norms of society is an old one, of course; and we are now about 180 years downstream from Thomas De Quincey’s essay "On Murder Considered as One of the Fine Arts." But there is something particularly cold and morbid about Wilson's treatment of the theme. It calls to mind the comment film critic Anthony Lane made about Quentin Tarantino: "He knows everything about violence and nothing about suffering." It comes as little surprise to learn that a girlfriend from Wilson's bohemian days recoiled from one of his earliest efforts at fiction: “She later admitted,” he writes, “that it made her wonder if I was on the verge of becoming a homicidal maniac.”
So did BTK have Wilson’s philosophical ruminations in mind when he was “using” Factor X to commit a string of sadistic murders? Did he see himself as an Outsider – a tormented genius, expressing his contempt for (in Wilson’s phrase) “the comfortable, insulated world” of modernity?
Circumstantial evidence indicates that it is a lead worth pursuing. We know that Rader studied criminal justice administration at Wichita State University, receiving his B.A. in 1979. Wilson’s brand of pop-philosophizing on murder as a form of revolt (a manifestation of “man’s striving to become a god”) is certainly the kind of thing an adventurous professor might have used to stimulate class discussion.
And it would be extremely surprising to find that Rader never read Wilson’s work. Given the relatively small market for books devoted entirely to existential musings, Wilson has produced an incredible volume of true-crime writing over the years – beginning with his Encyclopedia of Murder (1961) and continuing through dozens of compilations of serial-killer lore, many available in the United States as rather trashy paperbacks.
The earliest messages Rader sent to police in the mid-1970s reveal disappointment at not getting sufficient press coverage. He even coined the nickname BTK to speed things along. Clearly this was someone with a degree of status anxiety about his role in serial-killing history. One imagines him turning the pages of Wilson’s pulp trilogy Written in Blood (1989) or the two volumes of The Killers Among Us (1995) – all published by Bantam in the U.S. – with some disappointment at not having made the finals.
Well, it’s not too late. We know from his memoirs that Colin Wilson has engaged in extensive correspondence with serial-killing Outsiders who have ended up behind bars. It seems like only a matter of time before he turns out a book on BTK.
Unless, of course, I beat him to it. The key is to overcome the gray fog of everyday, dull consciousness by activating my dormant reserves of Faculty X. Fortunately it has never been necessary for me to kill anyone to manage this. Two large cups of French Roast will usually do the trick.
The curtain rises on a domestic scene -- though not, the audience soon learns, a tranquil one. It is the apartment of the philosopher Louis Althusser and his wife Hélène Rytman, on an evening in November, a quarter century ago. The play in question, which opened last month in Paris, is called The Caïman. That’s an old bit of university slang referring to Althusser's job as the “director of studies” -- an instructor who helps students prepare for the final exam at the École Normale Supérieure, part of what might be called the French Ivy League.
The caïman whose apartment the audience has entered was, in his prime, one of the “master thinkers” of the day. In the mid-1960s, Althusser conducted an incredibly influential seminar that unleashed structuralist Marxism on the world. He played a somewhat pestiferous role within the French Communist Party, where he was spokesman for Mao-minded student radicals. And he served as tutor and advisor for generations of philosophers-in-training.
At Althusser’s funeral in 1990, Jacques Derrida recalled how, “beginning in 1952 ... the caïman received in his office the young student I then was.” One of the biographers of Michel Foucault (another of his pupils) describes Althusser as an aloof and mysterious figure, but also one known for his gentleness and tact. When a student turned in an essay, Althusser wrote his comments on a separate sheet of paper -- feeling that there would be something humiliating about defacing the original with his criticisms.
But everyone in the audience knows how Althusser’s evening at home with his wife in November 1980 will end. How could they not? And even if you know the story, it is still horrifying to read Althusser’s own account of it. In a memoir that appeared posthumously, he recalls coming out of a groggy state the next morning, and finding himself massaging Hélène’s neck, just as he had countless times in the course of their long marriage.
“Suddenly, I was terror-struck,” he wrote. “Her eyes stared interminably, and I noticed the tip of her tongue was showing between her teeth and lips, strange and still.” He ran to the École, screaming, “I’ve strangled Hélène!”
He was whisked away for psychiatric evaluation, which can’t have taken long: Althusser’s entire career had been conducted between spells of hospitalization for manic-depression. In one autobiographical fragment from the late 1970s -- presumably written while on a manic high -- he brags about sneaking aboard a nuclear submarine and taking it for a joy-ride when no one was looking. If ever there were reason to question legal guilt on grounds of insanity, the murder of Hélène Rytman would seem to qualify.
He underwent a long spell of psychiatric incarceration -- a plunge, as he later wrote, back into the darkness from which he had awakened that morning. In the late 1980s, after he was released, the philosopher could be seen wandering in the streets, announcing “I am the great Althusser!” to startled pedestrians.
It became the stuff of legend. In the early 1980s, as a student at the University of Texas at Austin, I heard what turns out to have been an apocryphal account of that morning. A small crowd of Althusser’s students, it was said, routinely gathered outside his apartment to greet him each day. When he emerged, disheveled and shrieking that he was a murderer, everyone laughed and clapped their hands. They thought (so the story went) that Althusser was clowning around.
That rumor probably says more about American attitudes towards French thinkers than it does about Althusser himself, of course. The murder has become a standard reference in some of the lesser skirmishes of the culture wars – with Hélène Rytman’s fate a sort of morbid punch-line.
Althusser’s philosophical work took as its starting point the need to question, and ultimately to dissolve, any notion that social structures and historical changes are the result of some basic human essence. Somewhat like Foucault, at least in this regard, he regarded the idea of “man” as a kind of myth. Instead, Althusser conceived of history as “a process without a subject” – something operating in ways not quite available to consciousness. Various economic and linguistic structures interacted to “articulate” the various levels of life and experience.
Althusser called this perspective “theoretical anti-humanism.” And for anyone who loathes such thinking, the standard quip is that he practiced his anti-humanism at home.
That strikes me as being neither funny nor fair. At the risk of sounding like a pretty old-fashioned bourgeois humanist, I think you have to treat his ideas as ... well, ideas. Not necessarily as good ones, of course. (In his seminar, Althusser and his students undertook a laborious and ultimately preposterous effort to figure out when and how Marx became a Marxist, only to conclude that only a few of his works really qualified.) But however you judge his writings, they make sense as part of a conversation that started long before Althusser entered the room -- one that will continue long after we are all dead.
One way to see his “theoretical anti-humanism,” for example, is as a retort to Jean-Paul Sartre’s “Existentialism is a Humanism” -- the lecture that drew standing-room-only crowds in 1945, at just about the time Althusser was resuming an academic career interrupted by the war. (The Germans held him as a POW for most of it.) It was the breeziest of Sartre’s introductions to his basic themes: We are free – deep down, and for good. That freedom may be unbearable at times. But it never goes away. No matter what, each individual is always radically responsible for whatever action and meaning is possible in a given circumstance.
“Man,” Sartre told his listeners, “is nothing else but what he makes of himself.” But that “nothing” is, after all, everything. “There is no universe other than a human universe, a universe of human subjectivity.”
For Althusser, this is all completely off track. It rests on the idea that individuals are atoms who create their own meaning – and that somehow then link up to form a society. A very different conception is evident in “Ideology and Ideological State Apparatuses,” a paper from 1970 that is about as close to a smash-hit, era-defining performance as Althusser ever got. Which is to say, not that close at all. But extracts are available in The Norton Anthology of Theory and Criticism, and passages have turned up in countless thousands of course packets in lit-crit and cultural studies, over the years.
For Althusser, it’s exactly backwards to start from the individual as a basic unit capable, through its own imagination and endeavor, of creating a world of meaning. On the contrary, there are societies that seek to reproduce themselves over time, not just by producing material goods (that too) but through imposing and enforcing order.
The police, military, and penal systems have an obvious role. Althusser calls them the Repressive State Apparatuses. But he’s much more interested in what he calls the Ideological State Apparatuses – the complex array of religious institutions, legal processes, communication systems, schools, etc. that surround us. And, in effect, create us. They give us the tools to make sense of the world. Most of all, the ISAs convey what the social order demands of us. And for anyone who doesn’t go along....Well, that’s when the Repressive State Apparatuses might just step in to put you in line.
Why has this idea been so appealing to so many academics -- and for such a long time? Well, at the time, it tended to confirm the sense that you could effect radical social change via “the long march through the institutions.” By challenging how the Ideological State Apparatuses operated, it might be possible to shift the whole culture’s center of gravity. And Althusser placed special emphasis on educational institutions as among the most important ISAs in capitalist society.
Such was the theory. In practice, of course, the social order tends to push back -- and not necessarily through repression. A handful of non-academic activists became interested in Althusser for a while; perhaps some still are. But for the most part, his work ended up as a fairly nonthreatening commodity within the grand supermarket of American academic life.
The brand is so well-established, in fact, that the thinker’s later misfortunes are often dismissed with a quick change of subject. The effect is sometimes bizarre.
In 1996, Columbia University Press issued a volume by Althusser called Writings on Psychoanalysis: Freud and Lacan. Surely an appropriate occasion for some thoughtful essays on how the theorist’s own experience of mental illness might have come into play in his work, right? Evidently not: The book contains only a few very perfunctory references to “temporary insanity” and psychiatric care. Presumably Althusser’s editors will be forthcoming next summer, with the publication by Verso of Philosophy of the Encounter: Later Writings, 1978-1987. The catalog text for the book refers to it as “his most prolific period.” But it was also one when much of his writing was done while hospitalized.
Is it possible to say anything about his work and his illness that doesn’t amount to a roundabout denunciation of Althusser? I think perhaps it is.
On one level, his theory about the Ideological State Apparatuses looks....maybe not optimistic, exactly, but like a guide to transforming things. From this point of view, each individual is a point of convergence among several ISAs. In other words, each of us has assimilated various codes and rules about how things are supposed to be. And if there are movements underway challenging how the different ISAs operate, that might have a cumulative effect. If, say, feminists and gay rights activists are transforming the rules about how gender is constructed, that creates new ways of life. (Though not necessarily a social revolution, as Althusser wanted. Capitalism is plenty flexible if there’s a buck to be extracted.)
But that notion of the individual as the intersection of rules and messages also has a melancholy side. It somewhat resembles the experience of depression. If a person suffering from depression is aware of anything, it is this: The self is a product of established patterns....fixed structures.... forces in the outside world that are definitive, and sometimes crushing.
Any Sartrean talk of “radical freedom” makes no sense whatever to anyone in that condition – which is, rather, a state of radical loss. And as the German poet Hans Magnus Enzensberger puts it in a recent essay, the most extreme “radical loser” may find the only transcendence in an act of violence.
“He can explode at any moment,” writes Enzensberger. “This is the only solution to his problem that he can imagine: a worsening of the evil conditions under which he suffers.... At last, he is master over life and death.”
Is that what happened in Althusser’s apartment, 25 years ago? That, or something like it.
Some months ago I started asking friends, colleagues from my teaching days, researchers in higher education, faculty members of various ages and ranks, deans, provosts and presidents, and focus groups of students: “What’s the status of the Big Questions on your campus?” Quite deliberately I avoided defining “Big Questions,” but I gave as examples such questions as “Who am I? Where do I come from? What am I going to do with my life? What are my values? Is there such a thing as evil? What does it mean to be human? How can I understand suffering and death? What obligations do I have to other people? What does it mean to be a citizen in a democracy? What makes work, or a life, meaningful and satisfying?” In other words, I wanted to know what was happening to questions of meaning and value that traditionally have been close to the heart of a liberal education.
Some of what I found puzzled me. People pointed out quite properly that some Big Questions were alive and well in academia today. These included some questions about the origin of the universe, the emergence of life, the nature of consciousness, and others that have been raised by the scientific breakthroughs of the past few decades.
In the humanities and related social sciences the situation was rather different. Some friends reminded me that not all big questions were in eclipse. Over the past generation faculty members have paid great attention to questions of racial, ethnic, gender, and sexual identity. Curricular structures, professional patterns, etc. continue to be transformed by this set of questions. Professors, as well as students, care about these questions, and as a result they write, teach, and learn about them with passion.
But there was wide agreement that other big questions, the ones about meaning, value, moral and civic responsibility, were in eclipse. To be sure, some individual faculty members addressed them, and when they did, students responded powerfully. In fact, in a recent Teagle-sponsored meeting on a related topic, participants kept using words such as “hungry,” “thirsty,” and “parched” to describe students’ eagerness to find ways in the curriculum, or outside it, to address these questions. But the old curricular structures that put these questions front and center have over the years often faded or been dismantled, including core curricula, great books programs, surveys “from Plato to NATO,” and general education requirements of various sorts. Only rarely have new structures emerged to replace them.
I am puzzled why. To be sure, these Big Questions are hot potatoes. Sensitivities are high. And faculty members always have the excuse that they have other more pressing things to do. Over two years ago, in an article entitled “Aim Low,” Stanley Fish attacked some of the gurus of higher education (notably, Ernest Boyer) and their insistence that college education should “go beyond the developing of intellectual and technical skills and … mastery of a scholarly domain. It should include the competence to act in the world and the judgment to do so wisely” (Chronicle of Higher Education, May 16, 2003). Fish hasn’t been the only one to point out that calls to “fashion” moral and civic-minded citizens, or to “go beyond” academic competency, assume that students now routinely achieve such mastery of intellectual and scholarly skills. We all know that’s far from the case.
Minimalist approaches -- ones that limit teaching to what another friend calls “sectoral knowledge” -- are alluring. But if you are committed to a liberal education, it’s hard just to aim low and leave it at that. The fact that American university students need to develop basic competencies provides an excuse, not a reason, for avoiding the Big Questions. Students also need to be challenged, provoked, and helped to explore the issues they will inevitably face as citizens and as individuals. Why have we been so reluctant to develop the structures, in the curriculum or beyond it, that provide students with the intellectual tools they need to grapple thoughtfully over the course of a lifetime with these questions?
I see four possible reasons:
1. Faculty members are scared away by the straw man Stanley Fish and others have set up. Despite accusations of liberal bias and “brainwashing,” no faculty member I know wants to “mold,” “fashion” or “proselytize” students. But that’s not what exploring the Big Questions is all about. Along with all the paraphernalia college students bring with them these days are Big Questions, often poorly formulated and approached with no clue that anyone in the history of humankind has ever had anything useful to say about any of them. There’s no need to answer those questions for students, or to try to fashion them into noble people or virtuous citizens for the republic. There is, however, every reason to help students develop the vocabularies, the metaphors, the exempla, the historical perspective, the patterns of analysis and argument that let them over time answer them for themselves.
2. A second possible reason is that faculty are put off by the feeling they are not “experts” in these matters. In a culture that quite properly values professional expertise, forays beyond one’s field of competence are understandably suspect. But one does not have to be a moral philosopher to raise the Big Questions and show some of the ways smart people in the past have struggled with them. I won’t pontificate about other fields, but in my own field -- classics and ancient history -- the Big Questions come bubbling up between the floor boards of any text I have ever taught. I don’t have to be a specialist in philosophy or political science to see that Thucydides has something to say about power and morality, or the Odyssey about being a father and a husband. A classicist’s job, as I see it, is to challenge students to think about what’s implicit in a text, help them make it explicit and use that understanding to think with.
3. Or is it that engaging with these “Big Questions,” or anything resembling them, is the third rail of a professional career? Senior colleagues don’t encourage it; professional journals don’t publish it; deans don’t reward it; and a half dozen disgruntled students might sink your tenure case with their teaching evaluations. You learn early on in an academic career not to touch the third rail. If this is right, do we need to rewire the whole reward system of academia?
4. Or, is a former student of mine, now teaching at a fine women’s college, correct when she says that on her campus “It tends to be that … those who talk about morality and the big questions come from such an entrenched far right position … that the rest of us … run for cover.”
Some of the above? All of the above? None of the above? You tell me, but let’s not shrug our shoulders and walk away from the topic until we’ve dealt with one more issue: What happens if, for whatever reason, faculty members run for the hills when the Big Questions, including the ones about morality and civic responsibility, arise? Is this not to lose focus on what matters most in an education intended to last for a lifetime? In running away, do we not then leave the field to ideologues and others we cannot trust, and create a vacuum that may be filled by proselytizers, propagandists, or the unspoken but powerful manipulations of consumer culture? Does this not sever one of the roots that has over the centuries kept liberal education alive and flourishing? But, most serious of all, will we at each Commencement say farewell to another class of students knowing that for all they have learned, they are ill equipped to lead an examined life? And if we do, can we claim to be surprised and without responsibility if a few decades later these same graduates abuse the positions of power and trust in our corporate and civic life to which they have ascended?
W. Robert Connor
W. Robert Connor is president of the Teagle Foundation, which is dedicated to strengthening liberal education. More on the foundation's “Big Questions” project may be found on its Web site. This essay is based on remarks Connor recently made at a meeting of the Middle Atlantic Chapters of Phi Beta Kappa, at the University of Pennsylvania.
Normally my social calendar is slightly less crowded than that of Raskolnikov in Crime and Punishment. (He, at least, went out to see the pawnbroker.) But late last month, in an unprecedented burst of gregariousness, I had a couple of memorable visits with scholars who had come to town – small, impromptu get-togethers that were not just lively but, in a way, remarkable.
The first occurred just before Christmas, and it included (besides your feuilletonist reporter) a political scientist, a statistician, and a philosopher. The next gathering, also for lunch, took place a week later, during the convention of the Modern Language Association. Looking around the table, I drew up a quick census. One guest worked on British novels of the Victorian era. Another writes about contemporary postcolonial fiction and poetry. We had two Americanists, but of somewhat different specialist species; besides, one was a tenured professor, while the other is just starting his dissertation. And, finally, there was, once again, a philosopher. (Actually it was the same philosopher, visiting from Singapore and in town for a while.)
If the range of disciplines or specialties was unusual, so was the degree of conviviality. Most of us had never met in person before -- though you’d never have known that from the flow of the conversation, which never seemed to slow down for very long. Shared interests and familiar arguments (some of them pretty esoteric) kept coming up. So did news about an electronic publishing initiative some of the participants were trying to get started. At least once during each meal, someone had to pull out a notebook so that someone else could jot down an interesting citation to look up later.
In each case, the members of the ad hoc symposium were academic bloggers who had gotten to know one another online. That explained the conversational dynamics -- the sense, which was vivid and unmistakable, of continuing discussions in person that hadn’t started upon arriving at the restaurant, and wouldn’t end once everyone had dispersed.
The whole experience was too easygoing to call impressive, exactly. But later -- contemplating matters back at my hovel, over a slice of black bread and a bowl of cold cabbage soup -- I couldn’t help thinking that something very interesting had taken place. Something having little to do with blogging, as such. Something that runs against the grain of how academic life in the United States has developed over the past two hundred years.
At least that’s my impression from having read Thomas Bender’s book Intellect and Public Life: Essays on the Social History of Academic Intellectuals in the United States, published by Johns Hopkins University Press in 1993. That was back when even knowing how to create a Web page would raise eyebrows in some departments. (Imagine the warnings that Ivan Tribble might have issued, at the time.)
But the specific paper I’m thinking of – reprinted as the first chapter – is even older. It’s called “The Cultures of Intellectual Life: The City and the Professions,” and Bender first presented it as a lecture in 1977. (He is currently professor of history at New York University.)
Although he does not exactly put it this way, Bender’s topic is how scholars learn to say “we.” An intellectual historian, he writes, is engaged in studying “an exceedingly complex interaction between speakers and hearers, writers and readers.” And the framework for that “dynamic interplay” has itself changed over time. Recognizing this is the first step towards understanding that the familiar patterns of cultural life – including those that prevail in academe – aren’t set in stone. (It’s easy to give lip service to this principle. Actually thinking through its implications, though, not so much.)
The history of American intellectual life, as Bender outlines it, involved a transition from civic professionalism (which prevailed in the 18th and early 19th centuries) to disciplinary professionalism (increasingly dominant after about 1850).
“Early American professionals,” he writes, “were essentially community oriented. Entry to the professions was usually through local elite sponsorship, and professionals won public trust within this established social context rather than through certification.” One’s prestige and authority was very strongly linked to a sense of belonging to the educated class of a given city.
Bender gives as an example the career of Samuel Bard, the New York doctor who championed building a hospital to improve the quality of medical instruction available from King’s College (as Columbia University was known back in the 1770s). Bard had studied in Edinburgh and wanted New York to develop institutions of similar caliber; he also took the lead in creating a major library and two learned societies.
“These efforts in civic improvement were the product of the combined energies of the educated and the powerful in the city,” writes Bender, “and they integrated and gave shape to its intellectual life.”
Nor was this phenomenon restricted to major cities in the East. Visiting the United States in the early 1840s, the British geologist Charles Lyell noted that doctors, lawyers, scientists, and merchants with literary interests in Cincinnati “form[ed] a society of a superior kind.” Likewise, William Dean Howells recalled how, at his father’s printing office in a small Ohio town, the educated sort dropped in “to stand with their back to our stove and challenge opinion concerning Holmes and Poe, Irving and Macaulay....”
In short, a great deal of one’s sense of cultural “belonging” was bound up with community institutions -- whether that meant a formally established local society for the advancement of learning, or an ad hoc discussion circle warming its collective backside near a stove.
But a deep structural change was already taking shape. The German model of the research university came into ever greater prominence, especially in the decades following the Civil War. The founding of Johns Hopkins University in 1876 defined the shape of things to come. “The original faculty of philosophy,” notes Bender, “included no Baltimoreans, and no major appointments in the medical school went to members of the local medical community.” William Welch, the first dean of the Johns Hopkins School of Medicine, “identified with his profession in a new way; it was a branch of science -- a discipline -- not a civic role.”
Under the old regime, the doctors, lawyers, scientists, and literary authors of a given city might feel reasonably comfortable in sharing the first-person plural. But life began to change as, in Bender’s words, “people of ideas were inducted, increasingly through the emerging university system, into the restricted worlds of specialized discourse.” If you said “we,” it probably referred to the community of other geologists, poets, or small-claims litigators.
“Knowledge and competence increasingly developed out of the internal dynamics of esoteric disciplines rather than within the context of shared perceptions of public needs,” writes Bender. “This is not to say that professionalized disciplines or the modern service professions that imitated them became socially irresponsible. But their contributions to society began to flow from their own self-definitions rather than from a reciprocal engagement with general public discourse.”
Now, there is a definite note of sadness in Bender’s narrative -- as there always tends to be in accounts of the shift from Gemeinschaft to Gesellschaft. Yet it is also clear that the transformation from civic to disciplinary professionalism was necessary.
“The new disciplines offered relatively precise subject matter and procedures,” Bender concedes, “at a time when both were greatly confused. The new professionalism also promised guarantees of competence -- certification -- in an era when criteria of intellectual authority were vague and professional performance was unreliable.”
But in the epilogue to Intellect and Public Life, Bender suggests that the process eventually went too far. “The risk now is precisely the opposite,” he writes. “Academe is threatened by the twin dangers of fossilization and scholasticism (of three types: tedium, high tech, and radical chic). The agenda for the next decade, at least as I see it, ought to be the opening up of the disciplines, the ventilating of professional communities that have come to share too much and that have become too self-referential.”
He wrote that in 1993. We are now more than a decade downstream. I don’t know that anyone else at the lunchtime gatherings last month had Thomas Bender’s analysis in mind. But it has been interesting to think about those meetings with reference to his categories.
The people around the table, each time, didn’t share a civic identity: We weren’t all from the same city, or even from the same country. Nor was it a matter of sharing the same disciplinary background -- though no effort was made to be “interdisciplinary” in any very deliberate way, either. At the same time, I should make clear that the conversations were pretty definitely academic: “How long before hundreds of people in literary studies start trying to master set theory, now that Alain Badiou is being translated?” rather than, “Who do you think is going to win American Idol?”
Of course, two casual gatherings for lunch do not a profound cultural shift make. But it was hard not to think something interesting had just transpired: A new sort of collegiality, stretching across both geographic and professional distances, fostered by online communication but not confined to it.
The discussions were fueled by the scholarly interests of the participants. But there was a built-in expectation that you would be willing to explain your references to someone who didn’t share them. And none of it seems at all likely to win the interest (let alone the approval) of academic bureaucrats.
Surely other people must be discovering and creating this sort of thing -- this experience of communitas. Or is that merely a dream?
It is not a matter of turning back the clock -- of undoing the division of labor that has created specialization. That really would be a dream.
But as Bender puts it, cultural life is shaped by “patterns of interaction” that develop over long periods of time. For younger scholars, anyway, the routine give-and-take of online communication (along with the relative ease of linking to documents that support a point or amplify a nuance) may become part of the deep grammar of how they think and argue. And if enough of them become accustomed to discussing their research with people working in other disciplines, who knows what could happen?
“What our contemporary culture wants,” as Bender put it in 1993, “is the combination of theoretical abstraction and historical concreteness, technical precision and civic give-and-take, data and rhetoric.” We aren’t there, of course, or anywhere near it. But sometimes it does seem as if there might yet be grounds for optimism.
Perhaps it’s best to have waited until after Valentine’s Day to think about love. The holiday, after all, has less to do with passion than with sentimentality -- that is, a fixed matrix of sighs and signs, an established and tightly run order of feelings and expressions. That is all pleasant enough. But still, it seems kind of feeble by contrast with the reality of love, which is complicated, and which can mess you up.
The distinction is not semantic. And no, I did not improvise it as some kind of roundabout excuse for forgetting the holiday. (You do not sustain a happy marriage for a dozen years without knowing to pump a few dollars into the sentimental economy in a timely fashion.)
There are times when the usual romantic phrases and symbols prove exactly right for expressing what you mean. The stock of them is, as Roland Barthes puts it in A Lover’s Discourse, like a perpetual calendar. The standard words are clichés, perhaps. But they are meaningful clichés, and nobody has sounded out their overtones with anything like Barthes’s finesse.
Still, the repertoire of romantic discourse has its limits. “The lover speaks in bundles of sentences but does not integrate these sentences on a higher level, into a work,” writes Barthes. “His is a horizontal discourse: no transcendence, no deliverance, no novel (though a great deal of the fictive).”
Well, okay, yes -- that is all true of the early days of a relationship. When you are both horizontal, the discourse between you tends to be, as well. Once you begin to build a life together, however, a certain amount of verticality, if not transcendence, imposes itself; and the nuances of what Barthes called the “lover’s discourse” are not so much lost as transformed. Even the silences are enriched. I try to keep quiet on Sunday while my wife is reading the Times, for example. There can be a kind of intimacy involved in keeping out of the other’s way.
For an account of love in this other sense, I’d recommend Harry Frankfurt’s The Reasons of Love, first published by Princeton University Press in 2004 and released in paperback just this year. The front cover announces that Frankfurt, a professor emeritus of philosophy at Princeton, is “author of the best-selling On Bullshit.”
Like the book that established the Frankfurt brand in the widest cultural marketplace, Reasons is a dry and elegant little treatise -- somehow resembling the various “manuals for reflection” from Roman or Renaissance times more than it does most contemporary academic philosophy. It consists of papers originally delivered as the Romanell-Phi Beta Kappa Lectures at Princeton in 2000, then presented again the following year as the Shearman Lectures at University College London.
The ease and accessibility of Frankfurt’s manner are somewhat misleading. There is actually an enormous amount going on within the book’s hundred pages. Despite its unassuming tone, Reasons is a late installment of Frankfurt’s work on questions of moral philosophy in general, and free will in particular. In a footnote, he points out that precision can be risky, citing a comment attributed to Niels Bohr: “He is said to have cautioned that one should never speak more clearly than one can think.” (With plenty of academic books, of course, the author faces no such danger.)
It is the second of his three lectures (titled “On Love, and Its Reasons”) that seems to me to fill in all the gaps left in Barthes’s account. Frankfurt sets his argument up so that it can apply to love of any kind -- the love of one’s family, homeland, or ideological cause, quite as much as one’s romantic partner. Indeed, the latter kind of love tends to have an admixture of messy and “vividly distracting elements” (as he terms them) that can hinder exact definition of the concept. But if the shoe fits....
For all his lucidity, Frankfurt is very alert to the paradoxical nature of love. It is not really the case that we love something because it possesses a certain quality or value. “The truly essential relationship between love and the value of the beloved,” he notes, “goes in the opposite direction. It is not necessarily as a result of recognizing their value and of being captivated by it that we love things. Rather, what we love necessarily acquires value for us because we love it.”
In that respect, Frankfurt’s understanding of love seems to follow the same lines as the thinking of a philosopher one would otherwise never confuse with him -- namely, Slavoj Žižek. For as Žižek once pointed out, if our regard for another person could be strictly reduced to a list of exactly what we found admirable or valuable about them, then the word “love” wouldn’t really apply to what we feel. And even the faults of the beloved are, for the person in love, not valid objections to feeling love. (They may drive you crazy. But the fact that they do is, in its way, a dimension of love.)
So the value of the beloved, as Frankfurt argues, is an effect of love -- not the cause. And when we love someone, we want the best for that person. In other words, we regard the beloved as an end, not as a means. “The lover desires that his beloved flourish and not be harmed,” writes Frankfurt, “and he does not desire this just for the sake of promoting some other goal.... For the lover, the condition of his beloved is important in itself, apart from any bearing it might have on other matters.”
If this sounds a little bit like the categorical imperative .... well, that’s about half right, if just barely. Kant tells us that ethical conduct requires treating other people as ends, not as means. But that imperative is universal -- and as Frankfurt says, the feeling of love is inescapably specific. “The significance to the lover of what he loves,” he writes, “is not generic; it is ineluctably particular.”
This is where things get complicated. We don’t have a lot of say or sway in regard to love. It is not that love is blind, or that passion is irrational. Sure, that too. But while the capacity to love belongs, as Frankfurt puts it, “to our most intimate and most fundamental nature,” the demands it places on each person are not subject to personal decision making.
“We cannot help it that the direction of our personal reasoning is in fact governed by the specific final ends that our love has defined for us,” writes Frankfurt. “... Whether it would be better for us to love differently is a question that we are unable to take seriously. For us, as a practical matter, the issue cannot effectively arise.”
What makes this philosophically interesting, I take it, is that love blurs the distinction between selfishness and selflessness -- between treating the beloved as an end in itself, on the one hand, and the fact that the beloved is my beloved, in particular, on the other.
Quite a bit of ink has been spilled, over time, regarding the question of whether or not it is possible, or desirable, to establish universal principles that could be applied without reference to the local or personal interests of moral agents. “The ambition to provide an exhaustively rational warrant for the way we conduct our lives is misconceived,” says Frankfurt. But that doesn’t mean that the alternative to “the pan-rationalist fantasy” is imagining human beings to be totally capricious, completely self-inventing, or intractably self-absorbed.
Nor does it mean that, like the song says, “All you need is love.” Love simplifies nothing. At the same time, it makes life interesting -- and possible.
“The fact that we cannot help loving,” as Frankfurt puts it, “and that we therefore cannot help being guided by the interests of what we love, helps to ensure that we neither flounder aimlessly nor hold ourselves back from definitive adherence to a meaningful practical course.”
Okay, so Harry Frankfurt is not the most lyrical of philosophers. Still, he has his moments. Roland Barthes wrote that the lover’s discourse consists of stray sentences -- never adding up to a coherent work, let alone anything with a structure, like a novel. But when Frankfurt says that love ensures that “we neither flounder aimlessly nor hold ourselves back from definitive adherence to a meaningful practical course,” it does seem to gesture toward a story.
A recognizable story. A familiar story. (One that includes the line, “Before we met...”) A story I am living, as perhaps you are, too.
During the early decades of the 20th century, a newspaper called The Avery Boomer served the 200 or so citizens of Avery, Iowa. It was irregular in frequency, and in other ways as well. Each issue was written and typeset by one Axel Peterson, a Swedish immigrant who described himself as "lame and crippled up," and who had to make time for his journalistic labors while growing potatoes. A member of the Socialist Party, he had once gained some notoriety within it for proposing that America’s radicals take over Mexico to show how they would run things. Peterson was well-read. He developed a number of interesting and unusual scientific theories -- also, it appears, certain distinctive ideas about punctuation.
Peterson regarded himself, as he put it, as "a Social Scientist ... developing Avery as a Social Experiment Station" through his newspaper. He sought to improve the minds and morals of the townspeople. This was not pure altruism. Several of them owed Peterson money; by reforming the town, he hoped to get it back.
But he also wanted citizens to understand that Darwin's theory of evolution was a continuation of Christ's work. He encouraged readers to accelerate the cause of social progress by constantly asking themselves a simple question: "What would Jesus do?"
I discovered the incomparable Peterson recently while doing research among some obscure pamphlets published around 1925. So it was a jolt to find that staple bit of contemporary evangelical Christian pop-culture -- sometimes reduced to an acronym and printed on bracelets -- in such an unusual context. But no accident, as it turns out: Peterson was a fan of the Rev. Charles M. Sheldon’s novel In His Steps (1896), which is credited as the source of the whole phenomenon, although Sheldon could hardly have anticipated its mass-marketing a century later.
Like my wild potato-growing Darwinian socialist editor, Sheldon thought that asking WWJD? would have social consequences. It would make the person asking it “identify himself with the great causes of Humanity in some personal way that would call for self-denial and suffering,” as one character in the novel puts it.
Not so coincidentally, Garry Wills takes a skeptical look at WWJD in the opening pages of his new book, What Jesus Meant, published by Viking. He takes it as a variety of spiritual kitsch -- an aspect of the fundamentalist and Republican counterculture, centered around suburban mega-churches that place a premium on individual salvation.
In any case, says Wills, the question is misleading and perhaps dangerous. The gospels aren’t a record of exemplary moments; the actions of Jesus are not meant as a template. “He is a divine mystery walking among men,” writes Wills. “The only way we can directly imitate him is to act as if we were gods ourselves -- yet that is the very thing he forbids.”
Wills, a professor emeritus of history at Northwestern University, was on the way to becoming a Jesuit when he left the seminary, almost 50 years ago, to begin writing for William F. Buckley at The National Review. At the time, that opinion magazine had a very impressive roster of conservative literary talent; its contributors included Joan Didion, Hugh Kenner, John Leonard, and Evelyn Waugh. (The mental firepower there has fallen off a good bit in the meantime.) Wills came to support the civil rights movement and oppose the Vietnam war, which made for a certain amount of tension; he parted ways with Buckley’s journal in the early 1970s. The story is told in his Confessions of a Conservative (1979) -- a fascinating memoir, intercalated with what is, for the nonspecialist anyway, an alarmingly close analysis of St. Augustine’s City of God.
Today -- many books and countless articles later -- Wills is usually described as a liberal in both politics and theology, though that characterization might not hold up under scrutiny. His outlook is sui generis, like that of some vastly more learned Axel Peterson.
His short book on Jesus is a case in point. You pick it up expecting (well, I did, anyway) that Wills might be at least somewhat sympathetic to the efforts of the Jesus Seminar to identify the core teachings of the historical Jesus. Over the years, scholars associated with the seminar cut away more and more of the events and sayings attributed to Jesus in the four gospels, arguing that they were additions, superimposed on the record later.
After all this winnowing, there remained a handful of teachings -- turn the other cheek, be a good Samaritan, love your enemies, have faith in God -- that seemed anodyne, if not actually bland. This is Jesus as groovy rabbi, urging everybody to just be nice. Which, under the circumstances, often seems to be the limit of moral ambition available to the liberal imagination.
Wills draws a firm line between his approach and that of the Jesus Seminar. He has no interest in the scholarly quest for “the historical Jesus,” which he calls a variation of fundamentalism: “It believes in the literal sense of the Bible,” writes Wills, “it just reduces the Bible to what it can take as literal quotations from Jesus.” Picking and choosing among the parts of the textual record is anathema to him: “The only Jesus we have,” writes Wills, “is the Jesus of faith. If you reject the faith, there is no reason to trust anything the Gospels say.”
He comes very close to the position put forward by C.S. Lewis, that evangelical-Christian favorite. “A man who was merely a man and said the sort of things Jesus said,” as Lewis put it, “would not be a great moral teacher. He would either be a lunatic -- on a level with the man who says he is a poached egg -- or else he would be the Devil of Hell. You must make your choice. Either this man was, and is, the Son of God; or else a madman or something worse.”
That’s a pretty stark range of alternatives. For now I’ll just dodge the question and run the risk of an eternity in weasel hell. Taking it as a given that Jesus is what the Christian scriptures say he claimed to be -- “the only-begotten Son of the Father” -- Wills somehow never succumbs to the dullest consequence of piety, the idea that Jesus is easy to understand. “What he signified is always more challenging than we expect,” he writes, “more outrageous, more egregious.”
He was, as the expression goes, transgressive. He “preferred the company of the lowly and despised the rich and powerful. He crossed lines of ritual purity to deal with the unclean -- with lepers, the possessed, the insane, with prostitutes and adulterers and collaborators with Rome. (Was he subtly mocking ritual purification when he filled the waters with wine?) He was called a bastard and was rejected by his own brothers and the rest of his family.”
Some of that alienation had come following his encounter with John the Baptist -- as strange a figure as any in ancient literature: “a wild man, raggedly clad in animal skins, who denounces those coming near to him as ‘vipers offspring.’” Wills writes that the effect on his family must have been dismaying: “They would have felt what families feel today when their sons or daughters join a ‘cult.’”
What emerges from the gospels, as Wills tells it, is a figure so abject as to embody a kind of permanent challenge to any established authority or code of propriety. (What would Jesus do? Hang out on skid row, that’s what.) His last action on earth is to tell a criminal being executed next to him that they will be together in paradise.
Wills says that he intends his book to be a work of devotion, not of scholarship. But the latter is not lacking. He just keeps it subdued. Irritated by the tendency for renderings of Christian scripture to have an elevated and elegant tone, Wills, a classicist by training, makes his own translations. He conveys the crude vigor of New Testament Greek, which has about as much in common with that of Plato as the prose of Mickey Spillane does with James Joyce. (As Nietzsche once put it: “It was cunning of God to learn Greek when He wished to speak to man, and not to learn it better.”)
Stripping away any trace of King James Version brocade, Wills leaves the reader with Jesus’s words in something close to the rough eloquence of the public square. “I say to all you can hear me: Love your foes, help those who hate you, praise those who curse you, pray for those who abuse you. To one who punches your cheek, offer the other cheek. To one seizing your cloak, do not refuse the tunic under it. Whoever asks, give to him. Whoever seizes, do not resist. Exactly how you wish to be treated, in that way treat others.... Your great reward will be that you are the children of the Highest One, who also favors ingrates and scoundrels.”
A bit of sarcasm, perhaps, there at the end -- which is something I don’t remember from Sunday school, though admittedly it has been a while. The strangeness of Jesus comes through clearly; it is a message that stands all “normal” values on their head. And it gives added force to another remark by Nietzsche: “In truth, there was only one Christian, and he died on the cross.”
For better and for worse, the American reception of contemporary French thought has often followed a script that frames everything in terms of generational shifts. Lately, that has usually meant baby-boomer narcissism -- as if the youngsters of '68 don't have enough cultural mirrors already. Someone like Bernard-Henri Lévy, the roving playboy philosopher, lends himself to such branding without reserve. Most of his thinking is adequately summed up by a thumbnail biography -- something like, "BHL was a young Maoist radical in 1968, but then he denounced totalitarianism, and started wearing his shirts unbuttoned, and the French left has never recovered."
Nor are American academics altogether immune to such prepackaged blendings of theory and lifestyle. Hey, you -- the Foucauldian with the leather jacket that doesn't fit anymore.... Yeah, well, you're complicit too.
But there are thinkers who don't really follow the standard scripts very well, and Pierre Rosanvallon is one of them. Democracy Past and Future, the selection of his writings just published by Columbia University Press, provides a long overdue introduction to a figure who defies both sound bites and the familiar academic division of labor. Born in 1948, he spent much of the 1970s as a sort of thinker-in-residence for a major trade union, the Confédération Française Démocratique du Travail, for which he organized seminars and conferences seeking to create a non-Marxist "second left" within the Socialist Party. He emerged as a theoretical voice of the autogestion (self-management) movement. His continuing work on the problem of democracy was honored in 2001 when he became a professor at the Collège de France, where Rosanvallon lectures on the field he calls "the philosophical history of the political."
Rosanvallon has written about the welfare state. Still, he isn't really engaged in political science. He closely studies classical works in political philosophy -- but in a way that doesn't quite seem like intellectual history, since he's trying to use the ideas as much as analyze them. He has published a study of the emergence of universal suffrage that draws on social history. Yet his overall project -- that of defining the essence of democracy -- is quite distinct from that of most social historians. At the same time (and making things all the more complicated) he doesn't do the kind of normative political philosophy one now associates with John Rawls or Jürgen Habermas.
Intrigued by a short intellectual autobiography that Rosanvallon presented at a conference a few years ago, I was glad to see the Columbia volume, which offers a thoughtful cross-section of texts from the past three decades. The editor, Samuel Moyn, is an assistant professor of history at Columbia. He answered my questions on Rosanvallon by e-mail.
Q: Rosanvallon is of the same generation as BHL. They sometimes get lumped together. Is that inevitable? Is it misleading?
A: They are really figures of a different caliber and significance, though you are right to suggest that they lived through the same pivotal moment. Even when he first emerged, Bernard-Henri Lévy faced doubts that he mattered, and a suspicion that he had fabricated his own success through media savvy. One famous thinker asked whether the "new philosophy" that BHL championed was either new or philosophy; and Cornelius Castoriadis attacked BHL and others as "diversionists." Yet BHL drew on some of the same figures Rosanvallon did -- Claude Lefort for example -- in formulating his critique of Stalinist totalitarianism. But Lefort, like Castoriadis and Rosanvallon himself, regretted the trivialization that BHL's meteoric rise to prominence involved.
So the issue is what the reduction of the era to the "new philosophy" risks missing. In retrospect, there is a great tragedy in the fact that BHL and others constructed the "antitotalitarian moment" (as that pivotal era in the late 1970s is called) in a way that gave the impression that a sententious "ethics" and moral vigilance were the simple solution to the failures of utopian politics. And of course BHL managed to convince some people -- though chiefly in this country, if the reception of his recent book is any evidence -- that he incarnated the very "French intellectual" whose past excesses he often denounced.
In the process, other visions of the past and future of the left were ignored. The reception was garbled -- but it is always possible to undo old mistakes. I see the philosophy of democracy Rosanvallon is developing as neither specifically French nor of a past era. At the same time, the goal is not to substitute a true philosopher for a false guru. The point is to use foreign thinkers who are challenging to come to grips with homegrown difficulties.
Q: Rosanvallon's work doesn't fit very well into some of the familiar disciplinary grids. One advantage of being at the Collège de France is that you get to name your own field, which he calls "the philosophical history of the political." But where would he belong in terms of the academic terrain here?
A: You're right. It's plausible to see him as a trespasser across the various disciplinary boundaries. If that fact makes his work of potential interest to a great many people -- in philosophy, politics, sociology, and history -- it also means that readers might have to struggle to see that the protocols of their own disciplines may not exhaust all possible ways of studying their questions.
But it is not as if there have not been significant interventions in the past -- from Max Weber for example, or Michel Foucault in living memory -- that were recognized as doing something relevant to lots of different existing inquiries. In fact, that point suggests that it may miss the point to try to locate such figures on disciplinary maps that are ordinarily so useful. If I had to sum up briefly what Rosanvallon is doing as an intellectual project, I would say that the tradition of which he's a part -- which includes his teacher Lefort as well as some colleagues like Marcel Gauchet and others -- is trying to replace Marxism with a convincing alternative social theory.
Most people write about Marxism as a political program, and of course any alternative to it will also have programmatic implications. But Marxism exercised such appeal because it was also an explanatory theory, one that claimed, by fusing the disciplines, to make a chaotic modern history -- and perhaps history as a whole -- intelligible. Its collapse, as Lefort's own teacher Maurice Merleau-Ponty clearly saw, threatened to leave confusion in its wake unless some alternative to it were available. (Recall Merleau-Ponty's famous proclamation: "Marxism is not a philosophy of history; it is the philosophy of history, and to renounce it is to dig the grave of reason in history.")
Rosanvallon seems to move about the disciplines because, along with others in the same school, he has been trying to put together a total social theory that would integrate all the aspects of experience into a convincing story. They call the new overall framework they propose "the political," and Rosanvallon personally has focused on making sense of democratic modernity in all its facets. Almost no one I know about in the Anglo-American world has taken up so ambitious and forbidding a transdisciplinary task, but it is a highly important project.
Q: As the title of your collection neatly sums up, Rosanvallon's definitive preoccupation is democracy. But he's not just giving two cheers for it, or drawing up calls for more of it. Nor is his approach, so far as I can tell, either descriptive or prescriptive. So what does that leave for a philosopher to do?
A: At the core of his conception of democracy, there is a definitive problem: The new modern sovereign (the "people" who now rule) is impossible to identify or locate with any assurance. Democracy is undoubtedly a liberatory event -- a happy tale of the death of kings. But it must also face the sadly intractable problem of what it means to replace them.
Of course, the history of political theory contains many proposals for discovering the general will. Yet empirical political scientists have long insisted that "the people" do not preexist the procedures chosen for knowing their will. In different words, "the people" is not a naturally occurring object. Rosanvallon's work is, in one way or another, always about this central modern paradox: If, as the U.S. Constitution for instance says, "We the people" are now in charge, it is nevertheless true that we the people have never existed together in one place, living at one time, speaking with one voice. Who, then, is to finally say who "we" are?
The point may seem either abstract or trivial. But the power of Rosanvallon's work comes from his documentation of the ways -- sometimes blatant and sometimes subtle -- that much of the course and many of the dilemmas of modern history can be read through the lens of this paradox. For example, the large options in politics can also be understood as rival answers to the impossible quandary or permanent enigma of the new ruler's identity. Individual politicians claim special access to the popular will either because they might somehow channel what everyone wants or because they think that a rational elite possesses ways of knowing what the elusive sovereign would or should want. Democracy has also been the story, of course, of competing interpretations of what processes or devices are most likely to lead to results approximating the sovereign will.
Recently, Rosanvallon has begun to add to this central story by suggesting that there have always been -- and increasingly now are -- many ways outside electoral representation in which the people can manifest their will, even as the very idea that there exists a coherent people with a single will has entered a profound crisis.
One of the more potent implications of Rosanvallon's premise that there is no right answer to the question of the people's identity is that political study has to be conceptual but also historical. Basic concepts like the people might suggest a range of possible ways for the sovereign will to be interpreted, but only historical study can uncover the rich variety of actual responses to the difficulty.
The point, Rosanvallon thinks, is especially relevant to political theorists, who often believe they can, simply by thinking hard about what democracy must mean, finally emerge with its true model, whether based on a hypothetical contract, an ideal of deliberation, or something else. But the premise also means that democracy's most basic question is not going to go away, even if there are better and worse responses.
Q: Now to consider the relationship between Rosanvallon's work and political reality "on the ground" right now. Let's start with a domestic topic: the debate over immigration. Or more accurately, the debate over the status of people who are now part of the U.S. economy, but are effectively outside the polity. I'm not asking "what would Rosanvallon do?" here, but rather wondering: Does his work shed any light on the situation? What kinds of questions or points would Rosanvallonists (assuming there are any) be likely to raise in the discussion?
A: It's fair to ask how such an approach might help in analyzing contemporary problems. But his approach always insists on restoring the burning issues of the day to a long historical perspective, and on relating them to democracy's foundational difficulties. Without pretending to guess what Rosanvallon might say about America's recent debate, let me offer a couple of suggestions about how his analysis might begin.
The controversy over immigrants is so passionate, this approach might begin by arguing, not simply because of economic and logistical concerns but also because it reopens (though it was never closed!) the question of the identity of the people in a democracy. The challenge immigrants pose, after all, is not one of inclusion simply in a cultural sense, as Samuel Huntington recently contended, but also and more deeply in a conceptual sense.
In a fascinating chapter of his longest work, on the history of suffrage, Rosanvallon takes up the history of French colonialism, including its immigrant aftermath. There he connects different historical experiences of immigrant inclusion to the conceptual question of what the criteria for exclusion are, arguing that if democracies do not come to a clear resolution about who is inside and outside their polity, they will vacillate between two unsatisfactory syndromes. One is the "liberal" response of taking mere presence on the ground as a proxy for citizenship, falsely converting a political problem into one of future social integration. The other is the "conservative" response of conceptualizing exclusion, having failed to resolve its meaning politically, in the false terms of cultural, religious, or even racial heterogeneity. Both responses avoid the real issue of the political boundaries of the people.
But Rosanvallon's more recent work allows for another way of looking at the immigration debate. In a new book coming out in French in the fall entitled "Counterdemocracy," whose findings are sketched in a preliminary and summary fashion in the fascinating postscript to the English-language collection, Rosanvallon tries to understand the proliferation of ways that popular expression occurs outside the classical parliamentary conception of representation. There, he notes that immigration is one of several issues around which historically "the people" have manifested their search for extraparliamentary voice.
For Rosanvallon, the point here is not so much to condemn populist backlash, as if it would help much simply to decry the breakdown of congressional lawmaking under pressure. Rather, one might have to begin by contemplating the historical emergence of a new form of democracy -- what he calls unpolitical democracy -- that often crystallizes today around such a hot-button topic as the status of immigrants. This reframing doesn't solve the problem, but it might help us see that its details are implicated in a general transformation of how democracy works.
Q: OK, now on to foreign policy. In some circles, the invasion of Iraq was justified as antitotalitarianism in action, and as the first stage of a process of building democracy. (Such are the beauty and inspiration of high ideals.) Does Rosanvallon's work lend itself to support for "regime change" via military means? Has he written anything about "nation building"?
A: This is a very important question. I write in my introduction to the collection about the contemporary uses of antitotalitarianism, and I do so mainly to criticize the recent drift in uses of that concept.
Of course, when the critique of totalitarianism activated a generation, it was the Soviet Union above all that drew their fire. But their critique was always understood to have its most salient implications for the imagination of reform at home, and especially for the renewal of the left. This is what has changed recently, in works of those "liberal hawks," like Peter Beinart and Paul Berman, who made themselves apologists for the invasion of Iraq in the name of antitotalitarian values. Not only did they eviscerate the theoretical substance on which the earlier critique of totalitarianism drew -- from the work of philosophers like Hannah Arendt and Claude Lefort among others -- but they wholly externalized the totalitarian threat so that their critique of it no longer had any connection to a democratic program. It became purely a rhetoric for the overthrow of enemies rather than a program for the creation or reform of democracies. In the updated approach, what democracy is does not count as a problem.
It is clear that this ideological development, with all of its real-world consequences, has spelled the end of the antitotalitarian coalition that came together across borders, uniting the European left (Eastern and Western) with American liberalism, thirty years ago. That the attempt to update and externalize that project had failed became obvious even before the Iraq adventure came to grief -- the project garnered too few allies internationally.
Now it is perfectly true that the dissolution of this consensus leaves open the problem of how democrats should think about foreign policy, once spreading it evangelistically has been unmasked as delusional or imperialistic. A few passages in the collection suggest that Rosanvallon thinks the way to democratize the world is through democratization of existing democracies -- the reinvigoration of troubled democracies is prior to the project of their externalization and duplication. Clearly this response will not satisfy anyone who believes that the main problem in the world is democracy's failure to take root everywhere, rather than its profound difficulties where it already is. But clarifying the history and present of democracy inside is of undoubted relevance to its future outside.
Q: There are some very striking passages in the book that discuss the seeming eclipse of the political now. More is involved than the withdrawal from civic participation into a privatized existence. (At the same time, that's certainly part of it.) Does Rosanvallon provide an account of how this hollowing-out of democracy has come to pass? Can it be reversed? And would its reversal necessarily be a good thing?
A: One of the most typical responses to the apparent rise of political apathy in recent decades has been nostalgia for prior societies -- classical republics or early America are often cited -- that are supposed to have featured robust civic engagement. The fashion of "republicanism" in political theory, from Arendt to Michael Sandel or Quentin Skinner, is a good example. But Rosanvallon observes that the deep explanation for what is happening is a collapse of the model of democracy based on a powerful will.
The suggestion here is that the will of the people is not simply hard to locate or identify; its very existence as the foundation of democratic politics has become hard to credit. The challenge is to respond by taking this transformation as the starting point of the analysis. And there appears to be no return to what has been lost.
But in his new work, anticipated in the postscript, Rosanvallon shows that the diagnosis may be faulty anyway. What is really happening, he suggests, is not apathy towards or retreat from politics in a simple sense, but the rise of new forms of democracy -- or counterdemocracy -- outside the familiar model of participation and involvement. New forms seeking expression have multiplied, through an explosion of devices, even if they may seem an affront to politics as it has ordinarily been conceptualized.
Rosanvallon's current theory is devoted to the project of putting the multiplication of representative mechanisms -- ones that do not fit on existing diagrams of power -- into one picture. But the goal, he says, is not just to make sense of them but also to find a way for analysis to lead to reform. As one of Rosanvallon's countrymen and predecessors, Alexis de Tocqueville, might have put it: Democracy still requires a new political science, one that can take it by the hand and help to sanctify its striving.
For further reading: Professor Moyn is co-author (with Andrew Jainchill of the University of California at Berkeley) of an extensive analysis of the sources and inner tensions of Rosanvallon's thought on democracy, available online. And in an essay appearing on the Open Democracy Web site in 2004, Rosanvallon reflected on globalization, terrorism, and the war in Iraq.