Philosophy

Falling Into the Generation Gap

A few weeks ago, sitting over a cup of coffee, a writer in his twenties told me what it had been like to attend a fairly sedate university (I think he used the word "dull") that had a few old-time New Left activists on its faculty.

"If they thought you were interested in anything besides just your career," he said, "if you cared about ideas or issues, they got really excited. They sort of jumped on you."

Now, I expected this to be the prelude to a little tribute to his professors – how they had taken him seriously, opened his mind to an earlier generation’s experience, etc. But no.

"It was like they wanted to finish their youth through you, somehow," he said. "They needed your energy. They needed you to admire them. They were hungry for it. It felt like I had wandered into a crypt full of vampires. After a while, I just wanted to flee."

It was disconcerting to hear. My friend is not a conservative. And in any case, this was not the usual boilerplate about tenured radicals seeking to brainwash their students. He was not complaining about their ideas and outlook. This vivid appraisal of his teachers was not so much ideological as visceral. It tapped into an undercurrent of generational conflict that the endless "culture wars" seldom acknowledge.

You could sum it up neatly by saying that his professors, mostly in their fifties and sixties by now, had been part of the "Baby Boom," while he belonged to "Generation X."

Of course, there was a whole segment of the population that fell between those two big cultural bins -- people born at the end of the 1950s and the start of the 1960s. Our cohort never had a name, which is probably just as well. (For one thing, we’ve never really believed that we are a "we." And besides, the whole idea of a prepackaged identity based on what year you were born seems kind of tacky.)

One effect of living in this no-man’s-land between Boomers and Xers is a tendency to feel both fascinated and repulsed by moments when people really did have a strong sense of belonging to a generation. The ambivalence is confusing. But after a while it seems preferable to nostalgia -- because nostalgia is always rather simple-minded, if not dishonest.

The recent documentary The Weather Underground (a big hit with the young-activist/antiglobalization crowd) expressed doe-eyed sadness that the terrible Amerikan War Machine had forced young idealists to plant bombs. But it somehow never mentioned that group’s enthusiasm for the Charles Manson "family." (Instead of the two-fingered hippie peace sign, Weather members flashed a three-finger salute, in honor of the fork used to carve the word "war" into one of the victims’ stomachs.) Robert McNamara and Henry Kissinger have a lot of things to answer for – but that particular bit of insanity is not one of them.

Paul Berman, who was a member of Students for a Democratic Society at Columbia University during the strike of 1968, has been writing about the legacy of the 1960s for a long time. Sometimes he does so in interesting ways, as in parts of his book A Tale of Two Utopias; and sometimes he draws lessons from history that make an otherwise placid soul pull out his hair with irritation. He has tried to sort the positive aspects of the 1960s out from the negative -- claiming all the good for a revitalized liberalism, while treating the rest as symptoms of a lingering totalitarian mindset and/or psychological immaturity.

Whatever the merits of that analysis, it runs into trouble the minute Berman writes about world history -- which he always paints in broad strokes, using bright and simple colors. In his latest book, Terror and Liberalism, he summed up the last 300 years in terms that suggested Europe and the United States had grabbed their colonies in a fit of progress-minded enthusiasm. (Economic exploitation, by Berman’s account, had nothing to do with it, or not much.) Terror and Liberalism is a small book, and easy to throw.

His essay in the new issue of Bookforum is, to my mind, part of the thoughtful, reflective, valuable side of Berman’s work. In other words, I did not lose much hair reading it.

The essay has none of that quality my friend mentioned over coffee – the morbid hunger to feast off the fresh blood of a younger generation’s idealism. Berman has fond recollections of the Columbia strike. But that is not the same as being fond of the mentality that it fostered. "Nothing is more bovine than a student movement," he writes, "with the uneducated leading the anti-educated and mooing all the way."

The foil for Berman’s reflections is the sociologist Daniel Bell, who left Columbia in the wake of the strike. At the time, Bell’s book The End of Ideology was the bête noire of young radicals. (It was the kind of book that made people so furious that they refused to read it – always the sign of the true-believer mentality in full effect.) But it was Bell’s writing on the history of the left in the United States that had the deepest effect on Berman’s own thinking.

Bell noticed, as Berman puts it, "a strange and repeated tendency on the part of the American Left to lose the thread of continuity from one generation to the next, such that each new generation feels impelled to reinvent the entire political tradition."

There is certainly something to this. It applies to Berman himself. After all, Terror and Liberalism is pretty much a jerry-rigged version of the Whig interpretation of history,  updated for duty in the War on Terror. And the memoiristic passages in his Bookforum essay are, in part, a record of his own effort to find "the thread of continuity from one generation to the next."

But something else may be implicit in Bell’s insight about the "strange and repeated tendency" to lose that thread. It is a puzzle for which I have no solution readily at hand. Namely: Why is this tendency limited to the left?

Why is it that young conservatives tend to know who Russell Kirk was, and what Hayek thought, and how Barry Goldwater’s defeat in 1964 prepared the way for Reagan’s victory in 1980? Karl Marx once wrote that "the tradition of all the dead generations weighs like a nightmare on the brain of the living." So how come the conservatives are so well-rested and energetic, while the left has all the bad dreams?


Scott McLemee writes Intellectual Affairs on Tuesdays and Thursdays.

The Consecrated Heretic

On newsstands now -- or at least the ones with a decent selection of foreign periodicals -- you can find a special number of Le Magazine littéraire devoted entirely to Jean-Paul Sartre. Last month was the 25th anniversary of the grand funeral procession in Paris that drew 50,000 people out into the spring rain to see him off. (It was the last great demonstration of the 1960s generation, as people said at the time.) And next month marks the centennial of his birth.

He was "the conscience of his times," the cover announces. That is certainly arguable. It tends to equate denunciation with ethical critique. The man who declared, in 1952, that Soviet citizens enjoyed perfect freedom to criticize their government should probably be Exhibit A for any demonstration that sometimes contrarianism is not enough.

But what is not in doubt – to judge by the rest of the issue – is that Sartre was the most-photographed philosopher in history.

I don’t mean that to be quite as sardonic as it probably sounds. One of Sartre’s definitive themes was the struggle between radical human freedom (in which each moment of one’s existence involves a choice) and the alienating experience of being defined by the gaze of the Other. It is striking to look at so many pictures of Sartre -- so many instances when the camera “caught” him, freezing him into an icon.   

It is a commonplace now to grumble that the golden age of the grand intellectual is gone, replaced by what, in the late 1990s, Pierre Bourdieu called les fast-thinkers, those well-spoken authorities who can be summoned to the television studio to "think faster than a speeding bullet." Nobody who struggles through the denser pages of Sartre’s work -- his efforts to create an "existential psychoanalysis," for example, and then to square it with the work of Marx -- will confuse him with, say, David Brooks.

The other posthumous role Sartre plays in the cultural imagination is that of the academic escapee. He was a philosopher who lived and worked "in the world," not in the university. Indeed, reading the letters he exchanged with Maurice Merleau-Ponty during their political falling-out, I was struck by a passage in which Sartre needles his ex-friend for lacking the nerve to take some distance from academic life. 

(That exchange can be found in The Debate between Sartre and Merleau-Ponty, published by Northwestern University Press; the volume also contains an important paper by Ronald Aronson, who has written the definitive book on another personal/political break, Camus and Sartre: The Story of a Friendship and the Quarrel that Ended It, which has just appeared in paperback from the University of Chicago Press.)

So the lingering afterimage of Sartre is now a double negation: Neither pundit nor professor, a mind that was free – or at least moving somewhere beyond both media and academe.

Not only did the philosopher escape their gravitational pull, but he was at war with the tendency of his own work to solidify into something inert. He rejected the Nobel Prize for Literature because, among other things, he said that he did not want to become an institution. But Sartre also spoke of thinking "against myself" – of throwing himself into the struggle "to break some bones in my own head."

Interviewed in 1959 – while finishing the first volume of his Critique of Dialectical Reason, a work whose ungainly style owes something to the fact that Sartre popped amphetamines constantly while writing it – he said he was looking forward to having his ideas “deposited in little coffins.” He wanted to be done with the ideas that he had been pursuing; he wanted to feel empty again.

"A writer is fortunate if he can attain such a state," he told the interviewer. "For when one has nothing to say, one can say everything....What is primary is what I haven’t yet written – what I intend to write (not tomorrow, but the day after tomorrow) and what perhaps I will never write."

It is an incredibly appealing notion – a dream of perfect freedom, and of constant renewal. And it is also something of an illusion. So one learned from Annie Cohen-Solal's Sartre: A Life, first published in France in 1985. (The translation has just been reissued by the New Press – one of the few publications in English, at least so far, to mark this year’s double anniversary.)

Cohen-Solal’s portrait revealed a man who was both an incredibly well-polished product of the upper echelons of the French educational system and a canny player in the game of Parisian literary politics. As his work reached an American readership in the late 1940s, its arrival was shaped by a similar combination of academic and media forces. Ann Fulton’s book Apostles of Sartre: Existentialism in America 1945-1963 (Northwestern University Press, 1999) shows how his reception benefitted from both the support of professors in the French department at Yale University and publications about him in popular magazines.

He ascended to the position of "consecrated heretic" -- a culturally sanctioned gadfly, the embodiment of social criticism, in defiance of all conformism. That is the term (coined by Bourdieu) that Paul M. Cohen applies to Sartre in Freedom’s Moment: An Essay on the French Idea of Liberty from Rousseau to Foucault (University of Chicago Press, 1997).

The consecrated heretic has no authority beyond the power of his thought and his example. Anyone unimpressed or horrified by a particular consecrated heretic is likely to notice that the role also relieves him of any responsibility or accountability.

"Responsibility" was, of course, a decisive term in the Sartrean lexicon. Each of us is ultimately responsible for the creating, from our freedom, whatever commitments give meaning to our lives. But on a less metaphysical plane of  responsibility, Sartre’s example presents an especially knotty set of problems.

In his introduction to the new edition of Cohen-Solal’s biography, Cornel West offers a judicious evaluation of the thinker’s political legacy – his courage and incisiveness in denouncing racism and colonialism, the amoral and sometimes merely dishonest nature of some of his statements about Communism, and his surprising tendency to downplay the Palestinian question. Sartre was a passionate and very firm supporter of the right of Israel to exist as a state. 

So how do you come to a final judgment on such a figure? Was he the "conscience of his time" or a symptom of its disorders?

It’s not clear that a decision is possible or desirable. At least, not on any level beyond choosing either idol-worship or contempt. I’ve been reading Sartre my entire adult life (and then some) – and have gone to both extremes, along the way.

And now, in the midst of these anniversaries, what impresses me is less any given position he assumed than his willingness -- from time to time -- to think against himself. There is much to admire in his example, and much to avoid. But more than anything else, I keep going back to him in hopes of learning how to break a few bones in my head.


Remembering Ricoeur

Paul Ricoeur -- the philosopher whose writings on hermeneutics were the cornerstone of an ambitious rethinking of the relationship between the humanities and the social sciences -- died on Friday at the age of 92.

By the late 1960s, American academic presses had made him one of the first French thinkers of his generation with a substantial body of work available in English. Even as an octogenarian, he was more productive than many scholars half his age. Late last year, the University of Chicago Press published Memory, History, Forgetting -- an enormous study of the conditions of possibility for both historical writing and moral forgiveness. His book The Course of Recognition is due from Harvard University Press this fall. And Ricoeur himself provided the ideal survey of his life and philosophical development in Critique and Conviction, a lively set of interviews that Columbia University Press issued in 1998.

At the time of his death, he was professor emeritus at both the University of Paris and the University of Chicago. "The entire European humanist tradition is mourning one of its most talented spokesmen," said a statement from the office of Jean-Pierre Raffarin, the prime minister of France, released over the weekend.

And that leads to a conundrum. It is Tuesday already, and nobody in the American media has insulted Ricoeur yet. What's going on? Have our pundits lost their commitment to mocking European intellectuals and the pointy-headed professors who read them?

At first I thought it might be that people were still tired from abusing Derrida following his death last fall. But clearly that's not it.

Over at The National Interest, for example, the neocons are already going after Jürgen Habermas, who is still very much alive, by the way. Despite being almost fanatically moderate -- his residual Marxism tempered by an admiration for the American pragmatist philosophers, combined with a hearty enthusiasm for constitutional democracy -- Habermas has criticized the invasion and occupation of Iraq. Besides, his political philosophy is grounded on the theory that the fundamental tendency of language is toward open and truthful communication. You can see where that would bother some people.

Yet Ricoeur, it seems, gave no one offense. With hindsight, that was a terrible oversight. A thinker receives significant publicity if (and only if) people speak his name in tones of apoplectic hysteria.

It was his own fault, really. Even more than Habermas, he was the soul of civility and calm intellectual labor. You will read many a page of Ricoeur before coming across any flash of apocalyptic rhetoric or stagy contrarianism. The closest thing to a bracing put-down in his writing is probably Ricoeur's definition of structuralism as "Kantianism without a transcendental subject." (Which is actually kind of funny. But not, you know, Slavoj-Zizek-obscene-joke funny.)

And while his work connected up with numerous other fields (sharing borders with linguistics, psychoanalysis, sociology, history, religious studies, literary theory, and law, just to give the short list), Ricoeur was in many ways an academic philosopher of a very traditional sort. His philosophy made no effort to jump out of its own skin -- to become, say, a form of avant-garde literature, or a conceptual weapon for guerrilla warfare. His work tends to be ambitious, edifying, cumulative, and ... well, just a little bit dull at times.

That is not meant as a criticism exactly. Many things in the world are exciting without being good for you. Some readers find that the work of Ricoeur's younger colleague Gilles Deleuze is rather like the philosophical equivalent of taking LSD. (I can attest that, yes, there are definite similarities.) But there are limits to how much of cultural life can be conducted as a rave.

Perhaps the most salutary aspect of Ricoeur's work is that it has precisely the opposite effect of either the polemical or the psychedelic modes of intellectual intervention. It synthesizes, rather than obliterates. It treats the process of interpretation as an act of opening to the possibility of communication with the entire human community -- not as the moment when Narcissus becomes fascinated by the image in the pool.

Before dealing with the substance of Ricoeur's work, however, I want first to grapple more with the uncanny silence since his death. In American public life, you're nobody until somebody hates you. Frankly, the absence of rancor now verges on the disrespectful.

A modest proposal, then. Just as the neocons are now denouncing the mild-mannered Habermas, this might be the time for leftist wingnuts to go to town on Paul Ricoeur. As ever, total ignorance is no obstacle. Here is a quick checklist of talking points ("shrieking points?") for anyone who wants to get the vitriol flowing.

(1) Paul Ricoeur was a Christian his entire life. Most of his work is secular philosophical analysis, but he did publish writings on the Bible, and even gave sermons. Despite all that unpleasantness between his Huguenot ancestors and the Roman Catholic Church in the 16th and 17th centuries, Ricoeur is known to have spent a fair bit of time in discussion with Pope John Paul II over the years. The pope cited his work, even. Ricoeur denied being a theologian. However, there are subtle echoes and parallels between his philosophical ideas and his religious beliefs.

(For maximum effect, imitate the Fox News style like so: "Some have said that Ricoeur was actually a fundamentalist who used philosophy to brainwash his students." Nobody actually thinks this, but if you repeat it enough, someone will.)

(2) Ricoeur never really joined the "theory counterculture." When he published a major work on Freud and philosophy in 1964, the Lacanians got mad at him for failing to mention their guru. He soon became "the designated enemy," as François Dosse writes in his History of Structuralism, targeted by the Marxist students around Louis Althusser. A few years later, Ricoeur was a candidate for a chair at the prestigious Collège de France -- and lost the position to Michel Foucault.

(This definitely raises questions about his commitment to destroying the phallogocentrism of dead white European males. Plus, now he is one.)

(3) In the mid-1960s, Ricoeur was a prominent advocate of  reforms in the overburdened French university system. In 1967, he became involved in the launching of a satellite campus in the Parisian suburb of Nanterre, where he soon became dean of the college of letters. Following the student uprising of May 1968, the campus turned into a scene of continuous warfare among radical factions, some armed with chains and iron bars. Ricoeur himself was physically attacked, and some faculty refused to risk coming on campus. Eventually, he requested that police patrol the campus. This only made things worse, and in March 1970 Ricoeur resigned his position and took a long leave of absence.

(In repeating this information, hint that Ricoeur was actually a man of the right. Ignore the fact that Ricoeur was a pacifist, a supporter of the left-wing journal Esprit, and an outspoken opponent of French policy during the Algerian war. Just stress that he called the cops.)

(4) Finally, the clincher. Last November, Ricoeur was named as one of the winners of the John W. Kluge Prize for Lifetime Achievement in the Humanities and Social Sciences, given by the Library of Congress. He split the award of $1 million with Jaroslav Pelikan, a scholar of religious history.

(Note how suspicious it is that he received this award right after Derrida died. Imply that it was the religious right's way of rewarding an intellectual lapdog. Added benefit: now you have an excuse not to read him.)
     
Well, that was depressing, even as an exercise in satire.

In the middle of writing it, I learned of Russell Arben Fox's "Thoughts on Ricoeur." It's the first really substantial blogospheric commentary on the philosopher's death to come my way -- and a sign that perhaps things are not so dire, after all.

As Fox notes, Ricoeur was one of those thinkers it proved easy to "save for later." Beyond a certain level of familiarity, it seemed hopeless to try to catch up with him, because he was just too prolific. And there was also the fact that his work seems to have gone through somewhere between three and five stages of development.

Be that as it may, I'll try on Thursday to give a thumbnail account of what was (and still is) at stake in his work. Or at least what I know of it.

In the meantime, you might have a look at Ricoeur's acceptance speech for the Kluge Prize. Knowing that he wrote it at the age of 91 is a reminder of one benefit of Paul Ricoeur's work: Reading it keeps you humble.


Scott McLemee writes Intellectual Affairs on Tuesdays and Thursdays. In February, he wrote two columns about a scholarly conference on Derrida's legacy.

Locating Bourdieu

Pierre Bourdieu had a way of getting under one's skin. I don't mean his overtly polemical works -- the writings against globalization and neoliberalism that appeared toward the end of his life, for example. He was at his most incisive and needling in works that were much less easy for the general public to read. Bourdieu's sociological research kept mapping out the way power, prestige, and exclusion work in the spheres of politics, economics, and culture.

He was especially sharp (some thought brutal) in analyzing the French academic world. At the same time, he did very well in that system; very well indeed. He was critical of the way some scholars used expertise in one field to leverage themselves into positions of influence having no connection with their training or particular field of competence. It could make him sound like a scold. Yet it often felt like Bourdieu might be criticizing his own temptation to become an oracle.

In the course of my own untutored reading of Bourdieu over the years, there came a moment when the complexity of his arguments and the aggressiveness of his insights suddenly felt like manifestations of a personality that was angry on the surface, and terribly disappointed somewhere underneath. His tone registered an acute (even an excruciating) ambivalence toward intellectual life in general and the educational system in particular. 

Stray references in his work revealed glimpses of Bourdieu as a "scholarship boy" from a family that was both rural and lower-middle class. You learned that he had trained to be a philosopher in the best school in the country. Yet there was also the element of refusal in even his most theoretical work -- an almost indignant rejection of the role of Master Thinker (played to perfection in his youth by Jean-Paul Sartre) in the name of empirical sociological research.

There is now a fairly enormous secondary literature on Bourdieu in English. Of the half-dozen or so books on him that I've read in the past few years, one has made an especially strong impression, Deborah Reed-Danahay's recent study Locating Bourdieu (Indiana University Press, 2005). Without reducing his work to memoir, she nonetheless fleshes out the autobiographical overtones of Bourdieu's major concepts and research projects. (My only complaint about the book is that it wasn't published 10 years ago: Although it is a monograph on his work rather than an introductory survey, it would also be a very good place for the new reader of Bourdieu to start.)

Reed-Danahay is a professor of anthropology at the University of Texas at Arlington. She recently answered a series of questions by e-mail.

Q: Bourdieu published sociological analyses of the Algerian peasantry, the French academic system, the work of Martin Heidegger, and patterns of attendance at art galleries -- to give only a very incomplete list. Yet his work seems much more focused and coherent than a catalog of topics would suggest. Can you sum up the gist of his work, or rather how it all holds together?

A: Yes, I agree that, at first glance, Bourdieu's work seems to cover a disparate series of studies. When I read Bourdieu's work on education in France after first being exposed to his Algerian peasant studies in my graduate work in anthropology, I wondered if this was the same person. But when the entire corpus is taken together, and when one carefully reads Bourdieu's many texts that returned to themes brought up earlier in his work, one can see several underlying themes and recurring intellectual questions.

One way to get a handle on his work is to realize that Bourdieu was interested in explaining social stratification, and the hierarchy of social values, in contemporary capitalist societies. He wanted to study systems of domination in a way that held some room for social agency but without a notion of complete individual freedom. Bourdieu often evoked Althusser as an example of a theorist who had too mechanical a view of internalized domination, while Sartre represented the opposite extreme of a philosopher who posited free will.  

Bourdieu believed that we are all constrained by our internalized dispositions (our habitus), deriving from the milieu in which we are socialized, which influence our world view, values, expectations for the future, and tastes. These attributes are part of the symbolic or cultural capital of a social group. 

In a stratified society, a higher value is associated with the symbolic capital of members of the dominant sectors versus the less dominant and "controlled" sectors of society. So people who go to museums and like abstract art, for instance, are expressing a form of symbolic capital that is more highly valued than that of someone who either rarely goes to museums or who doesn't like abstract art. The person feels that this is "just" a matter of taste, but this can have important consequences for children at school who have not been exposed to various forms of symbolic capital by their families.

Bourdieu studied both social processes (such as the French educational system, Algerian postcolonial economic dislocations, or the rural French marriage system), and individual figures and their social trajectories -- including Heidegger, Flaubert, and an Algerian worker. Bourdieu was trying to show how the choices these people made (and he often wrote of choices that were not really choices) were expressions of the articulation of habitus and the social field in which it is operating.

Q: Something about his career always seemed paradoxical. Sartre was always his worst-case reference, for example. But by the time of his death in 2002, Bourdieu was regarded as the person who had filled Sartre's shoes. Has your work given you a sense of how to resolve this seeming contradiction?

A: There is a lot of silence in Bourdieu's work on the ways in which he acquired power and prestige within the French academic system or about how he became the most famous public intellectual in France at the time of his death. He was more self-reflexive about earlier periods in his life. I have trouble defending Bourdieu in this contradiction about his stance toward the public intellectual, even though I applaud his political engagements. 

I think that Bourdieu felt he had more authority to speak to some of these issues than did other academics, given his social origins and empirical research in Algeria and France among the underclass. Bourdieu repeatedly positioned the sociologist (and one can only assume he meant himself here, too) as having a privileged perspective on the "reality" of systems of domination.

Bourdieu was very critical of Sartre for speaking out about the war in Algeria and for championing a sort of revolutionary spirit among Algerians. Bourdieu accused him of trying to be a "total intellectual" who could speak on any topic and who did not understand the situation in Algeria as profoundly as did the sociologist-ethnologist Bourdieu. When Bourdieu himself became more visible in his own political views (particularly in attacks against globalization and neo-liberalism), he does seem to have acted like the "journalist"-academics he lampooned in Homo Academicus. Nevertheless, when he was criticizing (in his essay On Television) what he saw as the necessity for "fast thinking" on television talk shows in France, where talking heads must quickly have something to say about anything, Bourdieu did (in his defense) refrain from pontificating about anything and everything.

There is still a huge controversy raging in France about Bourdieu's political engagements. His detractors vilify him for his attacks against other intellectuals and journalists while he became a public intellectual himself. His defenders have published a book of his political writings (Interventions, 1961-2001) seeking to show his long-standing commitments, and continue to guard his reputation beyond the grave.

I cannot help but think that Bourdieu's public combative persona, and his (in his own terms) refusals and ruptures, helped rather than thwarted his academic career. The degree to which this was calculated or (as he claimed) was the result of the "peasant" habitus he acquired growing up in southwestern France, is unknown.

Q: So much of his analysis of academic life is focused on the French university system that there is always a question of how well it could apply elsewhere. I'm curious about your thoughts on this. What's it been like to move between his concepts and models and your own experience as an American academic? 

A: I see two ways to answer your question. Certainly, in the specifics, French academia is very different. I have experienced that directly. My own American cultural values of independence (which may, I am aware, be a total illusion) conflict with those of many French academics. 

When I first arrived in France to do my dissertation fieldwork, I came with a grant that opened some doors to French academia, but I had little direct sponsorship by powerful patrons in the U.S. I was doing a project that had little to do with the work of my professors, none of whom had done research in France or Europe, and it was something that I had come up with on my own. This was surprising to the French, who were familiar with a patron-client system of professor/student relations. Most of the graduate students I met in France were involved in projects related to the work of their professors.

French academia, still centralized in Paris despite attempts at decentralization, is a much smaller universe than that of the vast American system. There is little room for remaining "outside" of various polemics there. I've learned, for instance, that some people whom I like and admire in France hated Bourdieu and that Bourdieu followers tend to be very fierce in their defense of him and want to promote their view of his work.

This is not to say that American academia doesn't have similar forces operating, but there are multiple points of value and hierarchy here. Whereas Bourdieu could say that Philosophy dominated French academia during the mid-20th century, it is harder to pinpoint one single dominant intellectual framework here.

I do, however, feel that Bourdieu's critique of academia as part of a larger project of the study of power (which he made very explicit in The State Nobility) is applicable beyond France. His work on academia provided us with a method of inquiry to look at the symbolic capital associated with academic advancement and, although the specific register of this will be different in different national contexts, the process may be similar. Just as Bourdieu did in France, for example, one could study how it is that elite universities here "select" students and professors.

Q: We have a memoir of Sartre's childhood in The Words. Is there anything comparable for Bourdieu?

A: Bourdieu produced self-referential writings that began to appear in the late 1990s, with "Impersonal Confessions" in Pascalian Meditations (1997), a section called "Sketch for a Self-Analysis" in his final lectures to the Collège de France, Science of Science and Reflexivity (2001), and then the stand-alone volume Esquisse pour une Auto-Analyse, published posthumously in 2004. [Unlike the other titles listed, this last volume is not yet available in English. -- S.M.]

A statement by Bourdieu that "this is not an autobiography" appears as an epigraph to the 2004 essay. I find his autobiographical writings interesting because they show us a bit about how he wanted to use his own methods of socio-analysis on himself and his own life, with a focus particularly on his formative years -- his childhood, his education, his introduction to academia, and his experiences in Algeria.

Bourdieu was uncomfortable with what he saw as narcissism in much autobiography, and also was theoretically uncomfortable with life stories that stressed the individual as hero without sufficient social analysis. He had earlier written an essay on the "biographical illusion" that elaborated on his biographical approach, but without self-reference. These essays are not, then, autobiographical in the conventional sense of a linear narrative of a life. Bourdieu felt that a truly scientific sociology depended on reflexivity on the part of the researcher, and by this he meant being able to analyze one's own position in the social field and one's own habitus.

At the same time, however, Bourdieu's auto-analysis was a defensive move meant to preempt his critics. Bourdieu included a section on self-interpretation in his book on Heidegger, in which he referred to it as "the riposte of the author to those interpretations and interpreters who at once objectify and legitimize the author, by telling him what he is and thereby authorizing him to be what they say he is..." (101). As Bourdieu became increasingly a figure in the public eye and increasingly a figure of analysis and criticism, he wanted to explain himself and thus turned to self-interpretation and auto-analysis.

Q: In a lot of ways, Bourdieu seems like a corrosive thinker: someone who strips away illusions, rationalizations, the self-serving beliefs that institutions foster in their members. But can you identify a kernel of something positive or hopeful in his work -- especially in regard to education? I'd like to think there is one....

A: Bourdieu had little to say about how schools and universities operate that is positive, and he was very critical of them. The hopeful kernel here is that in understanding how they operate, how they inflict symbolic violence and perpetuate the illusions that enable systems of domination, we can improve educational institutions.

Bourdieu felt strongly that by de-mystifying the discourses and aura of authority surrounding education (especially its elite forms), we can learn something useful. The trick is how to turn this knowledge into power, and Bourdieu did not have any magical solutions for this. That is work still to be done.


The Corrosion of Ethics in Higher Education

In its 1966 declaration on professional ethics, the American Association of University Professors, the professoriate’s representative organization, states:

"Professors, guided by a deep conviction of the worth and dignity of the advancement of knowledge, recognize the special responsibilities placed upon them....They hold before them the best scholarly and ethical standards of their discipline.… They acknowledge significant academic or scholarly assistance from (their students)."

Notwithstanding such pronouncements, higher education recently has provided the public with a series of ethical solecisms, most spectacularly the University of Colorado professor Ward Churchill’s recidivistic plagiarism and duplicitous claim of Native American ancestry, along with his denunciations of 9/11 victims. While plagiarism and fraud presumably remain exceptional, accusations and complaints of such wrongdoing increasingly come to light.

Some examples include Demas v. Levitsky at Cornell, where a doctoral student filed a legal complaint against her adviser’s failure to acknowledge her contribution to a grant proposal; Professor C. William Kauffman’s complaint against the University of Michigan for submitting a grant proposal without acknowledging his authorship; and charges of plagiarism against Louis W. Roberts, the now-retired classics chair at the State University of New York at Albany. Additional plagiarism complaints have been made against Eugene M. Tobin, former president of Hamilton College, and Richard L. Judd, former president of Central Connecticut State University.

In his book Academic Ethics, Neil Hamilton observes that most doctoral programs fail to educate students about academic ethics, with the result that knowledge of it is eroding. Lack of emphasis on ethics in graduate programs leads to skepticism about the necessity of learning about ethics and about how to teach it. Moreover, nihilist philosophies that have gained currency within the academy itself, such as Stanley Fish’s “antifoundationalism,” contribute to the neglect of ethics education.
For these reasons academics generally do not seriously consider how ethics education might be creatively revived. In reaction to the Enron corporate scandal, for instance, some business schools have tacked an ethics course onto an otherwise ethically vacuous M.B.A. program. While a step in the right direction, a single course in a program otherwise uninformed by ethics will do little to change the program’s culture, and may even engender cynicism among students.

Similarly, until recently, ethics education had been lacking throughout the American educational system. In response, ethicists such as Kevin Ryan and Karen Bohlin have advocated a radical renewal of ethics education in elementary schools. They claim that comprehensive ethics education can improve ethical standards. In Building Character in Schools, Ryan and Bohlin compare an elementary school to a polis, or Greek city-state, and urge that ethics be fostered everywhere in the educational polis.

Teachers, they say, need to set standards and serve as ethical models for young students in a variety of ways and throughout the school. They find that manipulation and cheating tend to increase where academic achievement is prized but broader ethical values are not. They maintain that many aspects of school life, from the student cafeteria to the faculty lounge, ought to provide opportunities, among other things, to demonstrate concern for others. They also propose the use of vision statements that identify core virtues along with the implementation of this vision through appropriate involvement by staff and students.

We would argue that, like elementary schools, universities have an obligation to ethically nurture undergraduate and graduate students. Although the earliest years of life are most important for the formation of ethical habits, universities can influence ethics as well. Like the Greek polis, universities become ethical when they become communities of virtue that foster and demonstrate ethical excellence. Lack of commitment to teaching, lack of concern for student outcomes, false advertising about job opportunities open to graduates, and diploma-mill teaching practices are examples of institutional practices that corrode rather than nourish ethics on campuses.

Competency-based education, broadly considered, is increasingly of interest in business schools.  Under the competency-based approach (advocated, for example, by Rick Boyatzis of Case Western Reserve University, David Whetten of Brigham Young University, and Kim Cameron of the University of Michigan), students are exposed not only to theoretical concepts, but also to specific competencies that apply the theory. They are expected to learn how to apply in their lives the competencies learned in the classroom, for instance those relating to communication and motivating others. Important ethical competencies (or virtues) should be included and fostered alongside such competencies. Indeed, in applied programs such as business, each discipline and subject can readily be linked to ethical virtues. Any applied field, from traffic engineering to finance, can and should include ethical competencies as an integral part of each course. 

For example, one of us currently teaches a course on managerial skills, one portion of which focuses on stress management. The stress management portion includes a discussion of personal mission setting, which is interpreted as a form of stress management. The lecture emphasizes  how ethics can intersect with practical, real world decision making and how it can relate to competencies such as achievement orientation. In the context of this discussion, which is based on a perspective that originated with Aristotle, a tape is shown of Warren Buffett suggesting to M.B.A. students at the University of North Carolina that virtue is the most important element of personal success.

When giving this lecture, we have found that street-smart undergraduate business students at Brooklyn College and graduates in the evening Langone program of the Stern School of Business of New York University respond well to Buffett’s testimony, perhaps better than they would to Aristotle’s timeless discussions in Nicomachean Ethics.

Many academics will probably resist integration of ethical competencies into their course curriculums, and in recent years it has become fashionable to blame economists for such resistance. For example, in his book The Moral Dimension, Amitai Etzioni equates the neoclassical economic paradigm with disregard for ethics. Sumantra Ghoshal’s article “Bad Management Theories are Destroying Good Management Practices,” in the Academy of Management Learning & Education, blames ethical decay on the compensation and management practices that evolved from economic theory’s emphasis on incentives.

We disagree that economics has been all that influential. Instead, the problem is much more fundamental to the humanities and social sciences and has its root in philosophy. True, economics can exhibit nihilism. For example, the efficient markets hypothesis, which has influenced finance, holds that human knowledge is impotent in the face of efficient markets. This would imply that moral choice is impotent because all choice is so. But the efficient markets hypothesis is itself a reflection of a deeper and broader philosophical positivism that is now pandemic to the entire academy.
 
Over the past two centuries the assaults on the rational basis for morals have created an atmosphere that stymies interest in ethical education. In the 18th century, the philosopher David Hume wrote that one cannot derive an “ought” from an “is,” so that morals are emotional and cannot be proven true. Today’s academic luminaries have thoroughly imbibed this “emotivist” perspective. For example, Stanley Fish holds that even though academics do exhibit morality by condemning “cheating, academic fraud and plagiarism,” there is no universal morality beyond this kind of “local practice.” 

Whatever its outcome, the debate over the rational derivability of ethical laws from a set of clear and certain axioms that hold universally is of little significance in and of itself.  It will not determine whether ethics is more or less important in our lives; nor will it provide a disproof of relativism -- since defenders of relativism can still choose not to accept the validity of the derivation.

Yet ethics must still be lived -- even though the knowledge, competency, skill or talent that is needed to lead a moral life, a life of virtue, may not be derived from any clear and certain axioms. There is no need for derivation of the need, for instance, for good interpersonal skills. Rather, civilization depends on competency, skill and talent as much as it depends on practical ethics. Ethical virtue does not require, nor is it sustained by, logical derivation; it becomes most manifest, perhaps, through its absence, as revealed in the anomie and social decline that ensue from its abandonment.  Philosophy is beside the point.

Based on much evidence of such a breakdown, ethics education experts such as Thomas Lickona of the SUNY College at Cortland have concluded that to learn to act ethically, human beings need to be exposed to living models of ethical emotion, intention and habit. Far removed from such living models, college students today are incessantly exposed to varying degrees of nihilism: anti-ethical or disembodied, hyper-rational positions that Professor Fish calls “poststructuralist” and “antifoundationalist.” In contrast, there is scant emphasis in universities on ethical virtue as a pre-requisite for participation in a civilized world. Academics tend to ignore this ethical pre-requisite, preferring to pretend that doing so has no social repercussions.

They are disingenuous – and wrong.

It is at the least counterintuitive to deny that the growing influence of nihilism within the academy is deeply, and causally, connected to increasing ethical breaches by academics (such as the cases of plagiarism and fraud that we cited earlier). Abstract theorizing about ethics has most assuredly affected academics’ professional behavior.

The academy’s influence on behavior extends, of course, far beyond its walls, for students carry the habits they have learned into society at large. The Enron scandal, for instance, had more roots in the academy than many academics have realized or would care to acknowledge. Kenneth Lay, Enron’s former chairman, holds a Ph.D. in economics from the University of Houston. Jeff Skilling, Enron’s former CEO, is a Harvard M.B.A. who had been a partner at the McKinsey consulting firm, one of the chief employers of top-tier M.B.A. graduates. According to Malcolm Gladwell in The New Yorker, Enron had followed McKinsey’s lead, habitually hiring the brightest M.B.A. graduates from leading business schools, most often from the Wharton School. Compared to most other firms, it had more aggressively placed these graduates in important decision-making posts. Thus, the crimes committed at Enron cannot be divorced from decision-making by the best and brightest of the newly minted M.B.A. graduates of the 1990s.

As we have seen, the 1966 AAUP statement implies the crucial importance of an ethical foundation to academic life. Yet ethics no longer occupies a central place in campus life, and universities are not always run ethically. With news of academic misdeeds (not to mention more spectacular academic scandals, such as the Churchill affair) continuing to unfold, the public rightly grows distrustful of universities.

It is time for the academy to heed the AAUP’s 1915 declaration, which warned that if the professoriate “should prove itself unwilling to purge its ranks of ... the unworthy ... it is certain that the task will be performed by others.”

Must universities learn the practical value of ethical virtue by having it imposed from without?  Or is ethical revival possible from within? 


Candace de Russy is a trustee of the State University of New York and a Hudson Institute Adjunct Fellow. Mitchell Langbert is associate professor of business at Brooklyn College of the City University of New York.

Violence and the Sacred

Passing a Roman Catholic bookshop not long ago, I noticed a window display of books by and about Pope Benedict XVI, including a volume of interviews done back when he was Cardinal Joseph Ratzinger. The acquisitive urge was short-circuited by the fact that the store was closed. And in any case, I'll probably get an earful about his doctrines and policies soon enough from my mother-in-law. She's a Vatican II-type liberal who writes for a dissident Catholic newspaper,  of the kind likely to be amused by the rumor that the new pontiff's "street name" is Joey Rats.

Eventually,  the right combination of free time and impulse book-buying will make it feasible to catch up with the pope's thinking straight from the source. But for now, it's interesting to see that the summer issue of New Perspectives Quarterly has an interview about Benedict XVI with the literary theorist René Girard, who is now professor emeritus in French at Stanford University.

The introduction to the interview describes him as a professor of anthropology --  a mistake, but an interesting one.
 
Beginning in the late 1950s, Girard published a series of analyses of Cervantes, Shakespeare, Dostoevsky, and Proust (among others) that foregrounded their preoccupation with desire, envy, and imitation. He found that there was a recurrent structure in their work: a scenario of what he called "triangular" or "mimetic" desire. Don Quixote offers a fairly simple example. The would-be knight feels no particular longing for Dulcinea. Rather, he has thrown himself into a passionate imitation of certain models of what a knight must do -- and she's as close to a damsel as circumstances allow.

Girard argued that, at some deep level, all of human desire is like that. We learn by imitation -- and one of the things we learn is what, and how, to desire. (Hence, I didn't so much want that book in the window for its own sake, but as a means to triumph in the struggle for the position my wife calls "Ma's favorite son-in-law.")

For the most part, we are blind to the mediated nature of desire. But the great writers, according to Girard, are more lucid about this. They reveal the inner logic of desire, including its tendency to spread -- and, in spreading, to generate conflict. When several hands reach for the same object, some of them are bound to end up making fists. So begins a cycle of terror and retaliation; for violence, too, is mimetic.

By the 1970s, Girard had turned all of this into a grand theory of human culture. He described a process in which the contagion-like spread of mimetic desire and violence leads to the threat of utter social disintegration. At which point, something important happens: the scapegoat emerges. All of the free-floating violence is discharged in an act of murder against an innocent person or group which is treated (amidst the delirium of impending collapse) as the source of the conflict.

A kind of order takes shape around this moment of sacrificial violence. Myths and rituals are part of the commemoration of the act by which mimetic desire and its terrible consequences were subdued. But they aren't subdued forever. The potential for a return of this contagion is built into the very core of what makes us human.

Girard's thinking has not changed much in the 30 years or so since he published Violence and the Sacred, which appeared in France in 1972 and in an English translation from Johns Hopkins University Press in 1977. He has restated his theory any number of times, drawing in material from the various social sciences as evidence. He has spelled out some of its theological implications -- which, in Girard's own telling anyway, are profoundly Christian. He wasn't a believer when he started thinking about mimetic desire, but became a Catholic somewhere along the way. (Girard's readers have a right to expect a detailed spiritual autobiography, at some point.)

It isn't necessary to share Girard's creed to find his work of interest -- though I must admit to some uncertainty, after all this time, about how to classify his system of thought. You can trace some of his ideas back to Hegel (desire for the desire of the other), or sideways to Georges Bataille and Kenneth Burke (who both wrote about scapegoating). But there's also something reminiscent of Middlemarch about the whole thing, as if Girard were trying to finish Edward Casaubon's "Key to All Mythologies."

Girard has a small academic following, organized as the Colloquium on Violence and Religion, which produces an interdisciplinary journal called Contagion: Journal of Violence, Mimesis, and Culture. And there's a useful annotated bibliography of works by and about him available online.

The interview in this summer's issue of New Perspectives Quarterly is interesting, not just for Girard's comments on the new head of his own church, but for his thoughts on the dangers of mimetic desire in a global marketplace. One counterintuitive element of Girard's theory is that scapegoating is not the product of difference. Rather, he holds that mimetic desire and the resulting cycle of conflict tend to reduce people to the same level. (The moment of savage violence against the scapegoat is an effort to create a difference, a structure, an order in the chaos of sameness.) That would be the dark side of Tom Friedman's peppy thesis about how the world is now "flat."

The interview is also striking for Girard's full-throated proclamation that Christianity, alone among religions, can face the truth about mimetic desire. In a smart and welcome move, the editors of the Quarterly have invited the comments of someone from another religious tradition with very definite ideas about the intimate relationship between desire and human misery, Pankaj Mishra, author of An End to Suffering: The Buddha in the World, published last year by Farrar, Straus and Giroux.


Will to Power

For some time now, I have been collecting notes on the interaction between academics and journalists. In theory, at least, this relationship ought to be mutually beneficial -- almost symbiotic. Scholars would provide sound information and authoritative commentary to reporters -- who would then, in turn, perform the useful service of disseminating knowledge more broadly. 

So much for the theory. The practice is not nearly that sweet, to judge by the water-cooler conversation of either party, which often tends toward the insulting. From the mass-media side, the most concise example is probably H.L. Mencken's passing reference to someone as "a professor, hence an embalmer." And within the groves of academe itself, the very word "journalistic" is normally used as a kind of put-down. 

There is a beautiful symmetry to the condescension. It's enough to make an outsider -- someone who belongs to neither tribe, but regularly visits each -- wonder if some deep process of mutual-definition-by-mutual-exclusion might be going on. And so indeed I shall argue, one day, in a treatise considering the matter from various historical, sociological, and psychoanalytic vantage points. (This promises to be a book of no ordinary tedium.)

A fresh clipping has been added to my research file in the past couple of days, since reading Brian Leiter's objection to a piece on Nietzsche appearing in last weekend's issue of The New York Times Book Review. The paper asked the novelist and sometime magazine writer William Vollmann to review a biography of Nietzsche, instead of, let's say, an American university professor possessing some expertise on the topic.

For example, the Times editors might well have gone to Leiter himself, a professor of philosophy at UT-Austin and the author of a book called Nietzsche on Morality, published three years ago by Routledge. And in a lot of ways, I can't help wishing that they had. It would have made for a review more informative, and less embarrassingly inept, than the one that ran in the paper of record. 

Vollmann's essay is almost breathtaking in its badness. It manages to drag the conversation about Nietzsche back about 60 years by posing the question of whether or not Nietzsche was an anti-Semite or a proto-Nazi. He was not, nor is this a matter any serious person has discussed in a very long time. (The role of his sister, Elisabeth Förster-Nietzsche, is an entirely different matter: Following his mental collapse, she managed to create a bizarre image of him as theorist of the Teutonic master-race, despite Nietzsche's frequent and almost irrepressible outbursts of disgust at the German national character.)

And while it is not too surprising that a review of a biography of a philosopher would tend to focus on, well, his life -- and even on his sex life, such as it was for the celibate Nietzsche -- it is still reasonable to expect maybe a paragraph or two about his ideas. Vollmann never gets around to that. Instead, he offers only the murkiest of panegyrics to Nietzsche's bravery and transgressive weirdness -- as if he were a contestant in some X Games of the mind, or maybe a prototype of Vollmann himself. (Full disclosure: I once reviewed, for the Times in fact, Vollmann's meditation on the ethics of violence -- a work of grand size, uncertain coherence, and sometimes baffling turgidity. That was six weeks of my life I will never get back.)

Leiter has, in short, good reason to object to the review. And there are grounds, too, for questioning how well the Times has served as (in his words) "a publication that aspires to provide intellectual uplift to its non-scholarly readers."

Indeed, you don't even have to be an academic to feel those reservations. Throughout the late 1990s and early 2000s, for example, many readers would spend their Saturday afternoons studying a weekly section of the Times called "Arts and Ideas," trying to figure out where the ideas were.
By a nice paradox, though, the coverage of ideas improved, at least somewhat, after the "Arts and Ideas" section disappeared. (See, for example, the symposium inspired earlier this summer by a Times essay on early American history.) 

So while reading Leiter's complaint with much sympathy, I also found some questions taking shape about its assumptions -- and about his way of pressing the point on Vollmann's competence.
For one thing, Leiter takes it as a given that the best discussion of a book on Nietzsche would come from a scholar -- preferably, it seems, a professor of philosophy. At this, however, certain alarm bells go off. 

The last occasion Leiter had to mention The New York Times was shortly after the death of Jacques Derrida. His objection was not to the paper's front-page obituary (a truly ignorant and poorly reported piece, by the way). Rather, Leiter was unhappy to find Derrida described as a philosopher. He assured his readers that Derrida was not one, and had never been taken the least bit seriously within the profession, at least here in the United States.

I read that with great interest, and with a sense of discovery. It meant that Richard Rorty isn't a philosopher, since he takes Derrida seriously. It also suggested that, say, DePaul University doesn't actually have a philosophy program, despite appearances to the contrary. (After all, so many of the "philosophy" professors there are interested in deconstruction and the like.)

One would also have to deduce from Leiter's article that the Society for Phenomenology and Existential Philosophy is playing a very subtle joke on prospective members when it lists Derrida as one of the topics of interest they might choose to circle on the form they fill out to join the organization.

An alternative reading, of course, is that some people have a stringent and proprietary sense of what is "real" philosophy, and who counts as a philosopher. And by an interesting coincidence, such people once ruled Nietzsche out of consideration altogether. Until the past few decades, he was regarded as an essayist, an aphorist, a brilliant literary artist -- but by no means a serious philosopher. ("A stimulating thinker, but most unsound," as Jeeves tells Bertie Wooster, if memory serves.) The people who read Nietzsche in the United States a hundred years ago tended to be artists, anarchists, bohemians, and even (shudder) journalists. But not academic philosophers.

In short, it is not self-evident that the most suitable reviewer of a new book on Nietzsche would need to be a professor -- let alone one who had published a book or two on him. (Once, the very idea would have been almost hopelessly impractical, because there were few, if any.) Assigning a biography of Nietzsche to a novelist instead of a scholar is hardly the case of malfeasance that Leiter suggests. If anything, Nietzsche himself might have approved: The idea of professors discussing his work would really have given the old invalid reason to recuperate.

Vollmann throws off a quick reference to "the relevant aspects of Schopenhauer, Aristotle and others by whom Nietzsche was influenced and against whom he reacted." And at this, Leiter really moves in for the kill.

"As every serious student of Nietzsche knows," he writes, "Aristotle is notable for his almost total absence from the corpus. There are a mere handful of explicit references to Aristotle in Nietzsche's writings (even in the unpublished notebooks), and no extended discussion of the kind afforded Plato or Thales. And apart from some generally superficial speculations in the secondary literature about similarities between Aristotle's 'great-souled man' and Nietzsche's idea of the 'higher' or 'noble' man -- similarities nowhere remarked upon by Nietzsche himself -- there is no scholarship supporting the idea that Aristotle is a significant philosopher for Nietzsche in any respect."

Reading this, I felt a vague mental itch. It kept getting stronger, and would not go away. For the idea that Aristotle was an important influence on Nietzsche appears in the work of the late Walter Kaufmann -- the professor of philosophy at Princeton University who re-translated Nietzsche in the 1950s and '60s.

Kaufmann published an intellectual biography that destroyed some of the pernicious myths about Nietzsche. He made the case for the coherence and substance of his work, and was merciless in criticizing earlier misinterpretations. He has had the same treatment himself, of course, at the hands of later scholars. But it was Kaufmann, perhaps more than anyone else, who made it possible and even necessary for American professors of philosophy to take Nietzsche seriously.

So when Kaufmann wrote that Nietzsche's debt to Aristotle's ethics was "considerable" .... well, maybe Leiter was right. Perhaps Kaufmann was now just a case of someone making "superficial speculations in the secondary literature." But for a nonspecialist reviewer such as Vollmann to echo it did not quite seem like an indictable offense.

So I wrote to Leiter, asking about all of this. In replying, Leiter sounded especially put out that Vollmann had cited both Schopenhauer and Aristotle as influences. (For those watching this game without a scorecard: Nobody doubts the importance of Schopenhauer for Nietzsche.)

"To reference 'Schopenhauer and Aristotle' together as important philosophical figures for Nietzsche -- as Vollmann did -- is, indeed, preposterous," wrote Leiter in one message, "and indicative of the fact that Vollmann is obviously a tourist when it comes to reading Nietzsche. The strongest claim anyone has made (the one from Kaufmann) is that there is a kind of similarity between a notion in Aristotle and a notion in Nietzsche, but not even Kaufmann (1) showed that the similarity ran very deep; or (2) claimed that it arose from Aristotle's influence upon Nietzsche."

Well, actually, yes, Kaufmann did make precisely that second claim. (He also quoted Nietzsche saying, "I honor Aristotle and honor him most highly...") And there is no real ground for construing the phrase "Schopenhauer and Aristotle" to mean "similarly and in equal measure."

There are preposterous things in the writing of William Vollmann. But a stray reference to a possible intellectual influence on Nietzsche is by no means one of them. Nor, for that matter, is the novelist's willingness to venture into a lair protected by fearsome dragons of the professoriat. I wish Vollmann had read more Nietzsche, and more scholarship on him than the biography he reviewed. But whatever else you can say about the guy, he's not a pedant.

In fact, the whole situation leaves me wondering if the problem ought not be framed differently. There is, obviously, a difference between an article in a scholarly journal and one appearing in a publication ordinarily read during breakfast (or later, in, as the saying goes, "the smallest room in the house"). It need not be a difference in quality or intelligence. Newspapers could do well for themselves by finding more professors to write for them. And the latter would probably enjoy it, not in spite of the sense of slumming, but precisely because of it.

But does it follow that the best results would come from having philosophers review the philosophy books, historians review the history books, and so forth? 

The arguments for doing so are obvious enough. But just as obvious are the disadvantages: Most readers derive little benefit when intra-disciplinary disputes and niggling points of nuance spill over into the larger public arena.

It is probably a crazy dream, even something utopian, but here is the suggestion anyway. The Times Book Review (or some other such periodical) should from time to time give over an issue entirely to academic reviewers commenting on serious books -- but with everyone obliged to review outside their specialty. Hand a batch of novels to a sociologist. Give some books on Iraq to an ethicist. Ask a physicist to write about a favorite book from childhood. 

It might not be the best set of reviews ever published. But chances are it would be memorable -- and an education for everybody.


The Žižek Effect

My ambition to write a musical about the arrival of Lacanian theory in Tito-era Yugoslavia has always hinged on the zestiness of the intended title: Žižek! The music would be performed, of course, by
Laibach, those lords of industrial-strength irony; and the moment of psychoanalytic breakthrough that Lacan called la Passe would be conveyed via an interpretative dance, to be performed by a high-stepping chorus of Slovenian Rockettes.

Alas, it was all a dream. (Either that, or a symptom.) The funding never came through, and now Astra Taylor has laid claim to the title for her documentary, shown recently at the Toronto Film Festival.

Žižek! is distributed by Zeitgeist, which also released the film Derrida. The company provided a screener DVD of Žižek! that I've now watched twice -- probably the minimum number of times necessary to appreciate the intelligence and style of Taylor's work. The director is 25 years old; this is her first documentary.

It's not just her willingness to let Slavoj Žižek be Slavoj Žižek -- responding bitterly to an orthodox
deconstructionist in the audience at a lecture at Columbia University, for example, or revisiting some familiar elements of his early work on the theory of ideology. Nor is it even her willingness to risk trying to popularize the unpopularizable. The film ventures into an account of Žižek's claim of the parallel between Marx's concept of surplus value and Lacan's "objet petit a." (This is illustrated, you may be relieved to know, via a cartoon involving bottles of Coke.)

Beyond all that, Žižek! is very smart as a film. How it moves from scene to scene -- the playful, yet coherent and even intricate relationship between structure and substance -- rewards more than one
viewing.

In an e-mail conversation with Taylor, I mentioned how surprising it was that Žižek! actually engaged with his theory. It would be much easier, after all, just to treat him as one wacky dude -- not that Žižek quite avoids typecasting himself.

"I wanted very much to make a film about ideas," she told me. "That said, I think the film betrays a certain fascination with Žižek's personality.  He's got this excess of character and charisma that can't be restrained, even when we would try to do an interview about 'pure theory.'"

Žižek! isn't a biography. (For that, you're probably better off reading Robert Boynton's profile from Lingua Franca some years ago.) Taylor says she started work with only a hazy sense of what she wanted the documentary to do -- but with some definite ideas about things she wanted to avoid. "I didn't want to make a conventional biopic," she recalls, "tracing an individual's trajectory from childhood, complete with old photographs, etc.  It's not even that I have anything against that form in particular, it just didn't seem the right approach for a film about Žižek."

Her other rule was to avoid pretentiousness. "Especially when dealing in theory, which has quite a bad name on this front, one has to be careful," she says. "I decided to veer towards the irreverent instead of the reverential. Granted, this is fairly easy when you're working with Slavoj Žižek."

Fair enough: This is the man who once explained the distinctions between German philosophy, English political economy, and the French Revolution by reference to each nation's toilet design. (Žižek runs through this analysis in the film; it also appeared last year in an article in The London Review of Books.)

Just to be on the safe side, Taylor also avoided having talking heads on screen "instructing the audience in what to think about Žižek or how to interpret his work." The viewer sees Žižek interact with people at public events, including both an enormous left-wing conference in Buenos Aires and a rather more tragically hip one in New York. But all explanations of his ideas come straight from the source.

In preparing to shoot the film, Taylor says she came up with a dozen pages of questions for Žižek, but only ended up asking two or three of them. Having interviewed him by phone a couple of years ago, I knew exactly what she meant. You pose a question. Žižek then takes it wherever he wants to go at the moment. The trip is usually interesting, but never short.

One of the funniest moments in Žižek! is a video clip from a broadcast of a political debate from 1990, when he ran for a seat on Slovenia's collective presidency as the candidate of the Liberal Democratic Party. At one point,
an old Communist bureaucrat says, "Okay, Žižek, we all know your IQ is twice that of everybody else here put together. But please, please let somebody else talk!"

Taylor says she soon realized that her role was less that of interviewer than traffic director, "giving positive or negative feedback, telling him when to stop or when he'd said enough, and directing the flow of the conversation as opposed to conducting a straightforward interview with stops and starts."

She kept a log throughout the various shoots, "summing up everything he said in what would eventually be a one hundred page Excel spreadsheet. That way, I knew what subjects had been addressed, in what setting, and if the material was useful or needed to be reshot." About halfway through the production, she and Laura Hanna, the film's editor, assembled a rough cut.

"At that point," Taylor recalls, "I began to choose various passages for the animated sequences. I knew there needed to be some recurring themes and a broader theoretical argument to underpin the film.... But that makes it sound too easy and rational.  The majority of choices were more intuitive, especially at the beginning when we were trying to cut down eighty hours of raw footage. When you're editing a film it is as much about what feels right, what flows, as what makes sense logically."

One really inspired moment came when Taylor learned of Jacques Lacan's appearance on French educational television in the early 1970s. She obtained a copy of the program and sat down with Žižek in his apartment to watch it.

The transcript of Lacan's enigmatic performance is available as the book Television: A Challenge to the Psychoanalytic Establishment (Norton, 1991). But to get the full effect, you really have to see Lacan in action: Self-consciously inscrutable, yet also suave, he utters short and gnomic sentences, looking for all the world like Count Dracula ready for a nap after a good meal.

The contrast with the stocky and plebeian Žižek (a bundle of energy and nervous tics) is remarkable; and so is the highly ambivalent way he responds to hearing his Master's voice. Žižek takes pride in being called a dogmatic Lacanian. But the video clearly bothers him.

"I think Žižek reacts to the footage on different registers at once," as Taylor puts it, "which is what makes the scene so interesting.  He's obviously disturbed by Lacan's delivery, which seems very staged and pompous. Yet he attempts to salvage the situation by discussing how the very idea of a 'true self' is ideological or by arguing that the substance of Lacan's work should not be judged by his style."

The scene is also plenty meta. We are watching footage in which the most psychoanalytic of philosophers watches a video of the most philosophical of psychoanalysts. And yet somehow it does not feel the least bit contrived. If anything, there is something almost voyeuristically fascinating about it.

Taylor told me that the sequence "evokes what I see as one of the film's central themes: the predicament of the public intellectual today, and Žižek's strategies for coping with it."

Early in the documentary -- and again at the end -- he denounces the fascination with him as an individual, insisting that the only thing that matters is his theoretical work. He gives a list of what he regards as his four really important books: The Sublime Object of Ideology, For They Know Not What They Do, The Ticklish Subject, and a work now in progress that he has provisionally titled The Parallax View (a.k.a. the sequel to Ticklish).

There is a clear hint that his other and more popular books are negligible by contrast; he speaks of wanting to kill his doppelganger, the wild-and-crazy guy known for obscene jokes and pop-culture
riffs.

"And yet," as Taylor notes, "Žižek, despite his frustrations, continues to put on a good show, albeit one quite different in demeanor from Lacan's." That is what makes the final images of Žižek! so interesting.

I don't want to give the surprise ending away. Suffice it to say that it involves a spiral staircase, and makes explicit reference to Vertigo, Alfred Hitchcock's great meditation on Freud's Beyond the Pleasure Principle. (Whether or not Hitchcock ever actually read Freud is sort of beside the point, here.) The scene also harkens back to earlier comments by Žižek -- and yet it really comes out of
left field.

Taylor says they improvised it at the very last moment of shooting. She calls the scene "fantastically head-scratching," and not just for the audience.

"Over the last few months," she says, "I have come up with all sorts of pseudo-theoretical justifications and interpretation of it, all the different layers of meaning and resonances with Žižek's work and life and the intersections of the two. But all of these, I must admit, were created
after the fact (après coup, as Lacan would say)."

So what are her theories? "I feel like I would be ruining the fun if I elaborated on them," she told me. "That is, after all, precisely what people are supposed to debate over a beer after seeing the movie."

For more on Žižek! -- including information about its availability and a clip from the film -- check out its Web site.


A Killing Concept

Inspired less by Philip Seymour Hoffman’s impressive turn in Capote than by the condition of my checking account, I have been considering the idea of turning out a true-crime book -- a lurid potboiler, but one fortified with a little cultural history, albeit of an eccentric kind. The idea has been stewing for a couple of months now. My working title is BTK and the Beatnik. Here’s the pitch.

In late August, The New York Times ran a short profile of Colin Wilson, the last surviving member of the Angry Young Men -- a literary group that emerged in Britain during the mid-1950s at just about the time Allen Ginsberg and Jack Kerouac were interrupting the presumed consensus of Eisenhower-era America. Kingsley Amis, Philip Larkin, and John Osborne were the Angries’ most prominent novelist, poet, and playwright, respectively. And Colin Wilson -- whose quasi-existentialist credo The Outsider appeared in May 1956, when he was 24 -- was taken up by the press as the Angry thinker.

"The Outsider," wrote Wilson, "is a man who cannot live in the comfortable, insulated world of the bourgeoisie, accepting what he sees and touches as reality. He sees too deep and too much, and what he sees is essentially chaos." He developed this theme through a series of commentaries on literary and philosophical texts, all handled with a certain vigorous confidence that impressed the reviewers very much.

As, indeed, did Wilson’s personal story, which was a publicist’s dream come true. Born to a working-class family, he had quit school at 16, taken odd jobs while reading his way through modern European literature, and camped out in a public park to save on rent as he wrote and studied at the library of the British Museum. Wilson was not shy about proclaiming his own genius. For several months, the media lent him a bullhorn to do it. He posed for photographs with his sleeping bag, and otherwise complied with his fans' desire that he be something like a cross between Albert Camus and James Dean.

The backlash was not long in coming. It started with his second book, Religion and the Rebel, which got savage notices when it appeared in 1957, despite being virtually indistinguishable from The Outsider in subject and method. “We are tired of the boy Colin,” as one literary journalist is supposed to have said at the time.

Roundly abused, though no whit abashed, he kept on writing, and has published dozens of novels and works of nonfiction over the intervening decades. The Outsider has never gone out of print in English; it developed a solid following in Arabic translation, and appeared a few years ago in Chinese.

The piece in The Times came in the wake of his recent autobiography, Dreaming to Some Purpose -- a book revealing that Wilson is still very confident of his own place as perhaps the greatest writer of our time. This is a minority opinion. The reviews his memoir got in the British press were savage. So far it has not received much attention in the United States. Wilson has a cult following here; and the few scholarly monographs on his work tend to have that cult-following feel.

Perhaps the most forceful claim for his importance was made by Joyce Carol Oates, who provided an introduction to the American edition of his science-fiction novel The Philosopher’s Stone (1969). Oates hailed Wilson for "consciously attempting to imagine a new image for man ... freed of ambiguity, irony, and the self-conscious narrowness of the imagination we have inherited from 19th century Romanticism."

Her praise seems to me a bit overstated. But I have a certain fondness for that novel, having discovered it during Christmas break while in high school. It set me off on a fascination with Wilson's work that seems, with hindsight, perfectly understandable. Adolescence is a good time to read The Outsider. For that matter, Wilson himself was barely out of it when he wrote the book. Although now a septuagenarian, the author displays the keen egomania of someone a quarter that age.

Now, just as The Times was running its profile of the last Angry Young Man, the sentencing hearing for Dennis Rader, the confessed BTK killer, was underway in Kansas. News accounts mentioned, usually in passing, his claims that the string of sadistic murders he committed over the years was the result of something he called "Factor X." He did not elaborate on the nature of Factor X, though reporters often noted that the killer saw himself as demonically possessed. (He also referred to having been dropped on his head as a child, which may have been one of Rader’s cold-blooded little jokes.)

But in a television interview, Rader indicated that Factor X, while mysterious, was also something in his control. "I used it," he said.

A jolting remark -- at least to anyone familiar with Colin Wilson's work. Over the years, Wilson has developed a whole battery of concepts (or at least of neologisms) to spell out his hunch that the Outsider has access to levels of consciousness not available to more conformist souls. Something he dubbed "Faculty X" has long been central to Wilson’s improvised psychological theories, as well as to his fiction. (The Philosopher’s Stone, which Oates liked so much, is all about Faculty X.)

As Wilson describes it, Faculty X is the opposite of the normal, dulled state of consciousness. It is our potential to grasp, with an incandescent brilliance and intensity of focus, the actuality of the world, including the reality of other times and places. "Our preconceptions, our fixed ideas about ourselves," as Wilson puts it, "means that we remain unaware of this power." We trudge along, not engaging the full power of our mental abilities.

Most of us have had similar insights, often while recovering from a bad cold. But the contrast between mental states hit Wilson like a bolt of lightning. In his recent memoir, he writes, "The basic aim of human evolution, I decided, is to achieve Faculty X."

A few artists are able to summon Faculty X at will. But so, in rather less creative form, can psychopathic killers. For that is the stranger side of Colin Wilson’s work -- the part overlooked by The Times, for example, which repeated the standard Wilsonian claim that he was a philosopher of optimism.

Cheerful as that may sound, a very large part of his work over the years has consisted of books about serial murderers. They, too, are Outsiders -- in revolt against "a decadent, frivolous society" that gives them no outlet for the development of Faculty X. Such an individual "feels no compunction in committing acts of violence," as Wilson explains, "because he feels only contempt for society and its values."

These quotations are from his book Order of Assassins: The Psychology of Murder (1972), but they might have been drawn from any of dozens of other titles. Beginning with the Jack the Ripper-esque character Austin Nunne in his first novel, Ritual in the Dark (1960), Wilson has populated his fiction with an array of what can only be called existentialist serial killers.

In these novels, the narrator is usually an alienated intellectual who sounds ... well, quite a bit like Colin Wilson does in his nonfiction books. The narrator will spend a few hundred pages tracking down a panty-fetishist sex killer, or something of the kind -- often developing a strong sense of identification with, or at least respect for, the murderer. There may be a moment when he recoils from the senselessness of the crimes. But there is never (oddly enough) any really credible expression of sympathy for the victims.

The tendency to see the artist and the criminal as figures dwelling outside the norms of society is an old one, of course; and we are now about 180 years downstream from Thomas De Quincey’s essay "On Murder Considered as One of the Fine Arts." But there is something particularly cold and morbid about Wilson's treatment of the theme. It calls to mind the comment film critic Anthony Lane made about Quentin Tarantino: "He knows everything about violence and nothing about suffering." It comes as little surprise to learn that a girlfriend from Wilson's bohemian days recoiled from one of his earliest efforts at fiction: “She later admitted,” he writes, “that it made her wonder if I was on the verge of becoming a homicidal maniac.”

So did BTK have Wilson’s philosophical ruminations in mind when he was “using” Factor X to commit a string of sadistic murders? Did he see himself as an Outsider – a tormented genius, expressing his contempt for (in Wilson’s phrase) “the comfortable, insulated world” of modernity?

Circumstantial evidence indicates that it is a lead worth pursuing. We know that Rader studied criminal justice administration at Wichita State University, receiving his B.A. in 1979. Wilson’s brand of pop-philosophizing on murder as a form of revolt (a manifestation of “man’s striving to become a god”) is certainly the kind of thing an adventurous professor might have used to stimulate class discussion.

And it would be extremely surprising to find that Rader never read Wilson’s work. Given the relatively small market for books devoted entirely to existential musings, Wilson has produced an incredible volume of true-crime writing over the years – beginning with his Encyclopedia of Murder (1961) and continuing through dozens of compilations of serial-killer lore, many available in the United States as rather trashy paperbacks.

The earliest messages Rader sent to police in the mid-1970s reveal disappointment at not getting sufficient press coverage. He even coined the nickname BTK to speed things along. Clearly this was someone with a degree of status anxiety about his role in serial-killing history. One imagines him turning the pages of Wilson’s pulp trilogy Written in Blood (1989) or the two volumes of The Killers Among Us (1995) – all published by Bantam in the U.S. – with some disappointment at not having made the finals.

Well, it’s not too late. We know from his memoirs that Colin Wilson has engaged in extensive correspondence with serial-killing Outsiders who have ended up behind bars. It seems like a matter of time before he turns out a book on BTK.

Unless, of course, I beat him to it. The key is to overcome the gray fog of everyday, dull consciousness by activating my dormant reserves of Faculty X. Fortunately it has never been necessary for me to kill anyone to manage this. Two large cups of French Roast will usually do the trick.   


Thinking at the Limits

The curtain rises on a domestic scene -- though not, the audience soon learns, a tranquil one. It is the apartment of the philosopher Louis Althusser and his wife Hélène Rytman, on an evening in November, a quarter century ago. The play in question, which opened last month in Paris, is called The Caïman. That’s an old bit of university slang referring to Althusser's job as the “director of studies” -- an instructor who helps students prepare for the final exam at the École Normale Supérieure, part of what might be called the French Ivy League.

The caïman whose apartment the audience has entered was, in his prime, one of the “master thinkers” of the day. In the mid-1960s, Althusser conducted an incredibly influential seminar that unleashed structuralist Marxism on the world. He played a somewhat pestiferous role within the French Communist Party, where he was spokesman for Mao-minded student radicals. And he served as tutor and advisor for generations of philosophers-in-training.

At Althusser’s funeral in 1990, Jacques Derrida recalled how, “beginning in 1952 ... the caïman received in his office the young student I then was.” One of the biographers of Michel Foucault (another of his pupils) describes Althusser as an aloof and mysterious figure, but also one known for his gentleness and tact. When a student turned in an essay, Althusser wrote his comments on a separate sheet of paper -- feeling that there would be something humiliating about defacing the original with his criticisms.

But everyone in the audience knows how Althusser’s evening at home with his wife in November 1980 will end. How could they not? And even if you know the story, it is still horrifying to read Althusser’s own account of it. In a memoir that appeared posthumously, he recalls coming out of a groggy state the next morning, and finding himself massaging Hélène’s neck, just as he had countless times in the course of their long marriage.

“Suddenly, I was terror-struck,” he wrote. “Her eyes stared interminably, and I noticed the tip of her tongue was showing between her teeth and lips, strange and still.” He ran to the École, screaming, “I’ve strangled Hélène!”

He was whisked away for psychiatric evaluation, which can’t have taken long: Althusser’s entire career had been conducted between spells of hospitalization for manic-depression. In one autobiographical fragment from the late 1970s -- presumably written while on a manic high -- he brags about sneaking aboard a nuclear submarine and taking it for a joy-ride when no one was looking. If ever there were reason to question legal guilt on grounds of insanity, the murder of Hélène Rytman would seem to qualify.

He underwent a long spell of psychiatric incarceration -- a plunge, as he later wrote, back into the darkness from which he had awakened that morning. In the late 1980s, after he was released, the philosopher could be seen wandering in the streets, announcing “I am the great Althusser!” to startled pedestrians.

It became the stuff of legend. In the early 1980s, as a student at the University of Texas at Austin, I heard what turns out to have been an apocryphal account of that morning. A small crowd of Althusser’s students, it was said, routinely gathered outside his apartment to greet him each day. When he emerged, disheveled and shrieking that he was a murderer, everyone laughed and clapped their hands. They thought (so the story went) that Althusser was clowning around.

That rumor probably says more about American attitudes towards French thinkers than it does about Althusser himself, of course. The murder has become a standard reference in some of the lesser skirmishes of the culture wars – with Hélène Rytman’s fate a sort of morbid punch-line.

Althusser’s philosophical work took as its starting point the need to question, and ultimately to dissolve, any notion that social structures and historical changes are the result of some basic human essence. Somewhat like Foucault, at least in this regard, he treated the idea of “man” as a kind of myth. Instead, Althusser conceived of history as “a process without a subject” – something operating in ways not quite available to consciousness. Various economic and linguistic structures interacted to “articulate” the various levels of life and experience.

Althusser called this perspective “theoretical anti-humanism.” And for anyone who loathes such thinking, the standard quip is that he practiced his anti-humanism at home.

That strikes me as being neither funny nor fair. At the risk of sounding like a pretty old-fashioned bourgeois humanist, I think you have to treat his ideas as ... well, ideas. Not necessarily as good ones, of course. (In his seminar, Althusser and his students undertook a laborious and ultimately preposterous effort to figure out when and how Marx became a Marxist, only to conclude that only a few of his works really qualified.)  But however you judge his writings, they make sense as part of a conversation that started long before Althusser entered the room -- one that will continue long after we are all dead.

One way to see his “theoretical anti-humanism,” for example, is as a retort to Jean-Paul Sartre’s “Existentialism is a Humanism” -- the lecture that drew standing-room-only crowds in 1945, at just about the time Althusser was resuming an academic career interrupted by the war. (The Germans held him as a POW for most of it.) It was the breeziest of Sartre’s introductions to his basic themes: We are free – deep down, and for good. That freedom may be unbearable at times. But it never goes away. No matter what, each individual is always radically responsible for whatever action and meaning is possible in a given circumstance.

“Man,” Sartre told his listeners, “is nothing else but what he makes of himself.” But that “nothing” is, after all, everything. “There is no universe other than a human universe, a universe of human subjectivity.”

For Althusser, this is all completely off track. It rests on the idea that individuals are atoms who create their own meaning – and that somehow then link up to form a society. A very different conception is evident in “Ideology and Ideological State Apparatuses,” a paper from 1970 that is about as close to a smash-hit, era-defining performance as Althusser ever got. Which is to say, not that close at all. But extracts are available in The Norton Anthology of Theory and Criticism, and passages have turned up in countless thousands of course packets in lit-crit and cultural studies, over the years.

For Althusser, it’s exactly backwards to start from the individual as a basic unit capable, through its own imagination and endeavor, of creating a world of meaning. On the contrary, there are societies that seek to reproduce themselves over time, not just by producing material goods (that too) but through imposing and enforcing order.

The police, military, and penal systems have an obvious role. Althusser calls them the Repressive State Apparatuses. But he’s much more interested in what he calls the Ideological State Apparatuses – the complex array of religious institutions, legal processes, communication systems, schools, etc. that surround us. And, in effect, create us. They give us the tools to make sense of the world. Most of all, the ISAs convey what the social order demands of us. And for anyone who doesn’t go along.... Well, that’s when the Repressive State Apparatuses might just step in to put you in line.

Why has this idea been so appealing to so many academics -- and for such a long time? Well, at the time, it tended to confirm the sense that you could effect radical social change via “the long march through the institutions.” By challenging how the Ideological State Apparatuses operated, it might be possible to shift the whole culture’s center of gravity. And Althusser placed special emphasis on educational institutions as among the most important ISAs in capitalist society.

Such was the theory. In practice, of course, the social order tends to push back -- and not necessarily through repression. A handful of non-academic activists became interested in Althusser for a while; perhaps some still are. But for the most part, his work ended up as a fairly nonthreatening commodity within the grand supermarket of American academic life.

The brand is so well-established, in fact, that the thinker’s later misfortunes are often dismissed with a quick change of subject. The effect is sometimes bizarre.

In 1996, Columbia University Press issued a volume by Althusser called Writings on Psychoanalysis: Freud and Lacan. Surely an appropriate occasion for some thoughtful essays on how the theorist’s own experience of mental illness might have come into play in his work, right? Evidently not: The book contains only a few very perfunctory references to “temporary insanity” and psychiatric care. Presumably Althusser’s editors will be more forthcoming next summer, with the publication by Verso of Philosophy of the Encounter: Later Writings, 1978-1987. The catalog text for the book refers to it as “his most prolific period.” But it was also one when much of his writing was done while hospitalized.

Is it possible to say anything about his work and his illness that doesn’t amount to a roundabout denunciation of Althusser? I think perhaps it is.

On one level, his theory about the Ideological State Apparatuses looks.... maybe not optimistic, exactly, but like a guide to transforming things. From this point of view, each individual is a point of convergence among several ISAs. In other words, each of us has assimilated various codes and rules about how things are supposed to be. And if there are movements underway challenging how the different ISAs operate, that might have a cumulative effect. If, say, feminists and gay rights activists are transforming the rules about how gender is constructed, that creates new ways of life. (Though not necessarily a social revolution, as Althusser wanted. Capitalism is plenty flexible if there’s a buck to be extracted.)

But that notion of the individual as the intersection of rules and messages also has a melancholy side. It somewhat resembles the experience of depression. If a person suffering from depression is aware of anything, it is this: The self is a product of established patterns.... fixed structures.... forces in the outside world that are definitive, and sometimes crushing.

Any Sartrean talk of “radical freedom” makes no sense whatever to anyone in that condition – which is, rather, a state of radical loss. And as the German poet Hans Magnus Enzensberger puts it in a recent essay, the most extreme “radical loser” may find the only transcendence in an act of violence.

“He can explode at any moment,” writes Enzensberger. “This is the only solution to his problem that he can imagine: a worsening of the evil conditions under which he suffers.... At last, he is master over life and death.”

Is that what happened in Althusser’s apartment, 25 years ago? That, or something like it. 
