For better and for worse, the American reception of contemporary French thought has often followed a script that frames everything in terms of generational shifts. Lately, that has usually meant baby-boomer narcissism -- as if the youngsters of '68 don't have enough cultural mirrors already. Someone like Bernard-Henri Lévy, the roving playboy philosopher, lends himself to such branding without reserve. Most of his thinking is adequately summed up by a thumbnail biography -- something like, "BHL was a young Maoist radical in 1968, but then he denounced totalitarianism, and started wearing his shirts unbuttoned, and the French left has never recovered."
Nor are American academics altogether immune to such prepackaged blendings of theory and lifestyle. Hey, you -- the Foucauldian with the leather jacket that doesn't fit anymore.... Yeah, well, you're complicit too.
But there are thinkers who don't really follow the standard scripts very well, and Pierre Rosanvallon is one of them. Democracy Past and Future, the selection of his writings just published by Columbia University Press, provides a long overdue introduction to a figure who defies both sound bites and the familiar academic division of labor. Born in 1948, he spent much of the 1970s as a sort of thinker-in-residence for a major trade union, the Confédération Française Démocratique du Travail, for which he organized seminars and conferences seeking to create a non-Marxist "second left" within the Socialist Party. He emerged as a theoretical voice of the autogestion (self-management) movement. His continuing work on the problem of democracy was honored in 2001 when he became a professor at the Collège de France, where Rosanvallon lectures on the field he calls "the philosophical history of the political."
Rosanvallon has written about the welfare state. Still, he isn't really engaged in political science. He closely studies classical works in political philosophy -- but in a way that doesn't quite seem like intellectual history, since he's trying to use the ideas as much as analyze them. He has published a study of the emergence of universal suffrage that draws on social history. Yet his overall project -- that of defining the essence of democracy -- is quite distinct from that of most social historians. At the same time (and making things all the more complicated) he doesn't do the kind of normative political philosophy one now associates with John Rawls or Jürgen Habermas.
Intrigued by a short intellectual autobiography that Rosanvallon presented at a conference a few years ago, I was glad to see the Columbia volume, which offers a thoughtful cross-section of texts from the past three decades. The editor, Samuel Moyn, is an assistant professor of history at Columbia. He answered my questions on Rosanvallon by e-mail.
Q: Rosanvallon is of the same generation as BHL. They sometimes get lumped together. Is that inevitable? Is it misleading?
A: They are really figures of a different caliber and significance, though you are right to suggest that they lived through the same pivotal moment. Even when he first emerged, Bernard-Henri Lévy faced doubts that he mattered, and a suspicion that he had fabricated his own success through media savvy. One famous thinker asked whether the "new philosophy" that BHL championed was either new or philosophy; and Cornelius Castoriadis attacked BHL and others as "diversionists." Yet BHL drew on some of the same figures Rosanvallon did -- Claude Lefort for example -- in formulating his critique of Stalinist totalitarianism. But Lefort, like Castoriadis and Rosanvallon himself, regretted the trivialization that BHL's meteoric rise to prominence involved.
So the issue is what the reduction of the era to the "new philosophy" risks missing. In retrospect, there is a great tragedy in the fact that BHL and others constructed the "antitotalitarian moment" (as that pivotal era in the late 1970s is called) in a way that gave the impression that a sententious "ethics" and moral vigilance were the simple solution to the failures of utopian politics. And of course BHL managed to convince some people -- though chiefly in this country, if the reception of his recent book is any evidence -- that he incarnated the very "French intellectual" whose past excesses he often denounced.
In the process, other visions of the past and future of the left were ignored. The reception was garbled -- but it is always possible to undo old mistakes. I see the philosophy of democracy Rosanvallon is developing as neither specifically French nor of a past era. At the same time, the goal is not to substitute a true philosopher for a false guru. The point is to use challenging foreign thinkers to come to grips with homegrown difficulties.
Q: Rosanvallon's work doesn't fit very well into some of the familiar disciplinary grids. One advantage of being at the Collège de France is that you get to name your own field, which he calls "the philosophical history of the political." But where would he belong in terms of the academic terrain here?
A: You're right. It's plausible to see him as a trespasser across the various disciplinary boundaries. If that fact makes his work of potential interest to a great many people -- in philosophy, politics, sociology, and history -- it also means that readers might have to struggle to see that the protocols of their own disciplines may not exhaust all possible ways of studying their questions.
But it is not as if there have not been significant interventions in the past -- from Max Weber for example, or Michel Foucault in living memory -- that were recognized as doing something relevant to lots of different existing inquiries. In fact, that point suggests that it may miss the point to try to locate such figures on disciplinary maps that are ordinarily so useful. If I had to sum up briefly what Rosanvallon is doing as an intellectual project, I would say that the tradition of which he's a part -- which includes his teacher Lefort as well as some colleagues like Marcel Gauchet and others -- is trying to replace Marxism with a convincing alternative social theory.
Most people write about Marxism as a political program, and of course any alternative to it will also have programmatic implications. But Marxism exercised such appeal because it was also an explanatory theory, one that claimed, by fusing the disciplines, to make a chaotic modern history -- and perhaps history as a whole -- intelligible. Its collapse, as Lefort's own teacher Maurice Merleau-Ponty clearly saw, threatened to leave confusion in its wake, unless some alternative to it is available. (Recall Merleau-Ponty's famous proclamation: "Marxism is not a philosophy of history; it is the philosophy of history, and to renounce it is to dig the grave of reason in history.")
Rosanvallon seems to move about the disciplines because, along with others in the same school, he has been trying to put together a total social theory that would integrate all the aspects of experience into a convincing story. They call the new overall framework they propose "the political," and Rosanvallon personally has focused on making sense of democratic modernity in all its facets. Almost no one I know about in the Anglo-American world has taken up so ambitious and forbidding a transdisciplinary task, but it is a highly important project.
Q: As the title of your collection neatly sums up, Rosanvallon's definitive preoccupation is democracy. But he's not just giving two cheers for it, or drawing up calls for more of it. Nor is his approach, so far as I can tell, either descriptive or prescriptive. So what does that leave for a philosopher to do?
A: At the core of his conception of democracy, there is a definitive problem: The new modern sovereign (the "people" who now rule) is impossible to identify or locate with any assurance. Democracy is undoubtedly a liberatory event -- a happy tale of the death of kings. But it must also face the sadly intractable problem of what it means to replace them.
Of course, the history of political theory contains many proposals for discovering the general will. Yet empirical political scientists have long insisted that "the people" do not preexist the procedures chosen for knowing their will. In different words, "the people" is not a naturally occurring object. Rosanvallon's work is, in one way or another, always about this central modern paradox: If, as the U.S. Constitution for instance says, "We the people" are now in charge, it is nevertheless true that we the people have never existed together in one place, living at one time, speaking with one voice. Who, then, is to finally say who "we" are?
The point may seem either abstract or trivial. But the power of Rosanvallon's work comes from his documentation of the ways -- sometimes blatant and sometimes subtle -- that much of the course and many of the dilemmas of modern history can be read through the lens of this paradox. For example, the large options in politics can also be understood as rival answers to the impossible quandary or permanent enigma of the new ruler's identity. Individual politicians claim special access to the popular will either because they might somehow channel what everyone wants or because they think that a rational elite possesses ways of knowing what the elusive sovereign would or should want. Democracy has also been the story, of course, of competing interpretations of what processes or devices are most likely to lead to results approximating the sovereign will.
Recently, Rosanvallon has begun to add to this central story by suggesting that there have always been -- and increasingly now are -- lots of ways outside electoral representation that the people can manifest their will, during the same era that the very idea that there exists a coherent people with a single will has entered a profound crisis.
One of the more potent implications of Rosanvallon's premise that there is no right answer to the question of the people's identity is that political study has to be conceptual but also historical. Basic concepts like the people might suggest a range of possible ways for the sovereign will to be interpreted, but only historical study can uncover the rich variety of actual responses to the difficulty.
The point, Rosanvallon thinks, is especially relevant to political theorists, who often believe they can, simply by thinking hard about what democracy must mean, finally emerge with its true model, whether based on a hypothetical contract, an ideal of deliberation, or something else. But the premise also means that democracy's most basic question is not going to go away, even if there are better and worse responses.
Q: Now to consider the relationship between Rosanvallon's work and political reality "on the ground" right now. Let's start with a domestic topic: the debate over immigration. Or more accurately, the debate over the status of people who are now part of the U.S. economy, but are effectively outside the polity. I'm not asking "what would Rosanvallon do?" here, but rather wondering: Does his work shed any light on the situation? What kinds of questions or points would Rosanvallonists (assuming there are any) be likely to raise in the discussion?
A: It's fair to ask how such an approach might help in analyzing contemporary problems. But his approach always insists on restoring the burning issues of the day to a long historical perspective, and on relating them to democracy's foundational difficulties. Without pretending to guess what Rosanvallon might say about America's recent debate, I might offer a couple of suggestions about how his analysis might begin.
The controversy over immigrants is so passionate, this approach might begin by arguing, not simply because of economic and logistical concerns but also because it reopens (though it was never closed!) the question of the identity of the people in a democracy. The challenge immigrants pose, after all, is not one of inclusion simply in a cultural sense, as Samuel Huntington recently contended, but also and more deeply in a conceptual sense.
In a fascinating chapter of his longest work, on the history of suffrage, Rosanvallon takes up the history of French colonialism, including its immigrant aftermath. There he connects different historical experiences of immigrant inclusion to the conceptual question of what the criteria for exclusion are, arguing that if democracies do not come to a clear resolution about who is inside and outside their polity, they will vacillate between two unsatisfactory syndromes. One is the "liberal" response of taking mere presence on the ground as a proxy for citizenship, falsely converting a political problem into one of future social integration. The other is the "conservative" response of conceptualizing exclusion, having failed to resolve its meaning politically, in the false terms of cultural, religious, or even racial heterogeneity. Both responses avoid the real issue of the political boundaries of the people.
But Rosanvallon's more recent work allows for another way of looking at the immigration debate. In a new book coming out in French in the fall entitled "Counterdemocracy," whose findings are sketched in a preliminary and summary fashion in the fascinating postscript to the English-language collection, Rosanvallon tries to understand the proliferation of ways that popular expression occurs outside the classical parliamentary conception of representation. There, he notes that immigration is one of several issues around which historically "the people" have manifested their search for extraparliamentary voice.
For Rosanvallon, the point here is not so much to condemn populist backlash, as if it would help much simply to decry the breakdown of congressional lawmaking under pressure. Rather, one might have to begin by contemplating the historical emergence of a new form of democracy -- what he calls unpolitical democracy -- that often crystallizes today around such a hot-button topic as the status of immigrants. This reframing doesn't solve the problem but might help see that its details turn out to be implicated in a general transformation of how democracy works.
Q: OK, now on to foreign policy. In some circles, the invasion of Iraq was justified as antitotalitarianism in action, and as the first stage of a process of building democracy. (Such are the beauty and inspiration of high ideals.) Does Rosanvallon's work lend itself to support for "regime change" via military means? Has he written anything about "nation building"?
A: This is a very important question. I write in my introduction to the collection about the contemporary uses of antitotalitarianism, and I do so mainly to criticize the recent drift in uses of that concept.
Of course, when the critique of totalitarianism activated a generation, it was the Soviet Union above all that drew their fire. But their critique was always understood to have its most salient implications for the imagination of reform at home, and especially for the renewal of the left. This is what has changed recently, in works of those "liberal hawks," like Peter Beinart and Paul Berman, who made themselves apologists for the invasion of Iraq in the name of antitotalitarian values. Not only did they eviscerate the theoretical substance on which the earlier critique of totalitarianism drew -- from the work of philosophers like Hannah Arendt and Claude Lefort among others -- but they wholly externalized the totalitarian threat so that their critique of it no longer had any connection to a democratic program. It became purely a rhetoric for the overthrow of enemies rather than a program for the creation or reform of democracies. In the updated approach, what democracy is does not count as a problem.
It is clear that this ideological development, with all of its real-world consequences, has spelled the end of the antitotalitarian coalition that came together across borders thirty years ago, uniting the European left (Eastern and Western) with American liberalism. That the attempt to update and externalize that project had failed became obvious even before the Iraq adventure came to grief -- the project garnered too few allies internationally.
Now it is perfectly true that the dissolution of this consensus leaves open the problem of how democrats should think about foreign policy, once spreading it evangelistically has been unmasked as delusional or imperialistic. A few passages in the collection suggest that Rosanvallon thinks the way to democratize the world is through democratization of existing democracies -- the reinvigoration of troubled democracies is prior to the project of their externalization and duplication. Clearly this response will not satisfy anyone who believes that the main problem in the world is democracy's failure to take root everywhere, rather than its profound difficulties where it already is. But clarifying the history and present of democracy inside is of undoubted relevance to its future outside.
Q: There are some very striking passages in the book that discuss the seeming eclipse of the political now. More is involved than the withdrawal from civic participation into a privatized existence. (At the same time, that's certainly part of it.) Does Rosanvallon provide an account of how this hollowing-out of democracy has come to pass? Can it be reversed? And would its reversal necessarily be a good thing?
A: One of the most typical responses to the apparent rise of political apathy in recent decades has been nostalgia for some prior society -- classical republics or early America are often cited -- that is supposed to have featured robust civic engagement. The fashion of "republicanism" in political theory, from Arendt to Michael Sandel or Quentin Skinner, is a good example. But Rosanvallon observes that the deep explanation for what is happening is a collapse of the model of democracy based on a powerful will.
The suggestion here is that the will of the people is not simply hard to locate or identify; its very existence as the foundation of democratic politics has become hard to credit anymore. The challenge is to respond by taking this transformation as the starting point of the analysis. And there appears to be no return to what has been lost.
But in his new work, anticipated in the postscript, Rosanvallon shows that the diagnosis may be faulty anyway. What is really happening, he suggests, is not apathy towards or retreat from politics in a simple sense, but the rise of new forms of democracy -- or counterdemocracy -- outside the familiar model of participation and involvement. New forms seeking expression have multiplied, through an explosion of devices, even if they may seem an affront to politics as it has ordinarily been conceptualized.
Rosanvallon's current theory is devoted to the project of putting the multiplication of representative mechanisms -- ones that do not fit on existing diagrams of power -- into one picture. But the goal, he says, is not just to make sense of them but also to find a way for analysis to lead to reform. As one of Rosanvallon's countrymen and predecessors, Alexis de Tocqueville, might have put it: Democracy still requires a new political science, one that can take it by the hand and help to sanctify its striving.
For further reading: Professor Moyn is co-author (with Andrew Jainchill of the University of California at Berkeley) of an extensive analysis of the sources and inner tensions of Rosanvallon's thought on democracy, available online. And in an essay appearing on the Open Democracy Web site in 2004, Rosanvallon reflected on globalization, terrorism, and the war in Iraq.
The table sits at the front of the bookshop, near the door. That way it will get maximum exposure as people come and go. "If you enjoyed The Da Vinci Code," the sign over it says, "you might also like..." The store is part of a national chain, meaning there are hundreds of these tables around the country. Thousands, even.
And yet the display, however eye-catching, is by no means a triumph of mass-marketing genius. The bookseller is denying itself a chance to appeal to an enormous pool of consumer dollars. I'm referring to all the people who haven’t read Dan Brown’s globe-bestriding best-seller -- and have no intention of seeing the new movie -- yet are already sick to death of the whole phenomenon.
"If you never want to hear about The Da Vinci Code again," the sign could say, "you might like...."
The book’s historical thesis (if that is the word for it) has become the cultural equivalent of e-mail spam. You just can’t keep it out. The premise sounds more preposterous than thrilling: Leonardo da Vinci was the head of a secret society (with connections to the Knights Templar) that guarded the hidden knowledge that Mary Magdalene fled Jerusalem, carrying Jesus’s child, and settled in France....
All of this is packaged as a contribution to the revival of feminine spirituality. Which is, in itself, enough to make the jaw drop, at least for anyone with a clue about the actual roots of this little bit of esoteric hokum.
Fantasies about the divine bloodlines of certain aristocratic families are a staple of the extreme right wing in Europe. (The adherents usually also possess "secret knowledge" about Jewish bankers.) And anyone contending that the Knights Templar were a major factor behind the scenes of world history will turn out to be a simpleton, a lunatic, or some blend of the two -- unless, of course, it’s Umberto Eco goofing on the whole thing, as he did in Foucault’s Pendulum.
It's not that Dan Brown is writing crypto-fascist novels. He just has really bad taste in crackpot theories. (Unlike Eco, who has good taste in crackpot theories.)
And Leonardo doesn’t need the publicity -- whereas my man Athanasius Kircher, the brilliant and altogether improbable Jesuit polymath, does.
Everybody has heard of the Italian painter and inventor. As universal geniuses go, he is definitely on the A list. Yet we Kircher enthusiasts feel duty-bound to point out that Leonardo started a lot more projects than he ever finished -- and that some of his bright ideas wouldn’t have worked.
Sure, Leonardo studied birds in order to design a flying machine. But if you built it and jumped off the side of a mountain, they’d be scraping you off the bottom of the valley. Of course very few people could have painted "Mona Lisa." But hell, anybody can come up with a device permitting you to plunge to your death while waving your arms.
Why should he get all the press, while Athanasius Kircher remains in relative obscurity? Kircher has just as much claim to the title of universal genius. Born in Germany in 1602, he was the son of a gentleman-scholar with an impressive library (most of it destroyed during the Thirty Years’ War). By the time Kircher entered the Jesuit order at the age of 16, he had already become as broadly informed as someone twice his age.
He joined the faculty of the Collegio Romano in 1634; his title was Professor of Mathematics. But by no means is that a good indicator of his range of scholarly accomplishments. He studied everything. Thanks to his access to the network of Jesuit scholars, Kircher kept in touch with the latest discoveries taking place in the most far-flung parts of the world. And a constant stream of learned visitors to Rome came to see his museum at the Vatican, where Kircher exhibited curious items such as fossils and stuffed wildlife alongside his own inventions.
Leonardo kept most of his more interesting thoughts hidden in notebooks. By contrast, Kircher was all about voluminous publication. His work appeared in dozens of lavishly illustrated folios, the publication of which was often funded by wealthy and powerful figures. The word "generalist" is much too feeble for someone like Kircher. He prepared dictionaries, studied the effects of earthquakes, theorized about musical acoustics, and engineered various robot-like devices that startled tourists with their lifelike motions.
He was also enthusiastic about the microscope. In a book published in 1646, Kircher mentioned having discovered “wonders....in the verminous blood of those sick with fever, and numberless other facts not known or understood by a single physician.” He speculated that very small animals “with a vast number and variety of motions, colors, and almost invisible parts” might float up from “the putrid vapors” emitted by sick people or corpses.
There has long been a scholarly debate over whether or not Kircher deserves recognition as the inventor of the germ theory of disease. True, he seems not to have had a very clear notion of what was involved in experimentation (then a new idea). And he threw off his idea about the very tiny animals almost in passing, rather than developing it in a rigorous manner. But then again, Kircher was a busy guy. He managed to stay on the good side of three popes, while some of his colleagues in the sciences had trouble keeping the good will of even one.

Among Kircher’s passions was the study of ancient Egypt. As a young man, he read an account of the hieroglyphics that presented the idea that they were decorative inscriptions -- the equivalent of stone wallpaper, perhaps. (After all, they looked like tiny pictures.) This struck him as unlikely. Kircher suspected the hieroglyphics were actually a language of some kind, setting himself the task of figuring out how to read it.
And he made great progress in this project -- albeit in the wrong direction. He decided that the symbols were somehow related to the writing system of the Chinese, which he did know how to read, more or less. (Drawing on correspondence from his missionary colleagues abroad, Kircher prepared the first book on Chinese vocabulary published in Europe.)
Only in the 19th century was Jean-François Champollion able to solve the mystery, thanks to the discovery of the Rosetta Stone. But the French scholar gave the old Jesuit his due for his pioneering (if misguided) work. In presenting his speculations, Kircher had also provided reliable transcriptions of the hieroglyphic texts. They were valuable even if his guesses about their meaning were off.
Always at the back of Kircher’s mind, I suspect, was the story from Genesis about the Tower of Babel. (It was the subject of one of his books.) As a good Jesuit, he was doubtless confident of belonging to the one true faith -- but at the same time, he noticed parallels between the Bible and religious stories from around the world. There were various trinities of deities, for example. As a gifted philologist, he noticed the similarities among different languages.
So it stood to reason that the seeming multiplicity of cultures was actually rather superficial. At most, it reflected the confusion of tongues following God’s expressed displeasure about that big architectural project. Deep down, even the pagan and barbarous peoples of the world had some rough approximation of the true faith.
That sounds ecumenical and cosmopolitan enough. It was also something like a blueprint for conquest: Missionaries would presumably use this basic similarity as a way to "correct" the beliefs of those they were proselytizing.
But I suspect there is another level of meaning to his musings. Kircher’s research pointed to the fundamental unity of the world. The various scholarly disciplines were, in effect, so many fragments of the Tower of Babel. He was trying to piece them together. (A risky venture, given the precedent.)
He was not content merely to speculate. Kircher tried to make a practical application of his theories by creating a "universal polygraphy" -- that is, a system of writing that would permit communication across linguistic barriers. It wasn’t an artificial language like Esperanto, exactly, but rather something like a very low-tech translation software. It would allow you to break a sentence in one language down to units, which were to be represented by symbols. Then someone who knew a different language could decode the message.
Both parties needed access to the key -- basically, a set of tables giving the meaning of Kircher’s "polygraphic" symbols. And the technique would place a premium on simple, clear expression. In any case, it would certainly make international communication faster and easier.
Unless (that is) the key were kept secret. Here, Kircher seems to have had a brilliant afterthought. The same tool allowing for speedy, transparent exchange could (with some minor adjustments) also be used to conceal the meaning of a message from prying eyes. He took this insight one step further -- working out a technique for embedding a secret message in what might otherwise look like a banal letter. Only the recipient -- provided he knew how to crack the code -- would be able to extract its hidden meaning.
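Kircher's actual tables are long out of reach, but the scheme described above -- shared numbered symbols standing in for words, with each correspondent holding a key in his own tongue -- can be sketched in a few lines of modern code. Everything here (the words, the numbers, the function names) is invented for illustration, not drawn from Kircher's books.

```python
# Toy "polygraphy": two tiny keys mapping words in different
# languages onto the same shared numeric symbols. Kircher's real
# tables held far more entries; these four are stand-ins.
ENGLISH_KEY = {"peace": 1, "be": 2, "with": 3, "you": 4}
LATIN_KEY = {"pax": 1, "sit": 2, "cum": 3, "te": 4}

def encode(words, key):
    """Translate a list of words into the shared numeric symbols."""
    return [key[word] for word in words]

def decode(symbols, key):
    """Recover the message in the reader's own language."""
    reverse = {symbol: word for word, symbol in key.items()}
    return [reverse[symbol] for symbol in symbols]

# An English writer encodes; a Latin reader decodes the same numbers.
symbols = encode(["peace", "be", "with", "you"], ENGLISH_KEY)
print(symbols)                     # [1, 2, 3, 4]
print(decode(symbols, LATIN_KEY))  # ['pax', 'sit', 'cum', 'te']
```

Withhold the key tables, and the same stream of numbers becomes a cipher opaque to any interceptor -- which is exactly the afterthought Kircher seized on.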
Even before his death in 1680, there were those who mocked Athanasius Kircher for his vanity, for his gullibility (he practiced alchemy), and for the tendency of his books to wander around their subjects in a rather garrulous and self-indulgent manner. Nor did the passing of time and fashion treat him well. By the 18th century, scholars knew that the path to exact knowledge involved specialization. The wild and woolly encyclopedism of Athanasius Kircher was definitely a thing of the past.
Some of the disdain may have been envy. Kircher was the embodiment of untamed curiosity, and it is pretty obvious that he was having a very good time. Even granting detractors all their points, it is hard not to be somewhat in awe of the man. Someone who could invent microbiology, multiculturalism, and encryption technology (and in the 17th century no less) at least deserves to be on a T-shirt.
But no! All anybody wants to talk about is da Vinci. (Or rather, a bogus story about him that is the hermeneutic equivalent of putting "The Last Supper" on black velvet.)
Well, if you can’t beat 'em.... Maybe it's time for a trashy historical thriller that will give Kircher his due. So here goes:
After reading this column, Tom Hanks rushes off to the Vatican archives and finds proof that Kircher used his "universal polygraphy" to embed secret messages in the artwork for his gorgeously illustrated books.
But that’s not all. By cracking the code, he finds a cure for avian flu. Kircher had recognized it as a long-term menace, based on a comment by a Jesuit missionary. (We learn all this in flashbacks. I see Philip Seymour Hoffman as Athanasius Kircher.)
Well, it's a start, anyway. And fair warning to Dan Brown. Help yourself to this plot and I will see you in court. It might be a terrible idea, but clearly that hasn't stopped you before.
Tomorrow night at a church in London, there will be a gathering of several hundred people to celebrate the launch of "The Euston Manifesto" -- a short document in which one sector of the British and American left declares itself to be in favor of pluralist and secular democracy, and against blowing people up for the glory of Allah.
The Eustonians also support open-source software. (I have read the document a few times now but am still not sure how that one got in there. It seems like an afterthought.)
More to the point, the Eustonians promise not to ask too many questions -- nor any really embarrassing ones -- about how we got into Iraq. The important thing, now, is that it all end well. Which is to say, that the occupation help build a new Iraq: a place of secular, pluralist democracy, where people do not blow each other up for the glory of Allah.
Suppose that a civic-minded person -- a secular humanist, let's say, and one fond of Linux -- takes a closer look at the manifesto. Such a reader will expect the document to discuss the question of means and ends. This might be addressed on the ethical plane, at some level of abstraction. Or it might be handled with a wonky attention to policy detail. In any case, the presumed reader (who is nothing if not well-meaning) will certainly want to know how Eustonian principles are to be realized in the real world. In the case of Iraq, for example, there is the problem of getting from the absolutely disastrous status quo to the brilliant future so loudly hailed.
Many of the signatories of the manifesto are, or until recently were, some variety or other of Marxist. Its main author, for example, is Norman Geras, a professor emeritus of government at the University of Manchester. His work includes Literature of Revolution, a volume of astute essays on Leon Trotsky and Rosa Luxemburg. (Full disclosure: Geras and I once belonged to the same worldwide revolutionary socialist organization, the United Secretariat of the Fourth International, and probably both choke up a little when singing “The Red Flag”).
Surely, then, the Euston Manifesto will bear at least some resemblance to the one written by a certain unemployed German doctor of philosophy in 1848? That is, it can be expected to provide a long-term strategic conception of how the world reached its current situation (“The history of all hitherto existing society is a history of class struggles”). And it will identify the forces in society that have emerged to transform it (“Workers of the world unite!”). And from this rigorous conceptual structure, the document can then deduce some appropriate shorter-term tactics. In The Communist Manifesto, for example, Marx and Engels pointed to universal suffrage and a progressive income tax as mighty strides forward towards the destruction of capitalism.
OK, so the proposals might not work out as planned.... Hindsight is 20-20. But a manifesto -- to be worth anyone’s time, let alone signature -- will, of course, be concrete. At the event in London tomorrow night, the comrades will rally. Surely they would never settle for broad and bland appeals to high ideals, rendered in language slightly less inspiring than the Cub Scout oath?
Well, judge for yourself. “The Euston Manifesto” was actually unveiled in April, when it was first published online. It has an official Web site. The inspiration for it had come during a meeting at a pub near the Euston stop on the London Underground. (Hence the name.) The document has been debated and denounced at great and redundant length in the left-wing blogosphere. So the fact that the event this week in London is being described by the Eustonians as a “launch” is puzzling, at least at first. But when you realize what a rhetorical drubbing the manifesto has taken, the need for a public gathering is easier to understand. The Eustonians want to show that their heads are bloody but unbowed, etc.
The most cogent arguments against the manifesto have already been made. In April, Marc Mulholland, a historian who is a fellow at Oxford University, presented a series of pointed criticisms at his blog that seemed to take the Eustonian principles more seriously than the manifesto itself did. “Why should we expect pluralist states to foster the spread of democratic government?” he asked. “How can we audit their contribution to this universal ideal? What mechanisms ensure the coincidence of state real politick and liberal internationalism?”
And D.D. Guttenplan -- the London correspondent for “The Nation” and producer of a documentary called Edward Said: The Last Interview -- weighed in with an article in The Guardian accusing the Eustonians of, in effect, staging a historical reenactment of battle scenes from the Cold War.
In passing, Guttenplan wrote of the manifesto that “every word in it is a lie” – a bit of hyperbole with historical overtones probably lost on his British readers. (In a memorable denunciation -- and one that prompted a lawsuit -- of sometime Communist sympathizer Lillian Hellman’s work, Mary McCarthy said: “Every word she writes is a lie, including ‘and’ and ‘the.’”) Guttenplan tells me that he now considers his remark “a bit intemperate” yet still calls the manifesto “that bastard child of senescent sociology and the laptop bombardiers.”
Mulholland performed a kind of immanent critique of the Eustonians’ liberal-humanitarian proclamations. That is, he held their rhetoric up against their concepts -- and found the manifesto wanting no matter how you looked at it.
For Guttenplan, the manifesto makes more sense as a case of political bait-and-switch. “The political glue holding these folks together,” he told me, “was a kind of Zionism that dare not speak its name, in which anti-Semitism was the only racism worth getting excited about, and opposition to any kind of practical pressure on Israel or its UK supporters/defenders the only program that got these folks up from their laptops. Personally I find that both sneaky and, as my late mother would say, bad for the Jews.” (Complex irony alert! Guttenplan himself is Jewish.)
The liberal-internationalist case for military intervention in Iraq has recently been hashed out at length -- and in all of its disconcertingly belated moral passion and geopolitical irrelevance -- by the contributors to A Matter of Principle: Humanitarian Arguments for War in Iraq, published last year by the University of California Press. The editor of that volume, Thomas Cushman, is a professor of sociology at Wellesley College, and a member of the editorial board of the online journal Democratiya -- as is Norman Geras, who drafted the Euston Manifesto.
Many of the contributions to the book and the journal are intelligently argued. They are worth the attention even -- and perhaps especially -- of someone opposed to the war. For a whole wing of the left, of course, to admit that one’s opponents might be capable of arguments (rather than rationalizations) is already a sign of apostasy. But I’ll take my chances. After all, you can only listen to Noam Chomsky blame every problem in the world on American corporations just so many times. It’s good to stretch your mental legs every so often, and go wandering off to see how people think on the other side of the barricades.
That said, reading the Euston Manifesto has proven remarkably unrewarding -- even downright irritating. It is not a matter of profound disagreements. (I am, broadly speaking, in favor of pluralist and secular democracy, and against blowing people up for the glory of Allah.) But the Eustonians seem to be issuing blank moral checks for whatever excellent adventures George Bush and Tony Blair decide to undertake.
They call for supporting the reconstruction of Iraq “rather than picking through the rubble of the arguments over intervention.” The systematic campaign of disinformation and phony diplomacy engineered over the course of two years preceding the invasion, then, is to be forgotten. It’s hard to imagine a more explicit call for intellectual irresponsibility. Or, for that matter, a less adequate metaphorical image. Anyone upset by “the rubble of the arguments over intervention” is definitely facing the wrong crater.
The Eustonians also seem perfectly indifferent to the cumulative damage being done to the very fiber of democracy itself. This summer’s issue of Bookforum contains a few poems by Guantanamo Bay detainees -- part of a much larger body of work confiscated by the military. As a lawyer for the detainees notes, a poem containing the line “Forgive me, my dear wife” was immediately classified as an attempt to communicate with the outside.
It is hard to imagine that this sort of thing really advances the Global War on Terror, or whatever we’re calling it now. But it is not without consequences. It destroys what it pretends to protect.
As I was musing over all of this, a friend pointed out a conspicuous absence from the list of signatories to the manifesto: Todd Gitlin, a professor of sociology and journalism at Columbia University. His book The Intellectuals and the Flag, published earlier this year by Columbia University Press, defends the idea of left-wing American patriotism with a frank interest “in the necessary task of defeating the jihadist enemy.”
This would seem to put him in the Eustonian camp, yet he did not endorse the manifesto. Why not? I contacted him by e-mail to ask. “I recognize a shoddy piece of intellectual patchwork when I see one,” Gitlin responded.
He cites a passage referring to the overthrow of Saddam Hussein as “a liberation of the Iraqi people." A fine thing, to be sure. The sight of a humiliated dictator is good for the soul. “But the resulting carnage is scarcely worthy of the term ‘liberation,’” Gitlin told me. “I'm leery of the euphemism.”
Humanitarian interventionism needs an element of realist calculation. “The duty of ‘intervention and rescue’ when a state commits appalling atrocities,” he continued, “must be tempered by a hard-headed assessment of what is attainable and what are the reasonably foreseeable results of intervention. The document is cavalier about the ease of riding to the rescue. So while I support the lion's and lioness's share of the document's principles, I find it disturbingly, well, utopian. It lacks a sense of the tragic. I have not foregone the forced innocence of the anti-American left only to sign up with another variety of rigid, forced innocence.”
But in the final analysis, there was something else bothersome about the manifesto -- something I couldn’t quite put a finger on, for a while. A vague dissatisfaction, a feeling of blurry inconsequentiality....
Then it suddenly came into focus: The manifesto did not seem like the product of a real movement, nor the founding document of a new organization – nor anything, really, but a proclamation of dissatisfaction by people in an Internet-based transatlantic social network.
I dropped Norman Geras a line, asking about the virtuality of the phenomenon. Aren’t the Eustonians doomed to a kind of perpetual and constitutive blogginess?
“It's true that the manifesto is not seen by us as the rallying point for a particular organization,” Geras wrote back. “But it is seen as a rallying point nonetheless - as a focus for debate on the liberal-left, and for initiatives that might follow from that. The focus for debate part has already happened: there's been an enormous response to the manifesto and not only on the internet, but with significant press coverage as well. The venue for the launch meeting had to be changed because we ran out of tickets so fast for the original venue. So this isn't just a ‘virtual’ affair.”
The question from Lenin’s pamphlet comes up: What is to be done? “I'm not going to try to predict where or how far it will go,” says Geras. “One step at a time. But we already have more than 1,500 signatories and that means a lot of people in touch with us and interested in what the manifesto is saying. After the launch, we'll see what we want to do next in the way of forums, conferences, campaigns.”
Perhaps frustration with the document is misplaced? Something better might yet emerge -- once well-meaning people see the limits of the good intentions they have endorsed. You never know. But for now, with only the text to go by, it is hard to shake a suspicion that the Euston Manifesto owes less to Marx than to MySpace.
The hurried patron spying Why Truth Matters (Continuum) on the new arrivals shelf of a library may assume that it is yet another denunciation of the Republicans. New books defending the “reality-based community” are already thick on the ground -- and the publishers' fall catalogs swarm with fresh contributions to the cause. Last month, at BookExpo America (the annual trade show for the publishing industry), I saw an especially economical new contribution to the genre: a volume attributed to G.W. Bush under the title Whoops, I Was Wrong. The pages were completely blank.
Such books change nobody’s mind, of course. The market for them is purely a function of how much enthusiasm the choir feels for the sermon being addressed to it. As it turns out, Why Truth Matters has nothing to do with the G.O.P., and everything to do with what is sometimes called the postmodern academic left -- home to cross-dressing Nietzscheans and dupes of the Sokal hoax.
Or so one gathers from the muttering of various shell-shocked Culture Warriors. Like screeds against the neocons, the diatribes contra pomo now tend to be light on data, and heavy on the indignation. (The choir does love indignation.)
Fortunately, Why Truth Matters, by Ophelia Benson and Jeremy Stangroom, is something different. As polemics go, it is short and adequately pugnacious. Yet the authors do not paint their target with too broad a brush. At heart, they are old-fashioned logical empiricists -- or, perhaps, followers of Samuel Johnson, who, upon hearing of Bishop Berkeley’s contention that the objective world does not exist, refuted the argument by kicking a rock. Still, Benson and Stangroom do recognize that there are numerous varieties of contemporary suspicion regarding the concept of truth.
They bend over backwards in search of every plausible good intention behind postmodern epistemic skepticism. And then they kick the rock.
The authors run a Web site of news and commentary, Butterflies and Wheels. And both are editors of The Philosophers’ Magazine, a quarterly journal. In the spirit of full disclosure, it bears mentioning that I write a column for the latter publication.
A fact in no way disposing me, however, to overlook a striking gap in the book’s otherwise excellent index: the lack of any entry for “truth, definition of.” When I contacted Ophelia Benson recently for an e-mail interview, that seemed like the place to start.
Q: What is truth? Is there more than one kind? If not, why not?
A: I'll just refer you to jesting Pilate, and let it go at that!
Q: Well, the gripe about jesting Pilate is that "he would not stay for the answer." Whereas I am actually going to stick around and press the point. Your book pays tribute to the human capacity for finding truth, and warns against cultural forces tending to undermine or destroy it. So what's the bottom-line criterion you have in mind for defining truth?
A: It all depends, as pedants always say, on what you mean by "truth." Sure, in a sense, there is more than one kind. There is emotional truth, for instance, which is ungainsayable and rather pointless to dispute. It is also possible and not necessarily silly to talk about somewhat fuzzy-bordered kinds such as literary truth, aesthetic truth, the truth of experience, and the like.
The kind of truth we are concerned with in the book is the fairly workaday, empirical variety that is (or should be) the goal of academic disciplines such as history and the sciences. We are concerned with pretty routine sorts of factual claim that can be either supported or rejected on the basis of evidence, and with arguments that cast doubt on that very way of proceeding.
Q: Is anybody really making a serious dent in this notion of truth? You hear all the time that the universities are full of postmodernists who think that scientific knowledge is just a Eurocentric fad, and therefore people could flap their wings and fly to the moon if they wanted. And yet you never actually see anyone doing that. At least I haven't, and I go to MLA every year.
A: Of course, there is no shortage of wild claims about what people get up to in universities. Such things make good column fodder, good talk show fodder, good gossip fodder, not to mention another round of the ever-popular game of "Who's Most Anti-Intellectual?" But there are people making some serious dents in this notion of truth and of scientific knowledge, yes. That's essentially the subject matter of Why Truth Matters: the specifics of what claims are being made, in what disciplines, using what arguments.
There are people who argue seriously that, as Sandra Harding puts it, the idea that scientific "knowers" are in principle interchangeable means that "white, Western, economically privileged, heterosexual men can produce knowledge at least as good as anyone else can" and that this appears to be an antidemocratic consequence. Harding's books are still, despite much criticism, widely assigned. There are social constructionists in sociology and philosophy of science who view social context as fully explanatory of the formation of scientific belief and knowledge, while excluding the role of evidence.
There are Afrocentric historians who make factual claims that contradict existing historical evidence, such as the claim that Aristotle stole his philosophy from the library at Alexandria when, as Mary Lefkowitz points out, that library was not built until after Aristotle's death. Lefkowitz was shocked to get no support from her colleagues when she pointed out factual errors of this kind, and even more shocked when the dean of her college (Wellesley) told her that "each of us had a different but equally valid view of history." And so on (there's a lot of the "so on" in the book).
That sort of thing of course filters out into the rest of the world, not surprisingly: People go to university and emerge having picked up the kind of thought Lefkowitz's dean had picked up; such thoughts get into newspaper columns and magazine articles; and the rest of us munch them down with our cornflakes.
We don't quite think we could fly to the moon if we tried hard enough, but we may well think there's something a bit sinister and elitist about scientific knowledge, we may well think that oppressed and marginalized groups should be allowed their own "equally valid" view of history by way of compensation, we may well think "there's no such thing as truth, really."
Q: Your book describes and responds to a considerable range of forms of thought: old-fashioned Pyrrhonian skepticism, "standpoint" epistemology, sociology of knowledge, neopragmatism, pomo, etc. Presumably not all questions about the possibility of a bedrock notion of truth are created equal. What kinds have a strong claim to serious consideration?
A: Actually, much of the range of thought we look at doesn't necessarily ask meta-questions about truth. A lot of it is more like second level or borrowed skepticism or relativism about truth, not argued so much as referenced, or simply assumed; waved at rather than defended. The truth relativism is not itself the point, it's rather a tool for the purpose of making truth-claims that are not supported by evidence or that contradict the evidence. Skepticism and relativism about truth in this context function as a kind of veil or smokescreen to obscure the way ideology shapes the truth-claims that are being made.
As a result much of the activity on the subject takes place on this more humdrum quotidian level, in between metaquestions and bedrock notions of truth, where one can ask if this map is accurate or not, if this bus schedule tells us where and when the bus really does go, if this history text contains falsifications or not, if the charges against this scholar or that tobacco company are based on sound evidence or not.
Meta-questions about truth of course do have a strong claim to serious consideration. Maybe we are brains in vats; maybe we all are, without realizing it, Keanu Reeves; there is no way to establish with certainty that we're not; thus questions on the subject do have a claim to consideration, however unresolvable they are. (At the same time, however unresolvable they are, it is noticeable that on the mundane level of this particular possible world, no one really does take them seriously; no one really does seriously doubt that fire burns or that axes chop.)
Intermediate level questions can also be serious, searching, and worth exploring. Standpoint epistemology is reasonable enough in fields where standpoints are part of the subject matter: histories of experience, of subjective views and mentalities, of oppression, for example, surely need at least to consider the subjective stance of the inquirer. Sociology of knowledge is an essential tool of inquiry into the way interests and institutions can shape research programs and findings, provided it doesn't, as a matter of principle, exclude the causative role of evidence. In short there are, to borrow a distinction of Susan Haack's, sober and inebriated versions of questions about the possibility of truth.
Q: Arguably even the most extremist forms of skepticism can have some beneficial effects -- if only indirectly, by raising the bar for what counts as a true or valid statement. (That's one thumbnail version of intellectual history, anyway: no Sextus Empiricus would mean no Descartes.) Is there any sense in which "epistemic relativism" might have some positive effect, after all?
A: Oh, sure. In fact I think it would be extremely hard to argue the opposite. And the ways in which it could have positive effects seem obvious enough. There's Mill's point about the need for contrary arguments in order to know the grounds of one's own views, for one. Our most warranted beliefs, as he says, have no safeguard to rest on other than a standing invitation to everyone to refute them.
If we know only our own side of the case, we don't know much. Matt Ridley made a related point in a comment on the Kitzmiller Intelligent Design trial for Butterflies and Wheels: "My concern ... is about scientific consensus. In this case I find it absolutely right that the overwhelming nature of the consensus should count against creationism. But there have been plenty of other times when I have been on the other side of the argument and seen what Madison called the despotism of the majority as a bad argument.... I agree with the scientific consensus sometimes but not always, but I do not do so because it is a consensus. Science does not work that way or Newton, Harvey, Darwin and Wegener would all have been voted into oblivion."
Another way epistemic relativism may be of value is that it is one source (one of many) of insight into what it is that some people dislike and distrust about science and reason. In a way it's a silly argument to say that science is elitist or undemocratic, since it is of course the paradigmatic case of the career open to talent. But in another way it isn't silly at all, because as Michael Young pointed out in the '50s, meritocracy has some harsh side-effects, such as erosion of the sense of self-worth of the putative less talented. Epistemic relativism may function partly as a reminder of that.
The arguments of epistemic relativism may be unconvincing, but some of the unhappiness that prompts the arguments may be more worth taking seriously. However one then has to weigh those effects against the effects of pervasive suspicion of science and reason, and one grows pale with fear. At a time when there are so many theocrats and refugees from the reality-based community on the loose, epistemic relativism doesn't seem to need more encouragement than it already has.
Why do narratives of decline have such perennial appeal in the liberal arts, especially in the humanities? Why is it that, year after year, meeting after meeting, we hear laments about the good old days and predictions of ever worse days to come? Why is such talk especially common in elite institutions where, by many indicators, liberal education is doing quite well, thank you very much? I think I know why. The opportunity is just too ripe for the prophets of doom and gloom to pass up.
There is a certain warmth and comfort in being inside the “last bastion of the liberal arts,” as B.A. Scott characterized prestigious colleges and research universities in his collection of essays The Liberal Arts in a Time of Crisis (New York: Praeger, 1990). The weather outside may be frightful, but inside the elite institutions, if not “delightful,” it’s perfectly tolerable, and likely to remain so until retirement time.
Narratives of decline have also been very useful to philanthropy, but in a negative way. As Tyler Cowen recently noted in The New York Times, “many donors … wish to be a part of large and successful organizations -- the ‘winning team’ so to speak.” They are not eager to pour out their funds in order to fill a moat or build a wall protecting some isolated “last bastion.” Narratives of decline provide a powerful reason not to reach for the checkbook. Most of us in the foundation world, like most other people, prefer to back winners rather than losers. Since there are plenty of potential winners out there, in areas of pressing need, foundation dollars have tended to flow away from higher education in general, and from liberal education in particular.
But at the campus level there’s another reason for the appeal of the narrative of decline, a genuinely insidious one. If something goes wrong the narrative of decline of the liberal arts always provides an excuse. If course enrollments decline, well, it’s just part of the trend. If students don’t like the course, well, the younger generation just doesn’t appreciate such material. If the department loses majors, again, how can it hope to swim upstream when the cultural currents are so strong? Believe in a narrative of decline and you’re home free; you never have to take responsibility, individual or collective, for anything having to do with liberal education.
There’s just one problem. The narrative of decline is about one generation out of date and applies now only in very limited circumstances. It’s true that in 1890, degrees in the liberal arts and sciences accounted for about 75 percent of all bachelor’s degrees awarded; today the number is about 39 percent, as Patricia J. Gumport and John D. Jennings noted in “Toward the Development of Liberal Arts Indicators” (American Academy of Arts and Sciences, 2005). But most of that decline had taken place by 1956, when the liberal arts and sciences had 40 percent of the degrees.
Since then the numbers have gone up and down, rising to 50 percent by 1970, falling to 33 percent by 1990, and then rising close to the 1956 levels by 2001, the last year for which the data have been analyzed. Anecdotal evidence, and some statistics, suggest that the numbers continue to rise, especially in Research I universities.
For example, in the same AAA&S report (“Tracking Changes in the Humanities”) from which these figures have been derived, Donald Summer examines the University of Washington (“Prospects for the Humanities as Public Research Universities Privatize their Finances”) and finds that majors in the humanities have been increasing over the last few years and course demand is strong.
The stability of liberal education over the past half century seems to me an amazing story, far more compelling than a narrative of decline, especially when one recognizes the astonishing changes that have taken place over that time: the vast increase in numbers of students enrolled in colleges and universities, major demographic changes, the establishment of new institutions, the proliferation of knowledge, the emergence of important new disciplines, often in the applied sciences and engineering, and, especially in recent years, the financial pressures that have pushed many institutions into offering majors designed to prepare students for entry level jobs in parks and recreation, criminal justice, and now homeland security studies. And, underlying many of these changes, transformations of the American economy.
The Other, Untold Story
How, given all these changes, and many others too, have the traditional disciplines of the arts and sciences done as well as they have? That would be an interesting chapter in the history of American higher education. More pressing, however, is the consideration of one important consequence of narratives of decline of the liberal arts.
This is the “last bastion” mentality, signs of which are constantly in evidence when liberal education is under discussion. If liberal education can survive only within the protective walls of elite institutions, it doesn’t really make sense to worry about other places. Graduate programs, then, will send the message that success means teaching at a well-heeled college or university, without any hint that with some creativity and determination liberal education can flourish in less prestigious places, and that teaching there can be as satisfying as it is demanding.
Here’s one example of what I mean. In 2000, as part of a larger initiative to strengthen undergraduate liberal education, Grand Valley State University, a growing regional public institution in western Michigan, decided to establish a classics department. Through committed teaching, imaginative curriculum design, and with strong support from the administration, the department has grown to six tenured and tenure track positions with about 50 majors on the books at any given moment. Most of these are first-generation college students from blue-collar backgrounds who had no intention of majoring in classics when they arrived at Grand Valley State, but many have an interest in mythology or in ancient history that has filtered down through popular culture and high school curricula. The department taps into this interest through entry-level service courses, which are taught by regular faculty members, not part timers or graduate students.
That’s a very American story, but the story of liberal education is increasingly a global one as well. New liberal arts colleges and universities are springing up in many countries, especially those of the former Soviet Union.
I don’t mean that the spread of liberal education comes easily, in the United States or elsewhere. It’s swimming upstream. Cultural values, economic anxieties, and all too often institutional practices (staffing levels, salaries, leave policies and research facilities) all exert their downward pressure. It takes determination and devotion to press ahead. And those who do rarely get the recognition or credit they deserve.
But breaking out of the protective bastion of the elite institutions is vital for the continued flourishing of liberal education. One doesn’t have to read a lot of military history to know what happens to last bastions. They get surrounded; they eventually capitulate, often because those inside the walls squabble among themselves rather than devising an effective breakout strategy. We can see that squabbling at work every time humanists treat with contempt the quantitative methods of their scientific colleagues and when scientists contend that the reason we are producing so few scientists is that too many students are majoring in other fields of the liberal arts.
The last bastion mentality discourages breakout strategies. Even talking to colleagues in business or environmental studies can be seen as collaborating with the enemy rather than as a step toward broadening and enriching the education of students majoring in these fields. The last bastion mentality, like the widespread narratives of decline, injects the insidious language of purity into our thinking about student learning, hinting that any move beyond the cordon sanitaire is somehow foul or polluting and likely to result in the corruption of high academic standards.
All right, what if one takes this professed concern for high standards seriously? What standards, exactly, do we really care about and wish to see maintained? If it’s a high level of student engagement and learning, then let’s say so, and be forthright in the claim that liberal education is reaching that standard, or at least can reach that standard if given half a chance. That entails, of course, backing up the claim with some systematic form of assessment.
That provides one way to break out of the last bastion mentality. One reason that liberal education remains so vital is that when properly presented it contributes so much to personal and cognitive growth. The subject matter of the liberal arts and sciences provides some of the best ways of helping students achieve goals such as analytical thinking, clarity of written and oral expression, problem solving, and alertness to moral complexity, unexpected consequences and cultural difference. These goals command wide assent outside academia, not least among employers concerned about the quality of their work forces. They are, moreover, readily attainable through liberal education provided proper attention is paid to “transference.” “High standards” in liberal education require progress toward these cognitive capacities.
Is it not time, then, for those concerned with the vitality of liberal education to abandon the defensive strategies that derive from the last bastion mentality, and adopt a new and much more forthright stance? Liberal education cares about high standards of student engagement and learning, and it cares about them for all students regardless of their social status or the institution in which they are enrolled.
There is, of course, a corollary. Liberal education can’t just make the claim that it is committed to such standards, still less insist that others demonstrate their effectiveness in reaching them, unless those of us in the various fields of the arts and sciences are willing to put ourselves on the line. In today’s climate we have to be prepared to back up the claim that we are meeting those standards. Ways to make such assessments are now at hand, still incomplete and imperfect, but good enough to provide an opportunity for the liberal arts and sciences to show what they can do.
That story, I am convinced, is far more compelling than any narrative of decline.
George Scialabba is an essayist and critic working at Harvard University who has just published a volume of selected pieces under the title Divided Mind, issued by a small press in Boston called Arrowsmith. The publisher does not have a Web site. You cannot, as yet, get Divided Mind through Amazon, though it is said to be available in a few Cambridge bookstores. This may be the future of underground publishing: Small editions, zero publicity, and you have to know the secret password to get a copy. (I'll give contact information for the press at the end of this column, for anyone willing to put a check in the mail the old-fashioned way.)
In any case, it is about time someone brought out a collection of Scialabba's work. That it's only happening now (15 years after the National Book Critics Circle gave him its first award for excellence in reviewing) is a sign that things are not quite right in the world of belles lettres. He writes in what William Hazlitt -- the patron saint of generalist essayists -- called "the familiar style," and he is sometimes disarmingly explicit about the difficulties, even the pain, he experiences in trying to resolve cultural contradictions. That is no way to create the aura of mystery and mastery so crucial for awesome intellectual authority.
Scialabba has his admirers, even so, and one of the pleasant surprises of Divided Mind is the set of comments on the back. "I am one of the many readers who stay on the lookout for George Scialabba's byline," writes Richard Rorty. "He cuts to the core of the ethical and political dilemmas he discusses." The novelist Norman Rush lauds Scialabba's prose itself for "bring[ing] the review-essay to a high state of development, incorporating elements of memoir and skillfully deploying the wide range of literary and historical references he commands." And there is a blurb from Christopher Hitchens praising his "eloquence and modesty" -- though perhaps that is just a gesture of relief that Scialabba has not reprinted his candid reassessment of Hitch, post-9/11.
One passage early in the collection gives a roll call of exemplary figures practicing a certain kind of writing. It includes Randolph Bourne, Bertrand Russell, George Orwell, and Maurice Merleau-Ponty, among others. "Their primary training and frame of reference," Scialabba writes, "were the humanities, usually literature or philosophy, and they habitually, even if only implicitly, employed values and ideals derived from the humanities to criticize contemporary politics.... Their 'specialty' lay not in unearthing generally unavailable facts, but in penetrating especially deeply into the shared culture, in grasping and articulating its contemporary moral/political relevance with special originality and force."
The interesting thing about this passage -- aside from its apt self-portrait of the author -- is the uncertain meaning of that slashmark in the phrase "contemporary moral/political relevance." Does it serve as the equivalent of an equals sign? I doubt that. But it suggests that the relationship is both close and problematic.
We sometimes say that a dog "worries" a bone, meaning he chews it with persistent attention; and in that sense, Divided Mind is a worried book, gnawing with a passion on the "moral/political" problems that go with holding an egalitarian outlook. Scialabba is a man of the left. If you can imagine a blend of Richard Rorty's skeptical pragmatism and Noam Chomsky's geopolitical worldview -- and it's a bit of a stretch to reconcile them, though somehow he does this -- then you have a reasonable sense of Scialabba's own politics. In short, it is the belief that life would be better, both in the United States and elsewhere, with more economic equality, a stronger sense of the common good, and the end of that narcissistic entitlement fostered by the American military-industrial complex.
A certain amount of gloominess goes with holding these principles without believing that History is on the long march to their fulfillment. But there is another complicating element in Divided Mind. It is summed in a passage from the Spanish philosopher José Ortega y Gasset's The Revolt of the Masses, from 1930 -- though you might find the same thought formulated by a dozen other conservative thinkers.
"The most radical division it is possible to make of humanity," Ortega y Gasset declares, "is that which splits it into two classes of creatures: those who make great demands on themselves, piling up difficulties and duties; and those who demand nothing special of themselves, but for whom to live is to be every moment what they already are, without imposing on themselves any effort toward perfection; mere buoys that float on the waves."
Something in Ortega y Gasset's statement must have struck a chord with Scialabba. He quotes it in two essays. "Is this a valid distinction?" he asks. "Yes, I believe it is...." But the idea bothers him; it stimulates none of the usual self-congratulatory pleasures of snobbery. The division of humanity into two categories -- the noble and "the masses" -- lends itself to anti-democratic sentiments, if not the most violently reactionary sort of politics.
At the very least, it undermines the will to make egalitarian changes. Yet it is also very hard to gainsay the truth of it. How, then, to resolve the tension? Divided Mind is a series of efforts -- provisional, personal, and ultimately unfinished -- to work out an answer.
At this point it bears mentioning that Scialabba's reflections do not follow the protocols of any particular academic discipline. He took his undergraduate degree at Harvard (Class of 1969) and has read his way through a canon or two; but his thinking is not, as the saying now goes, "professionalized." He is a writer who works at Harvard -- but not in the way that statement would normally suggest.
"After spells as a substitute teacher and Welfare Department social worker," he told me recently in an e-mail exchange, "I was, for 25 years, the manager or superintendent of a mid-sized academic office building, which housed Harvard's Center for International Affairs and several regional (East Asian, Russian, Latin American, Middle Eastern, etc.) research centers. I gave directions to visitors, scheduled the seminar rooms, got offices painted, carpets installed, shelves built, windows washed, keys made, bills paid. I flirted with graduate students and staff assistants, schmoozed with junior faculty, and saw, heard, overheard, and occasionally got to know a lot of famous and near-famous academics."
As day jobs go, it was conducive to writing. "I had a typewriter and a copy machine," he says, "a good library nearby, and didn't come home every night tired or fretting about office politics." When the "homely mid-sized edifice" was replaced with "a vast, two-building complex housing the political science and history departments as well," the daily grind changed as well: "I'm now part of a large staff, and most of my days are spent staring at a flickering screen."
More pertinent to understanding what drives him as a writer, I think, are certain facts about his background that the reader glimpses in various brief references throughout his essays. The son of working-class Italian-American parents, he was once a member of the ascetic and conservative Roman Catholic group Opus Dei. In adolescence, he thought he might have a religious vocation. The intelligence at work in his critical writings is now unmistakably secular and modernist. He shows no sign of nostalgia for the faith now lost to him. But the extreme dislocation implied in leaving one life for another gives an additional resonance to the title of his collection of essays.
"For several hundred years," he told me, "a small minority of Italian/French/Spanish adolescent peasant or working-class boys -- usually the sternly repressed or (like me) libido-deficient ones -- have been devout, well-behaved, studious. Depending on their abilities and on what sort of priest they're most in contact with, they join a diocese or a religious order. Among the latter, the bright ones become Jesuits; the more modestly gifted or mystically inclined become Franciscans. I grew up among Franciscans and at first planned to become one, but I just couldn't resist going to college -- intellectual concupiscence, I guess."
Instead, he was drawn into Opus Dei -- a group trying, as he puts it, "to make a new kind of religious vocation possible, combining the traditional virtues and spiritual exercises with a professional or business career."
He recalls being "tremendously enthusiastic for the first couple of years, trying very hard, though fruitlessly, to recruit my fellow Catholic undergraduates at Harvard in the late 1960s. It was a strain, being a divine secret agent and trying at the same time to survive academically before the blessed advent of grade inflation. But the reward -- an eternity of happiness in heaven!"
The group permitted him to read secular authors, the better to understand and condemn their heresies.
"Then," he says, "Satan went to work on me. As I studied European history and thought, my conviction gradually grew that the Church had, for the most part, been on the wrong side. Catholic philosophy was wrong; Catholic politics were authoritarian....On one occasion, just after I had read Dostoevsky's parable of the Grand Inquisitor, I was rebuked for my intellectual waywardness by a priestly superior with, I fancied, a striking physical resemblance to the terrifying prelate in Ivan's fable. The hair stood up on the back of my neck."
The departure was painful. The new world he discovered on the other side of his crossing "wasn't in the slightest degree an original discovery," he says. "I simply bought the now-traditional narrative of modernity, hook, line and sinker. I still do, pretty much." But he was not quite ready to plunge without reserve into the counterculture of the time -- sex, drugs, rock and roll.
"I was, to an unusual degree, living in my head rather than my body," he says about the 1970s. "I had emerged from Opus Dei with virtually no friends, a conscious tendency to identify my life course with the trajectory of modernity, and an unconscious need to be a saint, apostle, missionary. And I had inherited from my working-class Italian family no middle-class expectations, ambitions, social skills, ego structures."
Instead, he says, "I read a lot and seethed with indignation at all forms of irrational authority or even conventional respectability. So I didn't take any constructive steps, like becoming a revolutionary or a radical academic.... In those days, it wasn't quite so weird not to be ascending some career ladder."
So he settled into a job that left him with time to think and write. And to deal with the possibility of eternal damnation -- something that can occasionally bedevil one part of the mind, even while the secular and modernist half retains its disbelief.
Somewhere in my study is a hefty folder containing, if not George Scialabba's complete oeuvre, then at least the bulk of it. After several years of reading and admiring his essays, I can testify that Divided Mind is a well-edited selection covering many of his abiding concerns. It ought to be of interest to anyone interested in the "fourth genre," as the essay is sometimes called. (The other three -- poetry, drama, and fiction -- get all the glory.)
As noted, the publisher seems to be avoiding crass commercialism (not to mention convenience to the reader) by keeping Divided Mind out of the usual online bookselling venues. You can order it from the address below for $13, however. That price includes shipping and handling.
A few days ago, I tried the thought experiment of pretending never to have read anything by Jean Baudrillard – instead trying to form an impression based only on media coverage following his death last week. And there was a lot more of it than I might have expected. The gist being that, to begin with, he was a major postmodernist thinker. Everyone agrees about that much, usually without attempting to define the term, which is probably for the best. It also seems that he invented virtual reality, or at least predicted it. He may have had something to do with YouTube as well, though his role in that regard is more ambiguous. But the really important thing is that he inspired the "Matrix" movie franchise.
A segment on National Public Radio included a short clip from the soundtrack in which Laurence Fishburne’s character Morpheus intones the Baudrillard catchphrase, “Welcome to the desert of the real.” The cover of Simulacra and Simulation -- in some ways his quintessential theoretical text, first published in a complete English translation by the University of Michigan Press in 1994 -- is shown in the first film. Furthermore, the Wachowski brothers, who wrote and directed the trilogy, made the book required reading for all the actors, including Keanu Reeves. (It is tempting to make a joke at this point, but we will all be better people for it if I don’t.)
There was more to Baudrillard than his role as Marshall McLuhan of the cyberculture. And yet I can’t really blame harried reporters for emphasizing the most blockbuster-ish dimensions of his influence. "The Matrix" was entertainment, not an educational filmstrip, and Baudrillard himself said that its take on his work “stemmed mostly from misunderstandings.” But its computer-generated imagery and narrative convolutions actually did a pretty decent job of conveying the feel, if not the argument, of Baudrillard’s work.
As he put it in an essay included in The Illusion of the End (Stanford University Press, 1994): “The acceleration of modernity, of technology, events and media, of all exchanges – economic, political, sexual – has propelled us to ‘escape velocity,’ with the result that we have flown free of the referential sphere of the real and of history.” You used to need digitalized special effects to project that notion. But I get the feeling of being “flown free of the referential sphere of the real and of history” a lot nowadays, especially while watching certain cable news programs.
Some of the coverage of Baudrillard’s death was baffled but vaguely respectful. Other commentary has been more hostile – though not always that much more deeply informed. A case in point would be an article by Canadian pundit Robert Fulford that appeared in The National Post on Saturday. A lazy diatribe, it feels like something kept in a drawer for the occasion of any French thinker’s death – with a few spots left blank, for details to be filled in per Google.
A tip-off to the generic nature of the piece is the line: “Strange as it seems, in the 1970s much of the Western world was ready to embrace him.” Here, Fulford can count on the prefab implication of a reference to that decade as a time of New Left-over radicalism and countercultural indulgence. In fact Baudrillard was little known outside France until the 1980s, and even then he had a very small audience until late in the decade. The strong mood coming from most of Baudrillard’s work is that of bitter disappointment that oppositional social movements of earlier years had been neutralized – absorbed into academic bureaucracy and consumer society, with no reason to think that they would revive.
And if we are going to play the game of periodization-by-decade, well, it is perhaps worth mentioning that “much of the Western world was ready to embrace him" only after several years of watching Ronald Reagan -- a man whose anecdotes routinely confused his roles in motion pictures with actual experiences from his own life -- in a position of great power. The distinction between reality and simulation had been worn away quite a bit, by that point. Some of Baudrillard’s crazier flights of rhetoric were starting to sound more and more like apt descriptions of the actual.
Even then, it was by no means a matter of his work persuading university professors “that novels and poems had become irrelevant as subject matter for teaching and research,” as the macro setting for culture-war boilerplate on Fulford’s computer puts it.
Enthusiasm for Baudrillard’s work initially came from artists, writers, and sundry ne’er-do-wells in the cultural underground. The post-apocalyptic tone of his sentences, the science-fictionish quality of his concepts, resonated in ways that at least some people found creatively stimulating, whether or not they grasped his theories. (True confession: While still in my teens, I started writing a novel that opened with an epigraph from one of his books, simply because it sounded cool.)
Baudrillard’s work played no role whatever in the debates over “the canon” to which Fulford alludes. But he was, in a different sense, the most literary of theorists. He translated Bertolt Brecht, among other German authors, into French. Some of his earliest writings were critical articles on the fiction of William Styron and Italo Calvino. In 1978, he published a volume of poems. And a large portion of his output clearly belongs to the literary tradition of the aphorism and the “fragment” (not an unfinished work, but a very dense and compact form of essay). These are things you notice if you actually read Baudrillard, rather than striking po-faced postures of concern about how literature should be “subject matter for teaching and research.”
Besides, it is simply untrue to say that Baudrillard’s reception among American academics was one of uncritical adulation. If there was a protracted lag between the appearance of his first books in the 1960s and the dawn of interest in his work among scholars here in the 1980s, that was not simply a matter of the delay in translation. For one thing, it was hard to know what to make of Baudrillard, and a lot of the initial reception was quite skeptical.
In the mid-1960s, he became a professor of sociology at the University of Paris at Nanterre, but the relationship of his work to the canon of social theory (let alone empirical research) is quite oblique. It’s also difficult to fit him into the history of philosophy as a discipline. Some of his work sounds like Marxist cultural theory, such as the material recently translated in Utopia Deferred: Writings for ‘Utopie’ 1967-1978 -- a collection distributed by MIT Press, a publisher known, not so coincidentally, for its books on avant-garde art. Still, there is plenty in Baudrillard’s work to irritate any Marxist (he grew profoundly cynical about the idea of social change, let alone socialism). And he delighted in baiting feminists with statements equating femininity with appearance, falsehood, and seduction.
Baudrillard was, in short, a provocateur. After a while that was all he was – or so it seemed to me, anyway. The rage of indignant editorialists notwithstanding, a lot of the response to Baudrillardisme amounted to treating him as a stimulating but dubious thinker: not so much a theorist as a prose-poet. A balanced and well-informed critical assessment of his work comes from Douglas Kellner, a professor of philosophy at UCLA, who wrote Jean Baudrillard: From Marxism to Postmodernism and Beyond (Stanford University Press, 1989), the first critical book on him in English. Kellner has provided me with the manuscript of a forthcoming essay on Baudrillard, which I quote here with permission.
“So far,” he writes, “no Baudrillardian school has emerged. His influence has been largely at the margins of a diverse number of disciplines ranging from social theory to philosophy to art history, thus it is difficult to gauge his impact on the mainstream of philosophy, or any specific academic discipline.”
At this point I’d interject that his questionable position within the disciplinary matrix (so to speak) tends to reinforce Baudrillard’s status as a minor literary figure, rather than an academic superstar. Kellner goes on to note that Baudrillard “ultimately goes beyond conventional philosophy and theory altogether into a new sphere and mode of writing that provides occasionally biting insights into contemporary social phenomena and provocative critiques of contemporary and classical thought. Yet he now appears in retrospect as a completely idiosyncratic thinker who went his own way....”
Not that Baudrillard exactly suffered for going his own way, however. A self-portrait of the postmodern intellectual as global jet-setter emerges in the five volumes of his notebook jottings published under the title “Cool Memories.” You get the sense that he spent a lot of time catching planes to far-flung speaking engagements – not to mention seeing various unnamed women out the door, once they had been given a practicum in the theories worked out in his book De la Séduction.
Many of the writings that appeared during the last two decades of his life simply recycled ideas from his early work. But celebrity is a full-time job.
One offer he did turn down was the chance to do a cameo in one of the Matrix sequels. (Instead, it was Cornel West who did his star turn onscreen as a gnomic philosophical figure.) Still, the appearance of "Simulacra and Simulation" in the first film greatly increased the book’s distribution, if not comprehension of its themes.
According to Mike Kehoe, the sales manager for the University of Michigan Press, which published the English translation, sales doubled in the year following “The Matrix.” The book had often been assigned in university courses. But those sales, too, jumped following the release of the film.
Rather than indulging my own half-baked quasi-Baudrillardian speculations about how his theories of media metaphysics were reabsorbed by the culture industry, I decided to bring the week’s musings to a close by finding out more about how the book itself ended up on screen.
“It wasn’t the usual sort of product placement,” LeAnn Fields, a senior executive editor for the press, told me by phone. “That is, we didn’t pay them. It was the other way around. The movie makers contacted us for permission. But they reserved the right to redesign the cover for it when it appeared onscreen.”
The familiar Michigan edition is a paperback with burgundy letters on a mostly white cover. “But in the film,” said Fields, “it becomes a dark green hardcover book. We were quite surprised by that, but I guess it’s understandable since it serves as a prop and a plot device, as much as anything.” (If memory serves, some kind of cyber-gizmo is concealed in it by Keanu Reeves.)
I asked Fields if the press had considered bringing out a special version of the book, simulating its simulation in a deluxe hardback edition. “No,” she said with a laugh, “I don’t think we ever considered that. Maybe we should have, though.”
Recommended Reading: Mark Poster's edition of Baudrillard's Selected Writings, originally published by Stanford University Press in 1988, is now available as a PDF document. The single best short overview of Baudrillard's work is Douglas Kellner's entry on him for the Stanford Encyclopedia of Philosophy. There is an International Journal of Baudrillard Studies that publishes both commentary on his work and translations of some of his shorter recent writings.
Word that Richard Rorty was on his deathbed – that he had pancreatic cancer, the same disease that killed Jacques Derrida almost three years ago – reached me last month via someone who more or less made me swear not to say anything about it in public. The promise was easy enough to keep. But the news made reading various recent books by and about Rorty an awfully complicated enterprise. The interviews in Take Care of Freedom and Truth Will Take Care of Itself (Stanford University Press, 2006) and the fourth volume of Rorty’s collected papers, Philosophy as Cultural Politics (Cambridge University Press, 2007) are so bracingly quick-witted that it was very hard to think of them as his final books.
But the experience was not as lugubrious as it may sound. I found myself laughing aloud, and more than once, at Rorty’s consistent indifference to certain pieties and protocols. He was prone to outrageous statements delivered with a deadpan matter-of-factness that could be quite breathtaking. The man had chutzpah.
It’s a “desirable situation,” he told an interviewer, “not to have to worry about whether you are writing philosophy or literature. But, in American academic culture, that’s not possible, because you have to worry about what department you are in.”
The last volume of his collected papers contains a piece called “Grandeur, Profundity, and Finitude.” It opens with a statement sweeping enough to merit that title: “Philosophy occupies an important place in culture only when things seem to be falling apart – when long-held and widely cherished beliefs are threatened. At such periods, intellectuals reinterpret the past in terms of an imagined future. They offer suggestions about what can be preserved and what must be discarded.”
Then, a few lines later, a paradoxical note of rude modesty interrupts all the grandeur and profundity. “In the course of the 20th century," writes Rorty, "there were no crises that called forth new philosophical ideas.”
It's not that the century was peaceful or crisis-free, by any means. But philosophers had less to do with responding to troubles than they once did. And that, for Rorty, is a good thing, or at least not a bad one – a sign that we are becoming less intoxicated by philosophy itself, more able to face crises at the level (social, economic, political, etc.) at which they actually present themselves. We may yet be able to accept, he writes, “that each generation will solve old problems only by creating new ones, that our descendants will look back on much that we have done with incredulous contempt, and that progress towards greater justice and freedom is neither inevitable nor impossible.”
Nothing in such statements is new, of course. They are the old familiar Rorty themes. The final books aren’t groundbreaking. But neither was there anything routine or merely contrarian about the way Rorty continued to challenge the boundaries within the humanities, or the frontier between theoretical discussion and public conversation. It is hard to imagine anyone taking his place.
An unexpected and unintentional sign of his influence recently came my way in the form of an old essay from The Journal of American History. It was there that David A. Hollinger, now chair of the department of history at the University of California at Berkeley, published a long essay called “The Problem of Pragmatism in American History.”
It appeared in 1980. And as of that year, Hollinger declared, it was obvious that “‘pragmatism’ is a concept most American historians have proved that they can get along without. Some non-historians may continue to believe that pragmatism is a distinctive contribution of America to modern civilization and somehow emblematic of America, but few scholarly energies are devoted to the exploration or even the assertion of this belief.”
Almost as an afterthought, Hollinger did mention that Richard Rorty had recently addressed the work of John Dewey from a “vividly contemporary” angle. But this seemed to be a marginal exception to the rule. “If pragmatism has a future,” concluded Hollinger in 1980, “it will probably look very different from the past, and the two may not even share a name.”
Seldom has a comment about the contemporary state of the humanities ever been overtaken by events so quickly and so thoroughly. Rorty’s Philosophy and the Mirror of Nature (Princeton University Press, 1979) had just been published, and he was finishing the last of the essays to appear in Consequences of Pragmatism (University of Minnesota Press, 1982).
It is not that the revival was purely Rorty's doing, and some version of it might have unfolded even without his efforts. In such matters, the pendulum does tend to swing.
But Rorty's suggestion that John Dewey, Martin Heidegger, and Ludwig Wittgenstein were the three major philosophers of the century, and should be discussed together -- this was counterintuitive, to put it mildly. It created excitement that blazed across disciplinary boundaries, and even carried pragmatism out of the provinces and into international conversation. I'm not sure how long Hollinger's point that pragmatism was disappearing from textbooks on American intellectual history held true. But scholarship on the original pragmatists was growing within a few years, and anyone trying to catch up with the historiography now will soon find his or her eyeballs sorely tested.
In 1998, Morris Dickstein, a senior fellow at the City University of New York Graduate Center, edited a collection of papers called The Revival of Pragmatism: New Essays on Social Thought, Law, and Culture (Duke University Press) -- one of the contributors to it being, no surprise, Richard Rorty. “I’m really grieved,” Dickstein told me on Monday. "Rorty evolved from a philosopher into a mensch.... His respect for his critics, without yielding much ground to them, went well with his complete lack of pretension as a person.”
In an e-mail note, he offered an overview of Rorty that was sympathetic though not uncritical.
“To my mind," Dickstein wrote, "he was the only intellectual who gave postmodern relativism a plausible cast, and he was certainly the only one who combined it with Dissent-style social democratic politics. He admired Derrida and Davidson, Irving Howe and Harold Bloom, and told philosophers to start reading literary criticism. His turn from analytic philosophy to his own brand of pragmatism was a seminal moment in modern cultural discourse, especially because his neopragmatism was rooted in the 'linguistic turn' of analytic philosophy. His role in the Dewey revival was tremendously influential even though Dewey scholars universally felt that it was his own construction. His influence on younger intellectuals like Louis Menand and David Bromwich was very great and, to his credit, he earned the undying enmity of hard leftists who made him a bugaboo."
The philosopher "had a blind side when it came to religion," continued Dickstein, "and he tended to think of science as yet another religion, with its faith in empirical objectivity. But it's impossible to write about issues of truth or objectivity today without somehow bouncing off his work, as Simon Blackburn and Bernard Williams both did in their very good books on the subject. I liked him personally: he was generous with his time and always civil with opponents.”
A recent essay discussing Rorty challenges the idea that Rorty “had a blind side when it came to religion.” Writing in Dissent, Casey Nelson Blake, a professor of history and American studies at Columbia University, notes that Rorty “in recent years stepped back from his early atheist pronouncements, describing his current position as ‘anti-clerical,’ and he has begun to explore, with increasing sympathy and insight, the social Christianity that his grandfather Walter Rauschenbusch championed a century ago.”
Blake quotes a comment by Rorty from The Future of Religion, an exchange with the Catholic philosopher Gianni Vattimo that Columbia University Press published in 2005. (It comes out in paperback this summer.)
“My sense of the holy,” wrote Rorty, “insofar as I have one, is bound up with the hope that someday, any millennium now, my remote descendants will live in a global civilization in which love is pretty much the only law. In such a society, communication would be domination-free, class and caste would be unknown, hierarchy would be a matter of temporary pragmatic convenience, and power would be entirely at the disposal of the free agreement of a literate and well-educated electorate.”
I'm not sure whether that counts as a religious vision, by most standards. But it certainly qualifies as something that requires a lot of faith.
In a variety of arenas, from politics to high schools, from colleges to the military, Americans argue as though the proper face-to-face discussion in our society ought to be between religion and science. This is a misunderstanding of the taxonomy of thought. Religion and science are in different families on different tracks: science deals with is vs. isn’t and religion, to the extent that it relates to daily life, deals with should vs. shouldn’t.
These are fundamentally different trains. They may hoot at each other in passing, and many people attempt to switch them onto the same track (mainly in order to damage science), but this is an act of the desperate, not the thoughtful.
It is true that a portion of religious hooting has to do with is vs. isn’t questions, in the arena of creationism and its ancillary arguments. However, this set of arguments, important as it might be for some religious people, is not important to a great many (especially outside certain Protestant variants), while the moral goals and effects of religious belief are a far more common and widespread concern among many faiths. I was raised in Quaker meeting, where we had a saying: Be too busy following the good example of Jesus to argue about his metaphysical nature.
Until recently, most scientists didn’t bother trying to fight with religion; for the most part they ignored it or practiced their own faiths. However, in recent years Carl Sagan, Richard Dawkins, Daniel Dennett and Sam Harris have decided to enter the ring and fight religion face to face. The results have been mixed. I have read books by all of these authors on this subject, as well as the interesting 2007 blog exchange between Harris and Andrew Sullivan, one of the best writers active today and a practicing Catholic, and it is clear that a great deal of energy is being expended firing heavy ordnance into black holes with no likelihood of much effect.
The problem that the scientific horsemen face is that theirs is the language of is/isn’t. Their opponents (mostly Christians but by implication observant Jews and Muslims as well) don’t use the word “is” to mean the same thing. To a religious person, God is and that’s where the discussion begins. To a nonreligious scientist, God may or may not be, and that is where the discussion begins.
The two sides, postulating only two for the moment, are each on spiral staircases, but the stairs wind around each other and never connect: this is the DNA of unmeeting thoughts. Only shouting across the gap happens, and the filters of meaning are not aligned. That is why I don’t put much faith, you’ll pardon the expression, in this flying wedge of scientific lancers to change very many minds.
Dennett’s approach is quite different from the others at a basic level; he views religious people as lab rats and wants to study why they squeak the way they do. That way of looking at the issue seems insulting at first but is more honest and practical in that it doesn’t really try to change minds that are not likely to change.
But these arguments are the wrong ones at a very basic level, especially for our schools and the colleges that train our teachers. The contrapuntal force to religion, that force which is in the same family, if a different genus, speaks the same language in different patterns regarding the same issues. It is not science, it is philosophy. That is what our teachers need to understand, and this distinction is the one in which education colleges should train them.
Those of us who acknowledge the factual world of science as genuine and reject the idea of basing moral and “should” questions in the teachings of religion are left seeking an alternate source for sound guidance. Our own judgment based in experience is a strong basic source. The most likely source, the ‘respectable’ source with sound academic underpinnings that can refine, inform and burnish our judgment, is philosophy in its more formal sense.
The word “philosophy” conjures in many minds the image of dense, dismal texts written by oil lamp with made-up words in foreign languages, and far beyond mortal ken. In fact, many writers on philosophy are quite capable of writing like human beings; some of their books are noted below.
When we introduce more religious studies into our K-12 schools, as we must if people are ever to understand each other’s lives, the family of learning into which they must go also contains philosophy. It is this conversation, between the varieties of religious outlooks and their moral conclusions, and the same questions discussed by major philosophers, that needs to happen.
Philosophy is not all a dense, opaque slurry of incomprehensible language. Some excellent basic books are available that any reasonably willing reader can comprehend and enjoy. Simon Blackburn’s Think, Robert Solomon and Kathleen Higgins’ A Passion for Wisdom and Erik Wielenberg’s Value and Virtue in a Godless Universe are some recent examples.
An older text providing a readable commentary on related issues is John Jay Chapman’s Religion and Letters, still in print in his Collected Works but hard to find in the original single volume. Chapman wrote of changes in our school system:
“It is familiarity with greatness that we need—an early and first-hand acquaintance with the thinkers of the world, whether their mode of thought was music or marble or canvas or language. Their meaning is not easy to come at, but in so far as it reaches us it will transform us. A strange thing has occurred in America. I am not sure that it has ever occurred before. The teachers wish to make learning easy. They desire to prepare and peptonize and sweeten the food. Their little books are soft biscuits for weak teeth, easy reading on great subjects, but these books are filled with a pervading error: they contain a subtle perversion of education. Learning is not easy, but hard: culture is severe.”
This, published in 1910, is remarkably relevant to education at all levels today. The idea that philosophy is too hard for high school students, which I doubt, simply means that we need to expect more of students all through K-12. Many of them would thank us.
Paul Kurtz’s Affirmations and my brother John Contreras’s Gathering Joy are interesting “guidebooks” that in effect apply philosophical themes in an informal way to people’s real lives. There are also somewhat more academic books that integrate what amount to philosophical views into daily life such as Michael Lynch’s True to Life: Why Truth Matters, physicist Alan Lightman’s A Sense of The Mysterious and the theologian John O’Donohue’s Beauty: The Invisible Embrace.
Some of these are denser than others and not all are suited for public schools, but the ideas they discuss are often the same ideas discussed in the context of religions, and sometimes with similar language. It is this great weave of concepts that our students should be exposed to, the continuum of philosophical thought blended with the best that different religions have to offer.
The shoulds and shouldn’ts that are most important to the future of our society need to be discussed in colleges, schools and homes, and the way to accomplish this is to bring religions and philosophies back to life as the yin and yang of right and wrong. That is the great conversation that we are not having.
Alan L. Contreras has been administrator of the Oregon Office of Degree Authorization, a unit of the Oregon Student Assistance Commission, since 1999. His views do not necessarily represent those of the commission. He blogs at http://oregonreview.blogspot.com.
In the late 1940s, as Richard Rorty was finishing his undergraduate studies and considering a future as a professional philosopher, his parents began to worry about him. This is not surprising. Parents worry; and the parents of philosophers, perhaps especially. But just why Rorty's parents worried – well now, that part is surprising.
They were prominent left-wing journalists. His father, James, also had some minor reputation as a poet; and his mother, Winifred, had done important work on the sociology of race relations, besides trying her hand at fiction. In a letter, James Rorty speculated that going straight into graduate work might be something Richard would later regret. His son would do well to take some time “to discover yourself, possibly through a renewed attempt to release your own creative need: through writing, possibly through poetry....”
In short, becoming an academic philosopher sounded too practical.
Not to go overboard and claim that this is the defining moment of the philosopher’s life (Rosebud!). But surely it is the kind of experience that must somehow mark one’s deepest sense of priorities. How does that inner sense of self then shape a thinker’s work?
Neil Gross’s book Richard Rorty: The Making of an American Philosopher, to be published next month by University of Chicago Press, is not exactly a biography of its subject, who died last year. Rather, it is a study of how institutional forces shape an intellectual’s sense of personal identity, and vice versa. (Gross is currently in transit from Harvard University to the University of British Columbia, where as of this summer he will be an associate professor of sociology.)
Influenced by recent work in sociological theory – but with one eye constantly on the archive of personal correspondence, unpublished writings, and departmental memoranda – Gross reconstructs how Rorty’s interests and intellectual commitments developed within the disciplinary matrix of academic philosophy. He takes the story up through the transformative and sui generis work of Rorty’s middle years, Philosophy and the Mirror of Nature (1979) and Consequences of Pragmatism (1982).
This includes a look at Rorty’s complicated and unhappy relationship with his colleagues at Princeton University in the 1970s. “I find it a bit terrifying,” he wrote in a letter at the time, “that we keep turning out Ph.D.'s who quite seriously conceive of philosophy as a discipline in which one does not read anything written before 1970, except for the purposes of passing odd examinations.” Nor did it help that Rorty felt other professors were taking his ex-wife’s side in their divorce. (What’s the difference between departmental gossip and cultural history? In this case, about 30 years.)
Gross has written the most readable of monographs; and the chapter titled “The Theory of Intellectual Self-Concept” should be of interest even to scholars who aren’t especially concerned with Rorty’s long interdisciplinary shadow. I interviewed Gross recently by e-mail, just before he headed off to Canada. The transcript of our discussion follows.
Q: You identify your work on Richard Rorty not as a biography, or even as a work of intellectual history, but rather as an empirical case study in "the new sociology of ideas." What is that? What tools does a sociologist bring to the job that an intellectual historian wouldn't?
A: Sociology is a diverse field, but if I had to offer a generalization, I'd say that most sociologists these days aim to identify the often hidden social mechanisms, or cascading causal processes, that help to explain interesting, important, or counterintuitive outcomes or events in the social world. How and why do some movements for social change succeed in realizing their goals when others fail to get off the ground? Why isn't there more social mobility? What exactly is the connection between neighborhood poverty and crime? Few sociologists think anymore that universal, law-like answers to such questions can be found, but they do think it possible to isolate the role played by more or less general mechanisms.
Sociologists of ideas are interested in identifying the hidden social processes that can help explain the content of intellectuals' ideas and account for patterns in the dissemination of those ideas. My book attempts to make a theoretical contribution to this subfield. I challenge the approaches taken by two of the leading figures in the area -- Pierre Bourdieu and Randall Collins -- and propose a new approach. I think that the best sociological theory, however, has strong empirical grounding, so I decided to develop my theoretical contribution and illustrate its value by deeply immersing myself in an empirical case: the development of the main lines of Richard Rorty's philosophy.
This entailed doing the same kind of work an intellectual historian would do: digging through archives, reading through Rorty's correspondence and unpublished manuscripts (to which he granted access), and of course trying to get a grasp on the diversity of Rorty's intellectual output for the period in question. This work is reflected in the first half of my book, which reads like an intellectual biography.
But the book isn't intended as a biography, and in the second half I try to show that thinking about Rorty's life and career in terms of the hidden social mechanisms at play offers unique explanatory leverage. I love intellectual history, but many intellectual historians are allergic to any effort at generalization. One of my aims in this book is to show them that they needn't be. The old sociology of knowledge may have been terribly reductive -- ideas are an expression of class interests or reflective of dominant cultural tendencies, etc., etc. -- but the sociology of ideas today offers much more fine-grained theoretical tools.
I only cover Rorty's life up until 1982 because by then most of the main lines of his philosophy had already been developed. After that point, he becomes for the sociologist of ideas a different kind of empirical case: an intellectual superstar and bête noire of American philosophy. It would be fascinating to write about the social processes involved with this, but that was too much for one book.
Q: This might seem like a chicken-or-egg question....Did an interest in Rorty lead you toward this sociological approach, or vice versa?
A: When I was a graduate student in the 1990s I read quite a bit of Rorty's work, and found it both interesting and frustrating. But my interest in the sociology of ideas developed independently. For me, Rorty is just a case, and I remain completely agnostic in the book about the value of his philosophy.
Q: But isn't there something already a little bit pragmatism-minded about analyzing a philosopher's work in sociological terms?
A: It's certainly the case that there are affinities between pragmatism and the sociology of knowledge. But I'm not trying to advance any kind of philosophical theory of knowledge, pragmatist or otherwise. I believe, like every other sociologist of ideas, that intellectuals are social actors and that their thought is systematically shaped by their social experiences. Whether that has any philosophical implications is best left to philosophers to figure out.
I do think that the classical pragmatist philosophers Charles Peirce, William James, John Dewey, and George Herbert Mead had it right in their account of human social action, as Hans Joas has persuasively argued. Some of their insights do make their way into my analysis.
Q: A common account of Rorty's career has him starting out as an analytic philosopher who then undertakes a kind of "turn to pragmatism" in the 1970s, thereby reviving interest in a whole current of American philosophy that had become a preserve of specialists. Your telling is different. What is the biggest misconception embedded in that more familiar thumbnail version?
A: Rorty didn't start out as an analytic philosopher. His masters thesis at Chicago was on Whitehead's metaphysics, and while his dissertation at Yale on potentiality was appreciative in part of analytic contributions, one of its major aims was to show how much value there might be in dialogue between analytic and non-analytic approaches. As Bruce Kuklick has shown, dialogue between various philosophical traditions, and pluralism, were watchwords of the Yale department, and Rorty was quite taken with these metaphilosophical ideals.
Rorty only became seriously committed to the analytic enterprise after graduate school while teaching at Wellesley, his first job. This conversion was directly related to his interest in moving up in the academic hierarchy to an assistant professorship in a top ranked graduate program. At nearly all such programs at the time, analytic philosophy had come to rule the roost. This was very much the case at Princeton, which hired him away from Wellesley, and his commitment to analytic philosophy solidified even more during the years when he sought tenure there.
But the conventional account is flawed in another way as well. It turns out that Rorty read a lot of pragmatism at Yale -- Peirce in particular -- and one of the things that characterized his earliest analytic contributions was a consistent interest in pointing out convergences and overlaps between pragmatism and certain recent developments in analytic thought. So when he finally started calling himself a pragmatist later in his career, it was in many respects a return to a tradition with which he had been familiar from the start, however much he might have come to interpret it differently than specialists in American philosophy would.
Q: You argue for the value of understanding what you call "the intellectual self-concept." Would you explain that idea? What does it permit us to grasp about Rorty that we might not, otherwise?
A: As I've already suggested, my goal in this book was not simply to write a biography of Rorty, but also to make a theoretical contribution to the sociology of ideas. Surprising as it might sound to some, the leading figures in this area today -- to my mind Pierre Bourdieu and Randall Collins -- have tended to depict intellectuals as strategic actors who develop their ideas and make career plans and choices with an eye toward accumulating intellectual status and prestige. That kind of depiction naturally raises the ire of those who see intellectual pursuits as more lofty endeavors -- it was not for nothing that Bourdieu described his study, Homo Academicus, as a "book for burning."
I argue that intellectuals do in fact behave strategically much of the time, but that another important factor influencing their lines of activity is the specific "intellectual self-concept" to which they come to cleave. By this I mean the highly specific narratives of intellectual selfhood that knowledge producers may carry around with them -- narratives that characterize them as intellectuals of such and such a type.
In Rorty's case, one of the intellectual self-concepts that came to be terribly important to him was that of a "leftist American patriot." I argue that intellectual self-concepts, thus understood, are important in at least two respects: they may influence the kinds of strategic choices thinkers make (for example, shaping the nature of professional ambition), and they may also directly influence lines of intellectual activity. The growing salience to Rorty of his self-understood identity as a leftist American patriot, for example, was one of the factors that led him back toward pragmatism in the late 1970s and beyond -- or so I claim.
I develop in the book an account of how the intellectual self-concepts of thinkers form and change over the life course. Rorty took on the leftist American patriot self-concept pretty directly from his parents, and it became reactivated in the 1970s in response to political and cultural developments and also their deaths. My argument is that the sociology of ideas would do well to incorporate the notion of intellectual self-concept into its theoretical toolkit.
But I must say that my ambitions extend beyond this. Bourdieu and Collins are not just sociologists of ideas, but general sociological theorists who happened to have applied their models to intellectual life. Implicit in my respectful criticisms of them is a call to supplement and revise their general models as well, and to fold notions of identity and subjectivity back into sociological theory -- conceptualized in the specific way I lay out, which eclectically draws on Anglo-American social psychology, theories of narrative identity, the ego psychology of Erikson, and other sources.
Q: The philosopher's father, James Rorty, is reasonably well-known to cultural historians as one of the left-wing anti-Communist public intellectuals of the mid-20th century. Your account of his life is interesting, but I found a lot of it rather familiar. By contrast, the chapter on Richard Rorty's mother was a revelation. Winifred Rorty was clearly a remarkable person, and the question of her influence on her son seems very rich. What was it like to rediscover someone whose career might otherwise be completely forgotten?
A: It's well known that Rorty's mother, Winifred, was the daughter of social gospel theologian Walter Rauschenbusch. What's less well known is that she was a research assistant to the sociologist Robert Park at the University of Chicago. Winifred never entered academe -- she didn't formally enroll as a graduate student at Chicago, and in any event the opportunities for women on the academic labor market at the time were severely limited. Instead, after she left Chicago she worked, like her husband James, as a freelance writer and journalist. Her specialties were race riots and fashion. Very late in her life she wrote a biography of Park.
I ended up devoting one chapter each to Winifred and James because their influence on their son was profound, but also because theirs were fascinating stories that hadn't really been told before. Certainly there is no shortage of scholarship on the New York intellectuals -- a group of which they were loosely a part -- but both led remarkable and distinctive intellectual and writerly lives.
In the case of Rorty's mother I didn't set out to write about someone whose career might otherwise be forgotten, but I can say that it was a great pleasure to immerse myself in her papers and writings. Too often intellectual historians and sociologists of ideas alike focus their attention on the most prominent and "successful" thinkers, but feminist historians, among others, have helpfully reminded us that the stories of those whose careers have been stymied or blocked by discrimination or other factors can be every bit as rich and worth recovering.
Q: Suppose someone were persuaded to pursue research into Rorty's life and work after 1982, working from within the approach you call the "new sociology of ideas." What questions and problems concerning that period would you most want to see studied? What manner of archival resources or other documentary material would be most important for understanding the later phase of Rorty's career?
A: There are lots of questions about this period in Rorty's life that are worth pursuing, but I think one of the most important would be to figure out why Rorty struck a chord with so many people, was vehemently hated by others, and what role exactly his scholarship played in the more general revival of interest in classical American pragmatism that has taken place over the past twenty years or so. My book focuses primarily on the development of ideas, whereas this would be a question of diffusion and reception. I don't think it's possible to give an answer to the question without doing a lot of careful empirical research.
One would want to know about the state of the various intellectual fields in which Rorty's work was received; about the self-concepts and strategic concerns of those who responded to him positively or negatively; about the role of intellectual brokers who helped to champion Rorty and translate his ideas into particular disciplinary idioms; about the availability of resources for pragmatist scholarship; about the role played by scholarly organizations, such as the Society for the Advancement of American Philosophy, in doing the kind of organizational work necessary to lay the groundwork for an intellectual revival; and so on. Here again one might use Rorty as a window into a more general social phenomenon: the emergence of what Scott Frickel and I have called "scientific/intellectual movements," in this case a movement aimed at reviving an intellectual tradition that had long been seen as moribund.
Q: Rorty gave you access to his papers. The notes to your book cite e-mail exchanges you had with him. Any personal impressions that stick with you, beyond what you've had to say in the monographic format?
A: Although Dick and I never formed a friendship, he wrote to me not long after his diagnosis to tell me about it, and to suggest that if I had any unanswered factual questions about his life, I might want to consider asking them of him sooner rather than later.
Some might see this as reflecting a concern to manage his reputation, but he read drafts of the book and -- without commenting on the plausibility of my thesis -- never asked me to change a thing. I think what it shows instead is that he was an incredibly generous, kind, and decent man, even in his final hours; he didn't want to leave a young scholar in the lurch.
Whatever one thinks of Rorty's philosophy, those are qualities all intellectuals could stand to emulate, and live by even in the midst of intense disagreement.