I want to begin with a quotation from Tzvetan Todorov's Facing the Extreme: Moral Life in the Concentration Camps, because, of all the many things that might be said in opposition to the American Studies Association boycott of Israeli institutions of higher education, the one I want to focus on is the association's lack of moral courage, which, in this case, includes its failure to have learned the lessons of the association's extraordinary and ethical achievements in previous generations.
This is Todorov: "to denounce slavery constitutes a moral act only at those times when such denunciation is not simply a matter of course and thus involves some personal risk. There is nothing moral in speaking out against slavery today; all it proves is that I'm in step with my society's ideology or else don't want to find myself on the wrong side of the barricades. Something very similar can be said about condemnations of racism, although that would not have been the case in 1936 in Germany."
I would ask the question of the ASA: Who, in their audience of addressees, do they imagine is NOT opposed to the idea of occupation? And who, again in their target audience, is NOT concerned with the rights of Palestinians? Not even the politically right-wing academics in Israel are pro-occupation or against Palestinians as a matter of moral belief or commitment, as were, say, slaveholders in the American South or anti-Semites in fascist Europe. The issue for them, for all of us here, is one that the boycott does not even recognize, let alone address: how do these two entities, Israel and Palestine, find a way to exist side by side?
To be sure Israeli Jews like myself are likely to be more sensitive to the potential extermination of the Jewish population in Israel than individuals outside of Israel. I confess that bias. But the possibilities of the destruction of the State of Israel and the deaths of its citizens are no fantasies of a deluded imagination. Read the Arab press, unless, of course, the boycotters would prefer to remain ignorant of the issues. What is required in Israel is a political solution that produces a Palestinian state and secures the existence of Israel. If any one of the boycotters has a solution that does that, we in Israel would love to hear it.
The generation of Americanists who opposed the 1940s and '50s idea of American exceptionalism and who opened the field of American studies to new voices (many of which are now prominent in the field) took bold stands, not only in attacking the American hegemony of the time and transforming the American literary and historical narrative, but also in the political actions they took: opposing segregation and racism, the Vietnam War, sexism, and many other less-than-enviable aspects of the American polity in their writings; teaching at historically black colleges; producing programs of African American and minority studies; introducing feminism into the curriculum; and supporting the women who would teach those courses. Critics such as Paul Lauter, Leslie Fiedler, Stanley Elkins, Emory Elliott, and Sacvan Bercovitch spoke out. They took risks. Many of them were first-generation college-educated; many were Jews.
One of the boycott advocates, Cynthia Franklin, as quoted in Inside Higher Ed, speaks of the "culture of fear" in speaking out in relation to Israel and Palestine, specifically the fear of "reprisals," such as "not getting tenure or ... jobs." Since neither Israeli institutions of higher learning nor the State of Israel could possibly be the source of such reprisals, I can only imagine that Franklin fears other Americans. Wouldn't it make more sense to address these fellow Americans? If Franklin is right about the threat of reprisals, that would certainly take more moral courage, which apparently the boycotters lack. The president of the association, Curtis Marez, also seems to know very little about what the field of American studies has stood for in the United States. As quoted in New York Magazine, he doesn't "dispute that many nations, including many of Israel's neighbors, are generally judged to have human rights records that are worse than Israel's [but] 'one has to start somewhere'" – start somewhere to do what, exactly?
America, he may have forgotten, is no longer, actually it never was, the City on the Hill. It took decades and many academic arguments to break the American fantasy of itself as a land of equal opportunity for all and to acknowledge racism and sexism and genderism in American culture. These are still not eradicated, whatever the contemporary hegemony of Americanists believes. And there are still other American ills to deal with. To invoke Emerson's words in "Self-Reliance," voiced "to the angry bigot [who] assumes this bountiful cause of Abolition, and comes to me with his latest news from Barbadoes": "Go love thy infant, love thy woodchopper, be good-natured and modest: have that grace; and never varnish your hard, uncharitable ambition with this incredible tenderness for black folk a thousand miles off. Thy love afar is spite at home."
One defense of the boycott has been that, given this allegedly tremendous repression of the conversation in the United States by forces unnamed, and because of the necessity for exceptionalist Americanists to broadcast their hegemonic, moral message to the world, the boycott at least opens up the topic of Israel and Palestine for conversation. Five thousand academics belong to the ASA and not one of them could think of a single other way to open up this conversation? Centerpiecing a work of Arab-American fiction (say, for example, Mohja Kahf's The Girl in the Tangerine Scarf, Susan Muaddi Darraj's The Inheritance of Exile or Laila Halaby's West of the Jordan) at the yearly conference might have been a start, in keeping with the association's disciplinary definition as well, though that might have complicated matters for the activists, since, lo and behold, not only is Israel not the only oppressor in these texts but the United States is not exactly a bastion of easy integration. Convening a panel of Israeli and Palestinian Americanists (some of them my former students) might also have been an option – if, of course, what the association wanted was change rather than domination and power.
American Americanists do not need to bring to the attention of Israeli academics the difficulty of getting an education under conditions of occupation or discrimination. I don't even dare bring up ancient history like European (not to mention American) quotas against Jews at the university, since this is not, we are told, a Jewish issue at all (though, who, in truth, are those Americans that the Americanists so fear?). I am talking about life in Palestine, pre-Israel, when Jews were Palestinians. I don't know if a Mandate, as in the British rule over the region from the end of World War I until the birth of Israel, is the same as an occupation, but under the pre-Israel Mandate, travel throughout Palestine was severely restricted, as was Jewish immigration into Palestine. Nor were uprisings against Jews (there were no Israelis then) uncommon. Yet 25 years before the declaration of the State of Israel, the Hebrew University was founded, and it flourished. And when, in violation of the truce in 1949, Israelis were forcibly denied access to that university, on Mount Scopus, they studied in a building in Rehavia, until they built a new campus in Givat Ram. After the 1967 war, they returned – note my word: returned – to Mount Scopus once again.
In his memoir, Little Did I Know, Stanley Cavell asks the question that all of us – Israelis, Palestinians, Americans – must ask in the global world we inhabit. He is discussing the return of his good friend, philosopher Kurt Fischer, to the Austria that had made of him a refugee, first in Shanghai, then in the United States. Fischer knows full well that he will now dwell among those very people who had ejected him, and that he is going to have to accept the human situation they now share. This is Cavell: "It takes an extreme case of oppression, which tore him from his home in his adolescence, to be posing the question every decently situated human being, after adolescence, either asks himself in an unjust world, or coarsens himself to avoid asking: Where is one now; how is one living with, hence counting upon, injustice?"
I suggest that the pro-boycotters of the American Studies Association ask themselves how they are now living with and hence counting upon injustice in order to preserve their own hegemonic authority and power and their utterly absurd sense of themselves as exceptional. As Jonathan Chait points out in his New York piece, if, as Curtis Marez admits, Israel isn't the worst offender in the neighborhood, then wouldn't it make sense to start with those who are the worst offenders? In the absence of doing that, the boycotters cannot, in good conscience, claim that their boycott is anything more than power politics at its worst. Painfully for an Americanist like myself, it defeats everything that the ASA has stood for over the many years of its existence.
Emily Budick is the Ann and Joseph Edelman Chair of American Studies and chair of English at the Hebrew University of Jerusalem.
Originally published by Encyclopedia Britannica in 1952, Great Books of the Western World offered a selection of core texts representing the highest achievements of European and North American culture. That was the ambition. But today the set is perhaps best remembered as a peculiar episode in the history of furniture.
Many an American living room displayed its 54 volumes -- “monuments of unageing intellect,” to borrow a phrase from Yeats. (The poet himself, alas, did not make the grade as Great.) When it first appeared, the set cost $249.50, the equivalent of about $2,200 today. It was a shrewd investment in cultural capital, or at least it could be, since the dividends came only from reading the books. Mortimer Adler – the philosopher and cultural impresario who envisioned the series in the early 1940s and led it through publication and beyond, into a host of spinoff projects – saw the Great Books authors as engaged in a Great Conversation across the centuries, enriching the meaning of each work and making it “endlessly rereadable.”
Adler's vision must have sounded enticing when explained by the Britannica salesman during a house call. Also enticing: the package deal, with Bible and specially designed bookcase, all for $10 down and $10 per month. But with some texts the accent was on endless more than rereadable (the fruits of ancient biological and medical research, for example, are dry and stony) and it is a good bet that many Great Books remained all but untouched by human hands.
Well, that’s one way to tell the Great Books story: High culture meets commodity fetishism amidst Cold War anxiety over the state of American education. But Tim Lacy gives a far more generous and considerably more complex analysis of the phenomenon in The Dream of a Democratic Culture: Mortimer J. Adler and the Great Books Idea, just published by Palgrave Macmillan. The book provides many unflattering details about how Adler’s pedagogical ambitions were packaged and marketed, including practices shady enough to have drawn Federal Trade Commission censure in the 1970s. (These included bogus contests, luring people into "advertising research analysis surveys" that turned into sales presentations, and misleading "bundling" of additional Great Books-related products without making clear the additional expense.) At the same time, it makes clear that Adler had more in mind than providing a codified and “branded” set of masterpieces that the reader should passively absorb (or trudge through, as the case may be).
The Dream of a Democratic Culture started life as a dissertation at Loyola University in Chicago, where Lacy is currently an academic adviser at the university’s Stritch School of Medicine. In its final pages, he describes the life-changing impact on him, some 20 years ago, of studying Adler’s How to Read a Book (1940), a longtime bestseller. He owns and is reading his way through the Great Books set, and his study reflects close attention to Adler’s own writings and the various supplementary Great Books projects. But in analyzing the life and work of “the Great Bookie,” as one of Adler’s friends dubbed him, Lacy is never merely celebratory. In the final dozen years or so before his death in 2001, Adler became one of the more splenetic culture warriors – saying, for example, that the reason no black authors appeared in the expanded 1990 edition of the Great Books was because they “didn’t write any good books.”
Other such late pronouncements have been all too memorable -- but Lacy, without excusing them, makes a case that they ought not to be treated as Adler’s definitive statements. On the contrary, they seem to betray principles expressed earlier in his career. Lacy stops short of diagnosing the aging philosopher’s bigoted remarks as evidence of declining mental powers, though it is surely a tempting explanation. Then again, working at a medical school would probably leave a non-doctor chary about that sort of thing.
I found The Dream of a Democratic Culture absorbing and was glad to be able to interview the author about it by email; the transcript follows. Between questions, I looked around a used-books website to see what the market in secondhand copies of Great Books of the Western World is like. One listing for the original 1952 edition is especially appealing, and not just because of its price (under $250, in today’s currency). “The whole set is in very good condition,” the bookseller writes, “i.e., not read at all.”
Q: How did your personal encounter with the Great Books turn into a scholarly project?
A: I started my graduate studies in history, at Loyola University Chicago, during the 1997-98 academic year. My initial plan was to work on U.S. cultural history, zooming in on either urban environmental history or intellectual history in an urban context. I was going to earn an M.A. and then see about my possibilities for a Ph.D. program.
By the end of 1998 the only thing that had become clear to me was that I was confused. I had accumulated some debt and a little bit of coursework, but I needed a break to rethink my options. I took a leave of absence for the 1999 calendar year. During that period I decided three things: (1) I wanted to stay at Loyola for my Ph.D. work; (2) Environmental history was not going to work for me there; (3) Cultural and intellectual history would work for me, but I would need to choose my M.A. thesis carefully to make it work for doctoral studies.
Alongside this intense re-education in the discipline of history I had maintained, all through the 1997 to 1999 period, my reading of the Britannica's Great Books set. I had also accumulated more books on Adler, including his two autobiographies, during stress-relief forays into Chicago's most excellent used bookstore scene. Given Adler's Chicago connections, one almost always saw two or three of his works in the philosophy sections of these stores.
During a cold December day in 1999, while sitting in a Rogers Park coffee shop near Loyola, this all came together in a sudden caffeine-laced epiphany: Why not propose the Great Books themselves as the big project for my graduate study? I sat on the idea for a few days, both thinking about all the directions I could take for research and pounding myself on the head for not having thought of the project sooner. I knew at this point that Adler hadn't been studied much, and I had a sense that this could be a career's worth of work.
The project was going to bring together professional and personal interests in a way that I had not imagined possible when thinking about graduate school.
Q: Did you meet any resistance to working on Adler and the Great Books? They aren’t exactly held in the highest academic esteem.
A: The first resistance came late in graduate school, and after, when I began sending papers, based on my work, out to journals for potential publication. There I ran into some surprising resistance, in two ways. First, I noticed a strong reluctance toward acknowledging Adler's contributions to American intellectual life. As is evident in my work and in the writings of others (notably Joan Shelley Rubin and Lawrence Levine, and more recently Alex Beam), Adler had made a number of enemies in the academy, especially in philosophy. But I had expected some resistance there. I know Adler was brusque, and had written negatively about the increasing specialization of the academy (especially in philosophy but also in the social sciences) over the course of the 20th century.
The second line of resistance, which was somewhat more surprising, came because I took a revisionist, positive outlook on the real and potential contributions of the great books idea. Of course this resistance linked back to Adler, who late in his life -- in concert with conservative culture warriors -- declared that the canon was set and not revisable. Some of the biggest promoters of the great books idea had, ironically, made it unpalatable to a great number of intellectuals. I hadn't anticipated the fact that Adler and the Great Books were so tightly intertwined, synonymous even, in the minds of many academics.
Q: Selecting a core set of texts was only part of Adler's pedagogical program. Your account shows that it encompassed a range of forms of instruction, in various venues (on television and in newspapers as well as in classrooms and people’s homes). The teaching was, or is, pitched at people of diverse age groups, social backgrounds, and so on -- with an understanding that there are numerous ways of engaging with the material. Would you say something about that?
A: The great books idea in education -- whether higher, secondary, or even primary -- was seen by its promoters as intellectually romantic, adventurous even. It involved adults and younger students tackling primary texts instead of textbooks. As conceived by Adler and Hutchins, the great books idea focused people on lively discussion rather than boring Ben Stein-style droning lectures, or PowerPoints, or uninspiring, lowest-common-denominator student-led group work.
One can of course pick up bits of E.D. Hirsch-style "cultural literacy" (e.g., important places, names, dates, references, and trivia) through reading great books, or even acquire deeper notes of cultural capital as described in John Guillory's excellent but complex work, Cultural Capital: The Problem of Literary Canon Formation (1993). But the deepest goal of Adler's model of close reading was to lead everyday people into the high stakes world of ideas. This was no mere transaction in a "marketplace of ideas," but a full-fledged dialogue wherein one brought all her or his intellectual tools to the workbench.
Adler, Hutchins, John Erskine, Jacques Barzun, and Clifton Fadiman prided themselves on being good discussion leaders, but most promoters also believed that this kind of leadership could be passed to others. Indeed, the Great Books Foundation trained (and still trains) people to lead seminars in a way that would've pleased Erskine and Adler. Education credentials matter to institutions, but the Foundation was willing to train people off the street to lead great books reading groups.
This points to the fact that the excellent books by famous authors promoted by the great books movement, and the romance inherent in the world of ideas, mattered more than the personality or skill of any one discussion moderator. All could access an engagement with excellence, and that excellence could manifest in texts from a diverse array of authors.
Q: It seems like the tragedy of Adler is that he had this generous, capacious notion that could be called the Great Books as a sort of shorthand – but what he's remembered for is just the most tangible and commodified element of it. A victim of his own commercial success?
A: Your take on the tragedy of Adler is pretty much mine. Given his lifelong association with the great books project, his late-life failings almost guaranteed that the larger great books idea would be lost in the mess of both his temporary racism and promotion of Britannica's cultural commodity. The idea came to be seen as a mere byproduct of his promotional ability. The more admirable, important, and flexible project of close readings, critical thinking, and good citizenship devolved into a sad Culture Wars spectacle of sniping about race, class, and gender. This is why I tried, in my "Coda and Conclusion," to end on a more upbeat note by discussing the excellent work of Earl Shorris and my own positive adventures with great books and Adler's work.
Q: Was it obvious to you from the start that writing about Adler would entail a sort of prehistory of the culture wars, or did that realization come later?
A: At first I thought I would be exploring Adler's early work on the great books during my graduate studies. I saw myself intensely studying the 1920s-1950s period. Indeed, that's all I covered for my master's project, which was completed in 2002.
However, I began to see the Culture Wars more clearly as I began to think in more detail about the dissertation. It was right around this time that I wrote a short, exploratory paper on Adler's 1980s-era Paideia Project. When I mapped Paideia in relation to "A Nation at Risk" and William Bennett, I began to see that my project would have to cover Bloom, the Stanford Affair, and the 1990 release of the second edition of Britannica's set. Around the same time I also wrote a paper on Adler's late 1960s books. When I noticed the correlation between his reactions to "The Sixties" and those of conservative culture warriors, it was plain to me that I would have to explore Adler as the culture warrior.
So even though I never set out to write about the Culture Wars, I got excited when I realized how little had been done on the topic, and that the historiography was thin. My focus would limit my exploration (unlike Andrew Hartman's forthcoming study), but I was pleased to know that I might be hanging around with a vanguard of scholars doing recent history on the Culture Wars.
Q: While Adler’s response to the upheaval of the 1960s was not enthusiastic, he was also quite contemptuous of Allan Bloom’s The Closing of the American Mind. How aware of Bloom's book and its aftermath were you when you bought and started reading the Great Books?
A: Honestly, I had little knowledge of Allan Bloom or his ubiquitous The Closing of the American Mind until the mid-1990s. This requires a little background explanation. I started college in 1989 and finished in 1994. As a small-town Midwestern teenager and late-1980s high schooler, I was something of a rube when I started college. I was only vaguely aware, in 1989, that there was even a culture war ongoing out there (except in relation to HIV and AIDS).
I'm ashamed to admit, now, how unaware I was of the cultural scene generally. Moreover, I was insulated from some of it, and its intensity, during my early college years when it was at its height because I began college as an engineering student. Not only was my area of study far outside the humanities, the intensity of coursework in engineering sheltered me from all news beyond sports (my news reading outlet at the time). Even when I began to see that engineering wasn't for me, around 1992, my (then) vocational view of college caused me to move to chemistry rather than a humanities subject.
My own rudimentary philosophy of education kept me from thinking more about the Culture Wars until my last few years as a college student. It was then that I first heard about Bloom and his book. Even so, I only read passages in it, through the work of others, until I bought a copy of the book around 2000. I didn't read The Closing of the American Mind, word-for-word, until around 2003-04 while dissertating.
Q: There was no love lost between Adler and Bloom – you make that clear!
A: In my book you can see that Adler really wanted it known that he believed Leo Strauss and all his disciples, especially Bloom, were elitists. Adler believed that the knowledge (philosophy, history, theology, psychology, etc.) contained in great books was accessible to all. While scholarship and the knowledge of elites could add to what one gained from reading great books, there was a great deal in those works that was accessible to the common man and hence available to make better citizens.
So while Adler was sort of a comic-book character, you might say he was a clown for democratic citizenship -- a deceptively smart clown champion for democratizing knowledge and for raising the bar on intelligent discourse. This analogy is faulty, however, because of the intensity and seriousness with which he approached his intellectual endeavors. He loved debate with those who were sincerely engaged in his favorite topics (political philosophy, education, common sense philosophy, etc.).
I see only advantages in the fact that I was not personally or consistently engaged in the culture wars of the late 1980s and early 1990s. It has given me an objective distance, emotionally and intellectually, that I never believed possible for someone working on a topic that had occurred in her/his lifetime. Even though I started graduate school as something of a cultural and religious conservative (this is another story), I never felt invested in making my developing story into something that affirmed my beliefs about religion, culture, and America in general.
A belief that tradition and history had something to offer people today led me to the great books, but that did not confine me into a specific belief about what great books could, or should, offer people today. I was into great books for the intellectual challenge and personal development as a thinker, not for what great books could tell me about today's political, social, cultural, and intellectual scene.
Q: You defend Adler and the Great Books without being defensive, and I take it that you hope your book might help undo some of the damage to the reputation of each -- damage done by Adler himself, arguably, as much as by those who denounced him. But is that really possible, at this late a date? Won’t it take a generation or two? Or is there something about Adler's work that can be revived sooner, or even now?
A: Thank you very much for the compliment in your distinction about defending and being defensive. I did indeed seek to revise the way in which Adler is covered in the historiography. Because most other accounts about him have been, in the main, mocking and condescending, any revisionary project like mine would necessarily have to be more positive -- to inhabit his projects and work, which could result in something that might appear defensive. I think my mentor, Lewis Erenberg, and others will confirm that I did not always strike the right tone in my early work. It was a phase I had to work through to arrive at a mature, professional take on the whole of Adler's life and the Great Books Movement.
As for salvaging Adler's work as a whole, I don't know if that's possible. Some of it is dated and highly contextual. But there is much worth reviewing and studying in his corpus. My historical biography, focused on the great books in the United States, makes some headway in that area.
Some of Adler's other thinking about great books on the international scene will make it into a manuscript, on which I'm currently working, about the transnational history of the great books idea. If all goes well (fingers crossed), that piece will be paired with another by a philosopher and published as "The Great Books Controversy" in a series edited by Jonathan Zimmerman and Randall Curren.
I think a larger book on Adler's work in philosophy is needed, especially his work in his own Institute for Philosophical Research. I don't know if my current professional situation will give me the time and resources to accomplish much more on Adler. And even if my work situation evolves, I do have interests in other historical areas (anti-intellectualism, Chicago's intellectual history, a Jacques Maritain-in-America project). Finally, I also need to keep up my hobby of reading more great books!
This year is the 50th anniversary of Anti-Intellectualism in American Life by Richard Hofstadter, whose greatest achievement, someone once said, was keeping it to just the one volume.
As discussed here a short while ago, the revisionist interpretation of American populism appearing in Hofstadter’s book The Age of Reform (1955) has taken a lot of positivistic hits from subsequent historians. He over-generalized on the basis of a (very) narrowly selected pool of primary sources -- and in the final analysis, he wasn’t really writing about the 1890s at all, but rather his own times, equating the mood and worldview of McCarthyism with that of the agrarian radicals of the People’s Party. Hofstadter was more conscious of the pressure of contemporary affairs in Anti-Intellectualism, which he wrote was “conceived in response to the political and intellectual conditions of the 1950s.”
It was “by no means a formal history,” Hofstadter wrote, “but largely a personal book, whose factual details are organized and dominated by my views.” I take that to be a concession, of sorts, to historians who were finding The Age of Reform problematic. His strength was the essay more than the monograph. A passage such as the following is remarkable for – among other things -- how its urbane diction just barely subdues the remembered experience of dread:
“Of course, intellectuals were not the only targets of McCarthy’s constant detonations -- he was after bigger game -- but intellectuals were in the line of fire, and it seemed to give special rejoicing to his followers when they were hit. His sorties against intellectuals and universities were emulated throughout the country by a host of less exalted inquisitors.”
It is also remarkable for needing only the slightest change of wording to sound uncomfortably applicable to more recent events. The problem lies not with this or that demagogue but with something deeper. Hofstadter spent 400 pages sounding it out. But the American science and science-fiction writer Isaac Asimov condensed it into one sentence of a column for Newsweek in 1980: “The strain of anti-intellectualism has been a constant thread winding its way through our political and cultural life, nurtured by the false notion that democracy means that ‘my ignorance is just as good as your knowledge.’ ”
Neither as broad in historical scope as Anti-Intellectualism in American Life nor as trenchant as Asimov’s zinger, Aaron Lecklider’s Inventing the Egghead: The Battle Over Brainpower in American Culture (University of Pennsylvania Press) is explicitly framed as a response to another decade – the ‘00s. While challenging Hofstadter’s ideas, Lecklider, an assistant professor of American studies at the University of Massachusetts Boston, follows his lead in responding to a past that, while recent, somehow already seems distinctly periodized.
It was the worst of times, full stop. Figures in the Bush administration were openly contemptuous of “experts,” with all their “knowledge” about “reality.” The Bush-bashers called the president stupid, and his supporters called the Bush-bashers stupid, and there was a TV game show called “Are You Smarter Than a Fifth Grader?” which hinted that the whole country was stupid, and that’s O.K. (It did well in the ratings.) The culture war was fought with the bluntest of weapons -- not between the intelligentsia and the ignorati, but between anti-intellectuals and anti-anti-intellectuals. The latter expression, while clumsy at best, acknowledges something important: anti-anti-intellectual ≠ intellectual. Laughing at a satirical interview with a creationist on “The Daily Show” entails no substantial engagement with the life of the mind.
Much of the edifying conflict was fought out in popular culture and the mass media – terrain that, Lecklider argues, historians and social critics of Hofstadter’s era at best neglected and at worst regarded as stupefying and regressive. Hence their interpretations of American cultural history tended to be narratives of decline.
In reply, Inventing the Egghead presents a series of case studies from the first six decades of the 20th century in which conflicts over the power and the possession of intellectual capital were fought out in a wide range of popular venues: cartoons, movies, jokes, Tin Pan Alley songs, newsmagazines, posters, popular science journals, handbooks on efficient housekeeping, etc.
The chapters proceed chronologically, from the rechristening of a Coney Island park in 1909 as an institute of science (to skirt blue laws) to Einstein as cult figure, to aspects of the Harlem Renaissance and the New Deal “brain trust,” and on up to the stress-inducing utopia of Oak Ridge and the coining of “egghead” as pejorative. The effect is one of cultural history as collage. Running through all these topics and cultural forms is an uneasy and constantly shifting set of attitudes towards what Lecklider calls “brainpower.”
In the author’s usage “brainpower” means the power to acquire or to stake a claim to knowledge and expertise, whether respected and professionally credentialed or not. Conflicts over who possesses brainpower (and who doesn’t) are continuous. So are disputes over its value and limitations. And that flux comes, in part, from the frequently changing needs of an economy that requires technological advances as well as a steady supply of human brains, serviceable as a factor of production.
In short, there were grounds for ambivalence about brainpower -- for reasons more various and complex than some notion of an unchanging American cultural disposition toward anti-intellectualism. “Competing representations of intelligence” in popular culture, Lecklider writes, “could alternately smash the pretensions of an intellectual elite, position ordinary men and women as smarter than experts, appeal to intellectual culture to validate working-class positions, and dismantle intellectual hierarchies. These representations were often uncomfortable and contradictory, sometimes even self-defeating, particularly when the value of intelligence was diminished in order to level the intellectual playing field.”
But other strains of popular culture – a number of distinctively WPA-era posters promoting libraries, for example -- served to recognize and foster people’s self-respect regarding their own mental capacities.
Earlier I suggested that “Are You Smarter Than a Fifth Grader?” implied the viewer probably wasn’t. On reflection, that may have been too dour a view. Perhaps the title gives the viewer something to which to aspire. Be that as it may, in the representations of brainpower that Lecklider inspects, the expressions of ambivalence tend to have more hostility or disparagement in the mix than respect for self or others. While written, and blurbed, as an alternative to Anti-Intellectualism in American Life, the book ends up seeming like the extensive elaboration and updating of a point Hofstadter made there in passing: "As the demand for the rights of the common man took form in 19th-century America, it included a program for free elementary education, but it also carried with it a dark and sullen suspicion of high culture, as a creation of the enemy."
Some people will bristle at the expression "high culture." They always do. (I'm not overly fond of it.) But that response misses the point, which, again, was put quite plainly by Isaac Asimov a while back. "I believe that every human being with a physically normal brain can learn a great deal and can be surprisingly intellectual," he wrote. "I believe that what we badly need is social approval of learning and social rewards for learning." That is as cheerful a face as can be put on our situation.
It's not that often that an author has the pleasure of seeing the second edition of a book come out several decades after it first appeared. When that does happen, the title in question is probably a novel or a work of belles lettres, rather than a monograph. Rarer still would be the book that has become topical again in the meantime – pertinent to the stress and strain of public life, perhaps even more than it was when first issued.
So the 50th anniversary edition of Walter Nugent’s The Tolerant Populists: Kansas Populism and Nativism, now out from the University of Chicago Press, is an exceptional book on a number of counts. I’m by no means sure that the author, who is professor emeritus of history at the University of Notre Dame, would feel all that comfortable as a guest on one of those public-affairs TV programs where everybody yells, interrupting each other and stomping all over the fine points of any argument with cleated boots. He might get crushed.
But the book, which once intervened in a fierce historiographical debate, offers a challenge to how Americans understand and discuss politics now.
If taken seriously, Nugent’s book might do irreparable damage to a good deal of the prevailing nonsense, which is the sign of a career well-spent.
To put his contribution in context, we’d have to take a look back at a well-received and influential book published during the last really disastrous global economic crisis anyone still alive can remember. John D. Hicks’s The Populist Revolt stood as the definitive work on the subject almost as soon as it appeared in 1931 – the most comprehensive treatment, until that point, of the rise and fall of the People’s Party of the 1890s. Hicks treated it as a heroic if flawed challenge by Midwestern and Southern farmers to the economic powers-that-be driving them into the ground through tight credit, mortgage foreclosures, and monopolistic control of railroad shipping costs and the market prices of agricultural goods. The Populists (so dubbed, it is said, by a journalist with a little Latin) became a force to reckon with in some states, and their demand for reform to limit the power of monopolists and financiers resonated beyond the corn and cotton belts.
By 1896 the party had effectively fused with the Democrats – in roughly the sense, as one Populist put it, that Jonah fused with the whale. In the wake of FDR, the populists of the 1890s could be seen as proto-New Dealers. And so they were understood, in keeping with Hicks’s overall rendering of their history, for the next 20 or 30 years. But a revisionist perspective on the People’s Party emerged in the 1950s for which the Populists embodied something much more problematic: a mass movement animated as much by feelings of powerless rage as by rational economic concerns. Other figures worked out some of the argument before Richard Hofstadter published The Age of Reform (1955). But for the sake of convenience, and in recognition of the range and depth of his influence, we might as well call it the Hofstadter thesis. Aspects of it also appeared in his book The Paranoid Style in American Politics and Other Essays (1964).
In contrast to Hicks’s understanding of the People’s Party as an early force for progressive reforms (including the graduated income tax), the Hofstadter thesis saw populism as a reactionary response to industrial production, urbanization, and the role of the United States in the world marketplace. These forces were undermining the status of the independent, rural farmer – who responded with nativism, conspiracy theories, and a rather hysterical yearning to return to the fine old ways of the good old days. Hofstadter quoted anti-Semitic statements by populist figures, sounding like something from a speech delivered at the end of a torchlight parade in Germany circa 1932. While he stopped short of calling the People’s Party proto-fascist, Hofstadter did situate the populists in a continuum of episodes of irrational American civic life running from the Salem witch trials to McCarthyism. (More recent examples might be adduced here, of course.)
The revisionist perspective held that the populists of the 1890s were suffering from “status anxiety,” leading to political protest directed as much against cultural change as economic conditions. And if populists and McCarthyites alike were xenophobic, anti-intellectual, and belligerently nationalistic – well, in that case the Hofstadter thesis seemed to make some compelling points.
A very big “if,” that one. Hofstadter drew on then-recent psychoanalytic and sociological ideas, and wrote with such power and grace that the two Pulitzer Prizes he received (one of them for The Age of Reform) seem like a matter of course. But the doctoral dissertation that Walter Nugent wrote at the University of Chicago – published, two years after it was accepted, as The Tolerant Populists – went after the Hofstadter thesis with hammer and tongs on its one major weakness: the senior historian hadn’t logged much time in the archives.
Nugent did, and it shows. He focused on Kansas – the epicenter of the Populist political earthquake, where the movement began and quickly established itself as the state’s second most powerful party. Besides analyzing the available demographic and electoral data for the 1890s, Nugent went over scores of newspapers, large and small, including papers published by and for the state’s German-language communities.
The picture emerging from his research is anything but one of closed-minded and nostalgic people who gloried in their status as native Kansans, obsessed with bitter feelings about foreigners, paranoid about the outside world, and ready to take it out on immigrants in general or Jews in particular.
In fact the evidence shows, time and again, exactly the opposite. People’s Party organizers appealed for support from every immigrant group in the state and often won their votes. Populist speakers and editorialists were infuriated that Kansans were being dispossessed from their homes by foreign investors who bought up real estate on speculation. A basic populist demand was that the law should ensure that land would be held by people who worked it, but the hostility was directed at foreign landlords; the populists made no effort to restrict the purchase of land by the non-native born who wanted to farm.
The anti-Semitic rants that Hofstadter quoted from populist writings were indeed virulent, but Nugent reports finding only a few examples of anything like them out of the countless documents he read from Kansas. Attacks on the Rothschilds, an eminent Anglo-Jewish banking family, certainly did show up in Populist denunciations – as did similar denunciations of the Morgans and the families of various robber barons. Nugent points out that Jew-baiting and immigrant-bashing were far more common among mainstream politicians and shapers of elite opinion, and that one Jewish writer “had heard so little about Populist anti-Semitism that he sent the Populist governor [of Kansas]… a pamphlet beginning, ‘Moses, the Populist Law-Giver.’ ”
People’s Party candidates in Kansas included an African-American minister (for state auditor), a woman (for state superintendent for public instruction), and a Jew (for postmaster) -- plus too many recently naturalized citizens of German, Welsh, Irish, Swiss, Czech, and other stock, running for too many positions, to list.
Except for “a brogue here and an umlaut there,” says Nugent, they were no different from other Populists. The policies they championed – such as state ownership of railroads and telephone providers, inflationary monetary policies that would reduce the value of their mortgages, and laws prohibiting alien ownership of land – were in response to real economic hardship, not murky unconscious impulses or complaints about cultural disrespect.
“A strong assertion is easier to make than a strong case,” writes Nugent about the revisionists of the 1950s. Around the time The Tolerant Populists first appeared, Norman Pollack and C. Vann Woodward made broadly similar critiques of the Hofstadter thesis, with Michael Rogin continuing the challenge a few years later. But when Nugent took on the Pulitzer-winning historian in the early 1960s, it must have looked like David sizing up Goliath. By the end of the book, the giant has hit the ground but the counter-evidence just keeps flying.
In his preface to the new edition, Nugent makes a very quick sweep over developments in the historiography on populism in the intervening years (to do more than that would have undoubtedly required something as long as the original text) and fulminates over how imprecisely the word populism is used now. It “has become a useful word in dodging informed thinking,” he says. “In American media, it has become an all-purpose put-down.”
Worse, it is most often applied to phenomena, such as the Tea Party, which tend to be as nativist and prone to irrational flights of thought as anything subsumed under the Hofstadter thesis. The common element in the reforms proposed by the Populists 120 years ago was, Nugent writes, “to use the government as an instrument on the people’s behalf, rather than on behalf of special interests, monopolies, unregulated banks and other corporations, and (to use today’s term) the one percent.”
The movement “wanted to augment the use of governments, not diminish or circumvent them, because, as the Populist congressman Jerry Simpson put it, ‘the government is the people, and we are the people.’ ”
I don’t know if “Sockless Jerry” would have much of a chance in today’s electoral arena, but sentiments like that wouldn’t get him many well-paid speaking engagements.
Over the last year there has been a steady stream of articles about the “crisis in the humanities,” fostering a sense that students are stampeding from liberal education toward more vocationally oriented studies. In fact, the decline in humanities enrollments, as some have pointed out, is wildly overstated, and much of that decline occurred in the 1970s and 1980s. Still, the press is filled with tales about parents riding herd on their offspring lest they be attracted to literature or history rather than to courses that teach them to develop new apps for the next, smarter phone.
America has long been ambivalent about learning for its own sake, at times investing heavily in free inquiry and lifelong learning, and at other times worrying that we need more specialized training to be economically competitive. A century ago these worries were intense, and then, as now, pundits talked about a flight from the humanities toward the hard sciences.
Liberal education was a core American value in the first half of the 20th century, but a value under enormous pressure from demographic expansion and the development of more consistent public schooling. The increase in the population considering postsecondary education was dramatic. In 1910 only 9 percent of students received a high school diploma; by 1940 it was 50 percent. For the great majority of those who went on to college, that education would be primarily vocational, whether in agriculture, business, or the mechanical arts. But even vocationally oriented programs usually included a liberal curriculum -- a curriculum that would provide an educational base on which one could continue to learn -- rather than just skills for the next job. Still, there were some then (as now) who worried that the lower classes were getting “too much education.”
Within the academy, between the World Wars, the sciences assumed greater and greater importance. Discoveries in physics, chemistry, and biology did not seem to depend on the moral, political, or cultural education of the researchers – specialization seemed to trump broad humanistic learning. These discoveries had a powerful impact on industry, the military, and health care; they created jobs! Specialized scientific research at universities produced tangible results, and its methodologies – especially rigorous experimentation – could be exported to transform private industry and the public sphere. Science was seen to be racing into the future, and some questioned whether the traditional ideas of liberal learning were merely archaic vestiges of a mode of education that should be left behind.
In reaction to this ascendancy of the sciences, many literature departments reimagined themselves as realms of value and heightened subjectivity, as opposed to so-called value-free, objective work. These “new humanists” of the 1920s portrayed the study of literature as an antidote to the spiritual vacuum left by hyperspecialization. They saw the study of literature as leading to a greater appreciation of cultural significance and a personal search for meaning, and these notions quickly spilled over into other areas of humanistic study. Historians and philosophers emphasized the synthetic dimensions of their endeavors, pointing out how they were able to bring ideas and facts together to help students create meaning. And arts instruction was reimagined as part of the development of a student’s ability to explore great works that expressed the highest values of a civilization. Artists were brought to campuses to inspire students rather than to teach them the nuances of their craft. During this interwar period a liberal education surely included the sciences, but many educators insisted that it not be reduced to them. The critical development of values and meaning was a core function of education.
Thus, despite the pressures of social change and of the compelling results of specialized scientific research, there remained strong support for the notion that liberal education and learning for its own sake were essential for an educated citizenry. And rather than restrict a nonvocational education to established elites, many saw this broad teaching as a vehicle for ensuring commonality in a country of immigrants. Free inquiry would model basic democratic values, and young people would be socialized to American civil society by learning to think for themselves.
By the 1930s, an era in which ideological indoctrination and fanaticism were recognized as antithetical to American civil society, liberal education was acclaimed as key to the development of free citizens. Totalitarian regimes embraced technological development, but they could not tolerate the free discussion that led to a critical appraisal of civic values. Here is the president of Harvard, James Bryant Conant, speaking to undergraduates just two years after Hitler had come to power in Germany:
To my mind, one of the most important aspects of a college education is that it provides a vigorous stimulus to independent thinking.... The desire to know more about the different sides of a question, a craving to understand something of the opinions of other peoples and other times mark the educated man. Education should not put the mind in a straitjacket of conventional formulas but should provide it with the nourishment on which it may unceasingly expand and grow. Think for yourselves! Absorb knowledge wherever possible and listen to the opinions of those more experienced than yourself, but don’t let any one do your thinking for you.
This was the 1930s version of liberal learning, and in it you can hear echoes of Thomas Jefferson’s idea of autonomy and Ralph Waldo Emerson’s thoughts on self-reliance.
In the interwar period the emphasis on science did not, in fact, lead to a rejection of broad humanistic education. Science was a facet of this education. Today, we must not let our embrace of STEM fields undermine our well-founded faith in the capacity of the humanities to help us resist “the straitjackets of conventional formulas.” Our independence, our freedom, has depended on not letting anyone else do our thinking for us. And that has demanded learning for its own sake; it has demanded a liberal education. It still does.
Michael Roth is president of Wesleyan University. His new book, Beyond the University: Why Liberal Education Matters, will be published next year by Yale University Press. His Twitter handle is @mroth78.
The current state and future prospects of the humanities are occasioning considerable anxious comment. Many humanists are sadly resigned to a belief that the humanities have irrevocably ceded pride of place to the social sciences and sciences; and, indeed, the social sciences and sciences generate and command much intellectual energy in the 21st-century university, for understandable reasons.
The usual remedies proposed for this state of affairs have seemed to me to be limited at best and perhaps even misguided. A typical argument for the utility of the humanistic disciplines is that studying them enhances critical thought and powers of expression, and one would certainly agree.
But I wonder whether such an argument will gain much traction with college-age students and especially their parents. The data suggest a clear national trend away from the humanistic disciplines toward those that seem to offer a different kind of promise or outcome: a vocational utility or practical applicability. Under such circumstances, abstract arguments about the enhancement of critical thought – no matter how skillfully they are advanced, no matter how much one might agree with them – are less likely to prevail.
I propose here one different kind of case for the humanities, one that identifies – and celebrates – their specific vocational utility.
Now, many of my fellow humanists, I suspect, will be troubled – even offended – by such an argument: the humanities ought not to be sullied by vulgar assertions about their supposed practicality. But there would be an irony in that response to my argument.
As a historian, I – like all historians – have invariably found it informative, illuminating and useful to consider the historical context and precedents for the issue at hand. And as a student of the Italian Renaissance, I have always found it ironic that, notwithstanding likely present-day resistance to evaluating the humanities in terms of their vocational utility, they enjoyed considerable prestige during the Italian Renaissance and thereafter precisely because of their perceived practical utility.
Currently, the humanities, relative not only to the current place of the sciences but also to the place of the humanities during the Italian Renaissance, have withdrawn from a prominent role in the public arena, and this, I suspect, is one of the causes of their momentarily precarious state. During the Italian Renaissance, on the other hand, the humanistic disciplines were prestige subjects of study expressly because they enjoyed a relationship to the political and social order -- because those with political authority saw real practical value in encouraging humanistic study and employing those who had undertaken and completed it.
The adherents of the studia humanitatis held posts in the governments of the Italian cities and courts of the 15th and 16th centuries; their skills enabled them to serve their employers effectively as speech and letter writers, historians of the state, diplomats and government magistrates. They wrote elegant prose that was then deployed in diplomatic dispatches and letters and in speeches that they or their employers – the bearers of political authority – delivered effectively and persuasively, in part due to the elegance of the language, in part to the emphasis characteristic of the humanist program on skilled oratorical delivery. If I understand correctly, this is the collective opinion of a succession of distinguished historians of the Italian Renaissance: Paul Oskar Kristeller; Lauro Martines; Anthony Grafton and Lisa Jardine; James Hankins; and others.
Precisely how were such linguistic and literary skills leveraged as professional assets? In the words of one student of Renaissance humanism, rhetoric “was ... effective in the daily encounters of the tribunal, marketplace, and political forum, not to mention in diplomatic and personal correspondence. Artful communication ... became a[n] ... instrument for gaining or maintaining power.” Grafton and Jardine have written that the skills
...inculcated had an established practical value in fifteenth-century Italy. The ability to speak extempore on any subject in classical Latin, the ability to compose formal letters to order in the classical idiom... were... valuable assets. Equipped with them the student could serve as an ambassador, or secretary to a government department... In other words, although the programme was strictly literary and non-vocational, it nevertheless opened the way to a number of careers....[T]he independence of liberal arts education from establishment values is an illusion. The individual humanist is defined in terms of his relation to the power structure, and he is praised or blamed, promoted or ignored, to just the extent that he fulfils or fails to fulfil those terms. It is ... a condition of the prestige of humanism in the fifteenth century, as Lauro Martines stresses, that “the humanists ... were ready to serve [the ruling] class.”
“In this setting,” Grafton and Jardine continue, “the rhetoric of humanism represents the power of Latinity and eloquence as actual power – as meshed with civic activity in a close and influential relationship.”
As models for their linguistic practices, the Italian Renaissance humanists turned to familiar and newly recovered classical texts, and the classicizing character of university education in the post-Renaissance European and Europeanized world is directly attributable to the influence of the Renaissance humanists, who advocated strenuously and successfully for the virtues of their particular disciplines. As late as the mid-to-late 19th century, venerable American liberal arts colleges offered a course of study for the A.B. degree that continued to feature classical texts, almost to the exclusion of other subject matter. (The course of study for the A.B. at such institutions also included some more limited course work in “geometry and conic sections,” algebra, plane and spherical trigonometry, mechanics, “general chemistry and the non-metals,” and additional subjects other than classical languages and literatures.)
So persuasive had the Italian humanists been in their advocacy that, centuries later, the course of study in the classic 18th- and 19th-century American liberal arts college continued to reveal the influence of the Italian Renaissance, notwithstanding the challenges one would have faced in arguing compellingly for the continuing utility of such an educational tradition in 18th- and 19th-century America. The Harvard historian Bernard Bailyn wrote that “[t]he classics of the ancient world are everywhere in the literature of the [American] Revolution,” “everywhere illustrative… of thought. They contributed a vivid vocabulary..., a universally respected personification...of political and social beliefs. They heightened the colonists’ sensitivity to ideas and attitudes otherwise derived.” And, indeed, James Madison, A.B., LL.D. Princeton University, 1771, 1787, mastered several ancient languages before “fathering” the American Constitution.
Harvard president and chemist James Bryant Conant could write as late as the 1950s that “[in] Europe west of the Iron Curtain, the literary tradition in education still prevails. An educated man or woman is a person who has acquired a mastery of several tongues and retained a working knowledge of the art and literature of Europe.”
Now, what does one learn from this brief primer on the historical context? First, that advocacy – the kind of advocacy characteristic of the Italian Renaissance humanists, who, according to Kristeller and those who wrote after him, wrested a temporary position of preeminence in their society precisely through the force and effectiveness of their advocacy – is perfectly acceptable, and carries no risk of coarsening the quality of the enterprise: a succession of Italian Renaissance humanists beginning with Petrarch advocated spiritedly for their program, and one could scarcely argue that their intellectual achievement was cheapened as a result of that advocacy.
And second, that such advocacy is especially successful when it legitimately emphasizes vocational utility and professional applicability, when it advances an argument that one’s field of study leads incontrovertibly to coveted careers and has concrete benefits for the state and for the political and social order. Let us be spirited advocates, therefore, and celebrate the utility of the humanities as one of the justifications for studying them.
Could a similar, and similarly effective, case be made today for the humanistic disciplines? I believe so. In what ways could one argue – reasonably, justifiably, and therefore persuasively – that the humanities have direct professional viability, and that one can therefore envision and countenance studying them not only because of the intrinsic intellectual satisfactions of doing so or merely because their study enhances critical thought or powers of expression in some abstract sense, but also because there is true, clear utility to doing so?
It would not be difficult to inventory a considerable number of coveted professions and enterprises where humanistic training is not only professionally valuable, but indispensable. I offer just a few possibilities here, and the list could easily be extended, I should imagine. (For example, Lino Pertile suggested the importance of humanistic training to careers in the growing nonprofit sector.)
And my argument is that, in our advocacy for the humanities, we should not be at all reluctant to make much fuller and more explicit reference to their career utility.
What would a 21st-century inventory of concrete vocational applications of the humanities look like? For example:
A field that embraces what was once termed bioethics and related areas. When one addresses and attempts to resolve such pressing public-policy issues as stem-cell research, abortion, the availability of health care, female genital mutilation, AIDS, epidemics and pandemics, and many others, a satisfactory resolution of the problems encountered will depend not solely on scientific and medical expertise, but also on a command of the time-honored questions of the ancient discipline of philosophy: notions of justice (for example, determining how to distribute justly a limited resource like health care); morality; and ethics. These are urgent matters that require a humanist’s expertise and the philosophers’ millennia of experience in analyzing such vexing issues. The career possibilities in international health organizations, government agencies, non-government organizations, and think tanks seem promising. The indispensability of the humanities to the successful practice of this field is such that it is now often termed the medical humanities.
Architecture and urban planning. The architect and urban planner creates the built environment (an urgent and practical enterprise, in that human beings require spaces in which to live and work), and in doing so, he or she functions at the nexus of the political-economic, the social, and the aesthetic; the architect and urban planner is equal parts humanist (who deploys aesthetic sensibilities in the design work) and sensitive reader of the practical social, political, and economic contexts within which he or she necessarily operates. Enlightened city planning offices welcome colleagues with such sensibilities.
Foreign service and diplomacy. Never before has there been a more urgent need for skilled readers of cultural difference. A sensitive humanistic understanding of other cultures, acquired above all through the rigorous study of foreign languages (and literatures), will be indispensable in coming to terms with such developments as the encounter of Islam and the European and Europeanized worlds. The repercussions for so practical a consideration as American national security are obvious, and one can imagine many outlets for such skills in government service.
Various modes of public discourse (or “writing in action,” as my former Tulane colleague Molly Rothenberg has termed it). By this I mean the effective use of language in the public arena, such as journalism (both print and broadcast, and, increasingly, digital) or television and motion-picture screenwriting. But it could also be extended to embrace advertising (increasingly web-based, which entails yet another humanistic skill, the aesthetic sense required in the visual and aural material that now invariably complements text); web-page design (which, once more, will entail a fusion of the visual, aural, and textual); and related enterprises. The humanist’s command of the aesthetic complexities of text and language, visual image, and aural material, and their simultaneous deployment will be indispensable. Indeed, the digital technologies of the 20th and 21st centuries are so powerful, and the full reach of the transition currently under way so difficult to apprehend, that one can only speculate as to what shape human communication will take when the shift to a new paradigm is more or less complete. (Indeed, humanistic sensibilities may prove to have a salutary, tempering influence on the effects of digital technologies.) The skillful fusion of still and moving images, aural material, and text will determine the effectiveness of MOOCs, which will depend as much on humanistic skills as scientific and technical.
Rhetoric and oratory. This element is related to the previous one. The electronic information technologies that emerged beginning with the invention of the telegraph in the 19th century have a characteristic that makes them unlike manuscript copying and print: they “dematerialize” information and make it possible for it to be disseminated with lightning speed across vast distances. And the invention of radio, film, and television added the elements of the aural and the moving visual to those that had characterized the medium of print (and manuscript copying before it): written text and still image. These newer technologies replicate “live” human experience much more closely than print, which freezes discourse and alters its character. As a technology, print (and the media associated with it) has been giving way to electronic technologies, with their capacity for the full integration of written and spoken language, still and moving image, and sound (music and other aural material), and for the dematerialization and dissemination of such information. The implication for colleges and universities is as follows: we have invested admirably in initiatives designed to train our students to write well and read texts critically and perceptively. But given the power of the new technologies, there is a case to be made for a return to greater instruction in rhetoric and oratory, to an equal command of the spoken word, which can be captured on audio- or videotape or broadcast over radio, television, and the computer (via Skype), in ways that print never demanded. The development of electronic communication technologies that permit us to communicate extemporaneously over vast distances in a conversational tone and manner suggests that we might well retool our educational system to feature once again the time-honored humanistic practice of effective oratory and refine our students’ facility in the spoken word.
One need only consider the example of Barack Obama’s skilled oratory (or Franklin Roosevelt’s, or Ronald Reagan’s, or John Kennedy’s) to appreciate the importance to the political order of a venerable humanistic skill like oratory; notably, these are all political figures who postdate the development of electronic technologies. Columnist George F. Will has observed that the American presidency is “an office whose constitutional powers are weak but whose rhetorical potential is great.”
By no means do the new electronic information technologies obviate the need for continuing skill in other, more traditional and familiar humanistic modes of communication – the kind of careful, comprehensive, subtle argument that written text affords – and the close, sensitive reading and command of existing texts that inform the authorship of new texts. Henry Riecken suggested that “[t]he text of the Federalist Papers was put into machine-readable form in order to carry out an analysis that resolved questions of disputed authorship of some of the papers; but the new format did not replace the bound volumes for readers who want to absorb the thoughts and reflect on the aspirations of this stately document.”
Art conservation, and its relationship to the political economy. Nations with an exceptional legacy of monuments in the visual arts (Italy being a well-known example) face a particular challenge with respect to maintaining the condition of that legacy. And in Italy’s case, the relationship of the condition of that legacy to the economy is obvious: given the central place of tourism in the Italian economy, it is vital that the nation’s artistic patrimony be satisfactorily conserved. Sensitive art conservation sits at the intersection of the humanistic (the aesthetic), the scientific and technological (an understanding of the nature of surfactants and the effects of environmental conditions), and the political-economic (the need to balance the claims of conserving the artistic patrimony acceptably against other claims on public resources).
What is interesting about this list is how closely its elements are aligned with the Italian Renaissance humanist’s earlier construction of the studia humanitatis. The kind of ethical reasoning demanded in successful practice of the medical humanities is, in its way, a modern iteration of the Renaissance humanist’s moral philosophy; 21st-century applications of writing, rhetoric, and oratory are, in their way, contemporary versions of the Renaissance humanist’s grammar, poetry, and rhetoric; the understanding of foreign cultures and languages required for effective foreign service in today’s bewilderingly complex and interdependent world is, in its way, the modern expression of the Renaissance humanist’s practice of history. The foundational elements of the core humanistic program have perhaps not changed so very much.
What is different is the explicitness with which the Renaissance humanists advocated – persuasively, compellingly, successfully – for the professional utility of their disciplines, which permitted them to secure a place of considerable prestige and authority in their world. There is warrant for their 21st-century successors’ advancing a similar argument: that one undertake the study and practice of the humanistic disciplines not only within the confines of the academic world (as intrinsically worthwhile, in a fundamental intellectual sense) but outside them as well (as critical to the successful execution of one’s expressly professional and vocational responsibilities).
Specifically, I propose that we self-consciously reframe the presentation and delivery of the humanistic offerings of the modern-day college and university to make much more explicit reference to their potential applicability: that we foreground this kind of argument for their virtues. Some of what is now being done within the university is being done absentmindedly, so to speak, without a sufficiently self-conscious articulation of why we do what we do. Were we to reframe our offerings in this way – reposition the humanities and articulate their virtues differently – we might find that the national trend away from them could be halted and perhaps even reversed.
My sense is that many students rather naturally hunger for the humanistic disciplines and are driven to make other curricular choices in part because of concerns about career viability. Were such concerns addressed – legitimately, effectively, persuasively – we might find some such students willing to study what their hearts prompt them to study. In our curriculums, were we to foreground explicit, purposeful reference to the ways in which the humanities are indispensable to the successful practice of some of the esteemed and rewarding professions identified above (rewarding in several senses of that word), we might succeed in alleviating student (and parental) anxiety about the practicality of studying such supposedly “impractical” subjects.
Only by such means, I believe, will the humanities truly be able to re-secure the place they once enjoyed, and still deserve, in the collective cultural imagination and in the great public arena. And by no means should we be hesitant about advancing such an argument, since we have the example of the Italian Renaissance before us: it would be difficult to argue that energetic advocacy on grounds of vocational viability compromised the artistic and intellectual integrity of the achievements of Petrarch and his venerated successors.
Anthony M. Cummings is professor of music and coordinator of Italian studies (and former provost and dean of the faculty) at Lafayette College.
Originally published in France in 1989 and now arriving in English translation from Yale University Press, Arlette Farge’s The Allure of the Archives is a little gem of a book. A diamond, perhaps, given both its clarity and the finesse with which it’s been cut and set. It is an unmistakable classic: one of the great memoirs of the silent, day-to-day drama of research.
Yet it is not the story of a career. A reader unfamiliar with her name will soon enough recognize that Farge specializes in 18th-century social history. (She is director of research in modern history at the Centre National de la Recherche Scientifique, in Paris.) One of her endnotes refers to the volume she published in collaboration with Michel Foucault in 1982, two years before the philosopher’s death. It was an edition of selected lettres de cachet from the files of the Bastille. (In A Tale of Two Cities, Dickens portrays the notorious legal instrument as one of the aristocracy’s corrupting privileges, but Farge and Foucault showed that it was disturbingly easy for an ordinary person to take one out against a family member.)
Many an over-active ego has turned to autobiography on far slimmer grounds than that. But in spite of its personal and even intimate tone, The Allure of the Archives is never primarily about the author herself. She instead writes about a subjectivity that exists only upon entering the reading room – or perhaps while trying to find it during the first visit.
As much as an institution, the archive is, for Farge, the site of a distinct array of experiences. It is defined by rules, habits, worries (especially with fragile documents), and a kind of ambient awkwardness that she evokes perfectly. “The silence of the archives,” she writes, “is created out of looks that linger but do not see, or gazes that stare blindly. No one escapes these wandering eyes, not even the most obstinate reader, his face clouded by work.”
In the midst of that tense, watchful silence, the patrons jockey for prime desk space, or drive each other to distraction with coughs and nervous tics. Locating the documents you want requires initiation into the cabala of archival reference numbers. Even if you have one, it “often will only direct the reader to another serial number that will itself only give access to a new series where other serial numbers await.” For guidance, one must turn to the superintendent of the reading room: “She reigns, gives advice that bears a strong resemblance to orders, speaks very loudly, and does not understand what she does not wish to understand, all the while constantly ruffling the pages of her morning newspaper.”
Her descriptions are very true to life, even an ocean away from the French archives she has in mind. The part about reference numbers may sound like comic exaggeration. It isn’t, or at least it corresponds to my last visit to the Library of Congress manuscript collection some years ago.
But all of it is just the setting for the necromancy of research – calling up men and women who are long dead and utterly forgotten, speaking for the first time in two or three hundred years. The dossiers Farge explored in her research consisted of police and judicial records as well as reports to His Majesty on what the rabble of Paris were doing and saying from week to week. A whole layer of informers – called in slang mouches, “flies” – kept track of rumors and complaints making the rounds. (They were, in effect, flies on the wall.) Also in the files are slanderous and subversive posters with traces of grit clinging to the back after the authorities ripped them down.
Police interrogations (transcribed for use during an investigation, then filed away and eventually warehoused) document the everyday happenings, criminal and otherwise, of urban life. Most of those speaking left no other account of their lives. Nor could they; even those who could read a little had not always learned to write. Having become accustomed to reading the faded or erratically punctuated documents, the researcher may undergo a sort of rapture: “the sheer pleasure of being astonished by the beauty of the texts and the overabundance of life brimming in so many ordinary lives.”
At that point, a danger becomes evident: “You can begin to forget that writing history is actually a kind of intellectual exercise, one for which fascinated recollection is just not enough…. Whatever the project is, work in the archives requires a triage, separation of documents. The question is what to take and what to leave.” And the researcher answers it by coming to recognize the patterns that emerge from one document to the next: the unstated assumptions made by those on either side of the interrogation, the changes in tone or preoccupation recorded over time.
The historian learns to frame questions that the archive knows how to answer – while remaining open to the secrets and surprises to be found in the boxes of paper that haven’t been delivered to her desk yet.
“In the archives,” Farge writes, “whispers ripple across the surface of silence, eyes glaze over, and history is decided. Knowledge and uncertainty are ordered through an exacting ritual in which the color of the note cards, the strictness of the archivists, and the smell of the manuscripts are trail markers in this world where you are always a beginner.” Farge's memoir is adamantine: sharp, brilliant, perfect, and created to last.