
Review of Walter Nugent, 'The Tolerant Populists: Kansas Populism and Nativism'

It's not that often that an author has the pleasure of seeing the second edition of a book come out several decades after it first appeared. When that does happen, the title in question is probably a novel or a work of belles lettres, rather than a monograph. Rarer still would be the book that has become topical again in the meantime – pertinent to the stress and strain of public life, perhaps even more than it was when first issued.

So the 50th anniversary edition of Walter Nugent’s The Tolerant Populists: Kansas Populism and Nativism, now out from the University of Chicago Press, is an exceptional book on a number of counts. I’m by no means sure that the author, who is professor emeritus of history at the University of Notre Dame, would feel all that comfortable as a guest on one of those public-affairs TV programs where everybody yells, interrupting each other and stomping all over the fine points of any argument with cleated boots. He might get crushed.

But the book, which once intervened in a fierce historiographical debate, offers a challenge to how Americans understand and discuss politics now.

If taken seriously, Nugent’s book might do irreparable damage to a good deal of the prevailing nonsense, which is the sign of a career well-spent.

To put his contribution in context, we’d have to take a look back at a well-received and influential book published during the last really disastrous global economic crisis anyone alive still can remember. John D. Hicks’s The Populist Revolt stood as the definitive work on the subject almost as soon as it appeared in 1931 – the most comprehensive treatment, until that point, of the rise and fall of the People’s Party of the 1890s. Hicks treated it as a heroic if flawed challenge by Midwestern and Southern farmers to the economic powers-that-be driving them into the ground through tight credit, mortgage foreclosures, and monopolistic control of railroad shipping costs and the market prices of agricultural goods. The Populists (so dubbed, it is said, by a journalist with a little Latin) became a force to reckon with in some states, and their demand for reform to limit the power of monopolists and financiers resonated beyond the corn and cotton belts.

By 1896 the party had effectively fused with the Democrats – in roughly the sense, as one Populist put it, that Jonah fused with the whale. In the wake of FDR, the populists of the 1890s could be seen as proto-New Dealers. And so they were understood, in keeping with Hicks’s overall rendering of their history, for the next 20 or 30 years. But a revisionist perspective on the People’s Party emerged in the 1950s for which the Populists embodied something much more problematic: a mass movement animated as much by feelings of powerless rage as by rational economic concerns. Other figures worked out some of the argument before Richard Hofstadter published The Age of Reform (1955). But for the sake of convenience, and in recognition of the range and depth of his influence, we might as well call it the Hofstadter thesis. Aspects of it also appeared in his book The Paranoid Style in American Politics and Other Essays (1964).

In contrast to Hicks’s understanding of the People’s Party as an early force for progressive reforms (including the graduated income tax), the Hofstadter thesis saw populism as a reactionary response to industrial production, urbanization, and the role of the United States in the world market place. These forces were undermining the status of the independent, rural farmer – who responded with nativism, conspiracy theories, and a rather hysterical yearning to return to the fine old ways of the good old days. Hofstadter quoted anti-Semitic statements by populist figures, sounding like something from a speech delivered at the end of a torchlight parade in Germany circa 1932. While he stopped short of calling the People’s Party proto-fascist, Hofstadter did situate the populists in a continuum of episodes of irrational American civic life running from the Salem witch trials to McCarthyism. (More recent examples might be adduced here, of course.)

The revisionist perspective held that the populists of the 1890s were suffering from “status anxiety,” leading to political protest directed as much against cultural change as economic conditions. And if populists and McCarthyites alike were xenophobic, anti-intellectual, and belligerently nationalistic – well, in that case the Hofstadter thesis seemed to make some compelling points.

A very big “if,” that one. Hofstadter drew on then-recent psychoanalytic and sociological ideas, and wrote with such power and grace that the two Pulitzer Prizes he received (one of them for The Age of Reform) seem like a matter of course. But the doctoral dissertation that Walter Nugent wrote at the University of Chicago – published, two years after it was accepted, as The Tolerant Populists – went after the Hofstadter thesis with hammer and tongs on its one major weakness: the senior historian hadn’t logged much time in the archives.

Nugent did, and it shows. He focused on Kansas – the epicenter of the Populist political earthquake, where the movement began and quickly became the state’s second most powerful party. Besides analyzing the available demographic and electoral data for the 1890s, Nugent went over scores of newspapers, large and small, including papers published by and for the state’s German-language communities.

The picture emerging from his research is anything but one of closed-minded and nostalgic people who gloried in their status as native Kansans, obsessed with bitter feelings about foreigners, paranoid about the outside world, and ready to take it out on immigrants in general or Jews in particular.

In fact the evidence shows, time and again, exactly the opposite. People’s Party organizers appealed for support from every immigrant group in the state and often won their votes. Populist speakers and editorialists were infuriated that Kansans were being dispossessed of their homes by foreign investors who bought up real estate on speculation. A basic populist demand was that the law should ensure that land would be held by people who worked it, but the hostility was directed at foreign landlords; the populists made no effort to restrict the purchase of land by the non-native born who wanted to farm.

The anti-Semitic rants that Hofstadter quoted from populist writings were indeed virulent, but Nugent reports finding only a few examples of anything like them out of the countless documents he read from Kansas. Attacks on the Rothschilds, an eminent Anglo-Jewish banking family, certainly did show up in Populist denunciations – as did similar denunciations of the Morgans and the families of various robber barons. Nugent points out that Jew-baiting and immigrant-bashing were far more common among mainstream politicians and shapers of elite opinion, and that one Jewish writer “had heard so little about Populist anti-Semitism that he sent the Populist governor [of Kansas]… a pamphlet beginning, ‘Moses, the Populist Law-Giver.’ ”

People’s Party candidates in Kansas included an African-American minister (for state auditor), a woman (for state superintendent of public instruction), and a Jew (for postmaster) -- plus too many recently naturalized citizens of German, Welsh, Irish, Swiss, Czech, and other stock, running for too many positions, to list.

Except for “a brogue here and an umlaut there,” says Nugent, they were no different from other Populists. The policies they championed – such as state ownership of railroads and telephone providers, inflationary monetary policies that would reduce the value of their mortgages, and laws prohibiting alien ownership of land – were in response to real economic hardship, not murky unconscious impulses or complaints about cultural disrespect.

“A strong assertion is easier to make than a strong case,” writes Nugent about the revisionists of the 1950s. Around the time The Tolerant Populists first appeared, Norman Pollack and C. Vann Woodward made broadly similar critiques of the Hofstadter thesis, with Michael Rogin continuing the challenge a few years later. But when Nugent took on the Pulitzer-winning historian in the early 1960s, it must have looked like David sizing up Goliath. By the end of the book, the giant has hit the ground but the counter-evidence just keeps flying.   

In his preface to the new edition, Nugent makes a very quick sweep over developments in the historiography on populism in the intervening years (to do more than that would have undoubtedly required something as long as the original text) and fulminates over how imprecisely the word populism is used now. It “has become a useful word in dodging informed thinking,” he says. “In American media, it has become an all-purpose put-down.”

Worse, it is most often applied to phenomena, such as the Tea Party, which tend to be as nativist and prone to flight-of-thought as anything subsumed under the Hofstadter thesis. The common element in the reforms proposed by the Populists 120 years ago was, Nugent writes, “to use the government as an instrument on the people’s behalf, rather than on behalf of special interests, monopolies, unregulated banks and other corporations, and (to use today’s term) the one percent.”

The movement “wanted to augment the use of governments, not diminish or circumvent them, because, as the Populist congressman Jerry Simpson put it, ‘the government is the people, and we are the people.’ ”

I don’t know if “Sockless Jerry” would have much of a chance in today’s electoral arena, but sentiments like that wouldn’t get him many well-paid speaking engagements.


Essay on what's missing in discussion of the humanities

Over the last year there has been a steady stream of articles about the “crisis in the humanities,” fostering a sense that students are stampeding from liberal education toward more vocationally oriented studies. In fact, the decline in humanities enrollments, as some have pointed out, is wildly overstated, and much of that decline occurred in the 1970s and 1980s. Still, the press is filled with tales about parents riding herd on their offspring lest they be attracted to literature or history rather than to courses that teach them to develop new apps for the next, smarter phone.

America has long been ambivalent about learning for its own sake, at times investing heavily in free inquiry and lifelong learning, and at other times worrying that we need more specialized training to be economically competitive. A century ago these worries were intense, and then, as now, pundits talked about a flight from the humanities toward the hard sciences.

Liberal education was a core American value in the first half of the 20th century, but a value under enormous pressure from demographic expansion and the development of more consistent public schooling. The increase in the population considering postsecondary education was dramatic. In 1910 only 9 percent of students received a high school diploma; by 1940 it was 50 percent. For the great majority of those who went on to college, that education would be primarily vocational, whether in agriculture, business, or the mechanical arts. But even vocationally oriented programs usually included a liberal curriculum -- a curriculum that would provide an educational base on which one could continue to learn -- rather than just skills for the next job. Still, there were some then (as now) who worried that the lower classes were getting “too much education.”

Within the academy, between the World Wars, the sciences assumed greater and greater importance. Discoveries in physics, chemistry, and biology did not seem to depend on the moral, political, or cultural education of the researchers – specialization seemed to trump broad humanistic learning. These discoveries had a powerful impact on industry, the military, and health care; they created jobs! Specialized scientific research at universities produced tangible results, and its methodologies – especially rigorous experimentation – could be exported to transform private industry and the public sphere. Science was seen to be racing into the future, and some questioned whether the traditional ideas of liberal learning were merely archaic vestiges of a mode of education that should be left behind.

In reaction to this ascendancy of the sciences, many literature departments reimagined themselves as realms of value and heightened subjectivity, as opposed to so-called value-free, objective work. These “new humanists” of the 1920s portrayed the study of literature as an antidote to the spiritual vacuum left by hyperspecialization. They saw the study of literature as leading to a greater appreciation of cultural significance and a personal search for meaning, and these notions quickly spilled over into other areas of humanistic study. Historians and philosophers emphasized the synthetic dimensions of their endeavors, pointing out how they were able to bring ideas and facts together to help students create meaning. And arts instruction was reimagined as part of the development of a student’s ability to explore great works that expressed the highest values of a civilization. Artists were brought to campuses to inspire students rather than to teach them the nuances of their craft. During this interwar period a liberal education surely included the sciences, but many educators insisted that it not be reduced to them. The critical development of values and meaning was a core function of education.

Thus, despite the pressures of social change and of the compelling results of specialized scientific research, there remained strong support for the notion that liberal education and learning for its own sake were essential for an educated citizenry. And rather than restrict a nonvocational education to established elites, many saw this broad teaching as a vehicle for ensuring commonality in a country of immigrants. Free inquiry would model basic democratic values, and young people would be socialized to American civil society by learning to think for themselves.

By the 1930s, an era in which ideological indoctrination and fanaticism were recognized as antithetical to American civil society, liberal education was acclaimed as key to the development of free citizens. Totalitarian regimes embraced technological development, but they could not tolerate the free discussion that led to a critical appraisal of civic values. Here is the president of Harvard, James Bryant Conant, speaking to undergraduates just two years after Hitler had come to power in Germany:

To my mind, one of the most important aspects of a college education is that it provides a vigorous stimulus to independent thinking.... The desire to know more about the different sides of a question, a craving to understand something of the opinions of other peoples and other times mark the educated man. Education should not put the mind in a straitjacket of conventional formulas but should provide it with the nourishment on which it may unceasingly expand and grow. Think for yourselves! Absorb knowledge wherever possible and listen to the opinions of those more experienced than yourself, but don’t let any one do your thinking for you.

This was the 1930s version of liberal learning, and in it you can hear echoes of Thomas Jefferson’s idea of autonomy and Ralph Waldo Emerson’s thoughts on self-reliance.

In the interwar period the emphasis on science did not, in fact, lead to a rejection of broad humanistic education. Science was a facet of this education. Today, we must not let our embrace of STEM fields undermine our well-founded faith in the capacity of the humanities to help us resist “the straitjackets of conventional formulas.” Our independence, our freedom, has depended on not letting anyone else do our thinking for us. And that has demanded learning for its own sake; it has demanded a liberal education. It still does.

Michael Roth is president of Wesleyan University. His new book, Beyond the University: Why Liberal Education Matters, will be published next year by Yale University Press. His Twitter handle is @mroth78


History shows: the humanities have vocational utility (essay)

The current state and future prospects of the humanities are occasioning considerable anxious comment.  Many humanists are sadly resigned to a belief that the humanities have irrevocably ceded pride of place to the social sciences and sciences; and, indeed, the social sciences and sciences generate and command much intellectual energy in the 21st-century university, for understandable reasons.

The usual remedies proposed for this state of affairs have seemed to me to be limited at best and perhaps even misguided. A typical argument for the utility of the humanistic disciplines is that studying them enhances critical thought and powers of expression, and one would certainly agree.

But I wonder whether such an argument will gain much traction with college-age students and especially their parents. The data suggest a clear national trend away from the humanistic disciplines toward those that seem to offer a different kind of promise or outcome: a vocational utility or practical applicability. Under such circumstances, abstract arguments about the enhancement of critical thought – no matter how skillfully they are advanced, no matter how much one might agree with them – are less likely to prevail.

I propose here one different kind of case for the humanities, one that identifies – and celebrates – their specific vocational utility.

Now, many of my fellow humanists, I suspect, will be troubled – even offended – by such an argument: the humanities ought not to be sullied by vulgar assertions about their supposed practicality. But there would be an irony in that response to my argument.

As a historian, I – like all historians – have invariably found it informative, illuminating and useful to consider the historical context and precedents for the issue at hand. And as a student of the Italian Renaissance, I have always found it ironic that, notwithstanding likely present-day resistance to evaluating the humanities in terms of their vocational utility, the humanities enjoyed the considerable prestige they did during the Italian Renaissance and thereafter precisely because of their perceived practical utility.

Currently, the humanities, relative not only to the current place of the sciences but also to the place of the humanities during the Italian Renaissance, have withdrawn from a prominent role in the public arena, and this, I suspect, is one of the causes of their momentarily precarious state. During the Italian Renaissance, on the other hand, the humanistic disciplines were prestige subjects of study expressly because they enjoyed a relationship to the political and social order -- because those with political authority saw real practical value in encouraging humanistic study and employing those who had undertaken and completed it.

The adherents of the studia humanitatis held posts in the governments of the Italian cities and courts of the 15th and 16th centuries; their skills enabled them to serve their employers effectively as speech and letter writers, historians of the state, diplomats and government magistrates. They wrote elegant prose that was then deployed in diplomatic dispatches and letters and in speeches that they or their employers – the bearers of political authority – delivered effectively and persuasively, in part due to the elegance of the language, in part to the emphasis characteristic of the humanist program on skilled oratorical delivery.  If I understand correctly, this is the collective opinion of a succession of distinguished historians of the Italian Renaissance: Paul Oskar Kristeller; Lauro Martines; Anthony Grafton and Lisa Jardine; James Hankins; and others.

Precisely how were such linguistic and literary skills leveraged as professional assets? In the words of one student of Renaissance humanism, rhetoric “was ... effective in the daily encounters of the tribunal, marketplace, and political forum, not to mention in diplomatic and personal correspondence. Artful communication ... became a[n] ... instrument for gaining or maintaining power.” Grafton and Jardine have written that the skills

...inculcated had an established practical value in fifteenth-century Italy. The ability to speak extempore on any subject in classical Latin, the ability to compose formal letters to order in the classical idiom... were... valuable assets. Equipped with them the student could serve as an ambassador, or secretary to a government department... In other words, although the programme was strictly literary and non-vocational, it nevertheless opened the way to a number of careers....[T]he independence of liberal arts education from establishment values is an illusion. The individual humanist is defined in terms of his relation to the power structure, and he is praised or blamed, promoted or ignored, to just the extent that he fulfils or fails to fulfil those terms. It is ... a condition of the prestige of humanism in the fifteenth century, as Lauro Martines stresses, that “the humanists ... were ready to serve [the ruling] class.”

“In this setting,” Grafton and Jardine continue, “the rhetoric of humanism represents the power of Latinity and eloquence as actual power – as meshed with civic activity in a close and influential relationship.”

As models for their linguistic practices, the Italian Renaissance humanists turned to familiar and newly recovered classical texts, and the classicizing character of university education in the post-Renaissance European and Europeanized world is directly attributable to the influence of the Renaissance humanists, who advocated strenuously and successfully for the virtues of their particular disciplines. As late as the mid-to-late 19th century, venerable American liberal arts colleges offered a course of study for the A.B. degree that continued to feature classical texts, almost to the exclusion of other subject matter. (The course of study for the A.B. at such institutions also included some more limited course work in “geometry and conic sections,” algebra, plane and spherical trigonometry, mechanics, “general chemistry and the non-metals,” and additional subjects other than classical languages and literatures.)

So persuasive had the Italian humanists been in their advocacy that, centuries later, the course of study in the classic 18th- and 19th-century American liberal arts college continued to reveal the influence of the Italian Renaissance, notwithstanding the challenges one would have faced in arguing compellingly for the continuing utility of such an educational tradition in 18th- and 19th-century America. The Harvard historian Bernard Bailyn wrote that “[t]he classics of the ancient world are everywhere in the literature of the [American] Revolution,” “everywhere illustrative… of thought. They contributed a vivid vocabulary..., a universally respected personification...of political and social beliefs. They heightened the colonists’ sensitivity to ideas and attitudes otherwise derived.” And, indeed, James Madison, A.B., LL.D. Princeton University, 1771, 1787, mastered several ancient languages before “fathering” the American Constitution.

Harvard president and chemist James Bryant Conant could write as late as the 1950s that “[in] Europe west of the Iron Curtain, the literary tradition in education still prevails. An educated man or woman is a person who has acquired a mastery of several tongues and retained a working knowledge of the art and literature of Europe.”

Now, what does one learn from this brief primer on the historical context? First, that advocacy – the kind of advocacy characteristic of the Italian Renaissance humanists, who, according to Kristeller and those who wrote after him, wrested a temporary position of preeminence in their society precisely through the force and effectiveness of their advocacy – is perfectly acceptable, and carries no risk of coarsening the quality of the enterprise: a succession of Italian Renaissance humanists beginning with Petrarch advocated spiritedly for their program, and one could scarcely argue that their intellectual achievement was cheapened as a result of that advocacy.

And second, that such advocacy is especially successful when it legitimately emphasizes vocational utility and professional applicability, when it advances an argument that one’s field of study leads incontrovertibly to coveted careers and has concrete benefits for the state and for the political and social order.  Let us be spirited advocates, therefore, and celebrate the utility of the humanities as one of the justifications for studying them.

Could a similar, and similarly effective, case be made today for the humanistic disciplines?  I believe so. In what ways could one argue – reasonably, justifiably, and therefore persuasively – that the humanities have direct professional viability, and that one can therefore envision and countenance studying them not only because of the intrinsic intellectual satisfactions of doing so or merely because their study enhances critical thought or powers of expression in some abstract sense, but also because there is true, clear utility to doing so?

It would not be difficult to inventory a considerable number of coveted professions and enterprises where humanistic training is not only professionally valuable, but indispensable. I offer just a few possibilities here, and the list could easily be extended, I should imagine. (For example, Lino Pertile suggested the importance of humanistic training to careers in the growing nonprofit sector.) 

And my argument is that, in our advocacy for the humanities, we should not be at all reluctant to make much fuller and more explicit reference to their career utility.

What would a 21st-century inventory of concrete vocational applications of the humanities look like? For example:

A field that embraces what was once termed bioethics and related areas. When one addresses and attempts to resolve such pressing public-policy issues as stem-cell research, abortion, the availability of health care, female genital mutilation, AIDS, epidemics and pandemics, and many others, a satisfactory resolution of the problems encountered will depend not solely on scientific and medical expertise, but also on a command of the time-honored questions of the ancient discipline of philosophy: notions of justice (for example, determining how to justly distribute a limited resource like health care); morality; and ethics. These are urgent matters that require a humanist’s expertise and the philosophers’ millennia of experience in analyzing such vexing issues. The career possibilities in international health organizations, government agencies, non-government organizations, and think tanks seem promising. The indispensability of the humanities to the successful practice of this field is such that it is now often termed the medical humanities.

Architecture and urban planning. The architect and urban planner creates the built environment (an urgent and practical enterprise, in that human beings require spaces in which to live and work), and in doing so, he or she functions at the nexus of the political-economic, the social, and the aesthetic; the architect and urban planner is equal parts humanist (who deploys aesthetic sensibilities in the design work) and sensitive reader of the practical social, political, and economic contexts within which he or she necessarily operates. Enlightened city planning offices welcome colleagues with such sensibilities.

Foreign service and diplomacy. Never before has there been a more urgent need for skilled readers of cultural difference. A sensitive humanistic understanding of other cultures, acquired above all through the rigorous study of foreign languages (and literatures), will be indispensable in coming to terms with such developments as the encounter of Islam and the European and Europeanized worlds. The repercussions for so practical a consideration as American national security are obvious, and one can imagine many outlets for such skills in government service.

Various modes of public discourse (or “writing in action,” as my former Tulane colleague Molly Rothenberg has termed it). By this I mean the effective use of language in the public arena, such as journalism (both print and broadcast, and, increasingly, digital) or television and motion-picture screenwriting. But it could also be extended to embrace advertising (increasingly web-based, which entails yet another humanistic skill, the aesthetic sense required in the visual and aural material that now invariably complements text); web-page design (which, once more, will entail a fusion of the visual, aural, and textual); and related enterprises. The humanist’s command of the aesthetic complexities of text and language, visual image, and aural material, and their simultaneous deployment will be indispensable. Indeed, the digital technologies of the 20th and 21st centuries are so powerful, and the full reach of the transition currently under way so difficult to apprehend, that one can only speculate as to what shape human communication will take when the shift to a new paradigm is more or less complete. (Indeed, humanistic sensibilities may prove to have a salutary, tempering influence on the effects of digital technologies.) The skillful fusion of still and moving images, aural material, and text will determine the effectiveness of MOOCs, which will depend as much on humanistic skills as on scientific and technical ones.

Rhetoric and oratory. This element is related to the previous one. The electronic information technologies that emerged beginning with the invention of the telegraph in the 19th century have a characteristic that makes them unlike manuscript copying and print: they “dematerialize” information and make it possible for it to be disseminated with lightning speed across vast distances. And the invention of radio, film, and television added the elements of the aural and moving visual to those that had characterized the medium of print (and manuscript copying before it): written text and still image. These newer technologies replicate “live” human experience much more closely than print, which freezes discourse and alters its character. As a technology, print (and the media associated with it) has been giving way to electronic technologies, with their capacity for the full integration of written and spoken language, still and moving image, and sound (music and other aural material), and for the dematerialization and dissemination of such information. The implication for colleges and universities is as follows: we have invested admirably in initiatives designed to train our students to write well and read texts critically and perceptively. But given the power of the new technologies, there is a case to be made for a return to greater instruction in rhetoric and oratory, to an equal command of the spoken word, which can be captured on audio- or videotape or broadcast over radio, television, and the computer (via Skype), in a guise that print has never demanded. The development of electronic communication technologies that permit us to communicate extemporaneously over vast distances in a conversational tone and manner suggests that we might well retool our educational system to feature once again the time-honored humanistic practice of effective oratory and refine our students’ facility in the spoken word.

One need only consider the example of Barack Obama’s skilled oratory (or Franklin Roosevelt’s, or Ronald Reagan’s, or John Kennedy’s) to appreciate the importance to the political order of a venerable humanistic skill like oratory; notably, these are all political figures who postdate the development of electronic technologies. Columnist George F. Will has observed that the American presidency is “an office whose constitutional powers are weak but whose rhetorical potential is great.”

By no means do the new electronic information technologies obviate the need for continuing skill in other, more traditional and familiar humanistic modes of communication – the kind of careful, comprehensive, subtle argument that written text affords – and the close, sensitive reading and command of existing texts that inform the authorship of new texts. Henry Riecken suggested that “[t]he text of the Federalist Papers was put into machine-readable form in order to carry out an analysis that resolved questions of disputed authorship of some of the papers; but the new format did not replace the bound volumes for readers who want to absorb the thoughts and reflect on the aspirations of this stately document.”

Art conservation, and its relationship to the political economy. Nations with an exceptional legacy of monuments in the visual arts (Italy being a well-known example) face a particular challenge with respect to maintaining the condition of that legacy. And in Italy’s case, the relationship of the condition of that legacy to the economy is obvious: given the central place of tourism in the Italian economy, it is vital that the nation’s artistic patrimony be satisfactorily conserved. Sensitive art conservation is at the intersection of the humanistic (the aesthetic), the scientific and technological (an understanding of the nature of surfactants and the effects of environmental conditions), and the political-economic (the need to balance the claims of conserving the artistic patrimony acceptably against other claims on public resources).

What is interesting about this list is how closely its elements are aligned with the Italian Renaissance humanist’s earlier construction of the studia humanitatis. The kind of ethical reasoning demanded in successful practice of the medical humanities is, in its way, a modern iteration of the Renaissance humanist’s moral philosophy; 21st-century applications of writing, rhetoric, and oratory are, in their way, contemporary versions of the Renaissance humanist’s grammar, poetry, and rhetoric; the understanding of foreign cultures and languages required for effective foreign service in today’s bewilderingly complex and interdependent world is, in its way, the modern expression of the Renaissance humanist’s practice of history. The foundational elements of the core humanistic program have perhaps not changed so very much.

What is different is the explicitness with which the Renaissance humanists advocated – persuasively, compellingly, successfully – for the professional utility of their disciplines, which permitted them to secure a place of considerable prestige and authority in their world. There is warrant for their 21st-century successors’ advancing a similar argument: that one undertake the study and practice of the humanistic disciplines not only within the confines of the academic world (as intrinsically worthwhile, in a fundamental intellectual sense) but outside them as well (as critical to the successful execution of one’s expressly professional and vocational responsibilities).

Specifically, I propose that we self-consciously reframe the presentation and delivery of the humanistic offerings of the modern-day college and university to make much more explicit reference to their potential applicability: that we foreground this kind of argument for their virtues. Some of what is now being done within the university is being done absentmindedly, so to speak, without a sufficiently self-conscious articulation of why we do what we do. Were we to reframe our offerings in this way – reposition the humanities and articulate their virtues differently – we might find that the national trend away from them could be halted and perhaps even reversed. 

My sense is that many students rather naturally hunger for the humanistic disciplines and are driven to make other curricular choices in part because of concerns about career viability.  Were such concerns addressed – legitimately, effectively, persuasively – we might find some such students willing to study what their hearts prompt them to study. In our curriculums, were we to foreground explicit, purposeful reference to the ways in which the humanities are indispensable to the successful practice of some of the esteemed and rewarding professions identified above (rewarding in several senses of that word), we might succeed in alleviating student (and parental) anxiety about the practicality of studying such supposedly “impractical” subjects.

Only by such means, I believe, will the humanities truly be able to re-secure the place they once enjoyed, and still deserve, in the collective cultural imagination and in the great public arena. And by no means should we be hesitant about advancing such an argument, since we have the example of the Italian Renaissance before us: it would be difficult to argue that energetic advocacy on grounds of vocational viability compromised the artistic and intellectual integrity of the achievements of Petrarch and his venerated successors.
 

Anthony M. Cummings is professor of music and coordinator of Italian studies (and former provost and dean of the faculty) at Lafayette College.

Review of Arlette Farge, 'The Allure of the Archives'

Originally published in France in 1989 and now arriving in English translation from Yale University Press, Arlette Farge’s The Allure of the Archives is a little gem of a book. A diamond, perhaps, given both its clarity and the finesse with which it’s been cut and set. It is an unmistakable classic: one of the great memoirs of the silent, day-to-day drama of research.

Yet it is not the story of a career. A reader unfamiliar with her name will soon enough recognize that Farge specializes in 18th-century social history. (She is director of research in modern history at the Centre National de la Recherche Scientifique, in Paris.) One of her endnotes refers to the volume she published in collaboration with Michel Foucault in 1982, two years before the philosopher’s death. It was an edition of selected lettres de cachet from the files of the Bastille. (In A Tale of Two Cities, Dickens portrays the notorious legal instrument as one of the aristocracy’s corrupting privileges, but Farge and Foucault showed that it was disturbingly easy for an ordinary person to take one out against a family member.)

Many an over-active ego has turned to autobiography on far slimmer grounds than that. But in spite of its personal and even intimate tone, The Allure of the Archives is never primarily about the author herself. She instead writes about a subjectivity that exists only upon entering the reading room – or perhaps while trying to find it during the first visit.

As much as an institution, the archive is, for Farge, the site of a distinct array of experiences. It is defined by rules, habits, worries (especially with fragile documents), and a kind of ambient awkwardness that she evokes perfectly. “The silence of the archives,” she writes, “is created out of looks that linger but do not see, or gazes that stare blindly. No one escapes these wandering eyes, not even the most obstinate reader, his face clouded by work.”

In the midst of that tense, watchful silence, the patrons jockey for prime desk space, or drive each other to distraction with coughs and nervous tics. Locating the documents you want requires initiation into the cabala of archival reference numbers. Even if you have one, it “often will only direct the reader to another serial number that will itself only give access to a new series where other serial numbers await.” For guidance, one must turn to the superintendent of the reading room: “She reigns, gives advice that bears a strong resemblance to orders, speaks very loudly, and does not understand what she does not wish to understand, all the while constantly ruffling the pages of her morning newspaper.”

Her descriptions are very true to life, even an ocean away from the French archives she has in mind. The part about reference numbers may sound like comic exaggeration. It isn’t, or at least it corresponds to my last visit to the Library of Congress manuscript collection some years ago.

But all of it is just the setting for the necromancy of research – calling up men and women who are long dead and utterly forgotten, speaking for the first time in two or three hundred years. The dossiers Farge explored in her research consisted of police and judicial records as well as reports to His Majesty on what the rabble of Paris were doing and saying from week to week. A whole layer of informers – called in slang mouches, “flies” – kept track of rumors and complaints making the rounds. (They were, in effect, flies on the wall.) Also in the files are slanderous and subversive posters with traces of grit clinging to the back after the authorities ripped them down.  

Police interrogations (transcribed for use during an investigation, then filed away and eventually warehoused) document the everyday happenings, criminal and otherwise, of urban life. Most of those speaking left no other account of their lives. Nor could they; even those who could read a little had not always learned to write. Having become accustomed to reading the faded or erratically punctuated documents, the researcher may undergo a sort of rapture: “the sheer pleasure of being astonished by the beauty of the texts and the overabundance of life brimming in so many ordinary lives.”

At that point, a danger becomes evident: “You can begin to forget that writing history is actually a kind of intellectual exercise, one for which fascinated recollection is just not enough…. Whatever the project is, work in the archives requires a triage, separation of documents. The question is what to take and what to leave.” And the researcher answers it by coming to recognize the patterns that emerge from one document to the next: the unstated assumptions made by those on either side of the interrogation, the changes in tone or preoccupation recorded over time.

The historian learns to frame questions that the archive knows how to answer – while remaining open to the secrets and surprises to be found in the boxes of paper that haven’t been delivered to her desk yet.

“In the archives,” Farge writes, “whispers ripple across the surface of silence, eyes glaze over, and history is decided. Knowledge and uncertainty are ordered through an exacting ritual in which the color of the note cards, the strictness of the archivists, and the smell of the manuscripts are trail markers in this world where you are always a beginner.” Farge's memoir is adamantine: sharp, brilliant, perfect, and created to last.

 


Review of Benjamin Kline Hunnicutt, 'Free Time: The Forgotten American Dream'

“Having had to cut the book nearly in half for the final proof,” writes Benjamin Kline Hunnicutt in the introduction to Free Time: The Forgotten American Dream (Temple University Press), “I am keenly aware of things omitted, still on my computer’s hard drive awaiting publication.” This is offered as an apology, though none is needed. Excessive leanness must be the least common fault in scholarly prose – and Free Time deserves laurels for not imposing too much on the scarce resource in question.

The author teaches at the University of Iowa, where he holds the enviable post of professor of leisure studies. He has devoted the better part of 40 years – including two previous books – to investigating the sources and implications of something both obvious and overlooked about the American work week.

Throughout the 19th and into the early 20th centuries, working people fought for, and won, more free time -- despite the dire mutterings by the pundits, who announced that economic collapse and social turmoil were sure to follow if employees worked for only, say, 10 hours a day, 6 days a week. By the 1930s, the combination of increased industrial productivity and collective bargaining made the trend for an ever-shorter work week seem irreversible. The demands of war production didn’t erase the expectation that the 40-hour week would shrink to 30, after peace came.

It did, in some places. For example, Hunnicutt and his students have interviewed retired factory workers from Akron, Ohio, and Battle Creek, Michigan, who won the six-hour day once the war was over. And the social forecasts and magazine think-pieces from the 1960s and ‘70s made it sound like the great challenge of the future would be coping with all the free time created by automation and computerization.

Like hover-car collisions and overbooked hotels on the moon, the extremely shortened work week turns out not to be a major 21st-century social issue, after all. “Since the mid-1970s,” Hunnicutt says, “we have been working longer and longer each year, about a half a percentage point more from year to year….” It adds up. Americans now log an average 199 hours -- almost five 40-hour work weeks -- more per year than they did in 1973 – putting in “longer hours than those of other modern industrial nations, with the exception of South Korea,” according to the findings of the Bureau of Labor Statistics in 2009.

The point here is not that extrapolation is unreliable -- or even that seriously regressive trends can begin to seem normal after a generation or two. Hunnicutt begins his broad narrative of how things got like this in the 18th century, with a comment by Benjamin Franklin: “If every man and woman would work for four hours each day on something useful, that labor would produce sufficient to procure all the necessaries and comforts of life, want and misery would be banished out of the world, and the rest of the 24 hours might be leisure and happiness.”

Tracing this sentence back to its original context, I found it appeared in a letter expressing Franklin’s criticism of how much labor and effort went into producing luxury goods for conspicuous consumption. Millions, he wrote, were “employed in doing nothing, or in something that amounts to nothing, when the necessaries and conveniences of life are in question.” It is a good thing the man is dead; five minutes in an American shopping center would kill him.

In Hunnicutt’s reading, the passage is a particularly blunt expression of a perspective or set of values he calls the Higher Progress. The goal of economic development was not just to produce “necessaries and comforts of life” in abundance and affordably – that, too, of course -- but to give people the free time to enjoy what they’d made, as well as one another’s company, and to secure the general welfare through lifelong education and civic involvement. The same vision is expressed by Walt Whitman, Frank Lloyd Wright, “factory girls” writing to newspapers in the 1840s, and Robert Maynard Hutchins’s proposals for university reform. In a book published when he was president of the University of Chicago, Hutchins described progress as having three stages:

“We want our private and individual good, our economic well-being… Second, we want the common good: peace, order, and justice. But most of all we want a third order of good, our personal or human good. We want, that is, to achieve the limit of our moral, intellectual, and spiritual powers.”

That “we” is not aristocratic. The examples of Higher Progress thinking that Free Time cites are profoundly democratic in temper. For every patrician worrying that more leisure would just lead to drunkenness and loutish habits, Hunnicutt seems to quote five plebeians saying they wanted the time for “the ‘wants’ that were being repressed by long hours: reading newspapers and books, visiting and entertaining at home, writing letters, voting, cultivating flowers, walking with the family, taking baths, going to meetings, and enjoying works of art.”

The Higher Progress was hardly inevitable. Following the Civil War, Walt Whitman, whose poetry often seems the outpouring of a blithe spirit with a caffeine buzz, sounded almost desperate at the scene before him. Despite the country’s technological progress and material abundance, “our New World democracy … is, so far, an almost complete failure in its social aspects, and in really grand religious, moral, and literary results.”

As was his wont, Whitman seems to speak directly to the reader, across the decades, in warning about the danger of fetishizing all our stuff and gizmos: “a secret silent loathing and despair.” A steadily growing GNP would not necessarily prevent the Higher Progress, but consumerism (in the form Franklin criticized as “luxury”) was all too likely to substitute itself for leisure, in the richest possible sense of the word.

So where did things go off track? Why is it that one of the arguments for the sheer practicality of a shorter work week – that it would reduce joblessness – seems never to be heard now, given recent unemployment figures? How did the men and women who won the 30-hour week in the 1940s respond to the free time?

Free Time addresses all of these questions, or at least points in directions where the answers might be found. But in honor of the author’s own sacrifice – and in hopes of encouraging you to read the book – I am going to make this column half as long as it might well be. It deserves wide attention, and would provoke a more meaningful conversation about the past, present, and future than we’re likely to have otherwise.


Review of Susan P. Mattern, 'The Prince of Medicine: Galen in the Roman Empire'

Intellectual Affairs

For thousands of years, the treatment of illness involved some combination of superstition, trial-and-error, inadvertent torture, and just plain dumb luck. A treatment counted as effective if the patient survived.

Whatever doctors were doing before the middle of the 19th century, it’s hard to consider it medicine – certainly not by the standards of the 150 years or so since the advent of local anesthesia. Physicians have understood the principles of antisepsis for about as long, while aspirin was synthesized only in 1897. We are spoiled. Not for us the clinical technique of Theodoric of York -- the medieval practitioner Steve Martin used to play in a "Saturday Night Live" sketch – whose treatment for every condition included the draining of excess blood.

Comforting one patient’s mother, Theodoric reminds her of how far the healing arts had advanced as of 1303: “Why, just 50 years ago, they thought a disease like your daughter's was caused by demonic possession or witchcraft,” he tells her. “But nowadays we know that Isabelle is suffering from an imbalance of bodily humors, perhaps caused by a toad or a small dwarf living in her stomach.”

Insofar as Theodoric and his colleagues brought any book-learning to their practice, it came from the pen of the Greco-Roman philosopher and physician Galen, born circa 130 A.D. in what is now Turkey. Susan P. Mattern, a professor of history at the University of Georgia, does not exaggerate in calling her biography The Prince of Medicine: Galen in the Roman Empire (Oxford University Press). His posthumous career was, if anything, one of almost regal authority.

An earned authority, just to be clear about it. Galen had an ego the size of the Empire itself. He never tired of pointing out the abject ignorance of other physicians, and was prone to quoting tributes by illustrious people to his own erudition and skill. It is not charming. But Mattern shows Galen to have been tireless as both a practitioner and a scholar -- and his output of treatises, case histories, popular textbooks, and editions of Hippocrates and other medical authors was astounding. “The most modern edition of his corpus runs to 22 volumes, including about 150 titles,” writes Mattern, “and is one-eighth of all the classical Greek literature that survives.”

Tireless in his efforts to accumulate, compare, and synthesize the recorded medical knowledge of previous centuries, Galen also conducted a great deal of anatomical research (including animal vivisection) to test the theories of the day. In his 20s, he did the second-century equivalent of a residency as the physician-on-call to the gladiators of his hometown. A standard treatment for open wounds was to bathe them in hot water, followed by a plaster made of boiled flour, which Galen reports as totally ineffective – crippling, when not lethal. By contrast, he “soaked linen cloths in wine and placed the folds on the wounds,” the biographer says, “covering the cloths in soft sponges which he moistened day and night.” Whether or not the technique saved the lives of all the gladiators in his care during his first year (so Galen claimed), he definitely understood the antiseptic property of alcohol.

Opening a practice in Rome, he distinguished himself in the constant, cutthroat battle of reputation among physicians, both for his skill in diagnosis and treatment and his erudition. He seems to have been acutely sensitive to changes in a patient’s pulse and body temperature. “Long before laboratory testing,” Mattern writes, “he examined urine and feces, sweat, sputum,” and “vomit, pus, and blood for color, texture, viscosity, and sediment.” Galen’s case histories show he “scrutinized his patients’ faces for signs such as change in skin color or the shrunken eyes of wasting or extreme dehydration.” And he knew how to interview patients about their history and symptoms with more finesse than you can expect from one HMO that I could name.

Mattern stresses that Galen was also an exemplary product of Hellenistic culture – urbane, deeply familiar with Greek philosophy and literature as well as the medical authors, and capable of writing in either a matter-of-fact or a high-flown style as the circumstances required.

She notes that we have no evidence that Galen bothered to learn the language of the Empire. He wrote in Greek and did not cite Latin authors, and his reputation took root among the aristocracy, for whom familiarity with Greek was the sine qua non of intellectual sophistication. Medical science in particular was a fashionable topic, and a number of Galen’s works were composed as memoranda or instructional guides for the amateurs in his entourage.

The audience for Galen’s work was not limited to the scroll-buying public. He had also learned the arts of oratory and debate practiced by the sophists -- and his encounters with other physicians were brutal rhetorical showdowns, as were his lectures on anatomy, during which slaves held down monkeys and livestock as Galen and his opponents cut them open to demonstrate their points.

Much of it was done in the street. At one point while reading the book, I got an image of the Prince of Medicine crossing paths with the adherent of some medical philosophy he opposed -- the Empiricists, say, or the Dogmatists -- and performing dissection battles while surrounded by their respective crews (students, patients, slaves, aristocratic fanboys, etc.) as well as random passers-by. Like break-dancing, in togas, with gallons and gallons of blood.

It did not hurt Galen’s reputation that he had inherited considerable wealth and could refuse payment for his services. He disdained the very idea of practicing the divine art of Asclepius (the god of medicine who had visited Galen’s father in a dream to provide vocational guidance on the boy’s future) for money. This, too, had its rhetorical side: it meant he could cast aspersions on doctors with more worldly motivations than pursuit of the pure good of healing.

In his writings, Galen expressed reluctance at being summoned by the emperor, Marcus Aurelius, to serve as court physician. He accepted, of course (it was an offer he couldn’t refuse) and was also retained by the succeeding emperors. Mattern suggests that there may have been more to his professed misgivings than thinly disguised braggadocio. Whatever the boost in prominence, the position also meant less autonomy.

Self-aggrandizing as Galen’s devotion to medicine may have been at times, the biographer calls him “a surprisingly pious man,” fascinated by the “cumulative and overpowering evidence of an intelligence at work” in the animals he dissected in hopes of understanding the human anatomy. This made him one of the more appealing pagan thinkers for adherents of the three major monotheistic religions, who translated his works into Hebrew, Latin, and Arabic. A number of his works have survived only because Islamic scholars translated them, although it’s entirely possible that the original Greek texts may yet resurface. As recently as 2007, Galen’s treatise “On the Avoidance of Pain” – long presumed lost in any language – turned up in a Greek monastery.

His biographer points out how unfortunate it is that Galen never challenged the medical value of bloodletting – a practice that continued for so long that historians have wondered whether it might have had therapeutic value in treating something, though it could prove fatal when done with a shaky hand.

“While it is perhaps wrong to blame him for failing to break from a tradition that his followers, including the great physicians of the Islamic Middle East and of the European Renaissance and Enlightenment, mostly did not question,” writes Mattern, “one wishes he had turned his scorn on this therapy” instead of on the “three-day fasting cure” promoted by some of his peers. “For while Galen did not invent bloodletting, he had the power to consign it to oblivion.”

Maybe he did change his mind, but history lost the memo?

 

Mitch Daniels renews criticism of Howard Zinn

Purdue president defends his criticism of the late historian, and continues his attack on his views.

Essay on role of history in Supreme Court decision on gay marriage

At a time when many question the relevance of history, it is noteworthy that the U.S. Supreme Court case prohibiting the federal government from undercutting a state’s decision to extend "the recognition, dignity and protection" of marriage to same-sex couples hinged on arguments advanced by professional historians.

Rarely have historians played as important a role in shaping the outcome of a public controversy as in the same-sex marriage cases. Legal, family, women's, and lesbian and gay historians provided key evidence on which U.S. v. Windsor ultimately turned: that the Defense of Marriage Act (DOMA) represented an unprecedented and improper federal intrusion into a domain historically belonging to the states. As Justice Kennedy affirmed, "the federal government, through our history, has deferred to state law policy decisions with respect to domestic relations."

But historical scholarship did more than substantiate a single pivotal argument.  It framed the majority’s broader understanding of marriage as an evolving institution and helped convince five justices that opposition to same-sex marriage is best understood as part of a long history of efforts to deprive disfavored groups of equal rights and benefits. In the end, the majority opinion hinged on "the community’s ... evolving understanding" of marriage and of equality and the majority’s recognition that DOMA imposed "a disadvantage, a separate status, and so a stigma upon all who enter into same-sex marriages made lawful by the unquestioned authority of the states."

Briefs filed with the Supreme Court by the American Historical Association and the Organization of American Historians demonstrated that far from being a static institution, marriage has profoundly changed its definition, roles, and functions, and that today's dominant marital ideal, emphasizing emotional intimacy, has nothing to do with gender. Currently, marriage's foremost public function is to distribute benefits, such as those involving health insurance, Social Security, and inheritance, making it all the more valuable for same-sex couples.

Furthermore, these briefs showed that, contrary to the widely held assumption, marriage has not historically been defined by its procreative function. Marriage was justified on multiple grounds. Especially important were the notions that marriage contributed to social stability and provided care for family members. No American state ever forbade marriage to those too old to bear children.

Without reducing the legal history of marriage to a Whiggish, Progressive, or linear narrative, the historians showed that two broad themes characterize the shifting law of marriage in the United States. The first is the decline of coverture, the notion that a married woman's identity is subsumed in her husband's. A second theme is the overturning of earlier restrictions on who can marry whom.

Slowly and unevenly, American society has abolished restrictions on marriage based on people's identity. As recently as the 1920s, 38 states barred marriages between whites and blacks, Chinese, Filipinos, Japanese, Indians, "Malays," and "Mongolians." It was not until 1967 in Loving v. Virginia, the Supreme Court decision that threw out a Virginia ban on black-white marriages, that racial and ethnic restrictions were outlawed.

At the same time, there has been an ongoing legal struggle to recognize women as full rights-bearers within marriage. Instead of seeing their identity subsumed in their husband's -- the notion that spouses cannot testify against one another was originally rooted in this principle -- women gradually attained the right to sue, control their own wages, and manage their separate property.

Perhaps the most powerful recent symbols of this shift are prosecutions for marital rape and elimination of the presumption that a husband is head of the household for legal purposes. Opposition to the liberalization of marriage, the historians demonstrated, has rested on historical misconceptions and upon animus, rooted in ethnocentrism and religious sectarianism.

Marriage today bears scant resemblance to marriage even half a century ago, when the male breadwinner family prevailed and dual-earner and single-parent households were far rarer than today. The contemporary notion of marriage as an equal, gender-neutral partnership differs markedly not only from the patriarchal and hierarchical ideals of the colonial era, but from the notion of complementary spousal roles that predominated during the age of companionate marriage that prevailed from the 1920s into the mid-1960s.

Change, not continuity, has been the hallmark of the history of marriage. Even before the 20th century, marriage underwent certain profound transformations. Landmarks in this history included:

  • Enactment of the first Married Women's Property laws in the 1830s and 1840s, which established women's right to control property and earnings separate and apart from their husbands.
  • Passage of the first adoption laws in the mid-19th century, allowing those unable to bear children to rear a child born to other parents as their own.
  • Increased access to divorce, beginning with judicial divorce supplanting legislative divorce.
  • The criminalization of spousal abuse starting in the 1870s.

Marriage's persistence reflects its adaptability. DOMA represented an unprecedented federal attempt to fix the definition of marriage and impose this definition upon the states and their inhabitants. Specifically, DOMA represented a federal effort to prohibit lesbian and gay Americans from securing the same civil rights and benefits available to other citizens. DOMA stigmatized a specific group of Americans and represented federal discrimination based on a particular religious point of view. In Justice Kennedy’s ringing words: "The federal statute is invalid, for no legitimate purpose overcomes the purpose and effect to disparage and to injure those whom the state, by its marriage laws, sought to protect in personhood and dignity."

History, in the same-sex marriage controversy, was not simply "preface" -- an interesting but ultimately insignificant detail in cases involving equal treatment under law. History lay bare a series of dangerously misleading assumptions -- above all, the notion that same-sex marriage deviates from a timeless, unchanging marital norm.

 

Steven Mintz, professor of history at the University of Texas at Austin and the author of Domestic Revolutions: A Social History of American Family Life and Huck’s Raft: A History of American Childhood, signed the American Historical Association brief.

New Academy of Arts and Sciences report stresses importance of humanities and social sciences

Amid talk of outcomes-based education, a new report from the Commission on the Humanities and Social Sciences stresses the disciplines' role in long-term career success and international competitiveness.

Essay questions the push to put presidential libraries on campuses

At the recent dedication of the $500 million George W. Bush Presidential Center at Southern Methodist University, President Clinton called it "the latest, grandest example of the eternal struggle of former presidents to rewrite history." In 2004, the Clinton Center and Foundation stunned observers with its price tag of more than $200 million, and less than a decade later Bush doubled that figure once the endowment for the Bush Institute is counted. When the Barack Obama center opens around 2020, perhaps on the campus of the University of Chicago, could it be the first billion-dollar presidential center? Possibly. A total of $1.4 billion was raised for Obama’s two successful presidential campaigns, and for a center dedicated to his final campaign, the one for a better place in history, it seems likely that he will surpass previous records.

Although the final decision on the location of the Obama center is probably a couple of years away, professors and administrators at the University of Chicago (where he once taught) and the University of Hawaii (where his mother studied and his sister taught) are thinking about what it might mean if it lands on their campus. Chicago State University also wants to be considered. For universities, presidential centers present both opportunities and significant costs and challenges. Academics should consider carefully before getting into a bidding war over a presidential library, and weigh how much these centers promote spin in addition to scholarship.

Prime campus real estate is sometimes sacrificed for these presidential temples. Although they house valuable historical records impartially managed by the National Archives, they also contain museums that a high school student who has passed the Advanced Placement U.S. History test would likely find biased, as well as foundations or institutes with agendas the host university does not control.

Clinton was right in saying that these centers are attempts by former presidents to write their own history and polish their reputations. And to a significant degree they work. President Carter’s reputation was tarnished when he left office in 1981, but as The New York Times put it in a nearly prescient headline in 1986: "Reshaped Carter Image Tied to Library Opening" — and today, Carter is one of the more respected former presidents.

But Clinton exaggerated when he said that the struggle by former presidents to remake their images stretches back to the beginning of American history. Until the 20th century, former presidents rarely even wrote memoirs, and the first president to have a presidential library run by the federal government was Franklin D. Roosevelt. The Roosevelt Library, which opened on his estate at Hyde Park, New York, in 1941, was modest compared with succeeding presidential libraries. Its initial cost was about $7 million in today’s dollars, but critics still accused FDR of building a "Yankee pyramid." There was more than a grain of truth in the charge. When FDR first saw Egypt’s pyramids, he said, "man’s desire to be remembered is colossal." Although what Roosevelt said may not be true for everyone, it certainly was true for FDR and his successors.

Most succeeding presidential libraries dwarf FDR’s: the Harry S. Truman Library in Independence, Missouri, evokes Queen Hatshepsut’s Temple in Egypt and was the first to feature a full-scale Oval Office replica (something copied by most of the others), and the Dwight D. Eisenhower Library in Abilene, Kansas, is a complex of buildings with a park that takes up an entire city block.

 

 

The first president to affiliate his library with a university was President Kennedy. JFK envisioned his library on the grounds of his alma mater, Harvard University. After Kennedy’s death, some at Harvard decided they didn’t like the idea of common tourists on their campus (99 percent of the visitors to presidential libraries are tourists, and only 1 percent are researchers), and architecture critic Ada Louise Huxtable lampooned their fear of "Goths overwhelming the intelligentsia." Harvard did establish the Kennedy School of Government, but the Kennedy Library itself was located on a campus of the University of Massachusetts, on a spectacular site overlooking Boston Harbor.

The Kennedy Library was also the first to have a "starchitect," when Jackie Kennedy chose I.M. Pei — who later designed the East Building of the National Gallery of Art, as well as the expansion of the Louvre — to design her husband’s memorial. Originally, the Kennedy Library was going to be a large pyramid with the top cut off — representing JFK’s tragically truncated achievement — but eventually that plan was scrapped, and Pei reimagined the design as the glass pyramid at the Louvre. Pei’s final design for the Kennedy Library and Museum was a futuristic glass, steel, and concrete edifice that still looks like it could be used in a Star Trek movie.

President Lyndon Johnson, with Lady Bird Johnson’s help, also hired a star architect for his monument to himself. Gordon Bunshaft of the famous Skidmore, Owings & Merrill firm had designed such modernist icons as Yale University’s beautiful Beinecke Library, with its translucent marble walls. Bunshaft’s design for the Johnson Library on the campus of the University of Texas at Austin has, as Ada Louise Huxtable wrote, "a Pharaonic air of permanence" that "puts Mr. Johnson in the same class as some Popes and Kings who were equally receptive clients for architects with equally large ideas." The Johnson Library looks like a cross between an Egyptian pylon temple and a space-age bureaucracy.

We could talk about award-winning architect James Polshek’s design for the Clinton Center, or the renowned Robert A. M. Stern’s imposing design for the Bush Center at SMU, but you get the idea. All presidents since FDR have an edifice complex. Becoming a patron of a huge architectural project dedicated to yourself is one of the perks of being an Imperial Ex-President. Another perk is becoming a museum curator. Initially, the exhibits in presidential libraries are campaign commercials in museum form, designed with a lot of help from the former president. Eventually these exhibits become more balanced and complete, but it’s usually 30-50 years after a president leaves office before the National Archives installs decent exhibits. The former president and many of his supporters need to die before their power to spin subsides.

As Wayne Slater of The Dallas Morning News writes, the new Bush museum is "a vivid justification for why certain decisions were made," rather than a balanced examination of the real options involved and the costs of presidential choices — such as the decision to invade Iraq. Bush avoids presidential mistakes in his museum, which means, as columnist Maureen Dowd of The New York Times writes, "You could fill an entire other library with what’s not in W’s." Bush is just the latest in a long line of presidents to create self-serving exhibits seen by millions. President Obama will likely follow this tradition.

Supporters of presidential libraries hail their archives, with their raw materials of history open to scholars, journalists, and even school kids. But these records would be available anyway, because by law they are owned by the American people and must be impartially administered and released by the National Archives. If a president didn’t have a presidential library, the records would be housed in an equally accessible facility (probably in Washington); it just wouldn’t be so architecturally grandiose.

It was Jimmy Carter who first morphed the presidential library into a presidential center. The Carter Center, which is next to but administratively separate from the Carter Library and Museum in Atlanta, has been so effective at living up to its mantra of "Waging Peace. Fighting Disease. Building Hope" that President Carter won the Nobel Peace Prize in 2002. But Carter has also generated considerable controversy over the years because of his views on Israel. If the Carter Center had been located on the campus of nearby Emory University (with which it is loosely affiliated), that institution’s reputation might have been affected, but since the Carter Center is geographically separate from Emory, the university was largely shielded.

There is not as much shielding for SMU from former President Bush and his views on such issues as enhanced interrogation techniques. The Bush Institute was inspired in part by the Hoover Institution on the campus of Stanford University, which is considered one of the nation’s leading conservative think tanks. The Hoover Institution has long offered a platform for high-profile Republicans such as George Shultz, Condoleezza Rice, and Donald Rumsfeld.

The Hoover Institution is to a large degree administratively separate from Stanford, and so although it effectively leverages the prestige of its host university to expand its influence, Stanford does not have a corresponding control over it. It’s possible that President Obama will seek a similar arrangement with a host university for a future Obama Center, or whatever he might choose to call it.

And the bottom line here is the bottom line: although the price tag for the actual building of the Bush Library, Museum, and Institute was a cool quarter of a billion dollars, an equal amount was raised to endow the Bush Institute. Bush and his supporters will continue their aggressive fund-raising for the foreseeable future, perhaps pushing the ultimate price tag and influence of the Bush Center into the billion-dollar range sometime in the next decade or two.

When President Johnson helped found the LBJ School of Public Affairs at the University of Texas at Austin, he gleefully anticipated breaking what he called "this goddamned Harvard" hold on top government positions. But like the Kennedy School of Government at Harvard, the Johnson School is run by its university, not by a self-perpetuating board largely independent of the university that seeks, in part, to enhance the reputation of the president whose name is on the building. In other words, as presidential centers have evolved and grown they have become a better and better deal for former presidents, but it’s less certain that they are a good deal for the universities that might host them.

What would make a presidential center a better deal for a university and the public? For the sake of the 99 percent who will visit the future Obama museum as tourists, the host university should encourage some of its history professors to help create exhibits with rigorous content. That content should be of a quality that would actually help future high school students pass the relevant portion of a future AP U.S. History test, rather than amounting to just another museum of spin.

For a future Obama foundation or institute, it would be worthwhile for the university to have a significant number of faculty members from a variety of departments on the governing board. The university should have more than token input into a foundation that will be a big player on campus for many decades, perhaps even centuries. For, as some have noted, these presidential centers have become the American equivalent of the temples and tombs of the pharaohs. If professors, students, and the general public are to be more than bystanders or even would-be political worshippers, the host university needs to negotiate for the best interests of not just the university but the American public. Universities should not simply acquiesce to the desire that Clinton spoke of (only half-jokingly) that presidents have to rewrite their own history in self-glorifying memorials.

And President Obama himself would need to be involved in the process of reforming the presidential center. He has to a degree already taken on this role: on his first full day in office in 2009 he revoked President Bush’s infamous Executive Order 13233, which restricted access to presidential records for political reasons. Obama and the university he partners with should continue this work, so that presidential centers cease to remind us of the lines of Percy Shelley’s poem: "My name is Ozymandias, King of Kings, Look on my works, ye Mighty, and despair!"

Ben Hufbauer is an associate professor of fine arts at the University of Louisville. He is the author of the book Presidential Temples: How Memorials and Libraries Shape Public Memory (University Press of Kansas).
