It was a classic instance of blaming the messenger: Spanish newspapers carried the earliest reports of a new illness that spread across the globe in the final months of World War I, and so it became known as “Spanish influenza,” although its real point of origin will never be known. It was virulent and highly communicable. A paper appearing a few years ago in the Centers for Disease Control and Prevention journal Emerging Infectious Diseases estimated that 500 million people, almost a third of the world’s population, were stricken with it. By the end of its reign of terror in the final months of 1920, there were 50 million fatalities -- more than three times as many as died from the war itself. These figures may be on the low side.
In her two long essays on illness, Susan Sontag grappled with the strong and longstanding tendency to treat certain diseases as meaningful: the vehicle for metaphors of social or cultural disturbance. “Feelings about evil are projected onto a disease,” she wrote. “And the disease (so enriched with meanings) is projected onto the world.” Just so, one would imagine, with a pandemic. Something in a plague always hints at apocalypse.
But the striking thing about Spanish influenza is how little meaning stuck to it. At the time, plenty of sermons must have figured the Spanish flu as one of the Four Horsemen, but the whole experience was quickly erased from collective memory, at least in the United States. In 1976, the historian Alfred W. Crosby published a monograph called Epidemic and Peace: 1918 that Cambridge University Press later issued as America’s Forgotten Pandemic (1989). Apart from being snappier, the new title underscored the pandemic’s almost total disappearance from anything but the specialist’s sense of history. One person in four in the U.S. suffered an attack of Spanish flu, and it killed some 675,000 of them. The catastrophe seems never to have interested Hollywood, though, and the only work of fiction about the outbreak by an author who lived through it, so far as I know, is Katherine Anne Porter’s novella “Pale Horse, Pale Rider.” (Biblical imagery seems just about unavoidable.)
The title of Nancy K. Bristow’s American Pandemic: The Lost Worlds of the 1918 Influenza Epidemic (Oxford University Press) is an echo of Crosby’s America’s Forgotten Pandemic. I don’t want to read too much into the one-word difference, but it does seem that the influenza crisis of almost a century ago has been working its way back into public awareness in recent years. Several more books on the subject have appeared since Gina Kolata’s best-seller Flu: The Story of the Great Influenza Pandemic of 1918 and the Search for the Virus That Caused It came out in 1999. The Public Broadcasting Service has done its part with an excellent documentary as well as an episode of Downton Abbey in which pestilence hits a country house in England during a dinner party.
So “forgotten” is no longer quite the right word for the nightmare. But it remains almost impossible to imagine the ferocity of the pandemic, much less its scale. The contemporary accounts that Bristow draws on retain their horror. Doctors wrote of patients changing from “an apparently well condition to almost prostration within one or two hours,” with raging fevers and severe pain in even the milder cases – and the worst involving a “bloody exudate” coughed up from the “peculiar and intense congestion of the lungs with [a] hemorrhage,” so that it was “simply a struggle for air until they suffocate.”
Morgues were overrun. In poor households, several delirious family members might be crowded into the same bed along with someone who had died. Those who made it to the hospital could lie unattended for days at a time. The authorities were issuing “don’t worry, it’s just another flu”-type pronouncements well into the catastrophic phase of the epidemic. Quarantines and bans on public gatherings were easier to proclaim than to enforce. Having absorbed the relatively new idea that disease was spread by germs, people donned surgical masks to protect themselves – to no avail, since influenza was a virus. The epidemic went through three waves of contagion in as many years, and it wore down whatever patience or civic-mindedness people had when the disaster hit.
A pandemic, by definition, puts everyone at risk. But access to medical help – inadequate as it proved – was far less egalitarian. (As is still the case, of course.) Much of the historical scholarship on disease in recent decades has stressed how the interaction between medical professionals and their clientele tends to reinforce the social hierarchies already in place. Bristow’s work follows this well-established course, combining it with a familiar emphasis on the changes in medicine’s public role in the wake of Progressive Era reforms.
She writes about how poor, immigrant, or Native American sufferers were assumed guilty “of dishonesty and laziness, and of attempting to take advantage of others’ generosity” until proven otherwise, while the African-American population was forced “to continue relying on their own too limited community resources as they sought to provide sufficient care for their sick neighbors.” And while the U.S. Public Health Service had been created in 1912, its capacity to respond to the influenza crisis was limited, given how poorly the disease was understood. Even gathering reliable statistics on it proved almost impossible while the virus was on its rampage.
The most interesting chapter of American Pandemic considers how doctors and nurses responded to the crisis. Although they often worked side by side, their experiences contrasted sharply.
“Ignorant of the disease’s etiology, uncertain of the best methods of treatment, and unable to ease the suffering of their patients,” Bristow writes, “physicians often expressed a sense of helplessness as individuals and humility as members of a profession.” (You know something is catastrophic when it reduces doctors to humility.)
Belonging to an almost entirely male profession, they “gauged their work against the masculine standards of skill and expertise” – and the inevitable military metaphor of going to battle against the disease became that much more intense given the example of actual soldiers fighting and dying in the trenches. But the influenza virus was stronger. “Like a hideous monster,” one physician wrote, “he went his way, and none could hinder.” Doctors’ letters and diaries from the period reflect a sense of bewilderment and failure.
For a while the authority of the profession itself was undermined. Patent medicines and related quackery proved no more effective in treating or preventing the disease than anything the doctors could offer. But they weren’t any less effective, either.
The nurses could not have responded more differently. Caring for patients was “a terrific test” and “high privilege,” they wrote, “a most horrible and yet most beautiful experience.” Like the doctors, many lost their lives while tending to the sick. But one nurses’ report said that the work was “one of the most immediately satisfactory experiences of our lives” for those who survived it, “and this is true even though we were borne down with the knowledge that, do all we might, the pressing, tragic need for nursing was much greater than could possibly be met.”
And this, too, was a matter of gauging their skill by socially sanctioned gender norms. “Women working as nurses aspired to what they viewed as the uniquely feminine qualities of domesticity, compassion, and selflessness,” writes Bristow. “To measure up to these standards nurses needed only to care for their patients, not cure them, and this they proved able to do.”
A few hours after choosing American Pandemic for this week’s column, I attended a public event at which every third person seemed to be coughing, with a congested wheeze usually audible. Synchronicity is not always your friend. For the past several days I have been reading about influenza, and writing about it, while suffering from one of its milder variants. The experience is not to be recommended.
Two quick points might be worth making before the medication kicks in. Bristow’s final assessment is that the horror and devastation of the pandemic could not be reconciled with the preferred national narrative of progress and redemption, “with its upbeat tenor and its focus on a bright future.” At most, its memory was preserved as part of local history, or through family stories.
The argument is plausible, to a degree, but it overlooks the element of trauma involved – not just the suffering and death during that period, but the feeling of being rendered helpless by an event that’s come out of nowhere.
And what sense does it make to think of the events of 1918-20 as “America’s pandemic,” forgotten or otherwise? Deaths from influenza in the United States during that period represent something like 1.4 percent of the world’s fatalities from the disease. How was it remembered -- or erased from public memory -- elsewhere? Diseases don’t respect borders, and it’s hard to see why the historiography of disease should, either.
In November, Pew Research Center released a report discussing the level of belief in American exceptionalism in the United States. It gauged this by asking whether interviewees accepted the statement "Our people are not perfect, but our culture is superior to others." I have been interested in the history of theories of American exceptionalism for more than twenty years, and gave it a look. Formulating the idea that way struck me as obnoxious and fairly absurd. But then the Pew people are specialists in public opinion research -- and feelings of superiority (or rather, anxieties about them) do seem to be what is at stake as the expression American exceptionalism is used in U.S. politics lately. (It's worth noting that it's the authors of the report who make a connection between superiority and exceptionalism. The interviewers didn't explicitly ask about the latter.)
Republican candidates keep proclaiming their faith in American exceptionalism, or smiting Obama for his failure to believe in it. Not long ago somebody published a letter to the editor claiming that Obama hates American exceptionalism, which would seem to imply that he must believe in it, since hating something you don’t believe in sounds difficult and a real waste of time. But it’s probably best not to expect too much logical consistency at this point in the electoral season.
Obama himself is at least somewhat culpable for the whole situation. The furor all started in 2009 when, in response to a question, he said: “I believe in American exceptionalism, just as I suspect that the Brits believe in British exceptionalism and the Greeks believe in Greek exceptionalism.” That, too, is a misreading of the term, equating it with something like national self-esteem. But of a healthy sort -- well shy of narcissistic grandiosity, with plenty to go around. That's probably what got him into trouble.
Anyway, the Pew study yielded some interesting results. Pew's researchers have been asking whether people agreed with the sentiment "Our people are not perfect, but our culture is superior to others" for at least 10 years now. In 2002, 60 percent of the Americans polled said they did. The figure fell to 55 percent in 2007. Last year, just 49 percent of respondents agreed, with nearly as many (46 percent) saying they disagreed. “Belief in cultural superiority has declined among Americans across age, gender and education groups,” the Pew report said.
The same question was posed in surveys conducted in Britain, France, Germany, and Spain. The level of agreement was higher in the U.S. than elsewhere (Germany and Spain were fairly close), but the variations are less interesting than what held constant: “In the four Western European countries and in the U.S., those who did not graduate from college are more likely than those who did to agree that their culture is superior, even if their people are not perfect.”
Make of that what you will. For my part, the really odd thing about all the recent endorsements of American exceptionalism is that the very expression came into the world as the name for a Communist heresy.
The image of America as a city upon a hill -- uniquely favored by the Almighty and a light unto the heathens -- is older than the United States itself, of course. And it’s true that visitors to the country, including Alexis de Tocqueville, have long declared it “exceptional,” in one way or another, and not always for the better. Charles Dickens thought we were exceptionally prone to printing his books without permission, let alone paying him royalties. But the term "American exceptionalism" is more recent, and it took the Comintern to launch the Republican candidates' preferred way of recommending themselves these days.
Circa 1927-28, a group of American Communist Party leaders began arguing that, yes, the U.S. economy would undoubtedly succumb to the contradictions of capitalism, sooner or later, but it still had plenty of life in it yet, so the comrades abroad should keep that in mind, at least for a while. Their perspective was in accord with the ideas of the Bolshevik theorist Nikolai Bukharin concerning the world economic situation, and he was the one, after all, in charge of the Communist International. So all was copacetic, at least until the summer of 1928, when Stalin quit taking Bukharin’s phone calls.
Before long, the American leaders were called on the carpet by the authorities in Moscow, and found themselves denounced by Stalin himself for an ideological deviation: "American exceptionalism.” Stalin also told them, "When you get back to America, nobody will stay with you except your wives." That turned out to be a slight exaggeration, but they were promptly expelled from the party when they got back home -- taking around a thousand fellow American exceptionalists with them.
As it happened, all of this was just a few months before the stock market crash on Black Tuesday, which made the whole debate seem rather moot. But a catchphrase was born. Stalin’s speeches blasting American exceptionalism were printed as a pamphlet in an enormous edition. The pro-American exceptionalism Communists went off to start their own group, which had a strange and complex history that deserves better scholarship than it has received. But that seems like enough esoterica for now.
David Levering Lewis puts the neologism into a wider context with his essay “Exceptionalism's Exceptions: The Changing American Narrative,” in the new issue of the American Academy of Arts and Sciences’ journal Daedalus. Lewis, now a professor of history at New York University, received a Pulitzer Prize for each of the two volumes of his biography of W.E.B. Du Bois.
“[I]ts Soviet originators defined American exceptionalism as the colossal historical fallacy that imagined itself exempt from the iron laws of economic determinism,” Lewis writes, “whereas most American academics and public intellectuals … avidly embraced a phrase they regarded as an inspired encapsulation of 160 years of impeccable national history.” One of the handful of figures to give the idea a careful, skeptical examination, Lewis says, was Du Bois. In his masterpiece Black Reconstruction (1935), he wrote that “two theories of the future of America clashed and blended just after the Civil War.” One was “abolition-democracy based on freedom, intelligence, and power for all men,” and the other was “a new industrial philosophy” with “a vision not of work but of wealth; not of planned accomplishment, but of power.”
American exceptionalism was, in effect, the happy belief that these tendencies reinforced each other. That was not a credible idea for an African-American who received his Ph.D. from Harvard one year before the Supreme Court ruling in Plessy v. Ferguson that endorsed “separate but equal” treatment of the races. For Du Bois, writes Lewis, “the cant of exceptionalism survived mainly to keep the Moloch of laissez-faire on life support even as its vital signs failed in the wake of the Great Crash of 1929.”
The doctrine of exceptionalism proved hardier than Du Bois imagined, as the years following World War II showed. Lewis mentions that Henry Luce “had already given the world its peacetime marching orders in ‘The American Century,’ a signature 1941 editorial in Life.” Eight other contributors, most of them historians, join Andrew J. Bacevich in assessing that line of march in The Short American Century: A Postmortem (Harvard University Press), a collection of essays spinning off from a lecture series Bacevich organized at Boston University in 2009-2010.
“By the time the seventieth anniversary of Luce’s famous essay rolled around in 2011,” the editor writes, “the gap between what he had summoned Americans to do back in 1941 and what they were actually willing or able to do had become unbridgeable.” Unfortunately the editorial is not reprinted, and it loses something in paraphrase -- a bracing tone of stern moral uplift, perhaps, inherited from his parents, who had been missionaries in China. Here’s a sample:
“[W]hereas their nation became in the 20th Century the most powerful and the most vital nation in the world, nevertheless Americans were unable [after World War One] to accommodate themselves spiritually and practically to that fact. Hence they have failed to play their part as a world power -- a failure which has had disastrous consequences for themselves and for all mankind. And the cure is this: to accept wholeheartedly our duty and our opportunity as the most powerful and vital nation in the world and in consequence to exert upon the world the full impact of our influence, for such purposes as we see fit and by such means as we see fit.”
And plenty more where that came from. “When first unveiled,” Bacevich notes, “Luce’s concept of an American Century amounted to little more than the venting of an overwrought publishing tycoon.” By the end of the war, that had changed: “Claims that in 1941 sounded grandiose became after 1945 unexceptionable.” The American Century brought “plentiful jobs, proliferating leisure activities, cheap energy readily available from domestic sources, and a cornucopia of consumer goods, almost all of them bearing the label ‘Made in the U.S.A.’ ” And all of it while, in Luce’s words, “exert[ing] upon the world the full impact of our influence, for such purposes as we see fit and by such means as we see fit.”
Well, and how did that turn out? The contributors are not of one mind. “As international regimes go, much of the American Century, despite the chronic tensions and occasional blunders of the Cold War (and especially the tragedy of Vietnam) was on the whole a laudably successful affair,” writes David M. Kennedy. For Emily S. Rosenberg, “the period of maximum U.S. power and influence” was “a precursor to a global Consumer Century” that “proved highly adaptive to local cultural variation,” so that equating globalization with Americanization is a mistake.
In counterpoint, Walter LaFeber rebukes Luce’s vision all along the line. He writes that the American Century “never existed except as an illusion, but an illusion to which Americans, in their repeated willingness to ignore history, fell prey.”
T. J. Jackson Lears writes in praise of a “pragmatic realism” informed by the pluralism of William James and Randolph Bourne, and says it “requires a sense of proportionality between means and ends, as well as a careful consideration of consequences – above all, the certain, bloody consequences of war.” But his essay does not exactly portray the American Century as a triumph of pragmatic realism. (C. Wright Mills’s description of the nuclear war strategists’ “crackpot realism” seems a little more apropos.)
Bacevich’s essay concluding the book brings us up to the moment by stressing how interconnected the American Century and American exceptionalism have become. “To liken the United States to any other country (Israel possibly excepted) is to defile a central tenet of the American civil religion. In national politics, it is simply impermissible.” Luce’s vision “encapsulat[es] an era about which some (although by no means all) Americans might wax nostalgic, a time, real or imagined, of common purpose, common values, and shared sacrifice.”
Such yearning is understandable, but nostalgia is bad for you: it makes the past seem simpler than it was. And the world has probably had as much exceptionalism as it can stand. As the American psychologist Harry Stack Sullivan put it, we are all much more simply human than anything else. And it seems like there must be a better use of a political figure's time than assuring people that they are all above average.
It's hard to think of three living figures in American politics who generate more passion than the ones named in the title of Obama, Clinton, Palin: Making History in Election 2008, a collection of essays edited by Liette Gidlow and published by the University of Illinois Press. The word “passion” here subsumes both ardor and loathing. I doubt it is intentional, but the photographs on the book’s cover are arrayed such that they seem almost attached to one another, like Siamese triplets perhaps, or some beast with multiple heads in one of the more psychedelic passages of Biblical prophecy. If the 2012 campaign doesn’t give you nightmares, that image still might.
Gidlow, the editor, is an associate professor of history at Wayne State University, and the 11 other contributors are all historians as well. Their essays frame the 2008 campaign as a late episode in the country’s uneven progress toward incorporating anybody other than white men into elected government. Every so often we hear that the United States entered the “post-feminist” or “post-racial” era at some unspecified point in the (presumably) recent past. But reality has a way of asserting itself, and the next thing you know there are people demonstrating against the president with signs that show him as a cannibal with a bone through his nose, or a politician responds to a female heckler by hinting that she should perform a sexual service for him.
Debates about race and gender came up often during the 2008 primaries and the election season that followed, so the book’s emphasis is hardly misplaced. Much discussed at the time was each campaign’s groundbreaking status in the history of presidential contests -- with Palin the first woman to run on a Republican presidential ticket, while Obama was the first African-American, and Clinton the first woman, to have a serious shot at the Democratic nomination.
That is true, but it is blinkered. If the essays in Obama, Clinton, Palin could be reduced to a single theme, it might be that the history-making campaigns of 2008 were also products of history, or echoes of it. The most interesting chapter in that regard is Tera W. Hunter’s “The Forgotten Legacy of Shirley Chisholm,” which recalls the African-American Congresswoman’s presidential bid in 1972.
Chisholm had no hope of winning, and knew it, but paved the way for Hillary Clinton and Barack Obama. She was, Hunter says, “antiracist, antisexist, pro-choice, pro-labor, antiwar, fiercely independent, and, above all, principled.” But the point of invoking her memory is hardly to treat the Clinton or Obama campaigns as rightful heirs. In Hunter’s reading, the Democratic primaries of 2008 were a travesty of Chisholm’s effort.
“Hillary Clinton never spoke openly, critically, or engagingly about the status of women in our society, the problems of gender discrimination, and what we should do about it,” writes Hunter. As the competition heated up, Clinton “became more forthright in claiming to be the victim of gender discrimination,” while Obama “continued to be reluctant to dwell on issues related to race and racism.” By contrast, Chisholm “challenged the racist attitudes and practices in the women’s movement,” Hunter writes, “as much as she challenged sexism among African-Americans and the broader society.”
A cynic might reply that she could afford to do that precisely because she was not trying to get elected. To win, you pander, and when somebody complains, you try to figure out how to pander to them, too. But running a winning campaign involves neutralizing reservations as much as enlisting allegiance. On that score Obama “faced the challenge of calming white fears,” Hunter writes, “of reassuring the populace that he was not an ‘angry black man’ seeking racial retribution.” (The point is also made in “Barack Obama and the Politics of Anger” by Tiffany Ruby Patterson, who recalls how the candidate navigated the controversy over Rev. Jeremiah Wright’s fire-next-time sermons.)
Susan M. Hartmann’s “Hillary Clinton’s Candidacy in Historical and Global Context” offers one of the book’s analyses of how gender stereotypes and media sexism created obstacles for the candidate – even as she “benefited not only from her husband’s name and popularity, but also from the access he afforded” to sundry political resources. “By contrast,” Hartmann says, “Republican vice presidential candidate Sarah Palin escaped much of the gender hostility that Clinton faced,” largely because of “her strong right-wing credentials, importantly including opposition to much of the feminist agenda.”
Indirectly challenging that claim is Catherine E. Rymph’s “Political Feminism and the Problem of Sarah Palin.” Rymph makes the case for regarding Palin as the legatee of a strain of G.O.P. feminism going back to the 1940s, when “Republicans made up a greater number of women serving in Congress” than did Democrats. Their party platform endorsed the Equal Rights Amendment in 1940 – four years before the Democrats did. (See also the abundant scholarship on the role of women activists on the right, which Kim Phillips-Fein discusses in “Conservatism: A State of the Field,” her thorough survey of recent historiography.)
Clinton and Palin were both “presented in sexualized ways” during their campaigns, Rymph points out: “Clinton was a castrating bitch, while Palin was a ‘MILF’ (presumably more flattering, but equally degrading).” Opponents were relentless in mocking Palin’s hair, clothes, days as a beauty-pageant competitor and the like, which Rymph cites as evidence that “Americans of all stripes can tolerate and even embrace sexism when it is used as a weapon against women with whom they disagree or whom they see as representing the wrong picture of womanhood.”
Thanks to Google, several contributors are able to document the racist and misogynistic rage churning throughout the primaries and campaigns. This is the third or fourth academic press publication I’ve read in the past few months to quote extensively from blog posts, comments sections, Facebook dialogs, and so forth. The effect is sometimes informative or illuminating, but usually it isn’t. You get used to seeing chunks of semiliterate ranting online, but it’s still mildly disconcerting to find it in cold type, properly cited, with scholarly apparatus.
The poisonous material quoted in “Michelle Obama, the Media Circus, and America’s Racial Obsession” by Mitch Kachun makes it perhaps the most horrifying chapter in the book. It is undoubtedly necessary to the job of showing the double dose of stereotyping (“angry black woman,” “Jezebel,” “baby mama”) that emerged in the 2008 campaign, and set the tone for much that’s followed. But in the future, historians might do well to focus on computerized content analysis of digital chatter, rather than exhibiting samples, because it does not take that long to reach the threshold marked ad nauseam.
I’ve discussed a few of the essays in Obama, Clinton, Palin, not attempted a comprehensive review. But one general impression bears mentioning, and a look through the index confirms it: there are no entries for Afghanistan, Iraq, Lehman Brothers, terrorism, torture, or the Troubled Asset Relief Program, nor any other major issue at stake in 2008. All are mentioned at some point in the book, but the index-maker can't be faulted for passing over them: they remain quite peripheral to the project.
What we get, then, is political history at a considerable remove from questions of governance. It’s certainly possible to argue that combat over race or gender in a presidential campaign may serve as a proxy for debates over social or economic policy. But that argument has to be made. Otherwise it seems as if the only issue in an election is whether the most powerful elected offices in the country should or should not be more demographically representative.
In any case, the 2012 presidential race has been pretty uneventful in the politics-of-difference department -- so far, anyway. I contacted Liette Gidlow, the editor of the book, to ask what she made of the contrast with four years ago.
“I do think that the 2008 campaigns expanded leadership opportunities for African-Americans, women, and others in a lasting way,” she responded by e-mail. “The political contests so far this year would seem to suggest otherwise; though Michele Bachmann and Herman Cain had their moments, ultimately their campaigns failed to win broad support among Republicans. But for the past 40 years, it has been the Democratic party, not the Republican party, that has been the driving force behind diversity in political representation, and with the primary contests limited to the Republicans this year, we shouldn't be surprised to see that the field has been dominated by white men. Which doesn't mean that in future presidential contests the Democrats will offer a slate that ‘looks like America’ or that the Republicans won't. But every time a candidate who departs from our usual expectations succeeds, it expands our ability to imagine, and ultimately to accept, different kinds of people as leaders.”
That seems fair enough, all in all. But it leaves open the question of what difference it makes, if any, after that. It certainly felt like something was changing on election night in 2008, but four years later, I often wonder what it was.