In an essay first published in 1948, the American folklorist and cultural critic Gershon Legman wrote about the comic book -- then a fairly recent development -- as both a symptom and a carrier of psychosexual pathology. An ardent Freudian, Legman interpreted the tales and images filling the comics’ pages as fantasies fueled by the social repression of normal erotic and aggressive drives. Not that the comics were unusual in that regard: Legman’s wider argument was that most American popular culture was just as riddled with misogyny, sadomasochism, and malevolent narcissism. And to trace the theory back to its founder, Freud had implied in his paper “Creative Writers and Daydreaming” that any work of narrative fiction grows out of a core of fantasy that, if expressed more directly, would prove embarrassing or offensive. While the comic books of Legman’s day might be as bad as Titus Andronicus – Shakespeare’s play involving incest, rape, murder, mutilation, and cannibalism – they certainly couldn’t be much worse.
But what troubled Legman apart from the content (manifest and latent, as the psychoanalysts say) of the comics was the fact that the public consumed them so early in life, in such tremendous quantity. “With rare exceptions,” he wrote, “every child who was six years old in 1938 has by now absorbed an absolute minimum of eighteen thousand pictorial beatings, shootings, stranglings, blood-puddles, and torturings-to-death from comic (ha-ha) books alone, identifying himself – unless he is a complete masochist – with the heroic beater, strangler, blood-letter, and/or torturer in every case.”
Today, of course, a kid probably sees all that before the age of six. (In the words of Bart Simpson, instructing his younger sister: “If you don't watch the violence, you'll never get desensitized to it.”) And it is probably for the best that Legman, who died in 1999, is not around to see the endless parade of superhero films from Hollywood over the past few years. For in the likes of Superman, he diagnosed what he called the “virus” of a fascist worldview.
The cosmos of the superheroes was one of “continuous guilty terror,” Legman wrote, “projecting outward in every direction his readers’ paranoid hostility.” After a decade of supplying Superman with sinister characters to defeat and destroy, “comic books have succeeded in giving every American child a complete course in paranoid megalomania such as no German child ever had, a total conviction of the morality of force such as no Nazi could even aspire to.”
A bit of a ranter, then, was Legman. The fury wears on the reader’s nerves. But he was relentless in piling up examples of how Americans entertained themselves with depictions of antisocial behavior and fantasies of the empowered self. The rationale for this (when anyone bothered to offer one) was that the vicarious mayhem was a release valve, a catharsis draining away frustration. Legman saw it as a brutalized mentality feeding on itself -- preparing real horrors through imaginary participation.
Nothing so strident will be found in Jason Dittmer’s Captain America and the Nationalist Superhero: Metaphors, Narratives, and Geopolitics (Temple University Press), which is monographic rather than polemical. It is much more narrowly focused than Legman’s cultural criticism, while at the same time employing a larger theoretical toolkit than his collection of vintage psychoanalytic concepts. Dittmer, a reader in human geography at University College London, draws on Homi Bhabha’s thinking on nationalism as well as various critical perspectives (feminist and postcolonial, mainly) from the field of international relations.
For all that, the book shares Legman’s cultural complaints to a certain degree, although none of his work is cited. But first, it’s important to stress the contrasts, which are, in part, differences of scale. Legman analyzed the superhero as one genre among others appealing to the comic-book audience -- and that audience, in turn, as one sector of the mass-culture public.
Dittmer instead isolates – or possibly invents, as he suggests in passing – a subgenre of comic books devoted to what he calls “the nationalist superhero.” This character-type first appeared not in 1938, with the debut of Superman, but in the early months of 1941, when Captain America hit the stands. Similar figures emerged in other countries, such as Captain Britain and (somewhat more imaginatively) Nelvana of the Northern Lights, the Canadian superheroine. What set them apart from the wider superhero population was their especially strong connection with their country. Nelvana, for instance, is the half-human daughter of the Inuit demigod who rules the aurora borealis. (Any relationship with actual First Nations mythology here is tenuous at best, but never mind.)
Since Captain America was the prototype -- and since many of you undoubtedly know as much about him as I did before reading the book, i.e., nothing -- a word about his origins seems in order. Before becoming a superhero, he was a scrawny artist named Steve Rogers who followed the news from Germany and was horrified by the Nazi menace. He tried to join the army well before the U.S. entered World War II but was rejected as physically unfit. Instead, he volunteered to serve as a human guinea pig for a serum that transformed him into an invincible warrior. And so, as Captain America -- outfitted with shield and spandex in the colors of Old Glory -- he went off to fight the Red Skull, who was not only a supervillain but a close personal friend of Adolf Hitler.
Now, no one questions Superman’s dedication to “truth, justice, and the American way,” but the fact remains that he was an alien who just happened to land in the United States. His national identity is, in effect, luck of the draw. (I learn from Wikipedia that one alternate-universe narrative of Superman has him growing up on a Ukrainian collective farm as a Soviet patriot, with inevitable consequences for the Cold War balance of power.) By contrast, Dittmer’s nationalist superhero “identifies himself or herself as a representative and defender of a specific nation-state, often through his or her name, uniform, and mission.”
But Dittmer’s point is not that the nationalist superhero is a symbol for the country or a projection of some imagined or desired sense of national character. That much is obvious enough. Rather, narratives involving the nationalist superhero are one part of a larger, ongoing process of working out the relationship between the two entities yoked together in the term “nation-state.”
That hyphen is not an equals sign. Citing feminist international-relations theorists, Dittmer suggests that one prevalent mode of thinking counterposes “the ‘soft,’ feminine nation that is to be protected by the ‘hard,’ masculine state” -- which is also defined, per Max Weber, as claiming a monopoly on the legitimate use of violence. From that perspective, the nationalist superhero occupies the anomalous position of someone who performs a state-like role (protective and sometimes violent) while also trying to express or embody some version of how the nation prefers to understand its own core values.
And because the superhero genre in general tends to be both durable and repetitive (the supervillain is necessarily a master of variations on a theme), the nationalist superhero can change, within limits, over time. During his stint in World War II, Captain America killed plenty of people in combat with plenty of gusto and no qualms. It seems that he was frozen in a block of ice for a good part of the 1950s, but was thawed out somehow during the Johnson administration without lending his services to the Vietnam War effort. (He went to Indochina just a couple of times, to help out friends.) At one point, a writer was on the verge of turning the Captain into an overt pacifist, though the publisher soon put an end to that.
Even my very incomplete rendering of Dittmer’s ideas here will suggest that his analysis is a lot more flexible than Legman’s denunciation of the superhero genre. The book also makes more use of cross-cultural comparisons. Without reading it, I might never have known that there was a Canadian superhero called Captain Canuck, much less the improbable fact that the name is not satirical.
But in the end, Legman and Dittmer share a sense of the genre as using barely conscious feelings and attitudes in more or less propagandistic ways. They echo the concerns of one of the 20th century's definitive issues: the role of the irrational in politics. And that doesn't seem likely to become any less of a problem any time soon.
In his inaugural address, President Obama referred repeatedly to education – but exclusively to education in STEM disciplines, as if only those fields had a defensible public purpose. Sadly, this is no aberration: in December the White House issued a report entitled "Transformation and Opportunity: The Future of the U.S. Research Enterprise," which completely overlooked research in the humanities and social sciences, even in its brief history of the growth of research at American universities.
Such a narrow focus is surprising, since the president himself apparently consults historians (and probably other scholars); and it is counterproductive, whether in strict dollars-and-cents terms or broader ones. Some politicians have gone further, aggressively asserting that various humanities and social science disciplines are useless, and attempting to impose higher tuition on students who major in them. That makes it all the more important that those who know better actively affirm the value of teaching and research beyond the STEM fields.
I will focus here on the case for history: it is what I know best, and since history straddles the line between humanities and social sciences, many arguments for its importance apply to various allied fields. One might loosely group these into three categories, ranging from the most social scientific to the most humanistic. The first applies to lessons drawn from circumstances relatively close to our own; the second to learning about times and places we know are quite different. The third applies to research showing that some currently accepted ideas are actually fairly novel, and that people not so different from us did without them; engaging the concepts they used instead may help us see additional possibilities in the world, whether for good or ill.
Examples of the first category underlie almost any sound public policy debate, as well as many private deliberations. Take, for example, the 2009 stimulus bill. By itself, no mathematical calculation could assess the relative accuracy of the more-or-less Keynesian models suggesting that the stimulus would help the economy and the "real business cycle" models, which predicted that it would be an expensive waste. The difference lay in historical research about how various modern economies had responded to historically specific policy initiatives. Other examples abound, though most are less well-known: closest to home in this regard would be evaluating options for STEM investment in light of the vast literature on what has given rise to specific clusters of innovation in the past, and which innovations proved most beneficial. One would also expect development efforts to gain from examining research on past relationships among, say, education, urbanization, birthrates, and investment.
The benefits of research into understanding differences abound in the context of policy decisions, emerging with special clarity in what we might call "area studies" knowledge – an enormous part of the growth of U.S. research universities after World War II. Surely we could have saved lives and money had policy-makers known more about religious differences within Iraqi society, the political and social history of Afghanistan, or class relations and popular nationalism in Vietnam before military interventions in those places. The same, I would argue, goes for using research into the evolution of Chinese notions of ethnicity, nationality, race, and geopolitics to understand likely governmental and popular reactions to possible American policies on Tibet, trade, the Diaoyu/Senkaku Islands, and so on.
Perhaps less obvious, but equally important, is the usefulness of research that shows that many ideas we may take to be "natural," or at least of very long standing, are actually relatively new. Some of these insights may be "just" a contribution to increased self-understanding, but others bear directly on public issues. Urgent debates over how fixed the concept of "marriage" has been come first to mind, but there are many more actual and potential examples. Recognizing that the term "ethnic group" is barely 75 years old reminds us how mutable our understandings of the basis and implications of human groupings are; that "gross national product" is of roughly the same vintage suggests that maximizing that particular measurement is not inevitably the paramount goal of economic policy.
It hardly seems a stretch to think that a world facing our current challenges might benefit from awareness of other ways that people have thought about the relationship of work, citizenship, adult status, "independence" and dignity, or about consumption, economic growth, leisure and the nature of progress. Or, to take some narrower examples, consider the implications of learning how relatively recently life insurance went from seeming like morally dubious gambling on death to a taken-for-granted tool for managing risk. Or that, while (as Thomas Ricks noted in a recent Atlantic) almost no U.S. generals were removed from their commands for poor performance during Vietnam, Afghanistan or Iraq, many were so removed during World War II – suggesting that the recent situation does not represent an inevitable feature of government, much less of hierarchy generally. Historical knowledge of this kind does not provide lessons as straightforward as “deficit spending can work,” but it can add significantly to our understanding of what is possible, for better or worse, and of how things may become, or cease to be, unthinkable.
Research that produces these results, both testing earlier certainties and responding to new questions, thus seems a useful, even necessary, complement to research in the STEM fields. Fortunately, most historical research is also relatively cheap, but it does not thrive on complete neglect.
Kenneth Pomeranz is University Professor of History at the University of Chicago and president of the American Historical Association. The views expressed here are his alone.
Everything would have been perfectly ordinary that October morning in my freshman writing course at Stanford University. Bright autumn light reflected up from the Main Quad to our third floor. Unfed, sleepy-eyed freshmen offered ideas about the assigned reading, which I tracked on the board.
As I often do, I drew a doodle to describe a concept in the reading. This doodle — so I thought — demanded less artistry and complexity than my usual sketches of Thomas Hobbes’s "arrant Wolfe," for which I hash out two mangy-looking wolves squinting at each other, or Immanuel Kant’s famous "crooked timber," for which a bent log suffices to get the idea across. Here, I simply tossed up a rectangle with a triangle inside.
My students gasped.
"What’s wrong?" I asked.
“Um … everything,” they wagered cautiously.
"Well," I tried. "This is just like the one Lockhart shows in his essay." I was referring to a drawing in Paul Lockhart’s famous 2002 "Lament" about the state of mathematics education. Here it is, precisely as it appears in the essay, not the version I drew in class.
"Sorry … no … not really, well … it’s not even close," they ventured, as if not to hurt my feelings.
My students, mostly young aspiring mathematicians, found themselves so ill at ease here because their teacher with a humanities doctorate had not bothered to notice that the triangle inside the rectangle touches both corners of one side and thus forms several other triangles. My doodle — whatever it looked like, I can’t remember — was simply an approximation, a lonely triangloid adrift in a rectangular sea of lopsidedness.
My students had expected greater precision. After all, the course title "Rigorous and Precise Thinking" had suggested as much. This was also a college writing course, which, as the rumor goes, is supposed to be a smackdown of style, argument and organization, where freshmen quickly learn they must jettison comfortable high school formats and every illusion of their personal literary genius. Expectations for rigor and many other new adventures ran high in this new course, an experimental hybrid college writing/mathematical thinking and proof writing class, one of five liberal arts courses in a new program called Education as Self-Fashioning.
Like the other four ESF classes, this one intended to "engage actively in the types of thinking promoted through these different conceptions of education for life, so as to try those lives on for ourselves ..." and offer students a “chance to shape [their] educational aspirations in dialogue with fellow students and an exciting group of faculty from across a wide range of disciplines — from the humanities and social sciences through the natural sciences and mathematics." I was the writing instructor paired with Professor Ravi Vakil, an American-Canadian mathematician working in algebraic geometry.
Vakil invented the course concept as a rejoinder to C.P. Snow’s "Two Cultures" hypothesis with the hope of showing undergrads, and even the world, that writing in the humanities and writing in math gained force and excellence through similar structures of precise reasoning. Vakil more than delivered on the rigor and precision. His lectures introduced students to proof writing, number theory, set theory, and many other advanced forms of math most academics expect to address only with advanced university students. For my part, I was simply to help students elaborate the readings from Plato, Descartes, Douglas Hofstadter, Bertrand Russell, Paul Lockhart and many others, while teaching writing.
Tellingly, my imprecise doodle proved to be not my first, second, nor even third example of lack of rigor. In fact, the moment seemed to demonstrate the deep divide between Snow’s "two cultures," since I had evidently betrayed a lack of familiarity with the basic truths of measurement, mass, or acceleration -- pretty much the scientific equivalent of a humanist asking skeptically, "Can you read?" Without a doubt, much of that difference proved disciplinary -- the very limit this course hoped to transgress.
Yet we experienced no ordinary rift between the two cultures. The class had read Snow’s famous 1959 Rede Lecture and chuckled at his description of subverbal grunting mathematicians ruining a young humanist’s dinner party experience. My students saw themselves as beyond what old Stanford lingo designates as the split between "fuzzies" and "techies." Equally interested in all things humanist and STEM -- Shakespeare and thermodynamics alike -- these students insisted that math and math culture far surpassed the cartoonish figures of Snow’s dinner party. Nor (my students believed) were humanists so incorrigibly "fuzzy" as to be unable to reproduce a mathematical doodle -- or were they?
Had I inadvertently proven Snow’s point, right before the eyes of my epistemologically optimistic students? In fact, both the students and I discovered that many of the clichés about our respective fields proved instructive. I really do need to be more careful in my doodling — and thinking about my doodling — if I am drawing triangles (with mathematical aspirations) and not wolves (no matter how humanistically inclined).
The awkward doodle moment proved not the existence of two never-the-twain-shall-meet cultures, but rather a need for me to look more closely at the other side. Once I recovered from the initial jolt of difference, I began to realize the opportunity for me to reconsider my pedagogy. Not having seen a university math professor teach proof writing before, I witnessed several fascinating interactions while attending Vakil’s sections of our course. Most striking, when Vakil wrote a problem on the board, the room jumped to life with students calling out and frantically waving their arms. He would ask: "How can you prove the square root of 2 is irrational?" and it was as though Vakil were standing at the board waving a bloody steak at a group of famished tigers. Everyone wanted to offer some solution.
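Vakil’s question has a famously compact answer; as a sketch (the standard proof by contradiction, reconstructed here rather than transcribed from his lecture), it runs:

```latex
% Claim: \sqrt{2} is irrational.
% Suppose, for contradiction, that \sqrt{2} = p/q with p, q integers in lowest terms.
\[
\sqrt{2} = \frac{p}{q}
\quad\Longrightarrow\quad
2q^2 = p^2,
\]
% so p^2 is even, hence p is even: write p = 2k. Then
\[
2q^2 = (2k)^2 = 4k^2
\quad\Longrightarrow\quad
q^2 = 2k^2,
\]
% so q is even as well -- contradicting the assumption that p/q was in lowest terms.
```

The appeal of the exercise for freshmen is easy to see: the entire argument fits on a chalkboard and uses nothing beyond parity.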
Seldom have I been bombarded with solutions or suggestions when I ask students to show me "textual proof" that Sigmund Freud has a Hobbesian view of nature … hint hint … homo homini … wolf sketch … Civilization and Its Discontents, try page number and reference … Freud 1930a [1929], SE 21:111. That special classroom enthusiasm surely arose from Vakil’s charisma and love of his subject, but the response was new to me because the humanities courses I know demand a very different kind of invention. Vakil asked a question, and students racked their brains trying to imagine which set of mathematical tools or ideas they might use to solve the problem. Confident that they all shared these tools, or at least knew of them, the students seemed to feel much more at ease trying out different approaches.
In humanities courses, previous knowledge certainly helps, especially with literary references, but at the end of the day, a humanist’s tools remain much more contested and may not be applicable in different contexts. For example, students asked me why I requested that they not use the first-person plural "we." I told them writing in the humanities differs from math, where one can simply write in a proof "we assume that x=2." Humanists can be sure neither who that "we" is, nor what to "assume," nor how one can know x. All such terms are permanently available for debate.
In contrast, the mathematicians’ particular disciplinary certainty also revealed a fierce loyalty and love of the subject, which produced a very different discourse from what I traditionally hear from humanities students who feel a strong affinity with their work. These math students spoke a Russellian language of awe toward the "cold and austere" "supreme beauty" and "elegance" of math. Perhaps other humanists have encountered students who express an emphatic humility before their subjects, but this for me was as new as the students’ shock at my imprecise drawing. For I learned that day that my students had not yet adopted a humanistic skepticism toward mathematical precision. For them precision is very real, especially in a world of increasing complexity and Gödelian incompleteness.
For humanists, precision lies elsewhere, side by side with ambiguity, and we pursue it with nuance rather than with proofs. My task therefore became one of translation. I understood little of the doodles and equations that Vakil and the students so hotly debated in his sections, but I knew that I had helped my students articulate arguments within the very different confines of humanistic inquiry. Where they were convinced of certain mathematical truths in the landscape of defined terms, they nevertheless arrived in my class with the classic freshman weakness for enormous themes.
Asked to find “precise” topics in math to write about for their research papers, nearly all 29 students first chose grandiose topics like "the definition of intuition," "the connections between art and math" or "math and humanistic knowledge." With such great ambitions in mind, they also fervently believed in math as a liberal art capable of teaching the exact same virtues of critical (self) reflection as any of the great classical texts I teach from Greek virtue ethics to Rawls.
Most provocatively, they claimed that by practicing mathematical reasoning they were indeed preparing themselves, in the fashion of liberal arts education, for ethical citizenship. They claimed with confidence that their rigorous and precise thinking could lead them to ethical reasoning as well as a discussion of Plato’s “Apology” could. For my part, I could not see how debating a triangle, or even practicing some form of applied math such as statistics, would help me lead the "examined life" in a qualitative fashion.
In class, Vakil often reflected on the limits of mathematical reasoning in a mode reminiscent of Greek virtue ethics: perfecting one’s art, whether mathematical or literary, is surely a virtue, but not one that can replace ethical action. When asked whether excellence in math could prevent one from doing evil, no one doubted the inadequacy of that proposition. History has no shortage of evil uses of math, and the students could quite easily enumerate them. Yet many of the students persisted in their strong claims for math.
One student asserted a mathematical imperative in times of emergency: "Just imagine it’s war or a crisis: you have a moral obligation to shut up and do the math." By which she meant one is ethically compelled to run a statistical analysis to develop a more concrete understanding of actual dangers. Another student expressed less certainty about quantitative methods. "Statistics aren’t bulletproof, you know; what matters ultimately is thinking clearly, and math trains the mind for such emergencies."
Vakil softened these strong claims for both applied and pure math:
I'm less certain that this [mathematical reasoning] in any way replaces the approach to the virtues of critical self-reflection through great philosophical texts. I hope that our students will better appreciate the importance of such texts, because of an appreciation of the problems that earlier thinkers were grappling with (and that we should grapple with today). Similarly, I doubt that this is sufficient to lead them to ethical reasoning, although I would make a milder claim that thinking clearly in this way can assist in carrying out ethical reasoning.
Vakil also elaborated ways in which math could serve ethics, both by providing empirical data and asking Socratic questions about knowledge and decision-making. In the end, we hoped the students finished the course knowing a bit more about practices of rigorous thinking in our respective disciplines, and that they would see these as equally essential and complementary. Could this sprawling, seven-unit course provide a model for future courses? We’re not sure, but are happy to share our data and materials.
Ruth Starkman writes on higher education and teaches college writing, biomedical ethics and social media at Stanford University.