In 1892, the president of Leland Stanford University, David Starr Jordan, managed to convince Ewald Flügel, a scholar at the University of Leipzig, to join the young institution’s rudimentary English department. Flügel had received his doctoral degree in 1885 with a study of Thomas Carlyle under the aegis of Richard Wülcker, one of the founders of English studies in Europe. Three years later, he finished his postdoctoral degree, with a study on Sir Philip Sidney, and was appointed to the position of a Privatdozent at Leipzig.
The position of the Privatdozent was one of the most fascinating features of the modern German university in the late 19th century. Although endowed with the right to direct dissertations and teach graduate seminars, the position most often offered only the smallest of base salaries, leaving the scholar to earn the rest of his keep from students who paid him directly for enrolling in his seminars and lectures. In a 1903 Stanford commencement speech Flügel warmly recommended that his new colleagues in American higher education embrace the Privatdozent concept:
What would the faculty of Stanford University say to a young scholar of decided ability, who, one or two years after his doctorate (taken with distinction), having given proof of high scholarly work and spirit, should ask the privilege of using a certain lecture room at a certain hour for a certain course of lectures? What would Stanford University say, if – after another year or two this young man, unprotected but regarded with a certain degree of kindly benevolence […], this lecturer should attract more and more students (not credit hunters), if he should become an influence at the university? What if the university should become in the course of years a perfect hive of such bees? […] It would modify our departmental boss-system, our worship of "credits," and other traits of the secondary schools; it would stimulate scholarly life at the university; it would foster a healthy competition in scholarly work, promote survival of the fittest, and keep older men from rusting.
Unabashedly Darwinian, Flügel was convinced that his own contingent appointment back in Germany had pushed him, and pushed all Privatdozenten, to become competitive, cutting-edge researchers and captivating classroom teachers until one of the coveted state-funded chair positions might become available. He held that the introduction of this specific academic concept was instrumental in furthering the innovative character and international reputation of higher education in Germany. Flügel himself had thrived under the competitive conditions, of course, and his entrepreneurial spirit led him to make a number of auspicious foundational moves: He took on co-editorship of Anglia, today the oldest continually published journal worldwide focusing exclusively on the study of “English.” And he founded Anglia Beiblatt, a review journal that quickly established an international reputation.
Despite his formidable achievements, however, he could not secure a chair position as quickly as he hoped. Since he was among the very few late 19th-century German professors of English who possessed near-native proficiency, he began to consider opportunities overseas. Even the dire warnings from a number of East Coast colleagues ("the place seems farther away from Ithaca, than Ithaca does from Leipzig"; "they have at Stanford a library almost without books") could not scare him away. Once he had begun his academic adventure in the Californian wilderness, he took on a gargantuan research project, the editorship of the Chaucer Dictionary, offered to him by Frederick James Furnivall, the most entrepreneurial among British Chaucerians and founder of the Chaucer Society. As soon as he took over from colleagues who had given up on the project, he found, in this pre-computer age of lexicography, "slips of all sizes, shapes, colors, weights, and textures, from paper that was almost tissue paper to paper that was almost tin. Every slip contained matter that had to be reconsidered, revised, and often added to or deleted.”
Undeterred by this disastrous state of affairs, he decided to resolve the problem with typically enterprising determination: Although grant writing was uncharted territory for him, he applied for and secured three annual grants for $7,500 and one for $11,000 (altogether the equivalent of at least $300,000 in today’s money!) from the Carnegie Foundation for the Advancement of Teaching between 1904 and 1907 "for the preparation of a lexicon for the works of Geoffrey Chaucer," bought himself some time away from Stanford, and signed up a dozen colleagues and students in Europe and North America to assist him in his grand plan.
His and their work would become the foundation of the compendious Middle English Dictionary which now graces every decent college library in the English-speaking world and beyond. Beyond the work on the Chaucer Dictionary, the completion of which he never saw because of his sudden death in 1914, he maintained an impressive publication record and served in leadership positions such as the presidency of the Pacific Branch of the American Philological Association. When Flügel passed away, his American colleagues celebrated his "enthusiastic idealism" and remembered him as "more essentially American" than the other foreign-born colleagues they knew, an appreciation due to his entrepreneurial spirit.
I am relating this story to counteract the often defeatist chorus sung by colleagues in English and other humanities departments when confronted with a request, usually from impatient administrators in more grant-active areas, for at least giving grant writing and other entrepreneurial activities a try. There is no doubt that, compared to the situation in most other Western democracies, government support through the National Endowments for the Humanities and Arts is small in the U.S. However, the number of private foundations, from the American Council of Learned Societies through the Spencer Foundation, makes up for some of the difference.
In my experience, what keeps the majority of English professors from even considering an involvement with entrepreneurial activities is that they deem them an unwelcome distraction from the cultural work they feel they have been educated, hired, and tenured to do. Most grant applications require that scholars explain not only the disciplinary, but also the broader social and cultural relevance of their work. In addition, they require that scholars put a monetary value on their planned academic pursuits and create a bothersome budget sheet, learn how to use a spreadsheet, develop a timeline, and compose an all-too-short project summary — all grant-enabling formal obstacles that many colleagues consider beneath the dignity of their profession.
In fact, many of us believe that the entire discipline of English and the humanities in general may have been created so as to counterbalance the entrepreneurial principles and profit motives which, from within the English habitat, seem to have a stranglehold over work in colleges of business, computing, engineering, and science. However, by making English a bastion of (self-)righteous resistance against the evil trinity of utilitarianism, pragmatism, and capitalism, English professors have relinquished the ability to be public intellectuals and to shape public discourse. After all, too many of our books and articles speak only to ourselves or those in the process of signing up to our fields at colleges and universities.
Ewald Flügel labored hard to remain socially and politically relevant even as he was involved in professionalizing and institutionalizing the very discipline we now inhabit. Recognizing that the skills and kinds of knowledge provided by his emerging field were insufficient for solving complex real-world issues, he became a proponent of a more co-disciplinary approach to academic study, a kind of cultural studies scholar long before that term was invented. Most of us would agree that he applied his formidable linguistic and literary expertise to a number of problematic goals, speaking to academic and public audiences about how the steadily increasing German immigration and the powers of German(ic) philology should and would inevitably turn the United States into an intellectual colony of his beloved home country. However, even if his missionary zeal reeks of the prevailing nationalist zeitgeist, I can appreciate his desire to experiment, innovate, and compete to make the study of historical literature and language as essential to the academy and to humanity as did his approximate contemporaries Roentgen, Eastman, Edison, Diesel, Marconi, and Pasteur with their scientific endeavors.
Perhaps his example might entice some of us to revisit and even befriend the idea of entrepreneurship, especially when it involves NGOs or the kind of for-profit funding sources the Just Enough Profit Foundation might define as (only) "mildly predatory" or (preferably) "somewhat," "very" and "completely humanistic." At the very least, Flügel’s biography provides evidence that today’s prevailing anti-entrepreneurial mindset has not always been among the constitutive elements defining the "English" professoriate.
There are encouraging signs that some colleagues in English studies have begun to abandon that mindset: George Mason University’s Center for Social Entrepreneurship (directed by Paul Rogers, a professor of English) and the University of Texas consortium on Intellectual Entrepreneurship (directed by Richard Cherwitz, a professor of rhetoric and communication) generate promising cross-disciplinary collaboration between the academy and society; English professors at Duke, Georgia Tech, and Ohio State, funded by the Bill & Melinda Gates Foundation, are among the national leaders testing the pedagogical viability of the controversial massive open online courses (MOOCs); and Elliott Visconsi of the University of Notre Dame and Katherine Rowe of Bryn Mawr College created Luminary Digital Media LLC, a startup that distributes their "The Tempest for iPad," an application designed for social reading, authoring, and collaboration for Shakespeare fans with various levels of education. I believe Ewald Flügel would find these projects exciting.
Richard Utz is professor and chair in the School of Literature, Media, and Communication at the Georgia Institute of Technology.
After yet another joke on "A Prairie Home Companion" about an English major who studies Dickens and ends up at a fast-food restaurant frying chickens, I couldn’t take it anymore. I had to write.
You and I go way back. I started listening to you during my undergraduate years as an English major in the mid-'80s and continued while in graduate school in English literature, when making a nice dinner and listening to "Prairie Home" was my Saturday night ritual. I get that you’re joking. I get the whole Midwestern takedown of — and fascination with — cultural sophistication that animates your show. I get that you yourself were an English major. And I get affectionate irony.
I’m afraid, however, that jokes about bitter and unemployed English majors that are already unfortunate in an economy humming along at 4.5 percent unemployment are downright damaging when the unemployment rate is near 8 percent — and some governors, in the name of jobs, are calling for liberal arts heads. Likewise, the most recent annual nationwide survey of the attitudes of college freshmen reported an all-time high in the number of students who said that "to be able to get a better job" (87.9 percent) and "to be able to make more money" (74.6 percent) were "very important" reasons to go to college. Not surprisingly, the same survey reported that the most popular majors were the most directly vocational: business, the health professions, and engineering (biology was also among the most popular).
The truth, however, is that reports of the deadliness of English to a successful career are greatly exaggerated. According to one major study produced by the Georgetown University Center on Education and the Workforce, the median income for English majors with a bachelor’s but no additional degree is $48,000. This figure is just slightly lower than that for bachelor’s degree holders in biology ($50,000), and slightly higher than for those in molecular biology or physiology (both $45,000). It’s the same for students who received their bachelor’s in public policy or criminology (both $48,000), slightly lower than for those who received their bachelor’s in criminal justice and fire protection ($50,000) and slightly higher than for those who received it in psychology ($45,000).
Another study by the same center paints a similar picture with respect to unemployment. In this study, the average unemployment rate for recent B.A. holders (ages 22-26) over the years 2009-10 was 8.9 percent; for English it was 9.2 percent. Both rates are higher than we would wish, but their marginal difference is dwarfed by that between the average for holders of the B.A. and that of high school graduates, whose unemployment rate during the same period was 22.9 percent (also too high).
Of course, majors in engineering and technology, health, and business often have higher salary averages, between $60,000 (for general business) and $120,000 (for petroleum engineering) and marginally lower unemployment rates, especially for newly minted B.A.s. But there’s nothing reckless about majoring in English compared to many other popular majors. Students who love business or engineering, or who are good at them and simply want to earn the highest possible income, make reasonable choices to pursue study in these fields. But students who want to major in English and are good at it should not believe that they are sacrificing a livelihood to pursue their loves. And students who don’t love what they are learning are less likely to be successful.
Because this kind of information is readily available, it makes me wonder why you, Garrison — and you’re not alone — continue to dump on English as a major. I think it must be because in the world of Lake Wobegon the English major has cultural pretensions that need to be punished with loneliness and unemployment. Likewise, the Midwesterner in you can’t believe that anyone who gets to do these things that you yourself love so much — revel in the pleasures of language and stories — could also be rewarded with a decent job.
Garrison, when it comes to English majors, let your inner Midwesterner go. You can study English and not be a snob. And you can study English and not fail in the world. I know you know these things; you’ve lived them. So my plea to you, Garrison, is this. Your "Writer’s Almanac" does a terrific job promoting the love of language and the study of English. But in my media market it plays at 6:35 a.m. Even where it gets better play, it has nowhere near the prominence of "A Prairie Home Companion." Can you find a way on the latter to tell stories about English majors that don’t involve failure? These stories would make a fresh alternative on your show to a joke way past its sell-by date. And they might make a few parents less likely to discourage their kids from studying English.
And here’s my final plea to all former English majors. "A Prairie Home Companion" can help, but English also needs its "CSI" or "Numb3rs." I know some of you are out there now writing for television and film. I admit it will take some creative chops to develop stories about English study that are as glamorous and engaging as crime drama. But you were an English major. I know you can do it. And it’s time to pay it forward.
Chair, English Department
George Mason University
P.S. to all former English majors: Since writing this letter I’ve learned about a new Fox TV show called "The Following" that features an English professor. He’s a serial killer who inspires others to kill. Maybe next time the English professor could be the hero? Thanks.
In an essay first published in 1948, the American folklorist and cultural critic Gershon Legman wrote about the comic book -- then a fairly recent development -- as both a symptom and a carrier of psychosexual pathology. An ardent Freudian, Legman interpreted the tales and images filling the comics’ pages as fantasies fueled by the social repression of normal erotic and aggressive drives. Not that the comics were unusual in that regard: Legman’s wider argument was that most American popular culture was just as riddled with misogyny, sadomasochism, and malevolent narcissism. And to trace the theory back to its founder, Freud had implied in his paper “Creative Writers and Daydreaming” that any work of narrative fiction grows out of a core of fantasy that, if expressed more directly, would prove embarrassing or offensive. While the comic books of Legman’s day might be as bad as Titus Andronicus – Shakespeare’s play involving incest, rape, murder, mutilation, and cannibalism – they certainly couldn’t be much worse.
But what troubled Legman apart from the content (manifest and latent, as the psychoanalysts say) of the comics was the fact that the public consumed them so early in life, in such tremendous quantity. “With rare exceptions,” he wrote, “every child who was six years old in 1938 has by now absorbed an absolute minimum of eighteen thousand pictorial beatings, shootings, stranglings, blood-puddles, and torturings-to-death from comic (ha-ha) books alone, identifying himself – unless he is a complete masochist – with the heroic beater, strangler, blood-letter, and/or torturer in every case.”
Today, of course, a kid probably sees all that before the age of six. (In the words of Bart Simpson, instructing his younger sister: “If you don't watch the violence, you'll never get desensitized to it.”) And it is probably for the best that Legman, who died in 1999, is not around to see the endless parade of superhero films from Hollywood over the past few years. For in the likes of Superman, he diagnosed what he called the “virus” of a fascist worldview.
The cosmos of the superheroes was one of “continuous guilty terror,” Legman wrote, “projecting outward in every direction his readers’ paranoid hostility.” After a decade of supplying Superman with sinister characters to defeat and destroy, “comic books have succeeded in giving every American child a complete course in paranoid megalomania such as no German child ever had, a total conviction of the morality of force such as no Nazi could even aspire to.”
A bit of a ranter, then, was Legman. The fury wears on the reader’s nerves. But he was relentless in piling up examples of how Americans entertained themselves with depictions of antisocial behavior and fantasies of the empowered self. The rationale for this (when anyone bothered to offer one) was that the vicarious mayhem was a release valve, a catharsis draining away frustration. Legman saw it as a brutalized mentality feeding on itself -- preparing real horrors through imaginary participation.
Nothing so strident will be found in Jason Dittmer’s Captain America and the Nationalist Superhero: Metaphors, Narratives, and Geopolitics (Temple University Press), which is monographic rather than polemical. It is much more narrowly focused than Legman’s cultural criticism, while at the same time employing a larger theoretical toolkit than his collection of vintage psychoanalytic concepts. Dittmer, a reader in human geography at University College London, draws on Homi Bhabha’s thinking on nationalism as well as various critical perspectives (feminist and postcolonial, mainly) from the field of international relations.
For all that, the book shares Legman’s cultural complaints to a certain degree, although none of his work is cited. But first, it’s important to stress the contrasts, which are, in part, differences of scale. Legman analyzed the superhero as one genre among others appealing to the comic-book audience -- and that audience, in turn, as one sector of the mass-culture public.
Dittmer instead isolates – or possibly invents, as he suggests in passing – a subgenre of comic books devoted to what he calls “the nationalist superhero.” This character-type first appears not in 1938, with Superman’s debut, but in the early months of 1941, when Captain America hit the stands. Similar figures emerged in other countries, such as Captain Britain and (somewhat more imaginatively) Nelvana of the Northern Lights, the Canadian superheroine. What set them apart from the wider superhero population was their especially strong connection with their country. Nelvana, for instance, is the half-human daughter of the Inuit demigod who rules the aurora borealis. (Any relationship with actual First Nations mythology here is tenuous at best, but never mind.)
Since Captain America was the prototype –- and since many of you undoubtedly know as much about him as I did before reading the book, i.e., nothing – a word about his origins seems in order. Before becoming a superhero, he was a scrawny artist named Steve Rogers who followed the news from Germany and was horrified by the Nazi menace. He tried to join the army well before the U.S. entered World War II but was rejected as physically unfit. Instead, he volunteered to serve as a human guinea pig for a serum that transforms him into an invincible warrior. And so, as Captain America -- outfitted with shield and spandex in the colors of Old Glory – he went off to fight the Red Skull, who was not only a supervillain but a close personal friend of Adolf Hitler.
Now, no one questions Superman’s dedication to “truth, justice, and the American way,” but the fact remains that he was an alien who just happened to land in the United States. His national identity is, in effect, luck of the draw. (I learn from Wikipedia that one alternate-universe narrative of Superman has him growing up on a Ukrainian collective farm as a Soviet patriot, with inevitable consequences for the Cold War balance of power.) By contrast, Dittmer’s nationalist superhero “identifies himself or herself as a representative and defender of a specific nation-state, often through his or her name, uniform, and mission.”
But Dittmer’s point is not that the nationalist superhero is a symbol for the country or a projection of some imagined or desired sense of national character. That much is obvious enough. Rather, narratives involving the nationalist superhero are one part of a larger, ongoing process of working out the relationship between the two entities yoked together in the term “nation-state.”
That hyphen is not an equals sign. Citing feminist international-relations theorists, Dittmer suggests that one prevalent mode of thinking counterposes “the ‘soft,’ feminine nation that is to be protected by the ‘hard,’ masculine state” -- which is also defined, per Max Weber, as claiming a monopoly on the legitimate use of violence. From that perspective, the nationalist superhero occupies the anomalous position of someone who performs a state-like role (protective and sometimes violent) while also trying to express or embody some version of how the nation prefers to understand its own core values.
And because the superhero genre in general tends to be both durable and repetitive (the supervillain is necessarily a master of variations on a theme), the nationalist superhero can change, within limits, over time. During his stint in World War II, Captain America killed plenty of people in combat with plenty of gusto and no qualms. It seems that he was frozen in a block of ice for a good part of the 1950s, but was thawed out somehow during the Johnson administration without lending his services to the Vietnam War effort. (He went to Indochina just a couple of times, to help out friends.) At one point, a writer was on the verge of turning the Captain into an overt pacifist, though the publisher soon put an end to that.
Even my very incomplete rendering of Dittmer’s ideas here will suggest that his analysis is a lot more flexible than Legman’s denunciation of the superhero genre. The book also makes more use of cross-cultural comparisons. Without reading it, I might never have known that there was a Canadian superhero called Captain Canuck, much less the improbable fact that the name is not satirical.
But in the end, Legman and Dittmer share a sense of the genre as using barely conscious feelings and attitudes in more or less propagandistic ways. They echo the concerns of one of the 20th century's definitive issues: the role of the irrational in politics. And that doesn't seem likely to become any less of a problem any time soon.
The first distinguished speaker at the recent forum on "Justifying the Humanities" followed a recent trend by asserting that the humanities were invented in the American university of the 1930s as an organizational convenience. The second distinguished speaker explained that in their current "somewhat dated" form the humanities are a product of the Cold War, developed in the 1950s through courses in the Great Books and Western Civilization. By the time the final distinguished speaker began his remarks I feared that we would be told the humanities were invented yesterday in sudden meta-post-Postmodernist fabrication.
First, the good news. It is true that the familiar triadic American curricular structure of liberal education (natural science, social science and the humanities) is relatively recent. Hence, the form of humanistic studies is not chiseled in ancient marble, but has changed and can and should continue to change in response to new circumstances.
The bad news is that recent history is only a small part of the story. The foreshortening perspective on the humanities comes at a price. It’s not just that it overlooks a tradition that reaches back to the Stoic philosophers of ancient Greece, Cicero in ancient Rome, Petrarch and Boccaccio in Italy and the amazing scholars of the Renaissance. Nor is it just that we deprive ourselves of the benefits of breakthroughs in contemporary scholarship. It’s that we risk losing sight of what motivated the great era of humanism.
Renaissance humanists, such as Joseph Justus Scaliger, Marsilio Ficino and Lorenzo Valla, applied immense energy and learning to establishing reliable texts of ancient authors, commenting on them, making them accessible through translations, and teaching them in a way that created an understanding of human beings and moral agency not restricted by the dictates of medieval theology. Philosophy, literature, history and the visual arts were transformed by such humanism. Soon universities were transformed as well.
When I asked Paul Grendler, a professor of history emeritus at the University of Toronto and an expert on education in the Renaissance, about this transition, he reminded me that this change was revolutionary. "A group of 15th-century Italian scholars decided that the best way to train men (and a few women) to be learned, eloquent, and morally responsible leaders of society was to introduce them to the great authors and texts of ancient Greece and Rome.… They coined the phrase studia humanitatis (humanistic studies) for this new, revolutionary school curriculum." This transformative sense of purpose accounts, I believe, for the energy and enduring excitement of their work.
At the university level great changes began around 1425 when humanists began teaching in Italian universities such as Bologna, Florence and Padua. They taught rhetoric, poetry and what they sometimes called humanitas, meaning more or less what Cicero had meant by it, "the knowledge of how to live as a cultivated, educated member of society," as Grendler phrases it. In general these humanists connected this goal to the studia humanitatis – we would say classical studies broadly conceived. That terminology spread from Italy to the British Isles where, for example, the Scotstarvit chair of humanity was established at the University of St. Andrews in 1620. By 1800 literae humaniores were part of examinations at Oxford. The pattern was revised in the mid-19th century into the famous "Greats" program, which later provided the model for "Modern Greats," that is, Oxford’s degree program in Philosophy, Politics and Economics. Humanism, it turns out, is not only adaptable to modern circumstances; it can be infectious.
The term "humanities" did not, then, drop out of the sky into the unknowing laps of American academic bureaucrats. Leaders of colleges and universities in the early 20th century consciously and deliberately evoked the tradition of Renaissance humanism in an effort to develop some equivalent amid mass education in the modern world. We may argue about how successful they were, but they saw the challenge.
It's still the challenge today, almost a century later. In responding to it, we can still learn from those Renaissance scholars. If we neglect them, we overlook an important part of the background to contemporary humanistic studies, but we also risk replicating, validating, and promulgating one of the gravest failings of the humanities as currently practiced – "presentism," that is, an exclusionary focus on the most highly modernized societies of the contemporary world, and the uncritical judging of the past by today’s interests and standards. In so doing one severs contact with what so motivated and energized these great humanist scholars and with the perspective on human life and conduct that they opened up.
If this root of the humanities is severed by ignorance, neglect or hostility, it will not be surprising if humane learning begins to look a little withered, and if students find what they have learned soon wilts and leaves them without the perspective and depth of understanding that a rigorous and wide-ranging education in the humanities should provide.
W. Robert Connor is senior advisor at the Teagle Foundation.