In 1869, Charles W. Eliot, a professor at the Massachusetts Institute of Technology, wrote an essay in The Atlantic Monthly entitled “The New Education.” He began with a question on the mind of many American parents: “What can I do with my boy?” Parents who were able to afford the best available training and did not think their sons suited for the ministry or a learned profession, Eliot indicated, sought a practical education, suitable for business “or any other active calling”; they did not believe that the traditional course of study adopted by colleges and universities 50 years earlier was still relevant. Less than a year later, Eliot became president of Harvard. Among the reforms he initiated were an expansion of the undergraduate curriculum and substantial improvement in the quality and methods of instruction in the law school and the medical school.
The debate between advocates of traditional liberal learning and partisans of a more “useful” education, Michael Roth, the president of Wesleyan University, reminds us, has deep roots in American soil. In Beyond the University (Yale University Press), he provides an elegant and informative survey of the work of important thinkers, including Benjamin Franklin, Thomas Jefferson, Ralph Waldo Emerson, W.E.B. DuBois, Jane Addams, William James, John Dewey, and Richard Rorty, who, despite significant differences, embraced liberal education because it “fit so well with the pragmatic ethos that linked inquiry, innovation, and self-discovery.” At a time in which liberal learning is under assault, Roth draws on the authority of these heavyweights to argue that “it is more crucial than ever that we not abandon the humanistic frameworks of education in favor of narrow, technical forms of teaching intended to give quick, utilitarian results.”
Most of Beyond the University is devoted to claims by iconic intellectuals about the practical virtues of liberal learning, which Roth endorses (with occasional qualifications). Exhibiting a “capacious and open-ended” understanding of educational “usefulness,” Roth indicates, Thomas Jefferson opted for free inquiry at his university in Charlottesville, Va., to equip citizens in the new republic to think for themselves and take responsibility for their actions. Ralph Waldo Emerson resisted education as mere job training; it should instead, he indicated, impart knowledge that develops individuals willing and able to use what we now call “critical thinking” to challenge the status quo.
Acknowledging that different people need different kinds of educational opportunities, W.E.B. DuBois nonetheless insisted that the final product of training “must be neither a psychologist nor a brick mason, but a man.” Liberal learning, Jane Addams emphasized, inculcates “affectionate interpretation,” which prepares individuals not only to defend themselves against those with different points of view, but to empathize with others and act in concert with them. And John Dewey, the most influential philosopher of education in the 20th century, looked to a liberal education, according to Roth, to help students learn the lessons of experiment and experience, by trying things out and assessing the results, by themselves and with others, and then, if appropriate, revising their behavior.
Roth’s approach – a reliance on the authority of seminal thinkers – is not without problems. As he knows, the nature of higher education – and its perceived roles and responsibilities – has changed dramatically since colleges focused on liberal learning. In 1910, only 9 percent of students received a high school diploma; few of them went on to college. These days, about 40 percent of young men and women get a postsecondary degree. Undergraduate, master’s, and doctoral degrees, moreover, are now required, far more than they were in the days of Emerson and Eliot, for entry into the most prestigious, and high-paying, professions. Jamie Merisotis, president of the Lumina Foundation, is surely right when he asserts that “to deny that job skills development is one of the key purposes of higher education is increasingly untenable” – and that integration of specific skills into the curriculum can help graduates get work and perform their assigned tasks well.
Roth does not specify how liberal learning might “pull different skills together in project-oriented classes.” Nor does he adequately address “the new sort of criticism” directed at liberal learning. A liberal arts education, many critics now claim, does not really prepare students to love virtue, be good citizens, or recognize competence in any field. As Roth acknowledges, general education, distribution requirements, and free electives are not effective antidotes to specialization; they have failed to help establish common academic goals for students. And, perhaps most disturbingly, doubt has now been cast on the proposition that the liberal arts are the best, and perhaps the only, pathway to “critical thinking” (the disciplined practice of analyzing, synthesizing, applying, and evaluating information).
President Roth may well be right that liberal learning “will continue to be a fundamental part of higher education” if (and, he implies, only if) it rebalances critical thinking and practical exploration. The key question, it seems to me, is how to rebalance, while preserving the essence of liberal learning, at a time in which higher education in general and, most especially, the humanities are under a sustained attack by cost-conscious advocates of an increasingly narrow vocationalism, who are certain to be unpersuaded by the testimony of long-dead intellectuals. The task is all the more daunting, moreover, because it will have to be carried out by proponents and practitioners of the liberal arts, many of whom, unlike Michael Roth, are now in despair, in denial, or have lost faith.
Glenn C. Altschuler is the Thomas and Dorothy Litwin Professor of American Studies at Cornell University.
Reporting on the Senate's confirmation of Theodore Mitchell as the U.S. Department of Education's chief higher education official, Inside Higher Ed quoted a statement from Secretary of Education Arne Duncan: “He will lead us through this important time in higher education as we continue to work toward the President’s goal to produce the best-educated, most competitive workforce in the world by 2020.” While this brief remark is hardly a major policy statement, its tone and focus are typical of the way Secretary Duncan, President Obama, and many others in politics these days talk about higher education.
This typical rhetoric, in Duncan’s statement and beyond, makes a good point, but it doesn't say enough. To explain why, I will take a leaf from Thucydides. In History of the Peloponnesian War, he explained that his apparently verbatim accounts of speeches by other figures really articulated what he thought they should have said. With due respect for Secretary Duncan and President Obama, here is what the Secretary of Education should have said, on behalf of the President's aims, on the confirmation of a new Under Secretary of Education in charge of higher education affairs:
He will lead us through this important time in higher education as we continue to work toward the president’s goals for higher education in making America a more productive economy, a more just society, a more flourishing democracy, and a richer environment for what the Founders called, in the Declaration of Independence, "the pursuit of happiness," and in the Preamble to the Constitution, "the general welfare."
A part of that economic goal is to produce the best-educated, most competitive workforce in the world by 2020. Another part is to ensure that higher education extends broadly the opportunity to develop the ingenuity and creativity that will drive American innovation in the years ahead.
That means working to ensure that higher education regains its function as an engine of socioeconomic advancement, both for the individual and for society as a whole. This means resisting the increasing stratification of curriculums and opportunities, making sure that the advantages of arts and sciences education are extended as far throughout higher education as possible. This is both prudent, to cultivate the nation's human capital, and also just, to mitigate disadvantages of less-privileged starting points.
Everyone knows that democracy depends on America's capacity to maintain a deliberative electorate, capable of making well-informed choices in a political system they understand and in which they actively participate. It is a responsibility of higher education to enhance this investment in America by helping maintain that electorate. It is a responsibility of government to promote that role.
Finally, when the Founders embraced such goals as "the pursuit of happiness" and securing "the general welfare" of the people, they acknowledged that the well-being of individuals and of society as a whole -- difficult as these concepts are to define -- are legitimate objects of government interest. Higher education has crucial responsibilities of exploration and discovery in this broad field of human well-being. It is here that the perennial American question concerning the scope and limits of government itself is to be explored, and handed on for inquiry to succeeding generations of Americans.
So on the appointment of a new Under Secretary with responsibilities toward higher education, we celebrate the many contributions of higher education to American flourishing: its role in contributing to a vibrant economy, certainly; and also its role in sustaining and advancing the broad aims of justice and improvement to which the country has always been committed.
That would have been good to hear from Secretary Duncan, and would be good to hear in any of the administration's speeches about higher education. None of us who are committed to this broader vision of higher education can ever, I emphasize, lose sight of its role in propelling the economy forward. But we cannot permit the purposes of higher education in America to be narrowed solely into the goal of workforce production. More is at stake: access to opportunity, cultivation of ingenuity and innovation, and broad contributions to the future of the country. Phi Beta Kappa joins many voices in advocacy of that vision. We invite Theodore Mitchell, Secretary Duncan, and President Obama to join, as well.
John Churchill is secretary of the Phi Beta Kappa Society.
Bryan Cranston’s recitation of “Ozymandias” in last year’s memorable video clip for the final season of Breaking Bad may have elided some of the finer points of Shelley's poem. But it did the job it was meant to do -- evoking the swagger of a grandiose ego, as well as time’s shattering disregard for even the most awe-inspiring claim to fame, whether by an ancient emperor or meth kingpin of the American Southwest.
But time has, in a way, been generous to the figure Shelley calls Ozymandias, who was not a purely fictional character, like Walter White, but rather the pharaoh Ramses II, also called User-maat-re Setep-en-re. (The poet knew of him through a less exact, albeit more euphonious, transcription of the name.) He ruled about one generation before the period that Eric H. Cline, a professor of classics and archeology at George Washington University, recounts in 1177 B.C.: The Year Civilization Collapsed (Princeton University Press).
Today the average person is reasonably likely to know that Ramses was the name of an Egyptian ruler. But very few people will have the faintest idea that anything of interest happened in 1177 B.C. It wasn't one of the 5,000 “essential names, phrases, dates, and concepts” constituting the “shared knowledge of literate American culture” that E.D. Hirsch identified in his best-seller Cultural Literacy (1988), nor did it make it onto the revised edition Hirsch issued in 2002. Just over 3,000 years ago, a series of catastrophic events demolished whole cities, destroying the commercial and diplomatic connections among distinct societies that had linked up to form an emerging world order. It seems like this would come up in conversation from time to time. I suspect it may do so more often in the future.
So what happened in 1177 B.C.? Well, if the account attributed to Ramses III is reliable, that was the date of a final, juggernaut-like offensive by what he called the Sea Peoples. By then, skirmishes between Egypt and the seafaring barbarians had been under way, off and on, for some 30 years. But 1177 was the climactic year when, in the pharaoh’s words, “They laid their hands upon the lands as far as the circuit of the earth, their hearts confident…. ” The six tribes of Sea Peoples came from what Ramses vaguely calls “the islands.” Cline indicates that one group, the Peleset, are "generally accepted” by contemporary scholars "as the Philistines, who are identified in the Bible as coming from Crete.” The origins of the other five remain in question. Their rampage did not literally take the Sea Peoples around “the circuit of the earth,” but it was an ambitious military campaign by any standard.
They attacked cities throughout the Mediterranean, in places now called Syria, Turkey, and Lebanon, among others. About one metropolis, Ramses says, the Sea Peoples “desolated” the population, “and its land was like that which has never come into being.”
Cline reproduces an inscription that shows the Sea Peoples invading Egypt by boat. You need a magnifying glass to see the details, but the battle scene is astounding even without one. Imagine D-Day depicted exclusively with two-dimensional figures. The images are flat, but they swarm with such density that the effect is claustrophobic. It evokes a sense of terrifying chaos, of mayhem pressing in on all sides, so thick that nobody can push through it. Some interpretations of the battle scene, Cline notes, contend that it shows an Egyptian ambush of the would-be occupiers.
Given that the Egyptians ultimately prevailed over the Sea Peoples, it seems plausible: they would have had reason to record and celebrate such a maneuver. Ramses himself boasts of leading combat so effectively that the Sea Peoples who weren't killed or enslaved went home wishing they’d never even heard of Egypt: “When they pronounce my name in their land, then they are burned up.”
Other societies were not so fortunate. One of them, the Hittite empire, at its peak covered much of Turkey and Syria. (If the name seems mildly familiar, that may be because the Hittites, like the Philistines, make a number of appearances in the Bible.) One zone under Hittite control was the harbor city of Ugarit, a mercantile center for the entire region. You name it, Ugarit had it, or at least someone there could order it for you: linen garments, alabaster jars, wine, wheat, olive oil, anything in metal…. In exchange for paying tribute, a vassal city like Ugarit enjoyed the protection of the Hittite armed forces. Four hundred years before the Sea Peoples came on the scene, the king of the Hittites could march troops into Mesopotamia, burn down the city, then march them back home — a thousand miles each way — without bothering to occupy the country, “thus,” writes Cline, “effectively conducting the longest drive-by shooting in history.”
But by the early 12th century, Ugarit had fallen. Archeologists have found, in Cline’s words, "that the city was burned, with a destruction level reaching two meters high in some places.” Buried in the ruins are “a number of hoards … [that] contained precious gold and bronze items, including figurines, weapons and tools, some of them inscribed.” They "appear to have been hidden just before the destruction took place,” but "their owners never returned to retrieve them.” Nor was Ugarit ever rebuilt, which raises the distinct possibility that there were no survivors.
Other Hittite populations survived the ordeal but declined in power, wealth, and security. One of the maps in The Year Civilization Collapsed marks the cities around the Mediterranean that were destroyed during the early decades of the 12th century B.C. — about 40 of them in all.
The overview of what happened in 1177 B.C. that we’ve just taken is streamlined and dramatic — far too much so to escape skepticism. It’s monocausal. The Sea Peoples storm the beaches, one city after another collapses, but Ramses III survives to tell the tale…. One value of making a serious study of history, as somebody once said, is that you learn how things don’t happen.
Exactly what did happen becomes a serious challenge to determine, after a millennium or three. Cline’s book is a detailed but accessible synthesis of the findings and hypotheses of researchers concerned with the societies that developed around the Mediterranean throughout the second millennium B.C., with a special focus on the late Bronze Age, which came to an end in the decades just before and after the high drama of 1177. The last 20 years or so have been an especially productive and exciting time in scholarship concerning that region and era, with important work being done in fields such as archeoseismology and Ugaritic studies. A number of landmark conferences have fostered exchanges across micro-specialist boundaries, and 1177 B.C.: The Year Civilization Collapsed offers students and the interested lay antiquarian a sense of the rich picture that is emerging from debates among the ruins.
Cline devotes more than half of the book to surveying the world that was lost in or around the year in his title — with particular emphasis on the exchanges of goods that brought the Egyptian and Hittite empires, and the Mycenaean civilization over in what we now call Greece, into closer contact. Whole libraries of official documents show the kings exchanging goods and pleasantries, calling each other “brother,” and marrying off their children to one another in the interest of diplomatic comity. When a ship conveying luxury items and correspondence from one sovereign to another pulled in to dock, it would also carry products for sale to people lower on the social scale. It then returned with whatever tokens of good will the second king was sending back to the first — and also, chances are, commercial goods from that king’s empire, for sale back home.
The author refers to this process as “globalization,” which seems a bit misleading given that the circuits of communication and exchange were regional, not worldwide. In any case, it had effects that can be traced in the layers of scattered archeological digs: commodities and artwork characteristic of one society catch on in another, and by the start of the 12th century a real cosmopolitanism is in effect. At the same time, the economic networks encouraged a market in foodstuffs as well as tin — the major precious resource of the day, something like petroleum became in the 20th century.
But evidence from the digs also shows two other developments during this period: a number of devastating earthquakes and droughts. Some of the cities that collapsed circa 1177 may have been destroyed by natural disaster, or so weakened that they succumbed far more quickly to the marauding Sea Peoples than they would have otherwise. For that matter, it is entirely possible that the Sea Peoples themselves were fleeing from such catastrophes. “In my opinion,” writes Cline, “… none of these individual factors would have been cataclysmic enough on their own to bring down even one of these civilizations, let alone all of them. However, they could have combined to produce a scenario in which the repercussions of each factor were magnified, in what some scholars have called a ‘multiplier effect.’ … The ensuing ‘systems collapse’ could have led to the disintegration of one society after another, in part because of the fragmentation of the global economy and the breakdown of the interconnections upon which each civilization was dependent."
Referring to 1177 B.C. will, at present, only get you blank looks, most of the time. But given how the 21st century is shaping up, it may yet become a common reference point -- and one of more than antiquarian relevance.
I do not know if he was an ancestor of the talk-show host, but one Jean-Baptiste Colbert served as minister of finance for Louis XIV. A page on the tourism-boosting website for Versailles notes that his name lived on "in the concept of colbertism, an economic theory involving strict state control and protectionism."
An apt phrase can echo down through the ages, and the 17th-century Colbert turned at least a couple of them. The idea that each nation has a "balance of trade" was his, for one. And in a piece of wit that surely went over well at court, Colbert explained that "the art of taxation consists in so plucking the goose as to obtain the largest amount of feathers with the least amount of hissing."
Procrastination makes tax resisters of us all, at one time or another. But mostly we submit, just to get it over with, and we keep the hissing to a prudent minimum. Not so the politicians, ideologues, and organizations chronicled by Romain D. Huret in American Tax Resisters (Harvard University Press). Relatively few of them carried rebellion so far as to risk imprisonment or bankruptcy in defense of their principles by outright refusing to pay up. But they were unrelentingly vocal about their fear that the state was hell-bent on reducing them to peonage.
American Tax Resisters proves a little more narrowly focused than its title would suggest; its central concern is with opposition to the income tax, though Huret's interest also extends to protest against any form of progressive taxation. The author is an associate professor of American history at the University of Lyon 2 in France, and writes that he’s now spent two decades pondering "why Americans had such a complex relationship with their federal government."
In selecting one aspect of that complex relationship to study, he makes some surprising though defensible choices. He says very little about the Boston Tea Party or Shays' Rebellion, for example. Instead, he takes the Civil War as the moment when anti-tax sentiments began to be expressed in terms that have persisted, with relatively little variation, ever since. The book is weighted more heavily toward narrative than analysis, but the role of major U.S. military commitments in generating and consolidating the country’s tax system does seem to be a recurrent theme.
Before taking office, Lincoln held that government funds ought to be raised solely through tariffs collected, he said, "in large parcels at a few commercial points.” Doing so would require "comparatively few officers in their collection.” In the early months of the war, his administration tried to supplement revenue through an income tax that largely went uncollected. With most of the country’s wealth concentrated in the Northeast, most of the burden would have fallen on a few states.
Instead, revenue came in through the sale of war bonds as well as the increased taxation of goods of all kinds, which meant driving up the prices of household commodities. By 1863, a Congressman from the North was warning of "the enslavement of the white race by debt and taxes and arbitrary power.” The link between anti-tax sentiment and racial politics only strengthened after the Confederacy’s defeat.
The need to pay off war debts, including interest on bonds, kept many of the new taxes imposed by the Lincoln administration in place into the 1880s. Businessmen who prospered during the conflict, as well as tycoons making new fortunes, resented any taxation of their incomes -- let alone the progressive sort, in which the rate increased as the amount of income did. Anti-tax writers insisted that progressive taxation was a policy of European origin, and “communistic,” and even a threat to the nation’s manhood, since it might (through some unspecified means) encourage women to assert themselves in public.
Another current of anti-tax sentiment reflected the anxiety of whites in Dixie, faced with the menace of African-American equality, backed up by the efforts of the Freedmen’s Bureau and other Reconstruction-era government agencies. Huret reprints an anti-tax poster from 1866 in which hard-working white men produce the riches taxed to keep a caricatural ex-slave in happy idleness.
The rhetoric and imagery of anti-tax protests from the late 19th century have shown themselves to be exceptionally durable (only the typography makes that poster seem old-fashioned) and they recur throughout Huret’s account of American tax resistance in the 20th century and beyond. With each new chapter, there is at least one moment when it feels as if the names of the anti-tax leaders and organizations have changed, but not much else. Certainly not the complaints.
Yet that’s not quite true. Something else does emerge in American Tax Resisters, particularly in the chapters covering more recent decades: people's increasingly frustrated and angry sense of the government encroaching on their lives.
By no means does the right wing have a monopoly on the sentiment. But every activist or group Huret writes about is politically conservative, as was also the case in Isaac William Martin's book Rich People’s Movements: Grassroots Campaigns to Untax the One Percent, published last year by Oxford University Press and discussed in this column.
Neither author mentions Edmund Wilson’s book The Cold War and the Income Tax: A Protest (1962), which criticizes “the Infernal Revenue Service,” as some resisters call it, in terms more intelligent and less hysterical than, say, this piece of anti-government rhetoric from 1968 that Huret quotes: “The federal bureaucracy has among its principle objectives the destruction of the family, the elimination of the middle class, and the creation of a vast mass of people who can be completely controlled.”
Wilson wrote his book after a prolonged conflict with the IRS, which had eventually noticed the author’s failure to file any returns between 1946 and 1955. Wilson explained that as a literary critic he didn’t make much money and figured he was under the threshold of taxable income. Plus which, his lawyer had died. The agents handling his case were unsympathetic, and Wilson’s encounter with the bureaucracy turned into a Kafkaesque farce that eventually drove him from excuses to rationalization: his growing hostility led Wilson to decide that failure to pay taxes was almost an ethical obligation, given that the military-industrial complex was out of control. He vowed never again to earn enough to owe another cent in income tax, though he and the IRS continued to fight it out until his death 10 years later.
I don’t offer this as an example of tax resistance at its most lucid and well-argued. On the contrary, there’s a reason it’s one of Wilson’s few books that fell out of print and stayed there.
But it is a lesson in how the confluence of personal financial strains and the cold indifference of a bureaucratic juggernaut can animate fiery statements about political principle. It’s something to consider, along with the implications of Plato's definition of man as a featherless biped.
The men who established the republic were no plaster saints of Red State moral uplift. Only one of the half-dozen figures Thomas A. Foster writes about in Sex and the Founding Fathers: The American Quest for a Relatable Past (Temple University Press) would escape denunciation by the Traditional Values Coalition if the Founders were around today.
Accusations of adultery or of fathering children out of wedlock (or both) were made against George Washington, Thomas Jefferson, Benjamin Franklin, and Alexander Hamilton; the last two admitted the truth of the charges. Gouverneur Morris managed to draft the Constitution between rounds of frequent, strenuous fornication -- exercise he pursued despite having a severely mangled right arm and amputated left leg.
Only the tightly wound John Adams seems to have escaped any hint of scandal. By all evidence, he and Abigail were strictly monogamous and not averse to finger-wagging at the other Founders' morals -- especially Franklin's, which were particularly relaxed. Besides writing a notorious essay on selecting a mistress, Franklin lived with a common-law wife; later, he conducted a good deal of his work as ambassador to France either in bed with well-born Parisian ladies or trying to get them there.
He was also broad-minded in ways that would be fodder for cable TV news today. He seems to have been on friendly terms with one Chevalier d'Eon, a French diplomat who preferred to dress in women's clothing. Poor Richard's ventriloquist was, as it's put nowadays, straight but not narrow.
Tabloid history? No, though much innuendo about the Founders did appear in frankly sensationalist publications of the day. (Negative campaigning goes way back.) Foster, an associate professor of history at DePaul University, is innocent of any muckraking intent. Everything in Sex and the Founding Fathers is a well-established part of the historical record, and in the case of Jefferson's relationship with his slave Sally Hemings, you'd have to have spent the last 20 years on a desert island not to have heard about it by now.
The author isn't interested in revealing the character or psychology of the early American statesmen. Rather, the book is a metahistory (not that Foster uses such jargon) of how their sex lives and their public roles were understood across the past 200 years or so. The biography of a major political figure is itself a political act. Historians and others writing about the Founders have dealt with their peccadilloes in different ways over time, the shifts in emphasis and judgment reflecting changes in the national political culture.
George Washington, for example, seems the most austerely virtuous of the country's early leaders, thanks especially to the moralizing fables of Parson Weems. Recent biographies suggest that he had a number of romantic relationships, consummated and otherwise, before marrying Martha. Writers of historical fiction depict the six-foot-three, athletically built military man as exerting powerful animal magnetism upon the colonial womenfolk. (Like Fabio, but with wooden teeth.) In real life, Washington addressed passionate letters to a married woman. If no further improprieties occurred, it was not for want of trying.
Foster notes the tendency to assume that earlier images of the first president were "disembodied" idealizations which have "only recently been humanized." But the record is more nuanced: "Even the earliest images emphasize both his domestic life and his military and government successes," Foster writes, with some 19th-century biographies and paintings "establish[ing] Washington as the romantic man" as well as "head of a prosperous household." But on that last point, one fact was somewhat problematic: Martha, who was a widow when they met, had a number of children by her first husband but never conceived with George.
"No early account hides the fact that he had no children of his own," Foster notes. "But 19th-century writers do not dwell on this aspect of his life, leaving some readers to their own devices to determine this aspect of his private family life." Biographers in the Victorian era "could not anticipate that readers would ever expect an answer to the very personal question of why he had no children."
Refusing to acknowledge the question did not make it go away, however. The lack of progeny was a seeming defect in Washington's status as embodiment of masculine ideals. One answer to the problem was sentimental: The couple could be depicted as blissfully compatible yet saddened by their plight, even without any evidence of it. ("Americans," Foster remarks, "have never hesitated to speak definitively about the loves and inner lives of the Founders, despite a lack of documentation.") Unfortunate as the situation was, Washington finally transcended it by becoming "father of his country." Another solution was to deny that Washington's virility was compromised at all, by claiming that he had an illegitimate son by the widow of one of his tenant farmers. See also the rumor that Washington died from a cold he caught "from leaping out a window, pants-less, after a romantic encounter with an 'overseer's wife.'"
No other figure in Sex and the Founding Fathers occupies so markedly paternal a role in public life, but in each case Foster brings out the complex and tightly knit relationship between sexual and political life. Even with Benjamin Franklin -- whose flirtatiousness is well-known, as is his earthy advice about the benefits of dating older women -- the author finds aspects of the record that add some nuance to the familiar portrait. I never appreciated just how disturbing a figure he was to his countrymen in the 19th century, when a senator struck his name from a list of candidates for a proposed national hall of fame on these grounds:
"Dr. Franklin's conduct of life was that of a man on a low plane. He was without idealism, without lofty principle, and one side of his character gross and immoral.... [His letter] on the question of keeping a mistress, which, making allowances for the manner of the time, and all allowance for the fact that he might have been in jest, is an abominable and wicked letter; and all his relation to women, and to the family life, were of that character."
Abominable? Well, he wasn't a hypocrite, and that's always a risky thing not to be. Consider also Alexander Hamilton. When accused of financial improprieties involving public funds, he denied it but admitted to having had a fling with a married woman whose husband then tried to blackmail him. "He chose to discuss the affair, in print, publicly, and in the greatest of documented detail to save his public honor," writes Foster. "He was not divorced. His wife did not denounce him. [George] Washington publicly supported him, as did others."
For a long time, biographers treated the matter evasively. They airbrushed the details out of his portrait as much as possible. Nowadays, Foster says, we get "warts-and-all hagiography -- ones that present failings only to dismiss them or have them overshadowed by an overarching theme of national greatness." Either way, he argues, the statesmen of the early republic stand apart from more recent politicians embroiled in sex scandals in one important way. Our contemporary lotharios can skulk off the public stage after a while, while the Founders never can. Their dirty linen hangs out for everyone to see, forever.
As Scott Jaschik points out in his January 13, 2014 article, “The Third Rail,” the terrible stress our newly minted Ph.D.s in English, comp lit, and foreign languages confront when they begin the job search seems only to be escalating rather than abating. Understandably, then, many Modern Language Association convention sessions, as well as a growing body of publications, have been taking up a variety of proposals for addressing the job crisis. Jaschik mentions the session I chaired, “Who Benefits? Competing Agendas and Graduate Education,” and he carefully articulates the basic positions of the panelists, who were all in general agreement that shrinking the size of graduate programs in English would not be the best way to remedy the situation. But the reasons we favor expansion rather than contraction seem to have slipped out of view. I would like to highlight them here.
Let me begin by stating the obvious nature of the suffering: When you defund public higher education, someone is going to have to pay, and those who have paid are our colleagues, forced to accept unethically precarious working conditions both during and after grad school, and students at all levels, burdened with massively increasing educational debt. These are circumstances we must protest with all the solidarity we can muster. But all this misery, the sense of lives ruined, institutionalized failure, personal anguish — these horrors come not just from oversized grad programs, but from a much larger capitalist economy that is wreaking havoc on many workers and unemployed poor in and out of academia. As Marc Bousquet has explained, it is not a market; or, at least, it is not a “free market” in any real sense, despite our common rhetorical reference to the horrors of the “job market.” It is a system we are caught in, and one orchestrated, it’s true, by our own institutional structures, which have now been fine-tuned to serve the champions of privatization, defunding, and austerity. In this type of economic system, higher education has become a kind of laboratory for the production of a precarious, contingent, low-wage faculty. The economic inequality within the profession mirrors the economic inequality in the society. From any ethical perspective, it is a system that has gone terribly wrong.
What has been most missing from the discussion about graduate school size has been a concise understanding of why the market logic doesn’t work for English grad programs, and the main reason is that it is not an accurate description of how the system really works. If it were a case of supply and demand, it might make good ethical sense to reduce the overproduction of Ph.D.s to meet the lower demand for tenured professors. In short, if you could reduce the supply without altering demand, this equalizing would clearly make it easier for graduates to get tenured jobs for the simple reason that there would then be fewer Ph.D.s competing for the same number of jobs. But the system does not work that way. Rather, when you reduce supply by shrinking graduate programs, you also end up reducing demand (as I will explain in what follows): our system is so structured that we cannot reduce the one without reducing the other, and that’s a real ethical and political conundrum.
When you shrink graduate student enrollments (the supply side), you inevitably also shrink the size of graduate programs, which means, willy-nilly, that you decrease tenured faculty lines (the demand side) because they are the folks teaching in grad programs. Administrators would be happy to shrink our programs and eliminate some tenured lines through attrition and retirement because new, cheaper temp hires can easily fill in to teach the few undergraduate lower-division classes that some tenured faculty teach.
The gurus of supply and demand would like nothing better than for us graduate faculty to do our own regulating by cutting down of our own accord on producing so many new highly educated people schooled in the legacies of critique and dissent. We then serve the wishes of those seeking more power to hire and fire at will the most vulnerable among us who have no protections under a gutted system of tenure and diminished academic freedom. The system can play itself out under the contraction model, then, as a vicious cycle of reducing supply, which reduces demand for tenured faculty (while increasing the non-tenure-track share of the faculty), which calls for further reducing of supply. To believe that contracting the size of graduate programs can, in and of itself, improve the situation is a misattribution of cause and effect: The real cause of the job misery is the agenda for privatization and defunding public expenditures orchestrated by the global economic system that has been producing misery and suffering for millions of lives around the world as socioeconomic inequalities continue to magnify.
Now, having said all that, I also want to be very clear that there are strategic, local situations where reducing graduate student populations in order to expand funding and support for them, or in order to revise a program (hopefully without shrinking tenured faculty lines), can certainly be the most ethical thing to do. So I am speaking at a general level of overall tactics for the profession, and at that level, shrinking (without other forms of compensation) inevitably leads to weakening graduate education, not strengthening it through some mythical model of “right-sizing” to be achieved by a proposed matching of supply and demand.
But, of course, the pain is real, and it reaches fever pitch in the transitional moments of crisis when graduate students face the “market” for jobs. The wretched system we endure makes it impossible not to sympathize with graduate students who understandably often argue that we must reduce the supply of Ph.D.s to give them a better chance to get a job. Under these enormous tensions, the short-term, crisis-management model of supply and demand can especially seem like the only fair-minded option.
In those moments of anguish, which I myself witness every time one of my own students reaches this transition stage, our only ethical task is to support them and listen to them as best we can to help them navigate the transition. So I want to make sure that my remarks here are not intended to provide any specific advice other than the obvious need for support. Specific situations and contextual demands will have to be navigated with all the pragmatic skills and rhetorical resourcefulness possible. In contrast, then, to a focus on the crisis moment of the job search, I have framed my comments here in terms of a big picture narrative.
From the longer and larger perspective, what becomes most clear is that our system of having elite graduate faculty surrounded by masses of non-tenure-track teachers mostly fulfilling service functions of teaching lower-level humanities distribution courses and writing courses fuels that cycle of devolution. We need, then, to change the academic system over which we do have some control. Systemic changes can be difficult even to imagine, but they are by no means impossible, as long as we understand that they will not happen in an overnight revolution. And the first step inevitably leads us to examine more critically the ethical and political work of both curricular revision and resource allocation. In short, it leads us to a careful analysis of the systemic class structure within the profession, bolstered as it is by procedures and policies, many of which we actually have some degree of professional autonomy to alter.
Of course, the resistance to institutional transformation remains overwhelming at times, and the struggle to mitigate our academic hierarchies and internal class stratifications is a long-term project, well beyond the scope of these comments. To even imagine such changes in our local institutional circumstances, we will have to make many arguments convincing our colleagues that a more collective and collaborative approach to teaching assignments will be beneficial for us all in the long run. And I have at least some evidence that something like what I have been suggesting can actually happen. Where I teach in the Pennsylvania State System of Higher Education (PASSHE), our collective bargaining agreement affecting all 14 universities with a total enrollment of over 100,000 students has created an anomaly in U.S. higher education: more than 75 percent of all faculty on all campuses are tenure-track lines (the inverse of the national percentage average), and all faculty teach all levels of courses.
Much work remains to be done, and we too continuously struggle against state underfunding and the pressure to hire more temporary faculty. But the potential benefits of these efforts, I believe, would make our profession less stratified and more responsive to public needs for high quality education at all levels, so that, ultimately, the humanities will become a more vital part of the social fabric of everyday life for more citizens. That is a goal we should never abandon.
David B. Downing is director of graduate studies in literature and criticism at Indiana University of Pennsylvania. He is the editor of Works and Days, and his most recent book (co-edited with Edward J. Carvalho) is Academic Freedom in the Post-9/11 Era.
Whether or not the humanities are truly in crisis, the current debates around them have a certain gun-to-the-head quality. “This is why you -- student, parent, Republican senator -- shouldn’t pull the trigger,” their promoters plead. “We deserve to live; we’re good productive citizens; we, too, contribute to the economy, national security, democracy, etc.” Most of these reasons are perfectly accurate. But it is nonetheless surprising that, in the face of what is depicted as an existential crisis, most believers shy away from existential claims (with some exceptions). And by not defending the humanities on their own turf, we risk alienating the very people on whose support the long-term survival of our disciplines depends: students.
One reason why our defenses can have a desperate ring to them is that we’re not used to justifying ourselves. Most humanists hold the value of the objects they study to be self-evident. The student who falls in love with Kant, Flaubert, or ancient Egypt does not need to provide an explanation for why she would like to devote years of her life to such studies. To paraphrase Max Weber, scholarship in the humanities is a vocation, a “calling” in the clerical sense. It chooses you, you don’t choose it. The problem with this kind of spiritual passion is that it is difficult to describe. To paraphrase another 20th-century giant, Jimi Hendrix, it’s more about the experience.
It’s not surprising, then, that when we humanists feel (or imagine) the budget axe tickling the hairs on the backs of our necks, we don’t have ready-made apologia with which to woo or wow our would-be executioners. And because a calling is hard to explain, we turn instead to more straightforward, utilitarian defenses -- “but employers say they like English majors!” -- which, while true, don’t capture the authentic spirit that moves the humanities student.
There is of course sound logic to this approach. Government and state funding is a zero-sum game, and politicians are more likely to be receptive to practical arguments than to existential propositions. But in the long run, it takes more than state and university budgets to maintain the health of the humanities. It also takes students. And by constantly putting our most productive foot forward, we may unintentionally end up selling ourselves short (disclosure: I, too, have sinned). The fundamental reason why students should devote hours of their weeks to novels, philosophy, art, music, or history is not so that they can hone their communication skills or refine their critical thinking. It is because the humanities offer students a profound sense of existential purpose.
The real challenge that we face today, then, lies in explaining to a perplexed, but not necessarily hostile audience -- and perhaps even to ourselves -- why it is that the study of literature, anthropology, art history, or classics can be so meaningful, and why this existential rationale is just as important as other, more utilitarian ones. This line of argument stands in opposition to proclamations of the humanities’ uselessness: to declare that the humanities are of existential value is to affirm that they are very useful indeed.
So how might we go about defining this existential value? A good place to start would be with existentialism itself. A premise of existentialist philosophy is that we live in a world without inherent meaning. For atheists, this is often understood as the human condition following the death of God. But as Jean-Paul Sartre pointed out in “Existentialism is a Humanism,” even believers must recognize that they ultimately are the ones responsible for the production of meaning (in fact, many early existentialists were Christians). Abraham had to decide for himself whether the angel who commanded him to halt his sacrifice was genuinely a divine messenger. In Sartre's memorable formulation, man is “condemned to be free”; we have no choice but to choose. While it may feel as though a humanities vocation is a calling, you still have to decide to answer the call.
The realization that meaning isn’t something we receive from the outside, from others, but that it always must come from within us, from our conscious, deliberative choices, does not make us crave it any less. We are, existentialists insist, creatures of purpose, a thesis that psychological research has also confirmed.
Now what does this have to do with the humanities? It’s not that obvious, after all, how reading Madame Bovary, the Critique of Pure Reason, or The Book of the Dead can fill your life with purpose. At the same time, we also know that some people do find it deeply meaningful to peruse these works, and even to dedicate their careers to studying them.
What is it, then, that lovers of literature -- to consider but them for the moment -- find so existentially rewarding about reading? In a recent book, my colleague Joshua Landy argues that one of the more satisfying features of literature is that it creates the illusion of a meaningful world. “The poem forms a magic circle from within which all contingency is banished,” he writes apropos of Mallarmé’s celebrated sonnet en -yx. The order we discover in literary works may be magical, but it isn’t metaphysical; it comes from the sense that “everything is exactly what and where it has to be.” Art offers a reprieve from a universe governed by chance; what were merely sordid newspaper clippings can become, when transported into artful narratives, The Red and the Black or Madame Bovary. Landy suggests that fictions produce these illusions through a process of “overdetermination”: the ending of Anna Karenina, for instance, is foreshadowed by its beginning, when Anna witnesses a woman throwing herself under a train.
If art offered only illusions of necessity, it would hardly satisfy existential longing. Pretending that everything happens for a reason is precisely what the existentialists castigated as “bad faith.” Yet there’s an obvious difference between enjoying a novel and, say, believing in Providence. We don’t inhabit fictional worlds, we only pay them visits. No lover of literature actually believes her life is as determined as that of a literary heroine (even Emma Bovary wasn’t psychotic). So why does the semblance of an orderly universe enchant us so?
Well-ordered, fictional worlds attract us, it seems, because we, too, aspire to live lives from which contingency is kept at bay. Beauty, wrote Stendhal, is “only a promise of happiness.” As Alexander Nehamas suggested, in his book of this title, the beautiful work of art provides us with a tantalizing pleasure; beauty engages us in its pursuit. But what do we pursue? “To find something beautiful is inseparable from the need to understand what makes it so,” he writes. Behind the beautiful object -- sonnet, style, or sculpture -- we reach for the idea of order itself. The promise of happiness made by art is a promise of purpose.
But a promise of purpose is still a bird in the bush: it can disappear when you put down the book, or leave the concert hall. For the philosopher Immanuel Kant, art only provides us with an empty sense of purpose; or as he put it, in his distinctively Kantian way, "purposiveness without purpose" (it’s even better in German).
It’s true that few existential crises have been resolved by a trip to the museum or the download of a new album. But Kant may have underestimated how the sense of artistic purpose can also seep into our own lives. For instance, as Plato and every teenager know well, instrumental music can give voice to inexpressible feelings without the help of language. These emotional frameworks can convey a potent sense of purpose. When my youngest daughter spent six weeks in the neonatal ICU with a life-threatening condition, my mind kept replaying the second movement of Beethoven’s seventh symphony to tame my fears. Its somber, resolute progress, punctuated by brief moments of respite, helped to keep my vacillating emotions under control. As in films, sometimes it is the soundtrack that gives meaning to our actions.
The promise of order found in beautiful works of art, then, can inspire us to find purpose in our own lives. The illusion of a world where everything is in its place helps us view reality in a different light. This process is particularly clear -- indeed, almost trivial -- in those humanistic disciplines that do not deal primarily with aesthetic objects, such as philosophy. We aren't attracted to the worldviews of Plato, Kant, or Sartre purely for the elegance of their formal structure. If we’re swayed by their philosophies, it’s because they allow us to discover hitherto unnoticed patterns in our lives. Sometimes, when you read philosophy, it seems as though the whole world has snapped into place. This is not an experience reserved for professional philosophers, either: at the conclusion of a philosophy course that my colleagues Debra Satz and Rob Reich offer to recovering female addicts, one student declared, “I feel like a butterfly drawn from a cocoon.”
So where art initially appeals to us through intimations of otherworldly beauty, a more prolonged engagement with the humanities can produce a sense of order in the here and now. One could even say that Plato got things the wrong way around: first we’re attracted by an ideal universe, and then we’re led to discover that our own reality is not as absurd as it once seemed. And while particularly evident with philosophy, this sensation of finally making sense of the world, and of your own place in it, can come from many quarters of the humanities. In a delightful interview (originally conducted in French), Justice Stephen Breyer recently exclaimed, “It’s all there in Proust — all mankind!” Other readers have had similar responses to Dante, Shakespeare, Tolstoy, and many more.
But exploring the humanities is not like a trip to the mall: you don't set off to find an off-the-rack outfit to wear. Proust can change your life, but if you only saw the world through his novel, it would be a rather impoverished life. Worse, it would be inauthentic: no author, no matter how great, can tell you what the meaning of your life is. That is something we must cobble together for ourselves, from the bits and pieces of literature, philosophy, religion, history, and art that particularly resonate in us. “These fragments I have shored against my ruins,” T.S. Eliot wrote at the end of The Waste Land. No poem offers a better illustration of this cultural bricolage: Shakespeare answers Dante, and the Upanishads disclose what the Book of Revelation had suppressed.
So here we find an existential rationale for a liberal education. To be sure, the humanities do not figure alone in this endeavor: psychology, biology, and physics can contribute to our perception of ourselves in relation to the world, as can economics, sociology, and political science. But the more a discipline tends toward scientific precision, the more it privileges a small number of accepted, canonical explanations of those aspects of reality it aims to describe. If 20 biology professors lectured on Darwin’s theory of evolution, chances are they’d have a lot in common. But if 20 French professors lectured on Proust’s Recherche, chances are they’d be quite different. The same could be said, perhaps to a lesser extent, for 20 lectures on Plato’s Republic. The kinds of objects that the humanities focus on are generally irreducible to a single explanation. This is why they provide such good fodder for hungry minds: there are so many ways a poem, a painting, or a philosophy book can stick with you.
In his diatribe against the way the humanities have been taught since the '60s, Allan Bloom harrumphed, “On the portal of the humanities is written in many ways and many tongues, ‘There is no truth -- at least here.’ ” But the point of a liberal education is not to read great works in order to discover The Truth. Its point is to give students the chance to fashion purposeful lives for themselves. This is why authors such as Freud, whose truth-value is doubted by many, can still be a source of meaning for others. Conversely, this is also why humanities professors, many of whom are rightfully concerned about the truth-value of certain questions or interpretations, do not always teach the kinds of classes where students can serendipitously discover existential purpose.
There are more than existential reasons to study the humanities. Some are intellectual: history, for instance, responds to our profound curiosity about the past. Some are practical. To celebrate one is not to deny others. The biggest difficulty with defending the humanities is the embarrassment of riches: because humanists are like foxes and learn many different things, it is hard to explain them to the hedgehogs of the world, who want to know what One Big Thing we do well. The danger is that, in compressing our message so it gets heard, we leave out precisely the part that naturally appeals to our future students. Yes, students and parents are worried about employment prospects. But what parent doesn’t also want their child to lead a meaningful life? We are betraying our students if, as a society, we do not tell them that purpose is what ultimately makes a life well-lived.
Dan Edelstein is a professor of French and (by courtesy) history at Stanford University. He directs the Stanford Summer Humanities Institute.