The number of new academic titles in the humanities published in North America increased from 48,597 in 2009 to 51,789 in 2012, according to a new update of the Humanities Indicators, a project of the American Academy of Arts and Sciences. At the same time, the humanities' share of all new academic titles published fell over that period from 45.1 percent to 42.9 percent.
It is difficult to recite “The Bells” by Edgar Allan Poe without sounding like an idiot. The first line is navigable without much trouble; the two lines near the close (“From the bells, bells, bells, bells/Bells, bells, bells --”) are just vocal calisthenics. But they return at the same point in the following stanza, with an additional three “bells” for good measure. By the fourth and final stanza, the word repeats twelve times in five lines, and dignity is just a memory.
In one of the harsher evaluations of Poe, the critic Yvor Winters complained about “such resounding puerilities as ‘the pallid bust of Pallas’ ” in “The Raven,” which he called “that attenuated exercise for elocutionists.” That may be, but “The Raven” invites and almost demands oral performance, which in part explains how quickly it became part of American vernacular culture following its publication in 1845. If ever a poem were destined for recitation by James Earl Jones, it is “The Raven.”
In his new book The Poet Edgar Allan Poe: Alien Angel (Harvard University Press), Jerome McGann points out that “The Bells” once served as “an experimental challenge for one of the [Victorian] period’s favorite pastimes, spectacular recitation.” That to some degree mitigates the impression that “The Bells” is, as a poem, a disaster: sufficient grounds for Emerson’s brutal dismissal of Poe as “the jingle man.” It is possible “The Bells” was Poe’s effort to make lightning strike a second and more financially rewarding time (“The Raven” was wildly successful, but he’d sold it for $9), but more important for judging the poem is knowing that it embodies a performative and even competitive aesthetic that simply isn’t part of how we read it now.
Assuming, that is, that we read his poetry at all, beyond middle school. Poe’s fiction looms much larger in contemporary literary culture, and it remains a significant part of popular culture as well. Quantifying such things is hard, but it’s telling that for every book-length study of his poetry that has been published, there are three analyzing his fiction. Emily Dickinson and Walt Whitman figure as the American poets of his era whose influence continued and deepened over time. By contrast Poe’s language and form appear conventional, even when his poetry ventures into realms of madness and erotic obsession – like Longfellow, except morbid. (And to that degree, perhaps, more interesting.)
McGann, a professor of literature at the University of Virginia, rejects that assessment, root and branch. McGann’s early criticism focused on Lord Byron, Dante Gabriel Rossetti, and Algernon Swinburne, but over the past couple of decades he has been a thoughtful advocate for the digital humanities; his most recent book on that front, published earlier this year, is A New Republic of Letters: Memory and Scholarship in the Age of Digital Reproduction, also from Harvard. Besides advocating digital scholarship, McGann has been a practitioner of it, as exemplified by his work on The Complete Writings and Pictures of Dante Gabriel Rossetti: A Hypermedia Archive.
So it comes as a surprise that in his book on Poe’s poetry McGann returns to a vein of critical writing that seems, if not old-fashioned, at least indifferent to today’s modes of focusing (or splintering) attention. The Poet Edgar Allan Poe is, among other things, a response to that take-down by Yvor Winters mentioned earlier – an essay appearing in the journal American Literature, all the way back in 1937.
Winters was comprehensively dismissive of Poe’s work as a whole, calling it “an art to delight the soul of a servant girl” and professing “astonishment that mature men can take this kind of thing seriously.” But the expression of chauvinistic snobbery was incidental to Winters’s more basic objection to Poe’s sensibility – his understanding of what literature was, and should be. He charged Poe with believing that “the subject matter of poetry, properly considered, is by definition incomprehensible and unattainable; the poet, in dealing with something else, toward which he has no intellectual or moral responsibilities whatever … should merely endeavor to suggest that a higher meaning exists – in other words, should endeavor to suggest the presence of a meaning when he is aware of none. The poet has only to write a good description of something physically impressive, with an air of mystery, an air of meaning concealed.”
Winters quotes passages from Poe’s correspondence and literary criticism that seem to corroborate this portrait of Poe as a shallow dandy -- babbling about Beauty and contemptuous of Truth, turning out literature at a self-trivializing remove from any concern with real life or meaningful values.
Winters calls this attitude “obscurantist.” And clearly Poe is not the only offender he has in mind. T.S. Eliot is a likely implicit target -- and Winters overtly suggests that Poe’s outlook was also typical of Hart Crane, who had killed himself just a few years earlier. Aestheticism yields nihilism, then suicide.
Talk about a symptomatic reading…. Jerome McGann goes over many of the same passages Winters adduced in his bill of complaints against Poe, considering them alongside numerous lesser-known writings as well as Poe’s literary models, especially Shelley, Byron, and Coleridge. From a close reading of Poe’s rhetorical tropes and careful reconstructions of context, McGann draws out a much richer understanding of Poe’s perspective on art and life than Winters’s polemic allows.
“Affect is summoned into and then driven from the poems,” McGann says, “and, like an exorcised demon, set free to enter and take possession of the reader. … His poetry does not propose a compensation for the loss of loved and cherished things, it tells a double truth about those losses: first, that they lie beyond redemption; and second that they need not — indeed, must not — lie beyond a ‘mournful and never-ending remembrance.’ For memory is called to cherish even the factitious world.”
For it’s the only world the reader’s got – and not for long, at that. Those losses, and mournful recollections, take place against the backdrop of a teeming and bustling 19th-century America, with no prospect of anything but acceleration ahead. “That,” McGann says, “is the ultimate meaning of Poe’s mortally immortal word ‘Nevermore’….”
Unlike the figure Winters portrayed, McGann’s Poe doesn’t settle for poetry as delicate noises composed somewhere beyond real life; he doesn’t shirk the effort to find and express meaning. The argument is compelling, although McGann’s enthusiasm for “The Bells” seems to push things too far.
About “The Bells,” I think the best thing you can do is repeat Mark Twain’s considered opinion of Wagner’s music: “It isn’t as bad as it sounds.”
Sixty years ago this month, the U.S. Post Office declared a small journal called ONE: The Homosexual Magazine, published in Los Angeles, to be obscene and thus unlawful to distribute through the mail. All copies of the latest issue were seized and presumably destroyed.
The editors -- having already endured a letter-writing campaign from the Federal Bureau of Investigation that tried to get them fired from their day jobs -- cannot have been that surprised by the postal service’s move. Still, the characterization of ONE as “cheap pornography” (in one judge’s words) was ludicrous. Recent issues had included articles on police entrapment, Walt Whitman, and attitudes toward homosexuality in Britain throughout history. The editors also published a sonnet by William Shakespeare and a salute to the “history-making TV appearance [of] Curtis White of Los Angeles [who] personally stated that he is a homosexual.”
By no stretch of the imagination was it fair to call ONE obscene. At worst, it was feisty. But that was much the same thing at a time when “homosexuals were virtually without constitutional rights,” as Walter Frank put it in Law and the Gay Rights Story: The Long Search for Equal Justice in a Divided Democracy (Rutgers University Press). The turning point came when the Supreme Court overruled the Post Office ban on ONE in 1958. The decision was little-noticed at the time -- and it doesn’t even register as a blip in the general public’s historical memory, in which the gay rights struggle began, more or less, with Stonewall.
The Supreme Court decision ran to one sentence and cited the Court’s ruling in Roth v. United States, two years earlier. The author of Law and the Gay Rights Story is co-chair of the Law and Literature Committee of the New York County Lawyers Association, and takes for granted closer familiarity with Roth v. U.S. than most non-jurists will possess. (I could have told you that the defendant was Samuel, a publisher of girlie magazines, and not Philip, the novelist -- though not much more.) But upon looking up the decision, it’s fairly easy to spot what must have been the crucial passage with respect to ONE:
“Obscene material is material which deals with sex in a manner appealing to prurient interest. The portrayal of sex, e.g., in art, literature and scientific works is not itself sufficient reason to deny material the constitutional protection of freedom of speech and press. Sex, a great and mysterious motive force in human life, has indisputably been a subject of absorbing interest to mankind through the ages; it is one of the vital problems of human interest and public concern.”
That it is. And a major strategy of early gay-rights advocates was to insist on the “absorbing interest to mankind through the ages” part with respect to same-sex desire. (Hence the Shakespeare sonnet in ONE.)
Frank’s purview is narrower, and a lot more democratic. He focuses on the seven decades following the end of World War II – a period in which the struggle for equality moved ever more in the direction of grassroots activism and demands for respect in everyday life. Identifying the illustrious gay dead gave way to more mundane but urgent priorities, like securing hospital visitation rights and protection from housing discrimination.
About half of Law and the Gay Rights Story consists of a succinct overview of how gay and lesbian communities and institutions took root within, and against, “a society that had simply decided to place certain people beyond its protection.” In a provocative formulation (I mean that in a good way) Frank writes that “discrimination itself could remain in the closet because gays themselves were not willing to come forward in sufficient numbers or with sufficient energy to contest it.”
A couple of generations of historians have studied how that situation changed – how the numbers and energy accumulated, and began to make a breach in a system that had effectively limited gays and lesbians to two choices, celibacy or criminality. Frank draws on and synthesizes the social and cultural historians’ work without claiming to go beyond it.
He does build in a distinctive periodization, however, by dividing the past few decades of gay-rights struggle into three phases or waves. The first and longest subsumes everything from ONE to Stonewall to the assassination of Harvey Milk: a cycle of growing confidence and assertiveness, coming to an end around the point when reports of a “gay cancer” emerged in 1981. His second period is defined by the AIDS crisis, in which government neglect and anti-gay political sentiment made the gay struggle largely defensive. A third wave, beginning in the early 1990s and continuing through the present, has seen something of a revival of the first period’s vigor but an even more remarkable growth of acceptance of claims for legal equality -- with the Supreme Court defining as unconstitutional both anti-sodomy laws and the Defense of Marriage Act’s definition of marriage to exclude same-sex couples.
In recent years, Frank writes, “concepts of freedom and equality began to overlap in a way they did not in the first phase, when gays were fighting for the right to celebrate themselves without fear and to be allowed some measure of dignity…. The equality that gays have been fighting for in this [most recent] phase concerns all the freedoms that most people take for granted, including the freedom to marry. As that argument has taken hold, the tide of public opinion has shifted, and with it the terrain on which the battle has been fought.”
In other remarks, the author seems perfectly aware of the potential for backlash. Consider the point of view expressed by a voter regarding an anti-gay ballot initiative: "I don't think being gay is right. It's immoral. It's against all religious beliefs. I don't agree with gays at all, but I don't think they should be discriminated against."
Frank cites this arresting blend of sentiments in a context suggesting that it demonstrates a slow growth of tolerance in seemingly inhospitable circumstances. That's one way to look at it. But politics is always a struggle to shift the terrain on which the battle is being fought, and reversals do occur. That said, I'd like to imagine that the person who contributed to ONE under the name Herbert Grant is still alive and well. In 1954, he wrote an article that might well have been the last straw for the authorities. In it, he proposed that same-sex couples be allowed to marry.
In Friday’s decision in Cambridge University Press v. Patton, the U.S. Court of Appeals for the Eleventh Circuit followed decades of jurisprudence in casting aside bright-line rules for determining whether faculty members made fair use of copyrighted material. This is regrettable: the celebrated 2012 district court opinion in the same case had opened up the possibility of teaching faculty how to make fair use of material properly, in plain terms and easy-to-understand concepts, while the appeals court opinion returns us to the days of case-by-case holistic analysis, detailed exceptions, loopholes, and caveats.
The case arose from a challenge by several publishers of non-textbook scholarly works to Georgia State University’s electronic reserve system, in which faculty and librarians scanned excerpts of books for students to access digitally, a technological improvement over the traditional practice of leaving a copy or two on reserve at the library circulation desk. The publishers claimed mass copyright infringement; Georgia State cited the fair use provisions of Section 107 of the Copyright Act.
The district court exhaustively analyzed each work uploaded to electronic reserves, finding only five of the dozens submitted by the publishers to be in violation, by putting a new twist on the law’s four factors for analysis:
1. The purpose and character of the use, including whether such use is of a commercial nature or is for nonprofit educational purposes;
2. The nature of the copyrighted work;
3. The amount and substantiality of the portion used in relation to the copyrighted work as a whole; and
4. The effect of the use upon the potential market for, or value of, the copyrighted work.
Traditional fair use doctrine calls for a case-by-case analysis of each potential use, independently weighing the four factors holistically. That is difficult, and it often requires knowledge of unavailable facts, such as the effect of a use on the market for the work, which is nearly impossible for anyone outside the publishing company to guess at. In Campbell v. Acuff-Rose Music, Inc., for instance, the Supreme Court specifically rejected “bright line rules” for determining fair use of copyrighted material.
Judge Orinda Evans went a different route. She found that de minimis use (such as when a faculty member posts a work but no student ever accesses it) is not a violation, and that in most cases, using one chapter or 10 percent of a book that is under copyright protection would meet the fair use test. The judge decided to clearly assign winners in each of the four factors, and then give the overall win to the party with the majority of factors in their favor.
She wrote that factors one and two almost always went in favor of nonprofit higher educational use of academic works. And while factor four may be difficult for a faculty member to assess, and would likely go in favor of the publishers, the judge ruled that 10 percent or one chapter of a digitally available work would meet the fair use test for factor three. Adding factors one, two, and three together let her find a majority and, thus, fair use, even without factor four.
Note that these findings applied to works that could be purchased digitally. In another section, the judge applied some behavioral economics to factor four by finding that, for works a publisher did not make available digitally, a faculty member could use approximately 18 percent of the work and still win a fair use analysis. That larger limit for factor three could encourage publishers to make their works available at reasonable prices, so as to discourage fair use without remuneration.
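The district court's mechanical test, as described above, can be sketched as a simple decision procedure. This is a minimal, hypothetical illustration of the majority-of-factors approach, assuming the thresholds summarized in this article (10 percent or one chapter when a digital license exists, roughly 18 percent when none does, and a three-of-four majority); every function and parameter name here is invented for the sketch, and none of this is legal advice.

```python
# Hypothetical sketch of Judge Evans's majority-of-factors fair use test.
# Names and thresholds are illustrative only, drawn from the description above.

def factor_three_ok(pages_used, total_pages, chapters_used,
                    digital_license_available):
    """Factor three per the district court: one chapter, or 10 percent of
    the work when a digital license is offered (about 18 percent when the
    publisher offers none)."""
    limit = 0.10 if digital_license_available else 0.18
    return chapters_used <= 1 or (pages_used / total_pages) <= limit

def fair_use_majority(nonprofit_educational, factual_academic_work,
                      factor_three, market_harm):
    """Score each of the four factors as a discrete win or loss and give
    the overall decision to the side holding the majority."""
    wins_for_use = sum([
        nonprofit_educational,   # factor one: purpose and character
        factual_academic_work,   # factor two: nature of the work
        factor_three,            # factor three: amount used
        not market_harm,         # factor four: market effect
    ])
    return wins_for_use >= 3  # three of four factors carry the analysis

# Example: a nonprofit course posting one chapter of a licensed academic book.
f3 = factor_three_ok(pages_used=25, total_pages=300, chapters_used=1,
                     digital_license_available=True)
print(fair_use_majority(True, True, f3, market_harm=True))  # True
```

Note how the sketch captures the opinion's key move: even conceding factor four to the publishers, a nonprofit educational use of an academic work within the factor-three limit still wins three of four factors.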
This was a groundbreaking opinion that allowed intellectual property lawyers in higher education to clearly explain to administrators and faculty members which uses would and would not be fair. Rather than require our botany and geography professors to also become copyright scholars, we could provide them with reasonable tests to ensure they properly balanced the interests of students in accessing the content with the interest of publishers in compensation for developing the content. While this wasn’t the first effort to develop fair use standards, it was the clearest, and the first time that such standards were set by a court.
The appeals court rejected this analysis and found that the “District Court did not err in performing a work-by-work analysis of individual instances of alleged infringement in order to determine the need for injunctive relief. However, the District Court did err by giving each of the four fair use factors equal weight, and by treating the four factors mechanistically.”
The appeals court instead called for a return to the holistic analysis. Rejecting the 10 percent or one chapter bright-line rule, the appellate court wrote that “the District Court should have performed this analysis on a work-by-work basis, taking into account whether the amount taken -- qualitatively and quantitatively -- was reasonable in light of the pedagogical purpose of the use and the threat of market substitution.”
The appeals court decision stands on solid precedential ground, and the Eleventh Circuit is not the first court to call for a holistic, case-by-case analysis. But while one can defend the decision by looking to the past, it is a poor one for those who look to the future. As content becomes available in ever more formats, and faculty, staff, and students face myriad opportunities to pay for content, make fair use of it, or violate the copyrights of authors and creators, clear standards and easily digestible rules gave higher education a fighting chance to educate the academic community and to encourage proper balancing and fair (but not inappropriate) use of content.
William Patry and Melville Nimmer, the two seminal thinkers in copyright law, each devote hundreds of pages to explaining it. Their multivolume treatises, which cost thousands of dollars, provide a comprehensive analysis of fair use and all of its details. But such exhaustive treatment is well outside what we expect of faculty members who do not specialize in intellectual property, and our instructors simply do not have the time to conduct an exhaustive analysis of each use, even if they were to learn all the permutations of the fair use doctrine. This isn’t to say that they can’t, but to state the reality that they won’t.
Frankly, the dueling decisions in these cases, and the numerous articles and statements by serious copyright scholars on both sides of this analysis, show that even those who steep themselves in the details of fair use can disagree on whether a certain use is fair or violative.
When intellectual property law experts cannot agree, we should not expect our history and math faculty to do justice to the fair use analysis each time.
Instead, faculty will divide into two camps. One group will “throw caution to the wind” and use whatever content they wish in whatever form they desire, hoping never to raise the ire of the publishing companies.
The other, out of an abundance of caution, will self-censor, and fail to make fair use of content for fear that they might step over a line they cannot possibly identify, and can never be certain of until a judge rules one way or the other. Either way, our students and the publishers lose out.
The district court opinion shed some light into the murky swamp of fair use analysis. The Eleventh Circuit opinion dims that light, and threatens to return us to a regime wherein faculty who are not experts in copyright law will either use without consideration of the law or self-censor, diminishing the utility of the concept of fair use.
The Constitution teaches that the purpose of copyright is to “promote the Progress of Science and useful Arts.” The district court opinion found that small excerpts available to students “would further the spread of knowledge.”
Arming faculty with clear rules and standards to properly balance fair use of content would go a long way toward achieving this goal.
Joseph Storch is an attorney in the State University of New York Office of General Counsel. The views expressed here are his own.
In recent years we’ve had quite a few books on the negative emotions – disgust, malice, humiliation, shame – from scholars in the humanities. In addition, Oxford University Press published its series of little books on the Seven Deadly Sins. Apparently envy is the most interesting vice, to judge by the sales ranks on Amazon, followed by anger -- with lust straggling in third place. (A poor showing, given its considerable claims on human attention.)
The audience for monographs putting unpleasant or painful feelings into cultural and historical context probably doesn’t overlap very much with the far larger pop-psychology readership. But their interests do converge on at least one point. Negative affects do have some benefits, but most of us try to avoid them, or minimize them, both in ourselves and others, and to disguise them when necessary; or, failing that, to do damage control. And because the urge to limit them is so strong, so is the need to comprehend where the feelings come from and how they operate.
Arguably the poets, historians, and philosophers have produced richer understandings of negative emotions, in all their messiness. As for what the likes of Dr. Phil bring to the table, I have no opinion – though obviously they’re the ones leaving it with the biggest bags of money.
But the avoidance / interest dynamic really goes AWOL with the topic Chris Walsh explores in Cowardice: A Brief History (Princeton University Press). The Library of Congress catalog has a subject heading called “Cowardice — history,” with Walsh’s book being the sole entry. That’s a clerical error: Marquette University Press published Lesley J. Gordon’s “I Never Was a Coward”: Questions of Bravery in a Civil War Regiment in 2005. It is 43 pages long, making Walsh the preeminent scholar in the field by a sizable margin. (He is also associate director of the College of Arts and Sciences Writing Program at Boston University.)
“[P]ondering cowardice,” he writes, “illuminates (from underneath, as it were) our moral world. What we think about cowardice reveals a great deal about our conceptions of human nature and responsibility, about what we think an individual person can and should have to endure, and how much one owes to others, to community and cause.”
But apart from a typically thought-provoking paper by William Ian Miller a few years ago, cowardice has gone largely unpondered. Plato brought it up while en route to discussing courage. Aristotle stressed the symmetry between cowardice (too much fear, too little confidence) and rashness (too much confidence, too little fear) and went on to observe that rash men tended to be cowards hiding behind bluster.
That insight has survived the test of time, though it’s one of the few analyses of cowardice that Walsh can draw on. But in the historical and literary record it is always much more concrete. (In that regard it’s worth noting that the LOC catalog lists 44 novels about cowardice, as against just two nonfiction works.)
Until sometime in the 19th century, cowardice seems to have been equated simply and directly with fear. It was the immoral and unmanly lack of yearning for the chance at slaughter and glory. The author refers to the American Civil War as a possible turning point, or at least the beginning of a change, in the United States. By the Second World War, the U.S. Army gave new soldiers a pamphlet stating, up front, YOU’LL BE SCARED and even acknowledging their anxiety that they might prove cowards once in battle.
Courage was not an absence of fear but the ability to act in spite of it. This represented a significant change in attitude, and it had the advantage of being sane. But it did not get around a fundamental issue that Walsh shows coming up repeatedly, and one well-depicted in James Jones’s novel The Thin Red Line:
“[S]omewhere in the back of each soldier’s mind, like a fingernail picking uncontrollably at a scabby sore, was the small voice saying: but is it worth it? Is it really worth it to die, to be dead, just to prove to everybody you’re not a coward?”
The answer that the narrator of Louis-Ferdinand Céline’s Journey to the End of the Night gives about the First World War (“I wasn’t very bright myself, but at least I had sense enough to opt for cowardice once and for all”) sounds a lot like Mark Twain’s considered opinion in the matter: “The human race is a race of cowards, and I am not only marching in that procession but carrying a banner.”
Both were satirists, but there may be more to the convergence of sentiment than that. In the late 19th and early 20th centuries, war became mechanized and total, with poison gas and machine guns (just a taste of improvements to come) and whole populations mobilized by propaganda and thrown onto the battlefield. The moral defect of the coward was sometimes less than obvious, especially with some hindsight.
In Twain’s case, the remark about fundamental human cowardice wasn’t an excuse for his own military record, which was not glorious. (He numbered himself among the thousands who "entered the war, got just a taste of it, and then stepped out again permanently.") Walsh provides a crucial bit of context by quoting Twain’s comment that “man’s commonest weakness, his aversion to being unpleasantly conspicuous, pointed at, shunned” is better understood as moral cowardice, “the supreme make-up of 9,999 men in the 10,000.”
I’ve indicated a few of Walsh’s themes here, and neglected a few. (The yellow cover, for example, being a reminder of his pages on the link between cowardice and that color.) Someone might well write an essay about how overwhelmingly androcentric the discussion tends to be, except insofar as a male labeled as a coward is called womanly. This is strange. When the time comes for battle, a man can try to flee, but I’ve never heard of anyone escaping childbirth that way. And the relationship between moral cowardice (or courage) and the military sort seems complex enough for another book.
In 2009, the Cornell Law Review published an article called “The Anti-Corruption Principle” by Zephyr Teachout, then a visiting assistant professor of law at Duke University. In it she maintained that the framers of the U.S. Constitution were “obsessed” (that was Teachout’s word) with the dangers of political corruption – bribery, cronyism, patronage, the making of laws designed to benefit a few at the expense of public well-being, and so on.
Such practices, and the attitudes going with them, had eaten away, termite-like, at the ethos of the ancient Roman republic and done untold damage to the spirit of liberty in Britain as well. The one collapsed; the other spawned “rulers who neither see, nor feel, nor know / but leech-like to their fainting country cling,” as Shelley wrote some years later in a poem on George III’s reign. But in Teachout’s reading, the framers were obsessed with corruption without being fatalistic about it. The best way to reduce the chances of corruption was by reducing the opportunities for temptation – for example, by preventing any “Person holding any Office of Profit or Trust” from “accepting any present, Emolument, Office, or Title, of any kind whatever, from any King, Prince, or foreign State” without explicit permission from Congress. Likewise, the separation of powers among the executive, legislative, and judicial branches was, in part, an expression of the anti-corruption principle.
Teachout indicated in a footnote that her argument would be expanded in a forthcoming book, called The Meaning of Corruption, due out the following year. It was delayed. For one thing, Teachout moved to Fordham University, where she is now an associate professor of law. And for another, her law-review article gained the unusual eminence of being cited by two Supreme Court justices, Antonin Scalia and John Paul Stevens, in their opinions concerning the landmark Citizens United v. Federal Election Commission decision.
Now Teachout’s book has appeared as Corruption in America: From Benjamin Franklin’s Snuff Box to Citizens United, from Harvard University Press – an appreciably livelier title, increasing the likelihood (now pretty much a certainty) that it will inform the thinking of many rank-and-file Democratic Party supporters and activists.
Whether it will resonate with their leaders beyond the level of campaign rhetoric is another matter. Each of the two parties has a revolving door between elected office and the lobbying sector. While discussing the book here last week, I mentioned that suspicion and hostility toward lobbying were conspicuous in American political attitudes until fairly recently. They still are, of course, but with nothing like the intensity exhibited when the state of Georgia adopted a constitution outlawing the practice in 1877: “Lobbying is declared to be a crime, and the General Assembly shall enforce this provision by suitable penalties,” including a prison sentence of up to five years. Other efforts to curtail lobbying were less severe, though nonetheless sharper than today’s statutes requiring lobbyists to register and disclose their sources of funding.
“[T]he practice of paying someone else to make one’s arguments to people in authority,” writes Teachout, “threatened to undermine the moral fabric of civil society…. In a lobbyist-client relationship, the lobbyist, by virtue of being a citizen, has a distinct relationship to what he himself might believe. He is selling his own citizenship, or one of the obligations of his own citizenship, for a fee.”
The lobbyist’s activity is “more akin to selling the personal right to vote than selling legal skills,” as a lawyer does. Nor is that the only damage lobbying does to the delicate ecology of mutual confidence between state and citizen. It “legitimates a kind of routine sophistry and a casual approach towards public argument. It leads people to distrust the sincerity of public arguments and weakens their own sense of obligation to the public good” – thereby creating “the danger of a cynical political culture.” (So that’s how we got here.)
Clearly something went wrong. The anti-corruption principle, as Teachout formulates it, entails more than the prevention of certain kinds of acts – say, bribery. It’s also supposed to strengthen the individual citizen’s faith in and respect for authority while also promoting the general welfare. But private interest has a way of seeing itself as public interest, as exemplified in a railroad lobbyist’s remarks to Congress during the Gilded Age: If someone “won’t do right unless he’s bribed to do it,” he said, “…I think it’s a man’s duty to go up and bribe him.”
Teachout refers to an erosion of the anti-corruption principle over time, but much of her narrative documents a recurring failure to give anti-corruption laws teeth. “Criminal anticorruption laws were particularly hard to prosecute” during the 19th century, she writes, because “the wrongdoers – the briber and the bribed – had no incentive to complain,” while “the defrauded public was dispersed, with no identifiable victim who would drive the charge.” The concept of corruption has dwindled to bribery defined as quid pro quo in the narrowest possible terms: “openly asking for a deal in exchange for a specific government action.”
In a colloquy appearing in the Northwestern University Law Review, Seth Barrett Tillman, a lecturer in law at the National University of Ireland Maynooth, suggests that a core problem with Teachout’s argument is that it overstates how single-mindedly anti-corruption the framers of the U.S. Constitution actually were. The Articles of Confederation made broader anti-corruption provisions on some points, for example.
And “if the Framers believed that corruption posed the chief danger to the new Republic,” he writes, “one wonders why corrupt Senate-convicted and disqualified former federal officials were still eligible to hold state offices—offices which could indirectly affect significant operations of the new national government—and were also (arguably) eligible to hold congressional seats, thereby injecting corrupt officials directly into national policy-making.”
Concerned about corruption? Definitely. “Obsessed” with it? Not so much. There is much to like about Teachout’s book, but treating the framers of the Constitution as possessing the keys to resolving 21st-century problems seems extremely idealistic, and not in a good way.
Writing in 1860, a journalist depicted Washington as a miserable little Podunk on the Potomac, quite unworthy of its status as the nation’s capital. He called it an “out of the way, one-horse town, whose population consists of office-holders, lobby buzzards, landlords, loafers, blacklegs, hackmen, and cyprians – all subsisting on public plunder.”
“Hackmen” meant horse-powered cabbies. “Blacklegs” were crooked gamblers. And cyprians (lower-case) were prostitutes – a classical allusion turned slur, since Cyprus was a legendary birthplace of Aphrodite. Out-of-towners presumably asked hackmen where to find blacklegs and cyprians.
But sordid entertainment was really the least of D.C. vices. “The paramount, overshadowing occupation of the residents,” the newsman continued, having just gotten warmed up, “is office-holding and lobbying, and the prize of life is a grab at the contents of Uncle Sam’s till. The public-plunder interest swallows up all others, and makes the city a great festering, unbearable sore on the body politic. No healthy public opinion can reach down here to purify the moral atmosphere of Washington.”
Plus ça change! To be fair, the place has grown more metropolitan and now generates at least some revenue from tourism (plundering the public by other means). Zephyr Teachout quotes this description in Corruption in America: From Benjamin Franklin’s Snuff Box to Citizens United (Harvard University Press), a book that merits the large readership it may get thanks to the author’s recent appearance on “The Daily Show,” even if much of that interview concerned her remarkable dark-horse gubernatorial campaign in New York state’s Democratic primary, in which anti-corruption was one of her major themes. (Teachout is associate professor of law at Fordham University.)
The indignant commentator of 1860 could include lobbyists in the list of ne’er-do-wells and assume readers would share his disapproval. “Lobby buzzards” were about as respectable as card sharks and hookers. You can still draw cheers for denouncing their influence, of course, but Teachout suggests that something much deeper than cynicism was involved in the complaint. It had a moral logic – one implying a very different set of standards and expectations than prevails now, to judge by recent Supreme Court rulings.
Teachout’s narrative spans the history of the United States from its beginnings through Chief Justice John Roberts’s decision in McCutcheon v. FEC, less than six months ago. One of the books that gripped the country’s early leaders was Edward Gibbon’s Decline and Fall of the Roman Empire, the first volume of which happened to come out in 1776, and Teachout regards the spirit they shared with Gibbon as something like the crucial genetic material in the early republic’s ideological DNA.
To be clear, she doesn’t argue that Gibbon influenced the founders. Rather, they found in his history exceptionally clear and vivid confirmation of their understanding of republican virtue and the need to safeguard it by every possible means. A passage from Montesquieu that Thomas Jefferson copied into his notebook explained that a republican ethos “requires a constant preference of public to private interest [and] is the source of all private virtues….”
That “constant preference” required constant vigilance. The early U.S. statesmen looked to the ancient Roman republic as a model (“in creating something that has never yet existed,” a German political commentator later noted, political leaders “anxiously conjure up the spirits of the past to their service and borrow from them names, battle cries, and costumes in order to present the new scene of world history in this time-honored disguise and this borrowed language”).
But the founders also took from history the lesson that republics, like fish, rot from the head down. The moral authority, not just of this or that elected official, but of the whole government demanded the utmost scruple – otherwise, the whole society would end up as a fetid moral cesspool, like Europe. (The tendency to define American identity against the European other runs deep.)
Translating this rather anxious ideology into clear, sharp legislation was a major concern in the early republic, as Teachout recounts in sometimes entertaining detail. It was the diplomatic protocol of the day for a country’s dignitaries to present lavish gifts to foreign ambassadors -- as when the king of France gave diamond-encrusted snuffboxes, with his majesty’s portrait on them, to Benjamin Franklin and Thomas Jefferson. In Franklin’s case, at least, the gift expressed admiration and affection for him as an individual at least as much as it did respect for his official role.
But that was all the more reason to require Congressional approval. Doing one’s public duty must be its own reward, not an occasion for private benefit. Franklin received official permission to accept the snuffboxes, as did two other figures Teachout discusses. The practice grated on American sensibilities, but had to be tolerated to avoid offending an ally. Jefferson failed to disclose the gift to Congress and quietly arranged to have the diamonds plucked off and sold to cover his expenses.
Like the separation of powers among the executive, legislative, and judicial branches (another idea taken from Montesquieu), the division of Congress into House and Senate was also designed to preempt corruption: “The improbability of sinister combinations,” wrote Madison, “will be in proportion to the dissimilarity in genius of the two bodies.” Teachout quotes one delegate to the Constitutional Convention referring sarcastically to the “mercenary & depraved ambition” of “those generous & benevolent characters who will do justice to each other’s merit, by carving out offices & rewards for it.”
Hence the need for measures such as the clause in Article 1, Section 6 forbidding legislators from serving simultaneously in an appointed government position. It also barred them from accepting any such position created during their term in office. The potential for abuse was clear, but it could be contained. The clause was an effort “to avoid as much as possible every motive for corruption,” in another delegate’s words.
Corruption, so understood, clearly entails far more than bribery, nepotism, and the like – things done with an intent to influence the performance of official duties, in order to yield a particular benefit. The quid pro quo is only the most obvious level of the injustice. Beyond violating a rule or law, it undermines the legitimacy of the whole process. It erodes trust in even the ideal of disinterested official power. Public service itself begins to look like private interest carried on duplicitously.
The public-mindedness and lofty republican principles cultivated in the decades just after the American revolution soon enough clashed with the political and economic realities of a country expanding rapidly westward. There were fortunes to be made, and bribes to be taken. But as late as the 1880s, states were putting laws on the books to wipe out lobbying, on the grounds that it did damage to res publica.
Clearly a prolonged and messy process has intervened in the meantime, which we’ll consider in the next column, along with some of the criticism of Teachout’s ideas that has emerged since she began presenting them in legal journals a few years ago. Until then, consider the proposal that newspaper writer of the 1860s offered for how to clean the Augean stables of Washington: To clear out corruption, the nation’s capital should be moved to New York City, where it would be under a more watchful eye. Brilliant! What could possibly go wrong?
When young sociologists would consult with C. Wright Mills, it’s said, he would end his recommendations with what was clearly a personal motto: “Take it big!” It was the concentrated expression of an ethos: Tackle major issues. Ask wide-ranging questions. Use the tools of your profession, but be careful not to let them dig a mental rut you can’t escape.
Jo Guldi and David Armitage give much the same advice to their colleagues, and especially their colleagues-to-be, in The History Manifesto, a new book from Cambridge University Press. (Guldi is an assistant professor of history at Brown University, while Armitage is chair of the history department at Harvard.) Only by “taking it big” can their field regain the power and influence it once had in public life – and lost, somewhere along the line, to economics, with its faith in quantification and the seeming rigor of its concepts.
But issues such as climate change and growing economic inequality must be understood in terms of decades and centuries. The role of economists as counselors to the powerful has certainly been open to question over the past six years. Meanwhile, the world’s financial system continues to be shaped by computerized transactions conducted at speeds only a little slower than the decay of subatomic particles. And so, with their manifesto, the authors raise the call: Now is the time for all good historians to come to the aid of their planet.
But first, the discipline needs some major recalibration. “In 1900,” Guldi and Armitage write, “the average number of years covered [by the subject matter of] doctoral dissertations in history in the United States was about 75 years; by 1975, it was closer to 30.” The span covered in a given study is not the only thing that’s narrowed over the intervening four decades. Dissertations have “concentrated on the local and the specific as an arena in which the historian can exercise her skills of biography, archival reading, and periodization within the petri-dish of a handful of years.”
The problem isn’t with the monographs themselves, which are often virtuoso analyses by scholars exhibiting an almost athletic stamina for archival research. Guldi and Armitage recognize the need for highly focused and exhaustively documented studies in recovering the history of labor, racial and religious minorities, women, immigrants, LGBT people, and so forth.
But after two or three generations, the “ever narrower yet ever deeper” mode has become normative. The authors complain that it “determines how we write our studies, where we look for sources, and which debates we engage. It also determines where we break off the conversation.”
Or, indeed, whether the conversation includes a historical perspective at all. “As students in classrooms were told to narrow and to focus” their research interests, “the professionals who deal with past and future began to restrict not only their sources and their data, but sometimes also their ideas.”
In referring to “professionals who deal with past and future,” the authors do not mean historians themselves -- at least not exclusively -- but rather leaders active at all levels of society. The relevance of historical knowledge to public affairs (and vice versa) once seemed obvious. Guldi and Armitage point to Machiavelli’s commentary on Livy as one example of a political figure practicing history, while Eric Williams, who wrote Capitalism and Slavery for his doctoral dissertation, went on to serve as Trinidad’s first prime minister after it became independent.
Between extreme specialization by historians and politicians whose temporal horizons are defined by the election cycle, things look bad. That understanding the past might have some bearing on actions in the present may not seem all that difficult to grasp. But consider the recent American president who invaded a Middle Eastern country without knowing that its population consisted of two religious groups who, over the past millennium or so, have been less than friendly toward one another. (For some reason, I thought of that a couple of times while reading Guldi and Armitage.) Anyway, it did turn out to be kind of an issue.
A manifesto requires more than complaint. It must also offer a program and, as much as possible, rally some forces for realizing its demands. The cure for short-term thinking in politics that Guldi and Armitage propose is the systematic cultivation of long-term thinking in history.
And to begin with, that means putting the norms of what they call “microhistory” in context – keeping in mind that it is really a fairly recent development within the profession. (Not so many decades ago, a historical study covering no more than a hundred years ran the risk of being dismissed as a bit of a lightweight.) The authors call for a revival of the great Fernand Braudel’s commitment to study historical processes “of long, even of very long, duration,” as he said in the late 1950s.
Braudel’s longue durée was the scale on which developments such as the consolidation of trade routes or the growth of a world religion took place: centuries, or millennia. These phenomena “lasted longer than economic cycles, to be sure,” Guldi and Armitage write, but “were significantly shorter than the imperceptibly shifting shapes of mountains or seas, or the rhythms of nomadism or transhumance.”
Braudel counterposed the longue durée to “the history of events,” which documented ephemeral matters such as wars, political upheaval, and whatnot. The History Manifesto is not nearly so Olympian as that. The aim is not to obliterate what the authors call “the Short Past” but rather to encourage research that would put “events” in the perspective of rhythms of change extending beyond a single human lifetime.
The tools are available. Guldi and Armitage’s proposed course seems inspired by Big Data as much as by Braudel. Drawing on pools of scattered information about “weather, trade, agricultural production, food consumption, and other material realities,” historians could create broad but detailed accounts of how social and environmental conditions change over long periods.
“Layering known patterns of reality upon each other,” the authors say, “produces startling indicators of how the world has changed – for instance the concentration of aerosols identified from the mid-twentieth century in parts of India have proven to have disrupted the pattern of the monsoon…. By placing government data about farms next to data on the weather, history allows us to see the interplay of material change with human experience, and how a changing climate has already been creating different sets of winners and losers over decades.”
Any number of questions come to mind about causality, the adequacy of available documents, and whether one’s methodology identifies patterns or creates them. But that’s always the case, whatever the scale a historian is working on.
The History Manifesto is exactly as tendentious as the title would suggest – and if the authors find it easier to make their case against “microhistory” by ignoring the work of contemporary “macrohistorians” … well, that’s the nature of the genre. A few examples off the top of my head: Perry Anderson’s Passages from Antiquity to Feudalism and Lineages of the Absolutist State, Michael Mann’s multivolume study of social power over the past five thousand years, and the essays by Gopal Balakrishnan collected in Antagonistics, which grapple with the longue durée in terms both cosmopolitan and stratospheric. The authors also shuffle quietly past the work of Oswald Spengler, Arnold Toynbee, and Carroll Quigley – an understandable oversight, given the questions that would come up about where megahistory ends and megalomania takes over.
Moments of discretion aside, The History Manifesto is a feisty and suggestive little book, and it should be interesting to see whether much of the next generation of historians will gather beneath its banner.
It's taken a while, but we’ve made a little progress on the mathesis universalis that Leibniz envisioned 300 or so years ago – a mathematical language describing the world so perfectly that any question could be answered by performing the appropriate calculations.
Aware that the computations would be demanding, Leibniz also had in mind a machine to do them rapidly. On that score things are very much further along than he could ever have imagined. And while the mathesis universalis itself seems destined to remain only the most beautiful dream of rationalist philosophy, there’s no question that Leibniz would appreciate the incredible power to store and retrieve information that we’ve come to take for granted. (Besides being a polymathic genius, he was a librarian.)
Johanna Drucker’s Graphesis: Visual Forms of Knowledge Production, published by Harvard University Press, focuses in part on the capacity of maps, charts, diagrams, and other modes of display to encode and organize information. But only in part: while Drucker’s claims for the power of visual language are less extravagantly ambitious than Leibniz’s for mathematical symbols, it is a matter of degree and not of kind. (The author is professor of bibliographical studies at the Graduate School of Education and Information Studies of the University of California at Los Angeles.)
“The complexity of visual means of knowledge production,” she writes, “is matched by the sophistication of our cognitive processing. Visual knowledge is as dependent on lived, embodied, specific knowledge as any other field of human endeavor, and integrates other sense data as part of cognition. Not only do we process complex representations, but we are imbued with cultural training that allows us to understand them as knowledge, communicated and consensual, in spite of the fact that we have no ‘language’ of graphics or rules governing their use.”
Forget the old saw about a picture being worth a thousand words. Drucker’s claim is not about pictorial imagery, as such. A drawing or painting may communicate information about how a person or place looks, but the forms she has in mind (bar graphs, for example, or Venn diagrams) perform a more complex operation. They convert information into something visually apprehended.
We learn to understand and use these visual forms so readily that they seem almost self-evident. Some people know how to read a map better than others -- but all of us can at least recognize one when we see it. Likewise with tables, graphs, calendars, and family trees. In each case we intuitively understand how the data are organized, if not what they mean.
But the pages of Graphesis teem with color reproductions of 5,000 years’ worth of various modes of visually rendered knowledge – showing how they have emerged and developed over time, growing familiar but also defining or reinforcing ways to apprehend information.
A good example is the mode of plotting information on a grid. Drucker reproduces a chart of planetary movements in that form from a 10th-century edition of Macrobius. But the idea didn’t catch on: “The idea of graphical plotting either did not occur, or required too much of an abstraction to conceptualize.” The necessary leap came only in the early 17th century, when Descartes reinvented the grid in developing analytical geometry. His mathematical tool “combined with intensifying interest in empirical measurements,” writes Drucker, “but they were only slowly brought together into graphic form. Instruments adequate for gathering ‘data’ in repeatable metrics came into play … but the intellectual means for putting such information into statistical graphs only appeared in fits and starts.”
And in the 1780s, a political economist invented a variation on the form by depicting the quantity of various exports and imports of Scotland as bars on a graph – an arresting presentation, in that it shows one product being almost twice as heavily traded as any other. (The print is too small for me to determine what it was.) The advantages of the bar graph in rendering information to striking effect seem obvious, but it, too, was slow to enter common use.
“We can easily overlook the leap necessary to abstract data and then give form to its complexities,” writes Drucker. And once the leap is made, it becomes almost impossible to conceive such data without the familiar visual tools.
If the author ever defines her title term, I failed to mark the passage, but graphesis would presumably entail a comprehensive understanding of the available and potential means to record and synthesize knowledge, of whatever kind, in visual form. Drucker’s method is in large measure inductive: She examines a range of methods of presenting information to the eye and determines how the elements embed logical concepts into images.
While art history and film studies (especially work on editing and montage) are relevant to some degree, Drucker’s project is very much one of exploration and invention. Leibniz’s mathesis was totalizing and deductive; once established, his mathematical language would give final and definitive answers. By contrast, graphesis would entail the regular creation of new visual tools in keeping with the appearance of new kinds of knowledge, and new media for transmitting it.
“The ability to think in and with the tools of computational and digital environments,” the author warns, “will only evolve as quickly as our ability to articulate the metalanguages of our engagement.”
That passage, which is typical, is some indication of why Graphesis will cull its audience pretty quickly. Some readers will want to join her effort; many more will have some difficulty in imagining quite what it is. Deepening the project's fascination, for those drawn to it, is Drucker's recognition of an issue so new that it still requires a name: What happens to the structuring of knowledge when maps, charts, etc. appear not just on a screen, but one responsive to touch? The difficulties that Graphesis presents are only incidentally matters of diction; the issues themselves are difficult. I suspect Graphesis may prove to be an important book, for reasons we'll fully understand only somewhere down the line.