You don’t hear much about the United States being a “postracial society” these days, except when someone is dismissing bygone illusions of the late ’00s, or just being sarcastic. With the Obama era beginning to wind down (as of this week, the president has just under 18 months left in office) American life is well into its post-post-racial phase.
Two thoughts: (1) Maybe we should retire the prefix. All it really conveys is that succession does not necessarily mean progress. (2) It is easy to confuse an attitude of cold sobriety about the pace and direction of change with cynicism, but they are different things. For one, cynicism is much easier to come by. (Often it’s just laziness pretending to be sophisticated.) Lucid assessment, on the other hand, is hard work and not for the faint of spirit.
Naomi Zack’s White Privilege and Black Rights: The Injustice of U.S. Police Racial Profiling and Homicide (Rowman & Littlefield) is a case in point. It consists of three essays plus a preface and conclusion. Remarks by the author indicate it was prepared in the final weeks of last year, with the events in Ferguson, Mo., fresh in mind. But don’t let the title or the book’s relative brevity fool you. The author is a professor of philosophy at the University of Oregon -- and when she takes up terms such as “white privilege” or “black rights,” it is to scrutinize the concepts rather than to use them in slogans.
Despite its topicality, Zack’s book is less a commentary on recent events than part of her continuing effort to think, as a philosopher, about questions of race and justice that are long-standing, but also prone to flashing up, on occasion, with great urgency -- demanding a response, whether or not philosophers (or anyone else) is prepared to answer them.
Zack distinguishes between two ways of philosophizing about justice. One treats justice as an ideal that can be defined and reasoned about, even if no real society in human history ever “fully instantiates or realizes an ideal of justice for all members of that society.” Efforts to develop a theory of justice span the history of Western philosophy.
The other approach begins with injustice and seeks to understand and correct it. Of course, that implies that the philosopher already has some conception of what justice is -- which would seem to beg the question. But Zack contends that theories of justice also necessarily start out from pre-existing beliefs about what it is, which are then strengthened or revised as arguments unfold.
“However it may be done and whatever its subject,” Zack writes, “beginning with concrete injustice and ending with proposals for its correction is a very open-ended and indeterminate task. But it might be the main subject of justice about which people who focus on real life and history genuinely care.”
The philosopher Zack describes may not start out with a theory of what justice is. But that’s OK -- she can recognize justice, paradoxically enough, when it's gone missing.
I wish the author had clarified the approach in the book’s opening pages, rather than two-thirds of the way through, because it proves fundamental to almost everything else she says. She points out how police killings of young, unarmed African-American males over the past couple of years are often explained with references to “white privilege” and “the white supremacist system” -- examples of a sort of ad hoc philosophizing about racial injustice in the United States, but inadequate ones in Zack’s analysis.
Take the ability to walk around talking on the phone carrying a box of Skittles. It is not a “privilege” that white people enjoy, as should be obvious from the sheer absurdity of putting it that way. It is one of countless activities that a white person can pursue without even having to think about it. “That is,” Zack writes, “a ‘privilege’ whites are said to have is sometimes a right belonging to both whites and nonwhites that is violated when nonwhites are the ones who [exercise] it.”
In the words of an online comment the author quotes, “Not fearing the police will kill your child for no reason isn’t a privilege, it’s a right.” The distinction is more than semantic. What Zack calls “the discourse of white privilege” not only describes reality badly but fosters a kind of moral masochism, inducing “self-paralysis in the face of its stated goals of equality.” (She implies that white academics are particularly susceptible to "hold[ing] … progressive belief structures in intellectual parts of their life that are insulated from how they act politically and privately …")
Likewise, “the white supremacist power structure” is a way of describing and explaining oppression that is ultimately incapacitating: “After the civil rights movement, overt and deliberate discrimination in education, housing and employment were made illegal and explicitly racially discriminatory laws were prohibited.” While “de facto racial discrimination is highly prevalent in desirable forms of education, housing and employment,” it does no one any good to assume that “an officially approved ideology of white supremacy” remains embodied in the existing legal order.
None of which should be taken to imply that Zack denies the existence of deep, persisting and tenacious racial inequality, expressed and reinforced through routine practices of violence and humiliation by police seldom held accountable for their actions. But, she says, "What many critics may correctly perceive as societywide and historically deep antiblack racism in the United States does not have to be thoroughly corrected before the immediate issue of police killings of unarmed young black men can be addressed."
She is not a political strategist; her analyses of the bogus logic by which racial profiling and police killings are rationalized are interesting, but how to translate them into action is not exactly clear. In the end, though, justice and injustice are not problems for philosophers alone.
If you can remember the 1960s, the old quip goes, you weren’t really part of them. By that standard, the most authentic participants ended up as what used to be called “acid casualties”: those who took spiritual guidance from Timothy Leary’s injunction to “turn on, tune in and drop out” and ended up stranded in some psychedelic heaven or hell. Not that they’ve forgotten everything, of course. But the memories aren’t linear, nor are they necessarily limited to the speaker’s current incarnation on this particular planet.
Fortunately Stephen Siff can draw on a more stable and reliable stratum of cultural memory in Acid Hype: American News Media and the Psychedelic Experience (University of Illinois Press). At the same time, communicating about the world as experienced through LSD or magic mushrooms was ultimately as difficult for a sober newspaper reporter, magazine editor or video documentarian as conversation tends to be for someone whose mind has been completely blown. The author, an assistant professor of journalism at Miami University in Ohio, is never less than shrewd and readable in his assessment of how various news media differed in method and attitude when covering the psychedelic beat. The slow and steady buildup of hype (a word Siff uses in a precise sense) precipitated an early phase of the culture wars -- sometimes in ways that partisans now might not expect.
Papers on experimentation with LSD were published in American medical journals as early as 1950, and reports on its effects from newspaper wire services began tickling the public interest by 1954. The following year, mass-circulation magazines were devoting articles to LSD research, followed in short order by a syndicated TV show’s broadcast of film footage showing someone under the influence. The program, Confidential File, sounds moderately sleazy (the episode in question was described as featuring “an insane man in a sensual trance”) but much of the early coverage was perfectly respectable, treating LSD as a potential source of insight into schizophrenia, or a potential expressway to the unconscious for psychoanalysts.
But the difference between rank sensationalism and science-boosting optimism may count for less, in Siff’s interpretation, than how sharply coverage of LSD broke with prevailing media trends that began coming into force in the 1920s.
After the First World War, with wounded soldiers coming back with a morphine habit, newspapers carried on panic-stricken anti-drug crusades (“The diligent dealer in deadly drugs is at your door!”) and any publication encouraging recreational drug use, or treating it as a fact of life, was sure to fall before Harry Anslinger’s watchful eye. Early movie audiences enjoyed the comic antics of Douglas Fairbanks Sr.’s detective character Coke Ennyday (always on the case, syringe at the ready), or in a more serious mood they could go to For His Son, D. W. Griffith’s touching story of a man’s addiction to Dopokoke, the cocaine-fueled soft drink that made his father rich. But by the time the talkies came around, the Motion Picture Production Code categorically prohibited any depiction of drug use or trafficking, even as a criminal enterprise. Siff notes that in the 20 years following the code’s establishment in 1930, “not a single major Hollywood film dealing with drug use was distributed to the public.”
Not that depictions of substance abuse were a forbidden fruit the public was craving, exactly. But the relative openness of the mid-1950s (emphasis on “relative”) allowed editors to risk publishing stories on what was, after all, serious research on a potential new wonder drug. Siff points out that general-assignment newspaper reporters attending a scientific or medical conference, unable to tell what sessions were worth covering, could feel reasonably confident that a title mentioning LSD would probably yield a story.
At the same time, writers for major newsmagazines and opinion journals were following the lead of Aldous Huxley, the novelist and late-life religious searcher, who wrote about mystical experiences he had while taking mescaline. In 1955, when the editors of Life magazine decided to commission a feature on hallucinogenic mushrooms, they turned to Wall Street banker and amateur mycologist R. Gordon Wasson. He traveled to Mexico and became, in his own words, one of “the first white men in recorded history to eat the divine mushroom” -- and if not, then surely the first to give an eyewitness report on “the archetypes, the Platonic ideals, that underlie the imperfect images of everyday life” in the pages of a major newsweekly.
Suffice it to say that by the time Timothy Leary and associates come on the scene (wandering around Harvard University in the early 1960s, with continuously dilated pupils and only the thinnest pretense of scientific research) it is rather late in Siff’s narrative. And Leary’s legendary status as psychedelic shaman/guru/huckster seems much diminished by contrast with the less exhibitionistic advocacy of LSD by Henry and Clare Boothe Luce. Beatniks and nonconformists of any type were mocked regularly in the pages of Time or Life, but the Luce publications were for many years very enthusiastic about the potential benefits of LSD. The power couple tripped frequently, and hard. (Some years ago, when I helped organize Mrs. Luce’s papers at the Library of Congress, the LSD notes were a confidence not to be breached, but now the experiments are a matter of public record.)
The hippies, in effect, seem like a late and entirely unintentional byproduct of industrial-strength hype. “During an episode of media hype,” Siff writes, “news coverage feeds on itself, as different news outlets follow and expand on one another’s stories, reacting among themselves and to real-world developments. Influence seems to flow from the larger news organizations to smaller ones, as editors at smaller or more marginal media operations look toward the decisions made by major outlets for ideas and confirmation of their own judgment.”
That is the process, broadly conceived. In Acid Hype, Siff charts the details -- especially how the feedback bounced around between news organizations, not just of different sizes, but with different journalistic cultures. Newspaper coverage initially stuck to the major talking points of LSD researchers; it tended to stress the potential wonder-drug angle, even when the evidence for it was weak. Major magazines wanted to cover the phenomenon in greater depth -- among other things, with firsthand reports on the psychedelic universe by people who’d gone there on assignment. Meanwhile, the art directors tried to figure out how to convey far-out experiences through imagery and layout -- as, in time, did TV producers. (Especially on Dragnet, if memory serves.)
Some magazine editors seem to have been put off by the religious undercurrents of psychedelic discourse. Siff exhibits a passage in a review that quotes Huxley’s The Doors of Perception but carefully removes any biblical or mystical references. But someone like Leary, who proselytized about psychedelic revolution, was eminently quotable -- plus he looked good on TV because (per the advice of Marshall McLuhan) he smiled constantly.
The same hype-induction processes that made hallucinogens seem like the next step toward improving the American way of life (or, conversely, the escape route for an alternative to it) also went into effect when the tide turned: just as dubious claims about LSD’s healing properties were reported without question (it’ll cure autism!), so were horror stories about side effects (it’ll make you stare at the sun until you go blind!).
The reaction seems to have been much faster and more intense than the gradual pro-psychedelic buildup. Siff ends his account of the period in 1969 -- oddly enough, without ever mentioning the figure who emerged into public view that year as the embodiment of LSD's presumed demons: Charles Manson. You didn't hear much about the drug's spiritual benefits after Charlie began explaining them. That was probably for the best.
The Library of Congress today named Juan Felipe Herrera, professor emeritus of creative writing at the University of California at Riverside, the next U.S. poet laureate. Herrera is the author of 28 books of poetry, novels for young adults and collections for children. In a statement, James H. Billington, the librarian of Congress, said, “I see in Herrera’s poems the work of an American original -- work that takes the sublimity and largesse of 'Leaves of Grass' and expands upon it. His poems engage in a serious sense of play -- in language and in image -- that I feel gives them enduring power. I see how they champion voices and traditions and histories, as well as a cultural perspective, which is a vital part of our larger American identity.”
Herrera will be the first Latino to hold the position of poet laureate.
Reading the Emancipation Proclamation for the first time is an unforgettable experience. Nothing prepares you for how dull it turns out to be. Ranking only behind the Declaration of Independence and the Constitution in its consequences for U.S. history, the document contains not one sentence that has passed into popular memory. It was the work, not of Lincoln the wordsmith and orator, but of Lincoln the attorney. In fact, it sounds like something drafted by a group of lawyers, with Lincoln himself just signing off on it.
Destroying an institution of systematic brutalization -- one in such contradiction to the republic’s professed founding principles that Jefferson’s phrase “all men are created equal” initially drew protests from slave owners -- would seem to require a word or two about justice. But the proclamation is strictly a procedural document. The main thrust comes from an executive order issued in late September 1862, “containing, among other things, the following, to wit: ‘That on the first day of January, in the year of our Lord one thousand eight hundred and sixty-three, all persons held as slaves within any State or designated part of a State, the people whereof shall then be in rebellion against the United States, shall be then, thenceforward, and forever free….’”
Then -- as if to contain the revolutionary implications of that last phrase -- the text doubles down on the lawyerese. The proclamation itself was issued on the aforesaid date, in accord with the stipulations of the party of the first part, including the provision recognizing “the fact that any State, or the people thereof, shall on that day be, in good faith, represented in the Congress of the United States by members chosen thereto at elections wherein a majority of the qualified voters of such State shall have participated, shall, in the absence of strong countervailing testimony, be deemed conclusive evidence that such State, and the people thereof, are not then in rebellion against the United States.”
In other words: “If you are a state, or part of a state, that recognizes the union enough to send representatives to Congress, don’t worry about your slaves being freed right away and without compensation. We’ll work something out.”
Richard Hofstadter got it exactly right in The American Political Tradition (1948) when he wrote that the Emancipation Proclamation had “all the moral grandeur of a bill of lading.” It is difficult to believe the same author could pen the great memorial speech delivered at Gettysburg a few months later -- much less the Second Inaugural Address.
But to revisit the proclamation after reading Edna Greene Medford’s Lincoln and Emancipation (Southern Illinois University Press) is also a remarkable experience -- a revelation of how deliberate, even strategic, its lawyerly ineloquence really was.
Medford, a professor of history at Howard University, was one of the contributors to The Emancipation Proclamation: Three Views (Louisiana State University Press, 2006). Her new book is part of SIUP’s Concise Lincoln Library, now up to 17 volumes. Medford’s subject overlaps with topics covered by earlier titles in the series (especially the ones on race, Reconstruction and the Thirteenth Amendment) as well as with works such as Eric Foner’s The Fiery Trial: Abraham Lincoln and American Slavery (Norton, 2010).
Even so, Medford establishes her own approach by focusing not only on Lincoln’s ambivalent and changing sense of what he could and ought to do about slavery (a complex enough topic in its own right) but also on the attitudes and activities of a heterogeneous and dispersed African-American public with its own priorities.
For Lincoln, abolishing the institutionalized evils of slavery was a worthy goal but not, as such, an urgent one. As of 1860, his primary concern was that it not spread to the new states. After 1861, it was to defeat the slaveholders’ secession -- but without making any claim to the power to end slavery itself. He did support efforts to phase it out by compensating slave owners for manumission. (Property rights must be respected, after all, went the thinking of the day.) His proposed long-term solution for racial conflict was to send the emancipated slaves to Haiti, Liberia, or someplace in Central America to be determined.
Thanks in part to newspapers such as The Weekly Anglo-African, we know how free black citizens in the North responded to Lincoln, and it is clear that some were less than impressed with his antislavery credentials. “We want Nat Turner -- not speeches,” wrote one editorialist; “Denmark Vesey -- not resolutions; John Brown -- not meetings.” Especially galling, it seems, were Lincoln’s plans to reimburse former slave owners for their trouble while uprooting ex-slaves from land they had worked for decades. African-American commentators argued that Lincoln was getting it backward. They suggested that the ex-slaves be compensated and their former masters shipped off instead.
To boil Medford’s succinct but rich narrative down into something much more schematic, I’ll just say that Lincoln’s cautious regard for the rights of property backfired. Frederick Douglass wrote that the slaves “[gave] Mr. Lincoln credit for having intentions towards them far more benevolent and just than any he is known to cherish…. His pledges to protect and uphold slavery in the States have not reached them, while certain dim, undefined, but large and exaggerated notions of his emancipating purpose have taken firm hold of them, and have grown larger and firmer with every look, nod, and undertone of their oppressors.” African-American Northerners and self-emancipating slaves alike joined the Union army, despite all the risks and the obstacles.
The advantage this gave the North, and the disruption it created in the South, changed abolition from a moral or political concern to a concrete factor in the balance of forces -- and the Emancipation Proclamation, for all its uninspired and uninspiring language, was Lincoln’s concession to that reality. He claimed the authority to free the slaves of the Confederacy “by virtue of the power in me vested as Commander-in-Chief, of the Army and Navy of the United States in time of actual armed rebellion against the authority and government of the United States, and as a fit and necessary war measure for suppressing said rebellion.”
Despite its fundamentally practical motivation and its avoidance of overt questions about justice, the proclamation was a challenge to the American social and political order that had come before. And it seems to have taken another two years before the president himself could spell out its implications in full, in his speech at the Second Inaugural. The depth of the challenge is reflected in each week's headlines, though to understand it better you might want to read Medford's little dynamite stick of a book first.
You don’t often come across references to “the moral sciences” these days, unless you read a lot of biographies of well-educated Victorians, and maybe not even then. The term covers economics, psychology, anthropology and other fields in what are now usually called the social sciences. I’m not sure when the one gave way to the other. If the older expression sounds odd to the modern ear, that’s probably because anything called a science now implicitly rests on a fundamental distinction between fact and value.
A science sticks to “is” rather than “ought” -- or it ought to, anyway. (Whether or not the fact/value dichotomy is valid or coherent is a long discussion in itself.) Stjepan Mestrovic, a professor of sociology at Texas A&M University, does not overtly challenge the principle of value-free social science in The Postemotional Bully (SAGE). He uses concepts and distinctions from the canon of classical social theory to interpret contemporary cultural and behavioral trends.
But the ideas he draws on carry a certain amount of residue from the era of the moral sciences, while the phenomena he analyzes (the happy-face sadism at Abu Ghraib, for example) are too disturbing for studied neutrality to seem like anything but complicity.
George Orwell provides the book with its point of departure. “What,” Orwell asked in an article from 1946 that Mestrovic quotes, “is the special quality in modern life that makes a major human motive out of the impulse to bully others? If we could answer that question -- seldom asked, never followed up -- there might occasionally be a bit of good news on the front page of your morning paper.”
Sociology’s founding fathers never tackled the question, as such. But they did create a whole array of fundamental concepts about how the social system emerging over the past two hundred-odd years (often called modernity, a.k.a. capitalism, industrial society, mass society and related aliases) differed from the smaller-scale, slower-moving patterns of life that had gone before. And so Mestrovic can draw on Ferdinand Tonnies’s contrast between Gemeinschaft (community: organic, strong bonds, face-to-face relationships prevail) and Gesellschaft (society: change and dislocation common, regular contact with strangers, many interactions involve an exchange of money). Or on David Riesman’s interpretation of American society as moving from an inner-directed era (during which the sense of personal identity was shaped by values absorbed from parents and authorities) to one that is other-directed (the individual “is group oriented, conformist and changes values constantly to fit into norms and values that are in constant flux,” in Mestrovic’s words).
Other theorists and concepts also enter the discussion, but here it might be best to look in the general direction that the author steers them. An overarching schema of recent decades posits a sort of three-stage movement from a traditional order -- the Gemeinschaft, more or less -- to modern society, where the inner-directed people, at least, functioned with a sense of individual identity and established obligations, despite the alienation and other distractions of the Gesellschaft, including the antics of the other-directed. And beyond that? The postmodern condition, of course, which is endlessly defined and disputed without even reaching a plurality (much less a consensus) as to its meaning.
As an alternative, Mestrovic proposed his concept of “postemotional society” in the 1990s. An awkward expression, it has the sole virtue of allowing its creator to avoid sinking into the postmodernist quicksand without a trace. Perhaps the clearest way to explain postemotionality is to treat it as an extension and updating of Riesman’s characterization of other-directedness. The other-directed person relies on a peer group, rather than a deeply rooted and stringent superego, in determining what’s important and how to behave. Those accepted standards, in turn, often reflect current trends in film, advertising and mass media. By contrast Mestrovic’s postemotional type takes his or her cues from a culture more volatile and ephemeral than anything Riesman, writing in the early 1950s, could have imagined.
Postemotionality is -- for want of a more elegant way of putting it -- hyper-other-directed. It involves relationships that are “not intimate but also not alien.” The individual is “plugged and hooked into his or her electronic screen devices… pretend[ing] to be ‘in touch’ with others and the world” yet in reality “trapped in an electronic solitary confinement.”
But the condition is more than a symptom of the new digital order. In an insight combining Émile Durkheim’s thoughts on the division of labor with Donald Rumsfeld’s doctrine of manpower deployment (“People are fungible. You can have them here or there”), Mestrovic stresses that workplaces and institutions are increasingly prone to “the reduction of the human being as a fungible asset” that is “replaceable and interchangeable.” This aspect of the argument remains underdeveloped, but seems to echo recent discussions of precarity in employment. At the same time, society “aims much of its mechanical, intellectual, artificial and productive powers at the task of systematically faking community and its traits: the managed heart, fake sincerity, false kindness,” and so on, creating “new hybrid forms of emotional life that are neither entirely fake nor sincere.”
And beneath the forced smile -- that emblem of a “social life based upon dead emotions from the past” -- there lurks a considerable potential for cruelty. The author quotes Veblen’s remark that in modern society “simple aggression and unrestrained violence in great measure give place to shrewd practices and chicanery, as the best-approved method of accumulating wealth.”
But the process is not irreversible. The book takes up three cases of contemporary brutalization, postemotional style -- Abu Ghraib, the prolonged and ultimately fatal beating of an unresisting prisoner in an American jail, and a soldier driven to suicide by racial slurs and physical torture. I’ll forgo any discussion of them beyond noting that in each case, nobody within the chain of command was held accountable, the perpetrators’ rationalizations were more or less accepted by the court, and little or no punishment followed.
Add to that the prevailing tone of “screen culture” -- indignation, rage, contempt, malice and a certain cheerful callousness -- and the case can be made that Mestrovic has identified a real tendency, at least in American culture. I doubt the expression “postemotional” will ever catch on, though. It’s just the new normal.
“If you spend much time in libraries,” the late Northrop Frye wrote at the start of an essay from 1959, “you will probably have seen long rows of dark green books with gold lettering, published by Macmillan and bearing the name of Frazer.” These were the collected works of the Victorian classicist and anthropologist Sir James Frazer, author of The Golden Bough (12 volumes) and a great deal else besides.
Frye’s remarks -- originally delivered as a talk on the Canadian Broadcasting Corporation’s radio network -- were aimed at a much broader public than would have read his then-recent book Anatomy of Criticism, which made its author the most-cited name in Anglophone literary studies until at least the early 1980s. (Frye was professor emeritus of English at Victoria College, University of Toronto, when he died in 1991.) He told listeners that it would require “a great many months of hard work, without distractions, to read completely through Frazer.”
And the dedicated person making the effort probably wouldn’t be an anthropologist. The discipline’s textbooks “were respectful enough about him as a pioneer,” Frye wrote, “but it would have taken a Geiger counter to find much influence of The Golden Bough in them.”
And yet Frazer’s ideas about myth and ritual and his comparative approach to the analysis of symbolism exercised an abiding fascination for other readers -- in part through the echoes of them audible in T. S. Eliot’s “The Waste Land,” but also thanks to Frazer’s good sense in preparing an abridged edition of The Golden Bough in one stout volume that it was entirely possible to finish reading in no more than a year.
If you spend much time in libraries these days -- wandering the stacks, that is, rather than sitting at a terminal -- you might have seen other long rows of dark green books with gold lettering, published by the University of Toronto Press and bearing the name of Frye.
The resemblance between The Collected Works of Northrop Frye (in 30 volumes) and the Frazerian monolith is almost certainly intentional, though not the questions such a parallel implies: What do we do with a pioneer whose role is acknowledged and honored, but whose work may be several degrees of separation away from where much of the contemporary intellectual action is? Who visits the monument now? And in search of what?
Part of the answer may be found in Essays on Northrop Frye: Word and Spirit, a new collection of studies by Robert D. Denham, professor emeritus of English at Roanoke College. The publisher named on the title page is Iron Mountain Press of Emory, Va., which appears not to have a website; the listing for the book on Amazon indicates that it is available through CreateSpace Independent Publishing Platform, a print-on-demand service.
Denham has written or edited more than 30 books by or about Frye, including several volumes of notebooks, diaries, letters and works of fiction in the Collected Works, for which he also prepared the definitive edition of Anatomy of Criticism. The second of the three sections in Word and Spirit (as I prefer to call the new book) consists of essays on the Anatomy, examining Frye’s ideas about rhetoric and the imagination and brandishing them in the face of dismissive remarks by Frederick Crews and Tzvetan Todorov.
Frye’s relative decline as a force to be reckoned with in literary theory was already evident toward the end of his life; at this point the defense of Frygian doctrine may seem like a hopelessly arrière-garde action. (“Frygian” is the preferred term, by the way, at least among the Frygians themselves.) But the waning of his influence at the research-university seminar level is only part of the story, and by no means the most interesting part. The continuing pedagogical value of the Anatomy is suggested by how many of Frye’s ideas and taxonomies have made their way into Advanced Placement training materials. Anyone trying to find a way around in William Blake’s poetic universe can still do no better than to start with Frye’s first book, Fearful Symmetry (1947). Before going to see Shakespeare on stage, I’ve found it worthwhile to see what Frye had to say about the play. Bloggers periodically report reading the Anatomy, or Frye’s two books about the Bible and literature, and having their minds blown.
Northrop Frye is the rare case of a literary theorist whose critical prose continues to be read with interest and profit by people who are not engaged in producing more of the stuff. In the talk on Frazer, he noted that The Golden Bough appealed to artists, poets and “students of certain aspects of religion” -- which seems, on the whole, like a fair guess at the makeup of Frye’s own posthumous constituency.
What’s been lacking is the single-volume, one-stop survey of the Frygian landscape. The Collected Works have complicated things -- not just by being vast and intimidating (and too expensive for most individuals to afford) but by adding thousands of pages of unpublished material to the already imposing mass of Frye’s work.
Denham is as responsible for adding new turns to the labyrinth as anyone. He is the scholar dedicated enough to have solved the riddle of the great man’s handwriting. Most of the lectures and papers in Essays on Northrop Frye: Word and Spirit draw on the private papers, which are of considerably more than biographical interest. Frye used his notebooks to think out loud and to explain himself to himself, working out the links among the work he’d published and things he wanted to write.
They reveal elements of his inner life that remained unstated, or at most implicit, in Frye’s public writings -- for example, his studies in Buddhist and Hindu thought. He also explored the whole gamut of esoteric and mystical writings from the Corpus Hermeticum and Nicholas of Cusa (respectable) to Madame Blavatsky and Aleister Crowley (shady but undeniably fascinating) to titles such as The Aquarian Conspiracy and Cosmic Trigger: The Final Secret of the Illuminati (“kook books,” as Frye called them). Connections existed between this material and his scholarship (you can’t study Blake or Yeats for long without picking up some Gnosticism and theosophy) but Frye also needed to understand his own religious beliefs and occasional experiences of the ineffable. He was interested in the cosmological side of the literary imagination, but also compelled to figure out his own place in the cosmos.
The drives were mutually reinforcing. But references to these interests in his published work were few and far between, and often enough too oblique to notice. With Denham’s close knowledge of Frye’s writings (scholarly and subterranean alike) Word and Spirit seems like the book that’s been necessary for some while -- the thread that can take readers into the depths of the Frygian labyrinth. So on those grounds, I can recommend it -- without guaranteeing you’ll find the way back out again.