Books

Review of Robert L. Belknap's "Plots"

The story is told of how, during an interview at a film festival in the 1960s, someone asked the avant-garde director Jean-Luc Godard, “But you must at least admit that a film has to have a beginning, a middle and an end?” To which Godard replied, “Yes, but not necessarily in that order.”

Touché! Creative tampering with established patterns of storytelling (or with audience expectations, which is roughly the same thing) is among the basic prerogatives of artistic expression -- one to be exercised at whatever risk of ticket buyers demanding their money back. Most of the examples of such tampering that Robert L. Belknap considers in Plots (Columbia University Press) are drawn from literary works now at least a century old. That we still read them suggests their narrative innovations worked -- so well, in fact, that they may go unnoticed now, taken as given. And the measure of Belknap’s excellence as a critic is how rewarding his close attention to them proves.

The late author, a professor of Slavic languages at Columbia University, delivered the three lectures making up Plots in 2011. Belknap’s preface to the book indicates that he considered the manuscript ready for publication at the time of his death in 2014. Plots has an adamantine quality, as if decades of thought and teaching were being crystallized and enormously compressed. Yet it is difficult to read the final paragraphs as anything but the author’s promise to say a great deal more.

Whether the lectures were offered as the overture to Belknap’s magnum opus or in lieu of one, Plots shuttles between narrative theory (from Aristotle to the Russian formalists) and narrative practice (Shakespeare and Dostoevsky, primarily) at terrific speed and with a necessary minimum of jargon. Because the jargon contains an irreducible core of the argument, we might as well start (even though Belknap does not) with the Russian formalists’ contrast between fabula and siuzhet.

Each can be translated as “plot.” The more or less standard sense of fabula, at least as I learned it in ancient times, is the series of events or actions as they might be laid out on a timeline. The author tweaks this a little by defining fabula as “the relationship among the incidents in the world the characters inhabit,” especially cause-and-effect relationships. By contrast, siuzhet is how events unfold within the literary narrative or, as Belknap puts it, “the relationship among the same incidents in the world of the text.”

To frame the contrast another way, siuzhet is how the story is told, while fabula is what “really” happened. The scare quotes are necessary because the distinction applies to fiction and drama as well as, say, memoir and documentary film. “In small forms, like fairy tales,” Belknap notes, fabula and siuzhet “tend to track one another rather closely, but in larger forms, like epics or novels, they often diverge.” (Side note: A good deal of short fiction is also marked by that divergence. An example that comes to mind is “The Tell-Tale Heart” by Edgar Allan Poe, where the siuzhet of the narrator’s account of what happened and why is decidedly different from the fabula to be worked out by the police appearing at the end of the story.)

Belknap returns to Aristotle for the original effort to understand the emotional impact of a certain kind of siuzhet: the ancient tragedies. An effective drama, by the philosopher’s lights, depicted the events of a single day, in a single place, through a sequence of actions so well integrated that no element could be omitted without the whole narrative coming apart. “This discipline in handling the causal relationship between incidents,” says Belknap, “produces the sense of inevitability that characterizes the strongest tragedies.” The taut siuzhet chronicling a straightforward fabula reconciled audiences to the workings of destiny.

Turning Aristotle’s analysis into a rule book, as happened in later centuries, was like forcing playwrights to wear too-small shoes. The fashion could not last. In the second lecture, Belknap turns to Shakespeare, who found another way to work:

“He sacrificed the causal tightness that had served classic drama so well in order to build thematic tightness around parallel plots. Usually the parallel plots involve different social levels -- masters and servants, kings and courtiers, supernatural beings and humans -- and usually the plots are not too parallel to intersect occasionally and interact causally at some level, though never enough to satisfy Aristotle’s criterion that if any incident be removed, the whole plot of the play should cease to make sense …. Similarity in plots can be represented as the overlap between two areas, and those areas may be broken down into individual points of similarity, dissimilarity, contrast, etc. Without knowing it, a Shakespearean audience is making such analyses all the time it watches a play, and the points of overlap and contrast enter their awareness.”

It’s not clear whether Belknap means to include the modern Shakespearean audience -- possibly not, since contemporary productions tend to trim down the secondary plots, if not eliminate them. But the Bard had other devices in hand for complicating fabula-siuzhet arrangements -- including what Belknap identifies as “a little-discussed peculiarity of Shakespearean plotting, the use of lies.” In both classical and Shakespearean drama, there are crucial scenes in which a character’s identity or situation is revealed to others whose confusion or deception has been important for the plot. But whereas mistakes and lies “are about equally prevalent” in the ancient plays, Shakespeare has a clear preference: “virtually every recognition scene is generated primarily out of a lie, not an error.”

In a striking elaboration of that point, Belknap treats the lie as a kind of theatrical performance -- “a little drama, with at least the rudiments of a plot” -- that often “express[es] facts about the liar, the person lied to or the person lied about.” The lie is a manipulative play within a play in miniature. And in Hamlet, at least, the (literal) play within a play is the prince’s means of trying to force his uncle to tell the truth.

Now, such intricate developments at the level of form also involve changes in how the writer and the audience understand the world (and, presumably, themselves). The Shakespearean cosmos gets messier than that of classical drama, but loosening the chains of cause and effect does not create absolute chaos. The motives and consequences of the characters’ actions make manifest their otherwise hidden inner lives. To put it another way, mutations in siuzhet (how the story is told) reflect changes in fabula (what really happens in the world) and vice versa. Belknap suggests -- tongue perhaps not entirely in cheek -- that Shakespeare was on the verge of inventing the modern psychological novel and might have, had he lived a few more years.

By the final lecture, on Dostoevsky’s Crime and Punishment, Belknap has come home to his area of deepest professional interest. (He wrote two well-regarded monographs on The Brothers Karamazov.) Moving beyond his analysis of parallel plots in Shakespeare, he goes deep into the webs of allusion and cross-referencing among Russian authors of the 19th century to make the case that Crime and Punishment contains a much more deliberate narrative architecture than it is credited with having. (Henry James’s characterization of Russian novels as “fluid puddings” notwithstanding.)

He even makes a bid for the novel’s epilogue as being aesthetically and thematically integral to the book as a whole. Other readers may find that argument plausible. I’ll just say that Plots reveals that with Belknap’s death, we lost a critic and literary historian of great power and considerable ingenuity.

Essay on 18th-century note taking

Matthew Daniel Eddy’s fascinating paper “The Interactive Notebook: How Students Learned to Keep Notes During the Scottish Enlightenment” is bound to elicit a certain amount of nostalgia in some readers. (The author is a professor of philosophy at Durham University; the paper, forthcoming in the journal Book History, is available for download from his Academia page.)

Interest in the everyday, taken-for-granted aspects of scholarship (the nuts and bolts of the life of the mind) has grown among cultural historians over the past couple of decades. At the same time, and perhaps not so coincidentally, many of those routines have been in flux, with card catalogs and bound serials disappearing from university libraries and scholarship itself seeming to drift ever closer to a condition of paperlessness. The past few years have seen a good deal of work on the history of the notebook, in all its many forms. I think Eddy’s contribution to this subspecialty may prove a breakthrough work, as Anthony Grafton’s The Footnote: A Curious History (1997) and H. J. Jackson’s Marginalia: Readers Writing in Books (2001) were in the early days of metaerudition.

“Lecture notes,” Eddy writes, “as well as other forms of writing such as letters, commonplace books and diaries, were part of a larger early modern manuscript world which treated inscription as an active force that shaped the mind.” It’s the focus on note taking itself -- understood as an activity bound up with various cultural imperatives -- that distinguishes notebook studies (pardon the expression) from the research of biographers and intellectual historians who use notebooks as documents.

Edinburgh in the late 18th century was buzzing with considerable philosophical and scientific activity, but the sound in the lecture notes Eddy describes came mainly from student noses being held to the grindstone. For notebook keeping was central to the pedagogical experience -- a labor-intensive and somewhat costly activity, deeply embedded in the whole social system of academe. Presumably the less impressive specimens became kindling, but the lecture notebooks Eddy describes were the concrete embodiment of intellectual discipline and craftsmanship -- multivolume works worthy of shelf space in the university library or handed down to heirs. Or, often enough, sold, whether to less diligent students or to the very professors who had given the lectures.

The process of notebook keeping, as Eddy reconstructs it, ran something like this: before a course began, the student would purchase a syllabus and a supply of writing materials -- including “quires” of loose papers or “paper books” (which look pocket-size in a photo) and a somewhat pricier “note book” proper, bound in leather.

The syllabus included a listing of topics covered in each lecture. Eddy writes that “most professors worked very hard to provide lecture headings that were designed to help students take notes in an organized fashion” as they tried to keep up with “the rush of the learning process as it occurred in the classroom.” Pen or pencil in hand, the student filled up his quires or paper book with as much of the lecture material as he could grasp and condense, however roughly. The pace made it difficult to do more than sketch the occasional diagram, and Eddy notes that “many students struggled to even write basic epitomisations of what they had heard.”

The shared challenge fostered the student practice of literally comparing notes -- and in any case, even the most nimble student was far from through when the lecture was done. Then it was necessary to “fill out” the rough notes, drawing on memory of what the professor said, the headings in the syllabus and the course readings -- a time-consuming effort that could run late into the night. “Extending my notes taken at the Chemical and Anatomical lectures,” one student wrote in his diary, “employs my whole time and prevents my doing any thing else. Tired, uneasy & low-spirited.”

As his freshman year ended, another wrote, “My late hours revising my notes taken at the lectures wore on my constitution, and I longed for the approach of May and the end of the lectures.”

Nor were revision and elaboration the end of it. From the drafts, written on cheap paper, students copied a more legible and carefully edited text into their leather notebooks, with title pages in imitation of those found in printed books. The truly devoted student would prepare an index. “While many of them complained about the time this activity required,” Eddy writes, “I have found no one who questioned the cognitive efficacy that their teachers attached to the act of copying.”

Making a lecture notebook was the opposite of multitasking. It meant doing the same task repeatedly, with deeper attention and commitment at each stage. Eddy surmises that medical students who prepared especially well-crafted lecture notebooks probably attended the same course a number of times, adding to and improving the record over a period of years.

At the same time, this single-minded effort exercised a number of capacities. Students developed “various reading, writing and drawing skills that were woven together into note-taking routines … that were in turn infused with a sense of purpose, a sense that the acts of note taking and notebook making were just as important as the material notebook that they produced.”

You can fill an immaterial notebook with a greater variety of content (yay, Evernote!), but I’m not sure that counts as either an advantage or an improvement.

New book urges colleges to exercise not-so-common sense when optimizing undergraduate experience

New book urges colleges to exercise not-so-common sense when it comes to optimizing the undergraduate experience and otherwise striving toward institutional excellence.

Review of Sara L. Crosby, "Poisonous Muse: The Female Poisoner and the Framing of Popular Authorship in Jacksonian America"

The mythological creature called the lamia is something like a hybrid of mermaid and vampire: a beautiful woman from the waist up, atop a serpent’s body, driven by an unwholesome appetite. The indispensable Brewer’s Dictionary of Phrase and Fable elaborates: “A female phantom, whose name was used by the Greeks and Romans as a bugbear to children. She was a Libyan queen beloved by Jupiter, but robbed of her offspring by the jealous Juno; and in consequence she vowed vengeance against all children, whom she delighted to entice and murder.”

Somewhere along the way, the Libyan queen’s proper name turned into the generic term for a whole subspecies of carnivorous nymph. Nor was the menu limited to children. In some tellings, the lamia could conceal her snaky side long enough to lure unwary human males to their deaths. (A femme fatale, if ever there was one.) If the lamia outlived most of the other gods and monsters of antiquity in the Western cultural imagination, I suspect it is in part because of the coincidence that she embodies two aspects of Eden: the beguiling female and the deceiving reptile, merged, literally, into one.

That this overtly misogynistic image might ever have played a part in the political culture of the United States seems improbable -- a little less so in this election year, perhaps, though it remains difficult to picture. And it’s certainly true that the lamia underwent considerable mutation in crossing the Atlantic and finding a place in American literature and party politics. Sara L. Crosby’s Poisonous Muse: The Female Poisoner and the Framing of Popular Authorship in Jacksonian America (University of Iowa Press) follows the lamia’s transformation from the monster known to a classically educated elite to the sympathetic, vulnerable and all-too-human character accepted by the new mass public of early 19th-century America.

The author, an associate professor of English at Ohio State University at Marion, follows the lamia’s trail from antiquity (in Roman literature “she appeared as a dirty hermaphroditic witch who raped young men”) through the poetry of John Keats and on to such early American page-turners as The Female Land Pirate; or Awful, Mysterious, and Horrible Disclosures of Amanda Bannorris, Wife and Accomplice of Richard Bannorris, a Leader in That Terrible Band of Robbers and Murderers, Known Far and Wide as the Murrell Men. (Sample passage: “My whole nature was changed. All the dark passions of Hell seemed to have centered into one; that one into the core of my heart, and that one was revenge! REVENGE!! REVENGE!!!”) There are close readings of stories by Edgar Allan Poe and Nathaniel Hawthorne as well as of the case of Mrs. Hannah Kinney, alleged poisoner of husbands, acquitted after a trial that riveted the country’s newspaper readers.

From this array Crosby builds an argument in layers that may be synopsized roughly along these lines:

A standard version of the lamia story is presented by the Athenian author Philostratus in The Life of Apollonius of Tyana. A young philosopher named Menippus falls under the charms of “a foreign woman, who was good-looking and extremely dainty,” and to all appearances very wealthy as well. He prepares to marry her. Unfortunately, the older and wiser philosopher Apollonius intervenes in time to set the young man straight: “You are a fine youth and are hunted by fine women, but in this case you are cherishing a serpent, and a serpent cherishes you.” Menippus resists this advice, but Apollonius has a verbal showdown with the foreign lady and forces her to admit that she is a lamia, and places her under his control.

Menippus thus receives instruction on the difference between appearance and reality -- and in time to avoid being eaten. The situation can also be read as a kind of political fable: a wise authority figure intervenes to prevent a naïve young person from succumbing to the deceptive, seductive and destructive powers of a woman. For the figure of the lamia is congruent with a whole tradition of misogynistic attitudes, as expressed by the medieval theologian Albertus Magnus: “What [a woman] cannot get, she seeks to obtain through lying and diabolical deceptions. And so, to put it briefly, one must be on one’s guard with every woman, as if she were a poisonous snake and the horned devil.” (This is only one such passage Crosby cites by an authoritative figure maintaining that authority itself is endangered unless men with power practice epistemological as well as moral vigilance.)

But with his 1820 poem “Lamia,” John Keats offers a revisionist telling of the story. To wed her human lover, Lamia sacrifices both her venom and her immortality. In Keats’s telling, the confrontation with Apollonius makes her vanish -- presumably killing her -- and her beloved immediately falls dead from grief. Having been savaged by reviewers and dismissed as a “Cockney poet” by the literary establishment, Keats recasts the story as a defense of beauty and a challenge to authority. The older man’s knowledge is faulty and obtuse, his power callous and deadly. Poe was an ardent admirer of Keats, and his critical writings are filled with expressions of contempt for the cultural gatekeepers of his day; Crosby interprets the title character of “Ligeia” (a very strange short story that Poe himself considered his best work) as “a revamped Romantic lamia” akin to Keats’s.

The continuity is much easier to see in the case of "Rappaccini's Daughter" by Nathaniel Hawthorne, in which Beatrice (the title figure) is a lamia-like menace to every living thing that crosses her path. This is through no fault of her own; suffice it to say that a man with authority has turned her into a kind of walking biological weapon. Once again, the Philostratusian version of the story has been reconfigured. The misogynist vision of the lamia as a force for deception and destruction is abandoned. Her story becomes a fable of oppression, corruption, the illegitimacy of established authority.

These literary reimaginings took shape against a political backdrop that added another layer of significance to the transformation. In the first half century of the United States, citizens “practiced a ‘politics of assent,’ in which a relatively small population of mostly rural voters bowed to the leadership of local elites,” Crosby writes. Editorials and sermons issued Apollonius-like warnings about the need to subdue desire and cultivate virtue. One widely circulated and much-reprinted story told of a daughter who rapidly went from sassing her parents to poisoning them. Clearly the republic’s young females in particular were on the slipperiest of slopes.

“But by the time Andrew Jackson won the presidential election of 1828,” Crosby continues, “the nation was transitioning to a far more raucous and partisan ‘mass democracy,’ characterized by a ‘politics of affiliation’ in which larger populations of newly enfranchised white ‘common men’ identified with national political organizations.” Those organizations issued newspapers and magazines, to which publishers added an enormous flow of relatively cheap books, pamphlets and broadsides.

The old elites (largely associated with the Whig Party) dismissed most of this output as trash, and they may have had a point, if “revenge! REVENGE!! REVENGE!!!” is anything to go by. At the same time, Poe was arguing that, in Crosby’s words, “genius occurred in that space of free exchange between writer and reader” that could open up if Americans could shed their cultural subservience to the Old World. As for Hawthorne, he was a Democratic Party functionary who idolized Jackson, and "Rappaccini's Daughter" was first published in a Democratic Party magazine.

So the basic thematic implications of the “old” (Philostratusian) and “new” (Keatsian) lamia stories lined up fairly well with Whiggish and Jacksonian-Democratic cultural attitudes, respectively. For one side, the American people needed guidance from Apollonius the Whig to avoid the madness of excessive democracy (let the French revolution be a warning) and the lamia-like seductions of the new mass media. For the Democrat, danger came from corrupt authorities, out to manipulate the citizen into believing the worst about the innocent and moral female sex.

The political allegory took on flesh in the case of a number of women accused of murdering with poison -- an act of deception and homicide of decidedly lamia-like character. The Boston trial of Hannah Kinney -- whose third husband was found to have died from arsenic poisoning -- is both fascinating in itself and a striking corroboration of the author’s point about the lamia as a sort of template for public narrative. Early newspaper reports and gossip depicted her as a bewitching temptress of questionable morals and undoubted guilt. But as the trial continued, Democratic journalists described her as a pleasant, somewhat matronly woman whose late husband was mentally disturbed and had been trying to get over syphilis with the help of a shady “doctor.” (The arsenic in his system might well have gotten there through quackery or suicide.)

The jury acquitted her, which cannot have surprised the junior prosecuting attorney: “Recent experience has shown how difficult, if not impossible it has been to obtain a verdict of condemnation, in cases of alleged murders by secret poison, when females have been the parties accused, and men were the persons murdered.” By contrast, a number of British women accused of poisoning during the same period were dispatched to the gallows with haste. Factors besides "the lamia complex" may account for the difference, but the contrast is striking even so.

It’s unlikely that many Americans in the 1840s had read The Life of Apollonius of Tyana, or heard of Keats, for that matter. Cultural influence need not be that direct to be effective; it can be transmitted on the lower frequencies through repurposed imagery and fragments of narrative, through relays and remixes. Perhaps that is what we’re seeing now -- with who knows what archetypes being mashed up on the political stage.

The enduring power of textbooks in students' lives (essay)

“We can give you three dollars,” the clerk at the campus bookstore told me.

“That’s all?” I asked. I had hoped to get more for the book I wanted to sell back, given what I had paid for it just months before.

“Sorry. It’s not assigned next term.” She shrugged.

“Well,” I decided, “for three dollars, it will look good on my bookshelf.”

That was the moment I kept my first college book.

At the end of every term, college students lug piles of books across campus to sell back to the bookstore (or post the books online to sell directly to next term’s students) for a fraction of what they paid for them. Selling back books is so ingrained in college culture that it seems natural, inevitable. Strapped for cash, most students accept the few dollars joyfully.

But there’s something to be said for keeping books, too. In the years since my own small beginning -- just the one book, just three dollars, just to look good on my bookshelf -- I have developed a lasting commitment to having books around.

These days, as each semester nears its end, I find myself on the losing side of a friendly argument with my own students. I tell them they should not sell their books back. They raise objections:

“The book’s not in my field.”
“I already read it, and I remember what it says.”
“I can always get another copy if I need it.”
“I can find the same text, or the same facts, online.”
“The information will be outdated soon.”
“The edition will be replaced with a new one soon.”
“I want to put this class behind me!”
“It’s too expensive to keep. I need the money.”

I do my best to respond. Then, of course, the students make their own decisions. I’m afraid I’m not very convincing. And I understand why. Most of the reasons to sell back books are quite reasonable. In certain cases, I have to concede the point.

Yes, I do agree, some books are just fine to sell back. I have little fondness in particular for stereotypically textbookish textbooks: repositories of facts, good for exam prep but not for actually reading, likely to be replaced by a new edition in a year or two, apparently written by a committee or a machine, duplicating material available online for free. By all means, students should sell those back.

And, yes, I also agree, some books are too expensive to keep. Cost is the primary reason students sell textbooks back, rent them instead of buying, pirate them or simply don’t get them at all. Each year, the cost of books increases faster than tuition. Bookstores and publishers of college textbooks -- a multibillion-dollar industry -- take advantage of a rather captive student market. For many students -- particularly students with multiple jobs, loans and responsibilities to young children or aging parents -- just getting books each semester can be a Herculean (indeed, Sisyphean) task. And not selling books back, even if they would love to keep them, can sometimes simply be out of the question. By all means, if it’s keep the books or pay the rent, students should pay the rent.

Accordingly, colleges and universities must recognize that not being able to keep books is a disadvantage faced disproportionately by students from poor and working-class backgrounds. We need to take steps to make books affordable as a matter of social justice. Some have suggested switching to ebooks as a solution. But ebooks can cost just as much as or more than used print books and offer less. (Moreover, most students prefer print.) Better ideas include increasing financial aid, reducing the overall cost of college, including books in the cost of tuition, assigning more affordable books and writing our own books and giving them to students for free, among other strategies.

Even with these caveats, I still insist students should keep books. As college teachers, we usually focus on what students do (or do not do) with books during our courses rather than after. But I think we can do more. Just as we would like students to remember what they learn in our courses and to continue learning after the courses have ended, so should we also care that they keep the very books that can help that remembering and learning along.

With the loudest voices (including bookstore advertisements) telling students to sell their books, it’s up to us to teach them otherwise. We can assign books worth keeping. We can help students connect with the books for themselves. We can talk to students about keeping books, telling them something like this:

Keep your books. Not every single one, necessarily, but keep many. Keep most, if possible. Do not let a book go without deliberation. Begrudge the ones truly not worth keeping. Grieve the ones you truly cannot afford. Keep books from your field and from other fields as well. Be well-rounded in your keeping.

Yes, appreciate what the internet can offer (through sites like Project Gutenberg), but also appreciate what books can offer. Yes, some books contain nothing but information with a short shelf life, but keep the books that are not of this sort. Keep books with ideas, argument, voice -- books in which writers say something to readers. Keep books you know you will use again and books you think you won’t, just in case.

Start small, if it helps: keep one book you otherwise would have gotten rid of. Next time, keep two. Keep keeping books until you’ve built a library. Why? There’s value in having books and being the sort of person who has them. This value often outweighs the cost. Sometimes books are even worth a little sacrifice.

Finally, while asserting there’s value in having books, we teachers can also explain just what that value is. We might communicate to students the following points:

Having books around can make a difference in students’ lives. Analyzing decades of data from dozens of countries around the world, sociologists found that the number of books in a home correlates strongly with academic accomplishment for children in the family. Specifically, the more books around, the farther in education the children go. That holds true across time, culture and socioeconomic status. The connection between books and academic accomplishment is so strong, the researchers comment, that there almost seems to be “an intrinsic advantage in growing up around books.”

Of course, merely having books around is “not enough,” they add. One does not imagine books that are just sitting there unread, unnoticed and ignored doing much good. But there is a high “correlation between owning books and reading.” Books offer “skills and knowledge.” Having books around demonstrates “a commitment to investing in knowledge.” Having books around indicates that people in the house “enjoy and value scholarly culture, that they find ideas congenial, reading agreeable, complex and intellectually demanding work attractive.” In a home that has books, it is likely “conversations between parents and their children will include references to books and imaginative ideas growing out of them.”

Students who are (or hope to become) parents should certainly keep books for the sake of the children. But if children benefit from books, no doubt adults do as well. It’s not that books are magical (at least, not in the strictest sense of that word). It’s that deciding to have books and to be the sort of person who has books can change a person’s life and the lives of those closest to them.

Students might want to read certain books in the future. Sometimes students feel finished with a subject once they complete the final exam. Maybe they are, maybe they aren’t. They do not know what they will want to read or reference in 10, 20, 30 years. But if they have built up a library over that time, it will be all the easier for them to grab the right book when they want it.

Students might want to lend books to someone someday. It is easy for students to ask, “Will I use this book again?” But building a library allows students to be a resource to others. One of my fellow professors calls it a “joy” to have the right book on hand to give to someone. He compares it to the proverbial “word fitly spoken.”

Books can serve as physical reminders of what students have read. Reading doesn’t end when one puts a book down for the last time. Reading ends when one thinks about a book for the last time. When students read enough, they will likely forget not just what they read in certain books but even that they read certain books. “Out of sight, out of mind” applies here.

But so does the opposite. Books as physical objects sitting in plain sight on a bookshelf, glanced at regularly and browsed through from time to time, can remind students of what they have read, keeping that reading alive, active in their minds. (For this to work, of course, books can’t stay boxed up in storage.)

Books can serve as physical reminders of what students have not read. As Umberto Eco and Nassim Nicholas Taleb know, unread books remind people of what they do not know. Some unread books eventually get read. Others don’t. In that way, sitting on bookshelves, unread books can remind students to be both curious and humble.

Books shape the meaning of a place. According to place theory, places are not mere locations; they are laden with meaning. The physical environment of a place shapes its meaning (including walls, doors, furniture, the lack thereof, etc.). What happens in a place also shapes its meaning. So do names, memories, objects and so on. A grass field marked by the lines and plates of a baseball diamond means something different than a grass field marked with tombstones and flowers. The apartment wall lined with books means something different than the apartment wall lined with family photos, band posters, sports memorabilia, works of art, bottles of wine or nothing. Having books around says, “This is a place where thinking and learning are valued.”

Books shape students’ identities. Of course, people are more than their books, degrees, careers, relationships or experiences, more than their thoughts, feelings, even bodies. And yet, these all shape how one lives in the world, the kind of person one appears to be, one’s identity. Having books around says, “I am the sort of person who values thinking and learning.”

Keeping books allows students to return to them over the years. The most meaningful connections people can have with books play out over a lifetime. The weeks or months during a course count as an introduction. That’s enough for some books. Others offer more. Students can return to a book after 10 or 20 years, reread the notes they wrote in the margins the last time they read it, observe how their thinking has changed, see what new layers of meaning they can find in the text at different times in life.

Books are a tangible investment in lifelong learning. College students’ finances vary vastly. It’s not my place to tell students whether they can or cannot afford books. At the same time, I know many students already sacrifice a lot to attend college, as an investment in something that matters to them. All I can add to that is that books are a good investment, too, a real commitment to continue learning long after graduating.

Distinguished scholar bell hooks testifies to this final point. Growing up poor in a patriarchal, segregated town, she learned the value of books from her mother, who had never graduated high school. “Against my father’s wishes,” hooks recalls, “she was willing to spend money on books, to let me know the pride of book ownership and the joy of possessing the gift that keeps on giving -- the book that one can read over and over and over.” Reading books, she continues, “empowered me to journey to places with the mind and imagination … expanded my consciousness … made the impossible possible.”

At the end of each semester, when the line at the bookstore to sell back books is at its longest, one of my dear friends and fellow professors walks by crying out, “Traitors! Traitors!” His joke -- and, of course, he does this playfully -- contains a historical pun. The Latin root of the word traitor, traditor, was the name given to those early Christians who, under persecution, handed over their sacred texts to be burned by the Roman authorities; the underlying verb, tradere, literally means “to hand over.” To hand over one’s books is a betrayal of our common purpose -- although if it’s that or die (or miss the rent), one will surely be forgiven.

We hope students leave college with memories, friends, knowledge, skills and a diploma, and we do well when we remind students to obtain them. We need to add a library to the list. When students sell back their books, they sell back part of their education. I care much more about what books students keep, and what notes they wrote in them, than what courses they passed or what grades they earned. Students’ bookshelves say much more than their transcripts.

Paul T. Corrigan is associate professor of English at Southeastern University.

Essay on Barbara Ehrenreich's 'Living With a Wild God'

Examples of atheist spiritual autobiography are not plentiful, although the idea is not as self-contradictory as it perhaps sounds. A quest story that ends without the grail being located or the ring destroyed may not satisfy most audiences, but it's a quest story even so.

The one example that comes to mind is Twelve Years in a Monastery (1897) by Joseph McCabe, an English Franciscan who spent much of his clerical career grappling with doubts that eventually won out. Twelve Years is framed mainly as a critique and exposé of clerical life, but its interest as a memoir comes in part from McCabe’s struggle to accept the moral and intellectual demands imposed by his growing skepticism. For all of its defects, monasticism offered a career in which McCabe’s talents were recognized and even, within ascetic limits, rewarded. Leaving it meant answering the call of a new vocation: He went on to write on an encyclopedic array of scientific, cultural and historical topics.

McCabe also became the translator and primary English spokesman for Ernst Haeckel, the German evolutionary theorist and advocate of pantheism, which seems to have squared easily enough with the lapsed monk’s atheism. (There may be more than a semantic difference between thinking of God and the universe as identical and believing there is no God, just universe. But if so, it is largely in the eye of the beholder.)

Barbara Ehrenreich’s background could not be more different from Joseph McCabe’s. In Living With a Wild God: A Nonbeliever’s Search for the Truth About Everything (Hachette/Twelve) she describes her working-class family as consisting of atheists, rationalists, and skeptics for at least a couple of generations back. “God is an extension of human personality,” she wrote in her journal as an adolescent, “brought into the world and enslaved as man’s glorifier.” McCabe would have had to do penance for such a thought; in Ehrenreich’s case, it was just dinner-table wisdom -- expressed with precocious verbal finesse, later honed into a sharp instrument of social criticism put to work in Nickel and Dimed, among other books.

Her memoir appeared two years ago, though I’ve only just read it, making this column more rumination than review. The usual blurb-phrases apply: it’s brilliantly written, thought-provoking, and often very funny, taking aphoristic turns that crystallize complex feelings into the fewest but most apt words. For example: “[I]f you’re not prepared to die when you’re almost 60, then I would say you’ve been falling down on your philosophical responsibilities as a grown-up human being.” Or: “As a child I had learned many things from my mother, like how to sew a buttonhole and scrub a grimy pot, but mostly I had learned that love and its expressions are entirely optional, even between a mother and child.”   

So, a recommended read. (More in the reviewerly vein is to be found here and here.) Additional plaudits for Living With a Wild God won’t count for much at this late date, while its literary ancestry might still be worth a thought. For it seems entirely possible, even likely, that Ehrenreich’s parents and grandparents in Butte, Montana, would have read McCabe -- “the trained athlete of disbelief,” as H.G. Wells called him, in recognition of McCabe’s countless books and pamphlets on science, progress and the benefits of godlessness. Many of them appeared in newsprint editions circulating widely in the United States during the first half of the 20th century, with one publisher reckoning he’d sold over 2.3 million booklets by McCabe between the 1920s and the 1940s.

The inner life Ehrenreich depicts herself leading as a teenager certainly resembles the world of rationality and order that McCabe evokes, and that her family took as a given: “[E]very visible, palpable object, every rock or grain of sand, is a clue to the larger mystery of how the universe is organized and put together -- a mystery that it was our job, as thinking beings, to solve.” Ehrenreich took up the challenge as an adolescent with particular rigor. Faced with the hard questions that death raises about the value and point of life, she began taking notes in the expectation of working out the answer: “I think it is best to start out with as few as possible things which you hold to be unquestionably true and start from there.”

The problem, as Descartes discovered and Ehrenreich did soon, was that “the unquestionably true” is a vanishingly small thing to determine. You are left with “I exist” and no path to any more inclusive confidence than that. Descartes eventually posits the existence of God, but arguably bends the rules in doing so. Ehrenreich does not and lands in the quicksand of solipsism.

The mature Ehrenreich can see how her younger self’s philosophical conflicts took shape in the midst of more mundane family problems involving alcoholism, career frustration and each parent’s set of inescapable disappointments. (After all, solipsism means never having to say you’re sorry.) But she also recognizes that the quandaries of her teenage prototype weren’t just symptoms: “Somehow, despite all the peculiarities of my gender, age, class, and family background, I had tapped into the centuries-old mainstream of Western philosophical inquiry, of old men asking over and over, one way or another, what’s really going on here?”

In pursuing answers that never quite hold together, she undergoes what sounds very much like the sort of crisis described by the saints and mystics of various traditions. First, there were repeated moments of being overwhelmed by the sheer strangeness and “there”-ness of the world itself. Then, in May 1959, a few months before leaving for college, she underwent a shattering and effectively inexpressible experience that left her earlier musings in ashes. No consciousness-altering chemicals were ingested beforehand; given the circumstances, it is easy to appreciate why the author spent the 1960s avoiding them:    

“[T]he world flamed into life,” she writes. “There were no visions, no prophetic voices or visits by totemic animals, just this blazing everywhere. Something poured into me and I poured out into it. This was not the passive beatific merger with ‘the All,’ as promised by the Eastern mystics. It was a furious encounter with a living substance that was coming at me through all things at once, and one reason for the terrible wordlessness of the experience is that you cannot observe fire really closely without becoming part of it.”

This kind of thing could not be discussed without the risk of involuntary commitment, and Ehrenreich herself refers to a medical hypothesis suggesting that ecstatic states may result when “large numbers of neurons start firing in synchrony, until key parts of the brain are swept up in a single pattern of activity, an unstoppable cascade of electrical events, beginning at the cellular level and growing to encompass the entire terrain that we experience as ‘consciousness.’”

Ehrenreich did not take McCabe’s course in reverse -- going from atheist into the waiting arms of an established faith. For that matter, she remains more or less an agnostic, at least willing to consider the possible merits of a polytheistic cosmos. "My adolescent solipsism is incidental compared to the collective solipsism our species has embraced for the last few centuries in the name of modernity and rationality," she writes, "a worldview in which there exists no consciousness or agency other than our own, where nonhuman animals are dumb mechanisms, driven by instinct, where all other deities or spirits have been eliminated in favor of the unapproachable God...." Whether a nonreligious mysticism can go beyond "modernity and rationality" without turning anti-modern and irrationalist is something we'll take up in the column on a future occasion. 

University of Florida, Elsevier explore interoperability in the publishing space

U of Florida connects its institutional repository to Elsevier's ScienceDirect platform to try to increase the visibility of the university's intellectual work.

Review of Terry Eagleton's "Culture"

If ideas are tools -- “equipment for living,” so to speak -- we might well imagine culture as a heavily patched-up conceptual backpack that has been around the world a few times. It has been roughly handled along the way.

The stitches strain from the sheer quantity and variety of stuff crammed into it over the years: global culture, national culture, high culture, popular culture, classical and print and digital cultures, sub- and countercultures, along with cultures of violence, of affluence, of entitlement, of critical discourse …. It’s all in there, and much else besides. How it all fits -- what the common denominator might be -- is anyone’s guess. We could always draw on the useful clarifying distinction between (1) culture as a category of more or less aesthetic artifacts, perhaps especially those that end up in museums and libraries, and (2) culture as the shared elements of a way of life.

The difference is, in principle, one of kind, not of quality, although assumptions about value assert themselves anyway. The first variety is sometimes called “the Matthew Arnold idea of culture,” after that Victorian worthy’s reference, in his book Culture and Anarchy, to “the best which has been thought and said.” Presumably music and painting also count, but Arnold’s emphasis on verbal expression is no accident: culture in his use of the term implies literacy.

By contrast “culture in the anthropological sense” -- as the second category is often called -- subsumes a good deal that can be found in societies without writing: beliefs about the nature of the world, ways of dressing, gender roles, assumptions about what may be eaten and what must be avoided, how emotions are expressed (or not expressed) and so on. Culture understood as a way of life includes rules and ideas that are highly complex though not necessarily transmitted through formal education. You absorb culture by osmosis, often through being born into it, and much of it goes without saying. (This raises the question of whether animals such as primates or dolphins may be said to have cultures. If not, why not? But that means digging through a whole other backpack.)

The dichotomy isn’t airtight, by any means, but it has served in recent years as a convenient pedagogical starting point: a way to get students (among others) to think about the strange ubiquity and ambiguity of culture as a label we stick on almost everything from the Code of Hammurabi to PlayStation 4, while also using it to explain quite a bit. Two people with a common background will conclude a discussion of the puzzling beliefs or behavior of a third party by agreeing, “That’s just part of their culture.” This seems more of a shrug than an explanation, really, but it implies that there isn’t much more to say.

One way to think of Terry Eagleton’s new book, Culture (Yale University Press), is as a broad catalog of the stuff that comes out when you begin unpacking the concept in its title -- arranging the contents along a spectrum rather than sorting them into two piles. In doing so, Eagleton, a distinguished professor of English literature at the University of Lancaster, follows closely the line of thought opened by the novelist and critic Raymond Williams, who coined the expression “culture as a whole way of life.” Williams probably derived the concept in turn, not from the anthropologists, but from T. S. Eliot. In distinguishing “culture as ordinary” (another Williams phrase) from culture as the work that artists, writers, etc. produce, the entire point was to link them: to provoke interest in how life and art communicated, so to speak.

For Williams, the operative word in “culture as a whole way of life” was, arguably, “whole”: something integral, connected and coherent, but also something that could be shattered or violated. Here, too, Eagleton is unmistakably Williams’s student. His assessment of how ideas about culture have taken shape over the past 200 years finds in them a pattern of responses to both industrialism (along with its spoiled heir, consumerism) and the French revolution (the definitive instance of “a whole way of life” exploding, or imploding, under its own strains). “If it is the cement of the social formation,” Eagleton writes, culture “is also its potential point of fracture.”

It may be that I am overemphasizing how closely Eagleton follows Williams. If so, it is still a necessary corrective to the way Williams has slowly turned into just another name in the Cultural Studies Hall of Fame rather than a felt moral and intellectual influence. His emphasis on culture as “a whole way of life” -- expressed with unabashed love and grief for the solidarity and community he knew when growing up in a Welsh mining community -- would sound remarkably anachronistic (if not ideologically totalizing and nostalgically uncritical) to anyone whose cultural reference points are of today’s commodified, virtual and transnational varieties.

And to that extent, Eagleton’s general survey of ideas about culture comes to a sharp point -- aimed directly at how the concept functions now in a capitalist society that, he says, “relegates whole swaths of its citizenry to the scrap heap, but is exquisitely sensitive about not offending their beliefs.”

He continues, in a vein that Williams would have appreciated: “Culturally speaking, we are all to be granted equal respect, while economically speaking the gap between the clients of food banks and the clients of merchant banks looms ever larger. The cult of inclusivity helps to mask these material differences. The right to dress, worship or make love as one wishes is revered, while the right to a decent wage is denied. Culture acknowledges no hierarchies, but the educational system is riddled with them.” This may explain why culture is looking so raggedy and overburdened as a term. Pulled too tight, stretched too thin, it covers too many things that it would be difficult to face straight on.

Review of Christina Crosby, "A Body, Undone: Living on After Great Pain (A Memoir)"

Somewhere along the way, Nietzsche’s apothegm “That which does not destroy me makes me stronger” lost all the irony and ambiguity it had in context and turned into an edifying sentiment -- a motivational catchphrase, even on the order of that poster saying, “Hang in there, baby!” with the cat clinging to a tree branch.

“Destroy” is often rendered “kill,” giving it a noirish and edgy sound. Either way, the phrase is uplifting if and only if understood figuratively, as a statement about mental resilience. For when taken literally, it is barely even half true, as a moment’s reflection reveals. A life-threatening virus can make us stronger -- i.e., immune to it in the future -- but a bullet to the brain never will. That truth would not have been lost on Nietzsche, who understood philosophy as a mode of psychology and both as rooted in physiology.

He expected the reader not just to absorb a thought but to test it, to fill in its outlines and pursue its implications -- including, I think, a contradictory thought: Whatever does not kill me might very well leave me wishing it had.

While riding her newly repaired bicycle early in the fall semester of 2003 -- pumping the pedals hard, with the strong legs of someone just entering her 50s and determined not to feel it -- Christina Crosby, a professor of English and feminist, gender and sexuality studies at Wesleyan University, got a branch caught in the front wheel. She went flying from her seat, landing on the pavement chin first, fracturing two vertebrae in her neck. The broken bone scraped her spinal cord. One indication of how fast it all happened is that reflexes to break a fall never kicked in. Her hands were not damaged at all.

“Serious neurological damage started instantly,” Crosby writes in A Body, Undone: Living On After Great Pain (NYU Press); “blood engorged the affected site, and the tissue around the lesion began to swell, causing more and more damage as the cord pressed against the broken vertebrae. I also smashed my chin into tiny pieces, tore open my lips, slashed open my nose, breaking the cartilage, and multiply fractured the orbital bone underneath my right eye.” She had been wearing wire-frame glasses and the force of the impact drove the metal into the bridge of her nose.

Crosby spent three weeks in the hospital, unconscious in intensive care for most of it, and only found out later, from her lover, Janet, that the neurosurgeons and plastic surgeons “debated who should go first.” The plastic surgeons won. It sounds as if they had proportionately the more hopeful and effective job to do -- piecing together her chin from fragments, reconstructing her mouth, removing the eyeglass frames from her flesh and leaving only a half-moon scar.

The neurological damage was much more extensive and included both paralysis and a loss of sensation from the neck down. In time, Crosby regained limited use of her hands and arms and could begin to overcome the extreme (and dangerous) atrophy that set in following the accident. She was able to return to teaching part time at Wesleyan in 2005.

The author refers to herself dictating the memoir, but it feels very much like a piece of writing -- that is, like something composed in large part through revision, through grappling with the enormous problem of communicating sequences of experience and thought that few readers will have shared. The accident occurred relatively late in her life and without warning; the contrast between her existence before and after the catastrophic event is made even starker by the fact that she cannot remember it happening. “My sense of a coherent self,” she writes, “has been so deeply affronted” that the book in large measure serves as a way to try to put the fragments back together again without minimizing the depth of the chasm she has crossed.

“You become who you are,” Crosby writes, “over the course of a life that unfolds as an ongoing interaction with objects and others, from the infant you once were, whose bodily cartography slowly emerged as you were handled by caregivers whose speech washed over you, to the grown-up you are today, drawn beyond reason to one person rather than another.”

On that last point she has been extraordinarily fortunate in whom she found herself drawn to: the bond she shares with Janet seems like a rope across the abyss, or more like a steel cable, perhaps. (I call her by her first name simply because the author does. The view from Janet R. Jakobsen’s side of things may be read in a thoughtful essay from 2007.) At the same time, A Body, Undone is anything but sentimental about the possibilities of growth and healing. As doctors lowered the dosage of Crosby’s painkillers, new forces imposed themselves on her daily life:

“I feel an unassuageable loneliness, because I will never be able to adequately describe the pain I suffer, nor can anyone accompany me into the realm of pain …. Pain is so singular that it evades direct description, so isolating because in your body alone. Crying, and screaming, and raging against pain are the signs of language undone. … I have no exact account of how pain changes my interaction with my students and my colleagues, but I know there are times when I don’t feel fully present. It’s not that the pain is so bad that it demands all my attention, but rather that it’s so chronic as to act like a kind of screen.”

No pseudo-Nietzschean bromides to be had here. There is also the difficult new relationship with one’s bowels when they cease to be under any control by the brain -- the discovery of a whole terra incognita beyond ordinary feelings of awkwardness and embarrassment. Crosby discusses her reality with a candor that must be experienced to be believed. And the reader is left to face the truth that one’s embodiment (and the world that goes with it) can change utterly and forever, in a heartbeat.

Review of Benedict Anderson, "Life Beyond Boundaries: A Memoir"

The folklore of Indonesia and Thailand tells of a frog who is born under half of a coconut-shell bowl and lives out his life there. In time, he draws the only sensible conclusion: the inside of the shell is the whole universe.

“The moral judgment in the image,” writes Benedict Anderson in Life Beyond Boundaries: A Memoir (Verso), “is that the frog is narrow-minded, provincial, stay-at-home, and self-satisfied for no good reason. For my part, I stayed nowhere long enough to settle down in one place, unlike the proverbial frog.”

Anderson, a professor emeritus of international studies, government and Asia studies at Cornell University, wrote major studies of the history and culture of Southeast Asia. A certain degree of cosmopolitanism went with the fieldwork. But the boundaries within a society can be patrolled just as insistently as its geographical borders -- and in the case of academic specialties, the guards inspecting passports tend to be quite unapologetically suspicious.

In that regard, Anderson was an even more remarkable citizen of the world, for his death late last year has been felt as a loss in several areas of the humanities as well as at least a couple of the social sciences. Nearly all of this reflects what someone writing in a scholarly journal once dubbed “Benedict Anderson’s pregnant phrase” -- i.e., the main title of his 1983 work Imagined Communities: Reflections on the Origin and Spread of Nationalism, which treated the mass production of books and periodicals in vernacular languages (what he called “print capitalism”) as a catalytic factor in creating a shared sense of identity and, with it, the desire for national sovereignty.

By the 1990s, people were pursuing tangents from Anderson’s argument with ever more tenuous connection to nationalism -- and still less to the specific emphasis on print capitalism. Any group formed and energized by some form of mass communication might be treated as an imagined community. Here one might do a search for “Benedict Anderson” and “World of Warcraft” to see why the author came to think of his best-known title as “a pair of words from which the vampires of banality have by now sucked almost all the blood.” Even so, Imagined Communities has shown remarkable longevity, and its landmark status is clearly international: it had been translated into more than 30 languages as of 2009, when it appeared in a Thai edition.

The reader of Life Beyond Boundaries soon understands why Anderson eventually developed mixed feelings about his “pregnant phrase” and its spawn. His sense of scholarship, and of life itself, was that it ought to be a mode of open-ended exploration, of using what you’ve learned to figure out what you could learn. Establishing a widely known line of thought must have become frustrating once it’s assumed to represent the only direction in which you can move. Professional interest is not the only kind of interest; what it recognizes as knowledge is no measure of the world outside the shell.

Anderson wrote the memoir by request: a Japanese colleague asked for it as a resource to show students something of the conduct of scholarship abroad and to challenge the “needlessly timid” ethos fostered by Japanese professors’ “patriarchal attitude.” Long retired -- and evidently reassured by the thought that few of his American colleagues would ever see the book -- Anderson was wry and spot-on in recounting the unfamiliar and not always agreeable experience of American academic life as he found it after emigrating to the United States from England as a graduate student in the late 1950s. For one thing, his professors looked askance at his papers, where he might indulge in a sardonic remark if so inspired, or pursue a digressive point in his footnotes.

“In a friendly way,” he writes, “my teachers warned me to stop writing like this …. It was really hard for me to accept this advice, as in previous schools I had always been told that, in writing, ‘dullness’ was the thing to be avoided at all cost.” He also underscores the paradox that the pragmatic American disinterest in “grand theory” coexisted with an academic hunger for it, renewed on a seasonal basis:

“‘Theory,’ mirroring the style of late capitalism, has obsolescence built into it, in the manner of high-end commodities. In year X students had to read and more or less revere Theory Y while sharpening their teeth on passé Theory W. Not too many years later, they were told to sharpen their teeth on passé Theory Y, admire Theory Z, and forget about Theory W.”

Lest anyone assume this refers to the situation in the humanities, it’s worth clarifying that one example he gives is the “modernization theory” that once ruled the social sciences roost. And similar riding of trend waves also prevails in the choice of areas for research. The antidote, he found, came from leaving the academic coconut bowl to explore Indonesia, the Philippines and Thailand:

“I began to realize something fundamental about fieldwork: that it is useless to concentrate exclusively on one’s ‘research project.’ One has to be endlessly curious about everything, sharpen one’s eyes and ears, and take notes about anything …. The experience of strangeness makes all your senses much more sensitive than normal, and your attachment to comparison grows deeper. This is also why fieldwork is so useful when you return home. You will have developed habits of observation and comparison that encourage or force you to start noticing that your own culture is just as strange ….”

Unfortunately the author does not say how his intended Japanese public responded to Life Beyond Boundaries. A lot probably depends on how well the moments of humor and reverie translated. But in English they read wonderfully, and the book is a gem.
