Cultural studies

The Good, the Bad, and the Ugly

The last episode of the HBO series "Deadwood" ran on Sunday evening, bringing to an end one of the most unusual and absorbing experiments in historical storytelling ever attempted on the small screen. The network’s decision not to continue the program is understandable (it was very expensive to film) if by no means easy to forgive.

Set in a mining camp in the Dakota Territory during the late 1870s, "Deadwood" belongs to the sub-genre of the “revisionist Western” -- a skeptical retelling of how the frontier was settled, one grittier and less prone to melodrama than B-movie versions. Among the people finding their way into town are historical figures who have long since become part of the Western mythology: Wild Bill Hickok, Calamity Jane, the brothers Earp. Most of the other major characters can also be found in chronicles of the real-life town of Deadwood.

A few others were imagined into existence by David Milch, the show’s creator -- but not quite ex nihilo. I’m pretty sure that Alma Garrett, the genteel widow who sets up Deadwood’s bank, wandered into the show from one of Henry James’s notebooks. The refined sociopath Francis Wolcott -- the (fictional) geologist employed by the (very real) mining tycoon George Hearst -- might well have felt at home in William S. Burroughs’s transgressive Western novel The Place of Dead Roads.

And while the unctuous hotel proprietor and mayor E. B. Farnum is based on an actual person who lived in the South Dakota town, he also comes by way of Charles Dickens. E.B. is the American cousin of Uriah Heep, if ever there were one.

Such literary allusions might all exist solely in my imagination, of course. But probably not. Milch, the show’s executive producer and head writer, was a student of Robert Penn Warren and Cleanth Brooks at Yale University in the 1960s. Interviews reveal someone whose mind turns easily to questions of literary form and verbal texture.

The scripts Milch has written for television -- in the early years of "NYPD Blue," for example -- exhibit an interest in how a group of people who live and work together create an argot capable of infinite subtleties of inflection, depending on the circumstance. His years around Warren and Brooks (founding fathers of the old-fashioned New Criticism) must have drilled into Milch the idea that literary works are characterized by irony, tension, and paradox. He seems to have taken this insight to the next step -- listening for how those formal principles can shape the rhythms of ordinary conversation.  

With "Deadwood," the Milchian penchant for conveying the stylization of speech broke new ground -- thanks to HBO’s freedom from the conventional restraints of broadcast television. The characters delivered intricate arias of Victorian syntax and repetitive obscenity. It sounded like some hitherto unimaginable blend of Walter Pater and gangster rap. It was often exhilarating, if sometimes farfetched. You felt awe at the power of the actors to memorize their lines, let alone speak them. The combination lent itself to parody but it is difficult to imagine its like ever being heard on television again.

Milch’s tendency toward stylization bothered some people, who found it mannered and arch. I don’t agree, but will leave the show’s defense in more capable hands. Instead, let me use this chance to discuss another element of the language of "Deadwood" that has passed largely without comment, although it usually proves far more bracing than the familiar obscenities.

I mean the epithets. The women who work in the saloons of Deadwood are called “whores.” Nobody blinks at the word, least of all the women so addressed. The Sioux Indians are more often referred to as “dirt worshippers.” The town’s Chinese laborers live in “Chink Alley.” One of the owners of the hardware store is the entrepreneur Sol Star, better known simply as “the Jew.” (He teaches his girlfriend Trixie, a former whore, how to do bookkeeping. In moments of frustration she calls it a “Jew skill.”) A black drifter arrives in town wearing an old Civil War uniform. If he has a given name, it isn’t mentioned twice. Everyone refers to him as the Nigger General -- in part, because that is what he calls himself.

Often enough the words are used as weapons. But sometimes the insults flow so casually that the offense barely has time to register. And there are moments when they carry no more charge than a “damn” would. It is all a matter of context.

But it is a context in which racism, for example, is naked and unashamed. "Deadwood" takes this for granted as a fact about the world it is presenting -- a reality scarcely more worthy of comment than the mud in the streets.

One citizen of Deadwood in particular is prone to loud and resentment-fueled tirades about the honor that is due him as a white man. You see that most other characters find him disgusting. But that isn’t a matter of his attitudes, so much as his demeanor. After all, he is universally known as Steve the Drunk.

The language proves jarring -- for the television audience, anyway -- precisely because it is treated as ordinary. The charge of symbolic violence can be taken for granted, just like the fistfight taking place out in the thoroughfare at two in the morning. Its cumulative effect is powerful and eye-opening. (Or maybe “ear-opening,” rather.)

While reading Eric Rauchway’s new book Blessed Among Nations -- the subject of last week’s column -- I found that the ambience of "Deadwood" was almost always at the back of my mind. But only after interviewing Rauchway did it occur to me to ask if he watched the program. Not surprisingly, he did. I asked if he had any thoughts on the show, now that it was winding down.

“There's an overall story arc of the transition from wilderness to civilization,” he responded, “and the major plot lines have to do with the circumstances under which civic institutions evolve. But it's not Frederick Jackson Turner's frontier -- or if it is, it's a decidedly modified Turner.”

It might be worth mentioning here that, of all the historians of the Progressive era, Turner has probably had the most contradictory posthumous career. It’s been a while since any scholar wholeheartedly endorsed his thesis about the closing of the American frontier. But it remains a landmark -- if only the kind used by later generations for target practice -- and I doubt a non-historian can watch "Deadwood" for very long without reinventing some approximation of Turner’s notion that the national character was shaped down to its cells by the Western edge of expansion.

Anyway, as Rauchway was saying, before I so digressively interrupted....     

“There's some evidence that [the show’s characters] are safety-valve types. They're people who say, as Ellsworth does, that they might have "fucked up their lives flatter than hammered shit, but they're beholden to no human cocksucker".... But they're not, Turner-style, out there to get an opportunity to civilize themselves. Which is to say, they don't go West because only there can they get a patch of land and settle, Jeffersonian-like, into civilization.”

Rather, people finding their way to the mining town are looking for a new start -- often because the economy has destroyed their other options.

“In several conversations on 'Deadwood',” notes Rauchway, “we've been told that these people have bumped into each other in other boom towns, before those booms went bust, and now their predilections have brought them here. And we can infer that soon they'll move on again. If they're the advance agents of civilization, they're doing that work unwillingly.”

And the civilization they create reflects that restlessness. The first two of "Deadwood"’s three seasons told a story about people slowly -- almost unwittingly -- establishing a social contract. A swarm of disconnected and sometimes violent individuals created a rough semblance of order (with the emphasis on “rough”). It was not so much a matter of coming to trust one another as of learning the limited utility of constant suspicion and fear.

This past season led up to the town’s first election -- an initial step toward the eventual incorporation of the territory into the United States proper. But that bit of progress comes at a price: the destruction of the order we have watched grow over time. A new regime emerges, now under the control of a consolidated mining operation.

The final image of the series really did sum it up perfectly. It shows a man on his knees, scrubbing a pool of blood off the wooden floor.

Another character, Johnny, has just asked for some reassuring words about the event that led to the giant stain. Johnny leaves, and the man with the brush gets back to work. "Wants me to tell him something pretty," he says.

It's not a rebuke, exactly -- just a reminder that, as someone once put it, every document of civilization is also a document of barbarism.

Scott McLemee (scott.mclemee@insidehighered.com)

YouTube and the Cultural Studies Classroom

"I saw a small iridescent sphere of almost unbearable brightness. At first I thought it was spinning; then I realized that the movement was an illusion produced by the dizzying spectacles inside it."
                  --Jorge Luis Borges, "The Aleph"

On December 17, 2005, “Saturday Night Live” ran a skit by Chris Parnell and Andy Samberg called "Lazy Sunday," a rap video about going out on a "lazy Sunday" to see The Chronicles of Narnia and procuring some cupcakes with "bomb frostings" from the Magnolia Bakery in New York City. The rap touches on the logistics of getting to the theater on the Upper West Side: "Let's hit up Yahoo Maps to find the dopest route./ I prefer Mapquest!/ That's a good one too./ Google Maps is the best!/ True that! Double true!/ 68th and Broadway./ Step on it, sucka!"

Parnell and Samberg make it to the Magnolia for their cupcakes, go to a deli for more treats, and hide their junk food in a backpack for smuggling past movie security. They complain about the high movie prices at the box office ("You can call us Aaron Burr from the way we're dropping Hamiltons") and brag about participating in the pre-movie trivia quiz. Doesn't seem like much if you've never seen it, but for pure joie de vivre, and white suburban dorkiness, "Lazy Sunday" just can't be beat. What makes "Lazy Sunday" special, however, is how its original airing coincided with the birth of Internet video-sharing, enabling the two-minute clip to be viewed millions of times on YouTube, a free service that hosts videos posted by users. In fact, the popularity of the clip on YouTube was so great that NBC forced the site to remove it several months later, citing copyright infringement. The prospect of its programming being net-jacked by Internet geeks and magnified through YouTube's powerful interface was just too much for NBC.

I bring up "Lazy Sunday" to foreground my discussion of the pedagogical uses of YouTube because it sums up its spirit and helps us define the genre of video with which YouTube is most associated. Although YouTube is awash in clips from television and film, the sui generis YouTube video is the product of collaborative "lazy Sunday" moments when pals film each other or perform for the camera doing inane things like dancing, lip synching or making bottles of Diet Coke become volcanic after dropping Mentos candies in them.

Parnell and Samberg's references to Internet tools and movie trivia, as well as their parody of rap, perfectly capture a zeitgeist in which all pleasures can be recreated, reinvented and repeated ad nauseam through the magic of the Web. As Sam Anderson describes it in Slate, YouTube is "an incoherent, totally chaotic accretion of amateurism -- pure webcam footage of the collective unconscious." Whatever you're looking for (except porn) can be found in this Borgesian hall of mirrors: videos of puppies, UFO footage, ghosts on film, musical memento mori about recently deceased celebrities, movie and documentary clips, real and faux video diaries, virtuoso guitar picking performances and all kinds of amateur films. In my case, the video that sold me on YouTube was "Where the Hell is Matt Harding Dancing Now?" -- a strangely uplifting video of a guy called Matt Harding who traveled around the world and danced in front of landmarks such as Machu Picchu in Peru, Area 51 in the U.S., the head-shaped monoliths of Easter Island, and the Great Wall of China, among many others.

OK, that's all nice, but what can YouTube do for professors, apart from giving them something to look at during their lunch breaks? Inside Higher Ed has reported on the ways in which YouTube is causing consternation among academics because it is being used by students to stage moments of guerrilla theater in the classroom, record lectures without permission and ridicule their professors. Indeed, a search on YouTube for videos of professors can bring up disquieting clips of faculty behaving strangely in front of their students, like the professor who coolly walks over to a student who answers a ringing cell phone in class, politely asks for the device, and then violently smashes it on the floor before continuing on with his lecture as if nothing had happened. It could be staged (authenticity is more often than not a fiction on YouTube) but it is still disturbing.

But I would like to argue for an altogether different take on YouTube, one centered on the ways in which this medium can enrich the learning experience of college students by providing video realia to accompany their textbooks, in-class documentaries and course lectures. Although I can't speak to the applicability of YouTube to every discipline, in what follows I make a case for how the service can be harnessed by professors in the humanities and social sciences.

As a professor of Latin American literature and culture, I often teach an introductory, third-year course called Latin American Culture and Civilization in which students study history, literature and any other media that the instructor wishes to include in the course, such as music, film, comics and the visual arts. My version of the course emphasizes student engagement with foundational documents and writings that span all periods of Latin American history and that I have annotated for student use. One of the figures we study is President Hugo Chávez of Venezuela, whose outsized political persona has made him a YouTube star. Apart from having my students watch an excerpt of his "Bush as sulfurous devil" speech at the United Nations, I assigned a series of animated cartoons prepared by the Venezuelan state to educate children about the Bolivarian constitution championed by Chávez. These cartoons allow students to see the ways in which the legacy of the 19th-century Venezuelan Liberator, Simón Bolívar, remains alive today.

The textual richness of these cartoons invites students to visually experience Bolivarian nationalism in a way that cannot be otherwise recreated in the classroom. It invites them to think critically about the ways in which icons such as Bolívar are creatively utilized to instill patriotism in children. In a similar vein, a Cuban cartoon about Cuba's founding father, José Martí, depicts how a child is transformed into the future champion of independence and social justice when he witnesses the horrors of slavery (this video has now been removed from YouTube). With regard to the Mexican Revolution, one of the most important units of the class, YouTube offers some fascinating period film of the revolutionary icons Emiliano Zapata and Pancho Villa, and especially their deaths. Although I cannot say that these are visual texts that lend themselves to the kind of rich dialogue provoked by the aforementioned cartoons, they are nonetheless an engaging visual complement to readings, discussions and lectures.

Another course in which YouTube has played a part is my senior-level literature course on the Chilean Nobel Laureate Pablo Neruda. It may seem farfetched to use Internet video in a poetry class, but in this case, YouTube offers several useful media clips. I have utilized film clips in which Neruda's poetry appears (such as Patch Adams and Truly, Madly, Deeply), as well as music videos of Latin American singers who use lyrics by Neruda. More than anything that I could say in class, these videos illustrate the reach and enduring quality of Neruda's poetry in Latin American and North American culture. This said, there are a surprising number of student-produced videos about Neruda on YouTube that are cringe-worthy, the "Lazy Sunday" versions of the poet and his poetry. These are quite fascinating in and of themselves as instances in which young people use video to interpret and stage Neruda, in ways that might be set into dialogue with more literary and canonical constructions of his legacy, but I confess that I am not yet convinced of their pedagogical value.

In this regard, the case of Neruda is not so different from that of other literary figures, such as Emily Dickinson, Nathaniel Hawthorne and Robert Frost, who are also the subject of interesting home-made YouTube videos. What do we do, for example, with a Claymation film that recreates Frost's "The Road Not Taken"? I would argue that this film is interesting because it captures the banality of a certain canonical image or version of Robert Frost that is associated with self-congratulatory, folksy Hallmark Card moments.

There are all kinds of video with classroom potential on YouTube. Consider, for example, one of YouTube's greatest stars, Geriatric1927, a 79-year-old Englishman whose video diaries document his memories of World War II, as well as of other periods of English history. Then there are the Michel Foucault-Noam Chomsky debates, in which Foucault sketches out, in animated, subtitled conversation, the key arguments of seminal works such as Discipline and Punish. There's an excellent short slide show of period caricatures of Leon Trotsky, newsreels and lectures about the Spanish Civil War, rare footage of Woody Guthrie performing, Malcolm X at the University of Oxford, clips of Chicana activist Dolores Huerta discussing immigration reform and a peculiar musical montage, in reverse, about Che Guevara, beginning with images and reels of his death and ending with footage of him as a child.

Don't let me tell you what you can find; seek and ye shall receive.

YouTube is not necessary for good teaching, in the same way that wheeling a VCR into the classroom is not necessary, or bringing in PowerPoint slide shows with images, or audio recordings. YouTube simply makes more resources available to teachers than ever before, and allows for better classroom management. Rather than use up valuable time in class watching a film or video clips, such media can be assigned to students as homework in the same way that reading is assigned. However, to make it work, faculty should keep in mind that the best way to deliver this content is through a course blog. YouTube provides some simple code that bloggers can use to stream the videos on a blog, rather than having to watch them within the YouTube interface. This can be important because we may not want students to have to deal with advertisements or the obnoxious comments that many YouTube users leave on the more controversial video pages. On my free wordpress.com course blog, I can frame YouTube videos in a way that makes them look more professional and attractive (sample page here). At this point, course blogging is so easy that even the least technologically minded can learn how to use services like Blogger or WordPress to post syllabi, course notes and Internet media.
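For the curious, here is a minimal sketch of the kind of embed markup involved -- written in Python purely for illustration, since the exact snippet YouTube supplies has changed over the years (the iframe form shown is the later standard, not necessarily what the service offered at the time). The video ID, width, and height below are placeholders, not references to any real course material.

    # Hypothetical sketch: build the HTML that streams a YouTube video
    # inside a blog post, so students need not visit youtube.com itself.
    # The video ID passed in below is a placeholder.

    def youtube_embed(video_id: str, width: int = 560, height: int = 315) -> str:
        """Return iframe markup for embedding a YouTube video in a blog post."""
        return (
            f'<iframe width="{width}" height="{height}" '
            f'src="https://www.youtube.com/embed/{video_id}" '
            f'frameborder="0" allowfullscreen></iframe>'
        )

    # Paste the printed markup into a course blog entry.
    print(youtube_embed("VIDEO_ID_HERE"))

In practice one simply copies the ready-made snippet from the video's page into the blog editor; the sketch just makes visible what that snippet does.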

There are problems, however, the most glaring of which is the legality of streaming a clip that may infringe on copyright. If I am not responsible for illegally uploading a video of Malcolm X onto the web, and yet I stream it from my course blog, am I complicit in infringing on someone's copyright? Now that Google has bought YouTube, and a more aggressive purging of copyright-protected works on the service has begun, will content useful for education dwindle over time? I don't have the answers to these urgent questions yet, but even in the worst of cases, we can assume that good, educational material will be made available, legally, on YouTube and other such services in the future, either for free or for a modest fee.

For example, I am confident that soon I will be able to tell my students that, in addition to buying One Hundred Years of Solitude for a class, they will have to purchase a $5 video interview with García Márquez off of the World Wide Web and watch it at home. And, even as I write this, podcasting technologies are already in place that will allow faculty members to tell their students that most of their lectures will be available for free downloading on iTunes so that class time can be used more productively for interactive learning activities, such as group work and presentations. Unlike more static and limited media, like PowerPoint and the decorative course Web page, video and audio-sharing help professors be more creative and ambitious in the classroom.

In sum, my friends, YouTube is not just for memorializing lazy Sundays when you want to "mack on some cupcakes." It can help your students "mack" on knowledge.

Christopher Conway (info@insidehighered.com)

Christopher Conway is associate professor of modern languages and coordinator of the Spanish program at the University of Texas at Arlington, where he teaches Latin American literature and culture.

Beyond the Context of No Context

Well, so much for the instantaneous availability of information: I've only just learned about the death of George Trow, whose passing, almost two weeks ago, was noted among some of the blog entries (this one, for example) regularly channeled through my RSS feed. There is a bitter irony in this situation, and most of it is at my own expense.

Nobody was smarter than George Trow about the bad faith that comes with being "plugged in" to streams of randomized data. He once defined a TV program as "a little span of time made friendly by repetition." (Friendly, the way a con man is friendly.) That was long before most of us started spending ever more of our lives in front of another kind of screen.

Perhaps the name does not ring a bell.... George W.S. Trow, who was 63 when he died, can best be described as a minor American author (no insult intended, it's a better title than most of us will ever merit) who wrote fiction, essays, and the occasional screenplay. Two years ago, the University of Iowa published The Harvard Black Rock Forest, which first appeared in The New Yorker in 1984.

It was the kind of piece that people once had in mind (maybe with admiration and maybe not) when they thought of "a New Yorker article" -- stately in pacing, full of deep-background references, heedless of breaking-news type topicality. Iowa included the book in a series on literary nonfiction. That makes sense, but it's also been hailed by the journal Environmental History as something "every student of the history of conservation should read. Twice."

But it was another essay by Trow that really defined him as a writer to reckon with. "Within the Context of No Context" ran in The New Yorker in 1980 and was brought out the next year by Little, Brown as a book. It was reprinted by Atlantic Monthly Press in 1997 with a new introduction by Trow. He also published a kind of supplement to it, My Pilgrim’s Progress: Media Studies 1950-1998 (Vintage, 1999). I say "supplement" and not "sequel" because the two books cross-connect in all sorts of odd, nonlinear ways.

Odd and nonlinear "Within the Context of No Context" itself certainly is. It is short, consisting of a number of brief sections. They range from a single sentence to several paragraphs, and each section has a title. While brief, the text actually takes a while to read. The relationships among the parts are oblique, and some of the prose has the strange feel that you would probably get from a translation of Schopenhauer done by Gertrude Stein.

"Within the Context of No Context" is about television, among other things -- about the history of the mass media, with television as its culminating moment, but also about what TV does to the very possibility of understanding the world as having a history. It is an essay in cultural criticism. But it can just as well be called a work of prose poetry. Trow's thoughts unfold, then draw back into themselves. This is very strange to watch.

After a quarter of a century, it may be difficult to appreciate the originality and insight of Trow's essay. He seems to be making points about the media that are now familiar to almost everyone. In 1980, though, they were not so obvious. It's not that he was venturing into futurology. Nor was Trow a sociologist or historian, except in the most ad hoc way. He did not offer theories or arguments, exactly, but took notes on the texture of American life following three decades of television.

He was describing long-emerging qualities of everyday experience that had been quietly taking over the entire culture. He assumed existing tendencies would continue and deepen. It was a smart bet, but a depressing one to win.

Trow's central intuition was that TV played a decisive role in shaping "the new scale of national life" in the United States following the second World War. As the scion of a New York publishing family, Trow has various points to make about the shift of power from established WASP elites to the new professional-managerial class. (That social subtext is fleshed out with abundant and eccentric detail in My Pilgrim's Progress.) But those structural changes were occurring behind the scenes. Meanwhile, the national consciousness was changing.

Having won the war, the country was starting to come to terms with its own place in the world as an incredibly affluent society holding hitherto unimaginable military power. At the same time, we were starting to watch TV. We were starting to see the world through its eye. These two developments (a new level of power, a new kind of passivity) coincided in ways it was easy to overlook, just because the process was so ubiquitous and inescapable.

More than print or even radio ever had, television could address an audience of millions simultaneously. "It has other properties," he wrote, "but what television has to a dominant degree is a certain scale and the power to enforce it." And the medium's sense of scale was defined by two grids: "The grid of two hundred million," as Trow put it, "and the grid of intimacy."

Trow does not spell out in any detail what he means by "the middle distance" -- the regions of the culture left out of the TV "grids." But by implication, it seems to include most of what's usually called civil society: the institutions, meeting places, and forums for discussion through which people voluntarily associate.

His point isn't that the media completely avoid representing them, of course. But TV does not really encourage participation in them, either. Watching it is an atomized experience of being exposed to programs crafted to appeal to tens of millions of other people having the same experience.  

"Everything else fell into disuse," wrote Trow. "There was national life -- a shimmer of national life -- and intimate life. The distance between these two grids was very great. The distance was very frightening.... It followed that people were comfortable only with the language of intimacy."

And it has a cumulative effect. Not so much in the sense that TV destroys the mediating institutions of civil society -- you know, how there used to be bowling leagues, but now everybody is bowling alone. Rather, it's that the yearning for "mirages of pseudo-intimacy" (as Trow puts it) becomes a routine part of public life.

Off the top of my head, I do not remember 1980 well enough to recall what Trow might have had in mind, at the time. Perhaps it was the interviewing style of Barbara Walters, or Jimmy Carter admitting that he had lusted in his heart. Today we are far downstream. The "grid of two hundred million" has become the grid of three hundred million. And finesse at handling the routines of "pseudo-intimacy" now seems like a prerequisite for holding public office.

What you also find tucked away in Trow's gnomic sentences is the anticipation of countless thousands of broadcast hours in which people discuss personal problems before a vast audience of strangers. The media would create, he wrote, "space for mirages of pseudo-intimacy. It is in this space that celebrities dance. And since the dancing celebrities occupy no real space, there is room for other novel forms to take hold. Some of these are really very strange." No one had thought of "reality TV" when Trow wrote this. The idea of becoming famous by leaking videotapes of oneself having sex had not yet occurred to anybody.

What makes the essay powerful, still, is that the word "television" now tends to fade from view as you read. It serves as a synecdoche. It is a name for the whole culture.

"Television is dangerous," wrote Trow in one haunting passage, "because it operates according to an attention span that is childish but is cold. It simulates the warmth of a childish response but is cold. If it were completely successful in simulating the warmth of childish enthusiasm -- that is, if it were warm -- would that be better? It would be better only in a society that had agreed that childish warmth and spontaneity were equivalent to public virtue; that is, a society of children. What is a cold child? A sadist."
 
Over time, the media-nurtured attention span ceases to comprehend anything outside its own history. As Trow put it in the line that gives his essay its title: "The work of television is to establish false contexts and to chronicle the unraveling of existing contexts; finally, to establish the context of no context and to chronicle it."

That seems much less like a Zen koan today than it did when first published. It now often feels as if the people making decisions in the media world were deliberately using Trow's work as a guidebook for what to do next: A program like "I Love the '90s" is a literal effort "to establish the context of no context and to chronicle it." (In another line already quoted, Trow anticipated a certain now-familiar tone of nostalgic hipster posturing: "It simulates the warmth of a childish response but is cold.")

My précis here is selective. It traces one or two strands woven into a very complex pattern. The most it can do is to encourage a few more readers to read Trow himself.

"George W. S. Trow is a sort of tragic hero," as the novelist Curtis White wrote in the best commentary on him I've seen. "His essays offer us clues to how we might correct our national life. But his wisdom is likely to be lost on us, even on those who would agree with him. Like Cassandra, he can tell us things that are true and that would save us if we could understand them, but his working premise seems to be: You will not understand what I am going to say. In fact, why we won't understand is a large part of the truth Trow has to tell us."

Yes, but that's why you find yourself reading him over and over.

Scott McLemee (scott.mclemee@insidehighered.com)

Eros Unbound

Valentine’s Day seems an appropriate occasion to honor the late Gershon Legman, who is said to have coined the slogan “Make love, not war.” Odd to think that saying had a particular author, rather than being spontaneously generated by the countercultural Zeitgeist in the 1960s. But I've seen the line attributed to Legman a few times over the years; and the new Yale Book of Quotations (discussed in an earlier column) is even more specific, indicating that he first said it during a speech at Ohio University in Athens, Ohio, sometime in November 1963.

Legman, who died in 1999 at the age of 81, was the rare instance of a scholar who had less of a career than a profound calling -- one that few academic institutions in his day could have accommodated. Legman was the consummate bibliographer and taxonomist of all things erotic: a tireless collector and analyst of all forms of discourse pertaining to human sexuality, including the orally transmitted literature known as folklore. He was an associate of Alfred Kinsey during the 1940s, but broke with him over questions of statistical methodology. If it hadn’t been that, it would have been something else; by all accounts, Legman was a rather prickly character.

But it is impossible to doubt his exacting standards of scholarship after reading The Horn Book: Studies in Erotic Folklore and Bibliography (University Books, 1964) -- a selection of Legman's papers reflecting years of exploration in the “restricted” collections of research libraries. (At the Library of Congress, for example, you will sometimes find a title listed as belonging to “the Delta Collection,” which was once available to a reader only after careful vetting by the authorities. The books themselves have long since been integrated into the rest of the library’s holdings, but not-yet-updated catalog listings still occasionally reveal that a volume formerly had that alluring status: forbidden yet protected.) Legman approached erotic literature and "blue" folklore with philological rigor, treating with care songs and books that only ever circulated on the sly.  

Some of Legman's work appeared from commercial publishers and reached a nonscholarly audience. He assembled two volumes of obscene limericks, organized thematically and in variorum. The title of another project, The Rationale of the Dirty Joke, only hints at its terrible sobriety and analytic earnestness. Sure, you can skim around in it for the jokes themselves. But Legman’s approach was strictly Freudian, his ear constantly turned to the frustration, anxiety, and confusion expressed in humor.

Not all of his work was quite that grim. Any scholar publishing a book called Oragenitalism: Oral Techniques in Genital Excitation may be said to have contributed something to the sum total of human happiness. The first version, devoted exclusively to cunnilingus, appeared from a small publisher in the 1940s and can only have had very limited circulation. The commercial edition published in 1969 expanded its scope -- though Legman (who in some of his writings comes across, alas, as stridently hostile to the early gay rights movement) seemed very emphatic in insisting that his knowledge of fellatio was strictly as a recipient.

Defensiveness apart, what’s particularly striking about the book is the degree to which it really is a work of scholarship. You have to see his literature review (a critical evaluation of the available publications on the matter, whether popular, professional, or pornographic, in several languages) to believe it. Thanks to Legman’s efforts, it is possible to celebrate Valentine’s Day with a proper sense of tradition.

Legman was a pioneer of cultural studies, long before anyone thought to call it that. He served as editor for several issues of Neurotica, a great underground literary magazine published between 1948 and 1952. Most of its contributors were then unknown, outside very small circles; but they included Allen Ginsberg, Anatole Broyard, Leonard Bernstein, and an English professor from Canada named Marshall McLuhan.

As the title may suggest, Neurotica reflected the growing cultural influence of Freud. But it also went against the prevalent tendency to treat psychoanalysis as a tool for adjusting misfits to society. The journal treated American popular culture itself as profoundly deranged; and in developing this idea, Legman served as something like the house theorist.

In a series of essays adapted from his pamphlet Love and Death (1948), Legman cataloged the seemingly endless sadism and misogyny found in American movies, comic books, and pulp novels. (Although Love and Death is long out of print, a representative excerpt can be found in Jeet Heer and Kent Worcester's collection Arguing Comics: Literary Masters on a Popular Medium, published by the University Press of Mississippi in 2004.)

Legman pointed out that huge profits were to be made from depicting murder, mutilation, and sordid mayhem. But any attempt at a frank depiction of erotic desire, let alone of sex itself, was forbidden. And this was no coincidence, he concluded. A taste for violence was being “installed as a substitute outlet for forbidden sexuality” by the culture industry.

Censorship and repression were warping the American psyche at its deepest levels, Legman argued. The human needs that ought to be met by a healthy sexual life came back, in distorted form, as mass-media sadism: "the sense of individuality, the desire for importance, attention, power; the pleasure in controlling objects, the impulse toward violent activity, the urge towards fulfillment to the farthest reaches of the individual’s biological possibilities.... All these are lacking in greater or lesser degree when sex is lacking, and they must be replaced in full.”

Replaced, that is, by the noir pleasures of the trashy pop culture available in the 1940s.

Here, alas, it proves difficult to accept Legman's argument in quite the terms framing it. His complaints about censorship and hypocrisy are easy to take for granted as justified. But the artifacts that filled him with contempt and rage -- Gone With the Wind, the novels of Raymond Chandler, comic books with titles like Authentic Police Cases or Rip Kirby: Mystery of the Mangler -- are more likely to fill us with nostalgia.

It's not that his theory about their perverse subtext now seems wrong. On the contrary, it often feels as if he's on to something. But while condemning the pulp fiction or movies of his day as symptomatic of a neurotic culture, Legman puts his finger right on what makes them fascinating now -- their nervous edge, the tug of war between raw lust and Puritan rage.

In any case, a certain conclusion follows from Legman’s argument -- one that we can test against contemporary experience.

Censorship of realistic depictions of sexuality will intensify the climate of erotic repression, thereby creating an audience prone to consuming pop-culture sadomasochism. If so, per Legman, then the easing or abolition of censorship ought to yield, over time, fewer images and stories centering on violence, humiliation, and so on.

Well, we know how that experiment turned out. Erotica is now always just a few clicks away (several offers are pouring into your e-mail account as you read this sentence). And yet one of the most popular television programs in the United States is a drama whose hero is good at torture.

They may have been on to something in the pages of Neurotica, all those decades ago, but things have gotten more complicated in the meantime.

As it happens, I’ve just been reading a manuscript called “Eros Unbound: Pornography and the Internet” by Blaise Cronin, a professor of information science at Indiana University at Bloomington, and former dean of its School of Information and Library Science. His paper will appear in The Internet and American Business: An Historical Investigation, a collection edited by William Aspray and Paul Ceruzzi scheduled for publication by MIT Press in April 2008.

Contacting Cronin to ask permission to quote from his work, I asked if he had any connection with the Kinsey Institute, also in Bloomington. He doesn’t, but says he is on friendly terms with some of the researchers there. Kinsey was committed to recording and tabulating sexual activity in all its forms. Cronin admits that he cannot begin to describe all the varieties of online pornography. Then again, he doesn’t really want to try.

“I focus predominantly on the legal sex industry,” he writes in his paper, “concentrating on the output of what, for want of a better term, might be called the respectable, or at least licit, part of the pornography business. I readily acknowledge the existence of, but do not dwell upon the seamier side, unceremoniously referred to by an anonymous industry insider as the world of ‘dogs, horses, 12-year old girls, all this crazed Third-World s—.’ ”

The notion of a “respectable” pornography industry would have seemed oxymoronic when Legman published Love and Death. It’s clearly much less so at a time when half the hotel chains in the United States offer X-rated films on pay-per-view. Everyone knows that there is a huge market for online depictions of sexual behavior. But what Cronin’s study makes clear is that nobody has a clue just how big an industry it really is. Any figure you might hear cited now is, for all practical purposes, a fiction.

The truth of this seems to have dawned on Cronin following the publication, several years ago, of “E-rogenous Zones: Positioning Pornography in the Digital Marketplace,” a paper he co-authored with Elizabeth Davenport. One of the tables in their paper “estimated global sales figures for the legal sex/pornography industry,” offering a figure of around $56 billion annually. That estimate squared with information gathered from a number of trade and media organizations. But much of the raw data had originally been provided by a specific enterprise -- something called the Private Media Group, Inc., which Cronin describes as “a Barcelona-based, publicly traded adult entertainment company.”

After the paper appeared in the journal Information Society in 2001, Cronin says, he was contacted “by Private’s investor relations department wondering if I could furnish the company with growth projections and other related information for the adult entertainment industry -- I, who had sourced some of my data from their Web site.” That estimate of $56 billion per year, based on research now almost a decade old, is routinely cited as if it were authoritative and up to date.

“Many of the numbers bandied about by journalists, pundits, industry insiders and market research organizations,” he writes, “are lazily recycled, as in the case of our aforementioned table, moving effortlessly from one story and from one reporting context to the next. What seem to be original data and primary sources may actually be secondary or tertiary in character.... Some of the startling revenue estimates and growth forecasts produced over the years by reputable market research firms ... have been viewed all too often with awe rather than healthy skepticism.”

Where Legman was, so to speak, an ideologue of sex, Blaise Cronin seems more scrupulously dispassionate. His manuscript runs to some 50 pages and undertakes a very thorough review of the literature concerning online pornography. (My wife, a reference librarian whose work focuses largely on developments in digital technology and e-commerce, regards Cronin’s paper as one of the best studies of the subject around.) He doesn't treat the dissemination of pornography as either emancipatory or a sign of decadence. It's just one of the facts of life, so to speak.

His paper does contain a surprise, though. It's a commonplace now that porn is assuming an increasingly ordinary role as cultural commodity -- one generating incalculable, but certainly enormous, streams of revenue for cable companies, Internet service providers, hotel chains, and so on. But the "mainstreaming" of porn is a process that works both ways. Large sectors of the once-marginal industry are morphing into something ever more resembling corporate America.

“The sleazy strip joints, tiny sex shops, dingy backstreet video stores and other such outlets may not yet have disappeared,” writes Cronin, “but along with the Web-driven mainstreaming of pornography has come -- almost inevitably, one has to say -- full-blown corporatization and cosmeticization.... The archetypal mom and pop business is being replaced by a raft of companies with business school-trained accountants, marketing managers and investment analysts at the helm, an acceleration of a trend that began at the tail-end of the twentieth century. As the pariah industry strives to smarten itself up, the language used by some of the leading companies has become indistinguishable from that of Silicon Valley or Martha Stewart. It is a normalizing discourse designed to resonate with the industry’s largely affluent, middle class customer base.”

As an example, he quotes what sounds like a formal mission statement at one porn provider’s website: “New Frontier Media, Inc. is a technology driven content distribution company specializing in adult entertainment. Our corporate culture is built on a foundation of quality, integrity and commitment and our work environment is an extension of this…The Company offers diversity of cultures and ethnic groups. Dress is casual and holiday and summer parties are normal course. We support team and community activities.”

That’s right, they have casual Fridays down at the porn factory. Also, it sounds like, a softball team.

I doubt very much that anybody in this brave new world remembers cranky old Gershon Legman, with his index cards full of bibliographical data on Renaissance handbooks on making the beast with two backs. (Nowadays, of course, two backs might be considered conservative.) Ample opportunity now exists to watch or read about sex. Candor seems not just possible but obligatory. But that does not necessarily translate into happiness -- into satisfaction of "the urge towards fulfillment to the farthest reaches of the individual’s biological possibilities," as Legman put it.

That language is a little gray, but the meaning is more romantic than it sounds. What Legman is actually celebrating is the exchange taking place at the farthest reaches of a couple's biological possibilities: the moment when sex turns into erotic communion. And for that, broadband access is irrelevant. For that, you need to be really lucky.

Scott McLemee (scott.mclemee@insidehighered.com)

Beach Blanket Bingo

Entertainment is in the eye of the beholder. Consider the case of what are usually called “beach novels” -- bulky sagas of lust, money, and adventure, page-turning epics of escapism that are (it’s said) addictive. I’ve never been able to work up the appetite to read one, even while bored on vacation in a seafront town. Clive James characterized the dialogue of one such novelist as resembling “an argument between two not-very-bright drunks.”

Which might be fun to witness in real life, actually, depending on the subject of the dispute. But reading the transcript seems like an invitation to a bad headache.

Diversion doesn’t have to be mind-numbing, let alone painful. With the end of the semester at hand, then, a few recommendations of recent books and DVDs that are smarter than your average bar fight -- and more entertaining.

The two dozen or so contributors to When I Was a Loser: True Stories of (Barely) Surviving High School managed to wear the entire range of unfortunate hair styles available throughout the 1970s and ‘80s. This collection -- edited by John McNally, who spent last semester as a visiting writer at Columbia College Chicago -- is one of the less solemn works of “creative nonfiction” (as the term of art now has it) currently available. Published by the Free Press, it is available in both paperback and e-book formats.

Most of the mortified authors are novelists and poets, ranging in age from their early 30s through their late 40s. It’s not that their memoirs are devoted to mullets or feathering, as such. But the stories they have to tell are all about the pressure to fit in, to be cool -- failure to do so bringing various penalties, as you may recall. There, on the cusp of adulthood, one has the first opportunity to create a new self. And hair is where it tends to happen first. Sex, religion, and first-job experiences also have their place.

With the benefit of hindsight, of course, the whole effort can seem embarrassing. The essays in When I Was a Loser are all about the different grades of self-consciousness and awkwardness. A few are lushly overwritten (adolescence is a purple thing) and one or two seem more than a little fictionalized. But most have the feel of authentically remembered humiliation, now rendered bearable by time and the cultivation of talent.

Several are well-known, including Dean Bakopoulos, whose novel Please Don't Come Back from the Moon was named by The New York Times as one of the notable books of 2005, and the prominent literary blogger Maud Newton. In the spirit of full disclosure, it bears mentioning that Maud is a friend, and her essay "Confessions of a Cradle Robber" (revealing the dark shame of having once been a fourteen-year-old girl with a boyfriend who was twelve) was the first thing I read. My other favorite piece here was "How to Kill the Boy that Nobody Likes" by Will Clarke, a novelist who recalls being the most despised kid in junior high -- one nicknamed "The Will-tard" for his admittedly peculiar comportment. Clarke's rise to the status and celebrity of Student Council treasurer is a tribute to the power of a very silly 1970s paperback about the secret techniques of subliminal advertising. The author's name didn't ring a bell when I picked the book up, but it certainly will in the future.

Adolescence isn’t just for teenagers any more. "Twitch City," an absurdist sitcom that premiered on Canadian television in 1998, offers one of the funniest portraits around of someone determined to avoid the demands of adult life. It ran through 13 episodes before the show ended in 2000. The recent DVD release doesn’t provide many features. Still, it’s good to have the whole series available to those of us who weren’t part of its original cult following.

Its central character, Curtis (played by Don McKellar), is a man in his 20s who spends nearly every waking hour watching television. Among his few distractions from distraction is the effort to sublet more and more of his grungy apartment to anyone who can help him make the rent. His girlfriend Hope (played by the luminous Molly Parker) works at a variety of low-paying jobs. She can never quite figure out why she’s attracted to someone not just utterly lacking in ambition but unwilling even to leave the couch.

Part of the pleasure of "Twitch City" comes from seeing just how many stories can be generated around such a constrained, even claustrophobic premise. It is minimalist without being repetitive, and plausible, somehow, in spite of being preposterous.

When a chain of odd circumstances makes Curtis a media celebrity, he is visited by a woman (Jennifer Jason Leigh) claiming to be a graduate student in semiotics. She interviews him about his habits and outlook, and he delivers an analysis of the aesthetics of “Gilligan’s Island” that is a real tour de force -- a great moment of meta-TV. "Twitch City" is set in a neighborhood of Toronto, which occasionally made me wonder what Marshall McLuhan (who taught at U of T) would have made of it.

Another product of Canada worth a look is "Slings and Arrows," an ensemble comedy/drama that just finished its third and final season on the Sundance Channel. The first two (each consisting of six one-hour episodes) are now available on DVD.

Set at a repertory theater best known for its Shakespeare productions, "Slings and Arrows" is in some ways a show about trying to keep viable routines from turning into a rut of mediocrity. The theater’s regular audience is aging. It buys its season tickets out of force of habit, mostly. But box office sales aren’t what they could be, and it’s hard to find corporate sponsors who won’t try to meddle with how the place is run. And in any case, the troupe’s creative spark has diminished over time.

Revitalization isn’t impossible, but it takes some doing. Each season tracks the production of a different Shakespeare play (Hamlet, Macbeth, and King Lear) with a keen eye and ear for the way the artistic director and the actors work out the staging. At the same time, plenty of drama and farce takes place behind the scenes.

People who have worked in theater tell me that the situations and backstage dynamics in "Slings and Arrows" are absolutely typical of professional productions. As much as I enjoyed the first season, it was hard to believe that the second would be anything beyond a repetition -- reducing success to a formula. But those misgivings were completely off track. The third season carried things to a natural close.

Nowadays there are sessions at the Modern Language Association meeting devoted to the great German literary theorist Walter Benjamin, whose selected writings have appeared in English in four hefty volumes from Harvard University Press. But if the man himself showed up and wandered the corridors, I doubt he would survive the usual quick and dismissive nametag-check. After all, he wrote mostly for magazines and newspapers. He’d be wearing the wrong kind of nametag to be worth anybody’s time.

Whether or not Howard Hampton is actually the reincarnation of Walter Benjamin, they have the same extraterritorial position vis-a-vis academic criticism. (Hampton writes for The Village Voice, Film Comment, and The Boston Globe, among other endnote-free zones.) And now they share the same publisher, with the recent appearance of Born in Flames: Termite Dreams, Dialectical Fairy Tales, and Pop Apocalypses (Harvard University Press).

Drawn from 15 years’ worth of running commentary on film, music (mostly rock), and books, Hampton’s selected essays transcend “mere reviewing” (as it’s called) to become examples of a fully engaged critical intelligence responding to the mass-media surround. Some of the best pieces are compact but sweeping analyses of changes in sensibility, amounting to miniature works of cultural history.

One example is “Reification Blues: The Persistence of the Seventies,” which listens to how the pop soundtrack of that decade left its mark on later music despite (or maybe because of) artists’ best efforts to forget it. Another case is “Whatever You Desire: Movieland and Pornotopia” -- an analysis of how mainstream Hollywood and pornography have shaped one another over the years, whether through mimicry or rejection of one another’s examples.

The curse of a lot of pop-culture commentary is its tendency to move too quickly toward big sociocultural statements -- ignoring questions of form and texture, instead using the film, album, etc., as pretext for generalized pontifications. That’s not a problem with Born in Flames. It’s a book that helps you pay attention, even to the nuances of Elvis’s performance in "Viva Las Vegas." Perhaps especially to the nuances of Elvis’s performance in "Viva Las Vegas"....

"It's an alternate universe governed by sheer whim," writes Hampton about the King's cinematic ouevre, "untouched by any sense of the outside world." Sounds like the perfect vacation spot.

Scott McLemee (scott.mclemee@insidehighered.com)

C.L.R. James Meets Tony Soprano

Half a century before "The Sopranos" hit its stride, the Caribbean historian and theorist C.L.R. James recorded some penetrating thoughts on the gangster -- or, more precisely, the gangster film -- as symbol and proxy for the deepest tensions in American society. His insights are worth revisiting now, while saying farewell to one of the richest works of popular culture ever created.

First, a little context. In 1938, shortly before James arrived in the United States, he had published The Black Jacobins, still one of the great accounts of the Haitian slave revolt. He would later write Beyond a Boundary (1963), a sensitive cultural and social history of cricket -- an appreciation of it as both a sport and a value system. But in 1950, when he produced a long manuscript titled “Notes on American Civilization,” James was an illegal alien from Trinidad. I have in hand documents from his interrogation by FBI agents in the late 1940s, during which he was questioned in detail about his left-wing political ideas and associations. (He had been an associate of Leon Trotsky and a leader in his international movement for many years.)

In personal manner, James was, like W.E.B. DuBois, one of the eminent black Victorians -- a gentleman and a scholar, but also someone listening to what his friend Ralph Ellison called “the lower frequencies” of American life. The document James wrote in 1950 was a rough draft for a book he never finished. Four years after his death, it was published as American Civilization (Blackwell, 1993). A sui generis work of cultural and political analysis, it is the product of years of immersion in American literature and history, as well as James’s ambivalent first-hand observation of the society around him. His studies were interrupted in 1953 when he was expelled by the government. James was later readmitted during the late 1960s and taught for many years at what is now the University of the District of Columbia.

American Civilization's discussion of gangster films is part of James's larger argument about media and the arts. James focuses on the role they play in a mass society that promises democracy and equality while systematically frustrating those who take those promises too seriously. Traveling in the American South in 1939 on his way back from a meeting with Trotsky in Mexico, James had made the mistake of sitting in the wrong part of the bus. Fortunately an African-American rider explained the rules to him before things got out of hand. But that experience -- and others like it, no doubt -- left him with a keen sense of the country's contradictions.

While James's analysis of American society is deeply shaped by readings of Hegel and Marx, it also owes a great deal to Frederick Jackson Turner’s theory of “the closing of the frontier.” The world onscreen, as James interpreted it, gave the moviegoer an alternative to the everyday experience of a life “ordered and restricted at every turn, where there is no certainty of employment, far less of being able to rise by energy and ability by going West as in the old days.”

Such frustrations intensified after 1929, according to James’s analysis. The first era of gangster films coincided with the beginning of the Great Depression. “The gangster did not fall from the sky,” wrote James. “He is the persistent symbol of the national past which now has no meaning -- the past in which energy, determination, bravery were sure to get a man somewhere in the line of opportunity. Now the man on the assembly line, the farmer, know that they are there for life; and the gangster who displays all the old heroic qualities, in the only way he can display them, is the derisive symbol of the contrast between ideals and reality.”

The language and the assumptions here are obviously quite male-centered. But other passages in James’s work make clear that he understood the frustrations to cross gender lines -- especially given the increasing role of women in mass society as workers, consumers, and audience members.

“In such a society,” writes James, “the individual demands an aesthetic compensation in the contemplation of free individuals who go out into the world and settle their problems by free activity and individualistic methods. In these perpetual isolated wars free individuals are pitted against free individuals, live grandly and boldly. What they want, they go for. Gangsters get what they want, trying it for a while, then are killed.”

The narratives onscreen are a compromise between frustrated desire and social control. “In the end ‘crime does not pay,’” continues James, “but for an hour and a half highly skilled actors and a huge organization of production and distribution have given to many millions a sense of active living....”

Being a good Victorian at heart, James might have preferred that the audience seek “aesthetic compensation” in the more orderly pleasures of cricket, instead. But as a historian and a revolutionary, he accepted what he found. In offering “the freedom from restraint to allow pent-up feelings free play,” gangster movies “have released the bitterness, hate, fear, and sadism which simmer just below the surface.” His theoretical framework for this analysis was strictly classical, by the way. James was deeply influenced by Aristotle’s idea that tragedy allowed an audience to “purge” itself of violent emotions. One day, he thought, those pent-up feelings would emerge in a new form -- a wave of upheavals that would shake the country to its foundations.

In six seasons over eight years, “The Sopranos” has confirmed again and again C.L.R. James’s point that the gangster is an archetypal figure of American society. But the creators have gone far beyond his early insights. I say that with all due respect to James’s memory -- and with the firm certainty that he would have been a devoted fan and capable interpreter.

For James, analyzing gangster films in 1950, there is an intimate connection between the individual viewer and the figure on the screen. At the same time, there is a vast distance between them. Movies offered the audience something it could not find outside the theater. The gangster is individualism personified. He has escaped all the rules and roles of normal life. His very existence -- doomed as it is -- embodies a triumph of personal will over social obligation.

By contrast, when we first meet Tony Soprano, a boss in the New Jersey mob, he is in some ways all too well integrated into the world around him. So much so, in fact, that juggling the demands of his different roles gives him panic attacks. In addition to being pater of his own brood, residing in a suburban McMansion, he is the dutiful (if put-upon) son in a dysfunctional and sociopathic family.

And then there are the pressures that attend being the competent manager of a successful business with diversified holdings. Even the form taken by his psychic misery seems perfectly ordinary: anxiety and depression, the tag-team heart-breakers of everyday neurosis.

James treats the cinematic gangsters of yesteryear as radical individualists -- their crimes, however violent, being a kind of Romantic refusal of social authority. But the extraordinary power of “The Sopranos” has often come from its portrayal of an almost seamless continuum between normality and monstrosity. Perhaps the most emblematic moment in this regard came in the episode entitled “College,” early in the show’s first year. We watch Tony, the proud and loving father, take his firstborn, Meadow, off to spend a day at the campus of one of her prospective colleges. Along the way, he notices a mobster who had turned informer for the government and gone into the witness protection program. Tony tracks the man down and strangles him to death.

At the college he sees an inscription from Hawthorne that reads, “No man ... can wear one face to himself and another to the multitude, without finally getting bewildered as to which one may be true." Earlier, we have seen Tony answer Meadow’s question about whether he is a member of the Mafia by admitting that, well, he does make a little money from illegal gambling, but no, he isn't a gangster. So the quotation from Hawthorne points to one source of Tony’s constant anxiety. But it also underscores part of the audience’s experience -- an ambivalence that only grows more intense as “The Sopranos” unfolds.

For we are no more clear than Tony is which of his faces is “true.” To put it another way, all of them are. He really is a loving father and a good breadwinner (and no worse a husband, for all the compulsive philandering, than many) as well as a violent sociopath. The different sides of his life, while seemingly distinct, keep bleeding into one another.

Analyzing the gangster as American archetype in 1950, C.L.R. James found a figure whose rise and fall onscreen provided the audience with catharsis. With “The Sopranos,” we’ve seen a far more complex pattern of development than anything found in Little Caesar or High Sierra (among other films James had in mind).

With the finale, there will doubtless be a reminder -- as in the days of the Hays Code -- that “crime does not pay.” But an ironized reminder. After all, we’ve seen that it can pay pretty well. (As Balzac put it, “Behind every great fortune, a great crime.”) Closure won’t mean catharsis. Whatever happens to Tony or his family, the audience will be left with his ambivalence and anxiety, which, over time, we have come to make our own.

Author/s: 
Scott McLemee
Author's email: 
scott.mclemee@insidehighered.com

Scott McLemee writes Intellectual Affairs each week. Suggestions and ideas for future columns are welcome.

Casaubon on Viagra

One generation’s faculty gossip is sometimes another’s cultural history. At the University of Chicago in the early 1950s, a professor stopped a teenage student leaving one of his classes. She was not properly enrolled in the course, but bureaucratic proprieties really did not have anything to do with it. She was stunning. He was smitten. They had lunch. And 10 days later, give or take, Philip Rieff was joined in marriage to a young woman who never did change her name to Susan Rieff; she was, and remained, Susan Sontag.

They did not live happily ever after. The opening pages of Sontag’s last novel, In America, are written in a first-person voice that sounds very much like the author’s. The narrator mentions reading George Eliot as a young bride and bursting into tears at the realization she had, like Dorothea in Middlemarch, married Casaubon.

As you may recall, Dorothea is at first transfixed by the learning and gravitas of Casaubon, a scholar who is many years her senior. It soon dawns on her (as it does perhaps more quickly for the reader) that he is a bloodless pedant, joyless except when venting spleen against other bloodless pedants. And there are hints, as clear as Victorian propriety will allow, that Dorothea’s honeymoon has been disappointing in other ways as well.

Sontag’s allusion must rank as one of the more subtly devastating acts of revenge ever performed by an ex-wife. At the same time, it is in keeping with some durable and rather less literary attitudes towards professors -- the stereotype that treats them as being not just other-worldly, but also rather desexed by all the sublimation their work requires. This view really took hold in the 19th century, according to the analysis presented by A.D. Nuttall in Dead From the Waist Down: Scholars and Scholarship in Literature and the Popular Imagination (Yale University Press, 2003).

But a different cliché is emerging from Hollywood lately. The summer issue of The American Scholar contains an essay by William Deresiewicz called “Love on Campus” that identifies a “new academic stereotype” visible in popular culture. The sexually underachieving Casaubon’s day is over. The new stereotype of the professor has some notches in his bedpost (this character is almost always male) and for the most part demonstrates his priapic prowess with students.

Universities in real life are “the most anxiously self-patrolled workplace in American society,” writes Deresiewicz, “especially when it comes to relations between professors and students. This is not to suggest that sexual contact between college students and professors, welcome or unwelcome, never takes place, but the belief that it is the norm is the product of fantasy, not fact.”

Yet the fantasy is played out in numerous contemporary films. It merits examination for what it implies about how academe is perceived and (mis)understood.

The stereotyped character in question is often a professor of English or creative writing, as in "Wonder Boys" or "The Squid and the Whale." But sometimes he teaches philosophy ("The Life of David Gale") or French ("Little Miss Sunshine"). He is consumed with ambition. But he is also a loser. Those conditions -- academic ambition, abject failure -- are identical, at least given the implicit logic of the stereotype.

“In the popular imagination,” writes Deresiewicz, “humanities professors don’t have anything to be ambitious about. No one really knows what they do, and to the extent that people do know, they don’t think it’s worth doing.... It may be simply because academics don’t pursue wealth, power, or, to any real extent, fame, that they are vulnerable to such [criticism]. In our culture, the willingness to settle for something less than these Luciferian goals is itself seen as emasculating.”

So he neglects his family, or drinks, or both. Above all, he seduces his students. The latter is not so much an abuse of power as a symptom of having no real power at all. He is “a figure of creative sterility,” writes Deresiewicz, “and he is creatively sterile because he loves only himself. Hence his vanity, pomposity, and selfishness; his self-pity, passivity, and resentment. Hence his ambition and failure. And thence his lechery, for sleeping with his students is a sign not of virility but of impotence: he can only hit the easy targets; he feeds on his students’ vitality; he can’t succeed in growing up.”

At one level, this new character may look like the negation of earlier clichés about absent-minded and asexual professors. But that appearance is, in some ways, misleading. These more recent fictional figures are, so to speak, Casaubon on Viagra. Like his ancestor, the contemporary on-screen professor is empty and vain, and going nowhere fast. But he has another way to vent. “In both ‘Terms of Endearment’ and ‘We Don’t Live Here Anymore,’” notes Deresiewicz, “ ‘going to the library’ becomes a euphemism for ‘going to sleep with a student.’ ”

Deresiewicz offers a cogent analysis of how this stereotype may reflect the changing place of academe in American society and the contradictory attitudes it evinces. He also presents some thoughts on a dimension of education that popular culture for the most part ignores: the eros of learning, the way a student can fall in love with a teacher for reasons having nothing to do with sexuality. Combining them, as Sontag tried to do with Rieff, seems like a bad idea.

It is a remarkable essay -- cogent on many points, and adventurous in making some of them, given the inescapable risk of being misunderstood. (I half expect to see Deresiewicz on a cable program with the words "Professor Advocating 'Brain Sex' " at the bottom of the screen.) Rather than quote or paraphrase any more of it, let me simply recommend that you read the whole thing.

Author/s: 
Scott McLemee
Author's email: 
scott.mclemee@insidehighered.com

Jane Austen, Yadda, Yadda, Yadda

Recently I was cornered by a university employee who knows I’m a scholar of British literature, specializing in Jane Austen.

“I started Pride and Prejudice last week,” he told me. “It’s one of those books I know I should have read, but I couldn’t get past the first few chapters.”

“Really,” I replied, eyebrows raised.

“Yeah, I just lost interest,” he went on. “I kept thinking to myself, ‘Oh, brother. I think I know where this is going.’”

Was this disarming honesty or throwing down the gauntlet? Was I being called out? Whatever it was, I shifted nervously as I listened to the rest of his monologue: “My theory is that the novel can be pretty much summed up as Elizabeth and Darcy meet, Elizabeth and Darcy hate each other, Elizabeth and Darcy fall in love, yadda, yadda, yadda.”

Reader, I stared at him blankly. Of course, I spent hours afterward constructing witty, cynical comebacks, such as “Yeah, I know what you mean. I have that response to episodes of VH1’s 'Behind the Music' and to reading the Bible.” But in the moment, all I managed to spit out was something clichéd and professorial resembling, “Hmm. That’s interesting. I think maybe it takes a few readings of Austen to really appreciate her fiction’s depth, humor, and irony.”

That’s also my stock answer to traditional-aged undergraduates on the first day of class -- 20-year-olds who confess that they’ve signed up for a literature class on Austen and her contemporaries because they absolutely love (or absolutely hate) her fiction -- or maybe just the film adaptations. Or Colin Firth or Keira Knightley or Clueless. The Austen-haters often claim to be taking the course because they want to understand what in the world is the big deal. A few of them end up seeing it by the end of the semester, a few more don’t, and that’s fine. But the yadda-yadda-yadda employee was a well-read, middle-aged guy with no sophomore excuse for being sophomoric. My gut reaction to his confession registered somewhere between crestfallen and incensed.

I'm having a similarly mixed reaction to the latest wave of Austen mania in the U.S. and U.K., shifting nervously as I approach it with a combination of anxiety and dread. I know that all English professors worth their salt should be constructing some theories and responses now, in advance of being cornered by colleagues and co-workers and co-eds, so as not to have to resort to the professorial and clichéd. What will we say when asked about Anne Hathaway’s Becoming Jane (2007); about the upcoming film of The Jane Austen Book Club, with its star-studded cast; or about PBS’s planned 10-week winter 2008 airing of the Complete Jane Austen on "Masterpiece Theatre"?

What’s the witty, cynical comeback to this cultural flowering of Austen-related stuff, I find myself wondering: “Can’t wait to see it!” “Wish I’d thought of it first!” “The Decline and Fall of Austen’s Empire.” “A tippet in the hand is worth two in the bush.” “A stitch in the huswife saves nine.” “Don’t look a gift pianoforte in the mouth”?

But along with such repartee, we’ll also need to ready weightier observations. First, I believe it’s imperative that we call a moratorium on starting sentences with “It is a truth universally acknowledged,” as in, “It is a truth universally acknowledged that this is the first time in television history Austen’s complete works have been aired in succession.” In the coming months we will no doubt suffer through dozens of newspaper and magazine articles beginning, “It is a truth universally acknowledged.” Best not to add to the collective torture.

In addition, when constructing our soundbites, we ought not to forget the sheer breadth of today’s Austen craze; it’s more than just films and television adaptations we’re in for. New books have appeared, too, like Confessions of a Jane Austen Addict (2007) and Jane Austen for Dummies (2006). Though I worry that these books make reading her fiction sound like something done at an Alcoholics Anonymous meeting for slow learners, surely it’s not too late for some well-placed damage control?

After all, the Austen-inspired publicity stunts are already in full swing. Perhaps you’ve heard about the kerfuffle that unfolded across the pond, “Jane Austen Rejected!” Thinly veiled versions of Austen’s fiction were sent out to British publishers as new work, under the name of Allison Laydee (a.k.a. David Lassman), and all were rejected. Even Harlequin Mills & Boon passed on publishing adulterated Jane Austen plots. The horror! The horror!

But isn’t this déjà vu all over again? Please raise your tussy mussy if you remember 10 or so years ago, when we were last inundated with Austen film and TV adaptations; with Bridget Jones novels and films; and with Austen board games, stationery, and editorial cartoons. Everyone then seemed to be asking, “Why Austen? Why now?”

The late 1990s were strange days for us longtime members of the Jane Austen Society of North America. It was as if we no longer had to apologize for indulging in our versions of wearing plastic Spock ears, whether quadrille, or quilling, or merely quizzing. Many of us became instant pundits among our friends, family, and the media, providing copy for everything from the Arkansas Democrat to The Wall Street Journal. Only a few periodicals continued to misspell Jane’s name as Austin, while many more managed to render correctly Bennet, Morland, and Love and Freindship. Oh, those were heady times.

If you were there, then you’ll no doubt recall that we came up with some pretty wild theories to explain the Jane train, too. Remember when Camille Paglia said Austen’s popularity could be explained as a cultural symptom in reaction to the O.J. trial, as people longed for stories in which no one was being butchered? That was a good one. Or how some claimed that the return to Austen was a result of the fin de siècle’s prompting us to take stock and return to works of past centuries? Seems pretty thin now. Others claimed that Austen’s resurgence happened because we needed to measure the worth of our male heroes, from Bill Clinton and Brad Pitt to Kurt Cobain and Ross Perot. (Jane Austen and Ross Perot?)

So here we are, circa 2007, finding ourselves in danger of being asked yet again, “Why Austen? Why now?” How delightful. How frightening. I’m determined not to be caught off guard, so I’ve constructed some all-purpose answers to explain the latest Austen craze, suitable for everything from The Nation to "Larry King Live" to Marie Claire. Anyone struggling for words is, of course, welcome to use these as conversational building blocks:

Option A: “Today’s Austen mania is a form of cultural compensation for the disaster of the Iraq War and for the genocide in Darfur. Her novels offer us a way to forget the world’s evils by allowing us to travel back to those halcyon post-French Revolutionary days of Napoleon.”

Option B: “Austen’s timeless narratives of women’s romantic searching provide a welcome distraction from the Supreme Court’s rolling back of abortion rights, as we yearn for an era when many women had the power to refuse a proposal of marriage.”

Option C: “Austen’s newfound popularity signals that empire-waist frocks are due for a fashion revival; that irony, having been shunned after 9/11, is back and better than ever; and that Wal-Mart will roll back prices on its imported teas.”

This list is just a draft of talking points. I still have a few more ideas to work out. For instance, can it be an accident that Austen’s popularity is surging, just as Jane magazine has gone defunct? There is certainly a quotable quip in the making there. Even if we don’t perfect our theories in the coming months, I don’t think there should be much cause for worry. Check back with me in 2013, the 200th anniversary of Pride and Prejudice’s publication. Oh, brother. I think I know where this is going.

Author/s: 
Devoney Looser
Author's email: 
info@insidehighered.com

Devoney Looser is associate professor of English at the University of Missouri at Columbia, and the author of British Women Writers and the Writing of History (Johns Hopkins University Press). She has just completed a book on British women writers and old age, to be published next year.

Good Grief

A few weeks ago, a new edition of the selected works of Edmund Wilson appeared. Another monumental book this season is David Michaelis’s Schulz and Peanuts: A Biography (HarperCollins). The critic and the cartoonist never crossed paths, so far as anyone knows. But there is some overlap between these publications, it seems to me. The biography of Charles M. Schulz, who died in 2000, calls to mind Wilson’s The Wound and the Bow, a collection of essays published in 1941 and reprinted in the second of the two Library of America volumes.

The connection is indirect but insistent. In the essay that lent The Wound and the Bow its title, Wilson revisits one of the lesser-known plays by Sophocles -- a telling of the story of Philoctetes, who also appears in the Iliad. Philoctetes is a skilled and powerful archer, but he is also a man in exile through no fault of his own. A snakebite has left him with a wound that not only festers but reeks. Unable to bear the stench or his groans, the Greeks abandon him on a desert island. And there he stays until Odysseus is forced to bring him back into service as the only man able to bend the bow of Heracles.

Wilson (who had started using psychoanalysis as a means of interpreting literary works well before this was required by law) saw in the figure of Philoctetes something like an allegorical emblem for the artist’s inner life. Neurosis is the agonizing wound that leaves the sufferer isolated and bitter, while genius is the ability to bend the bow, to do what others cannot. Creativity and psychic pain, “like strength and mutilation,” as Wilson put it, “may be inextricably bound up together."

Not such a novel idea, after all this time. And one prone to abuse -- reducing artistic creativity to symptomatology. (Or, worse, elevating symptomatology into art: a phenomenon some of us first encounter while dating.)

In Wilson’s hands, though, it was a way through the labyrinth of a writer’s work, of finding hidden passages within it. The two longest studies in The Wound and the Bow were interpretations of Charles Dickens and Rudyard Kipling: two authors whose critical reputations had been nearly done in by their commercial success. Wilson’s criticism, while biographical in method, did not take the debunking route. If he documented the wound, he also showed the strength with which each figure could draw the bow.

Now, I’m not really sure that the archer serves all that well as a model of the artist. (The myths of Daedalus or Orpheus work better, for a variety of reasons, and cover much of the same analogical ground.) On the other hand, Philoctetes did tend to complain a lot -- as did Charles Schulz, it seems. The cartoonist emerges from his biographer’s pages as a man of numerous griefs and grievances. His life was shaped by an upbringing that was economically secure but emotionally complex. His childhood was spent among relatives who expressed affection through joking insults (to give things the most positive construction possible).

Michaelis, who has also written about the life of the painter N.C. Wyeth, offers numerous well-framed appreciations of Schulz’s artistry. The book is Wilsonian, in that sense. But any revaluation of “Peanuts” as cultural artifact is bound to be less a topic for conversation than the unveiling of details about Schulz’s melancholia and his resentments.

An episode of the documentary series "American Masters" on PBS airing later this month will be tied to the book, which should reach stores any day now. Soon it will be common knowledge that everyone who met the cartoonist’s first wife had a pretty good idea where Lucy originated. Numerous “Peanuts” strips are embedded throughout the book -- each of them echoing events or situations in Schulz’s life or some aspect of his personality and relationships. (Members of his family are complaining about the biography, a development to be expected.)

The cartoons themselves -- however telling as illustrations of things the biographer has discovered about Schulz -- are rich works in their own right. They fall somewhere between art and literature; but those categories really don't matter very much, because they create their own little world. The biography derives its meaning from the cartoons and not vice versa.

So in an effort to restore some balance, I’d like to recommend some supplementary reading about “Peanuts” -- an essay that says very little about Schulz himself. It focuses instead on what he created. How an artist becomes capable of bending the bow is difficult to understand. Biography is one approach, but it does not exhaust the topic. (In a way it only begins to pose the riddle.)

The piece in question is “The World of Charlie Brown” by Umberto Eco. It appeared in his collection Apocalittici e integrati, a volume that became rather notorious when it first appeared in 1964. Parts of the collection were translated, along with some later pieces, as Apocalypse Postponed (Indiana University Press, 1994).

Like other essays in the book, the analysis of “Peanuts” is part of Eco’s challenge to familiar arguments about “mass culture,” whether framed in Marxist or conservative terms. Either way, the theorists who wrote about the topic tended to be denunciatory. Eco, who was 32 when Apocalittici e integrati appeared, had published a couple of monographs on medieval intellectual history and was also working on semiotics and the philosophy of language. Aside from teaching, he paid the bills by working for a television network and a trade publisher. All the quasi-sociological hand-wringing about the media struck Eco as rather obtuse, and he did not hesitate to say so.

From the vantage point of someone who had written about the aesthetic theory of Thomas Aquinas, it was not self-evident that “mass culture” was the fresh horror that worried his contemporaries. He saw it beginning with the cathedrals -- or at least no later than the printing press. The fact that Eco wrote about Superman and television worried some of the reviewers.

One of them complained that treating “Plato and Elvis Presley” as both “equally worthy of consideration” was bound to have grave consequences: “In a few years the majority of Italian intellectuals will be producing films, songs, and comic strips....while in the university chairs, young dons will be analyzing the phenomena of mass culture.” It would be the closing of the Italian mind, I guess.

“The World of Charlie Brown” is evidence that Eco meant to do more than stir up argument. It originally appeared as the preface to the first collection of Schulz’s strips to appear in Italy. It is the work of a critic determined to win “Peanuts” a hearing as a serious work of art.

Eco seems unable to resist a certain amount of elitist chain-yanking. He says that the translators lavished on their work “the meticulous passion that Max Brod devoted to the manuscripts of Kafka...and Father Van Breda to the shorthand notes of Edmund Husserl.” The round-headed Charlie Brown embodies “a moment of the Universal Consciousness,” he writes, “the suburban Philoctetes of the paperbacks.” (I confess that I did not remember that part of the essay until rereading it just now.)

But the tongue soon comes out of his cheek. Eco reveals himself as a devoted student of the history of the American comic strip. He triangulates “Peanuts” with respect to Jules Feiffer’s satirical cartoons and “the lyric vein of Krazy Kat” -- comparisons that are so brilliantly apt that they immediately seem obvious, which they aren’t.

And Eco warns the Italian reader that appreciating the strip involves learning Schulz’s rhythm of theme and variation. “You must thoroughly understand the characters and the situations,” he writes, “for the grace, tenderness, and laughter are born only from the infinitely shifting repetition of the patterns....”

At this point, it is tempting to quote at length from Eco’s quick analysis of the essence of Schulz's characters. Each one embodies or resists some part of the human condition -- even, and perhaps especially, Snoopy.

In the world of “Peanuts,” writes Eco, “we find everything: Freud, mass-cult, digest culture, frustrated struggle for success, craving for affection, loneliness, passive acquiescence, and neurotic protest. But all these elements do not blossom directly, as we know them, from the mouths of a group of children: they are conceived and spoken after passing through the filter of innocence.” The strip is “a little human comedy for the innocent reader and for the sophisticated.” A child can enjoy them, and so can the reader who is tempted to draw analogies to Samuel Beckett.

The sophisticated part of Eco’s sensibility can recognize in Schulz’s art a depth that is full of shadows: “These children affect us because in a certain sense they are monsters: they are the monstrous infantile reductions of all the neuroses of a modern citizen of industrial civilization.” But the depths aren’t an abyss. The little monsters, while sometimes cruel, never become unspeakable. They “are capable suddenly of an innocence and a sincerity which calls everything into question....”

Charles Schulz was a neurotic, no doubt; but most neurotics aren’t Charles Schulz. He was something else. And it may be that we need an Italian semiotician to remind us just what: "If poetry means the capacity of carrying tenderness, pity, [and] wickedness to moments of extreme transparence, as if things passed through a light and there were no telling any more what substance they are made of,” as Eco wrote, “then Schulz is a poet.”

Author/s: 
Scott McLemee
Author's email: 
scott.mclemee@insidehighered.com

Scott McLemee writes Intellectual Affairs each week. Suggestions and ideas for future columns are welcome.

Zombie Nation

For countless dead bodies to become reanimated and swarm through the streets as cannibalistic ghouls would count as an apocalyptic development, by most people's standards. Then again, it is not one that we have to worry about all that much. Other possibilities of destruction tend to weigh more heavily on the mind. But if you combine extreme improbability with gruesome realism, the effect is a cinematic nightmare that won't go away -- one of the most durable and resonant forms of what Susan Sontag once described as "the imagination of disaster."

It all began with the release of George Romero's Night of the Living Dead in 1968: a low-budget independent film that more or less instituted the conventions of the cannibalistic zombie movie, as further developed in his Dawn of the Dead (1978) and Day of the Dead (1985). Other directors have played variations on his themes, but Romero remains the definitive zombie auteur -- not simply for founding the subgenre, but for making it apocalyptic in the richest sense. For the root meaning of "apocalypse," in Greek, is "an uncovering." Romero's zombies expose the dark underside of American culture: racism, consumerism, militarism, and so on.

His most recent addition to the zombie cycle, Diary of the Dead, which opened last Friday, returns viewers to the opening moments of the undead's onslaught. But while his first film, Night, was set in a world where radio and television were the only sources of information for panicking human refugees, Diary is a zombie film for the age of new media. Romero's band of survivors this time consists of a bunch of college students (and their alcoholic professor) who are busy making a film for class when the end of the world hits. One of them becomes obsessed with posting footage of the catastrophe online -- a chance for Romero to explore the ways that digital technology makes zombies of its users.

As an enthusiast for Romero's apocalyptic satire, I was somehow not terribly surprised to learn last year that Baylor University Press had published a book called Gospel of the Living Dead: George Romero's Visions of Hell on Earth. The author, Kim Paffenroth, is an associate professor of religious studies at Iona College in New Rochelle, New York.

Romero's zombie apocalypse brings "the complete breakdown of the natural world of food chains, social order, respect for life, and respect for death," writes Paffenroth, "because all those categories are meaningless and impossible to maintain in a world where one of the most fundamental limen, the threshold between alive and dead, has become a threshold that no one really crosses all the way over, but on which everyone lives suspended all the time." And in this moment of revelation, all the deadly sins stand fully revealed (and terribly rapacious).

The release of Diary of the Dead seemed a perfect excuse finally to interview Paffenroth. He answered questions by e-mail; the full transcript follows.

Q: You mention in your book that George Romero's work has literally given you nightmares. How did you go from watching his films to writing about them, and even publishing zombie fiction of your own?

A: Well, I was fascinated with the original Dawn when I was still a teen, but I'm afraid my level of commentary seldom got beyond -- "Zombies! Cool!" And then, to be honest, I didn't think of or watch any zombie films from the time Day came out until the Dawn remake was released. But during those years, I was just reading everything I could -- especially ancient and medieval literature, philosophy, and theology. So when I saw the Dawn remake, things clicked and I could give a more thorough and complicated response than I had when I was a youth, because I could then see how Romero was building on Dante and the Bible.

And to be frank, at that point I'd written a lot of books about the Bible and other theological topics, and no one read them. To an author, that's probably the worst disappointment imaginable. So I took a chance that if people didn't want to read about these theological subjects directly, maybe through the filter of their favorite monster genre, they'd be more open to the discussion and analysis. And it seems that they are.

As for making the transition to fiction writing, that's just crazy hubris that strikes all of us at some point -- the idea that anyone would want to read the tales we write -- and some of us are dogged and patient and lucky enough that it actually amounts to something. I never get over it, when I realize that there are some people who like my fiction and look forward to what I'll write next. That's a huge rush and I want to keep it going as long as I can.

Q: In the New Testament, Jesus dies, then comes back to life. His followers gather to eat his flesh and drink his blood. I am probably going to hell for this, but .... Is Christianity a zombie religion?

A: I think zombie movies want to portray the state of zombification as a monstrous perversion of the idea of Christian resurrection. Christians believe in a resurrection to a new, perfect state where there will be no pain or disease or violence. Zombies, on the other hand, are risen, but exist in a state where only the basest, most destructive human drive is left -- the insatiable urge to consume, both as voracious gluttons of their fellow humans, and as mindless shoppers after petty, useless, meaningless objects. It's both a profoundly cynical look at human nature and a sobering indictment of modern, American consumer culture.

Q: The human beings in Romero's world are living through an experience of "hell on earth," as your subtitle says. There are nods within the films toward a possible naturalistic explanation for the dead (that a virus or "space radiation" somehow brought corpses back to life), but the cause is never very useful or important to any of the characters. And some characters do think mankind is finally being punished. Is the apocalyptic dimension just more or less inevitable in this kind of disaster, or is it deliberate? To what degree is Romero's social satire consciously influenced by Christian themes? Or are those themes just inevitably built into the scenario and imagery?

A: I think "apocalyptic" has just come to mean "end of civilization," so of course, any movie or book with that as its premise is, by definition, "apocalyptic." And even if we throw in the interpretation "God's mad at us -- that big, mean God!" I still don't think that's very close to real, biblical apocalyptic.

Romero's view is a lot closer to biblical apocalyptic or prophetic literature, for he seems to make it clear, over and over, that humanity deserves this horror, and the humans in the films go to great lengths to make the situation even worse than it is already -- by their cruelty, greed, racism, and selfishness. Whether this is conscious or accidental, I really can't address with certainty: I only note that his prophetic vision is compatible with a Christian worldview, not that it stems from that.

Q: The fifth movie in George Romero's zombie cycle, Diary of the Dead, opened over the weekend. Does it seem like a progression or development in his vision, or does it simply revisit his earlier concerns in a new setting?

A: I think each film in the series has a special target that is the particular focus of Romero's disgust at the moment. The media has always been at the periphery in each of the previous films -- cooperating with government ineptitude and coverup in the first two until the plug's pulled and there is no more media -- but now it's the main subject of this installment.

Romero does a great job capturing the sick voyeurism of addiction to cell-phone cameras and the Internet -- there are so many shots in this one where you just want to shout at the characters, "Put down the camera and HELP HER! SHE'S BEING EATEN ALIVE YOU IDIOT!" It is surely no accident that the two people who most help our protagonists are either cut off from the media (the Amish man) or have themselves been the target of unfair representation in the media (black men who are called "looters" while white people during Katrina were said to be "salvaging" or "gathering" supplies). And the one time a crime is committed by one group of humans against another, the camera is forced off.

With that being said, I think in many ways it does return to the vision of Night of the Living Dead with its overwhelming cynicism and despair. Certainly the last shot is meant to evoke the same feeling of finality and doom as the first film, the gripping doubt that there's anything left in human society worth saving.

Q: It feels as if Romero is suggesting that Jason, the character holding the digital camera, is himself almost a zombie. There's something creepy about his detachment -- his appetite for just consuming what is going on around him, rather than acting to help anyone. But there are also indications that the cameraman does have a kind of moral commitment to what he is doing. He's trying to capture and transmit the truth of what is going on, because doing so might save lives. What did you make of that ambiguity? Is something redemptive going on here with behavior that otherwise seems quite inhuman?

A: I'd have to think about it in detail, once I have the DVD "text" to study. My initial reaction is that that interpretation mostly comes from the voice-over by Deb, his girlfriend and the narrator of Diary. The exact motives of Jason remain hazy to me. He says he doesn't want fame (what would it mean in their world?), yet he's obsessed with the 72,000 hits in 9 minutes. But he doesn't exactly explain why in that scene. I don't think he said that maybe some of the 72k people were saved or that he's doing a public service or helping save the world.

He just seems addicted and intoxicated by the 72k number itself -- like even if it's not fame, it's a junkie's fix, it's a validation of his value, as indeed is the chilling (and slightly comical) act of handing the camera to Deb at the end. As she keeps accusing him: if it doesn't happen on camera, it's like it doesn't happen.

So the camera is not reflecting reality, it's creating it. And Jason's version of reality is better than the government's falsified version of the first attack, because it's more accurate, but it's no less addictive or exploitive or inhumane by the end.

Q: Good points, but I still think there's some ambiguity about Jason's role, because this is a problem that comes up in debates over journalistic ethics -- whether the responsibility to report accurately and as a disengaged observer becomes, at some point, irresponsibility to any other standard of civilized behavior. Arguably Romero is having it both ways: criticizing Jason while simultaneously using the narrative format to ask whether or not his behavior might have some justification (however ex post facto or deluded).

A: Perhaps artists can have it both ways in a way journalists can't. Artists deal in ambiguities, journalists (supposedly) deal in facts. But with cell phones and the Internet, suddenly everyone is a potential "journalist" and the facts are even more malleable and volatile than they ever were.

Q: You note that this subgenre has proven itself to be both popular with audiences and marginal to Hollywood. "Zombie movies," you write in your book, "just offend too many people on too many levels to be conventional and part of the status quo." And while not quite as gory as some of Romero's earlier work, Diary ends with an image calculated to shock and disgust. Is this a matter of keeping the element of humor under control? While a spoof like Shaun of the Dead was an affectionate homage to Romero, the element of social satire there didn't really have much, well, bite....

A: That's a great way to put it -- that humorous homages use humor to offset the gore (look at the really over-the-top squashing scene in Hot Fuzz for an example of just how much gore you can offset, if the movie's funny enough!). But it also works the other way -- that biting social criticism needs some bite, needs to be a little out of control and not tamed or staid. I like that idea.

That being said, Romero makes my job a lot harder. The gore hounds sometimes put their hands over their ears and chant "LALALALA! I can't hear you!" if I say that some image they love on an aesthetic level might *mean* something -- while I think a lot of readers or viewers who might be receptive to criticism of our society just can't make it past the first disemboweling.

I would suppose it's an artistic judgment, and for me at least, Romero has been hitting the right balance for a long time, and is continuing to do so.

Author/s: 
Scott McLemee
Author's email: 
scott.mclemee@insidehighered.com
