As is evident from the recent staff shake-up at Virginia Quarterly Review, university quarterlies face a perilous future. They are squeezed by campus-wide cost-benefit analyses on one side and a new wave of popular, innovative independent magazines on the other. Academic literary magazines -- many with staid formats and ossified editorial philosophies -- are struggling to assert their relevance in an era of unprecedented change in publishing technologies. The journey to this point has been long but inexorable. Whether these discouraging trends can be reversed remains to be seen.
University magazines have commonly been placed in a class apart from their quirky, mercurial independent cousins in the century since the emergence of Modernism. The editors of the seminal 1946 study The Little Magazine: A History and a Bibliography expressly excluded them from their pages. In the view of the book’s editors, such magazines as The Kenyon Review, The Yale Review and Virginia Quarterly Review were more measured and dignified than the avant-garde magazines of the time, a bit too “conscious of a serious responsibility which does not often permit them the freedom to experiment or to seek out unknown writers.”
In The Little Magazine in America: A Modern Documentary History, published in 1978, magazine critic Charles Robinson insisted that institutional backing created an unfair competitive advantage, as the academic periodicals could “afford posh formats the independents seldom approach.” In the two decades following World War II, the explosion in university enrollment was paralleled by an explosion in the number of university-sponsored literary magazines. Institutional magazines enjoyed many distinct advantages over the little movement magazines on which they were modeled: adequate funding, a faculty editor with a broad literary education, cheap or free labor in the form of undergraduate and graduate students, and the instant prestige of the institutions that housed them. In the first two decades of the 21st century, however, the rationale of the academic magazine has come into question. Some have closed, some have been asked to find additional sources of funding, and others have had their print operations eliminated and moved online. Among the magazines that have been compelled to adapt to changing times are TriQuarterly, New England Review and Shenandoah. And, for the second time in five years, Virginia Quarterly Review finds itself under scrutiny.
In concept, the editor of the university magazine -- without fear of the wolf at the door -- was free to pursue an editorial policy that foregrounded art over commerce. And, taken as a whole, the experiment has been a resounding success. Not only have university magazines regularly published content that falls outside the commercial mainstream, including special issues on world literature and on overlooked authors and movements, they have served as a proving ground for the emerging writers who would go on to populate the pages of Best American Stories and Best American Poetry, as well as the O. Henry and Pushcart Prize anthologies. To use the example of our own former publication, the table of contents of TriQuarterly’s “Under 30 Issue,” published in 1967, includes Joyce Carol Oates, Jim Harrison, Louise Glück and James Tate.
However, there is a moral hazard embedded in the university-supported model: without an incentive to undertake the less glamorous business of chasing subscriptions and single-copy sales, such matters are easily neglected. As Jeffrey Lependorf, director of the Council of Literary Magazines and Presses, observes in our book The Little Magazine in Contemporary America, “Many university magazines, with venerable publishing histories and many ‘first to publish’ credits to their names, because they received such a high level of support, did little to build their readerships. They may have achieved literary excellence, but very few people ever actually read what they published.”
On the occasion of Northwestern University’s shutting down of the TriQuarterly print operation in 2010, Ted Genoways, then editor of Virginia Quarterly Review, wrote, somewhat dismissively, in Mother Jones: “Once strongholds of literature and learned discussion in our country, university-based quarterlies have seen steadily declining subscriber bases since their heyday a half century ago -- and an even greater dent in their cultural relevance.”
He then advanced the made-over VQR as a cure for this malady. Indeed, Genoways and his staff transformed a magazine that had had only two editors over the previous 60 years, adding a web presence and moving VQR into new areas, most notably journalistic reporting from conflict zones. Yet while the new VQR was certainly a publication worth following, the lavish upgrade in content produced only a short-lived increase in subscriptions. And, sadly, because of the death of managing editor Kevin Morrissey and the subsequent blow to the magazine’s reputation, we will never know whether VQR could have achieved sustainability under Genoways’s editorship.
Now, with the departure of web editor -- and nationally renowned maven of digital publishing -- Jane Friedman, and the apparent ouster of publishing veteran Ralph Eubanks, VQR is once again in the news for reasons it would prefer not to be. Faced with the loss of two professionals with precisely the experience that top magazines are seeking, publisher Jon Parrish Peede insists that VQR will expand its operations, adding science and poetry editors and increasing its focus “on online long-form journalism, multimedia and e-books...”; he plans to reallocate the operational budget “to achieve these and related goals.” The statement addresses content but not, in any real sense, operations -- unless the budget reallocation can generate a significant increase in subscriptions, sales and advertising to underwrite such growth.
What is to be done? In the end, the path back to prominence for VQR and university literary magazines in general may be lit by the leading independent magazines, which are thriving to a greater extent than perhaps ever before. Guided by editors who have achieved reputations beyond their periodicals, magazines such as McSweeney’s, Tin House, Diagram and n+1 all boast distinctive designs and innovative editorial programs that have attracted broader, younger readerships.
University magazines must make the case for themselves within their institutions and without. Editors must demonstrate to their administrations that they are committed to deploying their funds efficiently. They must work to expand circulation by using existing technologies to attract, track and retain subscribers. Beyond bottom-line concerns, university magazines should strive to contribute to the cultural identity of their institutions. And beyond the university, their editors should seek not merely to publish the best of what is thought and said but also to identify distinct missions and develop editorial philosophies that set them apart.
Certainly there are magazines that embody these qualities. New England Review and Alaska Quarterly Review both reflect the cultures of their schools and regions while maintaining national reputations. Kenyon Review is a venerable name in the pantheon that has always kept up with the times. In the end, university literary quarterlies can no longer rely upon the safety of the ivory tower -- nor should they wish to.
Joanne Diaz is associate professor of English at Illinois Wesleyan University. She was an assistant editor at TriQuarterly and is the author of two collections of poetry, The Lessons and My Favorite Tyrants.
Ian Morris is the author of the novel When Bad Things Happen to Rich People and is managing editor of the new magazine Punctuate at Columbia College Chicago.
“Would you mind telling me what those four years of college were for?”
So asks the father of Benjamin Braddock, the protagonist of "The Graduate." A half-century after Mike Nichols made this film, it remains popular at "senior week" events and other end-of-college rituals. And that's because we still haven't answered its central question: what are we doing here, and why?
When Nichols died in November, obituaries inevitably depicted "The Graduate" as an emblem of youth alienation in postwar America. In the 1967 film’s most iconic line, a family friend gives young Braddock a single word of advice: “plastics.” The term became an ironic rallying cry for a rising generation of rebellious Americans, who rejected their elders’ bland conformity and empty consumerism.
But Braddock simply repeats the phrase — “plastics” — in a glassy-eyed stupor. As Nichols told an interviewer after the film’s release, Braddock is “a kid drowning among objects and things, committing moral suicide by allowing himself to be used finally like an object or thing.” Young Benjamin knows what he doesn’t like, but he has no idea how — or even whether — to change it.
That’s why Nichols decided to give the role to an unknown actor named Dustin Hoffman instead of to an established star like Robert Redford, who also campaigned for the part. When Hoffman read the book on which the film was based, he told Nichols that Braddock should be played by Redford or by another classically handsome white Anglo-Saxon Protestant.
But Nichols had something very different in mind. He saw Braddock as an anti-hero, a loser who sleepwalks through life instead of awakening to its challenges. So the director chose a Jewish actor — with dark, ungainly features — instead of the “walking surfboards” (as Nichols mockingly called them) who usually won the big Hollywood roles.
Braddock has an ambivalent and depressingly passionless affair with one of his parents’ friends, Mrs. Robinson, whose name would be immortalized in the song that Paul Simon wrote for the film. (The other Simon and Garfunkel songs on the soundtrack, including “Sounds of Silence,” predated the movie.) Then Braddock falls in love with Mrs. Robinson’s daughter, Elaine, an undergraduate at the University of California at Berkeley.
Conventional to his core, Braddock resolves to win Elaine in the most predictable, socially acceptable fashion: by marrying her. He drives his sportscar up to the Bay Area, where Nichols treats us to the famous shot of Hoffman speeding across the Bay Bridge (but in the wrong direction, as film buffs often note). The budget-conscious Nichols shot most of his college scenes at the University of Southern California, which was much closer to his studio, although we do get a few glimpses of the neighborhood abutting Cal-Berkeley.
What we do not get is a sense of the Free Speech Movement, demonstrations against the Vietnam War, or any of the other political passions that enveloped Berkeley in the late 1960s. The only hint is an exchange with a hostile boardinghouse manager, who inquires whether Braddock is an “agitator”; a few scenes later, a young tenant (played by Richard Dreyfuss, in one of his first roles) asks the manager if he should call the police to arrest Braddock.
On what charge? Braddock isn’t a threat to anyone at the university, where he follows Elaine through the humdrum rhythms of college life — to a class, to the library — while a clock chimes from the tower overhead. There’s nothing here to engage either of them, except the fact that Elaine is herself engaged to be married — and not to Braddock. So he has to win the girl from his rival, who looks very much like Mike Nichols’ walking-surfboard stereotype.
The film’s courtship rituals feel altogether dated in today’s era of student hook-ups and delayed marriage. But the aimless ennui of college should be familiar to anyone who works or studies at one. We have millions of students who are simply drifting through college, just like Benjamin Braddock does in his parents’ pool. As my colleague Richard Arum and his co-author Josipa Roksa have shown, the average undergraduate studies 12 hours per week, and more than a third report studying less than 5 hours a week.
On the other end of the spectrum are the so-called Organization Kids, who have been programmed to climb the social ladder at all costs. They do hit the books, early and often, but there’s something soulless and depressing about their grim quest for grades, connections, and jobs. They’re “excellent sheep,” to quote the title of William Deresiewicz’s recent book, going along in order to get ahead.
In the years since Mike Nichols made "The Graduate," we have transformed our universities into truly mass institutions. Soon, we are told, we'll have "college for all." But college for what? Asked that by his befuddled father, Benjamin Braddock replies simply, “You got me.” We've got to come up with a better answer than that.
Most readers’ first response to David Shumway’s Rock Star: The Making of Musical Icons from Elvis to Springsteen (Johns Hopkins University Press) will be to scan its table of contents and index with perplexity at the performers left out, or barely mentioned. Speaking on behalf of (among others) Lou Reed, Joe Strummer, and Sly and the Family Stone fans everywhere, let me say: There will be unhappiness.
For that matter, just listing the featured artists may do the trick. Besides the names given in the subtitle, we find James Brown, Bob Dylan, the Rolling Stones, the Grateful Dead, and Joni Mitchell – something like the lineup for an hour of programming at a classic rock station. Shumway, a professor of English at Carnegie Mellon University, makes no claim to be writing the history of rock, much less formulating a canon. The choice of artists is expressly a matter of his own tastes, although he avoids the sort of critical impressionism (see: Lester Bangs) that often prevails in rock writing. The author is a fan, meaning he has a history with the music. But his attention extends wider and deeper than that, and it moves in directions that should be of interest to any reader who can get past “Why isn’t _____ here?”
More than a set of commentaries on individuals and groups, Rock Star is a critical study of a cultural category -- and a reflection on its conditions of existence, conditions that are now, arguably, far along the way to disappearing.
The name of the first rock song or performer is a matter for debate, but not the identity of the first rock star. Elvis had not only the hits but the pervasive, multimedia presence that Shumway regards as definitive. Concurring with scholars who have traced the metamorphoses of fame across the ages (from the glory of heroic warriors to the nuisance of inexplicable celebrities), Shumway regards the movie industry as the birthplace of “the star” as a 20th-century phenomenon: a performer whose talent, personality, and erotic appeal might be cultivated and projected in a very profitable way for everyone involved.
The audience enjoyed what the star did on screen, of course, but was also fascinated by the “real” person behind those characters. The scare quotes are necessary given that the background and private life presented to the public were often somewhat fictionalized and stage-managed. Fans were not always oblivious to the workings of the fame machine. But that only heightened the desire for an authentic knowledge of the star.
Elvis could never have set out to be a rock star, of course – and by the time Hollywood came around to cast him in dozens of films, he was already an icon thanks to recordings and television appearances. But his fame was of a newer and more symbolically charged kind than that of earlier teen idols.
Elvis was performing African-American musical styles and dance steps on network television just a few years after Brown v. Board of Education – but that wasn’t all. “The terms in which Elvis’s performance was discussed,” Shumway writes, “are ones usually applied to striptease: for example, ‘bumping and grinding.’ ” He dressed like a juvenile delinquent (the object of great public concern at the time) while being attentive to his appearance, in particular his hair, to a degree that newspaper writers considered feminine.
The indignation Elvis generated rolled up a number of moral panics into one, and the fans loved him for it. That he was committing all these outrages while remaining a soft-spoken, polite young man – one willing to wear a coat and tails to sing “Hound Dog” to a basset hound on "The Milton Berle Show" (and later to put on Army fatigues, when Uncle Sam insisted) – only made the star power more intense: those not outraged by him could imagine him as a friend.
Elvis was the prototype, but he wasn’t a template. Shumway’s other examples of the rock star share a penchant for capturing and expressing social issues and cultural conflicts in both their songs and how they present themselves, onstage and off. But they do this in very different ways – in the cases of James Brown and Bob Dylan, changing across the length of their careers, gaining and losing sections of their audience with each new phase. The shifts and self-reinventions were very public and sometimes overtly political (with James Brown's support for Richard Nixon being one example) but also reflected in stylistic and musical shifts. In their day, such changes were sometimes not just reactions to the news but part of it, and part of the conversations people had about the world.
Besides the size of the audience, what distinguishes the rock star from other performers is the length of the career, or so goes Shumway’s interpretation of the phenomenon. But rewarding as the book can be – it put songs or albums I’ve heard a thousand times into an interesting new context – some of the omissions are odd. In particular (and keeping within the timespan Shumway covers) the absence of Jimi Hendrix, Janis Joplin, and Jim Morrison seems problematic. I say that not as a fan disappointed not to find them, but simply on the grounds that each one played an enormous role in constituting what people mean by the term “rock star.” (That includes other rock stars. Patti Smith elevated Morrison to mythological status in her own work, while the fact that all three died at 27 was on Kurt Cobain’s mind when he killed himself at the same age.)
I wrote to Shumway to ask about that. (Also to express relief that he left out Alice Cooper, my own rock-history obsession. Publishers offering six-figure advances for a work of cultural criticism should make their bids by email.)
“My choices are to some extent arbitrary,” he wrote back. “One bias that shaped them is my preference for less theatrical performers as opposed to people such as David Bowie (who I have written about, but chose not to include here) or Alice Cooper.” But leaving out the three who died at 27 “was more than a product of bias. Since I wanted to explore rock stars’ personas, I believed that it was more interesting to write about people who didn’t seem to be playing characters on stage or record. I agree with you about the great influence of Jim Morrison, Janis Joplin, and Jimi Hendrix, but I don’t think their personas have the complexity of the ones I did write about. And, they didn’t figure politically to the degree that my seven did. The main point, however, is that there is lots of work to be done here, and I hope that other critics will examine the personas [of] the many other rock stars I did not include.”
The other thing that struck me while reading Rock Star was the sense that it portrayed a world now lost, or at least fading into memory. Rock is so splintered now, and the "technology of celebrity" so pervasive, that the kind of public presence Shumway describes might no longer be possible.
“The cause is less the prevalence of celebrity,” he replied, “than the decline of the mass media. Stars are never made by just one medium, but by the interaction of several. Earlier stars depended on newspapers and magazines to keep them alive in their fans’ hearts and minds between performances. Radio and TV intensified these effects. And of course, movie studios and record companies had a great deal of control over what the public got to see and hear. The result was that very many people saw and heard the same performances and read the same gossip or interviews. With the fragmentation of the media into increasingly smaller niches, that is no longer the case. The role of the internet in music distribution has had an especially devastating effect on rock stardom by reducing record companies’ income and the listeners’ need for albums. The companies aren’t investing as much in making stars and listeners are buying songs they like regardless of who sings them.”
That's not a bad thing, as such, but it makes for a more compartmentalized culture, while the beautiful thing about rock 'n' roll is when it blows the doors off their hinges.