New book on college students and hip-hop culture

Author argues that impact on campus life is significant -- and educators should know more about it.

What do research librarians have a 'hunch' about?

Members of the Association of Research Libraries pitch ideas about the future of the field during the "first inaugural 'hunchery.'" Holograms ensue.

Review of 'Philip Sparrow Tells All'

A novelist and English professor named Samuel M. Steward was fired by DePaul University in 1956 for the offense of running a tattoo parlor on Chicago’s Skid Row. He did not have the option (so readily taken for granted these days) of explaining it as full-immersion ethnographic research, nor did the fact that he’d practiced this sideline under a pseudonym, Philip Sparrow, count as mitigation. By then Steward was in his late 40s and had been teaching for well over 20 years, but his academic career was finished.

It was a moment of emergence, however, not of decline. Within a few years, the defrocked professor moved to California. His artistry with the ink gun put Philip Sparrow in demand among the Hell’s Angels, whose standards are rigorous and exacting to a degree academe never quite manages. (Being thrown out of the Angels can include relinquishing one’s tattoos, a process that sometimes involves a blowtorch.)

In the late 1970s, he went back to using his given name and under it published, among other things, a collection of letters from his friends Gertrude Stein and Alice B. Toklas. But use of a pseudonym seems to have permitted the flourishing of aspects of his creative identity that might have gone unexpressed. Besides his renown among tat connoisseurs as Philip Sparrow, he also wrote a considerable amount of pornographic fiction under the name Phil Andros -- which was kind of a meta pun, splitting the Greek components of “philanderer” (a man who has sex with a great many women) and repurposing them for gay use (“lover of men”).

He died in 1993 at the age of 84, leaving behind the papers that allowed Justin Spring to write Secret Historian: The Life and Times of Samuel Steward (Farrar, Straus and Giroux), a National Book Award Finalist for 2010. The big online booksellers show the fiction of Phil Andros to be available and in demand, although nearly everything Steward published under his own name has long since gone out of print. But the University of Chicago Press has now added something to his stature as an author by publishing Philip Sparrow Tells All: Lost Essays by Samuel Steward, Writer, Professor, Tattoo Artist, edited by Jeremy Mulderig -- who, by a nice bit of karma, happens to be an associate professor emeritus of English at DePaul University.

Occasionally a book’s subtitle all but defies the reader not to have a look, and in this case the photos of the author on the front cover alone are pretty striking, given Steward's resemblance to John Waters. The contents of the volume are selected from the column Steward wrote, under the Philip Sparrow byline, for the Illinois Dental Journal between 1944 and 1949.

So the publisher’s description says: I did not make it up, nor could I. Justin Spring’s biography of Steward from five years ago had been widely and well reviewed, but I had not heard about it, and so I suspected, for a moment, that Philip Sparrow Tells All was a prank, either by the University of Chicago Press or on it. The essays of a tattoo artist recovered from 70-year-old issues of the Illinois Dental Journal? Come on.

Exercising due diligence, I learned just enough to confirm that the author actually existed -- then decided to stop reading more about him. First, I wanted to read some of the essays themselves. The world is full of colorful characters who try to write, but eccentricity and adventurousness are not, in practice, qualifications for authorship. (To their credit they sometimes recognize this and offer to tell a writer their stories, in exchange for a share of the advance.) So I skipped the book’s introductory matter and the headnotes the editor had prepared for each piece and went right to Steward’s own prose.

The first selection, his inaugural column, was indeed written with the publication’s audience in mind: “The Victim’s Viewpoint: On Sublimated Sadism; or, the Dentist as Iago.” The tone or manner is approximately that of Robert Benchley:

“We have opened our mouth widely for many of these torturers, from Maine to Montana, and we are ready to swear that on more than one occasion -- as we have been approached, lying there helpless and trembling -- we have seen a diabolic gleam in their eyes as they reached for their tools. There is one certain prober, doubtless invented by Beelzebub, which they use when they begin their preliminary surveying. It is shaped vaguely like a sophisticated corkscrew, and is evidently intended to search out the secret places of one’s heart; we personally have felt it go even lower, and are sure it once left a scar on our right kidney. … but let us draw a curtain over this painful scene; even in thinking of it we have worked ourselves into a cold dribble.”

Something like this essay probably appeared in every college humor magazine in the country at least once per issue for a decade on either side of its January 1944 publication date. It seems well-enough done for what it is; the best that might be said for it is that the author cleared his throat.

An abrupt shift in topic and style comes with the following piece, “On Cryptography,” published that October -- a sprightly introduction to a matter of great wartime interest. The title sounds like an allusion to the essays of Montaigne, and where the Iago reference in his debut seemed arch and contrived, here Steward’s use of classical and contemporary references (e.g., calling Suetonius “the Walter Winchell of ancient Rome”) proves both humorous and apropos. The next month’s column “On Alcoholics Anonymous” -- explaining the principles of an organization just beginning to catch the public’s attention -- comes about as close to memoir as possible without violating the distance implied by the authorial “we.”

It’s a remarkable progression in just three essays, and it doesn’t end there. With the measure of safety provided by a pseudonym -- and also by the less-than-mass circulation of the Illinois Dental Journal -- Steward experimented with the comic, personal and confessional modes of the casual essay in ways that might have been difficult to risk otherwise.

After sampling enough of the book to determine that the columns were of interest in their own right, rather than as the supplement to the author’s biography, I started reading Jeremy Mulderig’s introductory material. It clarifies a great deal, beginning with the essayist’s venue: Steward was attracted to his dentist, who happened to be the magazine’s editor. Its more typical fare was articles with titles like “Important Considerations in Porcelain Veneer Restoration,” but a column written from a nonprofessional vantage point seemed worthwhile, if only for variety. The dentist accepted Steward’s offer to write for the journal, though not, it seems, his other propositions.

After writing several pieces for “The Victim’s Viewpoint” (the column’s title for most of 1944), Steward decided to reboot it as something more wide-ranging. Which explains the nine-month gap between the first and second selections in Philip Sparrow Tells All, and the marked change in the writing itself. Including just one piece from the column’s beta version seems like a wise choice on Mulderig’s part. The wit and whimsy of dentistry as seen from the patient’s-eye view must have been stretched pretty thin after a couple of months.

Many of the columns take on a more humorous quality when you know that the author had a sex life active enough to impress Alfred Kinsey. And no doubt that will be a selling point for the book. But the tension between overt statement and implicit meaning can have effects other than amusement, and in the case of one essay, that tension seems especially powerful.

Published in February 1945, it anticipates the difficulties ahead as American society tries to reabsorb returning servicemen (and vice versa). Here is one passage:

“Only those who have been shot at can love and understand each other. We at home can never comprehend the powerful fraternalism that unites the men who belong, by reason of their experiences, to the ghostly brotherhood of war. When death is behind a bush that trembles, when it explodes in burning phosphorous to kill the friend who was joking a moment before, when it surrounds you with bodies black with flies and bloated by the sun until they at last explode, when your foot slides upon the stinking decayed intestines of a thing that was once a man -- only then, after the bony fingers have inscribed the membership card with your name, and you have looked into the fearful emptiness of the sockets in a fleshless skull, are your dues paid and you yourself a member of the League of War. … They have their own code of morals which we cannot possibly understand, and which will baffle and dismay us utterly. They will be startled and chagrined by what they will consider our indifference, but is really only our own inexperience slowly woven around us in our geographically and emotionally isolated chrysalis.”

Meaningful enough as these lines are on the most manifest level, they take on even more significance in the light of Allan Bérubé’s Coming Out Under Fire: The History of Gay Men and Women in World War Two (Free Press, 1990). Bérubé showed how important the experience of the war was to the formation of a sense of gay identity and community in the United States.

Steward himself was a Naval enlistee but did not see combat. There is an ambivalence, intimacy, pain and sadness to the essay that can be felt by a reader who knows nothing about the author. But it seems clear that the traumatized fighting men he depicts weren’t sociological abstractions but friends and lovers.

It bears reiterating that the name under which he published the essay, Philip Sparrow, was the one he later used as a tattoo artist -- and the one he preferred to go by for some while after being expelled from the groves of academe. It was the identity he assumed at the limits of permissible expression. “Man is least himself when he talks in his own person,” wrote Oscar Wilde. “Give him a mask, and he will tell you the truth.”

Commentary on Heidegger's Black Notebooks

Every few years, somebody notices that Martin Heidegger was a Nazi -- and it all starts up again: the polemics, the professions of shock, the critiques of his philosophy’s insidious role in the humanities. At times the denunciations have a rather generic quality, as if a search-and-replace macro had been used to repurpose a diatribe against John Dewey or Jacques Derrida. Calls for a boycott of Heidegger’s writings are issued by people who cannot name two of them.

The Heidegger bashers tend to be the loudest, but there are counterdemonstrators. Besides the occasional halfhearted search for mitigating circumstances (the Weimar Republic did not make for clear thinking, after all, and the man’s thought was normally pitched at stratospheric levels of abstraction rather than the grubby domain of party politics) there is the sport of itemizing the anti-Heideggerians’ lapses in scholarship. Every line of argument on either side of the dispute was established during the controversy provoked by Victor Farias’s Heidegger and Nazism (1987), yet l’affaire Heidegger has been recycled on at least three or four occasions since then. It’s as if the shock of the scandal was so great that it induced amnesia each time.

The most recent episode (Heidegger Scandal 5.0?) followed the publication in Germany, last year, of the first batch of the philosophical and political musings that Heidegger began recording in notebooks from 1931 onward. An English translation is forthcoming, so count on the outrage to renew in about six months. In the meantime, let me recommend a sharp and succinct overview of the Heidegger matter that may be of interest to anyone who hasn’t caught the earlier reruns. It appeared in the interdisciplinary journal Science & Society under the title “Notes on Philosophy in Nazi Germany.” The author, V. J. McGill, was for many years a professor of philosophy and psychology at Hunter College. “In the midst of the disillusionment and insecurity of postwar Germany and emerging fascism,” he wrote:

“Heidegger saw in care (Sorge) and anxiety (Angst), the basic substance of man. Instead of offering a rational solution of some kind he devoted himself to fine-spun philological distinctions, to an analysis of the pivotal concept of ‘nothing’ and to a subtle exploration of ‘death’ of which he says that we experience it only in the mode of ‘beside’ -- that is, beside death. History, culture, freedom, progress are illusory. He finds our salvation in the recovery of a primordial sense of coexistence with other beings, that is, a bare feeling of togetherness, deprived of all the amenities and hopes which make social life worth while ….

“The hundreds who flocked to Heidegger's very popular lectures in Freiburg learned that anxiety is the final, irremedial substance of man, and left with such esoteric wisdom as their main reward. Heidegger's philosophy was not distasteful to the Nazis, and when he was made rector of the University of Freiburg, he gave an address praising the new life which animated German universities. In recent years a rift has occurred. But philosophers can fall out with the Nazis on other grounds than their ideas and doctrines.”

McGill’s article was published in 1940. Over the intervening three quarters of a century, additional details have emerged, including documentation that Heidegger was not just an ally of the Nazi Party but also a full member from 1933 to 1945. And interest in his work on the part of several generations of philosophers who never showed the slightest bent towards fascism has meant much debate over the validity of reducing Heidegger’s philosophical concepts to their political context. But for all the anger that simmers in McGill’s discussion of Heidegger as an academic lackey of the Third Reich, his account is matter-of-fact and nonsensationalist, and little of the recent commentary can be said to improve upon it.

The Black Notebooks, as Heidegger’s private ruminations are known, sound ghastly on a number of fronts. The volumes published so far cover the years 1931 through 1941. Those covering the rest of the war years are being edited, and Heidegger is reported to have continued keeping the notebooks until his death in 1976. Richard Polt, a professor of philosophy at Xavier University and a translator of Heidegger’s work, identifies 19 passages (out of about 1,200 pages) that attack Jews in terms that might as well have come from an editorial by Joseph Goebbels. After the war Heidegger claimed to have become disillusioned with the Nazis within a couple of years of joining the party -- and the notebooks show this to have been true, strictly speaking. But his objections were to the boorishness and careerism of men who didn’t share his lofty understanding of Hitler’s ideology.

As with the anti-Semitism, this does not come as a revelation, exactly. His reference to “the inner truth and greatness of National Socialism” in a lecture from 1935 remained in the text when he published it in 1953. Beyond defiant, it was a gesture indicating a certain megalomania: Heidegger hadn’t betrayed the Führer’s vision, the Nazis had!

But as David Farrell Krell, a professor emeritus of philosophy at DePaul University, suggests in a recent issue of Research in Phenomenology, the Black Notebooks reveal not just disappointment with the regime (combined with perfect callousness towards its brutality) but levels of rage, bile and despair that keep him from thinking. Heidegger cannot challenge himself, only repeat himself. “From day to day and day after day,” Krell says, Heidegger “entirely forgets that he has written the same things over and over again with the identical amount of dudgeon.”

Heidegger loathed Freud and psychoanalysis, which only makes it tempting to subject him to a little armchair diagnostics. But Krell's point, if I understand him correctly, is that the repetitiveness is more than symptomatic; the Black Notebooks document Heidegger not as a philosopher seduced by totalitarian politics, but as someone who has quit blazing a pathway of thought and instead become trapped in a maze of his own fabrication. Unfortunately, he is not the only one so trapped:

“At least part of the allure of the ongoing Heidegger scandal,” writes Krell in a passage that lights up the sky, “is that it distracts us from our own appalling national stupidities and our galling national avarice -- our own little darkenings, if you will. It is so much easier to fight battles that have already been decided and so lovely to feel oneself securely moored in the harbor of god’s own country. Not the last god but the good old reliable one, who blesses every stupidity and earns interest on every act of avarice. … The irony is that Heidegger’s Notebooks themselves reflect this dire mood. Perhaps by condemning him and them, we hope to gain a bit of space for ourselves, some impossible space for ourselves? That is the shadow these Notebooks cast over those who are so anxious to condemn. And that would be the Notebooks’ most terrible victory: it is not that the last laugh laughs best, for there is no joy and no laughter in them, but that their helpless rage recurs precisely in those who rail against them.”

Remember that next spring, when the controversy starts up once more.

Former marine biologist, in book, says science needs help from Hollywood

Author discusses new book about narrative and what Hollywood has to offer science.

Review of Finn Brunton and Helen Nissenbaum, 'Obfuscation: A User's Guide for Privacy and Protest'

When downloading an app or approving a software update, I now usually hesitate for a moment to consider something the comedian John Oliver said early this summer: a software company could include the entire text of Mein Kampf in the user agreement and people would still click the “agree” button.

“Hesitate” is the wrong word for something that happens in a fraction of a second. It’s not as if I ever scrolled back through to make sure that, say, Microsoft is not declaring that it owns the copyright to everything written in OneNote or Word. The fine print goes on for miles, and anyway, a user agreement is typically an all-or-nothing proposition. Clicking “agree” is less a matter of trust than of resignation.

But then, that’s true about far more of life in the contemporary digital surround than most of us would ever want to consider. Every time you buy something online, place a cell phone call, send or receive a text message or email, or use a search engine (to make the list no longer nor more embarrassing than that), it is with a likelihood, verging on certainty, that the activity has been logged somewhere -- with varying degrees of detail and in ways that might render the information traceable directly back to you. The motives for gathering this data are diverse; so are the companies and agencies making use of it. An online bookseller tracks sales of The Anarchist Cookbook in order to remind customers that they might also want a copy of The Minimanual of the Urban Guerrilla, while the National Security Agency will presumably track the purchase with an eye to making correlations of a different sort.

At some level we all know such things are happening, probably without thinking about it any more often than strictly necessary. Harder to grasp is the sheer quantity and variety of the data we generate throughout the day -- much of it trivial, but providing, in aggregate, an unusually detailed map of what we do, who we know and what’s on our minds. Some sites and applications have “privacy settings,” of course, which affect the totality of the digital environment about as much as a thermostat does the weather.

To be a full-fledged participant in 21st-century society means existing perpetually in a state of information asymmetry, in the sense described by Finn Brunton and Helen Nissenbaum in Obfuscation: A User’s Guide for Privacy and Protest (MIT Press). You don’t have to like it, but you do have to live with it. The authors (who teach media culture and communications at New York University, where they are assistant professor and professor, respectively) use the term “obfuscation” to identify various means of leveling the playing field, but first it’s necessary to get a handle on information asymmetry itself.

For one thing, it is distinct from the economic concept of asymmetrical information. The latter applies to “a situation in which one party in a transaction has more or superior information compared to another.” (So I find it explained on a number of websites ranging from the scholarly to the very sketchy indeed.) The informed party has an advantage, however temporary; the best the uninformed can do is to end up poorer but wiser.

By contrast, what Brunton and Nissenbaum call information asymmetry is something much more entrenched, persistent and particular to life in the era of Big Data. It occurs, they explain, “when data about us are collected in circumstances we may not understand, for purposes we may not understand, and are used in ways we may not understand.” It has an economic aspect, but the implications of information asymmetry are much broader.

“Our data will be shared, bought, sold, analyzed and applied, all of which will have consequences for our lives,” the authors write. “Will you get a loan, or an apartment, for which you applied? How much of an insurance risk or a credit risk are you? What guides the advertising you receive? How do so many companies and services know that you’re pregnant, or struggling with an addiction, or planning to change jobs? Why do different cohorts, different populations and different neighborhoods receive different allocations of resources? Are you going to be, as the sinister phrase of our current moment of data-driven antiterrorism has it, ‘on a list’?”

Furthermore (and here Brunton and Nissenbaum’s calm, sober manner can just barely keep things from looking like one of Philip K. Dick’s dystopian novels), we have no way to anticipate the possible future uses of the galaxies of personal data accumulating by the terabyte per millisecond. The recent series Mr. Robot imagined a hacker revolution in which all the information related to personal debt was encrypted so thoroughly that no creditor would ever have access to it again. Short of that happening, obfuscation may be the most practical response to an asymmetry that’s only bound to deepen with time.

A more appealing word for it will probably catch on at some point, but for now “obfuscation” names a range of techniques and principles created to make personal data harder to collect, less revealing and more difficult to analyze. The crudest forms involve deception -- providing false information when signing up with a social media site, for example. A more involved and prank-like approach would be to generate a flood of “personal” information, some of it true and some of it expressing one’s sense of humor, as with the guy who loaded up his Facebook profile with so many jobs, marriages, relocations, interests and so on that the interest-targeting algorithms must have had nervous breakdowns.

There are programs that will click through on every advertisement that appears as you browse a site (without, of course, bothering you with the details) or enter search engine terms on topics that you have no interest in, thereby clouding your real searches in a fog of irrelevancies.
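
To make the mechanism concrete, here is a minimal sketch of the kind of noise-generating routine the authors describe -- not any actual tool, and with an invented decoy list and a placeholder search URL -- in which one genuine query is buried among several irrelevant ones issued at irregular intervals. The aim, as the book stresses, is not to hide the real query outright but to make the resulting logs costlier and less trustworthy to analyze.

    import random
    import time
    import urllib.parse

    # Hypothetical decoy topics; a real obfuscation tool would draw on a much
    # larger, regularly refreshed pool so the noise resembles ordinary browsing.
    DECOY_QUERIES = [
        "weather in reykjavik",
        "how to fix a leaky faucet",
        "history of the ottoman empire",
        "slow cooker chili recipe",
        "used pickup trucks under 10000",
    ]

    def obfuscated_search(real_query, decoy_count=4, dry_run=True):
        """Bury one genuine query among several decoys, in random order."""
        batch = random.sample(DECOY_QUERIES, k=min(decoy_count, len(DECOY_QUERIES)))
        batch.append(real_query)
        random.shuffle(batch)
        for query in batch:
            # Placeholder endpoint; a working version would target a real search engine.
            url = "https://search.example.com/?" + urllib.parse.urlencode({"q": query})
            if dry_run:
                print("would request:", url)
            time.sleep(random.uniform(0.5, 2.0))  # irregular timing, so decoys are harder to filter out

    obfuscated_search("obfuscation brunton nissenbaum")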

The cumulative effect would be to pollute the data enough to make tracking and scrutiny more difficult, if not impossible. Obfuscation raises a host of ethical and political issues (in fact the authors devote most of their book to encouraging potential obfuscators to think about them) as well as any number of questions about how effective the strategy might be. We’ll come back to this stimulating and possibly disruptive little volume in weeks to come, since the issues it engages appear in other new and recent titles. In the meantime, here is a link to an earlier column on a book by one of the co-authors that still strikes me as very interesting and, alas, all too pertinent.

Vice chancellor discusses push for change at South Africa's University of the Witwatersrand

Adam Habib, vice chancellor of the University of the Witwatersrand, discusses how he plans to recruit more black academics without resorting to a freeze on the hiring of white academics.

Review of Jacques Le Goff, 'Must We Divide History Into Periods?'

George Orwell opened one of his broadcasts on the BBC in the early 1940s by recounting how he’d learned history in his school days. The past, as his teachers depicted it, was “a sort of long scroll with thick black lines ruled across it at intervals,” he said. “Each of these lines marked the end of what was called a ‘period,’ and you were given to understand that what came afterwards was completely different from what had gone before.”

The thick black lines were like borders between countries that didn’t know one another’s languages. “For instance,” he explained, “in 1499 you were still in the Middle Ages, with knights in plate armour riding at one another with long lances, and then suddenly the clock struck 1500, and you were in something called the Renaissance, and everyone wore ruffs and doublets and was busy robbing treasure ships on the Spanish Main. There was another very thick black line drawn at the year 1700. After that it was the Eighteenth Century, and people suddenly stopped being Cavaliers and Roundheads and became extraordinarily elegant gentlemen in knee breeches and three-cornered hats … The whole of history was like that in my mind -- a series of completely different periods changing abruptly at the end of a century, or at any rate at some sharply defined date.”

His complaint was that chopping up history and simplistically labeling the pieces was a terrible way to teach the subject. It is a sentiment one can share now only up to a point. Orwell had been an average student; today it would be the mark of a successful American school district if the average student knew that the Renaissance came after the Middle Ages, much less that it started around 1500. (A thick black line separates his day and ours, drawn at 1950, when television sales started to boom.)

Besides, the disadvantages of possessing a schematic or clichéd notion of history are small by contrast to the pleasure that may come later, from learning that the past was richer (and the borders between periods more porous) than the scroll made it appear.

Must We Divide History Into Periods? asked Jacques Le Goff in the title of his last book, published in France shortly before his death in April 2014 and translated by M. B. DeBevoise for the European Perspectives series from Columbia University Press. A director of studies at L'École des Hautes Études en Sciences Sociales in Paris, Le Goff was a prolific and influential historian with a particular interest in medieval European cities. He belonged to the Annales school of historians, which focused on social, economic and political developments over very long spans of time -- although his work also exhibits a close interest in medieval art, literature and philosophy (where changes were slow by modern standards, but faster than those in, say, agricultural technique).

Le Goff’s final book revisits ideas from his earlier work, but in a manner of relaxed erudition clearly meant to address people whose sense of the past is roughly that of young Orwell. And in fact it is that heavy mark on the scroll at the year 1500 -- the break between the Middle Ages and the Renaissance -- that Le Goff especially wants the student to reconsider. (I say “student” rather than “reader” because time with the book feels like sitting in a lecture hall with a memorably gifted teacher.)

He quotes one recent historian who draws the line a little earlier, with the voyage of Christopher Columbus in 1492: “The Middle Ages ended, the modern age dawned, and the world became suddenly larger.” Le Goff is not interested in the date but in the stark contrast that is always implied. Usually the Middle Ages are figured as “a period of obscurity whose outstanding characteristic was ignorance” -- happily dispelled by a new appreciation for ancient, secular literature and a sudden flourishing of artistic genius.

Calling something “medieval” is never a compliment; the image that comes to mind is probably that of a witch trial. By contrast, “Renaissance” would more typically evoke a page from Leonardo da Vinci’s notebooks. Such invidious comparison is not hard to challenge. Witch trials were rare in the Middle Ages, while the Malleus Maleficarum appeared in “the late fifteenth century,” Le Goff notes, “when the Renaissance was already well underway, according to its advocates.”

Given his expertise on the medieval city -- with its unique institutional innovation, the university -- Le Goff makes quick work of demolishing the notion of the Middle Ages having a perpetually bovine and stagnant cultural life. The status of the artist as someone “inspired by the desire to create something beautiful” who had “devoted his life to doing just this” in pursuit of “something more than a trade, nearer to a destiny,” is recognized by the 13th century. And a passage from John of Salisbury describes the upheaval underway in the 12th century, under the influence of Aristotle:

“Novelty was introduced everywhere, with innovations in grammar, changes in dialectic, rhetoric declared irrelevant and the rules of previous teachers expelled from the very sanctuary of philosophy to make way for the promulgation of new systems …”

I can’t say that the name meant anything to me before now, but the entry on John of Salisbury in the Stanford Encyclopedia of Philosophy makes it sound as if Metalogicon (the work just quoted) was the original higher ed polemic. It was “ostensibly written as a defense of the study of logic, grammar and rhetoric against the charges of the pseudonymous Cornificius and his followers. There was probably not a single person named Cornificius; more likely John was personifying common arguments against the value of a liberal education. The Cornificians believe that training is not relevant because rhetorical ability and intellectual acumen are natural gifts, and the study of language and logic does not help one to understand the world. These people want to seem rather than to be wise. Above all, they want to parlay the appearance of wisdom into lucrative careers. John puts up a vigorous defense of the need for a solid training in the liberal arts in order to actually become wise and succeed in the ‘real world.’”

That's something an Italian humanist might write four hundred years later to champion “the new learning” of the day. And that is no coincidence. Le Goff contends that “a number of renaissances, more or less extensive, more or less triumphant,” took place throughout the medieval era -- an elaboration of the argument by the American historian Charles H. Haskins in The Renaissance of the Twelfth Century (1927), a book that influenced scholars without, as Le Goff notes, having much effect on the larger public. The Renaissance, in short, was a renaissance -- one of many -- and in Le Goff’s judgment “the last subperiod of a long Middle Ages.”

So, no bold, clean stroke of the black Magic Marker; more of a watercolor smear, with more than one color in the mix. Le Goff treats the Middle Ages as having a degree of objective reality, insofar as certain social, economic, religious and political arrangements emerged and developed in Europe over roughly a thousand years.

At the same time, he reminds us that the practice of breaking up history into periods has its own history -- deriving, in its European varieties, from Judeo-Christian ideas, and laden with ideas of decline or regeneration. “Not only is the image of a historical period liable to vary over time,” he writes, “it always represents a judgment of value with regard to sequences of events that are grouped together in one way rather than another.”

I'm not entirely clear how, or if, he reconciled the claim to assess periodization on strictly social-scientific grounds with its status as a concept with roots in the religious imagination, but it's a good book that leaves the reader with interesting questions.

Essay calls for a new approach to college textbooks

The Book Industry Study Group just reported that 52 percent of college students surveyed agreed that “I would rather pay $100 for a learning solution that improves my result by one letter grade and reduces my study time by 25 percent than $50 for my current textbook.” As a professor, I am troubled by declines in the effort many in my classes are willing to put into doing the reading I assign. But as an administrator, I also recognize students’ concerns with scoring high grades, juggling internships and part-time jobs, and minimizing expenses.

Multiple factors are at play here: grade inflation, social pressures, student debt, the iffy job market. Further relevant is the time students report studying each week (now an average of 15 hours, down from about 24 in the 1960s). Yet one of the major culprits is the price tag on textbooks and other course materials, estimated at around $1,200 a year -- assuming you buy them.

Faculty members and students alike are in a quandary over how to handle textbook costs, especially for those hefty tomes often used in introductory courses. Increasingly, students are opting not to purchase these books -- not even to rent them. Digital formats (and rentals of any kind) tend to be less expensive than buying print, though frequently the decision is not to acquire the materials at all. The U.S. Public Interest Research Group reports that two-thirds of students have refrained from purchasing at least one assigned textbook because of price.

Recently, American University ran focus groups with our undergraduates, looking to get a sense of how they make textbook decisions. For courses in their major, they are willing to lay out more money than for general education classes, which they perceive (often wrongly) not to require much work anyway. Over all, the common sentiment is that spending more than about $50 for a book is excessive. And of course there are plenty of college textbooks with prices that exceed $50.

This message was reinforced by an anecdote shared with me by Michael Rosenwald, a reporter for The Washington Post. While interviewing American University students for a story on college reading and book-purchasing habits, Rosenwald asked, “Who buys course materials from the campus store these days?” Their answer: “Freshmen,” revealing that once students settle into campus life, they discover less expensive ways to get their books -- or devise strategies on how much reading they'll actually do.

For faculty members, the challenge is to find a workable balance between the amount of reading we would like those in our classes to complete and realistic expectations for student follow-through. While some full-length books may remain on our required list, their numbers have shrunk over time. These days, assignments that used to call for complete books are being slimmed down to single chapters or articles. Our aspirations for our students to encounter and absorb substantial amounts of written material increasingly rub up against their notions of how much is worth reading.

The numbers tell the tale. That same Book Industry Study Group report noted that between 2010 and 2013, the percentage of students indicating that classes they were taking required “no formal course materials” rose from 4 percent to 11 percent.

Student complaints are equally revealing. When Robert Putnam’s Bowling Alone came out, I assigned the book to a group of honors undergraduates, eager for them to experience careful, hypothesis-driven, data-rich social science research. One member of the class balked. In fact, she publicly berated me, demanding to know why I hadn’t told the group about the “short version” of the book -- meaning an article Putnam had written years earlier, before his full study was completed. She went on to inform the class what she had learned from a teacher in high school: books aren’t worth reading, only articles. The rest of what’s in books is just padding.

The author and teacher in me cringed at how this young woman perceived the intellectual enterprise.

For students, besides the understandable limitations on time and finances, there is the question of value proposition. If the objective is learning that lasts, maybe buying the book (and reading it) is worth it. But if the goal is getting a better grade, maybe not. All too often today, it is the grade that triumphs.

One player that faculty members generally leave out of the equation is the publishing industry, including not just the companies whose names are on the spines but the people who print the books, supply the paper and ink, and operate the presses. Recently I spoke at the Book Manufacturers’ Institute Conference and was troubled by the disconnect I perceived between those who produce and distribute textbooks and those who consume them. As students buy fewer books, publishers do smaller print runs, resulting in higher prices, which in turn reinforces the spiral of lower sales.

A potential compensatory financial strategy for publishers is issuing revised editions, intended to render obsolete those already in circulation. In reality, students often take a pass on these new offerings, waiting until they appear on the used book market. Yes, sometimes there is fresh, timely material in the new versions, but how often do we really need to update textbooks on the structure of English grammar or the history of early America?

When speaking with participants in the book manufacturers’ conference, I became increasingly convinced that the current model of book creation, distribution and use is not sustainable. What to do?

There is a pressing need for meaningful collaboration between faculty members and the publishing industry to find ways of producing materials designed to foster learning that reaches beyond the test -- and that students can be reasonably expected to procure and use. I would like to hope that textbook publishers (who I know are financially suffering) are in conversation not just with authors seeking book contracts but with faculty members who can share their own assignment practices, along with personal experiences about how students are voting with their feet regarding purchasing and reading decisions.

To help foster such dialogue, here are some suggestions:

  • Gather data on shifts in the amount and nature of reading that faculty assign, say, over the past 10-20 years.
  • Reconsider publishing strategies regarding those handsome, expensive, color-picture-laden texts, whose purpose is apparently to entice students to read them. If students aren’t willing to shell out the money, the book likely isn’t being read. Focus instead on producing meaningful material written with clear, engaging prose.
  • Rethink when a new edition is really warranted and when not. In many instances, issuing a smaller update, to be used as a supplement to the existing text, is really all that’s needed. (Think of those encyclopedia annuals with which many of us are familiar.) Students -- and far more of them -- will be more willing to pay $9.95 for an update to an older book than $109.95 for a new one. McDonald’s learned long ago that you can turn a handsome profit through high volume on low-cost items. The publishing industry needs to do the math; a toy version of that math appears after this list.
  • Make faculty members aware of the realities of both textbook prices (some professors never look before placing book orders) and student reading patterns. I heartily recommend hanging out in the student union (or equivalent) and eavesdropping. You will be amazed at how cunning -- and how honest -- students are about their study practices.
  • Encourage professors to assign readings (especially ones students are asked to pay for) that maximize long-term educational value.
  • Educate students about the difference between gaming the assignment system (either for grades or cost savings) and learning.
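
As a back-of-the-envelope illustration of the math in question, the comparison is simply revenue per cohort under the two pricing models. Every figure below is a hypothetical placeholder, standing in for the adoption rates and list prices a publisher would actually know:

    # Toy break-even calculation for the low-price, high-volume idea.
    # All numbers are hypothetical, not publishing-industry data.
    new_edition_price = 109.95
    update_price = 9.95

    cohort_size = 500          # students assigned the book in a given year
    update_buy_rate = 0.90     # assume a $9.95 supplement gets near-universal uptake

    update_revenue = cohort_size * update_buy_rate * update_price

    # How many students would have to buy the full-price new edition
    # (rather than going used, renting, or skipping it) to match that revenue?
    break_even_buyers = update_revenue / new_edition_price

    print(f"Cheap-update revenue per cohort: ${update_revenue:,.2f}")
    print(f"New-edition copies needed to match it: {break_even_buyers:.0f} of {cohort_size}")

On those made-up numbers, the cheap update outearns the new edition whenever fewer than about 8 percent of the cohort buys the revision at full price -- precisely the kind of question actual sales data would have to settle.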

The results can yield a win-win situation for both the publishing industry and higher education.

Naomi S. Baron is executive director of the Center for Teaching, Research, and Learning at American University and author of Words Onscreen: The Fate of Reading in a Digital World.

Authors discuss reasoning behind high levels of Asian American achievement

Authors discuss new book on high levels of Asian-American achievement in education in which they argue that it's not about culture.

