Fame can be fickle and destiny, perverse -- but what are we to call how posterity has treated Stefan Zweig? In the period between the world wars, he was among the best-known authors in the world and, by some reckonings, the most translated. Jewish by birth and Austrian by nationality, Zweig was perhaps most of all Viennese by sensibility. His fiction expressed the refined cynicism about sexual mores that readers associated with Sigmund Freud and Arthur Schnitzler, and he played the role of Vienna coffeehouse man of letters to perfection.
Zweig’s cosmopolitanism was of the essence: his biographical and critical studies of European literary and historical figures spanned the Renaissance through the early 20th century and even roamed far enough abroad to include Mary Baker Eddy, the very American if otherwise sui generis founder of Christian Science. (The portrait of her in his book Mental Healers is etched in acid, but it occupies a surprisingly appropriate spot: between the accounts of Franz Mesmer and Freud.)
His books fueled the Nazi bonfires. Even apart from the regime’s racial obsessions, Zweig was precisely the sort of author to drive Hitler and Goebbels into paroxysms, and after years of exile he committed suicide in Brazil in 1942. His reputation did not survive the war, either, at least not among English-language readers. After four decades or so of relatively far-flung reading, I think of him as one of those authors who seem only ever to show up in parentheses and footnotes, or to be pointed out as a biographer too prone to psychologizing or melodrama. Being barely remembered trumps being totally forgotten (it’s more than most of us will be, anyway), but Zweig hardly seemed like a figure poised for rediscovery when, not too long ago, the comeback began.
The essays and speeches collected in Messages From a Lost World: Europe on the Brink (Pushkin Press) form a supplement to the volume that launched the revival -- Zweig’s memoir, The World of Yesterday, which Pushkin Press issued in a new translation in 2009 and which the University of Nebraska Press has published in the United States. (Finished just before the author’s suicide, the book first appeared in English in 1943. That earlier translation can also be had, in ebook format.) The recent NYRB Classics editions of his fiction have had a lot to do with its being more than a one-book revival, but e-publishing and print-on-demand operations account for nearly everything by Zweig in English now being available. A mixed blessing, given that some such “publishers” do little but sell you copies of material available for free from the Internet Archive or Project Gutenberg.
The World of Yesterday is less autobiography than a self-portrait of the European literati during the final years of the belle epoque -- the four decades of relative peace and prosperity on the continent that ended with World War I. The new communication technologies and modes of high-speed transport were shrinking the world, while the spread of education, scientific progress and humane cultural values would presumably continue. The earliest pieces in Messages From a Lost World contain Zweig’s musings on the spiritual impact of the war, written while it was still in progress and with no end in sight. They are the thoughts of a man trying to find his way out of what must have seemed a completely reasonable state of despair:
“Never since it came into being has the whole world been so communally seized by nervous energy. Until now a war was only an isolated flare-up in the immense organism that is humanity, a suppurating limb which could be cauterized and thus healed, whilst all the remaining limbs were free to perform their normal functions without the least hindrance …. But due to its steady conquest of the globe, humanity forged ever-closer links, so today a fever quivers within its whole organism; horrors easily traverse the entire cosmos. There is not a workshop, not an isolated farm, not a hamlet deep in a forest from which they have not torn a man so that he might launch himself into the fray, and each of these beings is intimately connected to others by myriad threads of feeling; even the most insignificant among them has breathed so much of the feverish heat, his sudden disappearance makes those that remain that much colder, more alone and empty.”
In pieces from the 1920s and early ’30s, Zweig takes it as a moral imperative to champion the cause of peace by reminding his readers and listeners that humanity could no longer afford the sort of belligerent nationalism that had led them into the Great War. Respect for the possibilities of human development should replace claims to military greatness:
“If men lived [in earlier eras] as if in the folds of a mountain, their sight limited by the peaks on either side, we today see as if from a summit all worldly happenings in the same moment, in their exact dimensions and proportions. And because we have this commanding view across the surface of the earth, we must now usher in new standards. It’s no longer a case of which country must be placed ahead of another at their expense, but how to accomplish universal movement, progress, civilization. The history of tomorrow must be a history of all humanity and the conflicts between individual countries must be seen as redundant alongside the common good of the community.”
If the world could be changed by elegantly expressed humanist sentiments, this passage, from a speech delivered in 1932, might have altered the course of history. But the way it tempted fate now looks even more bitterly ironic than it did after Hitler took office a few months later. For in spite of his lofty vantage point (“we today see as if from a summit all worldly happenings in the same moment, in their exact dimensions and proportions”) and a depth of historical vision giving him insight into Magellan, Casanova, Napoleon, Goethe, Nietzsche and Marie Antoinette (among others), Zweig was struck dumb by the course of events after 1933.
Not literally, of course; on the contrary, later selections in Messages From a Lost World show that he could turn on the spigots of eloquence all too easily. (The tendency of Zweig's prose to turn purple and kitschy has been mocked at length by the translator Michael Hofmann.) But he remained notoriously -- and to a great many people, unforgivably -- averse to speaking out against what was happening in Germany. He did say that, considering the company it put him in, having his book torched by the Nazis was an honor. Yet as a statement of solidarity against a regime that would, in time, burn people, that seems decidedly wanting.
Uncharitable people have accused Zweig of cowardice, while Hannah Arendt’s essay on him, found in Reflections on Literature and Culture (Stanford University Press, 2007), treats Zweig as an example of the cultural mandarin so determined to avoid the grubby realities of politics that he disgraced his own idealism. Whether or not that implies a lack of courage is a difficult question.
But surely it isn’t being excessively generous to wonder if Zweig’s failure was one of imagination, rather than of nerve: the inability, first of all, to grasp that Hitler was more than just another nationalist demagogue, followed by a paralysis at seeing the mad dream of the Thousand-Year Reich tearing into reality across most of Europe, with no plan to stop there. Against the horrible future, all he had left was nostalgia -- with a memory of what security and boundless optimism had been like, once.
A celebrity has been defined as somebody who is well-known for being well-known. And by that measure, Gary A. Olson’s Stanley Fish, America's Enfant Terrible: The Authorized Biography (Southern Illinois University Press) is a celebrity biography of sorts. At no point does the subject go into rehab, but other than that, the book hits most of the tabloid marks: humble origins, plucky self-fashioning, innovative and controversial work, exorbitant earnings (for a Milton scholar), and illicit romance (leading to marital bliss). Much of this was once English department gossip, but now posterity is the richer for it.
Olson, president of Daemen College in Amherst, N.Y., even reveals a brush with Hollywood. When plans were underway to film one of David Lodge’s novels about academe, the producer wanted to cast Walter Matthau as Morris Zapp, a character clearly based on Fish. Matthau wasn't interested, and the movie languished in development hell, but in Olson’s words, “Stanley let it be known that he would love the opportunity to play the role himself.” (Another trait of the celeb bio: subject called by first name.)
If Fish wrote a book of career advice, a suitable title might be The Art of the Deal. Describing his career in the late 1970s and early ’80s, Olson writes:
“Stanley’s annual schedule was unimaginably grueling to many faculty. He would jet from university to university, giving workshops, papers and presentations, both in the United States and abroad. He was in great demand and he knew it, so he pushed the boundaries of what universities would pay to bring in a humanities professor to speak. Whatever he was offered by a university, he would demand more -- and he usually got it. The host who invited him would scramble to meet his fee, asking various campus departments and entities to contribute to the event until the requisite fee had been collected. Throughout his career he must have visited almost every notable university in every state of the union.”
Then again, a book of career advice from Stanley Fish would be virtually useless to anyone else today, like a tourist guide to a city that's been hit by an earthquake. This fall will be the 50th anniversary of Fish receiving tenure at the University of California at Berkeley. He was 28 years old and had been teaching for four years. Berkeley was his first position; he turned down two previous offers before accepting it. Given the circumstances, Stanley Fish, America’s Enfant Terrible will pose a challenge to many readers that rarely comes up with a nonfiction book: that of suspending disbelief. The chapter I just quoted is called “Academic Utopia” -- and as with other utopias, you can’t really get there from here.
“From my point of view,” Olson reports Fish saying, “there are a lot of people out there making mistakes, and I’m just going to tell them they’re making mistakes.” As if to authenticate that statement, the title of Fish’s most recent book is Think Again: Contrarian Reflections on Life, Culture, Politics, Religion, Law, and Education (Princeton University Press) while his next, due this summer, is called Winning Arguments: What Works and Doesn't Work in Politics, the Bedroom, the Courtroom, and the Classroom (Harper).
Originally the “people out there” so targeted were his fellow scholars of 16th- and 17th-century English literature. The range of his corrective efforts grew to include leading figures in literary theory, followed by the legal profession and finally -- with his New York Times column, which ran from 1995 to 2013 -- the entire public sphere. Olson wrote about the theoretical and rhetorical force of Fish’s work in two earlier volumes. Here the author largely underplays Fish’s intellectual biography beyond a few references to professors who influenced him as a student. Instead, Olson emphasizes the tangible institutional power that Fish acquired and learned to wield on his way to becoming a university administrator of national prominence.
Starting in 1985, when he was chair of the English department at Johns Hopkins University, Fish made a series of unexpected and much-discussed moves -- first to Duke University, where he reshaped the respectable but inconspicuous English department into the high-theory powerhouse of the early 1990s, and later to the University of Illinois at Chicago, where he was dean of the College of Liberal Arts and Sciences. There, the biographer says, Fish’s “hiring of celebrity academics and the generating of positive publicity” were steps meant “to lift the miasma of an inferiority complex, to help faculty collectively feel that they were part of an enterprise that was known and admired by others.” (There were other appointments and stints and high professional honors along the way; the chronology at the back of the book is overstuffed with them.)
Like Morris Zapp, Stanley Fish has an appealing side, as Olson portrays him: he works hard (as a graduate student, he read Paradise Lost five times in one summer), he sounds like a good teacher and the qualities that read as brash and arrogant in some circumstances are probably gruff but lovable, like a Muppets character, in others. That said, letting decisions and values be determined by the need to feel “known and admired by others” is the very definition of what sociologists used to call “other-directedness.” More recently, and fittingly enough, it’s associated with the culture of celebrity.
No celebrity biography is complete without a catastrophe or two, mixed in with the triumphs -- and in the case of Fish’s role as a “developer” of intellectual and institutional real estate, there is not such a bright line between success and disaster. He enhanced the reputation of Duke’s humanities programs (and that of the university press), but most of the stars he hired relocated within a few years. And Fish’s efforts at UIC are remembered for running at a deficit that lasted beyond his deanship.
In an email note to Olson, I asked about various things that had caused a knot to form in my stomach while reading his book. We learn that Fish arranged a summer stipend of $58,000 for one of his cronies. Olson says Fish once responded to the challenge of an audience member to explain why anyone should believe him by saying, “Because I am Stanley Fish. I teach at Johns Hopkins University, and I make $75,000 a year.” (I recall hearing that one in the early 1980s; the equivalent now would be around $200,000.)
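The parenthetical conversion above is just a price-index scaling. A minimal sketch of the arithmetic, with illustrative (not official) CPI values standing in for the early-1980s and present-day indices:

```python
# Rough inflation adjustment behind "$75,000 then is around $200,000 now."
# The CPI figures below are assumed round numbers for illustration,
# not official Bureau of Labor Statistics data.
CPI_EARLY_1980S = 95   # assumed U.S. CPI-U level, early 1980s
CPI_NOW = 240          # assumed U.S. CPI-U level at time of writing

def adjust_for_inflation(amount, cpi_then, cpi_now):
    """Scale a dollar amount by the ratio of the two price indices."""
    return amount * cpi_now / cpi_then

# Under these assumed indices, $75,000 scales to roughly $190,000 --
# in the same ballpark as the column's "around $200,000."
print(round(adjust_for_inflation(75_000, CPI_EARLY_1980S, CPI_NOW)))
```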
I've benefited from reading Fish in the past, but I find it hard to imagine that anyone racking up student debt that will take decades to pay off will regard the figure Olson presents as anything but a monster. To be fair, the final pages of the book depict Fish in more recent times as a less grandiose figure, even as touched with regret or disappointment. (Being a septuagenarian enfant terrible seems like a pretty melancholy prospect.)
Anyway, Olson replied to my question as follows: “Well, some do think of him as a monster, although probably not for those reasons. I don’t argue in the book that he should be admired -- or reviled. That’s an individual choice. You have to remember: Fish is from an older generation of academics, when higher education was growing exponentially and exuberantly. Fish represents the rise of high theory. Despite his passion for teaching, he is known most as an intellectual and a scholar; that’s what drew the high pay and the high praise. You don’t hire someone like Fish to enhance your teaching as much as you do to bring a certain prestige to your institution. That’s why Duke, UIC and other institutions wanted him. And, generally, these proved to be good institutional decisions.”
Perhaps. It seems to be a question of short-term versus long-term benefit. But it’s hard to understand how giving large barrels of money to a few transient scholarly A-listers “to help faculty collectively feel that they were part of an enterprise that was known and admired by others” was ever a good idea -- much less a sustainable one. After Fish, the deluge?
Early last month I started reading Maria Konnikova’s The Confidence Game: Why We Fall for It … Every Time (Viking) but had to put the book aside under the pressure of other matters -- only to have the expression “con artist” hit the headlines in short order. And in the context of electoral politics, no less, with one Republican presidential candidate applying it to another. (No names! Otherwise people with Google News alerts for them will just flood into the comments section to rail and vent.)
My notes from a few weeks ago show no inkling that the book might be topical. Instead, I listed movies about the confidence game (the most theatrical of crimes) and dredged up recollections of David W. Maurer’s The Big Con: The Story of the Confidence Man, first published in 1940. A reviewer of one of Maurer’s later books said that while he was “retired as an active professor of English at the University of Louisville” as of 1972, he was “the foremost student of the argots and subcultures in various precincts of the Wild Side … whose easy authority and unerring judgment brought the language and culture of the American underworld to the attention of scholars.” The tribute still stands, although Maurer’s studies of the slang of drug addicts, prostitutes, forgers and pickpockets (many available in JSTOR) are now period pieces: the scholarly equivalent of vintage pulp fiction.
Parts of The Big Con have a Depression-era flavor, but it is the author’s masterpiece and, if not the last word on the subject, certainly the definitive one. Other, lesser criminal subcultures generate argots, but for Maurer the confidence men are the professional elite of the underworld, and their specialized lingo is the concentrated expression of a well-tested understanding of human psychology. Mastery of it distinguishes those qualified for “the long con” (sustained and intricate operations extracting large sums from victims) from, e.g., small-timers running the three-card monte scam on a street corner.
The long con, as depicted on screen, is a marvel to witness. David Mamet wrote and directed the two best movies in this area: House of Games (1987) and The Spanish Prisoner (1997). The latter film takes its name from the quintessential long con, still viable after centuries of use. The contrast between short and long cons is blended into the Oedipal shenanigans of The Grifters (1990), adapted from the novel by pulp-noir virtuoso Jim Thompson, who I suspect read Maurer’s book somewhere along the way. And in a considerably lighter vein there is Dirty Rotten Scoundrels (1988) with Michael Caine as a smooth long-con artist and Steve Martin as the short-con operator who becomes both his nemesis and protégé.
Looking up those dates, I’m struck by the absence from the list of any more recent entry. What we have instead, it seems, is a string of movies such as Boiler Room (2000), The Wolf of Wall Street (2013) and The Big Short (2015), which treat the world of high finance as a racket. The most recent film’s title sounds like a nod to Maurer, and his monograph does cover a classic long con called “the rag,” which involves a phony stockbroker’s office. (There, investors have the chance to get rich through insider trading, or so they think until the office vanishes, along with the suckers’ money.)
“If confidence men operate outside the law, it must be remembered that they are not much further outside than many pillars of our society who go under names less sinister,” Maurer said in a passage that Maria Konnikova, a New Yorker contributor, quotes in The Confidence Game. She moves quickly to give recent evidence for the point, such as the U.S. Department of Justice’s suit against USIS, which she describes as “the contractor that used to supply two-thirds of the security clearances for much of the [U.S.] intelligence community.” The DOJ found that “the company had faked well over half a million background checks between 2008 and 2012 -- or 40 percent of total background checks.” Corruption on that scale lies beyond the most ambitious con’s dreams of success.
Konnikova’s basic argument -- developed through a mixture of anecdotes and behavioral-science findings, after the manner associated with journalist and author Malcolm Gladwell -- is that both the grifter’s manipulative skills and the victim’s susceptibility are matters of human neurobiology and everyday social psychology. She cites an investigation organized by “Charles Bond, a psychologist at Texas Christian University who has studied lying since the 1980s,” who, in 2006, gathered information on beliefs about lying held in dozens of countries, in 43 languages. Three-quarters of the responses in one phase of the study identified “gaze aversion” as a signal that someone was lying, while “two-thirds noted a shift in posture, another two-thirds that liars scratch and touch themselves more, and 62 percent said that [liars] tell longer stories. The answers spanned sixty-three countries.” But other studies show that these beliefs -- while cross-cultural, if not universal -- are poor guides to assessing a stranger’s truthfulness. Someone delivering a carefully worded lie while holding a steady gaze and not fidgeting is, in effect, already halfway to being believed (or elected, as the case may be).
Konnikova pays tribute to Maurer’s classic by linking each of her chapters to one of the phases or components of a long con, as itemized in the grifter’s lexicon. The first stage, called “the put-up,” is perhaps the most intuitive: the con artist identifies a mark (victim) by picking up signals of the individual’s interests, personality traits and self-perceptions, and then begins to cultivate casual familiarity or trust. “There’s nothing a con artist likes to do more than make us feel powerful and in control,” she writes. “We are the ones calling the shots, making the choices, doing the thinking. They are merely there to do our bidding. And so as we throw off ever more clues, we are becoming increasingly blind to the clues being thrown off by others.”
Here, the author overstates things, since the talented grifter is also busy creating signals. In a later chapter, she describes a fellow who arrives at a charity event in London, acting a bit drunk and overfriendly and seemingly unaware that he didn’t shave that morning. His mark is a woman of the world who knows the type: one of those would-be charming playboys, feckless but harmless, getting by on an allowance from his aristocratic relations. (Suffice it to say that she takes a check from him and lives to regret it.)
With the dramatis personae in place, “the play” (the con itself) gets underway. Other characters may have walk-on parts, such as the “secretaries” and “brokers” in a phony stock-market game who get a cut of the criminal profits. The mark is made privy to whatever circumstances the con involves (e.g., a long-forgotten wine cellar full of rare and expensive vintages, an amazing real-estate deal, the chance to get in on the next big invention) and drawn into the secrets and moments of exhilaration that go with this once-in-a-lifetime opportunity.
But the most fascinating and disturbing aspect of a long con is how the con artist manages and redirects any anxieties or misgivings the victim may feel. After the “blowoff” -- when the victim is left with a case of bad wine with fancy labels, ownership of swampland, etc. -- a really well-executed con will leave the mark too embarrassed to complain. (Erving Goffman's classic paper "On Cooling the Mark Out" uses a late phase of the con game as a key to understanding how society at large reconciles the ordinary person to the disappointing realities of life.)
The lab experiments and social-scientific inquiries that Konnikova describes offer plausible (if seldom especially surprising) analyses of the cognitive and emotional forces at work. People prefer to think of themselves as smart, helpful, good judges of character and destined for lives better than the one they’ve settled for, thus far. And someone appealing to those feelings can end up with all of your money and no known forwarding address. Perhaps the most interesting and memorable thing about The Confidence Game is not that researchers can now explain the roots of our vulnerability, but rather the way it confirms something Maurer implied in his book 75 years ago: there’s no one better able to understand the individual human psyche than someone prepared to exploit it, undistracted by the slightest remorse.
Nearly a month has passed since the release of “The Costs of Publishing Monographs: Toward a Transparent Methodology,” a document prepared by the consulting and research division of Ithaka S+R. (Ithaka is also associated with JSTOR, the scholarly journals repository.) The report seems not to have drawn much attention outside the ranks of the Association of American University Presses, which seems odd. It ought to be of some interest to the larger constituency of those who buy, read and/or write scholarly books.
If you mention the price of academic-press books to people who’ve never purchased one, the effect is akin to a cartoon character with eyeballs popping out and exclamation marks hovering in the air, with a thought balloon reading, “What a racket!” (On one occasion I heard it said aloud.) The dismay will usually cool off some as you explain how the specialist nature of scholarly publications tends to preclude economies of scale. A small audience means low press runs, yielding high per-unit costs. That’s not the whole story, of course, but it often suffices to explain why, say, a slender new book interpreting Moby Dick might cost five times as much as a Melville biography thick enough to serve as a doorstop -- and why no one in the family has purchased Aunt Louise’s book, even if they’re proud she got tenure for it.
The authors of the new Ithaka report mention a ballpark estimate of the expense to a press of preparing a scholarly book for publication (not printing, just getting it to that point) that has been bandied about over the past couple of years: $20,000. It’s problematic, but let’s imagine, for the sake of argument, that it costs that much to prepare and to print a monograph, and that every single one of its 400 copies is sold. In that case the absolute lowest wholesale price of a single volume has to be $50, just to break even. Many trade publishers would consider a print run 10 times that size to be small -- with each copy selling at a much lower price while still making a profit. It’s not that trade presses are models of efficiency that scholarly presses ought somehow to emulate -- not at all. They resemble one another about as much as an ostrich egg and a cannonball do, and the differences cannot be tinkered away.
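The break-even arithmetic in that paragraph is simple enough to sketch directly. This is a back-of-the-envelope illustration using the hypothetical figures above (a $20,000 all-in cost, a 400-copy run that sells out), not a model of actual press economics:

```python
# Break-even wholesale price: fixed costs spread over copies sold.
def break_even_price(total_cost, copies_sold):
    """Minimum price per copy needed just to recoup the fixed cost."""
    return total_cost / copies_sold

# The scholarly-press scenario from the text: $20,000 over 400 copies.
scholarly = break_even_price(20_000, 400)   # 50.0 dollars per copy

# A trade-scale run ten times larger spreads the same fixed cost
# much thinner -- which is why economies of scale matter here.
trade = break_even_price(20_000, 4_000)     # 5.0 dollars per copy

print(scholarly, trade)
```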
Ithaka’s researchers collected information on the expenses involved in bringing out 382 books from the arts, humanities and social sciences published by 20 American university presses during their 2014 fiscal year. The data assembled were granular -- drawn from the sort of in-house bookkeeping each department (editorial, production, marketing, etc.) had to do while handling each title. Some expenses are more discretely defined than others. The cost of sending a manuscript out for copyediting, for example, is not too hard to determine; just look at the invoice. Calculating the fraction of an acquisition editor’s salary that went into a given book seems more difficult -- besides which there are the overhead expenses of clerical labor, rent, tech support and so on, some of them provided by the hosting university.
The 20 presses surveyed range from small presses (averaging roughly 11 employees publishing 46 titles per year, with an annual revenue from books of under $1.5 million) to powerhouses (circa 82 employees, 253 titles and more than $6 million annual revenue). They are segmented into four size categories, with five presses each, and with some effort made for geographical diversity and varying publishing foci (monographs, journals, regional titles).
In short, it must be one hell of a spreadsheet -- and the researchers establish three ways of defining cost per book to reflect the varying impacts of staff time, overhead expense and institutional support. One effect of the analysis is that the figure of $20,000 per book in preparation expenses goes right out the window: the study “yielded a wide range of costs per title, from a low of $15,140 to a high of $129,909, and the range of costs is wide both within and across groups.” Taking in the varying ways of assessing the expenses of almost 400 titles, the researchers find that the average cost per monograph is between $28,747 (using the minimal baseline) and not quite $40,000 (factoring in indirect overhead expenses). It bears repeating that this is not the final cost of publishing: printing, binding and warehousing monographs of the predigital sort would entail additional expense.
The Ithaka report focuses, rather, “on the costs of producing the first digital copy of ‘a high-quality digital monograph.’” For that to be the benchmark -- rather than the traditional hardback monograph -- is in keeping with the expectation that scholarship be made available in open-access form, as both federal mandates and the emerging academic ethos increasingly demand.
For scholarly publishing to meet the standards of quality established over the past century will require continued investment in the kinds of intensive, skilled labor that university presses foster. How to meet that demand while simultaneously developing ways of funding open-access publishing remains to be worked out. Ithaka S+R’s report doesn’t underestimate the difficulties; it just reminds us that the problem is on the agenda, or needs to be. Otherwise, the shape of things to come in scholarly publishing could get very messy -- and not in an especially creative way.
“One of the most profoundly exciting moments of my life,” Gertrude Stein recalled in a lecture at Columbia University in the mid-1930s, “was when at about 16 I suddenly concluded that I would not make all knowledge my province.” It is one of her more readily intelligible sentences, but I have never been able to imagine the sentiment it expresses. Why “profoundly exciting”? To me it sounds profoundly depressing, but then we’re all wired differently.
Umberto Eco, who died last week at the age of 84, once defined the polymath as someone “interested in everything, and nothing else.” (Now that’s more like it!) The formulation is paradoxical, or almost: the twist comes from taking “nothing else” to mean “nothing more.” It would be clearer to say that polymaths are “interested in everything, and nothing less,” but also duller. Besides, the slight flavor of contradiction is appropriate -- for Eco is describing an attitude of mind condemned to tireless curiosity and endless dissatisfaction, first of all with its own limits.
Eco’s work has been a model and an inspiration for this column for almost 30 years now, which is about 20 more than I’ve been writing it. The seed was planted by Travels in Hyperreality, the first volume of his newspaper and magazine writings to appear in English. Last year “Intellectual Affairs” celebrated the long-overdue translation of Eco’s book of sage advice on writing a thesis. An earlier essay considered the public dialogues that he and Jürgen Habermas were carrying on with figures from the Vatican. And now -- as if to make a trilogy of it -- saying farewell to Eco seems like an occasion to discuss perhaps the most characteristic quality of his mind: its rare and distinctive omnivorousness.
Eco himself evidently restricted his own comments on polymathy to that one terse definition. I must be garrulous by contrast but will try to make only two fairly brief points.
(1) As his exchange of open letters with Cardinal Carlo Maria Martini, the former archbishop of Milan, indicated, Eco was a lapsed but not entirely ex-Catholic: one who no longer believed but -- for reasons of personal background and of scholarly expertise as a medievalist -- still carried much of the church’s cultural legacy inside himself. His first book, published in 1956, was a study of St. Thomas Aquinas’s aesthetics that began as a thesis written “in the spirit of the religious worldview” of its subject. And the encyclopedic range and dialectical intricacies of the Angelic Doctor’s Summa Theologica never lost their hold on Eco’s imagination.
“Within Thomas's theological architecture,” Eco wrote in an essay in 1986, “you understand why man knows things, why his body is made in a certain way, why he has to examine facts and opinions to make a decision, and resolve contradictions without concealing them, trying to reconcile them openly …. He aligned the divergent opinions [of established philosophers and theologians], clarified the meaning of each, questioned everything, even the revealed datum, enumerated the possible objections, and essayed the final mediation.”
Eco regarded the Summa’s transformation into an authoritative statement of religious doctrine as nothing less than a disaster. In the hands of his successors, “Thomas's constructive eagerness for a new system” degenerated into “the conservative vigilance of an untouchable system.” Eco was -- like Étienne Gilson and Alasdair MacIntyre, among others -- part of the 20th-century rediscovery of Aquinas as the builder of a dynamo rather than the framer of a dogma. And there’s no question but that the medieval theologian exemplified “an interest in everything, and nothing else.”
(2) In the early 1960s, Eco was invited to participate in an interdisciplinary symposium on “demythicization and image” in Rome, along with an impressive array of philosophers, theologians, historians and classical scholars. Among them would be Jesuit and Dominican monks. He felt an understandable twinge of anxiety. “What was I going to say to them?” he recalled thinking. Remembering his enormous collection of comic books, Eco had a flash of inspiration:
“Basically [Superman] is a myth of our time, the expression not of a religion but of an ideology …. So I arrive in Rome and began my paper with a pile of Superman comics on the table in front of me. What will they do, throw me out? No sirree, half the comic books disappeared; would you believe it, with all the air of wishing to examine them, the monks with their wide sleeves spirited them away ….”
The anecdote might be used as an example of Eco’s interest in semiotics: the direction his work took after establishing himself as a medievalist. Comic books, Leonardo da Vinci paintings, treatises in Latin on demonology …. all collections of signs in systems, and all potentially analyzable. Nor was his conference presentation on Superman the end of it. Not much later, Eco published an essay about the world of Charlie Brown called "On 'Krazy Kat' and 'Peanuts.'"
But in fact those two papers were written before Eco’s turn to semiotics -- or semiology, if you prefer -- in the late 1960s. (The one on Peanuts reads as being influenced by Sartre, as much as anyone else.) Eco’s attitude towards mass media and popular culture was never one either of slumming or of populist celebration. Nor was it a matter of showing off the power and sharpness of cool new theoretical tools by carving up otherwise neglected specimens. He took it as a given that cartoons, movies and the crappy books issued by Italy’s vanity-publishing racket were -- like theological speculation or political conflict -- things that merited analysis and critique or that could become so, given interesting questions about them.
At the end of his remarks on Aquinas 30 years ago, Eco tried to imagine how the author might conduct himself if suddenly returned to life. Of course there’s no way to judge the accuracy of such a thought experiment’s results, but Eco’s conclusion seems like a personal creed: “He would realize that one cannot and must not work out a definitive, concluded system, like a piece of architecture, but a sort of mobile system, a loose-leaf Summa, because in his encyclopedia of the sciences the notion of historical temporariness would have entered …. I know for sure that he would take part in celebrations of his work only to remind us that it is not a question of deciding how still to use what he thought, but to think new things.”
And, Eco might have added, how to avoid settling for less than everything your mind might drive itself to understand.
In October 1985 -- not quite a year before Antonin Scalia took his seat on the U.S. Supreme Court -- the California Law Review published a paper by Fred R. Shapiro called “The Most-Cited Law Review Articles.” Nothing by Scalia was mentioned, and no surprise. He had published a bit of legal scholarship, of course (including a paper in The Supreme Court Review in 1978) but overall his paper trail was fairly thin and unexceptional, which proved a definite advantage in getting the nominee through the Senate hearings without drama.
As for Shapiro's article, it reflected the arrival of a new quantification mind-set about assessing legal scholarship. Culling data concerning some 180 journals, Shapiro (now an associate librarian and lecturer in legal research at the Yale Law School) tabulated and ranked the 50 most influential law review articles published between 1947 and 1985. Or, at least, the 50 most often cited in other law review articles, since he did not count citations in judicial opinions or interdisciplinary journals. At the time, Shapiro described the effort as “somewhere between historiography and parlor game,” but it established him as, in the words of a later law review article, “the founding father of a new and peculiar discipline: ‘legal citology.’”
Shapiro revisited the project in 1996 with a paper that was broader in scope (it included the interdisciplinary “law and ____” journals) and also more fine grained, listing the top 100 “Most-Cited Law Review Articles of All Time” but also identifying the most-cited articles published in each year between 1982 and 1991. The second time around, he stressed the historiographic significance of his findings over any parlor-game aspect. “The great legal iconoclast and footnote-hater, Fred Rodell, missed the point,” wrote Shapiro. “Yes, footnotes are abominations destroying the readability of legal writing, but they proliferate and become discursive because they are where the action is.”
In the meantime, Scalia gave a lecture at Harvard University in early 1989 that appeared in the fall in the University of Chicago Law Review. It had a definite impact. By 1996, Shapiro included Scalia’s “The Rule of Law as a Law of Rules” in the list of the most-cited articles from 1989. It was in fourth place -- flanked, a bit incongruously, by Richard Delgado’s “Storytelling for Oppositionists and Others: A Plea for Narrative” (third) and Joan C. Williams’s “Deconstructing Gender” (fifth). Updating the study once more in 2012, Shapiro and his co-author Michelle Pearse placed Scalia’s “The Rule of Law as a Law of Rules” on its list of the most-cited law-review articles of all time, at number 36. By then, Delgado’s paper was in 68th place, while Williams was not on the list at all.
So much for the late justice’s place in the annals of legal citology. (Wouldn’t it make more sense to call this sort of thing “citistics”?) Turning to “The Rule of Law as a Law of Rules” itself, it soon becomes clear that its impact derives at least as much from the author's name as from the force of Scalia's argument. If written by someone not sitting in the highest court in the land, it would probably have joined countless other papers of its era in the usual uncited oblivion. That said, it is also easy to see why the paper has been of long-term interest, since it is a succinct, lucid and remarkably uncombative statement of basic principles by the figure responsible for some of the Supreme Court’s most vigorous and pungent dissents.
Scalia takes his bearings from a dichotomy he finds expressed in Aristotle’s Politics: “Rightly constituted laws should be the final sovereign; and personal rule, whether it be exercised by a single person or a body of persons, should be sovereign only in those matters on which law is unable, owing to the difficulty of framing general rules for all contingencies, to make an exact pronouncement.”
Scalia assumes here that the reader or listener will know that Aristotle writes this in the context of a discussion of democracy, in which laws are created by those elected to “the court, and the senate, and the assembly” by the many, in keeping with a well-made constitution (rather than issued by monarchs, priests or tyrants). Official policy and decisions must, in turn, follow the body of established and “rightly constituted law.” Anything else would amount to a usurpation of power.
Aristotle’s point would apply to anyone in office, but Scalia is concerned with the authority of judges, in particular. For their part, upholding the law means restraint in determining how it is applied: judges should keep the exercise of their own discretion as minimal as possible. Aristotle allows, and Scalia concurs, that at times it is not clear just how a law ought to be applied. In that case a judge’s decision must be made “on the basis of what we have come to call the ‘totality of the circumstances’ test,” in Scalia’s words.
Sometimes it can't be helped, but Scalia implies that curbs are necessary, lest judges feel an incentive to discover gray areas requiring them to exercise their discretion. “To reach such a stage,” he writes, “is, in a way, a regrettable concession of defeat -- an acknowledgment that we have passed the point where ‘law,’ properly speaking, has any further application.” It is “effectively to conclude that uniformity is not a particularly important objective with respect to the legal question at issue.” And when a higher court reviews a lower one’s decision, Scalia treats appealing to the totality of circumstances as even less acceptable. An appellate decision should draw out and clarify the general principles embodied in the law that apply in the particular case.
“It is perhaps easier for me than it is for some judges to develop general rules,” Scalia writes, “because I am more inclined to adhere closely to the plain meaning of a text.”
What's striking about his formulation is not that Scalia takes a position in the debate between originalism and “living Constitution”-alism, but that he spells out an important assumption. Not only is the “plain meaning” of a law clearly decipherable from the words of its text (once we’ve looked up, if necessary, any unfamiliar expressions from the era when it was written) but so are the rules for determining its principles and for applying the law. The Constitution is like a cake mix with the instructions right there on the box. And if a given concept is not used or defined there --“privacy,” for instance, to name one that Scalia regarded as unconstitutional, or at least nonconstitutional -- then its use is ruled out.
“If a barn was not considered the curtilage of a house in 1791 or 1868,” Scalia writes, “and the Fourth Amendment did not cover it then, unlawful entry into a barn today may be a trespass, but not an unconstitutional search and seizure. It is more difficult, it seems to me, to derive such a categorical general rule from evolving notions of personal privacy.”
The distinction is clear and sharply drawn, however blunt the hermeneutic knife Scalia is wielding. But the example also displays one of the great weaknesses of this approach, spelled out by David A. Strauss in the University of Chicago Law Review some years later: “Even if one can determine what the original understanding was, there is the problem of applying it to radically new conditions: Is a barn in the rural nation of 1791 to be treated as equivalent to, say, a garden shed in 21st-century exurbia?”
Furthermore, the clearly formulated principle in a law can be rendered null and void by those who want only the narrowest construction of “original intent.” In his magnum opus, Reading Law: The Interpretation of Legal Texts (2012), co-authored with Bryan A. Garner, Scalia quoted Joseph Story’s Commentaries on the Constitution of the United States (1833) on the value of preambles in understanding the significance and intended effect of a law: “The preamble of a statute is a key to open the mind of the makers, as to the mischiefs, which are to be remedied, and the objects, which are to be accomplished by the provisions of the statute.” As fellow Reagan judicial appointee Richard A. Posner pointed out when he reviewed Reading Law, an obvious instance would be the Second Amendment: “A well regulated Militia, being necessary to the security of a free State …” The preamble spells out that the amendment is, in Posner’s words, “not about personal self-defense, but about forbidding the federal government to disarm state militias.” If it matters that the Constitution never explicitly identifies a right to privacy, then the complete lack of any reference to a right to individual gun ownership seems at least as conspicuous a silence.
Posner notes that when Scalia did mention the preamble in one decision, it was dismissive. Sometimes you “adhere closely to the plain meaning of a text,” it seems, and sometimes you just wish it would go away.
The skyrocket ascent of Scalia’s paper is easy to understand: whatever you think of the ideas, they are clearly and at times forcefully expressed, and “The Rule of Law as a Law of Rules” provided a glimpse into at least part of that enigmatic entity known as “the mind of the Supreme Court.” Absent that, its interest is likely to be chiefly historical or biographical. Other cards will take its place in the parlor game of citation and influence.
No Exit by Jean-Paul Sartre involves three characters who are condemned to spend their afterlives together in a shabby but not especially uncomfortable room -- “condemned” because it’s the Devil himself who brings them together. Evidently some kind of infernal algorithm has been used to select the group, designed to create optimal misery. Sartre’s dialogue is quite efficient: we soon learn the broad outlines of their time on Earth and how it was shaped by wishful thinking and self-deception.
We also see how quick they are to recognize one another’s vulnerabilities. Any given pair of characters could find a mutually satisfying way to exploit each other’s neuroses. But there’s always that third party to disrupt things, rubbing salt into old wounds while inflicting fresh ones.
In a moment of clarity, one of them finally recognizes that they are damned and utters Sartre’s best-known line: “Hell is other people.” Quoting this is easy, and misunderstanding it even easier. It sounds like a classical expression of self-aggrandizing misanthropy. But the figures on stage did not wander over from the pages of an Ayn Rand novel. They are not sovereign egos, imposed upon by the demands of lesser beings whose failings they repudiate with lofty scorn.
On the contrary, Sartre’s characters are driven by a desperate and insurmountable need to connect with other people. They crave intimacy, acceptance, reciprocity. They also seek to dominate, manipulate, yield to or seduce one another, which would be difficult enough if they weren’t trying to do more than one at the same time. The efforts fail, and the failures pile up. Things grow messy and frustrating for all parties involved. Hell is other people, but the torment is fueled by one’s own self.
That insight rings even more true in the wake of Hell Is a Very Small Place: Voices From Solitary Confinement (The New Press), an anthology edited by Jean Casella, James Ridgeway and Sarah Shourd. I doubt anyone meant the title as an allusion to No Exit. The activists, scholars and prisoners contributing to the book document a place much darker and more brutal than Sartre imagined -- but akin to it, in that the damned are condemned, not to lakes of fire, but to psychic torture so continuous that it seems eternal. (Casella and Ridgeway are co-founders of Solitary Watch, while Shourd is a journalist who spent 410 days in solitary confinement while imprisoned in Iran.)
A few basic points: solitary confinement initially had humane intentions. Quaker reformers in the early American republic were convinced that prisoners might benefit from a period of reckoning with their own souls, which would come readily in isolation from the evil influence of low company. If so, they would reform and return to society as productive members. More secular versions of this line of thought also caught on. Unfortunately it did not work in practice, since prisoners tended to emerge no better for the experience, when not driven insane. By the turn of the 20th century, the practice was being phased out, if not eliminated, as ineffective and dangerous.
Now, my sense from reading around in JSTOR is that, from about 1820 on, whenever the issue of imprisonment came up, the eyes of the world turned to the United States. Other countries had a similar rise and fall of confidence in solitary confinement over the years. But the practice took on a new life in America starting in the 1980s. The aim of reforming prisoners was no longer a factor. Solitary confinement -- warehousing prisoners alone in a cell for 23 to 24 hours a day, minimizing contact with one another and with the outside world -- permitted mass incarceration at reduced risk to prison guards.
In 2011, Juan E. Méndez, the United Nations Human Rights Council’s Special Rapporteur on Torture and Cruel, Inhuman and Degrading Treatment or Punishment, issued a report on the use of prolonged solitary imprisonment around the world, with “prolonged” meaning more than 15 days. Administrators and government officials rejected his request to inspect isolation units in American prisons. In his contribution to Hell Is a Very Small Place, Méndez writes that the best estimate for the population of those in solitary confinement in the United States at any given time is 80,000 people, “but no one knows that for sure.” The personal accounts by prisoners in the book show that confinement American-style is more than “prolonged.” It can go on for years, and in some cases, for decades.
An isolation cell is sometimes called “the Box,” and the experience of living in one for months and years on end makes it sound like being buried alive. In the chapter “How to Create Madness in Prison,” Terry Kupers describes the symptoms that appear during a long stretch. The prisoner in isolation “may feel overwhelmed by a strange sense of anxiety. The walls may seem to be moving in on him (it is stunning how many prisoners in isolated confinement independently report this experience) …. The prisoner may find himself disobeying an order or inexplicably screaming at an officer, when really all he wants is for the officer to stop and interact with him a little longer than it takes for a food tray to be slid through the slot in his cell door. Many prisoners in isolated confinement report it is extremely difficult for them to contain their mounting rage ….”
But that is far from the extreme end of the spectrum, which involves psychotic breaks, self-mutilation and suicide. In isolation, time no longer passes through the usual cycle of hours, weekdays, months. The damaged mind is left to pick at its own scabs for what might as well be an eternity.
Hell Is a Very Small Place proves fairly repetitious, though it could hardly be otherwise. Reading the book leaves one with the horrible feeling of being overpowered by routines and forces that will just keep running from the sheer force of momentum. Last year, President Obama called for an extensive review and reform of prison conditions, and last month, he issued a ban on the solitary confinement of juveniles in the federal prison system. So that’s the good news, for however long it may last. But consider the enormous obstacle to change represented by the sunk cost of millions or billions of dollars spent to erect Supermax prisons -- let alone the businesses (and lobbyists) who depend on more of them being built.
Anyone housed in solitary for a while would have to envy the characters in No Exit. They have more room (a “box” is typically somewhere between 6 by 9 feet and a luxurious 8 by 10) and they have each other, like it or not. Sartre’s hell is imaginary; it exists only to reveal something about the audience. The idea of burying people alive in concrete tombs degrades the society that has turned it into reality. The phrase “solitary confinement of juveniles in the federal prison system” alone is the sign of something utterly unforgivable.
Trying to explain recent developments in the American presidential primaries to an international audience, a feature in this week’s issue of The Economist underscores aspects of the political landscape common to both the United States and Europe. “Median wages have stagnated even as incomes at the top have soared,” the magazine reminds its readers (as if they didn’t know and had nothing to do with it). “Cultural fears compound economic ones” under the combined demographic pressures of immigration and an aging citizenry.
And then there’s the loss of global supremacy. After decades of American ascent, “Europe has grown used to relative decline,” says The Economist. But the experience is unexpected and unwelcome to those Americans who assumed that the early post-Cold War system (with their country as the final, effectively irresistible superpower) represented the world’s new normal, if not the End of History. The challenges coming from Putin, ISIS and Chinese-style authoritarian state capitalism suggest otherwise.
Those tensions have come to a head in the primaries with the campaigns of Donald Trump and Bernie Sanders: newcomers to their respective parties whose rise seemed far-fetched not so long ago. To The Economist’s eyes, their traction is, if not predictable, then at least explicable: “Populist insurgencies are written into the source code of a polity that began as a revolt against a distant, high-handed elite.”
True enough, as far as it goes. But the analysis overlooks an important factor in how “anti-elite” sentiment has been channeled over the past quarter century: through “anti-elitist” tycoons. Trump is only the latest instance. Celebrity, bully-boy charisma and deep pockets have established him as a force in politics, despite an almost policy-free message that seems to take belligerence as an ideological stance. Before that, there was the more discreet largess of the Koch brothers (among others) in funding the Tea Party, and earlier still, Ross Perot’s 1992 presidential campaign, with its folksy infomercials and simple pie charts, which drew almost a fifth of the popular vote. In short, “revolt against a distant, high-handed elite” may be written into the source code of American politics; the billionaires have the incentives and the means to keep trying to hack it.
If anything, even Perot was a latecomer. In the opening pages of Nut Country: Right-Wing Dallas and the Birth of the Southern Strategy (University of Chicago Press), Edward H. Miller takes note of a name that’s largely faded from the public memory: H. L. Hunt, the Texas oilman. Hunt was probably the single richest individual in the world when he died in 1974. He published a mountain of what he deemed “patriotic” literature and also funded a widely syndicated radio program called Life Line. All of it furthered Hunt’s tireless crusade against liberalism, humanism, racial integration, socialized medicine, hippies, the New Deal, the United Nations and sundry other aspects of the International Communist Conspiracy, broadly defined. ("Nut country" is how John F. Kennedy described Dallas to the first lady a few hours before he was killed.)
Hunt’s output was still in circulation when I grew up in Texas a few years after his death, and reading it has meant no end of déjà vu in the meantime: the terrible ideas of today are usually just the terrible ideas of yesterday, polished with a few updated topical references. Miller, an assistant teaching professor of history at Northeastern University Global, reconstructs the context and the mood that made Dallas a hub of far-right political activism between the decline of Joseph McCarthy and the rise of Barry Goldwater -- a city with 700 members of the John Birch Society. A major newspaper, The Dallas Morning News, helped spur the sales of a book called God, the Original Segregationist by running excerpts. Cold War anti-Communism mutated into a belief that the United States and the Soviet Union were in the process of being merged under the direction of the United Nations, in the course of which all reference to God would be outlawed. John F. Kennedy was riding roughshod over American liberties, bypassing Congress and establishing a totalitarian dictatorship in which, as H. L. Hunt warned, there would be “no West Point, no Annapolis, no private firearms -- no defense!”
An almost Obama-like dictatorship, then. Needless to say, these beliefs and attitudes are still with us, even if many of the people who espoused them are not.
Miller identifies two tendencies or camps within right-wing political circles in Dallas during the late 1950s and early ’60s. “Moderate conservatism” was closer to established Republican Party principles of free enterprise, unrelenting anti-Communism and the continuing need to halt and begin rolling back the changes brought by the New Deal. Meanwhile, “ultraconservatism” combined a sense of apocalyptic urgency with fear of all-pervasive subversion and conspiracy. A reader familiar with recent laments about the state of the Republican Party -- that it was once a much broader tent, with room for even the occasional liberal -- might well assume that Miller’s moderate conservatives consisted of people who liked Ike, hated Castro and otherwise leaned a bit to the right wing of moderation, as opposed to ultraconservative extremism.
That assumption is understandable but largely wrong: Miller’s moderates were much closer to his ultras than either was to, say, the Eisenhower who sent federal troops to Little Rock, Ark. (Or as someone the author quotes puts it, the range of conservative opinion in Dallas was divided between those who wanted to impeach Supreme Court Justice Earl Warren and those who wanted to hang him.)
Where the moderates and the ultras ultimately combined to create something more durable and powerful than either could be separately was in opposition to the Civil Rights movement and the realignment of the segregationist wing of the Democratic Party. I’ll come back to Miller’s argument on this in a later column, once the primary season has progressed a bit. Suffice it to say that questions of realignment are looking a little less antiquarian all the time.
There’s a special rung of hell where the serious and the damned writhe in agony, gnashing their teeth and cursing their fate, as they watch an endless marathon of historical documentaries from basic cable networks. Their lidless eyes behold Ancient Aliens, now in its tenth season, and High Hitler, which reveals that the Führer was a dope fiend. The lineup includes at least one program about the career of each and every single condemned soul in the serial-killer department, which is a few rungs down.
In the part of the inferno where I presumably have reservations, a lot of the programming concerns the history of rock music. With each cliché, a devil pokes you, just to rub it in. The monotonous predictability of each band’s narrative arc (struggle, stardom, rehab, Hall of Fame) is just part of it, since there are also the talking-head commentaries, interspersed every few minutes, by people unable to assess any aspect of the music except through hyperbole. Each singer was the voice of the era. Every notable guitarist revolutionized the way the instrument was played -- forever. No stylistic innovation failed to change the way we think about music, influencing all that followed.
Even the devils must weary of it, after a while. It probably just makes them meaner.
Here on earth, of course, such programming can be avoided. Choose to watch Nazi UFOs -- an actual program my TiVo box insists on recording every time it runs -- and you have really no one to blame but yourself.
But David Bowie’s death earlier this month left me vulnerable to the recent rerun of a program covering most of his life and work. Viewing it felt almost obligatory: I quit keeping track of Bowie’s work in the early 1980s (a pretty common tendency among early devotees, the near-consensus being that his creativity entered a long downturn around that time), so catching up on his last three decades, however sketchily, seemed like a matter of paying respects. It sounds as if his last few albums would be worth a chance, so no regrets about watching.
Beyond that, however, the program offered only the usual insight-free superlatives -- echoes of the hype that Bowie spent much of his career both inciting and dismantling. Bowie had a precocious and intensely self-conscious relationship to mass media and spectacle. He was, in a way, Andy Warhol’s most attentive student. That could easily have led Bowie down a dead end of cynicism and stranded him there, but instead it fed a body of creative activity -- in film and theater as well as music -- that fleshed out any number of Oscar Wilde’s more challenging paradoxes. (A few that especially apply to Bowie’s career: “To be premature is to be perfect.” “One should either be a work of art or wear a work of art.” “Man is least himself when he talks in his own person. Give him a mask, and he will tell you the truth.”) There must be a whole cohort of us who lived through early theoretical discussions of postmodernism and performativity while biting our tongues, thinking that an awful lot of it was just David Bowie, minus the genius.
“Genius” can be a hype word, of course, but the biggest problem with superlatives in Bowie’s case isn’t that they are clichéd but that they’re too blunt. Claim that Bowie invented rock stardom, as somebody on TV did, for example, and the statement is historically obtuse while also somehow underestimating just how catalytic an impact he had.
As noted here in a column some months ago, Bowie is not among the artists David Shumway wrote about in Rock Star: The Making of Musical Icons from Elvis to Springsteen (Johns Hopkins University Press, 2014). And yet one aspect of Bowie’s career often taken as quintessential, his tendency to change appearances and styles, actually proves to be one of the basic characteristics of the rock star’s cultural role, well before his Thin White Duke persona rose from the ashes of Ziggy Stardust. Context undercuts the hype.
Elsewhere, in an essay for the edited collection Goth: Undead Subculture (Duke, 2007), Shumway acknowledges that Bowie did practice a kind of theatricalization that created a distinctive relationship between star and fan: the “explicit use of character, costume and makeup … moved the center of gravity from the person to the performance” in a way that seemed to abandon the rock mystique of authenticity and self-expression in favor of “disguising the self” while also reimagining it.
“His performances taught us about the constructedness of the rock star and the crafting of the rock performance,” Shumway writes. “His use of the mask revealed what Dylan’s insistence on his own authenticity and Elvis’s swagger hid.”
At the same time, Bowie’s decentered/concealed self became something audiences could and did take as a model. But rather than this being some radical innovation that transformed the way we think about rock forever (etc.), Shumway suggests that Bowie and his audience were revisiting one of the primary scenes of instruction for 20th-century culture as a whole: the cinema.
Bowie “did not appear to claim authenticity for his characters,” Shumway writes. “But screen actors do not claim authenticity for the fictional roles they play either. Because he inhabits characters, Bowie is more like a movie star than are most popular music celebrities. In both cases the issue of the star’s authenticity is not erased by the role playing, but made more complex and perhaps more intense.”
That aptly describes Bowie’s effect. He made life “more complex and perhaps more intense” -- with the sound of shattering clichés mixed into the audio track at unexpected moments. And a personal note of thanks to Shumway for breaking some, too.
Last month came the unwelcome if not downright chilling news that the antibiotic of last resort -- the most powerful infection fighter in the medical arsenal -- is now ineffective against some new bacterial strains. If, like me, you heard that much and decided your nerves were not up to learning a lot more, then this might be a good time to click over to see what else looks interesting in the Views section menu. There’s something to be said for deliberate obliviousness on matters that you can’t control anyway.
Hugh Pennington’s Have Bacteria Won? (Polity) is aimed straight at the heart of a public anxiety that has grown over the past couple of decades. The author, an emeritus professor of bacteriology at the University of Aberdeen, is clearly a busy public figure in the United Kingdom, where he writes and comments frequently on medical news for the media. A number of recent articles in British newspapers call him a “leading food-poisoning expert,” but that is just one of Pennington’s areas of expertise. Besides contributing to the professional literature, he has served on commissions investigating disease outbreaks and writes “medico-legal expert witness reports” (he says in the new book) on a regular basis.
The fear resonating in Pennington’s title dates back to the mid-1990s. Coverage of the Ebola outbreak in Zaire in 1995 seemed to compete for attention with reports of necrotizing fasciitis (better known as “that flesh-eating disease”), which inspired such thought-provoking headlines as “Killer Bug Ate My Face.”
Pennington refers to earlier cases of food contamination that generated much press coverage -- and fair enough. But it was the ghastly pair of hypervirulent infections in the news 20 years ago that really raised the stakes of something else that medical researchers were warning us about: the widespread overuse of antibiotics. It was killing off all but the most resilient disease germs. An inadvertent process of man-made natural selection was underway, and the long-term consequences were potentially catastrophic.
But now for the good news, or the nonapocalyptic news, anyway: Pennington makes a calm assessment of the balance of forces between humanity and bacteria and, without being too Pollyannaish about it, suggests that unpanicked sobriety would be a good attitude for the public to cultivate, as well.
The history of medical advances in the industrialized world has, he argues, had unexpected and easily overlooked side effects. Now we live, on average, longer than our ancestors. But we also die for different reasons, with mortality from infection no longer being high on the list. The website of the Centers for Disease Control and Prevention makes the point sharply with a couple of charts: apart from a spike during the influenza pandemic following the First World War, death from infectious disease fell in the United States throughout most of the 20th century. Pennington’s point is that we find this trend throughout the modernized world, wherever life expectancy increased. Medical advances, including the development of antibiotics, played a role, but not in isolation. Improved sanitation and increased agricultural output were also part of it.
“There is a pattern common to rich countries,” Pennington notes. “The clinical effects of an infection become much less severe long before specific control measures or successful treatments become available. Their introduction then speeds up the decline, but from a low base. An adequate diet brings this about.”
So death from infectious disease went from being a terrible fate to something practically anomalous within two or three generations. (To repeat, we’re talking about the developed world here: both prosperity and progress impose blinders.) And when serious infectious disease becomes rare, it also becomes news. “From time to time,” Pennington says, “the media behave like a cheap refracting telescope, focusing on an object of interest but magnifying it with a good deal of aberration and fuzziness at the edges because of the poor quality of their lenses.”
Lest anyone think that the competitive shamelessness of the British tabloid press has excessively distorted Pennington’s outlook, keep in mind that CNN once had a banner headline reading, “Ebola: ‘The ISIS of Biological Agents?’” Nor does he demonize the mass media, as such. “Sometimes the journalistic telescope finds hidden things that should be public,” he writes -- giving as an example how a local newspaper identified and publicized an outbreak of infectious colitis at an understaffed and poorly run hospital in Scotland.
Have Bacteria Won? is packed with case histories of outbreaks from the past 60 or 70 years. Each is awful enough in its own right to keep the reader from feeling much comfort at their relative infrequency, and Pennington’s message certainly isn’t that disease can be eradicated. Powerful and usually quite effective techniques exist to prevent or minimize bacterial contamination of food and water, and we now have systematic ways to recognize and treat a wider range of infections than would have been imaginable not that long ago. But systems fail (he mentions several cases of defective pasteurization equipment causing large-scale outbreaks) and bacteria mutate without warning. “Each microbe has its own rules,” Pennington writes. “Evolution has seen to that.”
We enjoy some advantage, given our great big brains, especially now that we have the tools of DNA sequencing and ever-increasing computational power. “This means,” Pennington writes, “that tracking microbes, understanding their evolution and finding their weaknesses gets easier, faster and cheaper every day.” Given reports that the MCR-1 gene found in antibiotic-impervious bacteria can move easily between micro-organisms, any encouraging word is welcome right about now.
But Pennington’s analysis also implies that the world’s incredible and even obscene disparities in wealth are another vulnerability. “An adequate diet” for those who don’t have it seems like something all that computational power might also be directed toward. Consider it a form of preventative medicine.