
Review of Chris Walsh, 'Cowardice: A Brief History'

In recent years we’ve had quite a few books on the negative emotions – disgust, malice, humiliation, shame – from scholars in the humanities. In addition, Oxford University Press published its series of little books on the Seven Deadly Sins. Apparently envy is the most interesting vice, to judge by the sales ranks on Amazon, followed by anger – with lust straggling in third place. (A poor showing, given its considerable claims on human attention.)

The audience for monographs putting unpleasant or painful feelings into cultural and historical context probably doesn’t overlap very much with the far larger pop-psychology readership. But their interests do converge on at least one point. Negative affects do have some benefits, but most of us try to avoid them, or minimize them, both in ourselves and others, and to disguise them when necessary; or, failing that, to do damage control. And because the urge to limit them is so strong, so is the need to comprehend where the feelings come from and how they operate.

Arguably the poets, historians, and philosophers have produced richer understandings of negative emotions, in all their messiness. As for what the likes of Dr. Phil bring to the table, I have no opinion – though obviously they’re the ones leaving it with the biggest bags of money.

But the avoidance / interest dynamic really goes AWOL with the topic Chris Walsh explores in Cowardice: A Brief History (Princeton University Press). The Library of Congress catalog has a subject heading called “Cowardice — history,” with Walsh’s book being the sole entry. That’s a clerical error: Marquette University Press published Lesley J. Gordon’s “I Never Was a Coward”: Questions of Bravery in a Civil War Regiment in 2005. It is 43 pages long, making Walsh the preeminent scholar in the field by a sizable margin. (He is also associate director of the College of Arts and Sciences Writing Program at Boston University.)

“[P]ondering cowardice,” he writes, “illuminates (from underneath, as it were) our moral world. What we think about cowardice reveals a great deal about our conceptions of human nature and responsibility, about what we think an individual person can and should have to endure, and how much one owes to others, to community and cause.”

But apart from a typically thought-provoking paper by William Ian Miller a few years ago, cowardice has gone largely unpondered. Plato brought it up while en route to discussing courage. Aristotle stressed the symmetry between cowardice (too much fear, too little confidence) and rashness (too much confidence, too little fear) and went on to observe that rash men tended to be cowards hiding behind bluster.

That insight has survived the test of time, though it’s one of the few analyses of cowardice that Walsh can draw on. But in the historical and literary record, cowardice is always much more concrete. (In that regard it’s worth noting that the LOC catalog lists 44 novels about cowardice, as against just two nonfiction works.)

Until sometime in the 19th century, cowardice seems to have been equated simply and directly with fear. It was the immoral and unmanly lack of yearning for the chance at slaughter and glory. The author refers to the American Civil War as a possible turning point, or at least the beginning of a change, in the United States. By the Second World War, the U.S. Army gave new soldiers a pamphlet stating, up front, YOU’LL BE SCARED and even acknowledging their anxiety that they might prove cowards once in battle.

Courage was not an absence of fear but the ability to act in spite of it. This represented a significant change in attitude, and it had the advantage of being sane. But it did not get around a fundamental issue that Walsh shows coming up repeatedly, and one well-depicted in James Jones’s novel The Thin Red Line:

“[S]omewhere in the back of each soldier’s mind, like a fingernail picking uncontrollably at a scabby sore, was the small voice saying: but is it worth it? Is it really worth it to die, to be dead, just to prove to everybody you’re not a coward?”

The answer given by the narrator of Louis-Ferdinand Céline’s Journey to the End of the Night about the First World War (“I wasn’t very bright myself, but at least I had sense enough to opt for cowardice once and for all”) sounds a lot like Mark Twain’s considered opinion on the matter: “The human race is a race of cowards, and I am not only marching in that procession but carrying a banner.”

Both were satirists, but there may be more to the convergence of sentiment than that. In the late 19th and early 20th centuries, war became mechanized and total, with poison gas and machine guns (just a taste of improvements to come) and whole populations mobilized by propaganda and thrown onto the battlefield. The moral defect of the coward was sometimes less than obvious, especially with some hindsight.

In Twain’s case, the remark about fundamental human cowardice wasn’t an excuse for his own military record, which was not glorious. (He numbered himself among the thousands who "entered the war, got just a taste of it, and then stepped out again permanently.") Walsh provides a crucial bit of context by quoting Twain’s comment that “man’s commonest weakness, his aversion to being unpleasantly conspicuous, pointed at, shunned” is better understood as moral cowardice, “the supreme make-up of 9,999 men in the 10,000.”

I’ve indicated a few of Walsh’s themes here, and neglected a few. (The yellow cover, for example, being a reminder of his pages on the link between cowardice and that color.) Someone might well write an essay about how overwhelmingly androcentric the discussion tends to be, except insofar as a male labeled as a coward is called womanly. This is strange. When the time comes for battle, a man can try to flee, but I’ve never heard of anyone escaping childbirth that way. And the relationship between moral cowardice (or courage) and the military sort seems complex enough for another book.


Review (continued) of Zephyr Teachout, "Corruption in America: From Benjamin Franklin's Snuff Box to Citizens United"

In 2009, the Cornell Law Review published an article called “The Anti-Corruption Principle” by Zephyr Teachout, then a visiting assistant professor of law at Duke University. In it she maintained that the framers of the U.S. Constitution were “obsessed” (that was Teachout’s word) with the dangers of political corruption – bribery, cronyism, patronage, the making of laws designed to benefit a few at the expense of public well-being, and so on.

Such practices, and the attitudes going with them, had eaten away, termite-like, at the ethos of the ancient Roman republic and done untold damage to the spirit of liberty in Britain as well. The one collapsed; the other spawned “rulers who neither see, nor feel, nor know / but leech-like to their fainting country cling,” as Shelley wrote some years later in a poem about George III’s reign. But in Teachout’s reading, the framers were obsessed with corruption without being fatalistic about it. The best way to reduce the chances of corruption was by reducing the opportunities for temptation – for example, by preventing any “Person holding any Office of Profit or Trust” from “accepting any present, Emolument, Office, or Title, of any kind whatever, from any King, Prince, or foreign State” without explicit permission from Congress. Likewise, a separation of powers among the executive, legislative, and judicial branches was, in part, an expression of the anti-corruption principle.

Teachout indicated in a footnote that her argument would be expanded in a forthcoming book, called The Meaning of Corruption, due out the following year. It was delayed. For one thing, Teachout moved to Fordham University, where she is now an associate professor of law. And for another, her law-review article gained the unusual eminence of being cited by two Supreme Court Justices, Antonin Scalia and John Paul Stevens, in their opinions concerning the landmark Citizens United v. Federal Election Commission decision.

Now Teachout’s book has appeared as Corruption in America: From Benjamin Franklin’s Snuff Box to Citizens United, from Harvard University Press – an appreciably livelier title, increasing the likelihood (now pretty much a certainty) that it will inform the thinking of many rank-and-file Democratic Party supporters and activists.

Whether it will resonate with their leaders beyond the level of campaign rhetoric is another matter. Each of the two parties has a revolving door between elected office and the lobbying sector. While discussing the book here last week, I mentioned that suspicion and hostility toward lobbying were conspicuous in American political attitudes until fairly recently. They still are, of course, but with nothing like the intensity exhibited when the state of Georgia adopted a constitution outlawing the practice in 1877: “Lobbying is declared to be a crime, and the General Assembly shall enforce this provision by suitable penalties,” including a prison sentence of up to five years. Other efforts to curtail lobbying were less severe, though nonetheless sharper than today’s statutes requiring lobbyists to register and disclose their sources of funding.

“[T]he practice of paying someone else to make one’s arguments to people in authority,” writes Teachout, “threatened to undermine the moral fabric of civil society…. In a lobbyist-client relationship, the lobbyist, by virtue of being a citizen, has a distinct relationship to what he himself might believe. He is selling his own citizenship, or one of the obligations of his own citizenship, for a fee.”

The lobbyist’s activity is “more akin to selling the personal right to vote than selling legal skills,” as a lawyer does. Nor is that the only damage lobbying does to the delicate ecology of mutual confidence between state and citizen. It “legitimates a kind of routine sophistry and a casual approach towards public argument. It leads people to distrust the sincerity of public arguments and weakens their own sense of obligation to the public good” – thereby creating “the danger of a cynical political culture.” (So that’s how we got here.)

Clearly something went wrong. The anti-corruption principle, as Teachout formulates it, entails more than the prevention of certain kinds of acts – say, bribery. It’s also supposed to strengthen the individual citizen’s faith in and respect for authority while also promoting the general welfare. But private interest has a way of seeing itself as public interest, as exemplified in a railroad lobbyist’s remarks to Congress during the Gilded Age: If someone “won’t do right unless he’s bribed to do it,” he said, “…I think it’s a man’s duty to go up and bribe him.”

Teachout refers to an erosion of the anti-corruption principle over time, but much of her narrative documents a recurring failure to give anti-corruption laws teeth. “Criminal anticorruption laws were particularly hard to prosecute” during the 19th century, she writes, because “the wrongdoers – the briber and the bribed – had no incentive to complain,” while “the defrauded public was dispersed, with no identifiable victim who would drive the charge.” The concept of corruption has dwindled to bribery defined as quid pro quo in the narrowest possible terms: “openly asking for a deal in exchange for a specific government action.”

In a colloquy appearing in the Northwestern University Law Review, Seth Barrett Tillman, a lecturer in law at the National University of Ireland Maynooth, suggests that a core problem with Teachout’s argument is that it overstates how single-mindedly anti-corruption the framers of the U.S. Constitution actually were. The Articles of Confederation contained broader anti-corruption provisions on some points, for example.

And “if the Framers believed that corruption posed the chief danger to the new Republic,” he writes, “one wonders why corrupt Senate-convicted and disqualified former federal officials were still eligible to hold state offices—offices which could indirectly affect significant operations of the new national government—and were also (arguably) eligible to hold congressional seats, thereby injecting corrupt officials directly into national policy-making.”

Concerned about corruption? Definitely. “Obsessed” with it? Not so much. There is much to like about Teachout’s book, but treating the framers of the Constitution as possessing the keys to resolving 21st-century problems seems extremely idealistic, and not in a good way.


Review of Zephyr Teachout, 'Corruption in America: From Benjamin Franklin's Snuff Box to Citizens United'

Writing in 1860, a journalist depicted Washington as a miserable little Podunk on the Potomac, quite unworthy of its status as the nation’s capital. He called it an “out of the way, one-horse town, whose population consists of office-holders, lobby buzzards, landlords, loafers, blacklegs, hackmen, and cyprians – all subsisting on public plunder.”

"Hackmen" meant horse-powered cabbies. "Blacklegs" were crooked gamblers. And cyprians (lower-case) were prostitutes -- a classical allusion turned slur, since Cyprus was a legendary birthplace of Aphrodite. Out-of-towners presumably asked hackmen where to find blacklegs and cyprians.

But sordid entertainment was really the least of D.C. vices. “The paramount, overshadowing occupation of the residents,” the newsman continued, having just gotten warmed up, “is office-holding and lobbying, and the prize of life is a grab at the contents of Uncle Sam’s till. The public-plunder interest swallows up all others, and makes the city a great festering, unbearable sore on the body politic. No healthy public opinion can reach down here to purify the moral atmosphere of Washington.”

Plus ça change! To be fair, the place has grown more metropolitan and now generates at least some revenue from tourism (plundering the public by other means). Zephyr Teachout quotes this description in Corruption in America: From Benjamin Franklin’s Snuff Box to Citizens United (Harvard University Press), a book that merits the large readership it may get thanks to the author’s recent appearance on "The Daily Show," even if much of that interview concerned her remarkable dark-horse gubernatorial campaign in New York state's Democratic primary, in which anti-corruption was one of her major themes. (Teachout is associate professor of law at Fordham University.)

The indignant commentator of 1860 could include lobbyists in the list of ne’er-do-wells and assume readers would share his disapproval. “Lobby buzzards” were about as respectable as card sharks and hookers. You can still draw cheers for denouncing their influence, of course, but Teachout suggests that something much deeper than cynicism was involved in the complaint. It had a moral logic – one implying a very different set of standards and expectations than prevails now, to judge by recent Supreme Court rulings.

Teachout’s narrative spans the history of the United States from its beginnings through Chief Justice John Roberts’s decision in McCutcheon v. FEC, less than six months ago. One of the books that gripped the country’s early leaders was Edward Gibbon’s Decline and Fall of the Roman Empire, the first volume of which happened to come out in 1776, and Teachout regards the spirit they shared with Gibbon as something like the crucial genetic material in the early republic’s ideological DNA.

To be clear, she doesn’t argue that Gibbon influenced the founders. Rather, they found in his history exceptionally clear and vivid confirmation of their understanding of republican virtue and the need to safeguard it by every possible means. A passage from Montesquieu that Thomas Jefferson copied into his notebook explained that a republican ethos “requires a constant preference of public to private interest [and] is the source of all private virtues….”

That “constant preference” required constant vigilance. The early U.S. statesmen looked to the ancient Roman republic as a model (“in creating something that has never yet existed,” a German political commentator later noted, political leaders “anxiously conjure up the spirits of the past to their service and borrow from them names, battle cries, and costumes in order to present the new scene of world history in this time-honored disguise and this borrowed language”).

But the founders also took from history the lesson that republics, like fish, rot from the head down. The moral authority, not just of this or that elected official, but of the whole government demanded the utmost scruple – otherwise, the whole society would end up as a fetid moral cesspool, like Europe. (The tendency to define American identity against the European other runs deep.)

Translating this rather anxious ideology into clear, sharp legislation was a major concern in the early republic, as Teachout recounts in sometimes entertaining detail. It was the diplomatic protocol of the day for a country’s dignitaries to present lavish gifts to foreign ambassadors -- as when the king of France gave diamond-encrusted snuffboxes, with his majesty’s portrait on them, to Benjamin Franklin and Thomas Jefferson. In Franklin’s case, at least, the gift expressed admiration and affection for him as an individual at least as much as it did respect for his official role.

But all the more reason to require Congressional approval. Doing one’s public duty must be its own reward, not an occasion for private benefit. Franklin received official permission to accept his snuffbox, as did two other figures Teachout discusses. The practice grated on American sensibilities, but had to be tolerated to avoid offending an ally. Jefferson failed to disclose the gift to Congress and quietly arranged to have the diamonds plucked off and sold to cover his expenses.

Like the separation of powers among the executive, legislative, and judicial branches (another idea taken from Montesquieu), the division of Congress into House and Senate was also designed to preempt corruption: “The improbability of sinister combinations,” wrote Madison, “will be in proportion to the dissimilarity in genius of the two bodies.” Teachout quotes one delegate to the Constitutional Convention referring sarcastically to the “mercenary & depraved ambition” of “those generous & benevolent characters who will do justice to each other’s merit, by carving out offices & rewards for it.”

Hence the need for measures such as the clause in Article 1, Section 6 forbidding legislators from serving simultaneously in an appointed government position. It also barred them from accepting any such position that had been created during their term in office. The potential for abuse was clear, but it could be contained. The clause was an effort “to avoid as much as possible every motive for corruption,” in another delegate’s words.

Corruption, so understood, clearly entails far more than bribery, nepotism, and the like – things done with an intent to influence the performance of official duties, in order to yield a particular benefit. The quid pro quo was only the most obvious level of the injustice. Beyond violating a rule or law, it undermines the legitimacy of the whole process. It erodes trust in even the ideal of disinterested official power. Public service itself begins to look like private interest carried on duplicitously.

The public-mindedness and lofty republican principles cultivated in the decades just after the American revolution soon enough clashed with the political and economic realities of a country expanding rapidly westward. There were fortunes to be made, and bribes to be taken. But as late as the 1880s, states were putting laws on the books to wipe out lobbying, on the grounds that it did damage to res publica.

Clearly a prolonged and messy process has intervened in the meantime, which we’ll consider in the next column, along with some of the criticism of Teachout’s ideas that has emerged since she began presenting them in legal journals a few years ago. Until then, consider the proposal that the newspaper writer of the 1860s offered for cleaning the Augean stables of Washington: to clear out corruption, the nation’s capital should be moved to New York City, where it would be under a more watchful eye. Brilliant! What could possibly go wrong?

 

 


Review of Jo Guldi and David Armitage, "The History Manifesto"

When young sociologists would consult with C. Wright Mills, it’s said, he would end his recommendations with what was clearly a personal motto: “Take it big!” It was the concentrated expression of an ethos: Tackle major issues. Ask wide-ranging questions. Use the tools of your profession, but be careful not to let them dig a mental rut you can’t escape.

Jo Guldi and David Armitage give much the same advice to their colleagues, and especially their colleagues-to-be, in The History Manifesto, a new book from Cambridge University Press. (Guldi is an assistant professor of history at Brown University, while Armitage is chair of the history department at Harvard.) Only by “taking it big” can their field regain the power and influence it once had in public life – and lost, somewhere along the line, to economics, with its faith in quantification and the seeming rigor of its concepts.

But issues such as climate change and growing economic inequality must be understood in terms of decades and centuries. The role of economists as counselors to the powerful has certainly been up for question over the past six years. Meanwhile, the world’s financial system continues to be shaped by computerized transactions conducted at speeds only a little slower than the decay of subatomic particles. And so, with their manifesto, the authors raise the call: Now is the time for all good historians to come to the aid of their planet.

But first, the discipline needs some major recalibration. “In 1900,” Guldi and Armitage write, “the average number of years covered [by the subject matter of] doctoral dissertations in history in the United States was about 75 years; by 1975, it was closer to 30.” The span covered in a given study is not the only thing that’s narrowed over the intervening four decades. Dissertations have “concentrated on the local and the specific as an arena in which the historian can exercise her skills of biography, archival reading, and periodization within the petri-dish of a handful of years.”

The problem isn’t with the monographs themselves, which are often virtuoso analyses by scholars exhibiting an almost athletic stamina for archival research. Guldi and Armitage recognize the need for highly focused and exhaustively documented studies in recovering the history of labor, racial and religious minorities, women, immigrants, LGBT people, and so forth.

But after two or three generations, the “ever narrower yet ever deeper” mode has become normative. The authors complain that it "determines how we write our studies, where we look for sources, and which debates we engage. It also determines where we break off the conversation.”

Or, indeed, whether the conversation includes a historical perspective at all. “As students in classrooms were told to narrow and to focus” their research interests, “the professionals who deal with past and future began to restrict not only their sources and their data, but sometimes also their ideas.”

In referring to “professionals who deal with past and future,” the authors do not mean historians themselves -- at least not exclusively -- but rather leaders active at all levels of society. The relevance of historical knowledge to public affairs (and vice versa) once seemed obvious. Guldi and Armitage point to Machiavelli’s commentary on Livy as one example of a political figure practicing history, while Eric Williams, who wrote Capitalism and Slavery for his doctoral dissertation, went on to serve as Trinidad’s first prime minister after it became independent.

Between extreme specialization by historians and politicians whose temporal horizons are defined by the election cycle, things look bad. That understanding the past might have some bearing on actions in the present may not seem all that difficult to grasp. But consider the recent American president who invaded a Middle Eastern country without knowing that its population consisted of two religious groups who, over the past millennium or so, have been less than friendly toward one another. (For some reason, I thought of that a couple of times while reading Guldi and Armitage.) Anyway, it did turn out to be kind of an issue.

A manifesto requires more than complaint. It must also offer a program and, as much as possible, rally some forces for realizing its demands. The cure for short-term thinking in politics that Guldi and Armitage propose is the systematic cultivation of long-term thinking in history.

And to begin with, that means putting the norms of what they call “microhistory” in context – keeping in mind that it is really a fairly recent development within the profession. (Not so many decades ago, a historical study covering no more than a hundred years ran the risk of being dismissed as a bit of a lightweight.) The authors call for a revival of the great Fernand Braudel’s commitment to study historical processes “of long, even of very long, duration,” as he said in the late 1950s.

Braudel’s longue durée was the scale on which developments such as the consolidation of trade routes or the growth of a world religion took place: centuries, or millennia. These phenomena “lasted longer than economic cycles, to be sure,” Guldi and Armitage write, but “were significantly shorter than the imperceptibly shifting shapes of mountains or seas, or the rhythms of nomadism or transhumance.”

Braudel counterposed the longue durée to “the history of events,” which documented ephemeral matters such as wars, political upheaval, and whatnot. The History Manifesto is not nearly so Olympian as that. The aim is not to obliterate what the authors call “the Short Past” but rather to encourage research that would put “events” in the perspective of rhythms of change extending beyond a single human lifetime.

The tools are available. Guldi and Armitage’s proposed course seems inspired by Big Data as much as by Braudel. Drawing on pools of scattered information about “weather, trade, agricultural production, food consumption, and other material realities,” historians could create broad but detailed accounts of how social and environmental conditions change over long periods.

“Layering known patterns of reality upon each other,” the authors say, “produces startling indicators of how the world has changed – for instance the concentration of aerosols identified from the mid-twentieth century in parts of India have proven to have disrupted the pattern of the monsoon…. By placing government data about farms next to data on the weather, history allows us to see the interplay of material change with human experience, and how a changing climate has already been creating different sets of winners and losers over decades.”

Any number of questions come to mind about causality, the adequacy of available documents, and whether one’s methodology identifies patterns or creates them. But that’s always the case, whatever the scale a historian is working on.

The History Manifesto is exactly as tendentious as the title would suggest -- and if the authors find it easier to make their case against “microhistory” by ignoring the work of contemporary “macrohistorians” … well, that’s the nature of the genre. A few examples off the top of my head: Perry Anderson’s Passages from Antiquity to Feudalism and Lineages of the Absolutist State, Michael Mann’s multivolume study of social power over the past five thousand years, and the essays by Gopal Balakrishnan collected in Antagonistics, which grapple with the longue durée in terms both cosmopolitan and stratospheric. The authors also shuffle quietly past the work of Oswald Spengler, Arnold Toynbee, and Carroll Quigley – an understandable oversight, given the questions that would come up about where megahistory ends and megalomania takes over.

Moments of discretion aside, The History Manifesto is a feisty and suggestive little book, and it should be interesting to see whether much of the next generation of historians will gather beneath its banner.

 


Review of Johanna Drucker, 'Graphesis: Visual Forms of Knowledge Production'

It's taken a while, but we’ve made a little progress on the mathesis universalis that Leibniz envisioned 300 or so years ago – a mathematical language describing the world so perfectly that any question could be answered by performing the appropriate calculations.

Aware that the computations would be demanding, Leibniz also had in mind a machine to do them rapidly. On that score things are very much farther along than he could ever have imagined. And while the mathesis universalis itself seems destined to remain only the most beautiful dream of rationalist philosophy, there’s no question that Leibniz would appreciate the incredible power to store and retrieve information that we’ve come to take for granted. (Besides being a polymathic genius, he was a librarian.)  

Johanna Drucker’s Graphesis: Visual Forms of Knowledge Production, published by Harvard University Press, focuses in part on the capacity of maps, charts, diagrams, and other modes of display to encode and organize information. But only in part: while Drucker’s claims for the power of visual language are less extravagantly ambitious than Leibniz’s for mathematical symbols, it is a matter of degree and not of kind. (The author is professor of bibliographical studies at the Graduate School of Education and Information Studies of the University of California at Los Angeles.)

“The complexity of visual means of knowledge production,” she writes, “is matched by the sophistication of our cognitive processing. Visual knowledge is as dependent on lived, embodied, specific knowledge as any other field of human endeavor, and integrates other sense data as part of cognition. Not only do we process complex representations, but we are imbued with cultural training that allows us to understand them as knowledge, communicated and consensual, in spite of the fact that we have no ‘language’ of graphics or rules governing their use.”

Forget the old saw about a picture being worth a thousand words. Drucker’s claim is not about pictorial imagery, as such. A drawing or painting may communicate information about how a person or place looks, but the forms she has in mind (bar graphs, for example, or Venn diagrams) perform a more complex operation. They convert information into something visually apprehended.

We learn to understand and use these visual forms so readily that they seem almost self-evident. Some people know how to read a map better than others -- but all of us can at least recognize one when we see it. Likewise with tables, graphs, calendars, and family trees. In each case we intuitively understand how the data are organized, if not what they mean.

But the pages of Graphesis teem with color reproductions of 5,000 years’ worth of various modes of visually rendered knowledge – showing how they have emerged and developed over time, growing familiar but also defining or reinforcing ways to apprehend information.

A good example is the mode of plotting information on a grid. Drucker reproduces a chart of planetary movements in that form from a 10th-century edition of Macrobius. But the idea didn’t catch on: “The idea of graphical plotting either did not occur, or required too much of an abstraction to conceptualize.” The necessary leap came only in the early 17th century, when Descartes reinvented the grid in developing analytical geometry. His mathematical tool “combined with intensifying interest in empirical measurements,” writes Drucker, “but they were only slowly brought together into graphic form. Instruments adequate for gathering ‘data’ in repeatable metrics came into play … but the intellectual means for putting such information into statistical graphs only appeared in fits and starts.”

And in the 1780s, a political economist invented a variation on the form by depicting the quantity of various exports and imports of Scotland as bars on a graph – an arresting presentation, in that it shows one product being almost twice as heavily traded as any other. (The print is too small for me to determine what it was.) The advantages of the bar graph in rendering information to striking effect seem obvious, but it, too, was slow to enter common use.

“We can easily overlook the leap necessary to abstract data and then give form to its complexities,” writes Drucker. And once the leap is made, it becomes almost impossible to conceive such data without the familiar visual tools.

If the author ever defines her title term, I failed to mark the passage, but graphesis would presumably entail a comprehensive understanding of the available and potential means to record and synthesize knowledge, of whatever kind, in visual form. Drucker’s method is in large measure inductive: She examines a range of methods of presenting information to the eye and determines how the elements embed logical concepts into images.

While art history and film studies (especially work on editing and montage) are relevant to some degree, Drucker’s project is very much one of exploration and invention. Leibniz’s mathesis was totalizing and deductive; once established, his mathematical language would give final and definitive answers. By contrast, graphesis would entail the regular creation of new visual tools in keeping with the appearance of new kinds of knowledge, and new media for transmitting it.

“The ability to think in and with the tools of computational and digital environments,” the author warns, “will only evolve as quickly as our ability to articulate the metalanguages of our engagement.”

That passage, which is typical, is some indication of why Graphesis will cull its audience pretty quickly. Some readers will want to join her effort; many more will have some difficulty in imagining quite what it is. Deepening the project's fascination, for those drawn to it, is Drucker's recognition of an issue so new that it still requires a name: What happens to the structuring of knowledge when maps, charts, etc. appear not just on a screen, but one responsive to touch? The difficulties that Graphesis presents are only incidentally matters of diction; the issues themselves are difficult. I suspect Graphesis may prove to be an important book, for reasons we'll fully understand only somewhere down the line.


Chegg takes to social media after receiving cease and desist order from Southern Connecticut State U.


Southern Connecticut State tells Chegg that university's contract with Barnes & Noble bans anyone else from marketing textbook rentals to students.

 

New book explores achievements and challenges of China's 'rising research universities'


New book explores achievements and challenges of Chinese research universities as they continue their quest to achieve "world-class" status.

Review of John Urry, 'Offshoring'

Searching JSTOR for the term “globalization” yields well over 89,000 results, which hardly comes as a surprise. But its earliest appearance is odd: it's in an article from the September 1947 issue of The Journal of Educational Research called “A Rational Technique of Handwriting” by H. Callewaert, identified by the editors as a Belgian medical doctor.

The article is the only thing he wrote that I have been able to locate, but it makes clear the man's deep commitment to good penmanship. To establish its teaching on a firm, scientific basis, Callewaert felt compelled to criticize “the notion of globalization” – which, in the first half of the 20th century at least, derived from the work of Jean-Ovide Decroly, a pioneering figure in educational psychology, also from Belgium. (Searching JSTOR for the variant spelling “globalisation” turns up a citation of Decroly in The Philosophical Review as early as 1928.) An interesting paper on Decroly available from the UNESCO website explains that globalization in his sense referred to something that happens around the ages of 6 and 7, as children's experiences of play, curiosity, exercise, formal instruction, and so on develop their “motor, sensory, perceptual, affective, intellectual and expressive capacities” which form “the basis of all their future learning.” In globalization, these activities all come together to form a world.

Callewaert’s complaint was that teaching kids to write in block letters at that age and trusting that “globalization” will enable them to develop the motor skills needed for readable cursive is an error -- whereas teaching them his system of “rational writing,” with its “harmonious coordination of the movements of our fingers,” will do the trick. (There are diagrams.)

Two things sent me off to explore the early history of the term, which is also used in higher mathematics. One of them was a realization that many friends and colleagues -- most of them, probably -- cannot remember a time when references to globalization were not ubiquitous and taken for granted. They’ve heard about it since they were busy with their own globalization, à la Jean-Ovide Decroly.

Glancing back at JSTOR, we find that papers in economics and international relations begin mentioning globalization in the early 1950s, though not very often. Less than one percent of the items using it (in any of its senses) appeared before 1990. After that, the avalanche: more than 77 percent of the references have been in papers published since the turn of the millennium.

A new book by John Urry called Offshoring (Polity) includes a thumbnail intellectual history of the quarter-century or so since the term became inescapable. “At least a hundred studies each year documented the nature and impact of many global processes,” he writes, surely by way of extreme understatement. “Overall, it seemed that economies, finance, media, migration, tourism, politics, family life, friendship, the environment, the internet, and so on, were becoming less structured within nation-states and increasingly organized across the globe.” (Urry, a professor of sociology at Lancaster University, is the author of a number of such studies.)

Globalization was, so to speak, a social order with the World Wide Web as its template: “characterized by mostly seamless jumps from link to link, person to person, company to company, without regard to conventional, national boundaries through which information was historically located, stored, and curated.”

There were worriers, since sinister and even fatal things could also catch a ride in the flow: terrorism, organized crime, ferocious microorganisms, etc. But the prevailing wisdom seemed to be that globalization was an irresistible force, and an irreversible one; that we were getting more cosmopolitan all the time; and that the cure for the ailments of globalization was more globalization (as John Dewey said about democracy).

Such was the rosy color that the concept took on in the 1990s, which now looks like a period of rather decadent optimism. Offshoring is Urry’s look at what we could call the actually existing globalization of today. Its title refers to what Urry considers the central dynamic of the present-day world: a continuous “moving [of] resources, practices, peoples, and monies from one territory to others, often along routes hidden from view.” It is not a new term, and in common usage it calls to mind at least three well-known activities. One is the transfer of manufacturing from a highly industrialized country to one where the costs of production, in particular wages, are much lower. (In its first appearance in JSTOR, from 1987, “offshoring” is used in this sense.) 

Another kind of offshoring is the concealment of assets through banks or business entities set up in countries where financial regulations and taxes are minimal, if they even exist. A company with its headquarters in the Cayman Islands, for example, is unlikely to have an office or personnel there; its affairs can usually be handled through a post-office box. And finally, there is offshore drilling for oil. 

Distinct as these activities are, Urry understands them as converging aspects of a process that defines the dark underside of globalization. Forget the happy vision of goods, services, culture, and information being exchanged through channels that cut across and negate boundaries between nation-states. The reality is one of an increasingly symbiotic relationship between the economies of prosperous societies and the governments of various countries that serve as tax havens:

“[M]ore than half of world trade passes through these havens, almost all High Net Worth Individuals possess offshore accounts enabling tax ‘planning,’ [and] ninety nine of Europe’s largest companies use offshore subsidiaries…. Overall, one quarter to one third of all wealth is held ‘offshore.’ The scale of this offshored money makes the world much more unequal than previous researchers ever imagined. Fewer than 100 million people own the astonishing $21 trillion offshore fortune. This is equivalent to the combined GDPs of the U.S. and Japan, the world’s first and third largest economies.”

With enormous masses of wealth thus safely secured off in the distance -- far from the clutches of the nation-state, which might insist on diverting some of it to schools, infrastructure, etc. -- postindustrial societies must adapt to “offshored” manufacturing and energy resources. (The author has in mind dependence on fuel coming into a country from any source, not just the rigs pumping oil from beneath the seafloor.) At the same time, another sort of offshoring is under way: the ocean itself occupied by shipping platforms so huge that they cannot dock in any harbor, and “arcane ownership patterns at sea which make it almost impossible to pin down and ensure that ships are properly built, maintained, and kept seaworthy.”

Urry’s earlier work has explored the connections between social organization and the experience of space. Here, he seems to take aim at the old claims for globalization as a force for mobility, links across cultures, and even the emergence of a sense of planetary citizenship. The spaces created by offshoring are very different – characterized by concealment, restricted access, and distances between social strata that look like bottomless chasms. Urry's proposed remedy, in a nutshell, is for the nation-state to reimpose taxation on all that extraterritorial wealth. He must have felt obliged to suggest something, but it's like recommending that you escape from a threatening situation by learning to fly. One would appreciate a lesson in how it is to be done.

 

Review of David R. Shumway, "Rock Star: The Making of Musical Icons from Elvis to Springsteen"

Most readers’ first response to David Shumway’s Rock Star: The Making of Musical Icons from Elvis to Springsteen (Johns Hopkins University Press) will be to scan its table of contents and index with perplexity at the performers left out, or barely mentioned. Speaking on behalf of (among others) Lou Reed, Joe Strummer, and Sly and the Family Stone fans everywhere, let me say: There will be unhappiness.

For that matter, just listing the featured artists may do the trick. Besides the names given in the subtitle, we find James Brown, Bob Dylan, the Rolling Stones, the Grateful Dead, and Joni Mitchell – something like the lineup for an hour of programming at a classic rock station. Shumway, a professor of English at Carnegie Mellon University, makes no claim to be writing the history of rock, much less formulating a canon. The choice of artists is expressly a matter of his own tastes, although he avoids the sort of critical impressionism (see: Lester Bangs) that often prevails in rock writing. The author is a fan, meaning he has a history with the music. But his attention extends wider and deeper than that, and it moves in directions that should be of interest to any reader who can get past “Why isn’t _____ here?”

More than a set of commentaries on individuals and groups, Rock Star is a critical study of a cultural category -- and a reflection on its conditions of existence. Conditions which are now, arguably, far along the way to disappearing.

The name of the first rock song or performer is a matter for debate, but not the identity of the first rock star. Elvis had not only the hits but the pervasive, multimedia presence that Shumway regards as definitive. Concurring with scholars who have traced the metamorphoses of fame across the ages (from the glory of heroic warriors to the nuisance of inexplicable celebrities), Shumway regards the movie industry as the birthplace of “the star” as a 20th-century phenomenon: a performer whose talent, personality, and erotic appeal might be cultivated and projected in a very profitable way for everyone involved.

The audience enjoyed what the star did on screen, of course, but was also fascinated by the “real” person behind those characters. The scare quotes are necessary given that the background and private life presented to the public were often somewhat fictionalized and stage-managed. Fans were not always oblivious to the workings of the fame machine. But that only heightened the desire for an authentic knowledge of the star.

Elvis could never have set out to be a rock star, of course – and by the time Hollywood came around to cast him in dozens of films, he was already an icon thanks to recordings and television appearances. But his fame was of a newer and more symbolically charged kind than that of earlier teen idols.

Elvis was performing African-American musical styles and dance steps on network television just a few years after Brown v. Board of Education – but that wasn’t all. “The terms in which Elvis’s performance was discussed,” Shumway writes, “are ones usually applied to striptease: for example, ‘bumping and grinding.’ ” He dressed like a juvenile delinquent (the object of great public concern at the time) while being attentive to his appearance, in particular his hair, to a degree that newspaper writers considered feminine.

The indignation Elvis generated rolled up a number of moral panics into one, and the fans loved him for it. That he was committing all these outrages while being a soft-spoken, polite young man – one willing to wear a coat and tails to sing “Hound Dog” to a basset hound on "The Steve Allen Show" (and later to put on Army fatigues, when Uncle Sam insisted) – only made the star power more intense: those not outraged by him could imagine him as a friend.

Elvis was the prototype, but he wasn’t a template. Shumway’s other examples of the rock star share a penchant for capturing and expressing social issues and cultural conflicts in both their songs and how they present themselves, onstage and off. But they do this in very different ways – in the cases of James Brown and Bob Dylan, changing across the length of their careers, gaining and losing sections of their audience with each new phase. The shifts and self-reinventions were very public and sometimes overtly political (with James Brown's support for Richard Nixon being one example) but also reflected in stylistic and musical shifts. In their day, such changes were sometimes not just reactions to the news but part of it, and part of the conversations people had about the world.  

Besides the size of the audience, what distinguishes the rock star from other performers is the length of the career, or so goes Shumway’s interpretation of the phenomenon. But rewarding as the book can be – it put songs or albums I’ve heard a thousand times into an interesting new context – some of the omissions are odd. In particular (and keeping within the timespan Shumway covers) the absence of Jimi Hendrix, Janis Joplin, and Jim Morrison seems problematic. I say that not as a fan disappointed not to find them, but simply on the grounds that each one played an enormous role in constituting what people mean by the term “rock star.” (That includes other rock stars. Patti Smith elevated Morrison to mythological status in her own work, while the fact that all three died at 27 was on Kurt Cobain’s mind when he killed himself at the same age.)

I wrote to Shumway to ask about that. (Also to express relief that he left out Alice Cooper, my own rock-history obsession. Publishers offering six-figure advances for a work of cultural criticism should make their bids by email.)

“My choices are to some extent arbitrary,” he wrote back. “One bias that shaped them is my preference for less theatrical performers as opposed to people such as David Bowie (who I have written about, but chose not to include here) or Alice Cooper.” But leaving out the three who died at 27 “was more than a product of bias. Since I wanted to explore rock stars’ personas, I believed that it was more interesting to write about people who didn’t seem to be playing characters on stage or record. I agree with you about the great influence of Jim Morrison, Janis Joplin, and Jimi Hendrix, but I don’t think their personas have the complexity of the ones I did write about. And, they didn’t figure politically to the degree that my seven did. The main point, however, is that there is lots of work to be done here, and I hope that other critics will examine the personas [of] the many other rock stars I did not include.”

The other thing that struck me while reading Rock Star was the sense that it portrayed a world now lost, or at least fading into memory. Rock is so splintered, and the "technology of celebrity" so pervasive, that the kind of public presence Shumway describes might no longer be possible.

“The cause is less the prevalence of celebrity,” he replied, “than the decline of the mass media. Stars are never made by just one medium, but by the interaction of several. Earlier stars depended on newspapers and magazines to keep them alive in their fans’ hearts and minds between performances. Radio and TV intensified these effects. And of course, movie studios and record companies had a great deal of control over what the public got to see and hear. The result was that very many people saw and heard the same performances and read the same gossip or interviews. With the fragmentation of the media into increasingly smaller niches, that is no longer the case. The role of the internet in music distribution has had an especially devastating effect on rock stardom by reducing record companies’ income and the listeners’ need for albums. The companies aren’t investing as much in making stars and listeners are buying songs they like regardless of who sings them.”

That's not a bad thing, as such, but it makes for a more compartmentalized culture, while the beautiful thing with rock 'n' roll is when it blows the doors off their hinges.

 

 


Essay on study of ebook publishing

A technological visionary created a little stir in the late ‘00s by declaring that the era of the paper-and-ink book as dominant cultural form was winding down rapidly as the ebook took its place. As I recall, the switch-off was supposed to be complete by the year 2015 -- though not by a particular date, making it impossible to mark your day planner accordingly.

Cultural dominance is hard to measure. And while we do have sales figures, even they leave room for interpretation. In the June issue of Information Research, the peer-reviewed journal’s founder T.D. Wilson takes a look at variations in the numbers across national borders and language differences in a paper called “The E-Book Phenomenon: A Disruptive Technology.” Wilson is a senior professor at the Swedish School of Library and Information Science, University of Borås, and his paper is in part a report on research on the impact of e-publishing in Sweden.

He notes that the Book Industry Study Group, a publishing-industry research and policy organization, reported last year that ebook sales in the United States grew by 45 percent between 2011 and 2012 – although the total of 457 million ebooks that readers purchased in 2012 still lagged 100 million copies behind the number of hardbacks sold the same year. And while sales in Britain also surged by 89 percent over the same period, the rate of growth for non-Anglophone ebooks has been far more modest.

Often it’s simply a matter of the size of the potential audience. “Sweden is a country of only 9.5 million people,” Wilson writes, “so the local market is small compared with, say, the UK with 60 million, or the United States with 314 million.” And someone who knows Swedish is far more likely to be able to read English than vice versa. The consequences are particularly noticeable in the market for scholarly publications. Swedish research libraries “already spend more on e-resources than on print materials,” Wilson writes, “and university librarians expect the proportion to grow. The greater proportion of e-books in university libraries are in the English language, especially in science, technology and medicine, since this is the language of international scholarship in these fields.”

Whether or not status as a world language is a necessary condition for robust ebook sales, it is clearly not a sufficient one. Some 200 million people around the world use French as a primary or secondary language. But the pace of Francophone ebook publishing has been, pardon the expression, snail-like -- growing just 3 percent per year, with “66 percent of French people saying that they had never read an ebook and did not intend to do so,” according to a study Wilson cites. And Japanese readers, too, seem to have retained their loyalty to the printed word: “there are more bookshops in Japan (almost 15,000 in 2012) than there are in the entire U.S.A. (just over 12,000 in 2012).”

Meanwhile, a report issued not long after Wilson’s paper appeared shows that the steady forward march of the ebook in the U.S. has lately taken a turn sideways. The remarkable acceleration in sales between 2008 and 2012 hit a wall in 2013. Ebooks brought in as much that year ($3 billion) as the year before. A number of factors were involved, no doubt, from economic conditions to an inexhaustible demand for Fifty Shades of Grey sequels. But it’s also worth noting that even with their sales plateauing, ebooks did a little better than trade publishing as a whole, where revenues contracted by about $300 million.

And perhaps more importantly, Wilson points to a number of developments suggesting that the ebook format is on the way to becoming its own, full-fledged disruptive technology. Not in the way that, say, the mobile phone is disruptive (such that you cannot count on reading in the stacks of a library without hearing an undergraduate’s full-throated exchange of pleasantries with someone only ever addressed as “dude”) but rather in the sense identified by Clayton Christensen, a professor of business administration at the Harvard Business School.

Disruption, in Christensen’s usage, refers, as his website explains it, to “a process by which a product or service takes root initially in simple applications at the bottom of a market and then relentlessly moves up market, eventually displacing established competitors.” An example he gives in an article for Foreign Affairs is, not surprisingly, the personal computer, which was initially sold to hobbyists -- something far less powerful as a device, and far less profitable as a commodity, than “real” computers of the day.

The company producing a high-end, state-of-the-art technology becomes a victim of its own success at meeting the demands of clientele who can appreciate (and afford) its product. By contrast, the “disruptive” innovation is much less effective and appealing to such users. It leaves so much room for improvement that its quality can only get better over time, as those manufacturing and using it explore and refine its potentials – without the help of better-established companies, but also without their blinkers. By the time its potential is being realized, the disruptive technology has developed its own infrastructure for manufacture and maintenance, with a distinct customer base.

How closely the ebook may resemble the disruptive-technology model is something Wilson doesn’t assess in his paper. And in some ways, I think, it’s a bad fit. The author himself points out that when the first commercial e-readers went on the market in 1998, it was with the backing of major publishing companies (empires, really) such as Random House and Barnes & Noble. And it’s not even as if the ebook and codex formats were destined to reach different, much less mutually exclusive, audiences. The number of ebook readers who have abandoned print entirely is quite small – in the US, about five percent.

But Wilson does identify a number of developments that could prove disruptive, in Christensen’s sense. Self-published authors can and do reach large readerships through online retailers. The software needed to convert a manuscript into various ebook formats has become more readily available, and people dedicated to developing the skills could well bring out better-designed ebooks than well-established publishers do now. (Alas! for the bar is not high.)

Likewise, I wonder if the commercial barriers to ebook publishing in what Wilson calls “small-language countries” might not be surmounted in a single bound if the right author wrote the right book at a decisive moment. Unlike that Silicon Valley visionary who prophesied the irreversible decline of the printed book, I don’t see it as a matter of technology determining what counts as a major cultural medium. That’s up to writers, ultimately, and to readers as well.

