Students at OCAD University, an arts institution in Toronto, are furious about a required custom textbook for an art course that costs $180 yet contains no illustrations. Petitions are attracting signatures, bloggers are expressing outrage, and word is spreading. The university notes that students have access to online versions of the art discussed in the book, and that the customized textbook was an attempt to save students money by combining several books. University officials said that obtaining the rights to the art would have resulted in a huge increase in costs. Still, they have scheduled a meeting with students later in the week to talk about the issues.
University presses are a reticent lot. We flourish offstage, delighted to shine the spotlight on our authors and their extraordinary works. We want them to get the glory; for ourselves, we hope only for enough reflected light to reveal our individual imprints as standards of excellence. Our books and journals speak not only for themselves, but for us.
Apparently, they don't speak loudly enough. Our modesty -- perhaps a virtue in other times -- has become a liability. Many university presses face serious budget cuts and other convulsive changes. In recent months the University of Missouri, having first announced the closing of its press, reversed course to declare the press would remain open, but operate under a drastically different model. The university subsequently announced that the press will retain many of its original staff, features, and goals. After the highly publicized and contentious deliberations, University of Missouri President Tim Wolfe stated that "my goal is to develop a press that is vibrant and adaptive...."
If university presses spent more time beating our own drum, President Wolfe might have recognized before he first acted that there are few modern educational institutions as adaptive as university presses. In a rapidly changing publishing culture, that's precisely what we must do and have been doing to remain vibrant. Indeed, Wolfe’s stated goal for the University of Missouri Press helps to define the next chapter in our challenge to discharge our scholarly mission.
High-quality scholarship is now a necessary but insufficient benchmark for success. Economic scarcity has increased competition within the university for shrinking resources while digital technologies and the web have created the misperception that publishing is simple and cheap. It isn’t. Yet, we directly contribute to the university’s teaching and research missions in a way that results in the widest possible dissemination of scholarship at the lowest possible cost.
Universities generally perceive their presses (if they have them — only about 90 North American universities do) as being relatively small units focused on the humanities and social sciences, areas that themselves have constituted smaller and smaller pieces of overall university allocation and focus. Our budgets are small, especially compared to those of academic divisions or of the university library. But our need for financial support when we already sell a product puzzles many administrators and creates the notion that we are not successful, critical acclaim for our products notwithstanding. Too many of our colleagues think we’re resisting the shift to digital scholarship, instead focusing on dull old print technologies. We aren’t hip and we don’t want to see that information wants to be free.
All too often university administrators don’t see their press as essential to the university’s core mission. With all due respect, they couldn’t be more wrong — but the failure to demonstrate our importance rests with us and we will begin to correct that failure now.
A revolution is taking place in scholarly communications. From something as broad as the development and evolution of the web to technology as narrow as digital print machines, changes in production, distribution, marketing (yes, even scholarship requires marketing to reach its broadest audience), and selling can and must follow. Such change requires new business models, and we’re developing them; if managed well, they could allow universities and their faculty more control over the information they create but too often cede to others.
University presses are one of the few centers of expertise in scholarly communication to be found on any campus, and their knowledge is broader than that of any other entity. Librarians are acutely aware of some dissemination issues, like price, but less so of cost and business models. Academic computing center staff know the technical aspects of the web and are hands-down the experts on hardware. But in the broadest context of scholarly communication it is presses, charged with recovering on average 80 percent of their operating costs, that have the greatest expertise in all aspects of the big picture.
From conducting peer review (a critical step that distinguishes scholarship from other forms of publication) to creating metadata that allow broad discovery of scholarship to experimenting with innovative ways to provide that scholarship to libraries, faculty, and students on a lower cost-per-page basis than commercial scholarly publishing entities, we have been building expertise for years. It is expertise sometimes learned at each individual press, but especially in recent years also from cooperative ventures ranging from common production, marketing, and fundraising efforts to coalitions to expand international markets. That expertise can be used to help the university create the infrastructure it needs to lessen the cost of scholarship purchased from other entities.
It is self-evident that the books and journals we publish benefit faculty in their roles as authors, researchers, and teachers. Less evident is that our conduct of peer review and the luster of our imprints together support the tenure and promotion system that has characterized American higher education for generations. Sadly, this system has allowed colleges and universities without presses to "free ride" on the backs of those that have them; it costs them no more than the university press books and journals they choose to buy. Any solution to university press support might do well to address such freeloading.
Less recognized in the academic world is the degree to which university presses, through their publications, serve students. It is true that few presses publish core textbooks such as “Introduction to Economics” (though that’s an area where we are helping in the development of open-access texts), but a very large proportion of the books read either alongside or in lieu of a core text are university press publications. Indeed, our lifetime best-selling books are virtually always those read in undergraduate and graduate courses.
University presses have become the leading regional publishers in the country. State university presses in particular have played a major role in publishing books that help citizens recognize and celebrate what makes home, home. From histories to natural histories to cookbooks and sports books, we help give American citizens a better sense of who they are.
Finally, the dissemination and sale of university press products throughout the world has helped spread awareness of our individual universities more broadly than any other single product — including the football team. Scholars around the world are acutely aware of Temple University Press’s pioneering and prize-winning Asian American studies, while LSU Press’s four Pulitzer Prizes bring renown to its commitment to literature that matters. The University of Minnesota Press enjoys the same global accolades for its critical and social theory list and for bringing innovative European thought to North America through its well-known translation program. In all cases, the light shone on the press reflects the parent university’s commitment to serious, cutting-edge scholarship.
University presses have enriched American education and American intellectual life for over a century. These are tough times to be sure, and presses today need to share in the sacrifices being made by all parts of the university. But it will be a long-term mistake if the expertise and contributions of presses are sacrificed to resolve short-term budget problems.
Alex Holzman, Douglas Armato and MaryKatherine Callaway
Alex Holzman is director of Temple University Press, Douglas Armato is director of the University of Minnesota Press and MaryKatherine Callaway is director of LSU Press. All are former presidents of the Association of American University Presses.
Internet2 and Educause, two higher-ed technology organizations, announced on Tuesday that they are expanding a group purchasing effort that allows member institutions to purchase access to e-textbooks from McGraw-Hill at a discounted price. The effort, which began in January with five universities, "aims to advance a new model for the purchase, distribution, and use of electronic textbooks and digital course materials," according to a press release. The program added 20 additional institutions on Tuesday, including both small liberal arts colleges and large state universities. The idea is that negotiating deals for e-textbook access at the institutional level, as a group, will make it cheaper and easier for colleges and universities to support professors who want to take their courses digital. The first five universities to sign on recently collaborated on a report summarizing the experiences of students and professors in the first semester of the pilot. The results were mixed.
This year is the centenary of James Harvey Robinson’s book The New History: Essays Illustrating the Modern Historical Outlook, which made a case for teaching and writing about the past as something other than the record of illustrious men gaining power and then doing things with it.
“Our bias for political history,” he wrote, “led us to include a great many trifling details of dynasties and military history which merely confound the reader and take up precious space that should be devoted to certain great issues hitherto neglected.” The new breed of historians, such as the ones Robinson was training at Columbia University, would explore the social and cultural dimensions of earlier eras -- “the ways in which people have thought and acted in the past, their tastes and their achievements in many fields” – as well as what he called “the intricate question of the role of the State in the past.”
One hundred years and several paradigm shifts later, this “new history” is normal history; it’s not obvious why Robinson’s effort was so provocative at the time. You can see how it might have upset turf-protecting experts concerned with, say, whether or not Charles the Bald was actually bald. But it also promised to make connections between contemporary issues and knowledge of the past -- or threatened to make those connections, to put it another way.
Hold that thought for now, though. Jumping from 1912 to the present, let me point out a new collection of papers from the University of Georgia Press called Doing Recent History, edited by Claire Bond Potter and Renee C. Romano. (Potter is professor of history at the New School, Romano an associate professor of history at Oberlin College.)
There’s something puzzlingly James Harvey Robinson-ish about it, even though none of the contributors give the old man a nod. It must be a total coincidence that the editors are publishing the collection just now, amidst all the centennial non-festivities. And some of Robinson’s complaints about his colleagues would sound bizarre in today’s circumstances – especially his frustration at their blinkered sense of what should count as topics and source materials for historical research. “They exhibit but little appreciation of the vast resources upon which they might draw,” he wrote, “and unconsciously follow for the most part, an established routine in their selection of facts.”
As if in reply, the editors of Doing Recent History write: “We have the opportunity to blaze trails that have not been marked in historical literature. We have access to sources that simply do not exist for earlier periods: in addition to living witnesses, we have unruly evidence such as video games and television programming (which has expanded exponentially since the emergence of cable), as well as blogs, wikis, websites, and other virtual spaces.”
No doubt cranky talk-show hosts and unemployed Charles the Bald scholars will take umbrage at Jerry Saucier’s paper “Playing the Past: The Video Game Simulation as Recent American History” – and for what it’s worth, I’m not entirely persuaded that Saucier’s topic pertains to historiography, rather than ethnography. But that could change at some point. In “Do Historians Watch Enough TV? Broadcast News as a Primary Source,” David Greenberg makes the forceful argument that political historians tend to focus on written material to document their work: a real anachronism given TV’s decisive role in public life for most of the period since World War II. He gives the example of a sweeping history of the Civil Rights movement that seemed to draw on every imaginable source of documentation -- but not the network TV news programs that brought the struggle into the nation's living room. (The historian did mention a couple of prime-time specials, but with no details or reason to suppose he'd watched them.) Likewise, it’s entirely possible that historians of early 21st-century warfare will need to know something about video games, which have had their part in recruiting and training troops.
Besides the carefully organized, searchable databases available in libraries, historians have to come to terms with the oceans of digital text created over the past quarter-century or so -- tucked away on countless servers for now, but posing difficult questions about archiving and citation. The contributors take these issues up, along with related problems about intellectual property and the ethical responsibility of the historian when using documents published in semi-private venues online, or deposited in research collections too understaffed to catch possible violations of confidentiality.
In “Opening Archives on the Recent Past: Reconciling the Ethics of Access and the Ethics of Privacy,” Laura Clark Brown and Nancy Kaiser discuss a number of cases of sensitive information about private citizens appearing in material acquired by the Southern Historical Collection of the University of North Carolina at Chapel Hill. For example, there's the author whose papers include torrid correspondence with a (married) novelist who wouldn't want his name showing up in the finding aid. Brown and Kaiser also raise another matter for concern: “With the full-text search capabilities of Google Books and electronic journals, scholarly works no longer have practical obscurity, and individuals could easily find their names and private information cited in a monograph with even a very small press run.”
The standard criticism of James Harvey Robinson’s work among subsequent generations of professional historians is that his “new history” indulges in “presentism” – the sin of interpreting the past according to concerns or values of the historian’s own day. In Robinson’s case, he seems to have been a strong believer in the virtues of scientific progress, in its continuing fight against archaic forms of thought and social organization. With that in mind, it’s easier to understand his insistence that social, cultural, and intellectual history were at least as important as the political and diplomatic sort (and really, more so). Students and the general public were better off learning about “the lucid intervals during which the greater part of human progress has taken place,” rather than memorizing the dates of wars and coronations.
None of the contributors to Doing Recent History are nearly that programmatic. Their main concern is with the challenge of studying events and social changes from the past few decades using the ever more numerous and voluminous sources becoming available. Robinson’s “new history” tried to make the past interesting and relevant to the present. The “recent history” people want to generate the insights and critical skills that become possible when you learn to look at the recent past as something much less familiar, and more puzzling, than it might otherwise appear. I'm struck less by the contrast than the continuity.
Robinson would have loved it. In fact, he even anticipated their whole project. “In its normal state,” he wrote one hundred years ago, “the mind selects automatically, from the almost infinite mass of memories, just those things in our past which make us feel at home in the present. It works so easily and efficiently that we are unconscious of what it is doing for us and of how dependent we are upon it.”
Our memory -- personal and cultural alike -- “supplies so promptly and so precisely what we need from the past in order to make the present intelligible that we are beguiled into the mistaken notion that the present is self-explanatory and quite able to take care of itself, and that the past is largely dead and irrelevant, except when we have to make a conscious effort to recall some elusive fact.” That passage would have made a good epigraph for Doing Recent History, but it’s too late now.
In June, Inside Higher Ed told readers about Princeton University Press’s impending experiment with a political-science volume on the 2012 presidential election: It would make excerpts from the work-in-progress available online free while the campaign was still under way. It required a “truncated timetable” for peer review -- getting the readers’ reports back in two or three weeks instead of a few months.
Given the stately pace of scholarly publishing, such a turnaround counts as feverish. By the standards of punditry, it’s almost languorous. The idea was to give the public access to portions of The Gamble: Choice and Chance in the 2012 Election just as the convention season began.
And so they are. Two chapters are now available for download from the Princeton website, here and here. The authors, John Sides and Lynn Vavreck, also have a website for the book. (They are associate professors of political science at George Washington University and the University of California at Los Angeles, respectively.) The material runs to about a hundred pages of text.
It would be hard to read Sides and Vavreck’s work during the conventions, amid all the funny hats and confetti. But their research puts a couple of things about the campaigns into perspective. Keep in mind that the authors are responding not just to data (most of it quantitative) but to the received wisdom of the past several months regarding the campaign -- and on two points in particular.
Each is an assessment of a candidate’s presumed vulnerabilities.
The first holds that President Obama’s chances of re-election depend -- more than anything else, and perhaps even exclusively -- on the state of the economy. Incumbency has its advantages, but unemployment rates could trump them. The second is that Mitt Romney lacks the support of his party’s base, which is considerably to the right of him on both social and economic issues. Romney doesn’t suffer from Sarah Palin’s very negative approval rating among the public at large, but he can’t count on the support of her followers, Twitter and otherwise.
In social-science books, the methodology is usually as salient as the findings themselves. Each chapter comes with an appendix stuffed with additional analysis. Suffice it to say that a few patterns have emerged from studies of presidential campaigns in the past. The challenge is to move from generalizations about yesteryear to the electoral battle now unfolding.
For example, the country’s economic performance during a president’s administration -- but especially in the months just before the election -- is a pretty solid index of his re-electability. In the sixteen presidential elections between 1948 and 2008, changes in gross domestic product between January and September of the election year tracked closely to the fortunes of the incumbent party’s candidate. “It’s hard to beat an incumbent party in a growing economy,” Sides and Vavreck write, “and even harder to beat the actual incumbent himself.”
When the change in GDP over the three quarters preceding the election is negative, the incumbent party’s presidential candidate is sure to lose -- at least if the examples of Nixon (1960), Carter (1980), and McCain (2008) are anything to go by. The stronger the economic contraction, the bigger the defeat.
But the pattern of the past 60 years isn’t much help for handicappers of the race now under way. GDP during the first quarter of 2012 grew an incumbent-friendly 2 percent, while the initial estimate for the second quarter was 1.5 percent growth. As it happens, this column is running on August 29, when the Bureau of Economic Analysis is scheduled to issue a revised estimate of second-quarter growth based on additional data. (And the first estimate of GDP in the third quarter isn’t out until 12 days before the election.)
In any case, the GDP itself can give only a rough sense of how voters experience and understand the economy. Sides and Vavreck have developed a model that correlates public-opinion poll results from each quarter between 1948 and 2008 with a number of other data points. These include three economic factors (unemployment and inflation rates, plus the change in GDP between quarters) as well as “events such as scandals and wars that might push approval [ratings] up and down” and the president’s length of time in office, counted in quarters.
From all of this information, the authors extracted a general model of how much each factor counted in determining the presidential approval ratings. Then they ran all the numbers again to see how well the general model could retroactively “predict” the changes in each president’s approval ratings from quarter to quarter. And the model proved good at it. The actual quarterly ratings were usually quite close to what the formulas expected, given the economic and other factors in play.
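For readers curious about the mechanics, a model of this general shape can be sketched as an ordinary least-squares regression. Everything below is illustrative only: the data, the coefficients, and the variable list are invented for the sketch and are not the authors' actual dataset or specification.

```python
import numpy as np

# Hypothetical stand-in for the kind of model described above:
# quarterly approval regressed on economic and political factors.
rng = np.random.default_rng(0)
n = 240  # roughly the quarters between 1948 and 2008

X = np.column_stack([
    np.ones(n),                  # intercept
    rng.normal(6.0, 1.5, n),     # unemployment rate (%), made up
    rng.normal(3.0, 2.0, n),     # inflation rate (%), made up
    rng.normal(0.8, 0.7, n),     # quarterly GDP change (%), made up
    rng.normal(0.0, 1.0, n),     # event shocks (scandals, wars), made up
    rng.integers(1, 17, n),      # quarters in office
])

# Synthetic approval ratings generated from assumed coefficients plus noise.
true_beta = np.array([70.0, -2.0, -1.0, 3.0, 2.5, -0.5])
y = X @ true_beta + rng.normal(0, 3, n)

# Fit by least squares; each fitted beta says how much that factor
# "counts" toward approval in this toy version of the general model.
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)

# Retroactive "prediction": the fitted approval series to compare
# against the observed one, quarter by quarter.
predicted = X @ beta_hat
residuals = y - predicted
```

Running the model forward on new quarters (as the authors do for 2009-2011) is then just `X_new @ beta_hat`; the gap between predicted and actual approval is what the chapter reads as a president over- or under-performing expectations.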
Plugging in relevant data for 2009-2011, the authors generated a graph showing the approval ratings that would be expected given the tendencies of the previous six decades. Here things get interesting:
“Although early on in his presidency Obama was slightly less popular than expected (by about 1 percent throughout most of 2009 and 2010), by the end of 2010 and continuing into 2012, he was more popular. In 2011, his popularity exceeded expectations by over 6 points. This feat is something that few presidents have accomplished. Only one president, Ronald Reagan, consistently ‘beat’ the prediction in his first term to an extent greater than Obama.”
The authors suggest that, if anything, their model may have overestimated the level of Obama popularity that might be expected if all things were equal, relative to earlier presidencies -- which they weren’t. The economic slump that began in 2008 has been deeper, and lasted longer, than any over the previous 60 years. High unemployment yields diminished approval ratings, of course -- but compounding it with a rise in long-term unemployment should presumably push them down even harder.
At the same time, the model does not account for what the authors call “the ‘penalty’ of his race” -- the marked tendency of those with negative attitudes toward black people in general to disapprove of Obama in particular. Sides and Vavreck estimate that his approval rating might be up to four points higher if not for his skin color.
In short, Obama entered the 2012 campaign with considerably more support than one might expect given the lackluster economy. The authors leave it to others to speculate on the source of this strength. But what about his opponent? Isn't Mitt Romney out of step with the rest of his party -- hence vulnerable to conservatives staying home?
When he emerged from the Republican primary season a few months back, Romney seemed less a victor than the last man standing. And inexplicably so: until a few years back, he spoke in favor of both Roe v. Wade and LGBT equality. And Jonathan Gruber, the economist at the Massachusetts Institute of Technology and "intellectual architect" of the healthcare reform bill that Romney crafted while governor of Massachusetts, has compared it to Obamacare in colorful terms: “it’s the same [flippin’] bill.” Shouldn’t he be exhibited by the Smithsonian Institution as the last surviving member of an extinct species, the Rockefeller Republican?
Sides and Vavreck challenge the idea that the Republican primary process revealed a deep yearning by conservatives for “Anyone But Romney.” All the other candidates courted them assiduously, only to be done in by scandal or gaffe or the inability to remember which government programs he or she intended to close down, once in office. Romney was what the party had left. (Actually "had left" is probably a bad way of putting it.)
The authors concede that Romney “never ‘surged’ in the polls" in late 2011 and early ’12, "and never experienced the reinforcing cycle of positive news coverage and gains in the polls.” As a result, he "appear[ed] to be a weak candidate, unloved by many in the party. But this also concealed the underlying structure of the race, which tilted in his favor.” A poll from last December showed that he “was viewed positively by likely Republican primary voters whether they were conservatives or moderates, pro-life or pro-choice, relatively wealthy or not.” More than two-thirds of the Tea Party members surveyed expressed a favorable opinion of him, with non-Tea Party people doing so at the same rate.
The authors make their case with charts, graphs, and whatnot, but looking at them, I felt some cognitive dissonance. It’s hard to shake the impression that the GOP has a sizable wing so far to the right of Romney that he had to placate it with a veep candidate with stronger conservative credentials. When I raised the issue with the authors by e-mail, Sides replied that "people have overestimated two things about GOP voters: (1) just how conservative they are (or perceive themselves), relative to how they perceived Romney; and (2) how much ideology drove their feelings about Romney and the other candidates.” That misperception was strengthened by the “media boomlets” that seem intrinsic to the 24-hour news cycle.
“When news coverage focused on a candidate other than Romney and that candidate had conservative bona fides,” Sides continued, “then conservatives were more likely to vote for that person than Romney…. But this does not mean they were implacably opposed to Romney. Preferring another candidate to Romney is not the same as opposing Romney.” He may have won out “not because he was ideologically who every conservative activist or voter wanted, but because he was the compromise candidate of the party's various factions. It doesn't mean he was widely loved, but he was satisfactory to all. Which makes him like most other presidential candidates, really.”
The authors are still analyzing the primary season while also following the latest twists and turns of the process. I wondered if that meant the chapters now available were working drafts of a sort.
“We will probably rework the chapters a little bit,” Lynn Vavreck wrote back, “but not very much I suspect. We may adjust some of the error or uncertainty estimates, but the general take-aways will remain the same.”
Writing a monograph with the campaign still in motion is a way to shake things up some in the discipline. “It bothered us that parties, candidates, consultants, and journalists had better data on campaigns and elections than political scientists had -- and we wanted to be a part of what was happening, when it was happening, so we could share in those data and use them in real time.”
They hope the project serves as a model to others, while acknowledging that it’s “not the kind [of effort] that academics are typically strong on making -- partnerships have to be forged, things have to be delivered on deadline, and you have to promote your results and your work to a wider audience.” It sounds like what anyone else engaged in politics must do, except with a bibliography.
Most volumes by Jürgen Habermas appearing in English over the past decade have consisted of papers and lectures building on the theory of communicative action and social change in his earlier work, or tightening the bolts on the system. Some are technical works only a Habermasian could love. But a few of the books have juxtaposed philosophical writings with political journalism and the occasional interview with him in his role as public intellectual of global stature.
The latest such roundup, The Crisis of the European Union: A Response -- published in Germany late last year and in translation from Polity this summer – is probably the most exasperated of them as well. Very few contemporary thinkers have laid out such a comprehensive argument for the potential of liberal-democratic societies to reform and revitalize themselves in a way that would benefit their citizens while also realizing the conditions of possibility for human flourishing everywhere else.
The operative term here being, of course, “potential.” When you consider that his recent collections The Divided West (2006) and Europe: The Faltering Project (2009), also both from Polity, are now joined by one with “crisis” in the title, it’s clear that unbridled optimism is not a distorting element in Habermas’s world view. But the sobriety has turned into something closer to frustration in his latest interventions.
The earliest text in the new book first appeared in November 2008 – a time when the initial impact of the financial crisis made many people assume that the retooling of major institutions was so urgent as to be imminent. Habermas was more circumspect about it than, say, folks in the United States who imagined Obama as FDR redivivus. But although he has long been the most moderate sort of mildly left-of-center reformist, the philosopher did permit himself to hope.
Might not the U.S., “as it has done so often in the past,” he said, “pull itself together and, before it is too late, try to bind the competing major powers of today – the global powers of tomorrow – into an international order which no longer needs a superpower?” If so, “the United States would need the friendly support of a loyal yet self-confident ally in order to undertake such a radical change in direction.”
That would require the European Union to learn “to speak with one voice in foreign policy and, indeed, to use its internationally accumulated capital of trust to act in a farsighted manner itself.” A common EU foreign policy would only be possible if it had a more coherent economic policy. “And neither could be conducted any longer through backroom deals,” he wrote, “behind the backs of the populations.”
Habermas suffered no illusions about how likely such changes might be. But he treated late ’08 as a moment when “a somewhat broader perspective may be more needful than that offered by mainstream advice and the petty maneuvering of politics as usual.” (Fatalism, too, is an illusion, and one that paralyzes.)
The appendix to Crisis reprints some newspaper commentaries that Habermas published in 2010 and ’11, as the crisis of the Euro exposed the shakiness of “an economic zone of continental proportions with a huge population but without institutions being established at the European level capable of effectively coordinating the economic policies of the member states.” This gets him riled up. He is particularly sharp on the role of the German Federal Constitutional Court’s “solipsistic and normatively depleted mindset.”
He also complains about “the cheerful moderators of the innumerable talk shows, with their never-changing line-ups of guests,” which kill the viewer’s “hope that reasons could still count in political questions.”
A seemingly more placid tone prevails in his two scholarly texts on the juridification (i.e., codifying and legal enforcement) of democratic and humanitarian values. But there is a much tighter connection between Habermas’s fulminations and his conceptual architecture than at first appears.
Another recent volume, Shivdeep Singh Grewal’s Habermas and European Integration: Social and Cultural Modernity Beyond the Nation-State (Manchester University Press), starts with a review of Habermas’s changing attitudes towards European unification over the past 30 years. Then Grewal -- an independent scholar who has taught at Brunel University and University College London -- reconstructs pertinent aspects of Habermas’s scholarly work over roughly the same period, surveying it in the context of the philosopher’s developing political concerns.
Using the political journalism as a way to frame his thinking about modernity is an unusual approach, but illuminating, and it avoids the familiar tendency in overviews of Habermas’s work to treat his books as if they spawned one another in turn.
To summarize things to a fault: From the U.S. and French revolutions onward, the nation-state was best able to secure its legitimacy through constitutional democracy. However limited in scope or restricted in mandate it was at the start, constitutional democracy opened up the possibility for public challenges to authority grounded on nothing more than tradition or inertia, which could in turn make for greater political inclusiveness. It could even try to protect its more vulnerable citizens and mitigate some kinds of inequality and economic dislocation.
Thus public life would expand and grow more various and complex, since more people would have access to more possibilities for decision-making. And that, in turn, demands a political structure both firm and flexible. Which brings us back to constitutional-democratic governance. A virtuous circle!
Actual constitutional democracies were another matter, but the idea at least provided a normative model, something to shoot for. But the problems faced by nation-states cut across borders; and the more complex they become, the less power over them the separate states have. The point of creating a united Europe, from Habermas’s perspective, was, Grewal writes, “the urgent task of preserving the democratic and welfarist achievements of the nation state ‘beyond its own limits.’ ”
Habermas makes the point somewhere that institutions making decisions about transnational issues are going to exist in any case. Whether they will be accountable is another matter. Establishing a constitutional form of governance that goes beyond the nation-state would involve no end of difficulty in principle, let alone in practice, but it is essential.
“Habermas acknowledges the 'laborious' and incremental learning process of the German government,” Grewal told me, “whilst bemoaning the lack of sufficiently bold and courageous politicians to take the European project forward.…The alternative to the transnationalization of democracy is, Habermas continues to suggest, a sort of post-democratic 'executive federalism', with shades of the opinion poll-watching, media-manipulating approach of figures such as Berlusconi and Putin.”
He acknowledges that there are people who don’t see this as an either-or option. It’s possible to have both continent-spanning constitutional democracy and a political system in which media manipulation and pandering ensure that decision-making continues behind closed doors. Is it ever....
But even aside from that, why does Habermas count on bold and courageous politicians for the kind of change he wants? Part of his frustration, no doubt, is that he’s counting on the actions of people who don’t exist, or get sidelined quickly if they do. Democracy doesn’t come from on high. I respect the man's intentions and persistence, but wish he would come up with a better strategy.
Right after last month’s shootings in Aurora, Colo., I started reading George Michael’s Lone Wolf Terror and the Rise of Leaderless Resistance (Vanderbilt University Press) as well as a few recent papers on solo-organized political violence. It proved easy to put off writing a column on this material. For one thing, the official publication date for Lone Wolf Terror isn’t until mid-September. Plus, a single bloodbath is grim enough to think about, let alone a trend toward bloodbaths.
But the most pertinent reason for not writing about the book following the Aurora massacre was simply that James Holmes (whom we are obliged by the formalities to call “the alleged gunman,” though nobody has disputed the point) didn’t really qualify as an example of lone-wolfdom, at least as defined in the literature. In “A Review of Lone Wolf Terrorism: The Need for a Different Approach,” published earlier this year in the journal Social Cosmos, Matthijs Nijboer marks out the phenomenon’s characteristics like so:
“Lone wolf terrorism is defined as: '[…] terrorist attacks carried out by persons who (a) operate individually, (b) do not belong to an organized terrorist group or network, and (c) whose modi operandi are conceived and directed by the individual without any direct outside command or hierarchy' ... Common elements included in several accepted definitions [of terrorism] include the following: (1) calculated violence, (2) that instills fear, (3) motivated by goals that are generally political, religious or ideological. These guidelines help distinguish [lone-wolf] terrorist attacks from other forms of violence.”
The actions of Ted Kaczynski and Anders Breivik fall under the heading of lone-wolf terrorism. They had what they regarded as reasons, and even presented them in manifestoes. So far, James Holmes has given no hint of why he shot people and booby-trapped his apartment with explosives. If he ever does put his motives into words, it’ll probably be something akin to Brenda Ann Spencer’s reason for firing on an elementary school in 1979: “I don’t like Mondays. This livens up the day.” Something about Holmes dyeing his hair so that he looks like a villain from "Batman" gives off the same quality of insanity tinged with contempt.
George Michael, the author of Lone Wolf Terror and the Rise of Leaderless Resistance, is an associate professor of nuclear counterproliferation and deterrence at the Air War College. He does not completely dismiss psychopathology as a factor in lone-wolf violence (bad neurochemistry most likely played as big a role in both Kaczynski’s and Breivik’s actions as ideology did, after all). But for the most part Michael treats lone-wolf violence as a new development in the realm of strategy and tactics – something that is emerging as a response to changes in the ideological and technological landscapes.
As it happens, the book appears during the 20th anniversary of the prophetic if ghastly document from which Michael borrows part of his title: “Leaderless Resistance,” an essay by Louis Beam, whom Michael identifies in passing as “a firebrand orator and longstanding activist.” Fair enough, although “author of Essays of a Klansman” also seems pertinent.
Beam’s argument, in brief, was that the old-model hate group (one that recruited openly, held public events, and believed in strength through numbers) was now hopelessly susceptible to surveillance and infiltration by the government, as well as vulnerable to civil suits. The alternative was “phantom cells,” ideally consisting of one or two members at most and operating without a central command.
As Michael notes, Beam’s essay from 1992 bounced around the dial-up bulletin boards of the day, but it also bears mentioning that the boards were a major inspiration for Beam’s ideas in the first place. (He set up one for the Aryan Nations in 1984.) Versions of the leaderless-resistance concept soon caught on in other milieus that Michael discusses, such as the Earth Liberation Front and the Islamicist/jihadist movements. It’s improbable that Beam’s writings were much of an influence on these currents. More likely, Beam, as an early adopter of a networked communication technology, came to anti-hierarchical conclusions about how risky activity might be organized that others would reach on their own, a few years later.
The other technological underpinning of small-scale or lone-wolf operations is the continuous development of ever more compact and deadly weaponry. Bombs and semiautomatic firearms are the most practical options for now, though the information is out there for anyone trying to build up a private atomic, biological, or chemical arsenal. Factor in the vulnerable infrastructure that Michael lists (including pipelines, electrical power networks, and the information sector) and it’s clear how much potential exists for mayhem unleashed by a single person.
In the short term, Michael writes, “increased scrutiny by law enforcement and intelligence agencies will continue to make major coordinated terrorist activities extremely difficult, but not impossible. Although the state’s capacity to monitor is substantial, individuals can still operate covertly and commit violence with little predictability. Leaderless resistance can serve as a catalyst spurring others to move from thought to action, in effect inspiring copycats.”
And in the longer term, he regards all of it as the possible harbinger of a new mode of warfare in which lone-wolf combatants have a decisive part -- with leaderless resistance already a major factor in shaping the globalized-yet-fragmented 21st century.
Maybe so. Something horrible could happen to confirm his beliefs before you finish reading this sentence. But just as sobering are the findings from a study conducted by the Institute for Security and Crisis Management, a think tank in the Netherlands. The researchers found that lone-wolf attacks represented just over 1 percent of all the terrorist incidents in their survey of a dozen European countries plus Australia, Canada, and the United States between January 1968 and May 2007. “Our findings further seem to indicate that there has not been a significant increase in lone-wolf terrorism in [all but one of the] sample countries over the past two decades.”
Only in the U.S. did lone-wolf attacks account for more than a “marginal proportion” of terrorism, “with the U.S. cases accounting for almost 42 percent of the total;” 80 percent of them involved domestic rather than international issues. The report suggested the "significant variation" from the norm in other countries in the study "can partly be explained by the relative popularity of this strategy among white supremacists and anti-abortion activists in the United States." In any event, the researchers found that as of 2007, the trend toward lone-wolf terror had been growing markedly in the U.S., if not elsewhere.
Something else I'd rather not think about. A few days after I put Lone Wolf Terror to the side for a while, there came news of the shootings at the Sikh temple in Wisconsin. You can tune these things out for only so long. They always come back.
Keeping the costs of textbooks and other learning tools as low as possible for today’s college students is a goal almost everyone can agree upon. How to accomplish that goal, however, is another matter entirely.
And pursuing that goal in the courts, where sweeping decisions can render in a minute what might otherwise take years to implement, is risky at best and counterproductive at worst.
Sometimes, however, savings for students can be found in the most unlikely of places. To prove my point, take a close look at Cambridge University Press v. Becker, widely known as the Georgia State University (GSU) E-Reserves case, initially ruled upon three months ago by U.S. Federal District Court Judge Orinda Evans, who issued a further ruling last Friday.
Most of the press coverage of Judge Evans’s ruling concentrated on its delineation of the many ways that colleges can continue to cite the doctrine of “fair use” to permit their making copies of books and other materials for use in teaching and the pursuit of scholarship. And, to be fair (pardon the pun), in 94 of the 99 instances claimed by academic publishers such as Cambridge, Oxford and Sage to be violations of copyright, the judge did rule that GSU and its professors were covered by fair use.
But in its fair use assessment, the court made two important rulings: (1) it created a bright-line rule for the amount of text that can be copied; and (2) it established that when publishers make excerpts available for licensing (particularly in digital form), the publisher has a better chance of receiving those licensing fees (i.e., the copying is less likely to be held fair use). With regard to the first ruling, the key point is that the guesswork has been taken out. Specific amounts allowable for copying have been set: 10 percent of a book with fewer than 10 chapters, or one chapter of a book with more than 10 chapters.
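For readers who want to see the bright-line rule as a mechanical check, here is a minimal sketch of the threshold as described above, expressed as a Python function. The function name, the page-based approximation of "one chapter" as an average chapter's share of the book, and the handling of the exactly-10-chapters boundary are my own illustrative assumptions, not language from the ruling itself (and this is, of course, not legal advice):

```python
def fair_use_copy_limit(total_chapters: int, total_pages: int) -> float:
    """Approximate maximum pages copyable under the bright-line rule.

    As summarized above: 10 percent of the book when it has fewer
    than 10 chapters; otherwise the equivalent of one chapter
    (approximated here as the book's pages divided by its chapters).
    Illustrative only.
    """
    if total_chapters < 10:
        return total_pages * 0.10
    # "One chapter" approximated as an average chapter's length.
    return total_pages / total_chapters

# A 200-page book with 8 chapters: 10 percent, i.e. 20 pages.
print(fair_use_copy_limit(8, 200))
# A 300-page book with 12 chapters: one average chapter, i.e. 25 pages.
print(fair_use_copy_limit(12, 300))
```

Chapters in real books vary in length, so a court would presumably look at the actual chapter copied rather than an average; the arithmetic here is only meant to make the two-branch structure of the rule concrete.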
The second ruling is even more significant. At first glance, it might seem that licensing “fees” have negative ramifications for students, as they would now be forced to “pay” for materials that would otherwise be “free.” But the nuanced reality of the ruling, at least in my view, is that this will actually do more to keep student book prices down than the commonly accepted benefits of fair use.
Here’s why: without this finding, many small and mid-size academic publishers might be priced out of participating in the higher education market, and a handful of larger textbook players could collectively decide to raise prices within their tight but powerful group, hurting students’ pocketbooks in the process.
However, the ability for all publishers -- small, medium and large -- to sell excerpts that are “reasonably available, at a reasonable price” levels the playing field for suppliers of content. This then leads to a pricing scheme that rewards the creation of effective units of content, meaning that students are paying only for what is most relevant to their studies, and not the extra materials that inevitably become part of comprehensive textbook products.
Disaggregation of content, therefore, is not a license to charge students for materials that would otherwise be free. Instead, disaggregation is an enabler of the provision of targeted, highly relevant content that, in the end, may actually cost students less than their purchase of more generalized materials that often include content not taught in a particular class.
The pricing of disaggregated content is, to be sure, set entirely by the publisher. But a publisher faced with an opportunity to amortize a portion of its intellectual investment through what is, in effect, a “permission fee” per student or to hold fast to a view of “buy the entire book or nothing at all” will, I am fairly certain, come to a quick realization that unit pricing is the way to go.
If “a small excerpt of a copyrighted book is available in a convenient format and at a reasonable price, then that factor [in the fair use assessment] weighs in favor of the publisher to be compensated for such academic use,” according to Judge Evans’s initial ruling in the GSU E-Reserves case. This not only stands in her recent ruling, it is reasonable because it incentivizes publishers to make their content more readily available to be licensed and it provides a mechanism by which academic institutions can take advantage of those licenses.
From the outset, the purpose of the GSU E-Reserves case, as brought by the plaintiff publishers, was to try to bring some judicial clarity to GSU’s practice of posting large amounts of copyrighted material to its e-reserves system under a claim of fair use.
Now, with this latest ruling by Judge Evans, the copyright picture is beginning to clarify, but a healthy debate of the meaning of the ruling remains in order. As CEO of a company that strives to make available copyright-cleared units of content for professors to assemble into “best-of” books, I’ve just provided my take. What’s yours?
Caroline Vanderlip is CEO of SharedBook Inc., parent company of AcademicPub.