John G. Browning’s recent essay on Inside Higher Ed fires many of the traditional bullets at student-edited law journals: They are overly theoretical, redundant, costly, and, despite being edited by 20-somethings, clumsily adapted to the digital age. I serve as editor-in-chief of Cardozo Law School’s Journal of Conflict Resolution, apparently one of these outmoded publications. Like most student editors, I’ve become accustomed to reading criticisms like these. Similar sentiments have been published in The New York Times, The Atlantic, and Legal Affairs. Browning is in good company.
Critics have their hearts in the right place. But their arguments are flawed in two ways: First, they dramatically overgeneralize the varied landscape of student-edited legal journals and the articles they publish. Second, critics view the primary mission of law journals as helping appellate judges and practicing lawyers. In fact, students are the primary beneficiaries of law reviews. Practicing lawyers and judges are important audiences too, but not as central as critics claim.
Nailing Down the Complaints
Criticism of law journals, like criticism of lawyers, is a time-honored American tradition. The common argument goes like this: Law journals publish bizarre theoretical pieces that are totally removed from real-world legal practice. As journals proliferate, they are becoming increasingly useless to practicing lawyers, and are failing in their primary mission of influencing judicial opinions. Browning makes his case, like many critics do, by citing some pompous-sounding topics of recently published pieces.
The vision of useless “theory” articles gained traction last year with a comment by U.S. Supreme Court Chief Justice John Roberts: “Pick up a copy of any law review that you see,” he said, “and the first article is likely to be, you know, the influence of Immanuel Kant on evidentiary approaches in 18th-century Bulgaria, or something which I’m sure was of great interest to the academic that wrote it, but isn’t of much help to the bar.”
This system persists, say the critics, because law professors are either innately interested in these abstruse topics or merely write under the pressure of the publish-or-perish system. More cynically, law schools themselves have a secondary incentive for subsidizing law reviews (which often operate at significant financial losses). Schools aim to build their reputations in specialty fields. “Reputation” is often code for the infamous reputational index on U.S. News & World Report -- a ranking of schools’ programs by professors in a particular field. If a school hopes to build its rank in taxation, for example, it might consider creating a tax law journal. Such a journal would allow tax scholars to publish with the school, probably bring such scholars to campus to participate in conferences, and generally increase the reputation of the school among experts in the field.
In short, critics cast law journals as nothing more than vehicles for prestige for schools and tenure for professors.
A Response: All Theory and No Practice?
To start, critics like Browning severely overgeneralize the landscape of law reviews. All student-edited legal publications are lumped together into a monolithically useless heap. In the minds of critics, these journals all publish on obscure theories of legal philosophy and hermeneutics. This is simply not the reality.
Yes, some journals are “theory-heavy”—the Yale Law Journal, William & Mary’s Bill of Rights Journal, and Washington University’s Jurisprudence Review, to name a few. But average law reviews and most specialty journals (journals that focus on particular areas, like real estate or intellectual property) are keenly interested in publishing relevant scholarship. Don’t believe me? Visit a few law journal websites and scroll through their recent tables of contents. Sure, you’ll encounter the occasional oddball, pretentious title. But you’ll also find articles firmly grounded in reality — articles that, as Sherrilyn Ifill of the University of Maryland said, “offer muscular critiques of contemporary legal doctrine, alternative approaches to solving complex legal questions, and reflect a deep concern with the practical effect of legal decision-making on how law develops in the courtroom.” Indeed, many law journal articles are written or co-written by practicing attorneys.
My own journal is a good example. We publish exclusively on arbitration, negotiation and mediation—all very practical processes for problem-solving, particularly in a world where the vast majority of cases settle out of court. To the extent that we publish “theory,” the articles discuss innovative designs for new adjudicative or dispute resolution systems.
Another rebuttal: Critics of law reviews complain that we’re failing at our key mission — being cited by appellate courts. Since when is judicial citation our raison d’être? Don’t get me wrong: I fantasize about Justice Kagan going to sleep with a copy of the Journal of Conflict Resolution on her nightstand before hearing a case arising under the Federal Arbitration Act. But contrary to the assertions of critics, I have never met a fellow law journal editor who selects articles for publication solely (or even predominantly) because of the likelihood that a judge somewhere may someday cite them. We choose articles that are thought-provoking and cutting-edge, not merely because we desire judicial attention.
A corollary point is that law reviews do not need to be cited by courts in order to aid practicing attorneys. Attorneys facing specific legal situations often search journal databases for a starting point. The articles they discover serve as both invaluable synopses and comprehensive bibliographies of relevant precedents and statutes. That sort of impact is very difficult to measure, since it leaves no trace unless the article itself is eventually cited by a judge. (And since the great majority of cases settle before a judge ever sees them, how reliable a metric are judicial citations anyway?)
Critics underscore the role of law reviews in aiding practicing attorneys. This is surely one of their important functions. But the truth is, journals also exist for the benefit of their student editors. Students hone their legal research and writing skills by doing careful editing, citation formatting, and proposition-checking. They produce their own publishable pieces of original scholarship, usually commentaries on recent cases. Third-year students have the additional experience of managing a large team, controlling a significant budget, and interacting with leading scholars from around the world. Even if no appellate judge ever cites an article, the average student will still have grown tremendously by editing it.
Browning worries that student-edited journals are not useful for practicing attorneys. I disagree. But even accepting his assertion, attorneys still have access to innumerable attorney-edited journals published by bar associations across the country. Student law reviews are not their only resource, or even necessarily their best resource.
A Concession: An Archaic Publishing Process
Browning is absolutely right in one respect: the current model of journal publishing is entirely outdated. (I suspect this is true for non-legal academic journals too). Right now, law reviews submit content to one of the handful of specialized law review publishers. Those publishers print the content and mail our book-like volumes to subscribers. The publisher also sends that content to Westlaw and LexisNexis, the two major online legal research databases. Journals charge a small subscription fee and, sometimes, a content reuse fee if their articles are reprinted in textbooks. West and Lexis charge a great deal more. Most journals operate at a loss.
What is peculiar about this system is that many journals also publish their content as PDFs on their websites. PDFs, of course, are searchable by Google and easily findable through Google Scholar. For free. In the 21st century, I have absolutely no idea why any library or practitioner subscribes to the print edition of any law journal. (Though don’t repeat that to our valued subscribers). And I have absolutely no idea why any law school wastes money subsidizing the cost of printing and mailing them. (Though don’t repeat that to my generous dean).
The answer is probably about competition; no law school wants to be the first to go “online only.” If prestige is truly the obstacle — we are talking about lawyers here — the solution is industrywide collusion. If deans from a collection of law schools discussed this, perhaps during an Association of American Law Schools (AALS) conference, they could reach a disarmament agreement. Depending on the number of journals a school operates, this shift could easily save five or six figures annually.
As the digital age moves along, law reviews should reject not only print publication, but also the very idea of distinct “volumes” and “issues.” What is an “issue” of a law journal but a collection of articles on a random assortment of current topics? Few issues have a single unifying theme that merits their being bound together in a book format. Law journals should move to a model of rolling deadlines more akin to digital journalism. Scholarship can be published whenever it is ready, and at a much lower cost.
If the goal of law journals is to influence appellate judges in their day-to-day decision-making, perhaps they are failing. But most journals don’t (or shouldn’t) adopt that as their sole metric for success. A better metric is what the students on the journal are gaining from the experience. Journals expose law students to truly complicated, often confusing writing. Some of that writing is terrible and pompous, some vivacious and sharp. Some is hackneyed, some is intellectually subversive and trailblazing. Thus, academic legal writing mirrors the mélange of actual writing — i.e., briefs, memos and judicial opinions — that attorneys will encounter throughout their careers.
Journals teach student editors to sharpen complex legal arguments, clarify language, format intricate citations, and work long hours to hone a final product. More sentimentally, the journal process reminds students that no legal doctrine is static. Law is subject to thinking and rethinking, argument and re-argument. Authority can not only be cited but questioned — by smart lawyers, through their writing.
There is so much to fix in modern legal education. Are student-edited law journals really so bad?
Brian Farkas is a third-year student at Cardozo School of Law and editor-in-chief of the Cardozo Journal of Conflict Resolution.
Scholarly publishing consultants Tracy Gardner and Simon Inger recently concluded a large-scale study of how researchers navigate the flood of digitized scholarly content. Renew Training, the British company they run, will sell you the complete data set for a mere £1000 (that's $1,592), or the same information in a deluxe Excel spreadsheet, outfitted with specially designed analytic features, for £2,500 (a cool $3,981). Anyone whose curiosity is merely idle or penniless must settle for the “survey edition” of the consultants' own analysis, in PDF, which is free.
As you would expect, it's more of an advertisement than a report, with graphs that hint at how much data they have, and how many kinds of it, from around the world. Gardner and Inger’s own report, “How Readers Discover Content in Scholarly Journals,” is available in e-book format at a reasonable price – so I sprang for a copy and have culled some of their findings for this week’s column.
The key word here being some, because even the consultants’ non-exhaustive crunching of the numbers is pretty overwhelming. Between May and July of this year, they collected responses from more than 19,000 interview subjects spanning the populated world. The questions covered various situations in which someone might go looking for scholarly articles in a digital format and the considerable range of ways of going about it. Two-thirds of respondents were from academic institutions – with a large majority (three out of four) identifying themselves as researchers.
Roughly two-thirds of the respondents were from North America and Europe, and the interview itself was conducted in English. But enough participants came from the medical, corporate, and government sectors, and from countries in Africa, Oceania, and South America, to make the study something other than a report on Anglo-American academe. In addition, Gardner and Inger conducted a similar survey in 2008 (albeit with a much smaller harvest of data, from around 400 respondents). They also draw on a study they conducted in 2005 as consultants for another group.
The trends, then. The range and size of digitally published scholarship keep growing, and a number of tools or approaches have developed for accessing material. Researchers rely on university library sites, abstracting and indexing (A&I) services, compilations of links assembled by learned societies or research teams, social networks, and search engines both general (Yahoo) and focused (Google Scholar). You might bookmark a favorite journal, or sign up for an e-mail alert when the table of contents for a new issue is out, or use the journal publisher’s website to find an article.
The survey questions cover three research “behaviors” common across the disciplines: (1) following up a citation, (2) browsing in the core journals in a given field, and (3) looking for articles on a specific subject. As indicated, quite a few ways of carrying out these tasks are now available. Some approaches are better-developed in one field than another. The survey shows that researchers in the life sciences use the National Institutes of Health's bibliographical database PubMed “almost exclusively,” while the e-mailed table-of-contents (ToC) notifications for chemistry journals are rich enough in information for their readers to find them valuable.
And ease of access to sorting-and-channeling methods varies from one part of the world to the next. A researcher in a poor country is likely to use the search feature on a publisher’s website (bookmarked for just that purpose) for the simple reason that doing so is free – while someone working in a major research library may have access to numerous bibliographical tools so well-integrated into the digital catalog that users barely notice them as such.
North American researchers “are most likely to use an academic search engine or the library web pages if they have a citation,” the report notes, “whilst Europeans are more likely to go to the journal’s homepage.” Humanities scholars “rely much more on library web pages and especially aggregated collections of journals” than do researchers in the life sciences.
Comments made by social scientists reveal that they use “a much more varied list of resources” for following up citations, including one respondent who relied on “my husband’s library because mine is so bad.”
When browsing around the journals in their field, researchers in the field of education “are greater users of academic search engines and of web pages maintained by key research groups” than are people working in other areas. “Social scientists appear to use journal aggregations less than those in the humanities for reading the latest articles.” And all of them rank “library web pages and journal aggregations more highly” than do people in medicine and the physical and life sciences. One respondent indicated that it wasn’t really necessary to look through recent issues of journals in mathematics because “nowadays virtually all leading research in math is uploaded to arXiv.”
Specialized bibliographical databases “are still the most popular resource” for someone trying to read up on a particular topic, “and allowing for a margin of error [this preference] shows no significant change over time.” The web pages compiled by scholarly societies and research groups “have both shown a slight upward trend” in that regard, “which may be due to changes in publisher marketing strategies resulting in readers becoming more familiar with publisher and society brands.”
The rise of academic search engines is a new factor -- and while there are others, such as Microsoft Academic Search, the bar graphs show Google Scholar looming over all competitors like a skyscraper over huts. And that’s not even counting the general-purpose Google search engine, which remains a standard tool for academic researchers.
One interesting point that the authors extract from the comments of participants is that many scholars remain unclear on the difference between a search engine and, say, a specialized bibliographical database. Unfortunately the survey seems not to have included information on respondents’ ages, though it would be interesting to know if that is a factor in recognizing such distinctions.
As I said, the e-book version is reasonably priced, and well within reach of anyone intrigued by this column's aerial survey. The publishers and information managers who can afford the full-dress, all-the-data version, which will allow comparison between the research preferences of Malaysian physicists and German historians, and so forth, will be able to extract from it information on how better to engineer access to their content by the specific research constituencies using it.
For the rest of us, it's a reminder of how many methods we have available for gaining access to the labyrinth of digital scholarship -- and, perhaps, of how much we take them for granted.
In June, Inside Higher Ed told readers about Princeton University Press’s impending experiment with a political-science volume on the 2012 presidential election: It would make excerpts from the work-in-progress available online free while the campaign was still under way. It required a “truncated timetable” for peer review -- getting the readers’ reports back in two or three weeks instead of a few months.
Given the stately pace of scholarly publishing, such a turnaround counts as feverish. By the standards of punditry, it’s almost languorous. The idea was to give the public access to portions of The Gamble: Choice and Chance in the 2012 Election just as the convention season began.
And so they are. Two chapters are now available for download from the Princeton website, here and here. The authors, John Sides and Lynn Vavreck, also have a website for the book. (They are associate professors of political science at George Washington University and the University of California at Los Angeles, respectively.) The material runs to about a hundred pages of text.
It would be hard to read Sides and Vavreck’s work during the conventions, amid all the funny hats and confetti. But their research puts a couple of things about the campaigns into perspective. Keep in mind that the authors are responding not just to data (most of it quantitative) but to the received wisdom of the past several months regarding the campaign -- and on two points in particular.
Each is an assessment of a candidate’s presumed vulnerabilities.
The first holds that President Obama’s chances of re-election depend -- more than anything else, and perhaps even exclusively -- on the state of the economy. Incumbency has its advantages, but unemployment rates could trump them. The second is that Mitt Romney lacks the support of his party’s base, which is considerably to the right of him on both social and economic issues. Romney doesn’t suffer from Sarah Palin’s very negative approval rating among the public at large, but he can’t count on the support of her followers, Twitter and otherwise.
In social-science books, the methodology is usually as salient as the findings themselves. Each chapter comes with an appendix stuffed with additional analysis. Suffice it to say that a few patterns have emerged from studies of presidential campaigns in the past. The challenge is to move from generalizations about yesteryear to the electoral battle now unfolding.
For example, the country’s economic performance during a president’s administration -- but especially in the months just before the election -- is a pretty solid index of his re-electability. In the sixteen presidential elections between 1948 and 2008, changes in gross domestic product between January and September of the election year tracked closely to the fortunes of the incumbent party’s candidate. “It’s hard to beat an incumbent party in a growing economy,” Sides and Vavreck write, “and even harder to beat the actual incumbent himself.”
When the change in GDP over the three quarters preceding the election is negative, the incumbent party’s presidential candidate is sure to lose -- at least if the examples of Nixon (1960), Carter (1980), and McCain (2008) are anything to go by. The stronger the economic contraction, the bigger the defeat.
But the pattern of the past 60 years isn’t much help for handicappers of the race now under way. GDP during the first quarter of 2012 grew an incumbent-friendly 2 percent, while the initial estimate for the second quarter was 1.5 percent growth. As it happens, this column is running on August 29, when the Bureau of Economic Analysis is scheduled to issue a revised estimate of second-quarter growth based on additional data. (And the first estimate of GDP in the third quarter isn’t out until 12 days before the election.)
In any case, the GDP itself can give only a rough sense of how voters experience and understand the economy. Sides and Vavreck have developed a model that correlates public-opinion poll results from each quarter between 1948 and 2008 with a number of other data points. These include three economic factors (unemployment and inflation rates, plus the change in GDP between quarters) as well as “events such as scandals and wars that might push approval [ratings] up and down” and the president’s length of time in office, counted in quarters.
From all of this information, the authors extracted a general model of how much each factor counted in determining the presidential approval ratings. Then they ran all the numbers again to see how well the general model could retroactively “predict” the changes in each president’s approval ratings from quarter to quarter. And the model proved good at it. The actual quarterly ratings were usually quite close to what the formulas expected, given the economic and other factors in play.
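The retrodiction exercise the authors describe can be sketched in a few lines of code. The sketch below is purely illustrative: the quarterly data are synthetic and the coefficients invented, not Sides and Vavreck's actual model, variables, or numbers. It simply shows the mechanic of fitting a linear model of approval on economic factors and tenure, then "predicting" each quarter to see where a president over- or under-performs.

```python
# Illustrative retrodiction sketch (synthetic data; not the authors' model).
import numpy as np

rng = np.random.default_rng(0)

# One row per quarter, 1948-2008 style: 61 years * 4 quarters.
n = 244
unemployment = rng.uniform(3.0, 10.0, n)   # percent
inflation = rng.uniform(0.0, 12.0, n)      # percent
gdp_growth = rng.normal(0.8, 0.7, n)       # quarter-over-quarter percent
tenure = rng.integers(1, 17, n)            # quarters in office so far

# Hypothetical "true" relationship used to generate approval ratings.
approval = (65.0
            - 1.8 * unemployment
            - 0.9 * inflation
            + 2.5 * gdp_growth
            - 0.4 * tenure
            + rng.normal(0.0, 2.0, n))     # unexplained noise

# Fit by ordinary least squares: design matrix with an intercept column.
X = np.column_stack([np.ones(n), unemployment, inflation, gdp_growth, tenure])
coef, *_ = np.linalg.lstsq(X, approval, rcond=None)

# Retrodict each quarter's approval; a president who "beats the
# prediction" (as the book says Obama and Reagan did) has positive residuals.
predicted = X @ coef
residuals = approval - predicted
print(f"mean absolute error: {np.mean(np.abs(residuals)):.2f} points")
```

The residuals are the interesting part: in the book's analysis, it is the gap between a president's actual approval and what the economic fundamentals alone would predict that marks Obama's 2011 ratings as unusual.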
Plugging in relevant data for 2009-2011, the authors generated a graph showing the approval ratings that would be expected given the tendencies of the previous six decades. Here things get interesting:
“Although early on in his presidency Obama was slightly less popular than expected (by about 1 percent throughout most of 2009 and 2010), by the end of 2010 and continuing into 2012, he was more popular. In 2011, his popularity exceeded expectations by over 6 points. This feat is something that few presidents have accomplished. Only one president, Ronald Reagan, consistently ‘beat’ the prediction in his first term to an extent greater than Obama.”
The authors suggest that, if anything, their model may have overestimated the level of Obama popularity that might be expected if all things were equal, relative to earlier presidencies -- which they weren’t. The economic slump that began in 2008 has been deeper, and lasted longer, than any over the previous 60 years. High unemployment yields diminished approval ratings, of course -- but compounding it with a rise in long-term unemployment should presumably push them down even harder.
At the same time, the model does not account for what the authors call “the ‘penalty’ of his race” -- the marked tendency of those with negative attitudes toward black people in general to disapprove of Obama in particular. Sides and Vavreck estimate that his approval rating might be up to four points higher if not for his skin color.
In short, Obama entered the 2012 campaign with considerably more support than one might expect given the lackluster economy. The authors leave it to others to speculate on the source of this strength. But what about his opponent? Isn't Mitt Romney out of step with the rest of his party -- hence vulnerable to conservatives staying home?
When he emerged from the Republican primary season a few months back, Romney seemed less a victor than the last man standing. And inexplicably so: until a few years back, he spoke in favor of both Roe v. Wade and LGBT equality. And Jonathan Gruber, the economist at the Massachusetts Institute of Technology and "intellectual architect" of the healthcare reform bill that Romney crafted while governor of Massachusetts, has compared it to Obamacare in colorful terms: “it’s the same [flippin’] bill.” Shouldn’t he be exhibited by the Smithsonian Institution as the last surviving member of an extinct species, the Rockefeller Republican?
Sides and Vavreck challenge the idea that the Republican primary process revealed a deep yearning by conservatives for “Anyone But Romney.” All the other candidates courted them assiduously, only to be done in by scandal or gaffe or the inability to remember which government programs he or she intended to close down, once in office. Romney was what the party had left. (Actually "had left" is probably a bad way of putting it.)
The authors concede that Romney “never ‘surged’ in the polls" in late 2011 and early ’12, "and never experienced the reinforcing cycle of positive news coverage and gains in the polls.” As a result, he "appear[ed] to be a weak candidate, unloved by many in the party. But this also concealed the underlying structure of the race, which tilted in his favor.” A poll from last December showed that he “was viewed positively by likely Republican primary voters whether they were conservatives or moderates, pro-life or pro-choice, relatively wealthy or not.” More than two-thirds of the Tea Party members surveyed expressed a favorable opinion of him, with non-Tea Party people doing so at the same rate.
The authors make their case with charts, graphs, and whatnot, but looking at them, I felt some cognitive dissonance. It’s hard to shake the impression that the GOP has a sizable wing so far to the right of Romney that he had to placate it with a veep candidate with stronger conservative credentials. When I raised the issue with the authors by e-mail, Sides replied that "people have overestimated two things about GOP voters: (1) just how conservative they are (or perceive themselves), relative to how they perceived Romney; and (2) how much ideology drove their feelings about Romney and the other candidates.” That misperception was strengthened by the “media boomlets” that seem intrinsic to the 24-hour news cycle.
“When news coverage focused on a candidate other than Romney and that candidate had conservative bona fides,” Sides continued, “then conservatives were more likely to vote for that person than Romney…. But this does not mean they were implacably opposed to Romney. Preferring another candidate to Romney is not the same as opposing Romney.” He may have won out “not because he was ideologically who every conservative activist or voter wanted, but because he was the compromise candidate of the party's various factions. It doesn't mean he was widely loved, but he was satisfactory to all. Which makes him like most other presidential candidates, really.”
The authors are still analyzing the primary season while also following the latest twists and turns of the process. I wondered if that meant the chapters now available were working drafts of a sort.
“We will probably rework the chapters a little bit,” Lynn Vavreck wrote back, “but not very much I suspect. We may adjust some of the error or uncertainty estimates, but the general take-aways will remain the same.”
Writing a monograph with the campaign still in motion is a way to shake things up some in the discipline. “It bothered us that parties, candidates, consultants, and journalists had better data on campaigns and elections than political scientists had -- and we wanted to be a part of what was happening, when it was happening, so we could share in those data and use them in real time.”
They hope the project serves as a model to others, while acknowledging that it’s “not the kind [of effort] that academics are typically strong on making -- partnerships have to be forged, things have to be delivered on deadline, and you have to promote your results and your work to a wider audience.” It sounds like what anyone else engaged in politics must do, except with a bibliography.
Right after last month’s shootings in Aurora, Colo., I started reading George Michael’s Lone Wolf Terror and the Rise of Leaderless Resistance (Vanderbilt University Press) as well as a few recent papers on solo-organized political violence. It proved easy to put off writing a column on this material. For one thing, the official publication date for Lone Wolf Terror isn’t until mid-September. Plus, a single bloodbath is grim enough to think about, let alone a trend toward bloodbaths.
But the most pertinent reason for not writing about the book following the Aurora massacre was simply that James Holmes (whom we are obliged by the formalities to call “the alleged gunman,” though nobody has disputed the point) didn’t really qualify as an example of lone-wolfdom, at least as defined in the literature. In “A Review of Lone Wolf Terrorism: The Need for a Different Approach,” published earlier this year in the journal Social Cosmos, Matthijs Nijboer marks out the phenomenon’s characteristics like so:
“Lone wolf terrorism is defined as: '[…] terrorist attacks carried out by persons who (a) operate individually, (b) do not belong to an organized terrorist group or network, and (c) whose modi operandi are conceived and directed by the individual without any direct outside command or hierarchy' ... Common elements included in several accepted definitions [of terrorism] include the following: (1) calculated violence, (2) that instills fear, (3) motivated by goals that are generally political, religious or ideological. These guidelines help distinguish [lone-wolf] terrorist attacks from other forms of violence.”
The actions of Ted Kaczynski and Anders Breivik fall under the heading of lone-wolf terrorism. They had what they regarded as reasons, and even presented them in manifestoes. So far, James Holmes has given no hint of why he shot people and booby-trapped his apartment with explosives. If he ever does put his motives into words, it’ll probably be something akin to Brenda Ann Spencer’s reason for firing on an elementary school in 1979: “I don’t like Mondays. This livens up the day.” Something about Holmes dyeing his hair so that he looks like a villain from "Batman" gives off the same quality of insanity tinged with contempt.
George Michael, the author of Lone Wolf Terror and the Rise of Leaderless Resistance, is an associate professor of nuclear counterproliferation and deterrence at the Air War College. He does not completely dismiss psychopathology as a factor in lone-wolf violence (bad neurochemistry most likely played as big a role in both Kaczynski’s and Breivik’s actions as ideology did, after all). But for the most part Michael treats lone-wolf violence as a new development in the realm of strategy and tactics – something that is emerging as a response to changes in the ideological and technological landscapes.
As it happens, the book appears during the 20th anniversary of the prophetic if ghastly document from which Michael borrows part of his title: “Leaderless Resistance,” an essay by Louis Beam, whom Michael identifies in passing as “a firebrand orator and longstanding activist.” Fair enough, although “author of Essays of a Klansman” also seems pertinent.
Beam’s argument, in brief, was that the old-model hate group (one that recruited openly, held public events, and believed in strength through numbers) was now hopelessly susceptible to surveillance and infiltration by the government, as well as vulnerable to civil suits. The alternative was “phantom cells,” ideally consisting of one or two members at most and operating without a central command.
As Michael notes, Beam’s essay from 1992 bounced around the dial-up bulletin boards of the day, but it also bears mentioning that the boards were a major inspiration for Beam’s ideas in the first place. (He set up one for the Aryan Nations in 1984.) Versions of the leaderless-resistance concept soon caught on in other milieus that Michael discusses, such as the Earth Liberation Front and the Islamist/jihadist movements. It’s improbable that Beam’s writings were much of an influence on these currents. More likely, Beam, as an early adopter of a networked communication technology, came to anti-hierarchical conclusions about how risky activity might be organized that others would reach on their own, a few years later.
The other technological underpinning of small-scale or lone-wolf operations is the continuous development of ever more compact and deadly weaponry. Bombs and semiautomatic firearms are the most practical options for now, though the information is out there for anyone trying to build up a private atomic, biological, or chemical arsenal. Factor in the vulnerable infrastructure that Michael lists (including pipelines, electrical power networks, and the information sector) and it’s clear how much potential exists for mayhem unleashed by a single person.
In the short term, Michael writes, “increased scrutiny by law enforcement and intelligence agencies will continue to make major coordinated terrorist activities extremely difficult, but not impossible. Although the state’s capacity to monitor is substantial, individuals can still operate covertly and commit violence with little predictability. Leaderless resistance can serve as a catalyst spurring others to move from thought to action, in effect inspiring copycats.”
And in the longer term, he regards all of it as the possible harbinger of a new mode of warfare in which lone-wolf combatants have a decisive part -- with leaderless resistance already a major factor in shaping the globalized-yet-fragmented 21st century.
Maybe so. Something horrible could happen to confirm his beliefs before you finish reading this sentence. But just as sobering are the findings from a study (available here) conducted by the Institute for Security and Crisis Management, a think tank in the Netherlands. The researchers found that lone-wolf attacks represented just over 1 percent of all the terrorist incidents in their survey of a dozen European countries plus Australia, Canada, and the United States between January 1968 and May 2007. “Our findings further seem to indicate that there has not been a significant increase in lone-wolf terrorism in [all but one of the] sample countries over the past two decades.”
Only in the U.S. did lone-wolf attacks account for more than a “marginal proportion” of terrorism, “with the U.S. cases accounting for almost 42 percent of the total;” 80 percent of them involved domestic rather than international issues. The report suggested the "significant variation" from the norm in other countries in the study "can partly be explained by the relative popularity of this strategy among white supremacists and anti-abortion activists in the United States." In any event, the researchers found that as of 2007, the trend toward lone-wolf terror had been growing markedly in the U.S., if not elsewhere.
Something else I'd rather not think about. A few days after I put Lone Wolf Terror to the side for a while, there came news of the shootings at the Sikh temple in Wisconsin. You can tune these things out for only so long. They always come back.
Keeping the costs of textbooks and other learning tools as low as possible for today’s college students is a goal almost everyone can agree upon. How to accomplish that goal, however, is another matter entirely.
And pursuing that goal in the courts, where sweeping decisions can render in a minute what might otherwise take years to implement, is risky at best and counterproductive at worst.
Sometimes, however, savings for students can be found in the most unlikely of places. To prove my point, take a close look at Cambridge University Press v. Becker, widely known as the Georgia State University (GSU) E-Reserves case, initially ruled upon three months ago by U.S. District Judge Orinda Evans, who issued a further ruling last Friday.
Most of the press coverage of Judge Evans’s ruling concentrated on its delineation of the many ways that colleges can continue to cite the doctrine of “fair use” to permit their making copies of books and other materials for use in teaching and the pursuit of scholarship. And, to be fair (pardon the pun), in 94 of the 99 instances claimed by academic publishers such as Cambridge, Oxford and Sage to be violations of copyright, the judge did rule that GSU and its professors were covered by fair use.
But in its fair use assessment, the court made two important rulings: (1) it created a bright-line rule for the amount of text that can be copied; and (2) it established that when publishers make excerpts available for licensing (particularly in digital form), the publisher has a better chance of receiving those licensing fees (i.e., the copying is less likely to be held fair use). With regard to the first ruling, the key point is that the guesswork has been taken out. Specific amounts allowable for copying have been set: 10 percent of a book with fewer than 10 chapters, or one chapter of a book with 10 or more chapters.
The second ruling is even more significant. At first glance, it might seem that licensing “fees” have negative ramifications for students, as they would now be forced to “pay” for materials that would otherwise be “free.” But the nuanced reality of the ruling, at least in my view, is that this will actually do more to keep student book prices down than the commonly accepted benefits of fair use.
Here’s why: without this finding, many small and mid-size academic publishers might otherwise be priced out of participating in the higher education market, and a handful of larger textbook players could collectively decide to raise prices within their tight but powerful group, hurting students’ pocketbooks in the process.
However, the ability for all publishers -- small, medium and large -- to sell excerpts that are “reasonably available, at a reasonable price” levels the playing field for suppliers of content. This then leads to a pricing scheme that rewards the creation of effective units of content, meaning that students are paying only for what is most relevant to their studies, and not the extra materials that inevitably become part of comprehensive textbook products.
Disaggregation of content, therefore, is not a license to charge students for materials that would otherwise be free. Instead, disaggregation enables the provision of targeted, highly relevant content that, in the end, may actually cost students less than their purchase of more generalized materials that often include content not taught in a particular class.
The pricing of disaggregated content is, to be sure, set entirely by the publisher. But a publisher faced with an opportunity to amortize a portion of its intellectual investment through what is, in effect, a “permission fee” per student or to hold fast to a view of “buy the entire book or nothing at all” will, I am fairly certain, come to a quick realization that unit pricing is the way to go.
If “a small excerpt of a copyrighted book is available in a convenient format and at a reasonable price, then that factor [in the fair use assessment] weighs in favor of the publisher to be compensated for such academic use,” according to Judge Evans’s initial ruling in the GSU E-Reserves case. That standard not only survives in her recent ruling; it is reasonable, because it gives publishers an incentive to make their content more readily available for licensing and provides a mechanism by which academic institutions can take advantage of those licenses.
From the outset, the purpose of the GSU E-Reserves case, as brought by the plaintiff publishers, was to bring some judicial clarity to GSU’s practice of posting large amounts of copyrighted material to its e-reserves system under a claim of fair use.
Now, with this latest ruling by Judge Evans, the copyright picture is beginning to clarify, but a healthy debate over the meaning of the ruling remains in order. As CEO of a company that strives to make available copyright-cleared units of content for professors to assemble into “best-of” books, I’ve just provided my take. What’s yours?
Caroline Vanderlip is CEO of SharedBook Inc., parent company of AcademicPub.
Call it philosophical synesthesia: the work of certain thinkers comes with a soundtrack. With Leibniz, it’s something baroque played on a harpsichord -- the monads somehow both crisply distinct and perfectly harmonizing. Despite Nietzsche’s tortured personal relationship with Wagner, the mood music for his work is actually by Richard Strauss. In the case of Jean-Paul Sartre’s writings, or at least some of them, it’s jazz: bebop in particular, and usually Charlie Parker, although it was Dizzy Gillespie who wore what became known as “existentialist” eyeglasses. And medieval scholastic philosophy resonates with Gregorian chant. Having never managed to read Thomas Aquinas without getting a headache, I find that it’s the Monty Python version.
Such linkages are, of course, all in my head -- the product of historical context and chains of association, to say nothing of personal eccentricity. But sometimes the connection between philosophy and music is much closer than that. It exists not just in the mind’s ear but in the thinker’s fingers as well, in ways that François Noudelmann explores with great finesse in The Philosopher’s Touch: Sartre, Nietzsche, and Barthes at the Piano (Columbia University Press).
The disciplinary guard dogs may snarl at Noudelmann for listing Barthes, a literary critic and semiologist, as a philosopher. The Philosopher’s Touch also ignores the principle best summed up by Martin Heidegger (“Horst Wessel Lied”): “Regarding the personality of a philosopher, our only interest is that he was born at a certain time, that he worked, and that he died.” Biography, by this reasoning, is a distraction from serious thought, or, worse, a contaminant.
But then Noudelmann (a professor of philosophy at l’Université Paris VIII who has also taught at Johns Hopkins and New York Universities) has published a number of studies of Sartre, who violated the distinction between philosophy and biography constantly. Following Sartre’s example on that score is a dicey enterprise -- always in danger of reducing ideas to historical circumstances, or of overinterpreting personal trivia.
The Philosopher’s Touch runs that risk three times, taking as its starting point the one habit its protagonists had in common: Each played the piano almost every day of his adult life. Sartre gave it up only as a septuagenarian, when his health and eyesight failed. But even Nietzsche’s descent into madness couldn’t stop him from playing (and, it seems, playing well).
All of them wrote about music, and each published at least one book that was explicitly autobiographical. But they seldom mentioned their own musicianship in public and never made it the focus of a book or an essay. Barthes happily accepted the offer to appear on a radio program where the guest host got to spin his favorite recordings. But the tapes he made at home of his own performances were never for public consumption. He was an unabashed amateur, and recording himself was just a way to get better.
Early on, a conductor rejected one of Nietzsche’s compositions in brutally humiliating terms, asking if he meant it as a joke. But he went on playing and composing anyway, leaving behind about 70 works, including, strange to say, a mass.
As for Sartre, he admitted to daydreams of becoming a jazz pianist. “We might be even more surprised by this secret ambition,” Noudelmann says, “when we realize that Sartre did not play jazz! Perhaps this was due to a certain difficulty of rhythm encountered in jazz, which is so difficult for classical players to grasp. Sight-reading a score does not suffice.” It don’t mean a thing if it ain’t got that swing.
These seemingly minor or incidental details about the thinkers’ private devotion to the keyboard give Noudelmann an entrée to an otherwise readily overlooked set of problems concerning both art -- particularly the high-modernist sort -- and time.
In their critical writings, Sartre and Barthes always seemed especially interested in the more challenging sorts of experimentation (Beckett, serialism, Calder, the nouveau roman, etc.) while Nietzsche was, at first anyway, the philosophical herald of Wagner’s genius as the future of art. But seated at their own keyboards, they made choices seemingly at odds with the sensibility to be found in their published work. Sartre played Chopin. A lot. So did Nietzsche. (Surprising, because Chopin puts into sound what unrequited love feels like, while it seems like Nietzsche and Sartre are made of sterner stuff.) Nietzsche also loved Bizet’s Carmen. His copy of the score “is covered with annotations, testifying to his intense appropriation of the opera to the piano.” Barthes liked Chopin but found him too hard to play, and shifted his loyalties to Schumann -- becoming the sort of devotee who feels he has a uniquely intense connection with an artist. “Although he claims that Schumann’s music is, through some intrinsic quality, made for being played rather than listened to,” writes Noudelmann, “his arguments can be reduced to saying that this music involves the body that plays it.”
Such ardor is at the other extreme from the modernist perspective for which music is the ideal model of “pure art, removed from meaning and feeling,” creating, Noudelmann writes, “a perfect form and a perfect time, which follow only their own laws.... Such supposed purity requires an exclusive relation between the music and a listener who is removed from the conditions of the music’s performance.”
But Barthes’s passion for Schumann (or Sartre’s for Chopin, or Nietzsche’s for Bizet) involves more than relief at escaping severe music for something more Romantic and melodious. The familiarity of certain compositions; the fact that they fall within the limits of the player’s ability, or give it enough of a challenge to be stimulating; the way a passage inspires particular moods or echoes them -- all of this is part of the reality that playing music “is entirely different from listening to it or commenting on it.” That sounds obvious but it is something even a bad performer sometimes understands better than a good critic.
“Leaving behind the discourse of knowledge and mastery,” Noudelmann writes, “they maintained, without relent and throughout the whole of their existence, a tacit relation to music. Their playing was full of habits they had cultivated since childhood and discoveries they had made in the evolution of their tastes and passions.” More is involved than sound.
The skills required to play music are stored, quite literally, in the body. It’s appropriate that Nietzsche, Sartre, and Barthes all wrote, at some length, about both the body and memory. Noudelmann could have belabored that point at terrific length and high volume, like a La Monte Young performance in which musicians play two or three notes continuously for several days. Instead, he improvises with skill in essays that pique the reader's interest, rather than bludgeoning it. And on that note, I must now go do terrible things to a Gibson electric guitar.