In June, Inside Higher Ed told readers about Princeton University Press’s impending experiment with a political-science volume on the 2012 presidential election: It would make excerpts from the work-in-progress available online free while the campaign was still under way. That required a “truncated timetable” for peer review -- getting the readers’ reports back in two or three weeks instead of a few months.
Given the stately pace of scholarly publishing, such a turnaround counts as feverish. By the standards of punditry, it’s almost languorous. The idea was to give the public access to portions of The Gamble: Choice and Chance in the 2012 Election just as the convention season began.
And so they are. Two chapters are now available for download from the Princeton website. The authors, John Sides and Lynn Vavreck, also maintain a website for the book. (They are associate professors of political science at George Washington University and the University of California at Los Angeles, respectively.) The material runs to about a hundred pages of text.
It would be hard to read Sides and Vavreck’s work during the conventions, amid all the funny hats and confetti. But their research puts a couple of things about the campaigns into perspective. Keep in mind that the authors are responding not just to data (most of it quantitative) but to the received wisdom of the past several months regarding the campaign -- and on two points in particular.
Each is an assessment of a candidate’s presumed vulnerabilities.
The first holds that President Obama’s chances of re-election depend -- more than anything else, and perhaps even exclusively -- on the state of the economy. Incumbency has its advantages, but unemployment rates could trump them. The second is that Mitt Romney lacks the support of his party’s base, which is considerably to the right of him on both social and economic issues. Romney doesn’t suffer from Sarah Palin’s very negative approval rating among the public at large, but he can’t count on the support of her followers, Twitter and otherwise.
In social-science books, the methodology is usually as salient as the findings themselves. Each chapter comes with an appendix stuffed with additional analysis. Suffice it to say that a few patterns have emerged from studies of presidential campaigns in the past. The challenge is to move from generalizations about yesteryear to the electoral battle now unfolding.
For example, the country’s economic performance during a president’s administration -- but especially in the months just before the election -- is a pretty solid index of his re-electability. In the sixteen presidential elections between 1948 and 2008, changes in gross domestic product between January and September of the election year tracked closely to the fortunes of the incumbent party’s candidate. “It’s hard to beat an incumbent party in a growing economy,” Sides and Vavreck write, “and even harder to beat the actual incumbent himself.”
When the change in GDP over the three quarters preceding the election is negative, the incumbent party’s presidential candidate is sure to lose -- at least if the examples of Nixon (1960), Carter (1980), and McCain (2008) are anything to go by. The stronger the economic contraction, the bigger the defeat.
But the pattern of the past 60 years isn’t much help for handicappers of the race now under way. GDP during the first quarter of 2012 grew an incumbent-friendly 2 percent, while the initial estimate for the second quarter was 1.5 percent growth. As it happens, this column is running on August 29, when the Bureau of Economic Analysis is scheduled to issue a revised estimate of second-quarter growth based on additional data. (And the first estimate of third-quarter GDP isn’t out until 12 days before the election.)
In any case, the GDP itself can give only a rough sense of how voters experience and understand the economy. Sides and Vavreck have developed a model that correlates public-opinion poll results from each quarter between 1948 and 2008 with a number of other data points. These include three economic factors (unemployment and inflation rates, plus the change in GDP between quarters) as well as “events such as scandals and wars that might push approval [ratings] up and down” and the president’s length of time in office, counted in quarters.
From all of this information, the authors extracted a general model of how much each factor counted in determining the presidential approval ratings. Then they ran all the numbers again to see how well the general model could retroactively “predict” the changes in each president’s approval ratings from quarter to quarter. And the model proved good at it. The actual quarterly ratings were usually quite close to what the formulas expected, given the economic and other factors in play.
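The two-step procedure the authors describe -- fit a general model of approval ratings on the historical data, then see how well that model retroactively "predicts" the quarter-by-quarter ratings -- can be sketched in miniature. This is not their actual specification or dataset; the numbers below are synthetic stand-ins, and the ordinary-least-squares fit is simply one plausible way to implement the kind of correlation they describe.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic quarterly data standing in for 1948-2008 (244 quarters).
# Predictors mirror the factors named in the text: GDP change,
# unemployment, inflation, event shocks, and quarters in office.
n = 244
X = np.column_stack([
    rng.normal(0.8, 0.5, n),   # quarterly change in GDP (%)
    rng.normal(5.5, 1.5, n),   # unemployment rate (%)
    rng.normal(3.0, 2.0, n),   # inflation rate (%)
    rng.normal(0.0, 1.0, n),   # event shocks (scandals, wars)
    rng.integers(1, 17, n),    # quarters in office
    np.ones(n),                # intercept
])
# Hypothetical "true" weights used only to generate the fake data.
true_beta = np.array([3.0, -2.0, -1.5, 4.0, -0.5, 60.0])
approval = X @ true_beta + rng.normal(0, 2.0, n)

# Step 1: estimate how much each factor counted (ordinary least squares).
beta_hat, *_ = np.linalg.lstsq(X, approval, rcond=None)

# Step 2: run the numbers again -- retroactively "predict" each quarter's
# approval rating and compare against the actual figures.
predicted = X @ beta_hat
residuals = approval - predicted
print(f"mean absolute error: {np.mean(np.abs(residuals)):.2f} points")
```

A model that "proved good at it," in the authors' phrase, is one whose residuals stay small across the whole series; a president who consistently "beats" the prediction, as they say Obama and Reagan did, shows up as a sustained run of positive residuals.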
Plugging in relevant data for 2009-2011, the authors generated a graph showing the approval ratings that would be expected given the tendencies of the previous six decades. Here things get interesting:
“Although early on in his presidency Obama was slightly less popular than expected (by about 1 percent throughout most of 2009 and 2010), by the end of 2010 and continuing into 2012, he was more popular. In 2011, his popularity exceeded expectations by over 6 points. This feat is something that few presidents have accomplished. Only one president, Ronald Reagan, consistently ‘beat’ the prediction in his first term to an extent greater than Obama.”
The authors suggest that, if anything, their model may have overestimated the level of Obama popularity that might be expected if all things were equal, relative to earlier presidencies -- which they weren’t. The economic slump that began in 2008 has been deeper, and lasted longer, than any over the previous 60 years. High unemployment yields diminished approval ratings, of course -- but compounding it with a rise in long-term unemployment should presumably push them down even harder.
At the same time, the model does not account for what the authors call “the ‘penalty’ of his race” -- the marked tendency of those with negative attitudes toward black people in general to disapprove of Obama in particular. Sides and Vavreck estimate that his approval rating might be up to four points higher if not for his skin color.
In short, Obama entered the 2012 campaign with considerably more support than one might expect given the lackluster economy. The authors leave it to others to speculate on the source of this strength. But what about his opponent? Isn't Mitt Romney out of step with the rest of his party -- hence vulnerable to conservatives staying home?
When he emerged from the Republican primary season a few months back, Romney seemed less a victor than the last man standing. And inexplicably so: until a few years back, he spoke in favor of both Roe v. Wade and LGBT equality. And Jonathan Gruber, the economist at the Massachusetts Institute of Technology and "intellectual architect" of the healthcare reform bill that Romney crafted while governor of Massachusetts, has compared it to Obamacare in colorful terms: “it’s the same [flippin’] bill.” Shouldn’t he be exhibited by the Smithsonian Institution as the last surviving member of an extinct species, the Rockefeller Republican?
Sides and Vavreck challenge the idea that the Republican primary process revealed a deep yearning by conservatives for “Anyone But Romney.” All the other candidates courted them assiduously, only to be done in by scandal or gaffe or the inability to remember which government programs he or she intended to close down, once in office. Romney was what the party had left. (Actually "had left" is probably a bad way of putting it.)
The authors concede that Romney “never ‘surged’ in the polls" in late 2011 and early ’12, "and never experienced the reinforcing cycle of positive news coverage and gains in the polls.” As a result, he "appear[ed] to be a weak candidate, unloved by many in the party. But this also concealed the underlying structure of the race, which tilted in his favor.” A poll from last December showed that he “was viewed positively by likely Republican primary voters whether they were conservatives or moderates, pro-life or pro-choice, relatively wealthy or not.” More than two-thirds of the Tea Party members surveyed expressed a favorable opinion of him, with non-Tea Party people doing so at the same rate.
The authors make their case with charts, graphs, and whatnot, but looking at them, I felt some cognitive dissonance. It’s hard to shake the impression that the GOP has a sizable wing that is so far to the right of Romney that he had to placate them with a veep candidate with stronger conservative credentials. When I raised the issue with the authors by e-mail, Sides replied that "people have overestimated two things about GOP voters: (1) just how conservative they are (or perceive themselves), relative to how they perceived Romney; and (2) how much ideology drove their feelings about Romney and the other candidates.” That misperception was strengthened by the “media boomlets” that seem intrinsic to the 24-hour news cycle.
“When news coverage focused on a candidate other than Romney and that candidate had conservative bona fides,” Sides continued, “then conservatives were more likely to vote for that person than Romney…. But this does not mean they were implacably opposed to Romney. Preferring another candidate to Romney is not the same as opposing Romney.” He may have won out “not because he was ideologically who every conservative activist or voter wanted, but because he was the compromise candidate of the party's various factions. It doesn't mean he was widely loved, but he was satisfactory to all. Which makes him like most other presidential candidates, really.”
The authors are still analyzing the primary season while also following the latest twists and turns of the process. I wondered if that meant the chapters now available were working drafts of a sort.
“We will probably rework the chapters a little bit,” Lynn Vavreck wrote back, “but not very much I suspect. We may adjust some of the error or uncertainty estimates, but the general take-aways will remain the same.”
Writing a monograph with the campaign still in motion is a way to shake things up some in the discipline. “It bothered us that parties, candidates, consultants, and journalists had better data on campaigns and elections than political scientists had -- and we wanted to be a part of what was happening, when it was happening, so we could share in those data and use them in real time.”
They hope the project serves as a model to others, while acknowledging that it’s “not the kind [of effort] that academics are typically strong on making -- partnerships have to be forged, things have to be delivered on deadline, and you have to promote your results and your work to a wider audience.” It sounds like what anyone else engaged in politics must do, except with a bibliography.
Right after last month’s shootings in Aurora, Colo., I started reading George Michael’s Lone Wolf Terror and the Rise of Leaderless Resistance (Vanderbilt University Press) as well as a few recent papers on solo-organized political violence. It proved easy to put off writing a column on this material. For one thing, the official publication date for Lone Wolf Terror isn’t until mid-September. Plus, a single bloodbath is grim enough to think about, let alone a trend toward bloodbaths.
But the most pertinent reason for not writing about the book following the Aurora massacre was simply that James Holmes (whom we are obliged by the formalities to call “the alleged gunman,” though nobody has disputed the point) didn’t really qualify as an example of lone-wolfdom, at least as defined in the literature. In “A Review of Lone Wolf Terrorism: The Need for a Different Approach,” published earlier this year in the journal Social Cosmos, Matthijs Nijboer marks out the phenomenon’s characteristics like so:
“Lone wolf terrorism is defined as: '[…] terrorist attacks carried out by persons who (a) operate individually, (b) do not belong to an organized terrorist group or network, and (c) whose modi operandi are conceived and directed by the individual without any direct outside command or hierarchy' ... Common elements included in several accepted definitions [of terrorism] include the following: (1) calculated violence, (2) that instills fear, (3) motivated by goals that are generally political, religious or ideological. These guidelines help distinguish [lone-wolf] terrorist attacks from other forms of violence.”
The actions of Ted Kaczynski and Anders Breivik fall under the heading of lone-wolf terrorism. They had what they regarded as reasons, and even presented them in manifestoes. So far, James Holmes has given no hint of why he shot people and booby-trapped his apartment with explosives. If he ever does put his motives into words, it’ll probably be something akin to Brenda Ann Spencer’s reason for firing on an elementary school in 1979: “I don’t like Mondays. This livens up the day.” Something about Holmes dyeing his hair so that he looks like a villain from "Batman" gives off the same quality of insanity tinged with contempt.
George Michael, the author of Lone Wolf Terror and the Rise of Leaderless Resistance, is an associate professor of nuclear counterproliferation and deterrence at the Air War College. He does not completely dismiss psychopathology as a factor in lone-wolf violence (bad neurochemistry most likely played as big a role in both Kaczynski’s and Breivik’s actions as ideology did, after all). But for the most part Michael treats lone-wolf violence as a new development in the realm of strategy and tactics -- something that is emerging as a response to changes in the ideological and technological landscapes.
As it happens, the book appears during the 20th anniversary of the prophetic if ghastly document from which Michael borrows part of his title: “Leaderless Resistance,” an essay by Louis Beam, whom Michael identifies in passing as “a firebrand orator and longstanding activist.” Fair enough, although “author of Essays of a Klansman” also seems pertinent.
Beam’s argument, in brief, was that the old-model hate group (one that recruited openly, held public events, and believed in strength through numbers) was now hopelessly susceptible to surveillance and infiltration by the government, as well as vulnerable to civil suits. The alternative was “phantom cells,” ideally consisting of one or two members at most and operating without a central command.
As Michael notes, Beam’s essay from 1992 bounced around the dial-up bulletin boards of the day, but it also bears mentioning that the boards were a major inspiration for Beam’s ideas in the first place. (He set up one for the Aryan Nations in 1984.) Versions of the leaderless-resistance concept soon caught on in other milieus that Michael discusses, such as the Earth Liberation Front and the Islamicist/jihadist movements. It’s improbable that Beam’s writings were much of an influence on these currents. More likely, Beam, as an early adopter of a networked communication technology, came to anti-hierarchical conclusions about how risky activity might be organized that others would reach on their own, a few years later.
The other technological underpinning of small-scale or lone-wolf operations is the continuous development of ever more compact and deadly weaponry. Bombs and semiautomatic firearms are the most practical options for now, though the information is out there for anyone trying to build up a private atomic, biological, or chemical arsenal. Factor in the vulnerable infrastructure that Michael lists (including pipelines, electrical power networks, and the information sector) and it’s clear how much potential exists for mayhem unleashed by a single person.
In the short term, Michael writes, “increased scrutiny by law enforcement and intelligence agencies will continue to make major coordinated terrorist activities extremely difficult, but not impossible. Although the state’s capacity to monitor is substantial, individuals can still operate covertly and commit violence with little predictability. Leaderless resistance can serve as a catalyst spurring others to move from thought to action, in effect inspiring copycats.”
And in the longer term, he regards all of it as the possible harbinger of a new mode of warfare in which lone-wolf combatants have a decisive part -- with leaderless resistance already a major factor in shaping the globalized-yet-fragmented 21st century.
Maybe so. Something horrible could happen to confirm his beliefs before you finish reading this sentence. But just as sobering are the findings from a study conducted by the Institute for Security and Crisis Management, a think tank in the Netherlands. The researchers found that lone-wolf attacks represented just over 1 percent of all terrorist incidents in their survey of a dozen European countries plus Australia, Canada, and the United States between January 1968 and May 2007. “Our findings further seem to indicate that there has not been a significant increase in lone-wolf terrorism in [all but one of the] sample countries over the past two decades.”
Only in the U.S. did lone-wolf attacks account for more than a “marginal proportion” of terrorism, “with the U.S. cases accounting for almost 42 percent of the total;” 80 percent of them involved domestic rather than international issues. The report suggested the "significant variation" from the norm in other countries in the study "can partly be explained by the relative popularity of this strategy among white supremacists and anti-abortion activists in the United States." In any event, the researchers found that as of 2007, the trend toward lone-wolf terror had been growing markedly in the U.S., if not elsewhere.
Something else I'd rather not think about. A few days after I put Lone Wolf Terror to the side for a while, there came news of the shootings at the Sikh temple in Wisconsin. You can tune these things out for only so long. They always come back.
Keeping the costs of textbooks and other learning tools as low as possible for today’s college students is a goal almost everyone can agree upon. How to accomplish that goal, however, is another matter entirely.
And pursuing that goal in the courts, where sweeping decisions can render in a minute what might otherwise take years to implement, is risky at best and counterproductive at worst.
Sometimes, however, savings for students can be found in the most unlikely of places. To prove my point, take a close look at Cambridge University Press v. Becker, widely known as the Georgia State University (GSU) E-Reserves case, initially ruled upon three months ago by U.S. Federal District Court Judge Orinda Evans, who issued a further ruling last Friday.
Most of the press coverage of Judge Evans’s ruling concentrated on its delineation of the many ways that colleges can continue to cite the doctrine of “fair use” to permit their making copies of books and other materials for use in teaching and the pursuit of scholarship. And, to be fair (pardon the pun), in 94 of the 99 instances claimed by academic publishers such as Cambridge, Oxford and Sage to be violations of copyright, the judge did rule that GSU and its professors were covered by fair use.
But in its fair use assessment, the court made two important rulings: (1) it created a bright line rule for the amount of text that can be copied; and (2) it established that when publishers make excerpts available for licensing (particularly in digital form), the publisher has a better chance of receiving those licensing fees (i.e., it is less likely to be held fair use). With regard to the first ruling, the key point is that the guesswork has been taken out. Specific amounts (10 percent of a book if less than 10 chapters, or 1 chapter of a book if more than 10 chapters) allowable for copy have been set.
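The bright-line rule is simple enough to state as a calculation. A minimal sketch, under the assumption (not spelled out in the column) that books at the ten-chapter boundary fall under the one-chapter rule and that a chapter can be approximated as an even share of the book's pages:

```python
def allowable_excerpt(total_chapters: int, total_pages: int) -> float:
    """Pages that may be copied under the court's bright-line rule:
    10 percent of the book if it has fewer than ten chapters,
    otherwise one chapter's worth (approximated here as an even
    share of the pages -- an illustrative simplification)."""
    if total_chapters < 10:
        return total_pages * 0.10
    return total_pages / total_chapters

# A 300-page book in 8 chapters: 10 percent, i.e. 30 pages.
print(allowable_excerpt(8, 300))   # 30.0
# A 300-page book in 15 chapters: one chapter, here 20 pages.
print(allowable_excerpt(15, 300))  # 20.0
```

The point of the example is the one the column makes: an instructor no longer has to guess, since the permissible amount follows mechanically from the book's structure.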
The second ruling is even more significant. At first glance, it might seem that licensing “fees” have negative ramifications for students, as they would now be forced to “pay” for materials that would otherwise be “free.” But the nuanced reality of the ruling, at least in my view, is that this will actually do more to keep student book prices down than the commonly accepted benefits of fair use.
Here’s why: without this finding, many small and mid-size academic publishers might otherwise be priced out of participating in the higher education market, and a handful of larger textbook players could collectively decide to raise prices within their tight but powerful group, serving to hurt students’ pocketbooks in the process.
However, the ability for all publishers -- small, medium and large -- to sell excerpts that are “reasonably available, at a reasonable price” levels the playing field for suppliers of content. This then leads to a pricing scheme that rewards the creation of effective units of content, meaning that students are paying only for what is most relevant to their studies, and not the extra materials that inevitably become part of comprehensive textbook products.
Disaggregation of content, therefore, is not a license to charge students for materials that would otherwise be free. Instead, disaggregation is an enabler of the provision of targeted, highly relevant content that, in the end, may actually cost students less than their purchase of more generalized materials that often include content not taught in a particular class.
The pricing of disaggregated content is, to be sure, set entirely by the publisher. But a publisher faced with an opportunity to amortize a portion of its intellectual investment through what is, in effect, a “permission fee” per student or to hold fast to a view of “buy the entire book or nothing at all” will, I am fairly certain, come to a quick realization that unit pricing is the way to go.
If “a small excerpt of a copyrighted book is available in a convenient format and at a reasonable price, then that factor [in the fair use assessment] weighs in favor of the publisher to be compensated for such academic use,” according to Judge Evans’s initial ruling in the GSU E-Reserves case. This not only stands in her recent ruling, it is reasonable because it incentivizes publishers to make their content more readily available to be licensed and it provides a mechanism by which academic institutions can take advantage of those licenses.
From the outset, the purpose of the GSU E-Reserves case, as brought by the plaintiff publishers, was to try to bring some judicial clarity to GSU’s practice of posting large amounts of copyrighted material to its e-reserves system under a claim of fair use.
Now, with this latest ruling by Judge Evans, the copyright picture is beginning to clarify, but a healthy debate of the meaning of the ruling remains in order. As CEO of a company that strives to make available copyright-cleared units of content for professors to assemble into “best-of” books, I’ve just provided my take. What’s yours?
Caroline Vanderlip is CEO of SharedBook Inc., parent company of AcademicPub.
Call it philosophical synesthesia: the work of certain thinkers comes with a soundtrack. With Leibniz, it’s something baroque played on a harpsichord -- the monads somehow both crisply distinct and perfectly harmonizing. Despite Nietzsche’s tortured personal relationship with Wagner, the mood music for his work is actually by Richard Strauss. In the case of Jean-Paul Sartre’s writings, or at least some of them, it’s jazz: bebop in particular, and usually Charlie Parker, although it was Dizzy Gillespie who wore what became known as “existentialist” eyeglasses. And medieval scholastic philosophy resonates with Gregorian chant. Having never managed to read Thomas Aquinas without getting a headache, I find that it’s the Monty Python version:
Such linkages are, of course, all in my head -- the product of historical context and chains of association, to say nothing of personal eccentricity. But sometimes the connection between philosophy and music is much closer than that. It exists not just in the mind’s ear but in the thinker’s fingers as well, in ways that François Noudelmann explores with great finesse in The Philosopher’s Touch: Sartre, Nietzsche, and Barthes at the Piano (Columbia University Press).
The disciplinary guard dogs may snarl at Noudelmann for listing Barthes, a literary critic and semiologist, as a philosopher. The Philosopher’s Touch also ignores the principle best summed up by Martin Heidegger (“Horst Wessel Lied”): “Regarding the personality of a philosopher, our only interest is that he was born at a certain time, that he worked, and that he died." Biography, by this reasoning, is a distraction from serious thought, or, worse, a contaminant.
But then Noudelmann (a professor of philosophy at l’Université Paris VIII who has also taught at Johns Hopkins and New York Universities) has published a number of studies of Sartre, who violated the distinction between philosophy and biography constantly. Following Sartre’s example on that score is a dicey enterprise -- always in danger of reducing ideas to historical circumstances, or of overinterpreting personal trivia.
The Philosopher’s Touch runs that risk three times, taking as its starting point the one habit its protagonists had in common: Each played the piano almost every day of his adult life. Sartre gave it up only as a septuagenarian, when his health and eyesight failed. But even Nietzsche’s descent into madness couldn’t stop him from playing (and, it seems, playing well).
All of them wrote about music, and each published at least one book that was explicitly autobiographical. But they seldom mentioned their own musicianship in public and never made it the focus of a book or an essay. Barthes happily accepted the offer to appear on a radio program where the guest host got to spin his favorite recordings. But the tapes he made at home of his own performances were never for public consumption. He was an unabashed amateur, and recording himself was just a way to get better.
Early on, a conductor rejected one of Nietzsche’s compositions in brutally humiliating terms, asking if he meant it as a joke. But he went on playing and composing anyway, leaving behind about 70 works, including, strange to say, a mass.
As for Sartre, he admitted to daydreams of becoming a jazz pianist. “We might be even more surprised by this secret ambition,” Noudelmann says, “when we realize that Sartre did not play jazz! Perhaps this was due to a certain difficulty of rhythm encountered in jazz, which is so difficult for classical players to grasp. Sight-reading a score does not suffice.” It don’t mean a thing if it ain’t got that swing.
These seemingly minor or incidental details about the thinkers’ private devotion to the keyboard give Noudelmann an entrée to a set of otherwise readily overlooked problems concerning both art -- particularly the high-modernist sort -- and time.
In their critical writings, Sartre and Barthes always seemed especially interested in the more challenging sorts of experimentation (Beckett, serialism, Calder, the nouveau roman, etc.) while Nietzsche was, at first anyway, the philosophical herald of Wagner’s genius as the future of art. But seated at their own keyboards, they made choices seemingly at odds with the sensibility to be found in their published work. Sartre played Chopin. A lot. So did Nietzsche. (Surprising, because Chopin puts into sound what unrequited love feels like, while it seems like Nietzsche and Sartre are made of sterner stuff.) Nietzsche also loved Bizet’s Carmen. His copy of the score “is covered with annotations, testifying to his intense appropriation of the opera to the piano.” Barthes liked Chopin but found him too hard to play, and shifted his loyalties to Schumann -- becoming the sort of devotee who feels he has a uniquely intense connection with an artist. “Although he claims that Schumann’s music is, through some intrinsic quality, made for being played rather than listened to,” writes Noudelmann, “his arguments can be reduced to saying that this music involves the body that plays it.”
Such ardor is at the other extreme from the modernist perspective for which music is the ideal model of “pure art, removed from meaning and feeling,” creating, Noudelmann writes, “a perfect form and a perfect time, which follow only their own laws.... Such supposed purity requires an exclusive relation between the music and a listener who is removed from the conditions of the music’s performance.”
But Barthes’s passion for Schumann (or Sartre’s for Chopin, or Nietzsche’s for Bizet) involves more than relief at escaping severe music for something more Romantic and melodious. The familiarity of certain compositions; the fact that they fall within the limits of the player’s ability, or give it enough of a challenge to be stimulating; the way a passage inspires particular moods or echoes them -- all of this is part of the reality that playing music “is entirely different from listening to it or commenting on it.” That sounds obvious but it is something even a bad performer sometimes understands better than a good critic.
“Leaving behind the discourse of knowledge and mastery,” Noudelmann writes, “they maintained, without relent and throughout the whole of their existence, a tacit relation to music. Their playing was full of habits they had cultivated since childhood and discoveries they had made in the evolution of their tastes and passions.” More is involved than sound.
The skills required to play music are stored, quite literally, in the body. It’s appropriate that Nietzsche, Sartre, and Barthes all wrote, at some length, about both the body and memory. Noudelmann could have belabored that point at terrific length and high volume, like a La Monte Young performance in which musicians play two or three notes continuously for several days. Instead, he improvises with skill in essays that pique the reader's interest, rather than bludgeoning it. And on that note, I must now go do terrible things to a Gibson electric guitar.
Zygmunt Bauman makes a passing reference to his “uncannily long life” in On Education: Conversations with Riccardo Mazzeo (Polity). I was surprised to learn that he is, in fact, not quite five months shy of his 87th birthday. With all due respect to someone old enough to have experienced the Hitler-Stalin pact as a personal problem (his Polish-Jewish family had to emigrate to the Soviet Union), current demographic trends are making longevity seem astounding only when your age runs to three digits.
Such judgments are always relative, of course. What the man is, without a doubt, is freakishly prolific. By the time the University of Leeds made him professor emeritus of sociology in 1990, Bauman had published some 25 books. At least two of them, Legislators and Interpreters: On Modernity, Post-Modernity, Intellectuals (1987) and Modernity and the Holocaust (1989), qualify as masterpieces. (Both were published by Cornell University Press.) Since retiring, Bauman has published another 40 books, more or less. At this point, the author himself has probably lost count.
Bauman’s last few books have been assemblages of commentary on a variety of topics. On Education certainly belongs to that cycle. It consists of 20 exchanges with Riccardo Mazzeo, an editor at the Italian publishing house Edizioni Erickson, conducted by e-mail from June to September 2011. Mazzeo poses a question or makes an observation, sometimes on education and sometimes not. Bauman replies at length, and usually at a tangent. Its title notwithstanding, the book is neither focused on education nor, really, all that conversational. What it resembles more than anything else is the set of essays and notes gathered last year under the Magritte-ish title This is Not a Diary (Polity, 2011).
For the past dozen years, Bauman has been writing about what he calls the “liquid modernity” of contemporary industrialized and digitalized societies: the structure-in-flux emerging from a confluence of technological innovation, consumerism, and constantly changing demands on the adaptability of the labor force. His thinking about the unstoppable cultural torrent of liquid modernity resembles a combination of Daniel Bell’s sociological work on The Coming Post-Industrial Society (1973) and The Cultural Contradictions of Capitalism (1976) with Jean-François Lyotard’s reflections on The Postmodern Condition (1979), as updated via Thomas Friedman’s globalization punditry in The Lexus and the Olive Tree (1999).
Not that Bauman is a mash-up theorist; I cite these authors only by way of triangulation, not as influences. His work is grounded, rather, in the founding concern of sociology in the 19th century: the effort to understand the world taking shape under the impact of industrialization. The pace of change was much quicker than was imaginable in pre-industrial times, and the span of transformations was much wider. The classic period of sociology (the days of Marx, Durkheim, and Weber) analyzed how modernity differed from and disrupted -- but sometimes also absorbed and refashioned -- the institutions and traditions established by earlier ways of life.
Along the way, it came to seem as if the shifts and upheavals of industrial society could be understood and even (this was the imagination of the technocrat and the ideologue kicking in) brought under control. And if you couldn’t engineer change, at least it was reasonable to assume you could plan for it. If the population of a city is likely to grow at a certain rate, for example, it would be feasible to project whether more schools need to be built over the next decade -- and if so, where. The area’s chief industry might boom or bust, in which case your population projections would have led you astray. Even so, the ethos of “solid modernity” was confident enough to regard contingency as a risk, but one with a margin of error you could try to anticipate.
Liquid modernity is more volatile than its predecessor, characterized by changes that don’t so much interact as cascade. Planning for the city’s educational needs would be less confident, the risks more complex and cumulative. Suppose, in the best of all worlds, economic good times come to the city, bringing an influx of population with them. But the very currents that brought the newcomers in might well send them back out again, and so they may not feel enough connection to the place to regard property taxes as anything but an infringement of their human rights. Projecting public expenditures then becomes, if not impossible, something of a shot in the dark.
The curriculum was once stable enough for school systems to use the same textbooks for years on end. But no longer. Now it’s necessary to invest in educational hardware and software, in full knowledge of obsolescence as a problem. That could prove a more contentious issue than overcrowding. And so on.
Bauman doesn't use the school-board analogy I've made here, but it seems as good a way as any to show the implications of his thinking about education. The lesson of “the liquid modern world,” he writes, “is that nothing in that world is bound to last, let alone forever. Objects recommended today as useful and indispensable tend to ‘become history’ well before they have had time to settle down and turn into a need or habit…. Everything is born with the brand of imminent death and emerges from the production line with a ‘use-by date’ printed or presumed. The construction of new buildings does not start unless their duration is fixed or it is made easy to terminate them on demand…. A spectre hovers over the denizens of the liquid modern world and all their labours and creations: the spectre of superfluity.”
The liquification, if that’s how to put it, affects not just infrastructure but the very goals of education. Bauman writes that “the unbounded expansion of every and any form of higher education” in recent decades was driven by the value of certification in pursuing “plum jobs, prosperity, and glory,” with the “volume of rewards steadily arising to match the steadily expanding ranks of degree holders.”
A chance at upward mobility will not be a motivation again anytime soon. Mazzeo refers to the growing ranks of what are called, in Britain, NEETs: young people “not in education, employment, or training.” At the same time, liquid modernity eats away at the long-established precondition of education itself: the expectation that, by acquiring certain fixed skills and established forms of knowledge, the student is receiving something of durable value. But durability is not a value in liquid modernity.
Almost everything Bauman says about education will be only too familiar to the sort of reader likely to pick the book up in the first place. But his knack for placing things in context, and for accounting for the uneasy feeling that this or that current development provokes, makes it a stimulating read.
Bauman is prone to leaping from trend to totality in a single bound, and he doesn’t always quite make it. “Few if any commitments last long enough to reach the point of no return,” he writes, “and it is only by accident that decisions, all deemed to be binding only ‘for the time being,’ stay in force.” This is an example of Bauman the sage turning into Bauman the scold, of overgeneralization raised to the power of crankiness.
True, the fluids of digital hyper-ephemerality were saturating human relationships even before Mark Zuckerberg came on the scene. But the word “friend” does still have meaning, in some offline contexts anyway. To be told that you keep commitments or follow through on decisions “only by accident” is considerably more insulting than Bauman, presumably, intends to be.