In an essay first published in 1948, the American folklorist and cultural critic Gershon Legman wrote about the comic book -- then a fairly recent development -- as both a symptom and a carrier of psychosexual pathology. An ardent Freudian, Legman interpreted the tales and images filling the comics’ pages as fantasies fueled by the social repression of normal erotic and aggressive drives. Not that the comics were unusual in that regard: Legman’s wider argument was that most American popular culture was just as riddled with misogyny, sadomasochism, and malevolent narcissism. And to trace the theory back to its founder, Freud had implied in his paper “Creative Writers and Daydreaming” that any work of narrative fiction grows out of a core of fantasy that, if expressed more directly, would prove embarrassing or offensive. While the comic books of Legman’s day might be as bad as Titus Andronicus – Shakespeare’s play involving incest, rape, murder, mutilation, and cannibalism – they certainly couldn’t be much worse.
But what troubled Legman apart from the content (manifest and latent, as the psychoanalysts say) of the comics was the fact that the public consumed them so early in life, in such tremendous quantity. “With rare exceptions,” he wrote, “every child who was six years old in 1938 has by now absorbed an absolute minimum of eighteen thousand pictorial beatings, shootings, stranglings, blood-puddles, and torturings-to-death from comic (ha-ha) books alone, identifying himself – unless he is a complete masochist – with the heroic beater, strangler, blood-letter, and/or torturer in every case.”
Today, of course, a kid probably sees all that before the age of six. (In the words of Bart Simpson, instructing his younger sister: “If you don't watch the violence, you'll never get desensitized to it.”) And it is probably for the best that Legman, who died in 1999, is not around to see the endless parade of superhero films from Hollywood over the past few years. For in the likes of Superman, he diagnosed what he called the “virus” of a fascist worldview.
The cosmos of the superheroes was one of “continuous guilty terror,” Legman wrote, “projecting outward in every direction his readers’ paranoid hostility.” After a decade of supplying Superman with sinister characters to defeat and destroy, “comic books have succeeded in giving every American child a complete course in paranoid megalomania such as no German child ever had, a total conviction of the morality of force such as no Nazi could even aspire to.”
A bit of a ranter, then, was Legman. The fury wears on the reader’s nerves. But he was relentless in piling up examples of how Americans entertained themselves with depictions of antisocial behavior and fantasies of the empowered self. The rationale for this (when anyone bothered to offer one) was that the vicarious mayhem was a release valve, a catharsis draining away frustration. Legman saw it as a brutalized mentality feeding on itself -- preparing real horrors through imaginary participation.
Nothing so strident will be found in Jason Dittmer’s Captain America and the Nationalist Superhero: Metaphors, Narratives, and Geopolitics (Temple University Press), which is monographic rather than polemical. It is much more narrowly focused than Legman’s cultural criticism, while at the same time employing a larger theoretical toolkit than his collection of vintage psychoanalytic concepts. Dittmer, a reader in human geography at University College London, draws on Homi Bhabha’s thinking on nationalism as well as various critical perspectives (feminist and postcolonial, mainly) from the field of international relations.
For all that, the book shares Legman’s cultural complaints to a certain degree, although none of his work is cited. But first, it’s important to stress the contrasts, which are, in part, differences of scale. Legman analyzed the superhero as one genre among others appealing to the comic-book audience -- and that audience, in turn, as one sector of the mass-culture public.
Dittmer instead isolates – or possibly invents, as he suggests in passing – a subgenre of comic books devoted to what he calls “the nationalist superhero.” This character-type first appears not in 1938, with the first issue of Superman, but in the early months of 1941, when Captain America hits the stands. Similar figures emerged in other countries, such as Captain Britain and (somewhat more imaginatively) Nelvana of the Northern Lights, the Canadian superheroine. What set them apart from the wider superhero population was their especially strong connection with their country. Nelvana, for instance, is the half-human daughter of the Inuit demigod who rules the aurora borealis. (Any relationship with actual First Nations mythology here is tenuous at best, but never mind.)
Since Captain America was the prototype -- and since many of you undoubtedly know as much about him as I did before reading the book, i.e., nothing -- a word about his origins seems in order. Before becoming a superhero, he was a scrawny artist named Steve Rogers who followed the news from Germany and was horrified by the Nazi menace. He tried to join the army well before the U.S. entered World War Two but was rejected as physically unfit. Instead, he volunteered to serve as a human guinea pig for a serum that transformed him into an invincible warrior. And so, as Captain America -- outfitted with shield and spandex in the colors of Old Glory -- he went off to fight the Red Skull, who was not only a supervillain but a close personal friend of Adolf Hitler.
Now, no one questions Superman’s dedication to “truth, justice, and the American way,” but the fact remains that he was an alien who just happened to land in the United States. His national identity is, in effect, luck of the draw. (I learn from Wikipedia that one alternate-universe narrative of Superman has him growing up on a Ukrainian collective farm as a Soviet patriot, with inevitable consequences for the Cold War balance of power.) By contrast, Dittmer’s nationalist superhero “identifies himself or herself as a representative and defender of a specific nation-state, often through his or her name, uniform, and mission.”
But Dittmer’s point is not that the nationalist superhero is a symbol for the country or a projection of some imagined or desired sense of national character. That much is obvious enough. Rather, narratives involving the nationalist superhero are one part of a larger, ongoing process of working out the relationship between the two entities yoked together in the term “nation-state.”
That hyphen is not an equals sign. Citing feminist international-relations theorists, Dittmer suggests that one prevalent mode of thinking counterposes “the ‘soft,’ feminine nation that is to be protected by the ‘hard,’ masculine state” -- which is also defined, per Max Weber, as claiming a monopoly on the legitimate use of violence. From that perspective, the nationalist superhero occupies the anomalous position of someone who performs a state-like role (protective and sometimes violent) while also trying to express or embody some version of how the nation prefers to understand its own core values.
And because the superhero genre in general tends to be both durable and repetitive (the supervillain is necessarily a master of variations on a theme), the nationalist superhero can change, within limits, over time. During his stint in World War II, Captain America killed plenty of people in combat, with gusto and no qualms. It seems that he was frozen in a block of ice for a good part of the 1950s, but was thawed out somehow during the Johnson administration without lending his services to the Vietnam War effort. (He went to Indochina just a couple of times, to help out friends.) At one point, a writer was on the verge of turning the Captain into an overt pacifist, though the publisher soon put an end to that.
Even my very incomplete rendering of Dittmer’s ideas here will suggest that his analysis is a lot more flexible than Legman’s denunciation of the superhero genre. The book also makes more use of cross-cultural comparisons. Without reading it, I might never have known that there was a Canadian superhero called Captain Canuck, much less the improbable fact that the name is not satirical.
But in the end, Legman and Dittmer share a sense of the genre as using barely conscious feelings and attitudes in more or less propagandistic ways. They echo the concerns of one of the 20th century's definitive issues: the role of the irrational in politics. And that doesn't seem likely to become any less of a problem any time soon.
The University of Pittsburgh Press is printing new copies of two collections of poetry by Richard Blanco, the inaugural poet selected by President Obama, and the press is preparing to release a new volume, which will include the inaugural poem, The Pittsburgh Post-Gazette reported. Orders are coming in fast. The books currently available from Pitt are City of a Hundred Fires and Looking for the Gulf Motel.
A couple of months ago I interrupted several years of procrastination and finally got around to a time-consuming bit of housework: unpacking each volume from every shelf in my library, flipping it (the shelf, that is), and then putting the books into a more orderly state than they had been in for a long time. It was the work of several days. The shelves are thick and sturdy, but they had borne two rows of books, plus whatever could be fitted in horizontally, for more than a decade. With a dozen tiers to process -- at eight shelves per tier, and 25 to 50 volumes per shelf -- I had an incentive to build up all the mindless, robotic momentum possible. Stopping to read anything was strictly forbidden, for all the good that did.
They were, and still are, organized alphabetically by author’s name. Friends occasionally express dismay at this. It seems the most impersonal system possible short of arranging them by color. But putting them back on the shelf -- after sifting and sorting them, and a lot of dusting -- proved anything but impersonal. It was comparable to reading an old journal – that is, an experience of numbing repetitiveness, interrupted by melancholy and embarrassment. Several volumes have inscriptions from friends who have died. My copy of the collected Edgar Allan Poe was a Christmas present from pre-adolescence, when bookplates evidently struck me as the height of sophistication. What is stranger -- the extensive academic literature concerning UFO-based religions, or the fact that I seem to own all of it?
Memory kept sabotaging discipline. It’s amazing the job ever got done. The same cannot be said of winnowing through a couple of filing cabinets loaded with photocopies, a week or so later, which involved no more complex sentiment than a kind of satisfying ruthlessness. And with digital text, you don’t even get that. Every so often I copy all the e-books and article PDFs from the laptop to a flashdrive, which then gets dropped into a coffee cup on my desk, along with the others. Sorting and purging the e-library hardly seems worth the effort. Any item in it can be located and extracted within a few minutes. I have no fond memory of acquiring any of them. Downloading a book from Amazon must be consumerism at its most disenchanted. For that matter, thinking back on the e-books I’ve read, what comes to mind is almost always information, rather than the experience of reading them.
Andrew Piper’s Book Was There: Reading in Electronic Times (University of Chicago Press) occupies a niche somewhere between a couple of fields of study that were already interdisciplinary. One is the history of the book, from scroll to e-reader. The other is a phenomenological psychology of reading – an effort, that is, to describe the concrete experience of engaging with the written word, which involves more than the sense of sight, or even the neural processes that somehow convert squiggles into meaning.
“Books have been important to us,” Piper writes in a passage that made me glad to have read him, “because of the way our interactions with them span several domains of sensory and physical experience. Whether it is through the acts of touch, sight, sound, sharing, or acquiring a sense of place, [our] embodied, and at times impersonal, ways of interacting with books coalesce to magnify the learning that takes place through them. The same information processed in different ways and woven together is one of the profound secrets of bookish thought.”
Piper, an associate professor of languages, literature, and culture at McGill University, in Montreal, won the Modern Language Association’s Prize for a First Book with Dreaming in Books: The Making of the Bibliographic Imagination in the Romantic Age (2009), also published by the University of Chicago Press; and his paper “Rethinking the Print Object: Goethe and the Book of Everything” (2006) received the Goethe Society of North America’s annual essay prize. While no less grounded in European cultural history than his earlier work, Book Was There (its title taken from Gertrude Stein) is more digressive and memoiristic. Parenthood supplements scholarship: one of his children learned to read as Piper was writing the book, and his reflections on reading as an aspect of self-fashioning are at least partly grounded in family life.
His intent is not -- as the subtitle "Reading in Electronic Times" might suggest -- to deliver a screed against the e-text flood. Book Was There shows a wide knowledge of contemporary digital art and literature, and Piper makes brief mention of his role in a collaborative project to create a computer model of the impact of The Sorrows of Young Werther on subsequent literature. Like anyone who has given the matter more than a soundbite’s worth of thought, he recognizes that the relationship between the cultural system now emerging and the previous thousand years of human civilization involves both continuities and disruptions, for better and for worse.
What Piper does insist on is the specificity of how we interact with text when it is incarnated as the artifact of the three-dimensional book. This begins with the hand, which navigates a bound volume in a way distinct from the turning of a scroll or the button-punching we do on a Kindle or Nook. (For one thing, the ancient and the digital formats resemble each other at least as much as either resembles the codex or a book from the Gutenberg era.) One of Piper’s core ideas, radiating out in several directions throughout the book, is that the sense of touch creates “a form of redundancy” in the overall experience of reading, “enfolding more sensory information into what we see and therefore what we read.”
Someone who has lived closely with a given volume for a long time will have a sense of what Piper means. Even without bookmarks or notes, you often know how to look up something in it fairly quickly. The citation is also a location which you can (literally) feel your way to finding.
But there is more to it than that. “As early as the twelfth century,” Piper notes, “writers began drawing hands in the margins of their books to point to important passages. Such a device gradually passed into typescript and became a commonplace of printed books.” And in that regard it mimicked the book’s own role in pointing to specific aspects of the world – making them, in a hand-related analogy, “graspable.” (The implicit contrast here would be the link, which serves as another way to direct the reader’s eyes, but with the constant risk of diffusing attention instead of directing it.)
Handwriting is another mode of engagement with text -- one considerably less efficient than today’s cut-and-paste norm, as Piper acknowledges, but of value precisely for its slowness, which permits incorporation of meaning rather than the aggregation of content. He cites research showing a significant relationship between writing by hand and drawing:
“Early elementary school students who draw before they write tend to produce more words and more complex sentences than those who do not. And as historians of writing have shown, writing makes drawing more analytical. It allows for more complex visual structures and relations to emerge. As Goethe remarked, word and image, drawing and writing, are correlates that eternally search for one another. Handwriting is an integral means of their convergence.”
Cyberculture is all about convergence, if not in Goethean terms. It overloads the reader’s sensorium through every conceivable channel of communication (preferably at the same time) while bombarding us with invitations to respond to its messages, right away. “Interactivity is a constraint,” Piper writes, “not a freedom.” As if glossing his point, a satirical article in The Onion recently reported that Internet users had had enough. “Nobody needs to get my immediate take on everything I see online,” it quotes one woman as saying. “…. At best I’m just going to parrot back some loose approximation of what I’ve heard before, which will just prove that I never should have weighed in in the first place.”
Piper suggests that reading of the older sort provokes a kind of anxiety in the culture now because of its seeming isolation and inertia – so out of step with the drive for connectivity, quantifiable impact, and a rapid turnover in goods and services. But -- to continue his point -- the physical inertia tends to generate a much more intense internal dynamism, making for more complex and lasting patterns of meaning.
That sounds right. Given the limits of space, my acquisition of hardbacks and paperbacks must slow down; at this point, the ones on hand are saturated enough with significance to last the rest of my days. But the e-texts filling my coffee cup can accumulate as rapidly as ever. No shelf bends under the weight, and their imprint on my memory is like footprints in the snow.
Submitted by Rob Weir on January 22, 2013 - 3:00am
Stewart Brand is credited with coining the phrase "information wants to be free." In the wake of the suicide of 26-year-old cyber activist Aaron Swartz, we need to re-evaluate that assumption.
Brand, the former editor of The Whole Earth Catalog and a technology early adopter, is a living link between two great surges in what has been labeled "the culture of free": the 1967 Summer of Love and the Age of Information that went supernova in the late 1990s. Each period has stretched the definition of "free."
During the Summer of Love, the Diggers Collective tried to build a money-free enclave in San Francisco’s Haight-Ashbury district. They ran "free" soup kitchens, stores, clinics, and concerts. Myth records this as a noble effort that ran aground; history reveals less lofty realities. "Free" was in the eye of the beholder. The Diggers accumulated much of the food, clothing, medicine, and electronic equipment they redistributed by shaking down local merchants like longhaired mob muscle. Local merchants viewed Digger "donations" as a cost of doing business, analogous to lost revenue from shoplifting. Somebody paid for the goods; it just wasn’t the Diggers or their clients.
Move the clock forward. Aaron Swartz’s martyr status crystallizes as I type. As the legend grows, Swartz was a brilliant and idealistic young man who dropped out of Stanford and liberated information for the masses until swatted down by multinational corporations, elitist universities, and the government. Faced with the prospect of spending decades behind bars on charges related to hacking into JSTOR, a depressed Swartz committed suicide. (In truth, as The Boston Globe has reported, a plea bargain was nearly in place for a four-to-six-month sentence.)
I am sorry that Swartz died, and couldn’t begin to say whether he was chronically depressed, or if his legal woes pushed him over the edge. I do assert, though, that he was no hero. The appropriate label is one he once proudly carried: hacker. Hacking, no matter how principled, is a form of theft.
It’s easy to trivialize what Swartz did because it was just a database of academic articles. I wonder if his supporters would have felt as charitable if he had "freed" bank deposits. His was not an innocent act. The Massachusetts Institute of Technology and the Commonwealth of Massachusetts took the not-unreasonable position that there is a considerable difference between downloading articles from free accounts registered with a university, and purloining 4.8 million documents by splicing into wiring accessed via unauthorized entry into a computer closet. That’s hacking in my book – the moral equivalent of diverting a bank teller with a small transaction whilst a partner ducks behind the counter and liberates the till.
Brand and his contemporaries often parse the definition of free. Taking down barriers and making data easier to exchange is “freeing” in that changing technology makes access broader and cheaper to deliver. Alas, many young people don’t distinguish between "freeing" and "free." Many of my undergrads think nearly all information should come at no cost – free online education, free movies, free music, free software, free video games…. Many justify this as Swartz did: that the value of ideas and culture is artificially inflated by info robber barons.
They’re happy to out the villains: entrenched university administrations, Hollywood producers, Netflix, the Big Three record labels, Amazon, Microsoft, Nintendo, Sega…. I recently had a student pulled from my class and arrested for illegal music downloading. He was considerably less worried than Swartz and pronounced, "I fundamentally don’t believe anyone should ever have to pay for music." This, mind you, after I shared tales of folk musicians and independent artists who can’t live by their art unless they can sell it.
Sorry, but this mentality is wrong. Equally misguided are those who, like Swartz before his death, seek to scuttle the Stop Online Piracy Act and the Protect Intellectual Property Act. Are these perfect bills? No. Do they protect big corporations, but do little to shelter the proverbial small fish? Yes. Do we need a larger political debate about the way in which conglomeration has stifled innovation and competition? Book me a front-row seat for that donnybrook. Are consumers of everything from music to access to academic articles being price gouged? Probably. But the immediate possibility of living in a world in which everything is literally free is as likely as the discovery of unicorns grazing on the Big Rock Candy Mountain.
Let’s turn to JSTOR, the object of Swartz’s most recent hijinks. (He was a repeat offender.) JSTOR isn’t popular among librarians seeking subscription money, or those called upon to pay for access to an article (which is almost no one with a university account who doesn’t rewire the network). Many wonder why money accrues to those whose only "creation" is to aggregate the labor of others, especially when some form of taxpayer money underwrote many of the articles. That’s a legitimate concern, but defending Swartz’s method elevates vigilantism above the rules of law and reason. More to the point, reckless "liberation" often does more harm than good.
JSTOR charges university libraries a king’s ransom for its services. Still, few libraries could subscribe to JSTOR’s 1,400 journals more cheaply on their own. (Nor do many have the space to store the physical copies.) Institutional subscriptions to top journals are pricey: go to the Oxford University Press website and you’ll find that very few can be secured for under $200 per volume, and several run over $2,000. One must ultimately confront a question ignored by the culture of free: Why does information cost so much?
Short answer: Because journals don’t grow on trees. It’s intoxicating to think that information can be figuratively and literally free, until one assembles an actual journal. I don’t care how you do it; it’s going to cost you.
I’m the associate editor of a very small journal in the academic pond. We still offer print journals, which entails thousands of dollars in printing and mailing costs for each issue. Fine, you say, print is dead. Produce an e-journal. Would that be "free?" Our editor is a full-time academic. She can only put in the hours needed to sift articles, farm them out for expert review, send accepted articles to copy editors, forward copy to a designer, and get the journal to subscribers because her university gives her a course reduction each semester. That’s real money; it costs her department thousands of dollars to replace her courses. Design, copy editing, and advertising fees must be paid, and a few small stipends are doled out. Without violating confidentiality I can attest that even a modest journal is expensive to produce. You can’t just give it away, because subscribers pick up the tab for everything that can’t be bartered.
Could you do this free online with no membership base? Sure – with a team of editors, designers, and Web gurus who don’t want to be paid for the countless hours they will devote to each issue. Do you believe enough in the culture of free to devote your life to uncompensated toil? (Careful: The Diggers don’t operate those free stores anymore.) By the way, if you want anyone to read your journal, you’ll give it to JSTOR or some other aggregator. Unless, of course, you can drum up lots of free advertising.
The way forward in the Age of Information begins with an honest assessment of the hidden costs within the culture of free. I suggest we retire the sexy-but-hollow phrase “information wants to be free” and resurrect this one: "There’s no such thing as a free lunch." And for hackers and info thieves, here’s one from my days as a social worker: "If you can’t do the time, don’t do the crime."
Rob Weir teaches history at Smith College. He is the author of Inside Higher Ed's "Instant Mentor" career advice column.
Michael Barera has been named Wikipedian in residence at the Gerald R. Ford Presidential Library at the University of Michigan -- the first such position at a presidential library. Barera will focus on expanding the availability of information about President Ford and the library's holdings on Wikipedia through the Gerald Ford WikiProject.
I don’t think there’s much more to say about Aaron Swartz. I didn’t know him personally, but like many others I am a beneficiary of the work he did. And I have agreed for much of my life as an academic with the thinking that led him to his fateful act in a closet at the Massachusetts Institute of Technology. Most centrally, that there are several ethical imperatives that should make everything that JSTOR (or any comparable bundling of scholarly publication) holds freely available to everyone: much of that work was underwritten directly or indirectly by public funds, the transformative impact of open access on inequality is already well-documented, and it's in keeping with the obligations and values that scholars allege to be central to their work.
Blame is coming down heavy on MIT and JSTOR, both of which were at pains to distance themselves from the legal persecution of Swartz even before news of his suicide broke, particularly JSTOR, which very early on asked that Swartz not be prosecuted. Blame is coming down even more heavily, as it should, on federal prosecutors who have been spewing a load of spurious garbage about the case for over a year. They had discretion and they abused it grievously in an era when vast webs of destructive and criminal activities have been discretionarily ignored if they stem from powerful men and powerful institutions. They chose to be Inspector Javert, chasing down Swartz over a loaf of bread.
But if we’re talking blame, then there’s a diffuse blame that ought to be conferred. In a way, it’s odd that MIT should have been the bagman for the ancien régime: its online presence and institutional thinking about digitization have otherwise been forward-looking in many respects. If MIT allowed itself to be used by federal prosecutors looking to put an intellectual-property head on a pike, that is less an extraordinary gesture by MIT and more a reflection of the academic default.
I’ve been frustrated for years, like other scholars and faculty members who take an interest in these issues, at the remarkable lassitude of academia as a whole toward publication, intellectual property, and digitization. Faculty who tell me passionately about their commitment to social justice are either indifferent to these concerns or, at times, actively supportive of the old order. They defend the ghastly proposition that universities (and governments) should continue to subsidize the production of scholarship that is then donated to for-profit publishers, who then charge high prices to loan that work back to the institutions that subsidized its creation -- and the corollary, demanded by those publishers, that the circulation of such work should be limited to those who pay those prices.
Print was expensive, print was specialized, and back in the age of print, what choice did we have? We have a choice now. Everything, everything, about the production of scholarship can be supported by consortial funds within academe. The major added value is provided by scholars, again largely for free, in the work of peer review. We could put the publishers who refuse to be partners in an open world of inquiry out of business tomorrow, and the only cost to academics would be the loss of some names for journals. Every journal we have can just have another name and be essentially the same thing. Every intellectual, every academic, every reader, every curious mind that wants to read scholarly work could be reading it tomorrow if they had access to a basic Internet connection, wherever they are in the world. Which is what we say we want.
A colleague told me a decade ago that this shift wouldn’t be a positive development because there’s a digital divide -- not everyone has access to digital devices, especially in the developing world. I asked this colleague, whose work is focused on the U.S., if she knew anything about the costs and problems that print imposed on libraries, archives, and universities around the world, and of course she didn’t. Digitized scholarship can’t be lost or stolen the way that print can be; it doesn’t have to be mailed, it doesn’t require physical storage, it can’t be eaten by termites, it can’t grow mold. If it were freed from the grasp of the publishers who charge insane prices for it, it could be disseminated for comparatively small costs to any institution or reader who wants access. Collections can be uniformly large everywhere there’s a connection: what I can read and research, a colleague in Nairobi or Beijing or Moscow or São Paulo can read and research, unless their government (or mine) interferes. That simply couldn’t be in the age of print. Collections can support hundreds or thousands of simultaneous readers rather than just the one who has something checked out. I love the materiality of books, too, but on these kinds of issues there’s no comparison. And no justification.
The major thing standing in the way of this change is the passivity of scholars themselves. Aaron Swartz’s action, and its consequences, had as much to do with that generalized indifference as with any specific institution or organization. Not all culture needs to be open, and not all intellectual property claims are spurious. But scholarship should be and could be different, and has a claim to difference deep in its alleged values. There should be nothing that stops us from achieving the simplest thing that Swartz was asking of us, right now, in memory of him.
Timothy Burke is professor of history at Swarthmore College.
Between all the fiscal cliff-hanging and the preparations for the inauguration later this month, nobody inside the Beltway is paying much attention to the burgeoning political-science literature on the electoral significance of presidential dog ownership.
Well, official Washington has its priorities, and I have mine. A paper on the topic appears in the January issue of the American Political Science Association’s journal PS: Political Science & Politics. “Burgeoning” is something of an overstatement, but it’s the second time an article on dogs and the presidency has appeared there in a couple of years. So, close enough.
The work of Matthew L. Jacobsmeier and Daniel C. Lewis, two assistant professors of political science at the University of New Orleans, “Barking Up the Wrong Tree: Why Bo Didn’t Fetch Many Votes for Barack Obama in 2012,” is full of statistics and (let its title be fair warning) puns. Their argument builds on the work of Diana C. Mutz (right, I know) whose paper “The Dog That Didn’t Bark: The Role of Canines in the 2008 Campaign” appeared in the October 2010 issue of PS. It would be more accurate to say that Jacobsmeier and Lewis undermine and overturn her analysis, but at least they are friendly about it. (Mutz is a professor of poli sci at the University of Pennsylvania and Princeton University.)
Documentation of the role of pets in the history of the executive branch already existed when Mutz set to work, though it was, for the most part, anecdotal. But she could cite a survey from 2006 indicating that, in local elections at least, not quite 99 percent of dog owners responded that “a candidate’s position or track record on issues such as breed discrimination, breed bans, or leash laws played a significant role in their electoral choice.”
That statistic is at least somewhat questionable, coming as it does from My Dog Votes™, identified as “the world’s only company with a mission of Saving Dogs and Democracy … [by means of] clothing, accessories, and real campaign gear.” And the effect of dog-related issues on voter behavior during national elections remains very much an understudied question. Be that as it may, Mutz ventured a significant interpretation of the 2008 election -- which, while historic, was short of the landslide many expected.
She wrote: “Early in his run for the presidency, Obama made a widely publicized promise to get his daughters a dog after the election, regardless of the outcome. This gesture may have been superficially endearing, as promises go. However, I argue that in the end, this promise backfired on Obama by raising the salience of his family’s doglessness and thus alienating a significant proportion of the population.”
Mutz drew on data collected by the National Annenberg Election Study, a poll that “tracked a large, randomly selected sample of respondents throughout the 2008 presidential campaign.” Pet ownership was one of numerous characteristics recorded in the survey, along with gender, income, educational level, size of household, gun ownership, party identification, and the respondent’s perception of whether the economy was improving or worsening.
The problem was to determine how much weight dog ownership had as a variable affecting voters’ feelings about whether they would be likely to support a candidate. That means taking into account, through regression analysis, the strength of the other factors (gender, income, etc.) and any confluence between them. The results varied across Mutz’s four models, but dog ownership consistently proved to be a negative predictor for an Obama ballot for 1.7 to 5 percent of those surveyed – and among subjects who reported their votes, “the odds decreased by 16 percent if the respondent was a dog owner.”
Mutz offered two possible explanations for this remarkable gap. One was the failure of group identification: “The minimal group paradigm suggests that in-group favoritism can be stimulated even by very weak, transient, and meaningless group identifications.... Whether for symbolic or imputed substantive reasons, group identification theory suggests that, all else being equal, dog owners should be drawn to dog-owning candidates.”
An alternative (not mutually exclusive by any means) was the “congruity-oriented theory” that owners of a particular sort of pet will prefer candidates with similar characteristics, such as “emotional transparency and straightforward displays of emotion” in the case of dogs. That would present difficulties for an altogether feline politician such as Obama.
The scholarship has advanced considerably since the days of Gibbs Davis’s Wackiest White House Pets (2004), and it should come as no surprise to learn that others have revisited Mutz’s data from an alternative perspective. While admiring her analysis as “particularly elegant and compelling,” Jacobsmeier and Lewis challenge it on the basis of “our graduate school experiences [which] included Pavlovian training in the detection of omitted variable bias.”
The omitted variable, in this case, is region. The American Veterinary Medical Association reports that 37.2 percent of American households included a dog in 2006, but they are not evenly distributed. “Rates of dog ownership clearly vary with geographical location,” write Jacobsmeier and Lewis. “Using census region as the geographical unit, dog ownership is most common in the South and least common in the Northeast.”
The data also shows that “a large gap in dog ownership exists between black and white respondents” -- with whites having the higher rates, as do gun owners, home owners, and people living in rural areas. Mutz’s regression models took into account respondents’ party affiliations and how strongly they identified themselves as liberal or conservative, and tried thereby to isolate dog ownership as an independent factor. Instead, it proves to be a kind of proxy for “red state”-ism.
Among the pools of data the authors tapped into while doing their research, evidently, was an exhaustive collection of canine-pertinent verbs, images, sayings, etc., every single one of which was then incorporated into the paper. It seems like something best done with monomaniacal thoroughness, if you’re going to do it at all. I have managed to keep most of them out of this column, but you can find them all -- along with many other interesting points scanted here -- in a prepublication copy of the paper.
After a successful pilot, JSTOR is launching its Register & Read program, which lets anyone read up to three articles from 1,200 of its journals every two weeks in exchange for demographic information.