When downloading an app or approving a software update, I now usually hesitate for a moment to consider something the comedian John Oliver said early this summer: a software company could include the entire text of Mein Kampf in the user agreement and people would still click the “agree” button.
“Hesitate” is the wrong word for something that happens in a fraction of a second. It’s not as if I ever scrolled back through to make sure that, say, Microsoft is not declaring that it owns the copyright to everything written in OneNote or Word. The fine print goes on for miles, and anyway, a user agreement is typically an all-or-nothing proposition. Clicking “agree” is less a matter of trust than of resignation.
But then, that’s true about far more of life in the contemporary digital surround than most of us would ever want to consider. Every time you buy something online, place a cell phone call, send or receive a text message or email, or use a search engine (to make the list no longer nor more embarrassing than that), it is with a likelihood, verging on certainty, that the activity has been logged somewhere -- with varying degrees of detail and in ways that might render the information traceable directly back to you. The motives for gathering this data are diverse; so are the companies and agencies making use of it. An online bookseller tracks sales of The Anarchist Cookbook in order to remind customers that they might also want a copy of The Minimanual of the Urban Guerrilla, while the National Security Agency will presumably track the purchase with an eye to making correlations of a different sort.
At some level we all know such things are happening, probably without thinking about it any more often than strictly necessary. Harder to grasp is the sheer quantity and variety of the data we generate throughout the day -- much of it trivial, but providing, in aggregate, an unusually detailed map of what we do, who we know and what’s on our minds. Some sites and applications have “privacy settings,” of course, which affect the totality of the digital environment about as much as a thermostat does the weather.
To be a full-fledged participant in 21st-century society means existing perpetually in a state of information asymmetry, in the sense described by Finn Brunton and Helen Nissenbaum in Obfuscation: A User’s Guide for Privacy and Protest (MIT Press). You don’t have to like it, but you do have to live with it. The authors (who teach media culture and communications at New York University, where they are assistant professor and professor, respectively) use the term “obfuscation” to identify various means of leveling the playing field, but first it’s necessary to get a handle on information asymmetry itself.
For one thing, it is distinct from the economic concept of asymmetrical information. The latter applies to “a situation in which one party in a transaction has more or superior information compared to another.” (So I find it explained on a number of websites ranging from the scholarly to the very sketchy indeed.) The informed party has an advantage, however temporary; the best the uninformed can do is to end up poorer but wiser.
By contrast, what Brunton and Nissenbaum call information asymmetry is something much more entrenched, persistent and particular to life in the era of Big Data. It occurs, they explain, “when data about us are collected in circumstances we may not understand, for purposes we may not understand, and are used in ways we may not understand.” It has an economic aspect, but the implications of information asymmetry are much broader.
“Our data will be shared, bought, sold, analyzed and applied, all of which will have consequences for our lives,” the authors write. “Will you get a loan, or an apartment, for which you applied? How much of an insurance risk or a credit risk are you? What guides the advertising you receive? How do so many companies and services know that you’re pregnant, or struggling with an addiction, or planning to change jobs? Why do different cohorts, different populations and different neighborhoods receive different allocations of resources? Are you going to be, as the sinister phrase of our current moment of data-driven antiterrorism has it, ‘on a list’?”
Furthermore (and here Brunton and Nissenbaum’s calm, sober manner can just barely keep things from looking like one of Philip K. Dick’s dystopian novels), we have no way to anticipate the possible future uses of the galaxies of personal data accumulating by the terabyte per millisecond. The recent series Mr. Robot imagined a hacker revolution in which all the information related to personal debt was encrypted so thoroughly that no creditor would ever have access to it again. Short of that happening, obfuscation may be the most practical response to an asymmetry that’s only bound to deepen with time.
A more appealing word for it will probably catch on at some point, but for now “obfuscation” names a range of techniques and principles created to make personal data harder to collect, less revealing and more difficult to analyze. The crudest forms involve deception -- providing false information when signing up with a social media site, for example. A more involved and prank-like approach would be to generate a flood of “personal” information, some of it true and some of it expressing one’s sense of humor, as with the guy who loaded up his Facebook profile with so many jobs, marriages, relocations, interests and so on that the interest-targeting algorithms must have had nervous breakdowns.
There are programs that will click through on every advertisement that appears as you browse a site (without, of course, bothering you with the details) or enter search engine terms on topics that you have no interest in, thereby clouding your real searches in a fog of irrelevancies.
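The principle behind such tools (TrackMeNot, a browser extension co-created by Nissenbaum, works in roughly this spirit) can be sketched in a few lines. Everything below is illustrative: the decoy topics, the noise ratio and the print stand-in for a real network request are assumptions made for the sketch, not the behavior of any actual tool.

```python
import random

# Toy sketch of search-query obfuscation: mix each real query into a
# shuffled batch of decoys, so that anyone logging the traffic cannot
# easily separate the genuine interest from the noise.

DECOY_TOPICS = [
    "weather in oslo", "cast iron seasoning", "fern propagation",
    "1974 world cup final", "how do glaciers form", "sourdough hydration",
]

def obfuscated_search(real_query, noise_ratio=3):
    """Issue `noise_ratio` decoy queries alongside the real one, shuffled."""
    batch = [real_query] + random.sample(DECOY_TOPICS, noise_ratio)
    random.shuffle(batch)
    for query in batch:
        # Stand-in for actually sending the query to a search engine.
        print("searching:", query)
    return batch

obfuscated_search("brunton nissenbaum obfuscation")
```

The noise ratio is the obvious tuning knob: more decoys per real query means better cover but more wasted traffic, which is exactly the cost-benefit trade-off the authors ask obfuscators to weigh.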
The cumulative effect would be to pollute the data enough to make tracking and scrutiny more difficult, if not impossible. Obfuscation raises a host of ethical and political issues (in fact the authors devote most of their book to encouraging potential obfuscators to think about them) as well as any number of questions about how effective the strategy might be. We’ll come back to this stimulating and possibly disruptive little volume in weeks to come, since the issues it engages appear in other new and recent titles. In the meantime, here is a link to an earlier column on a book by one of the co-authors that still strikes me as very interesting and, alas, all too pertinent.
George Orwell opened one of his broadcasts on the BBC in the early 1940s by recounting how he’d learned history in his school days. The past, as his teachers depicted it, was “a sort of long scroll with thick black lines ruled across it at intervals,” he said. “Each of these lines marked the end of what was called a ‘period,’ and you were given to understand that what came afterwards was completely different from what had gone before.”
The thick black lines were like borders between countries that didn’t know one another’s languages. “For instance,” he explained, “in 1499 you were still in the Middle Ages, with knights in plate armour riding at one another with long lances, and then suddenly the clock struck 1500, and you were in something called the Renaissance, and everyone wore ruffs and doublets and was busy robbing treasure ships on the Spanish Main. There was another very thick black line drawn at the year 1700. After that it was the Eighteenth Century, and people suddenly stopped being Cavaliers and Roundheads and became extraordinarily elegant gentlemen in knee breeches and three-cornered hats … The whole of history was like that in my mind -- a series of completely different periods changing abruptly at the end of a century, or at any rate at some sharply defined date.”
His complaint was that chopping up history and simplistically labeling the pieces was a terrible way to teach the subject. It is a sentiment one can share now only up to a point. Orwell had been an average student; today it would be the mark of a successful American school district if the average student knew that the Renaissance came after the Middle Ages, much less that it started around 1500. (A thick black line separates his day and ours, drawn at 1950, when television sales started to boom.)
Besides, the disadvantages of possessing a schematic or clichéd notion of history are small by contrast to the pleasure that may come later, from learning that the past was richer (and the borders between periods more porous) than the scroll made it appear.
Must We Divide History Into Periods? asked Jacques Le Goff in the title of his last book, published in France shortly before his death in April 2014 and translated by M. B. DeBevoise for the European Perspectives series from Columbia University Press. A director of studies at L'École des Hautes Études en Sciences Sociales in Paris, Le Goff was a prolific and influential historian with a particular interest in medieval European cities. He belonged to the Annales school of historians, which focused on social, economic and political developments over very long spans of time -- although his work also exhibits a close interest in medieval art, literature and philosophy (where changes were slow by modern standards, but faster than those in, say, agricultural technique).
Le Goff’s final book revisits ideas from his earlier work, but in a manner of relaxed erudition clearly meant to address people whose sense of the past is roughly that of young Orwell. And in fact it is that heavy mark on the scroll at the year 1500 -- the break between the Middle Ages and the Renaissance -- that Le Goff especially wants the student to reconsider. (I say “student” rather than “reader” because time with the book feels like sitting in a lecture hall with a memorably gifted teacher.)
He quotes one recent historian who draws the line a little earlier, with the voyage of Christopher Columbus in 1492: “The Middle Ages ended, the modern age dawned, and the world became suddenly larger.” Le Goff is not interested in the date but in the stark contrast that is always implied. Usually the Middle Ages are figured as “a period of obscurity whose outstanding characteristic was ignorance” -- happily dispelled by a new appreciation for ancient, secular literature and a sudden flourishing of artistic genius.
Calling something “medieval” is never a compliment; the image that comes to mind is probably that of a witch trial. By contrast, “Renaissance” would more typically evoke a page from Leonardo da Vinci’s notebooks. Such invidious comparison is not hard to challenge. Witch trials were rare in the Middle Ages, while the Malleus Maleficarum appeared in “the late fifteenth century,” Le Goff notes, “when the Renaissance was already well underway, according to its advocates.”
Given his expertise on the medieval city -- with its unique institutional innovation, the university -- Le Goff makes quick work of demolishing the notion of the Middle Ages having a perpetually bovine and stagnant cultural life. The status of the artist as someone “inspired by the desire to create something beautiful” who had “devoted his life to doing just this” in pursuit of “something more than a trade, nearer to a destiny,” is recognized by the 13th century. And a passage from John of Salisbury describes the upheaval underway in the 12th century, under the influence of Aristotle:
“Novelty was introduced everywhere, with innovations in grammar, changes in dialectic, rhetoric declared irrelevant and the rules of previous teachers expelled from the very sanctuary of philosophy to make way for the promulgation of new systems …”
I can’t say that the name meant anything to me before now, but the entry on John of Salisbury in the Stanford Encyclopedia of Philosophy makes it sound as if Metalogicon (the work just quoted) was the original higher ed polemic. It was “ostensibly written as a defense of the study of logic, grammar and rhetoric against the charges of the pseudonymous Cornificius and his followers. There was probably not a single person named Cornificius; more likely John was personifying common arguments against the value of a liberal education. The Cornificians believe that training is not relevant because rhetorical ability and intellectual acumen are natural gifts, and the study of language and logic does not help one to understand the world. These people want to seem rather than to be wise. Above all, they want to parlay the appearance of wisdom into lucrative careers. John puts up a vigorous defense of the need for a solid training in the liberal arts in order to actually become wise and succeed in the ‘real world.’”
That's something an Italian humanist might write four hundred years later to champion “the new learning” of the day. And that is no coincidence. Le Goff contends that “a number of renaissances, more or less extensive, more or less triumphant,” took place throughout the medieval era -- an elaboration of the argument by the American historian Charles H. Haskins in The Renaissance of the Twelfth Century (1927), a book that influenced scholars without, as Le Goff notes, having much effect on the larger public. The Renaissance, in short, was a renaissance -- one of many -- and in Le Goff’s judgment “the last subperiod of a long Middle Ages.”
So, no bold, clean stroke of the black Magic Marker; more of a watercolor smear, with more than one color in the mix. Le Goff treats the Middle Ages as having a degree of objective reality, insofar as certain social, economic, religious and political arrangements emerged and developed in Europe over roughly a thousand years.
At the same time, he reminds us that the practice of breaking up history into periods has its own history -- deriving, in its European varieties, from Judeo-Christian ideas, and laden with ideas of decline or regeneration. “Not only is the image of a historical period liable to vary over time,” he writes, “it always represents a judgment of value with regard to sequences of events that are grouped together in one way rather than another.”
I'm not entirely clear on how, or whether, he reconciled the claim to assess periodization on strictly social-scientific grounds with its status as a concept with roots in the religious imagination, but it's a good book that leaves the reader with interesting questions.
The Book Industry Study Group just reported that 52 percent of college students surveyed agreed that “I would rather pay $100 for a learning solution that improves my result by one letter grade and reduces my study time by 25 percent than $50 for my current textbook.” As a professor, I am troubled by declines in the effort many in my classes are willing to put into doing the reading I assign. But as an administrator, I also recognize students’ concerns with scoring high grades, juggling internships and part-time jobs, and minimizing expenses.
Multiple factors are at play here: grade inflation, social pressures, student debt, the iffy job market. Also relevant is the amount of time students report studying each week (now an average of 15 hours, down from about 24 in the 1960s). Yet one of the major culprits is the price tag on textbooks and other course materials, estimated at around $1,200 a year -- assuming you buy them.
Faculty members and students alike are in a quandary over how to handle textbook costs, especially for those hefty tomes often used in introductory courses. Increasingly, students are opting not to purchase these books, or even to rent them. Digital formats (and rentals of any kind) tend to be less expensive than buying print, though frequently the decision is not to acquire the materials at all. The U.S. Public Interest Research Group reports that two-thirds of students have refrained from purchasing at least one assigned textbook because of price.
Recently, American University ran focus groups with our undergraduates, looking to get a sense of how they make textbook decisions. For courses in their major, they are willing to lay out more money than for general education classes, which they perceive (often wrongly) not to require much work anyway. Over all, the common sentiment is that spending more than about $50 for a book is excessive. And of course there are plenty of college textbooks with prices that exceed $50.
This message was reinforced by an anecdote shared with me by Michael Rosenwald, a reporter for The Washington Post. While interviewing American University students for a story on college reading and book-purchasing habits, Rosenwald asked, “Who buys course materials from the campus store these days?” Their answer: “Freshmen,” revealing that once students settle into campus life, they discover less expensive ways to get their books -- or devise strategies on how much reading they'll actually do.
For faculty members, the challenge is to find a workable balance between the amount of reading we would like those in our classes to complete and realistic expectations for student follow-through. While some full-length books may remain on our required list, their numbers have shrunk over time. These days, assignments that used to call for complete books are being slimmed down to single chapters or articles. Our aspirations for our students to encounter and absorb substantial amounts of written material increasingly rub up against their notions of how much is worth reading.
The numbers tell the tale. That same Book Industry Study Group report noted that between 2010 and 2013, the percentage of students indicating that classes they were taking required “no formal course materials” rose from 4 percent to 11 percent.
Student complaints are equally revealing. When Robert Putnam’s Bowling Alone came out, I assigned the book to a group of honors undergraduates, eager for them to experience careful, hypothesis-driven, data-rich social science research. One member of the class balked. In fact, she publicly berated me, demanding to know why I hadn’t told the group about the “short version” of the book -- meaning an article Putnam had written years earlier, before his full study was completed. She went on to inform the class what she had learned from a teacher in high school: books aren’t worth reading, only articles. The rest of what’s in books is just padding.
The author and teacher in me cringed at how this young woman perceived the intellectual enterprise.
For students, besides the understandable limitations on time and finances, there is the question of value proposition. If the objective is learning that lasts, maybe buying the book (and reading it) is worth it. But if the goal is getting a better grade, maybe not. All too often today, it is the grade that triumphs.
One player that faculty members generally leave out of the equation is the publishing industry, including not just the companies whose names are on the spines but the people who print the books, supply the paper and ink, and operate the presses. Recently I spoke at the Book Manufacturers’ Institute Conference and was troubled by the disconnect I perceived between those who produce and distribute textbooks and those who consume them. As students buy fewer books, publishers do smaller print runs, resulting in higher prices, which in turn reinforces the spiral of lower sales.
A potential compensatory financial strategy for publishers is issuing revised editions, intended to render obsolete those already in circulation. In reality, students often take a pass on these new offerings, waiting until they appear on the used book market. Yes, sometimes there is fresh, timely material in the new versions, but how often do we really need to update textbooks on the structure of English grammar or the history of early America?
When speaking with participants in the book manufacturers’ conference, I became increasingly convinced that the current model of book creation, distribution and use is not sustainable. What to do?
There is a pressing need for meaningful collaboration between faculty members and the publishing industry to find ways of producing materials designed to foster learning that reaches beyond the test -- and that students can be reasonably expected to procure and use. I would like to hope that textbook publishers (who I know are financially suffering) are in conversation not just with authors seeking book contracts but with faculty members who can share their own assignment practices, along with personal experiences about how students are voting with their feet regarding purchasing and reading decisions.
To help foster such dialogue, here are some suggestions:
Gather data on shifts in the amount and nature of reading that faculty assign, say, over the past 10-20 years.
Reconsider publishing strategies regarding those handsome, expensive, color-picture-laden texts, whose purpose is apparently to entice students to read them. If students aren’t willing to shell out the money, the book likely isn’t being read. Focus instead on producing meaningful material written with clear, engaging prose.
Rethink when a new edition is really warranted and when not. In many instances, issuing a smaller update, to be used as a supplement to the existing text, is really all that’s needed. (Think of those encyclopedia annuals with which many of us are familiar.) Far more students will be willing to pay $9.95 for an update to an older book than $109.95 for a new one. McDonald’s learned long ago that you can turn a handsome profit through high volume on low-cost items. The publishing industry needs to do the math.
Make faculty members aware of the realities of both textbook prices (some professors never look before placing book orders) and student reading patterns. I heartily recommend hanging out in the student union (or equivalent) and eavesdropping. You will be amazed at how cunning -- and how honest -- students are about their study practices.
Encourage professors to assign readings (especially ones students are asked to pay for) that maximize long-term educational value.
Educate students about the difference between gaming the assignment system (either for grades or cost savings) and learning.
The result could be a win for both the publishing industry and higher education.
Naomi S. Baron is executive director of the Center for Teaching, Research, and Learning at American University and author of Words Onscreen: The Fate of Reading in a Digital World.
The University of Akron announced a few weeks ago that it would eliminate more than 200 jobs to deal with a budget deficit. As employees losing their jobs were notified this week, it has become clear that the university is eliminating its university press. All of the press's employees are among those being laid off, Northeast Ohio Media Group reported. "We have essentially been shut down," said Thomas Bacher, director of the press. "Another blow against culture by a short-sighted administration. It's sad that the university values beans over brains." The press is a small one with a focus on Ohio-related topics, and it also publishes some poetry.
The university layoffs also include all employees of Akron's multicultural center, although the university says that other offices will support the programming offered by the center.
So it turns out that -- title notwithstanding -- Beth Shapiro’s How to Clone a Mammoth: The Science of De-Extinction (Princeton University Press) is not a do-it-yourself manual. What’s more, cloned mammoths are, in the author’s considered opinion, impossible. Likewise, alas, with regard to the dodo.
But How Not to Clone a Dodo would never cut it in the marketplace. Besides, the de-extinction of either creature seems possible (and in the case of the mammoth, reasonably probable) in the not-too-distant future. The process involved won’t be cloning, per se, but rather one of a variety of forms of bioengineering that Shapiro -- an associate professor of ecology and evolutionary biology at the University of California at Santa Cruz -- explains in moderate detail, and in an amiable manner.
Her approach is to present a step-by-step guide to how an extinct creature could be restored to life given the current state of scientific knowledge and the available (or plausibly foreseeable) advances in technology. There are obstacles. Removing some of them is, by Shapiro’s account, a matter of time and of funding. Whether or not the power to de-exterminate a species is worth pursuing is a question with many parts: ethical and economic, of course, but also ecological. And it grows a little less hypothetical all the time. De-extinction is on the way. (The author allows that the whole topic is hard on the English language, but “resurrection” would probably cause more trouble than it’s worth.)
The subject tickles the public’s curiosity and stirs up powerful emotions. Shapiro says she has received her share of fan and hate mail over the years, including someone’s expressed wish that she be devoured by a flesh-eating mammal of her own making. Perhaps the calmest way into the discussion is by considering why reviving the mammoth or the dodo is possible, but would not be the same thing as cloning one. (And dinosaur cloning is also right out, just to make that part clear without further delay.)
To clone something, in short, requires genetic material from a living cell with an intact genome. “No such cell has ever been recovered from the remains of extinct species found in the frozen tundra,” writes Shapiro, whose research has involved the search for mammoth remains in Siberia. Flash freezing can preserve the gross anatomy of a mammoth for thousands of years, but nucleases -- the enzymes that fight off pathogens when a cell is alive -- begin breaking down DNA as soon as the cell dies.
What can be recovered, then, is paleogenetic material at some level of dismantling. The challenge is to reconstruct an approximation of the extinct creature’s original genome -- or rather, to integrate the fragments into larger fragments, since rebuilding the whole genetic structure through cut-and-paste efforts is too complex and uncertain a task. The reconstituted strings of genetic data can then be “inserted” at suitable places in the genome of a related creature from our own era. In the case of the woolly mammoth, that would mean genetic material from the Asian elephant; they parted ways on the evolutionary tree a mere 2.5 million years ago. In principle, at least, something similar could be done using DNA from the taxidermy-preserved dodo birds in various collections around the world, punched into the pigeon genome.
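The idea of integrating fragments into larger fragments can be caricatured with a greedy overlap-merge on strings. Real ancient-DNA work aligns reads against a reference genome (here, the Asian elephant's) rather than assembling blind, so the following is only a toy illustration of the overlap principle, with made-up sequences and a made-up minimum-overlap threshold:

```python
def overlap(a, b, min_len=3):
    """Length of the longest suffix of a matching a prefix of b (0 if < min_len)."""
    for k in range(min(len(a), len(b)), min_len - 1, -1):
        if a[-k:] == b[:k]:
            return k
    return 0

def greedy_assemble(fragments, min_len=3):
    """Repeatedly merge the pair of fragments with the largest overlap."""
    frags = list(fragments)
    while len(frags) > 1:
        best_k, best_i, best_j = 0, None, None
        for i, a in enumerate(frags):
            for j, b in enumerate(frags):
                if i != j:
                    k = overlap(a, b, min_len)
                    if k > best_k:
                        best_k, best_i, best_j = k, i, j
        if best_k == 0:
            break  # no sufficient overlaps remain; fragments stay separate
        merged = frags[best_i] + frags[best_j][best_k:]
        frags = [f for idx, f in enumerate(frags)
                 if idx not in (best_i, best_j)] + [merged]
    return frags

# Three overlapping reads of the (invented) sequence ATTAGACCTG:
print(greedy_assemble(["TTAGACC", "ATTAGA", "GACCTG"]))
```

Even in this toy, the book's warning is visible: lower the minimum overlap and the assembler happily merges fragments that only coincidentally match, which is the string-level analogue of a sensitive method producing spurious results.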
“Key to the success of genome editing,” writes Shapiro, “has been the discovery and development of different types of programmable molecular scissors. Programmability allows specificity, which means we can make the cuts we want to make where we want to make them, and we can avoid making cuts that kill the cell.”
Cells containing the retrofitted genome could then be used to spawn a “new” creature that reproduces aspects of the extinct one -- pending the solution of various technical obstacles. For that matter, scraping together enough raw material from millennia past presents its own problems: “In order to recover DNA from specimens that have very little preserved DNA in them, one needs a very sensitive and powerful method for recovering the DNA. But the more sensitive and powerful the method is, the more likely it is to produce spurious results.”
Also a factor is the problem of contamination, whether found in the sample (DNA from long-dead mold and bacteria) or brought into the lab in spite of all precautions. Shapiro leaves the reader aware of both the huge barriers to be overcome before some species is brought back from extinction and the strides being made in that direction. She predicts the successful laboratory creation of mammoth cells, if not of viable embryos, within the next few years.
It will be hailed as the cloning of an extinct animal -- headlines that Shapiro (whose experiences with the media do not sound especially happy) regards as wrong but inevitable. The reader comes to suspect one motive for writing the book was to encourage reporters to ask her informed questions when that news breaks, as opposed to trying to get her to speculate about the dangers of Tyrannosaurus rex 2.0.
Besides its explanations of the genetics and technology involved, How to Clone a Mammoth insists on the need to think about what de-extinction would mean for the environment. Returning the closest bioengineerable approximation of a long-lost species to the landscape it once inhabited will not necessarily mean a happy reunion. The niche that animal occupied in the ecosystem might no longer exist. Indeed, the ecosystem could have developed in ways that doom the creature to re-extinction.
Shapiro is dismissive of the idea that being able to revive a species would make us careless about biodiversity (or more careless, perhaps), and she comes close to suggesting that de-extinction techniques will be necessary for preserving existing species. But those things are by no means incompatible. The author herself admits that some species are more charismatic than others: we're more likely to see the passenger pigeon revived than, say, desert rats, even though the latter play an ecological role. The argument may prove harder to take for the humbler species once members of Congress decide to freeze-dry them for eventual relaunching, should that prove necessary.
By now we should know better than to underestimate the human potential for creating a technology that goes from great promise to self-inflicted disaster in under one generation. My guess is that it will take about that long for the horrible consequences of the neo-dodo pet ownership craze of the late 2020s to make themselves fully felt.
You don’t hear much about the United States being a “postracial society” these days, except when someone is dismissing bygone illusions of the late ’00s, or just being sarcastic. With the Obama era beginning to wind down (as of this week, the president has just under 18 months left in office), American life is well into its post-post-racial phase.
Two thoughts: (1) Maybe we should retire the prefix. All it really conveys is that succession does not necessarily mean progress. (2) It is easy to confuse an attitude of cold sobriety about the pace and direction of change with cynicism, but they are different things. For one, cynicism is much easier to come by. (Often it’s just laziness pretending to be sophisticated.) Lucid assessment, on the other hand, is hard work and not for the faint of spirit.
Naomi Zack’s White Privilege and Black Rights: The Injustice of U.S. Police Racial Profiling and Homicide (Rowman & Littlefield) is a case in point. It consists of three essays plus a preface and conclusion. Remarks by the author indicate it was prepared in the final weeks of last year, with the events in Ferguson, Mo., fresh in mind. But don’t let the title or the book’s relative brevity fool you. The author is a professor of philosophy at the University of Oregon -- and when she takes up terms such as “white privilege” or “black rights,” it is to scrutinize the concepts rather than to use them in slogans.
Despite its topicality, Zack’s book is less a commentary on recent events than part of her continuing effort to think, as a philosopher, about questions of race and justice that are long-standing, but also prone to flashing up, on occasion, with great urgency -- demanding a response, whether or not philosophers (or anyone else) are prepared to answer them.
Zack distinguishes between two ways of philosophizing about justice. One treats justice as an ideal that can be defined and reasoned about, even if no real society in human history ever “fully instantiates or realizes an ideal of justice for all members of that society.” Efforts to develop a theory of justice span the history of Western philosophy.
The other approach begins with injustice and seeks to understand and correct it. Of course, that implies that the philosopher already has some conception of what justice is -- which would seem to beg the question. But Zack contends that theories of justice also necessarily start out from pre-existing beliefs about what it is, which are then strengthened or revised as arguments unfold.
“However it may be done and whatever its subject,” Zack writes, “beginning with concrete injustice and ending with proposals for its correction is a very open-ended and indeterminate task. But it might be the main subject of justice about which people who focus on real life and history genuinely care.”
The philosopher Zack describes may not start out with a theory of what justice is. But that’s OK -- she can recognize justice, paradoxically enough, when it's gone missing.
I wish the author had clarified the approach in the book’s opening pages, rather than two-thirds of the way through, because it proves fundamental to almost everything else she says. She points out how police killings of young, unarmed African-American males over the past couple of years are often explained with references to “white privilege” and “the white supremacist system” -- examples of a sort of ad hoc philosophizing about racial injustice in the United States, but inadequate ones in Zack’s analysis.
Take the ability to walk around talking on the phone carrying a box of Skittles. It is not a “privilege” that white people enjoy, as should be obvious from the sheer absurdity of putting it that way. It is one of countless activities that a white person can pursue without even having to think about it. “That is,” Zack writes, “a ‘privilege’ whites are said to have is sometimes a right belonging to both whites and nonwhites that is violated when nonwhites are the ones who [exercise] it.”
In the words of an online comment the author quotes, “Not fearing the police will kill your child for no reason isn’t a privilege, it’s a right.” The distinction is more than semantic. What Zack calls “the discourse of white privilege” not only describes reality badly but fosters a kind of moral masochism, inducing “self-paralysis in the face of its stated goals of equality.” (She implies that white academics are particularly susceptible to "hold[ing] … progressive belief structures in intellectual parts of their life that are insulated from how they act politically and privately …")
Likewise, “the white supremacist power structure” is a way of describing and explaining oppression that is ultimately incapacitating: “After the civil rights movement, overt and deliberate discrimination in education, housing and employment were made illegal and explicitly racially discriminatory laws were prohibited.” While “de facto racial discrimination is highly prevalent in desirable forms of education, housing and employment,” it does no one any good to assume that “an officially approved ideology of white supremacy” remains embodied in the existing legal order.
None of which should be taken to imply that Zack denies the existence of deep, persisting and tenacious racial inequality, expressed and reinforced through routine practices of violence and humiliation by police seldom held accountable for their actions. But, she says, "What many critics may correctly perceive as societywide and historically deep antiblack racism in the United States does not have to be thoroughly corrected before the immediate issue of police killings of unarmed young black men can be addressed."
She is not a political strategist; her analyses of the bogus logic by which racial profiling and police killings are rationalized are interesting, but how to translate them into action is not exactly clear. In the end, though, justice and injustice are not problems for philosophers alone.