Review of Alasdair Roberts, 'Can Government Do Anything Right?'

In 1952, a graduate student in sociology from the University of Chicago named Erving Goffman published a paper in the journal Psychiatry that I reread every few years with deepening respect. It analyzes one seemingly marginal and highly specific kind of person-to-person interaction, then pulls back, like a movie camera, to take in more and more of the terrain of everyday life.

The title, “On Cooling the Mark Out: Some Aspects of Adaptation to Failure,” uses two pieces of criminal argot redolent of a David Mamet play. A “mark,” in the lexicon of con artists at midcentury, was the target of a swindle. See also: dupe, chump, patsy. No synonyms come to mind for “cooling out,” which is a very specific operation sometimes necessary after the mark has been relieved of money. In a smoothly run con, the mark will accept both the sure-thing, once-in-a-lifetime opportunity and, later, the explanation for why it failed. (The overseas company he invested in was shut down by corrupt officials who confiscated all the funds, for example.) “The mark is expected to go on his way,” as Goffman puts it, “a little wiser and a lot poorer.”

But, on occasion, the victim is left not just unhappy or angry but inclined to involve the police or other authorities. In that case, it becomes necessary to “cool the mark out,” which sounds like a very delicate kind of psychological intervention. For the mark has staked, and lost, not only his money but a good part of his self-image. The con flatters and emboldens the mark’s sense of being shrewd, canny -- able to spot an opportunity and to judge risks. But circumstances have proven otherwise, and left the mark feeling humiliated and vengeful. The cooler is adept at defining “the situation for the mark in a way that makes it easy for him to accept the inevitable and quietly go home,” says Goffman. He “exercises upon the mark the art of consolation.” Then on to the next con.

Goffman’s paper deftly shifts perspective to reveal the mark as someone in a basic human predicament, facing a bitter fact of social life that nobody, I suppose, escapes entirely. To wit: the experience of finding a gap between, on the one hand, what someone believes about themselves and expects others to acknowledge and, on the other, unambiguous evidence to the contrary. That happens in an endless variety of ways, from the trivial to the catastrophic; rather than give examples, let me recommend you consult the paper itself. Goffman’s point is that many of the jobs, rituals and routines necessary to keep institutions running and personal drama within reasonable limits amount to just so many variations on the theme of “cooling the mark out.”

Looking at it from the other end of the telescope, Goffman’s paper also implies that quite a lot of social life amounts to a series of bunco schemes. It is a perspective once associated with film noir and currently applicable to much of the breaking news from day to day.

But I’ve just come across an unexpected variant of it in Can Government Do Anything Right? (Polity) by Alasdair Roberts, who is director of the school of public policy at the University of Massachusetts Amherst. The overall drift of his answer to the titular question seems to be “Yes, within limits, although admittedly not many people seem to think so, because they either expect too much from it or spend all their time obsessing about the failures.”

There are occasional expressions of cautious optimism: European integration and intra-European peace can continue despite E.U. wobbles; “judged by the number of attacks or the number of deaths, the current wave of terrorism in North America and Western Europe is less severe than that of the 1970s”; the U.S. dedicates “a smaller share of national income to defense than at any point in the Cold War.” For the sake of balance, perhaps, the author also expresses cautious pessimism about economic inequality, climate change and geopolitical rivalries. The search is on for a “new paradigm” in public policy to overcome the excesses of neoliberalism, just as neoliberalism overcame the excesses of the welfare state. In the meantime, things may get unpleasant, but “pragmatism, empiricism and open-mindedness” can and should win in the long term.

Here, then, is centrism at its most anodyne. So it’s a bit startling when, near the end of the book, the author calls government “a sort of confidence trick.” What sort? He explains:

Confidence artists know two kinds of tricks: the “short con,” a quick deception that yields a small reward, and the “long con,” an elaborate ruse that involves many people and plays out over a long time. Government is a long con. The aim is to persuade people that the state is durable and its authority unassailable … If the confidence trick works, leaders benefit, because it discourages resistance to their authority. But the rest of us benefit too. If we do not believe that there is stability and predictability, we are reluctant to make plans and undertake new projects.

So it's an altruistic racket? A con for the marks’ own good? The analogy is offered in an almost bizarrely uncynical tone, and without the critical implications that show around the edges of Goffman's analysis. I'm not sure what to make of it. But one implication comes to mind: it may be a bad idea to put a short-term con artist in charge of a long-term con. For the latter, a cooler is sometimes required. Even the most trusted fixer just won't do.


Review of Robert Irwin’s ‘Ibn Khaldun: An Intellectual Biography’

The publisher seems not to have noticed, but Robert Irwin’s Ibn Khaldun: An Intellectual Biography (Princeton University Press) happens to appear on the 60th anniversary of the first complete English translation of Khaldun's masterpiece, The Muqaddimah, in three large volumes. An abridged edition of Khaldun’s treatise is available in paperback (also stout) from the same press.

Even in its condensed form, The Muqaddimah comes with a moniker that feels like a wry comment on the interminable demands of scholarship: the title is the Arabic word for “introduction,” since Khaldun (1332-1406) wrote it as a kind of methodological prologue to his much longer history of the world, composed during the heyday of medieval Islamic intellectual life. (Irwin is senior research associate at the School of Oriental and African Studies in London.)

Despite everything marking it as the product of a very different culture and era, The Muqaddimah is bound to impress the 21st-century reader, from the first page, as a work of integrative social science. Khaldun writes that the historians of earlier times are unreliable because they lacked “clear knowledge of the principles resulting from custom, the fundamental facts of politics, the nature of civilization or the conditions governing human social organization.” Having given these matters a great deal of thought as both a scholar and an experienced political operative, he was driven to work out a kind of sociological model of historical change, taking into account geography, law, economics and culture. The book is nothing if not encyclopedic, with its vast array of learning bound together by a core concept: asabiya, or “group feeling.”

Khaldun starts out from the intense bonds of social solidarity needed to keep human existence going from generation to generation in the face of everything nature has to throw at it. The laws, customs and skills of various communities -- their respective patterns of life and authority structures -- are both made possible by asabiya and serve to sustain it. For Khaldun, the purest and strongest form of asabiya develops among desert nomads. Communities that settle in fertile areas develop more prosperous economies and firmly established royal authority. Culture flourishes and grows more sophisticated -- and The Muqaddimah itself is very much a product of what its author calls a “sedentary civilization.” But over time, corruption sets in. The feeling of communal solidarity grows thinner and the people more sinful (which is more or less the same thing). “The toughness of desert life is lost,” Khaldun writes. “Members of the tribe revel in the well-being God has given them. Their children and offspring grow up too proud to look after themselves or to attend to their own needs … Eventually, group feeling is altogether destroyed.”

By then, the combination of prosperity and weakness invites conquest by nomads, which Khaldun regards as a good thing, on the whole. The nomads bring with them a firmer piety and a stricter moral sensibility; they provide a kind of asabiya transfusion. (Not by chance did the prophet emerge among people of the desert.) But then the process repeats. Khaldun sees history as cyclical; he takes what comfort he can from his discovery that it is at least intelligible.

Roughly half of the books about Khaldun available in English have appeared in just the past 20 years, including Bensalem Himmich’s novel The Polymath, which was the closest thing to a biography available until Allen James Fromherz, a professor of medieval Mediterranean and Middle East history at Georgia State University in Atlanta, published Ibn Khaldun: Life and Times (Edinburgh University Press) in 2011.

Khaldun did leave an account of his own life, or rather of his career: a listing of the eminent scholars he studied with and his various positions as judge, tax collector, diplomat and professor. Contemporaries describe him as irascible and arrogant -- though, as Irwin says, “he had a lot to be arrogant about.” Irwin’s biography (by my count, the second in English) shows that Khaldun’s life was not without incident. He served a two-year prison sentence at the hands of one employer and was driven into exile by another. The first draft of The Muqaddimah was composed while the author was lying low in a castle in Algeria. A few years later, he suffered the loss of his wife, daughters and library when the ship transporting them sank en route to joining him in Egypt.

Fromherz’s biography depicted Khaldun not only as belonging to the Sufi branch of Islam but also as achieving “awakening” about the course of history as part of Sufi mystical experience. Irwin is skeptical of the mystical awakening but allows that circumstantial evidence strongly suggests Khaldun was a Sufi, even if he never explicitly identified as one. This detail is important given the overall thrust of Irwin’s book, which is that Khaldun’s modern admirers haven’t understood him very well at all.

“Though Ibn Khaldun was almost certainly a Sufi,” he writes, “this was not apparent to 19th-century commentators who mostly preferred to think of him as a rationalist, a materialist and a positivist.” The French scholars who first introduced him to the European public aimed “to strip Ibn Khaldun of his superficially medieval Arab identity and reveal him to be in reality a modern Frenchman and one, moreover, who would have approved the French Empire in North Africa.”

By the 20th century, sociologists identified Khaldun as a pioneer in their field. Arnold Toynbee celebrated The Muqaddimah as anticipating his own analysis of the rise and fall of civilizations. Marxists found Khaldun’s attention to economics admirably Marxish. One of the earliest monographs in English was by a disciple of Leo Strauss, the proto-neoconservative philosopher; it reveals Khaldun to have been no mere sociologist but rather a participant in the secret dialogue among philosophers down the centuries, interpretable only by those possessing the Leo Strauss fan club secret decoder ring.

In short, interpreters of Khaldun have been prone to remake him in their own images. Irwin’s biography is meant as an antidote by stressing the various elements that just don’t fit when trying to modernize Khaldun. Seeing him as a philosopher of history or a sociological theorist avant la lettre tends to secularize his framework, as if the citations from the Quran were ornamental rather than essential. He did not identify himself as a faylasuf (philosopher); the word rendered as “philosophy” in the English translation of The Muqaddimah instead means “wisdom” or “what prevents one from ignorant behavior.” And the strain of moralism and pessimism suggested by Khaldun’s cyclical view of history is baked right in. “The entire world is trifling and futile,” he writes. “It leads to death and annihilation.”

That could be just a rhetorical flourish. But coming as it does from the pen of a man whose family drowned in the Mediterranean at the very time he was assuming a plum job in government service, maybe not.

Irwin’s biography may bend the stick too hard on some matters. The Muqaddimah still reads to me more like someone inventing sociology than it does the work of a pious man wringing his hands as the world goes to hell in a handbasket. But looking at the author from both angles may be necessary to see him in greater depth.


Michael Roth considers Steven Pinker's new book, 'Enlightenment Now' (opinion)

Steven Pinker has become chief cheerleader for modernity. In his 2011 book, The Better Angels of Our Nature, he marshaled mountains of evidence to show that violence, both private and public, has significantly declined over the last 200 years. While atrocities naturally continue to draw our attention, they are actually less prevalent than ever before. If we avoid the “availability bias” of sensational headlines and study the broad spectrum of relevant information, we can see that, as a species, we are moving away from violence.

In his new book, Enlightenment Now: The Case for Reason, Science, Humanism and Progress, Pinker expands his purview to include progress in everything from access to basic nourishment and health care to income and increased choices in how we spend our time. In every important area, Pinker sees robust improvement. The world is getting safer, more prosperous and less authoritarian. “Look at the data!” he cries again and again, and you will see that human beings have much to cheer about and much to look forward to. Evidence from surveys even suggests that we are happier -- although not nearly as happy as we should be, given the progress we’ve made.

Pinker himself is not happy with colleges and universities, especially humanities programs, which, he claims, tend to emphasize the tragic, the negative, even the apocalyptic. He takes particular aim at Nietzsche and the streams of critical theory that flow from his thinking. Nietzsche’s antimodern polemics against smug, middle-class complacency especially rankle the Harvard University professor who can’t seem to imagine why anyone wouldn’t be grateful for the greater access to food, shelter and leisure that modernity has created.

There is plenty to criticize in Pinker’s historical portrait of triumphant modernity. He ignores any part of the Enlightenment legacy that doesn’t fit into his neat, Popperian understanding of how scientific progress is made through disconfirming hypotheses. In describing progress in societies that behave more rationally, he says almost nothing about the social movements and struggles that forced those with power (and claims to rationality) to pay attention to political claims for justice. When science leads to bad things, like eugenics, he just dismisses the results as bad science. He criticizes those with whom he disagrees as being narrow-minded or tribalistic, but he seems to have no self-awareness of how his own thinking is plagued by parochialism. He writes that we have to cure “identity protective cognition,” but for him history is an effort to find figures like himself in the past so that he can write a story that culminates with people who have the same views as he does. “There can be no question of which was the greatest era for human culture; the answer has to be today.” Maybe he thinks that the gesture of expecting an even better future is an expression of intellectual modesty.

But as much as Pinker’s self-congratulation may annoy anyone concerned with (or just curious about) the ways the achievements of modernity have been built through oppression, exploitation and violence, it would be a mistake to ignore the extraordinary accomplishments that he documents in Enlightenment Now. Take the astonishing reductions in poverty around the world. Over the last century, the portion of people living in extreme poverty has been reduced from 90 percent to under 10 percent. The acceleration of this progress in the last half century has been truly remarkable, and we can see similar good news in regard to decreased child mortality and increased life expectancy (to pick just two of the subjects Pinker covers).

And Pinker is right that many of us in the humanities and interpretive social sciences are loath simply to celebrate such gains when discussing the legacies of the Enlightenment or embracing contemporary critical thinking. Why? Part of the reason is that the story of those achievements should not be divorced from an account of how social injustice has made them possible. Humanists don’t dismiss the importance of reductions in poverty, but neither do they simply want to describe slavery, colonialism and other forms of exploitation as the price one has (always?) to pay for progress.

A judicious history of the dramatic increase in the powers of science and rationality should include chapters on the massive increases in the destructive power now in human hands. Those chapters are missing from Pinker’s book, and that’s important because of the asymmetric risks now facing the planet. Pinker’s caricatures of doomsayers of the past predicting environmental or nuclear disaster can be amusing, but his cheerful account of an ever more peaceful and prosperous world reminds one of the optimists writing in 1914 just before the outbreak of World War I. They, too, were quite sure that in their century war was a thing of the past and that economic development would go on more or less steadily.

Yet as Daniel Callahan recently showed in The Five Horsemen of the Modern World, the risks of massive destruction and deep ecological dislocation today have been greatly magnified by nuclear weapons, global warming and profound challenges in regard to food and water. These risks are not reduced because we’ve already made progress in regard to poverty and life expectancy. Some of the same forces that helped create the positive changes have also led to enormous problems. And past performance is, as they say, no guarantee of future results.

Pinker does spend time on contemporary challenges, seeing them as technical problems to be solved through inquiry and experimentation. That seems reasonable enough. We’ve produced nuclear weapons that could destroy millions of lives -- we need mechanisms to make their use less probable. Economic development has put too much carbon in the atmosphere -- we need to develop tools to take the carbon out while creating jobs and enhancing prosperity.

This story of progress begetting more positive change rather than intractable problems is, of course, very much end point dependent. Pinker’s claims for enhanced freedom around the world today run into the obstacles of authoritarian rule in Russia and China. So, he says, Putin and Xi are not nearly as bad as Stalin and Mao. And when he started writing Enlightenment Now, Pinker could not have predicted President Trump. He acknowledges the threats that Trump and other antiscientific populists pose to his idea of continual progress, but he suggests that demographic trends will naturally shrink the base of know-nothing authoritarians. And if we all just emphasized how positive things are, populists claiming only they can save us wouldn’t have as much to work with: “By failing to take note of the gifts of modernity, social critics poison voters against responsible custodians and incremental reformers.” Cheerleading as activism.

The Enlightenment was never just one thing, and its most serious exponents often thought long and hard about the negative consequences of reducing all thinking to the narrowest forms of the science of their time. Humanists in colleges and universities today can extend the legacies of the Enlightenment neither by celebrating the virtues of science with unalloyed optimism nor by denigrating them with unadulterated nihilism. Instead, humanists today can acknowledge the gains of science and economic development while continuing to question both their premises and their unintended consequences.

Pinker writes that “none of us are as happy as we ought to be, given how amazing our world has become.” But we don’t need cheerleading psychologists telling us we should be happier than we are. We need teachers whose broad-based thinking builds hope and inspires positive change by critically challenging complacency. That’s still the best bet for what Kant recognized as the goal of Enlightenment: freedom from self-imposed immaturity.

Michael S. Roth is president of Wesleyan University and author, most recently, of Beyond the University: Why Liberal Education Matters.


Review of Barbara Ehrenreich, ‘Natural Causes: An Epidemic of Wellness, the Certainty of Dying, and Killing Ourselves to Live Longer’

In the event that other developments on the world stage haven’t left you in a state of quivering anxiety, the Centers for Disease Control and Prevention put out a report this week that ought to do the trick. The CDC statement bears the title “Containment of Novel Multidrug-Resistant Organisms and Resistance Mechanisms -- United States, 2006-2017,” which soon translated into headlines such as “‘Nightmare Bacteria’ Are Trying to Spread in the U.S., CDC Says.”

We’ve been hearing about the rise of antibiotic-resistant (AR) strains of diseases for decades. They now kill more than 23,000 Americans per year. More troubling than that is the discovery -- from a new program of screenings -- that 11 percent of health-care workers in contact with patients with AR infections were “colonized” by the strain in question. That is, they were carrying around the highly communicable and hard-to-treat germs without themselves exhibiting any symptoms. And AR-enabling genetic material can be transferred from one disease to another; diseases previously treatable with antibiotics can mutate into tougher forms.

Summing up its findings for the general public, the CDC underscores the efficacy of early detection and containment of AR pathogens. But almost two years’ warning that it was coming seems not to have slowed down a bacterium reported in England last month under the horrifying moniker “super-gonorrhea.”

Blind confidence in the powers of early detection and preventative medicine is the main target of Barbara Ehrenreich’s Natural Causes: An Epidemic of Wellness, the Certainty of Dying, and Killing Ourselves to Live Longer (Twelve). In some ways, it is a book-length sequel to “Welcome to Cancerland,” her unforgettable essay from 2001. There, in recounting her diagnosis and treatment for breast cancer, Ehrenreich’s instincts as a muckraker kept her in a standoff with what she called “cancer culture”: while the medical protocol left her depleted and nauseous, the New Age-tinged demands for positive thinking felt insipid and infantilizing.

“What sustained me through the ‘treatments,’” she wrote, “is a purifying rage, a resolve, framed in the sleepless nights of chemotherapy, to see the last polluter, along with, say, the last smug health-insurance operative, strangled with the last pink ribbon … I will not go into that last good night with a teddy bear tucked under my arm.”

At that point, Ehrenreich was 60 years old. Natural Causes is the work of a writer who can see her 80s approaching at not too great a distance, and who has come to certain conclusions. She is, she says, “old enough to die,” although in no great hurry to do so, and considers it “an achievement, not a defeat, and the freedom it brings is worth celebrating.” She declines “to incur any more suffering, annoyance or boredom in the pursuit of a longer life,” she says. “I eat well, meaning I choose foods that taste good and that will stave off hunger for as long as possible, like protein, fiber and fats. I exercise -- not because it will make me live longer but because it feels good when I do. As for medical care: I will seek help for an urgent problem, but I am no longer interested in looking for problems that remain undetectable to me.”

Such a decision has not been possible, let alone necessary, for very many people throughout most of history. Life expectancy in the United States over the past 150 years has nearly doubled, and it continues to increase. The all-too-American default attitude holds that more is better, pretty much by definition, but extending the length of life without regard to its quality sounds like a recipe for hell on earth. Ehrenreich's concern is that medicine has grown both so powerful and so profitable that many procedures have become practically automatic and unquestionable. In particular, patients have grown accustomed to undergoing "tests and examinations that, in sufficient quantity, are bound to detect something wrong or at least worthy of follow-up" -- procedures whose reliability is sometimes dubious, with serious consequences from false positives.

An example is the extreme rate of overdiagnosis for thyroid cancer: 70 to 80 percent of the surgery for it performed on women in the U.S., France and Italy in the '00s is now determined to have been unnecessary, leaving patients with a lifelong dependence on thyroid hormones. Just as excessive reliance on antibiotics gave rise to "nightmare germs" that those drugs can no longer treat, the effort to steal a march on every medical vulnerability as you age can boomerang.

But overtesting is, by Ehrenreich's account, ultimately more a symptom than the real problem. Moving from social criticism to scientific popularization and contemplative digressions on how we situate ourselves in the natural order, she finds that both medical science and the self-help culture are prone to exaggerating the possibilities for human control over life, health and long-term success in the pursuit of happiness. She puts it this way:

Our predecessors proceeded from an assumption of human helplessness in the face of a judgmental and all-powerful God who could swoop down and kill tens of thousands at will, while today’s assumption is one of almost unlimited human power. We can, or think we can, understand the causes of disease in cellular and chemical terms, so we should be able to avoid it by following the rules laid down by medical science: avoiding tobacco, exercising, undergoing routine medical screening and eating only foods currently considered healthy. Anyone who fails to do so is inviting an early death. Or to put it another way, every death can now be understood as suicide.

Which is ridiculous, of course -- and that is the point. In a section of the book that reads as if the author is planting dynamite under every "holistic" institution ever to promote wellness, she challenges the popular understanding of the body as "a smooth-running machine in which each part obediently performs its tasks for the benefit of the common good." There is evidence to the contrary in the behavior of the immune system, and we might do better to picture a norm of "conflict within the body … carried on by the body’s own cells as they compete for space and food and oxygen. We may influence the outcome of these conflicts -- through our personal habits and perhaps eventually through medical technologies that will persuade immune cells to act in more responsible ways -- but we cannot control it."

Control is a short-term proposition, at best, while our long-range chances were best put by whoever designed the T-shirts that read "Exercise. Eat Right. Die Anyway."



Review of 'Westworld and Philosophy'

Like the reunion tours by long-disbanded rock groups whose members dislike each other and the endless parade of remade hit movies from decades past, the vogue for reimagining old science-fiction narratives seems like evidence of an entertainment industry prone to expecting nostalgia to do the work of creativity. Look what’s become of poor Gene Roddenberry’s legacy, with the Star Trek universe long since reduced to the condition of an oil field pumped dry by ill-conceived movies and spin-offs. There’s nothing left for the studio to do except reimagine the original series itself: “To boldly go … where we’ve been in syndication all this time.”

So, with all that now out of my system, I’m ready to tell you about Westworld and Philosophy (Wiley-Blackwell), a collection of articles edited by James B. South and Kimberly S. Engels, to be published shortly before the second season of the HBO series begins in late April. I knew the premise from seeing the original Westworld in the theaters in the early 1970s: the idea of an Old West-themed vacation resort with robot cowboys that went haywire and started killing guests was memorable enough, plus it had a brief scene involving a partially nude robot saloon girl, which filled my young nerd mind with wonder at the possibilities. But taking the premise out for another spin all these decades later did not sound promising.

Nor did the idea of a cheesy Star Wars knockoff from the late 1970s called Battlestar Galactica when the series was revived starting in 2003. As it turned out, I was wrong -- totally and instructively wrong. The rebooted Battlestar Galactica, with its monotheistic androids conducting a holy war against the polytheistic humans who had created them, caught the prevailing tone of American public life after Sept. 11, without simply giving in to the traumatic affect and the Manichaean thinking of that era. The reimagined version was more inspired than the original had ever been.

Likewise with Westworld, albeit in a more anticipatory mode. Here the androids embody (to use a highly pertinent verb) the potential for bioengineering and artificial intelligence to merge in ways that feel uncomfortably plausible, if not quite imminent enough to keep anyone awake nights just yet. Anthologies in the pop-culture-and-philosophy genre are always a very mixed bag, and very often the potential for extracting dollars from a fan base seems to account for a book's existence more than any reward to be had from thinking about the material.

But in the case of Westworld, it's fairer to say that the series keeps questions about mind, personal identity and ethics in play at all times. The "hosts" (androids) populating the vacation resort are programmed with artificial memories, giving them realistic personalities as they interact with "guests" in the course of various Western "narratives" (romance, raising a posse, staking a mining claim, etc.), also programmed by Westworld's corporate management. Artificial intelligence enables the hosts to respond with appropriate emotions and actions as the narrative unfolds in real time. Indeed, they are so perfectly lifelike as to be effectively indistinguishable from the dude-ranch clientele, who are free to behave toward the hosts in any way they want, without the risk of the consequences that they'd experience in the outside world. After a narrative has run its course, the hosts' short-term memory caches are cleared so they can be deployed again with another bunch of guests/customers. (The imaginary engineering in the series is much more sophisticated than in the original movie, in which it sufficed to show Yul Brynner's face being removed to reveal all the transistors.)

Without any spoilers, it's possible to say that certain glitches and hacks affect some hosts as they start to process unerased data and "remember" how past narratives have played out. I use quotation marks there because the hosts, after all, are extremely complex cybernetic units. But then, what are the guests, or the viewers, for that matter? If a machine were able to take in the same stimuli as a human nervous system, process them in ways modeled on the cognitive capabilities of the human brain and respond with language and behavior as complex and variable as a fully socialized adult's, how meaningful would the distinction be?

Philosophers and screenwriters follow the conundrum in different directions, of course, but Westworld displays a surprising awareness of where the philosophical discussion has already gone. The viewer may feel compelled to humanize the hosts -- to imagine the androids as somehow, at some level of complexity, generating an interiority, a sense of self. But onscreen, we find Robert Ford -- the creator of the technology, played by Anthony Hopkins -- considering that idea from the other end:

"There is no threshold that makes us greater than the sum of our parts, no inflection point at which we become fully alive. We can't define consciousness because consciousness does not exist. Humans fancy that there is something special about the way we perceive the world, and yet we live in loops as tight and as closed as the hosts do, seldom questioning our choices, content, for the most part, to be told what to do next."

This stance has a perfectly legitimate pedigree, as Michael Versteeg and Adam Barkman point out in their chapter in Westworld and Philosophy, "Does the Piano Play Itself? Consciousness and the Eliminativism of Robert Ford." It derives from the philosophers Paul Churchland and Daniel Dennett, who overcome the old problem of finding a bridge between mind and matter by advancing a thoroughgoing materialism. "Consciousness" is, in effect, a fancy word for what certain really complex nervous systems (ours) do in the course of running themselves.

Which is one way of kicking the old Cartesian can further down the road. Westworld doesn't endorse eliminativism but rather imagines a world in which it is a very consequential idea for the lives of people not involved in the philosophical profession. It also permits and encourages a number of ethical thought experiments: guests are able to commit acts of violence and murder without breaking any law and -- what seems more troubling -- are free to do so without any obligation to think about the suffering of the androids, however realistic it may be. (All the recreational mayhem of today's video games, but in person!) A chapter by one of the book's editors, Engels, draws on Sartre to analyze the effect on one character of choosing to enjoy the simulated death and horror. Another chapter considers the growing awareness and resistance of the hosts in terms of Frantz Fanon's writings on the violent struggle of colonized people against dehumanization.

Many of the chapters were kindly provided to me by the editors for an early look, and the concepts and questions they explore were clearly things already on the minds of Westworld's writers. The contributors fleshed out the background and the logic of an imagined world in which, as with a car's rearview mirror, objects may be closer than they appear.


Overview of forthcoming university press books on social media, privacy and technology

Things have been rocky of late in the public’s love affair with our ever more sophisticated gadgets. The troubles have been building up for a while: cyberbullying, revenge porn, Twitter-bot mayhem … Last month, Amazon’s Alexa started randomly “laughing at users and creeping them out,” while Facebook’s vast and mostly unaccountable power would have made a #DeleteFacebook campaign inevitable even without reports of a massive data breach. And the week started with the death of a pedestrian hit by a self-driving car -- an eventuality that no doubt crossed most people’s minds immediately upon hearing the words “self-driving car” for the first time.

The relationship isn’t over -- even if, from the human side, it often seems more like a case of Stockholm syndrome than a romance. Several new and forthcoming titles from university presses take up the interconnected subjects of social media, privacy and technological change. Here’s a brief survey; quoted material is taken from publishers’ catalogs and websites.

Originally published in Germany, Roberto Simanowski’s Facebook Society: Losing Ourselves in Sharing Ourselves (Columbia University Press, July) maintains that social media “remake the self in their [own] image” by conditioning users to experience their own lives as raw material for “episodic autobiographies whose real author is the algorithm lurking behind the interface.” Appearing in English two years and billions of likes later, it will presumably find readers with an even more attenuated “cultural memory and collective identity in an emergent digital nation.” Lee Humphreys’s The Qualified Self: Social Media and the Accounting of Everyday Life (MIT Press, March) offers an at least implicit dissent by arguing that “predigital precursors of today’s digital and mobile platforms for posting text and images” (e.g., diaries, pocket notebooks, photo albums) have allowed people “to catalog and share their lives for several centuries.” Hence, our “ability to take selfies has not turned us into needy narcissists; it’s part of a longer story about how people account for everyday life.” Perhaps, though, the two options aren’t mutually exclusive -- as the author herself seems to imply.

The disinhibiting effects of online communication are well established. Moderation often seems to be exercised after the fact, when conducted at all. (A death threat is taken down eventually; lesser forms of harassment may enjoy the benefit of the doubt.) But according to Tarleton Gillespie’s Custodians of the Internet: Platforms, Content Moderation, and the Hidden Decisions That Shape Social Media (Yale University Press, June), that is changing with the rise of a powerful but normally inconspicuous layer of digital operatives. Content moderators -- those “who censor or promote user-posted content” on social-media platforms -- have tools “to curb trolling, ban hate speech, and censor pornography” that can also be used to “silence the speech you need to hear.” Their role “receives little public scrutiny even as it shapes social norms,” with “consequences for public discourse, cultural production, and social interaction.”

And it’s easy to imagine content moderation becoming a much faster and more discriminating process when combined with the disruptive technology discussed in Terry Sejnowski's The Deep Learning Revolution (MIT, May). The author, “one of a small group of researchers in the 1980s who challenged the prevailing logic-and-symbol based version” of artificial intelligence, helped develop “deep learning networks” capable not just of extracting and processing information but of “gradually acquiring the skills needed to navigate novel environments” -- as exhibited by, for example, driverless cars. Which is a touchy subject just now, but give it time: Sejnowski predicts, among other things, that “a personal cognitive assistant will augment your puny human brain.” By that point, I fear, the driverless cars will start running us over on purpose.

Meredith Broussard makes the case against “technochauvinism” -- defined as “the belief that technology is always the solution” -- in Artificial Unintelligence: How Computers Misunderstand the World (MIT, April). With a series of case studies, the author “uses artificial intelligence to investigate why students can’t pass standardized tests; deploys machine learning to predict which passengers survived the Titanic disaster; and attempts to repair the U.S. campaign finance system by building AI software.” Clearly no Luddite, she stresses the need to recognize both the power and the limits of our technology, however smart and responsive it may become.

Our devices possess no sense of privacy. On the contrary, “popular digital tools are designed to expose people and manipulate users into disclosing personal information,” as Woodrow Hartzog charges in Privacy's Blueprint: The Battle to Control the Design of New Technologies (Harvard University Press, April). It’s time for “a new kind of privacy law responsive to the way people actually perceive and use digital technologies” -- and new regulations to “prohibit malicious interfaces that deceive users and leave them vulnerable” and “require safeguards against abuses of biometric surveillance,” among other things.

Two other books, also from Harvard, trace the historical vicissitudes of privacy. Sarah E. Igo’s The Known Citizen: A History of Privacy in Modern America recounts how, between the 19th and 21st centuries, “popular journalism and communication technologies, welfare bureaucracies and police tactics, market research and workplace testing, scientific inquiry and computer data banks, tell-all memoirs and social media all propelled privacy to the foreground of U.S. culture.” But establishing laws in defense of privacy -- defending the individual from “wrongful publicity” -- also yielded the unexpected consequence Jennifer E. Rothman analyzes in The Right of Publicity: Privacy Reimagined for a Public World: “Beginning in the 1950s, the right transformed into a fully transferable intellectual property right, generating a host of legal disputes …” It “transformed people into intellectual property, leading to a bizarre world in which you can lose ownership of your own identity.” (Both Igo’s and Rothman’s volumes are due out in May.)

While social media foster the tendency for individuals to think of their own personalities as brands, the trend in the business world has run in the other direction: well-established brands are just as susceptible to a sudden reversal of reputation from a few hostile tweets as any junior-high student or member of the White House staff. “With citizens acting as 24-7 auditors of corporate behavior, one formerly trusted company after another has had their business disrupted with astonishing velocity,” according to James Rubin and Barie Carmichael’s Reset: Business and Society in the New Social Landscape (Columbia University Press, January). Offered as “a strategic road map for businesses to navigate the new era, rebuild trust, and find their voice” by “proactively mitigating the negative social impacts inherent in their business models, strategies, and operations,” Reset will be of interest and use to corporate executives until such time as they are replaced by our AI overlords.

And with that in mind, two books with rather cataclysmic titles bear notice. Small Wars, Big Data: The Information Revolution in Modern Conflict by Eli Berman, Joseph H. Felter and Jacob N. Shapiro with Vestal McIntyre (Princeton University Press, June) argues that “an information-centric understanding of insurgencies,” benefiting from the accumulation of “vast data, rich qualitative evidence, and modern methods,” is superior to conventional military methods. In a more figural vein, Justin Joque’s Deconstruction Machines: Writing in the Age of Cyberwar (University of Minnesota Press, February) presents “a detailed investigation of what happens at the crisis points when cybersecurity systems break down and reveal their internal contradictions,” with cyberattacks “seen as a militarized form of deconstruction in which computer programs are systems that operate within the broader world of texts.” That sounds abstract, but it could just be that our commonplace notions of warfare are out of date.


