American politics in the age of Donald Trump may yet make armchair psychopathologists of us all. The stream-of-consciousness quality of the candidate’s speeches now becomes a factor in governance. In the wee hours, while most of us sleep, the president-elect tweets. Stephen Dedalus’s description of history as “a nightmare from which I am trying to awake” feels less literary by the hour.
And so the public is compelled to play analyst: armed with diagnostic checklists and extensive Wikipedian training, we try to categorize his personality (as narcissistic, borderline, histrionic, etc.) in hopes that an adequate label might provide some hint of what to expect over the next four years. It won’t, of course, although the odds that a State of the Union speech will address the president’s penis size have increased considerably.
On a more substantial matter, it’s obvious that Trump’s affinity for the conspiratorial mind-set goes beyond a mutual appreciation of talk-show host Alex Jones. It forms the bedrock of Trump’s very existence as a political figure. His aspiration to something greater than mere celebritydom began in earnest only when Trump made himself a major player in the pseudo-controversy over President Obama’s birth certificate. (That racist melodrama assumed, even if it did not always emphasize, the existence of shadowy forces conspiring to put a Kenyan Muslim into office for their own un-American reasons.)
The penchant of Trump and some prominent figures in his entourage to resort to conspiratorial tropes seems like yet more evidence for the perennial value of Richard Hofstadter’s “The Paranoid Style in American Politics” (1964). And while I find Michael Paul Rogin’s critique of Hofstadter persuasive, there is no denying the essay’s almost irresistible quotability. Some passages sound as if the historian were making a summary of the themes appealing to the president-elect’s base:
“America has been largely taken away from them and their kind, though they are determined to try to repossess it and to prevent the final destructive act of subversion. The old American virtues have already been eaten away by cosmopolitans and intellectuals; the old competitive capitalism has been gradually undermined by socialistic and communistic schemers; the old national security and independence have been destroyed by treasonous plots, having as their most powerful agents not merely outsiders and foreigners but major statesmen who are at the very centers of American power.”
Add complaints about political correctness for seasoning, and the reader might easily take this passage for a report on the 2016 election.
And that is, in a way, Hofstadter’s point. He identifies the paranoid style as a recurrent if not permanent strain in American political thought and rhetoric -- but also as weaker, and less effective, over the long run, than its durability might imply. It appeals to established but aggrieved groups imagining themselves to be “the real America” under threat from change. Then the immigrants and upstarts become established, new demagogues emerge to exploit their discontent, and the whole thing starts again.
Rob Brotherton’s book Suspicious Minds: Why We Believe Conspiracy Theories (Bloomsbury Sigma), originally published in late 2015, now appears in paperback as the Inauguration Day bleachers go up near the White House. While not a commentary on the Trump ascendancy, its timing may skew the reader’s attention in that direction even so.
The author, a psychologist and science writer, is more concerned than Hofstadter with the particular cognitive processes involved in the conspiratorial mentality. Rather than pointing to a paranoid mood that ebbs and flows with political currents, Brotherton treats conspiracy theories as part of a continuum of patterns of thought and behavior that are extremely common and not, for the most part, paranoid.
Much of it comes down to pattern recognition (the brain’s incessant but not always reliable drive to find order) combined with a tendency to overestimate the validity or completeness of the available information. Brotherton writes, “When we’re uninformed -- and we’re all ignorant about a lot of things -- our brain indiscriminately uses whatever is at hand to plaster over the intellectual blind spot.” The author adduces a number of lab experiments showing this, including research that suggests cognitive strain tends to heighten the capacity to imagine structure where none exists.
“By painting conspiracism as some bizarre psychological tic that blights the minds of a handful of paranoid kooks,” he writes, “we smugly absolve ourselves of the faulty thinking we see so readily in others. But we’re doing the same thing as conspiracists who blame all of society’s ills on some small shadowy cabal. And we’re wrong. Conspiracy thinking is ubiquitous, because it’s a product, in part, of how all of our minds are working all the time.”
This is persuasive, up to a point. But somewhere far beyond that point are whole milieus of people whose pattern-recognition software got stuck in the conspiratorial program and can’t be reset. There’s David Icke, for one, an internationally famous author who believes that most political, social and cultural changes of recent decades are the work of shape-shifting interdimensional reptile people. (Icke makes Alex Jones sound like Walter Cronkite.)
Between Hofstadter’s cyclical rise and fall of paranoid politics and Brotherton’s rather genial vision of everyone being conspiracy-minded at one time or another, it’s almost possible to imagine the next few years as something other than cataclysmic. But I’m not entirely persuaded. Suppose this is just the beginning. After all, we still have no idea where the incoming administration stands on shape-shifting interdimensional reptile people. The president-elect hasn’t even uttered the words “shape-shifting interdimensional reptile people.” What is he trying to hide?
On first reading the title of Timothy Recuber’s Consuming Catastrophe: Mass Culture in America’s Decade of Disaster (Temple University Press), I guessed that it would be about the 1970s -- that is to say, the era of my childhood, when movies like Earthquake, The Poseidon Adventure, The Towering Inferno and The Hindenburg were the talk of the playground. Besides the disaster movies (which were, for a few years, a genre unto themselves), there were best-selling books and TV fare of a similar ilk.
It was all pretty formulaic -- even ritualistic. The strains of numerous crises in public life (Watergate, the oil embargo and inflation, plus aftershocks from the 1960s) were translated into the language of blockbuster melodrama. The spectacular disaster on the screen or the page enacted a kind of miniature social implosion, its destructive force revealing the inner strengths or vices of the characters who had to face it. Various embodiments of evil or dumb authority would perish. Survivors of the disaster would reunite with their families or reconnect with their values.
The genre’s chief weakness was that the supply of viable disaster scenarios was not unlimited. The point of exhaustion came, as I recall, with a made-for-TV movie-of-the-week involving a swarm of killer bees. In retrospect, the whole period looks like one big anxiety disorder. Ronald Reagan never appeared in a disaster movie, but his election in 1980 probably owed something to the genre insofar as the public could imagine him guiding it to safety through all the debris.
In Consuming Catastrophe, Recuber, a visiting assistant professor of communication at Hamilton College, has another period and variety of spectacle in mind: the real-world disasters of the first decade of this century (Sept. 11, Hurricane Katrina, the Virginia Tech shootings, the BP offshore oil spill, the near collapse of the financial system in 2008), rather than symptomatic fictions churned out as entertainment.
The contrast is also one of levels of immediacy and saturation of the public attention. Very few news stories of 40 years ago unfolded with the intensity and duration of real-time coverage that has become the norm -- even when the occasion is something considerably less wrenching than a disaster. This tends to create a public sense of somehow participating in an event, rather than just being informed about it. The potentials and limits of that participation are the focus of much of Recuber’s interest.
The widest frame of his perspective takes in German sociologist and philosopher Jürgen Habermas’s argument that newspapers and magazines were foundational elements of the public sphere of information and reasoned debate that could challenge policies and opinions that derived their force only from established authority or the inertia of tradition. Besides the political and economic issues normally associated with Habermas’s understanding of the public sphere, Recuber notes that “disasters, crises, misfortunes and the suffering of distant others were central topics of discussion there, although [its] literate publics frequently disagreed about the moral and ethical acceptability of such macabre subjects.” The classic instance would be the Lisbon earthquake of 1755 (see this column from 2005, on the disaster’s sestercentennial).
Recuber quotes Adam Smith on what is involved in a sympathetic response to others’ misfortune: “The compassion of the spectator must arise altogether from the consideration of what he himself would feel if he was reduced to the same unhappy situation, and, what perhaps is impossible, was at the same time able to regard it with his present reason and judgment.” This seems carefully balanced -- a synthesis of much public-sphere argument, no doubt. But it is also demanding. It implies some obligation to find an effective means to alleviate the suffering as well as to determine if any part of it was preventable. Sympathy, to use the preferred 18th-century term, was not just a personal emotional response but also a communal force. It held society together and could, if strengthened, improve it.
Fast-forward two centuries and a few decades, and we find the contradictory and perverse situation that Recuber describes in a series of case studies. Means of communication exist that can expand our powers of sympathy and our capacity to intervene to reduce suffering -- and they do sometimes, but in problematic ways. It’s not just that the intensity and pervasiveness of media coverage of disasters can induce what’s become known as “compassion fatigue.” That is certainly a factor, but Recuber emphasizes the more subtle and insidious role of what he calls “the empathic gaze.”
Where sympathy means an awareness of another’s unhappiness as something that can and should be alleviated, empathy, in the author’s usage, “refers to an intersubjective understanding of the other’s plight devoid of the obligation to intervene.” It is a relationship to the other’s suffering that is of a “more passive, vicarious character.” The capacity for empathy is much praised in the contemporary literature of self-help and personnel management. Certainly it’s preferable to the psychopathic indifference that, of late, increasingly seems like the other main option on offer. But in Recuber’s estimation it rests content with having reached a secure but passive position vis-à-vis suffering, if not a rather morbidly sensationalistic variety of pity.
My impression is that Recuber, far from chastising us as a generation of moral ghouls feasting on disaster, actually regards sympathy as our original or default mode of moral perspective (rather as some 18th-century thinkers did). His case studies of disasters from 2001 to 2010 are, in effect, accounts of sympathy being frustrated, exploited or otherwise short-circuited in diverse ways by the channels into which the media directs it.
One example stands out in particular and will stick in my memory. It concerns the April 2007 massacre at Virginia Tech, which left 32 dead, followed by the suicide of the gunman, Seung-Hui Cho. Cho sent NBC Nightly News a multimedia package explaining himself, portions of which were shown on the program two days after the shootings. “We are sensitive to how all of this will be seen by those affected,” the news anchor said, “and we know that we are in effect airing the words of a murderer here tonight.”
No one could accuse him of lacking empathy, anyway; empathy can discharge its responsibilities simply by announcing itself. “The statement was an oddly unbalanced one,” Recuber comments, “… seemingly missing a second half that explained what the benefits of broadcasting the manifesto were and why they outweighed the concerns of ‘those affected.’ Such a statement never came.”
But of course not! It’s not as if being “sensitive to how all of this will be seen by those affected” compelled the network to spare them anything. Those of us watching disaster movies in the 1970s were on higher moral ground: the entertainment was brainless but at least it involved no disregard for real suffering.
“We think about this as a studio apartment,” says one member of a couple living in a single-room occupancy. In better days the unit was a motel room. To see it as a studio apartment is a triumph of imagination and will, as the speaker is fully aware. “We have to,” she continues, “’cause if we continue to realize where we’re at in life, we would spiral into a massive depression. And the housekeeping we do would not get done …. I call it home and I cry on the thought of losing it ’cause this is all I have.”
Christopher P. Dum, an assistant professor of sociology at Kent State University, is careful to protect the identities of the ethnographic subjects he spoke to in researching Exiled in America: Life on the Margins in a Residential Hotel (Columbia University Press). In exchange he has been given access to some extraordinarily precarious and fragile domestic spaces -- dwellings that will seem to most readers just slightly more stable than a homeless shelter or living out of a car. But that is, by Dum’s reckoning, a blinkered view. The squalor is real and inescapable; what’s harder to see from a distance is the residents’ effort to find, or create, some kind of order in seriously damaged lives.
The author lived at the hotel he calls the Boardwalk for a year as fieldwork for his dissertation, gradually overcoming the residents’ (understandable) suspicion that he worked for law enforcement and intriguing some with the prospect of having their stories told. The hotel originally drew Dum’s attention while he was investigating the difficulties of registered sex offenders in finding housing. (In the public mind, “registered sex offender” has come to mean pedophile, although the label applies equally to those convicted of exhibitionism, soliciting prostitutes or even, believe it or not, public urination.)
Also living at the Boardwalk during the author’s stay were recently released ex-prisoners and people with a range of mental-health issues, physical disabilities or substance-abuse problems, often in combination. It sounds like a population guaranteed to create even greater chaos than the sum of its dysfunctions. What Dum found instead is an emergent and fragile community of what he characterizes as “social refugees … impelled to relocate within their own country of citizenship because of the influence of social context and/or social policy.”
From studies of migration he adopts the notion of push and pull factors to discuss the two strong forces shaping life at the hotel. One is the overwhelming power of stigma: the area’s largely middle-class public “viewed motel residents as belonging to one or several devalued groups,” with the sex offenders among them being especially contaminating and marginalizing. Driving past the Boardwalk and yelling that its inhabitants were child molesters seems to have been a local pastime.
Membership in stigmatized groups pushed residents away from mainstream society and toward the Boardwalk, which in turn pulled them to its “sustaining habitat” -- a term from urban sociology akin to the real-estate mantra “location, location, location.” Boardwalk residents had ready access to bus stops, cheap food, a Laundromat and other “opportunit[ies] to engage in the same type of consumer relations that characterized the lives of their [better-off] detractors.” There also seems to have been some comfort in knowing that their landlord owned another nearby property called Park Place. (I’d guess that the real names of these motels were just as overblown as their Monopoly stand-ins.) Park Place was cheaper, in more serious disrepair and had a reputation for violence among the tenants. “They start drinking Milwaukee’s Best in the morning,” one Boardwalker explains, “and that makes them get wily.”
However uninviting the Boardwalk might look in the author’s photographs, it’s hardly the worst place life could leave you. Some tenants Dum interviewed managed to establish a degree of stability that included employment (one aspect of a sustaining habitat) as well as a certain amount of interior decoration. They felt a responsibility to care for other residents, particularly those with severe mental disorders. An informal but exacting code of etiquette governed the sharing of cigarettes, food and intoxicants. A certain amount of sexual jealousy and trash talking was inevitable, as was the occasional round of threats or punches, but Dum indicates that he heard of very little theft or predation. “It’s like any other community,” one resident told him, “it’s just people trying to get along.”
At the same time, even success in carving out livable circumstances could leave residents feeling trapped. Treating one’s room as a studio apartment entailed more than psychological strain. Rent “did not guarantee heat, air-conditioning, a fridge, kitchen or even drinkable water,” Dum writes. “Offsetting these conditions exacted so much material cost to buy fans, space heaters, refrigerators, microwaves and bottled water that once residents settled at the hotel they found it very hard to leave. They struggled to make monthly, even weekly, rent payments, and because of this, putting down a security deposit and first month’s rent for an apartment was nearly impossible.”
The author’s fieldwork turned out to coincide with the final phase of the motel’s existence. After years of code violations -- largely unnoticed by city inspectors for the simple reason that they often didn’t even show up -- the local government forced the closing of Boardwalk and Park Place. Exiled in America is on the whole an exemplary piece of social reportage and analysis, but while reading it I often wondered if calling his interview subjects “refugees” might not be pushing it. As it turns out, most of them found out they were being evicted a few hours before the deadline. So, yes, refugees.
Some weeks back, a publishing house in Spain announced that it would be issuing a deluxe facsimile edition of the enigmatic and sui generis volume best known as the Voynich manuscript, in a print run limited to 898 copies, selling at 7,000-8,000 euros each. That’s the equivalent of $7,400 to $8,400 -- a price tag guaranteed to separate the true bibliomaniac from the common run of book collectors.
But then Beinecke 408 (as the volume is also known, from its catalog reference in Yale University’s rare books collection) tends to cast a spell over those who contemplate it for very long. Running to about 200 parchment pages -- or closer to 240, if you count a number of long, folded-up sheets as multiple pages -- it is abundantly illustrated with drawings of plants that have somehow eluded the attention of botanists, surrounded by copious text in an unknown alphabet. It looks like what you’d get from throwing Roman, Greek and Arabic script into a blender along with a few alchemical symbols. At a certain point the artwork takes a noticeable turn: the plants are accompanied by miniature drawings of naked women sitting on the leaves, emerging from broken stems or bathing in pools. Those slightly Rubenesque figures also show up in what appear to be a number of astronomical or astrological charts. A final section of the book consists of page after page of closely written text, with starlike symbols in the margin that seem to indicate section or paragraph divisions.
It sounds like something H. P. Lovecraft and Jorge Luis Borges might have concocted to pull a prank on Umberto Eco. But mere description of the Voynich manuscript little prepares you for the experience of turning its pages, even in the considerably less expensive hardback just published by Yale University Press. The editor, Raymond Clemens, is curator of early books and manuscripts at the university’s Beinecke Rare Book & Manuscript Library. The color reproductions of each page are made at the size of the original, with the ink or paint used by the illustrator at times bleeding through slightly behind the text and artwork on the other side of the parchment. The thing that strikes the eye most about the writing is how concentrated it looks: printed in a crisp, precise hand by someone who, especially in the final pages, seems determined to make good use of the space without sacrificing readability.
Which is, of course, the maddening thing about the book -- almost literally so, at times. The effort to figure out what it says has tested, and defeated, the mental powers of numerous researchers over the past century, beginning not long after the bookseller Wilfrid M. Voynich acquired it in 1912. (The Yale edition includes a detailed biographical article on Voynich, who seems to have escaped from the pages of a novel by Dostoyevsky before settling in London and moving, eventually, to New York. The label “bookseller” is too narrow by far. One telling detail: his father-in-law was George Boole.)
The first scholar to throw himself into solving the riddle was William Romaine Newbold, professor of moral and intellectual philosophy at the University of Pennsylvania, whose The Cipher of Roger Bacon was posthumously assembled from his notes and manuscripts and published in 1928. Its title reflects the earliest known attribution of the work: a letter from 1665 or ’66 reports that the book had been owned by Rudolph II -- the Holy Roman Emperor, patron of Johannes Kepler and alchemy enthusiast -- who believed the author to be the 13th-century English scientist and monk Roger Bacon. (Not to be confused with Francis Bacon, also an English scientist, born 300 years after the monk’s prime.)
Knowing that Roger Bacon was a pioneer in the study of optics and had experimented with lenses, Newbold boldly combined fact and speculation to argue that the Voynich manuscript reported Bacon’s discoveries using the microscope (spermatozoa, for example) and the telescope (the Andromeda galaxy). Furthermore, the hieroglyphs in the mysterious text actually consisted of much smaller letters -- combined in a code of great sophistication -- which were only visible with a microscope.
Quod erat demonstrandum, sort of. Admirers of Roger Bacon found the interpretations plausible, anyway. But in 1931, the medievalist journal Speculum published a long and devastating assessment of Newbold’s methodology, which concluded that the code system he’d deduced was so vague and arbitrary that the messages he unearthed were “not discoveries of secrets hidden by Roger Bacon but the products of his own intense enthusiasm and his learned and ingenious subconsciousness.”
That judgment surely inspired caution among subsequent Voynich analysts. I found one paper, published in Science in 1945, claiming to have determined that the manuscript was written by a 16th-century astrologer and herbalist known to have had a particular interest in women’s illnesses. The researcher ends his report by insisting that it was not the product of “a learned and ingenious subconscious.” Be that as it may, the author also felt that “present war conditions” made it “undesirable to publish, at this time, the details of the key.”
That note of hesitation foreshadows one of the many interesting points made in the generally excellent short essays accompanying the Voynich manuscript in the Yale edition: “The extent to which the problems it poses have been a matter of professional as well as amateur interest is reflected in the fact that the best book-length introduction to this ‘elegant enigma’ was written by a government cryptologist … and published in-house by the U.S. National Security Agency.” The monograph (now in the public domain and available to download) indicates that the NSA had already played host to quite a bit of hard-core Voynich inquiry by the late 1970s, and who knows how much computational power has been directed at cracking it since then.
The Yale University Press edition ventures no theory of who created the Voynich manuscript or what it says. A chapter reporting on examination and tests of the material indicates that the parchment can be dated to roughly 1420, give or take a couple of decades, while multispectral imaging reveals the erased invisible signature of a pharmacist who died in 1622, using the noble title he received in 1608. That may not remove all possibility of a hoax, but it would seem to backdate it by a few centuries. The enigma, like the book itself, has proven nothing if not durable; this handsome and (relatively) affordable edition will serve to spread its fascination around.
In his autobiography, Benjamin Franklin describes how, as a striving young man in Philadelphia, he practiced a quite literal variety of moral bookkeeping. Having determined 13 virtues he ought to cultivate (temperance, frugality, chastity, etc.), he listed them on a table or grid, with the seven days of the week as its horizontal element. At night, before bed, he would make a mark for each time he had succumbed to a vice that day, in the row for the virtue so compromised.
A dot in the ledger was a blot on his character. Franklin explicitly states that his goal was moral perfection; the 13th virtue on his list was humility, almost as an afterthought. But without claiming to have achieved perfection, Franklin reports that his self-monitoring began to show results. Seeing fewer markings on the page from week to week provided a form of positive reinforcement that made Franklin, as he put it in his late 70s, “a better and a happier man than I otherwise should have been if I had not attempted it.”
Franklin’s feedback system was a prototype of the 21st-century phenomenon analyzed by Deborah Lupton in The Quantified Self (Polity), a study of how digital self-tracking is insinuating itself into every nook and cranny of human experience. (The author is a research professor in communication at the University of Canberra in Australia.) A device or application is available now for just about any activity or biological function you can think of (if not, just wait), generating a continuous flow of data. It’s possible to keep track of not only what you eat but where you eat it, at what time and how much ground was covered in walking to and from the restaurant, assuming you did.
In principle, the particulars of your digestive and excretory processes could also be monitored and stored: Lupton mentions “ingestible digital tablets that send wireless signals from inside the body to a patch worn on the arm.” She does not elaborate, but a little follow-up shows that their potential medical value is to provide “an objective measure of medication adherence and physiologic response.” Wearable devices can keep track of alcohol consumption (as revealed by sweat), as well as every exertion and benefit from a fitness routine. Sensor-equipped beds can monitor your sleep patterns and body temperature, not to mention “sounds and thrusting motions” possibly occurring there.
Self-tracking in the digital mode yields data about the individual characterized by harder-edged objectivity than even the most brutally honest self-assessment might allow. For Franklin, the path to self-improvement involved translating the moral evaluation of his own behavior into an externalized, graphic record; it was an experiment with the possibility of increasing personal discipline through enhanced self-awareness. The tools and practices that Lupton discusses -- the examples cited above are just a small selection -- expand upon Franklin’s sense of the self as something to be quantified, controlled and optimized. The important difference lies in how comprehensive and automated the contemporary methods are (many of the apps and devices can run in the background of everyday life, unnoticed most of the time), as well as how much more strongly they imply a technocratic sense of the world.
“The body is represented as a machine,” writes Lupton, “that generates data requiring scientific modes of analysis and contains imperceptible flows and ebbs of data that need to be identified, captured and harnessed so that they may be made visible to the observer.” But not only the body: other forms of self-tracking are available to monitor (and potentially to control) productivity, mood and social interaction. One device, “worn like a brooch … listens to conversations with and around the wearer and lights up when the conversation refers to topics that the user has listed in the associated app.”
Along with the ability to monitor and control various dimensions of an individual’s existence, there is likely to come the expectation or obligation to do so. On this point, Lupton’s use of the idea of self-reflexivity (as developed by the social theorists Zygmunt Bauman, Ulrich Beck and Anthony Giddens) proves more compelling than her somewhat perfunctory and obligatory references to Michel Foucault on “technologies of self” or Christopher Lasch on “the culture of narcissism.” The digitally enhanced, self-monitoring 21st-century citizen must meet the challenge of continuously “seeking information and making choices about one’s life in a context in which traditional patterns and frameworks that once structured the life course have largely dissolved … Because [people] must do so, their life courses have become much more open, but also much more subject to threats and uncertainties,” especially “in a political context of the developed world -- that of neoliberalism -- that champions self-responsibility, the market economy and competition and where the state is increasingly withdrawing from offering economic support to citizens.”
In such a context, high-tech self-tracking can provide access to exact, objective self-knowledge about health, productivity, status (there are apps that keep track of your standing in the world of social media) and so on. Know thyself -- and control thy destiny! Or so it would seem, if not for a host of issues around who has ownership, use or control of the digital clouds that shadow us. Lupton points to a recent case in which lawyers won damages in a personal-injury suit using data from a physical fitness monitor: the victim’s numbers from before and after the accident were concrete testimony to its effect. Conversely, it is not difficult to imagine such data being subpoenaed and used against someone.
The unintended consequences may also take the form of changed social mores: “Illness, emotional distress, lack of happiness or lack of productivity in the workplace come to be represented primarily as failures of self-control or efficiency on the part of individuals, and therefore as requiring greater or more effective individual efforts -- including perhaps self-tracking regimens of increased intensity -- to produce a ‘better self.’” Advanced technology may offer innovative ways to dig ourselves out of the hole, with the usual level of success.
Lupton is not opposed to self-tracking any more than she is a celebrant of it, in the manner of a loopy technovisionary prophet who announces, “Data will become integral with our sensory, biological self. And as we get more and more connected, our feeling of being tied into one body will also fade, as we become data creatures, bodiless, angelized.” (I will avoid naming the source of that quotation and simply express hope that it was meant to be a parody of Timothy Leary.) Instead, The Quantified Self is a careful, evenhanded survey of a trend that is on the cusp of seeming so ubiquitous that we’ll soon forget how utterly specific the problems associated with this aspect of our sci-fi future are to the wealthy countries, and how incomprehensible they must seem to the rest of the planet.