A march in Washington calling for the release of Donald Trump’s income tax returns is scheduled for April 15 -- putting turnout somewhat at the mercy of potential participants’ diligence about getting their own returns filed early. The demand is reasonable -- supported, at last report, by 53 percent of voters -- though popularity alone is no reason to expect the demonstration to have much effect. Whatever Trump needed to hide as a candidate obviously remains a vulnerability now that he is president.
His returns might yet enter the public record in the course of congressional (and other) investigations. But there is little chance of full disclosure even then, as Richard Murphy’s Dirty Secrets: How Tax Havens Destroy the Economy (Verso) has the indirect effect of reminding us. The available means for concealing assets -- whether from tax agents, creditors or the lawyers of former spouses -- are highly developed and amount to an alternative global economy in their own right.
Murphy, a professor of practice in international political economy at City, University of London, is both a chartered accountant and a co-founder of the Tax Justice Network, an international research and advocacy group. Of the five books he has published, this is the fourth on taxation; he mentions in passing that he wrote it in three months, almost certainly meaning last summer. (The endnotes tend to confirm this hunch: the latest articles and reports they cite are from August.) No discussion of taxation can be too short for the lay public, but Dirty Secrets puts muckraking and pedagogy in tandem to good effect.
The expression “tax haven” is still in general use, understood, Murphy writes, as “a place whose tax system provides an advantage to a person who is not resident in that place.” It calls to mind the discreet, friendly, uninquisitive accountants of Switzerland or the Bahamas, hiding cash in your name in a vault somewhere far from the authorities back home. But the somewhat broader term “secrecy jurisdiction” proves much more suitable for conveying both the range and the mechanics of the offshore economy.
“All the tax haven does,” Murphy explains, “is record the ownership of assets that are located in one place (which is not the tax haven) by a person who is themselves resident anywhere but the tax haven.” The ownership may be by a company or fund rather than an individual; the assets may be “title to lands and buildings” or such tangible wealth as “art, yachts and the like,” not just currency. “Nor,” the author explains, “are these investments usually managed from the tax haven in which their ownership is recorded. The decisions on where, and in what, the funds are ‘invested’ will, in all likelihood, be made by fund managers or share owners who are themselves almost certainly located ‘elsewhere.’”
For that matter, “very few banks [are] based in tax havens,” which instead host branches of international institutions (Deutsche Bank, Lloyds Bank, the Bank of Cyprus, etc.). Murphy’s own research into “the 60 secrecy jurisdictions studied as the basis of the Tax Justice Network’s first Financial Secrecy Index” in 2010 found that more than two-thirds of them had local offices of at least two of the world’s four leading accounting firms. (All four firms had offices in 33 of the countries studied.)
Determining how much wealth is involved -- or the economic impact of the loss of tax revenue, especially in the poorest countries -- requires great effort as well as considerable tolerance for wide margins in the final estimates. In 2011, Murphy’s analysis of World Bank data “estimated the total cost of tax evasion in the world as a whole at $3.1 trillion, or about 5 percent of world GDP at the time.”
A report released the following year by his colleagues in the Tax Justice Network used a number of methods to handle data from the International Monetary Fund, the World Bank and numerous other sources to make an estimate of between $21 trillion and $32 trillion “for global offshore financial assets as of 2010,” with “estimated annual loss of revenue at between $190 billion and $280 billion.” While not satisfied with the methodology of some researchers he cites, Murphy notes that they seem to converge on the figure of at least $200 billion a year of tax revenue lost to offshore concealment alone.
Very large numbers are easier to cite than to wrap the mind around, and they at best convey only a very general sense of the scale of the problem. The cumulative effect on public budgets around the world is obvious: Murphy treats the rise of secrecy jurisdictions as integral to the neoliberal agenda, with its ultimate ambition of ensuring that tax revenue is directed to funding police, prisons and the military while not a dime is spent for any other public purpose.
But Murphy also, surprisingly, regards tax havens as an affront to the power of the marketplace and their defeat as essential to saving capitalism from itself. I admit that this argument caught me off guard. Here is the author making it in brief.
If markets are to be efficient in the way that economists have described -- and as those who suggest they provide optimal solutions profess to believe they operate -- then there must be the highest-quality information available to all market participants so that they can act rationally, allocating resources to the person who is best able to use them to maximize return, and who exposes the provider of capital to the lowest risk in that process. Very obviously, tax havens undermine these principles. They are in fact designed to deny market participants the information they need to act rationally, allocate resources efficiently and minimize risk. … If risk is increased, then the required rate of return within marketplaces also increases. This means that the number of projects that can be invested in is reduced, so that the amount of capital committed is diminished. As a consequence, productivity declines, and along with it growth, output, wages and profits.
The suite of reforms Murphy proposes amounts to a program of robust data collection by the European Union and other international actors, combined with legislation that would, bit by bit, make access to secrecy jurisdictions more difficult and less profitable. The alternative is even more staggering levels of inequality than have already become the norm. Murphy’s trust in the possibility of reform would be easier to credit if the shadow economy were some kind of lamprey that had attached itself to an otherwise healthy organism; then it could be removed. But his book is too persuasive in its depiction of tax havens as tightly connected to banks, accounting firms and other established institutions. They seem to exist in a kind of symbiosis -- which can’t end well.
“A newspaper can have no bigger nuisance than a reporter who is always trying to write literature,” Joseph Mitchell confessed in the opening pages of My Ears Are Bent (1938), a selection of the pieces that had, presumably, gotten him into trouble. “It is not easy,” as he also noted in passing, “to get an interview with Professor Franz Boas, the greatest anthropologist in the world, across a city desk.”
But in fact he had recently done so -- in a series of articles for The New York World-Telegram that have only now been collected between covers as “Man -- With Variations”: Interviews With Franz Boas and Colleagues, 1937, published by Prickly Paradigm Press and distributed by the University of Chicago Press. The volume, best described as a pamphlet, was edited by Robert Brightman, a professor of Native American studies at Reed College, whose excellent introduction supplies not just context but also a thoughtful consideration of Mitchell’s place at the convergence of ethnography, journalism and memoir.
In 1938, Mitchell became a staff writer at The New Yorker, where he passed into legend as one of the pre-eminent literary journalists of all time. These earlier pieces are invaluable for understanding his work as a whole, and it's good to have them rescued from oblivion.
“Greatest anthropologist in the world” may sound like journalistic hyperbole, but much the same was said by Boas’s peers, and Mitchell was well within the bounds of fair comment in presenting the German-born professor to American readers as “the most dangerous enemy of Adolf Hitler’s racial concepts.” (The very concept of race he regarded as imprecise and scientifically dubious, while that of a “pure” or “superior” race was “impossible to countenance.”)
Beyond the topical significance of Boas’s work -- increasingly clear as the Nazi juggernaut was warming up -- Mitchell presented anthropology as the discipline that could, in effect, teach the world to recognize human nature within human variety, and vice versa.
A solemn priority -- not that Mitchell was po-faced about addressing it. In the third article, he pivoted from profiling Boas to describing the work done by the anthropologists he had trained, making the transition with what is the best sentence I have read so far this year, and probably for a longer while than that.
Nothing disgusts the average young anthropologist so much as the heroic stories in the newspaper about those African expeditions organized by well-heeled young gents whose mamas are willing to buy them yachts and tons of Abercrombie & Fitch equipment just to keep them from going on sit-down strikes in fancy gin mills or from getting themselves betrothed to fan dancers.
This is the first line -- in journalistic argot, the lead -- of the third of Mitchell’s six articles. Any lead tries to stake a claim on public attention somehow; that obligation grows exponentially more difficult if it is certain that quite a few readers will not have seen the earlier installments of a series. Mitchell goes about it with humor, obviously, but also with great rhetorical finesse.
Every word in the sentence is precisely chosen to elicit wry recognition from the newspaper-buying public of 1937. The reader today will share very little with the “imagined community” (to borrow a more recent anthropologist’s expression) for which Mitchell was writing. Yet after 80 years, his lead still works: scenes from some long-lost Marx Brothers film flicker in the mind for just a second while reading it.
In 1937, the American high school graduation rate had not yet reached 50 percent, yet Mitchell was undertaking not just to explain anthropology to a heterogeneous public but also to convey to readers that Boas’s students and colleagues were dedicated and serious researchers. Two paragraphs after mentioning the fan dancers, he sketches a portrait of the anthropologist as a young penny-pincher.
If, for instance, he goes for a summer’s work on aboriginal linguistics he will not have much more than $500 to spend, and he will probably buy a used automobile to save traveling expenses, selling it when he returns, and he will eat scantily and live simply spending every possible copper on the problem he has set for himself.
How effective was this in convincing John and Jane Q. Public at the height of the Depression? It’s impossible to know, but with a few precise words in the best possible order, he conveyed a sense of fieldwork as work, rather than the pastime of dilettante playboys out to collect souvenirs. I don't know if his series qualifies as literature, but very little journalism reads this well after 80 years. Very little of anything does.
Andy Warhol’s prediction about fame merits the occasional update. One that popped into my head not long ago after crossing paths with a gaggle of tourists holding their cellphones at arm’s length and smiling: “In the future, everyone will take a selfie every 15 minutes.”
After launching this random thought into the world via social media, I realized almost immediately that it wasn’t much of a prophecy. A poll in 2013 found that almost every third picture taken by someone between the ages of 18 and 24 was a selfie. The following year, participants in a Google developers’ conference heard that the users of one type of cellphone were snapping 93 million selfies per day. My reworking of Warhol’s point might not literally describe the status quo now, but it could certainly be taken for evidence of aging, as in fact my friends were not long in pointing out.
No longer a fad though not a tradition quite yet, the selfie is one of those cultural phenomena that almost everyone can recognize as probably symptomatic -- the result of social, psychological and technological forces too inexorable to escape but too troubling to think about for very long. (Other examples: reality television, sex robots, cars that drive themselves.)
Even the most ardent or compulsive selfie taker must have moments of uneasiness at how tightly the genre knots together self-expression and self-obsession, leaving not much room for anything else. A recent paper in the journal Frontiers in Psychology identifies a selfie-specific form of ambivalence unlikely to go away. The paper’s title says it all: “The Selfie Paradox: Nobody Seems to Like Them Yet Everyone Has Reasons to Take Them. An Exploration of Psychological Functions of Selfies in Self-Presentation.”
More on that shortly. But first, a quick look at a book with a more compact and less literal title, I Love My Selfie, by the critic and essayist Ilan Stavans (Duke University Press). A few of the author’s selfies appear in the book, along with reproductions of self-portraits by Rembrandt, van Gogh and Warhol, but it would be an irony-impaired reader indeed who took him to be making any claim to equivalence. The book’s spirit is much closer to that of the Puerto Rican multimedia artist Adál Alberto Maldonado, whose work appears throughout its pages and who titled one photo series “Go Fuck Your Selfie: I Was a Schizophrenic Mambo Dancer for the FBI.” The seed for Stavans’s book was the preface he wrote for a collection of photos by Adál, as he prefers to be known. (Stavans is a professor of Latin American and Latino culture at Amherst College.)
“Richard Avedon once said that a portrait is a picture of someone who knows he is being portrayed,” writes Stavans. “… The self-portrait is that knowledge twice over.” Combined with the highly developed skills of a painter or a photographer, that redoubled awareness can reveal more than the creator’s idealized self-image. The late self-portraits of Robert Mapplethorpe, for instance, “emit a stoicism that is frightening … as if his statement was ‘The world around me is falling apart, but I’m still here, a chronicler of my times.’” Adál’s quietly surreal photographs of himself posing with various props are an oblique and sometimes comic reflection on being a Puerto Rican artist obliged to deal with whatever assumptions the viewer may bring to his work.
Selfies, by contrast, are what’s left of the self-portrait after all technique, discipline, talent and challenge are removed from the process. They exist to be displayed -- not to reveal the self but to advertise it. Stavans calls the selfie “a business card for an emotionally attuned world” and describes life in the public sphere of social media as “a mirage, a solipsistic exercise in which we believe we’re connecting with others while in truth we’re just synchronizing with the image we have of them in our mind.”
And as with other forms of advertising, too much truthfulness would damage the brand. Most selfies never go out into the world. “The trash icon in which we imprison them,” Stavans writes, “is the other side of our life, the one we reject, the one we condemn.”
The authors of “The Selfie Paradox,” Sarah Diefenbach and Lara Christoforakos, are researchers in the department of psychology at Ludwig-Maximilians-University in Munich. The participants in their study were 238 individuals, between 18 and 63 years of age, living in Austria, Germany and Sweden, recruited from email lists and at university events. They were asked about the frequency with which they took selfies and received them from other people, as well as a series of questions designed to elicit information about their personality and their feelings about, and motivations for, taking and viewing selfies.
Not surprisingly, perhaps, people who stated that they were open about their feelings and prone to discussing their accomplishments also tended to enjoy taking selfies. And consistently enough, those inclined to downplay their own successes also tended to report “negative selfie-related affect” -- i.e., were decidedly nonenthusiastic about selfies.
The researchers found broad agreement with the idea that selfies could have unpleasant consequences (inciting derogatory comments, for example) but much less regarding what the positive effects might be. “The only aspect that reached significant agreement,” the researchers found, “was self-staging, i.e., the possibility to use selfies for presenting an intended image to others.” Positive benefits such as expressing independence or connection with others were recognized by far fewer participants. And those who took selfies more often were more likely to identify positive consequences of the activity:
In a way, taking selfies may be a self-intensifying process, where one discovers unexpected positive aspects (besides self-staging) while engaging in the activity and this positive experience encourages further engagement. Nevertheless, the majority showed a rather critical attitude, and among the perceived consequences of selfies, negative aspects clearly predominate.
To put it another way, participants in the study tended to acknowledge that putting a selfie out into the world could backfire -- while the only broadly accepted benefit of a selfie they recognized was that of self-display or self-promotion. Though the researchers do not spell out the connection, these attitudes seem mutually reinforcing. If the most recognized motivation for posting a selfie is to benefit the ego, exposing its vulnerabilities would be an associated danger.
Another of the findings also seems in accord with this logic: participants were likely to explain their reasons for taking and posting selfies as ironic or self-deprecating -- while showing much less tendency to assume that other people were doing the same. They also expressed a preference for others to post more nonselfie photographs.
Indeed, people who reported taking a lot of selfies tended “not to like viewing others’ selfie pictures and rather wish for a higher number of usual photos.” It seems in accord with one of Stavans’s observations: “Looking at a favorite selfie is like entering into a world in which we, and nobody else, exist in an uninterrupted fashion.” At least until Narcissus falls into the pool and drowns.
Finding himself in prison following the beer-hall fiasco in Munich in 1923, Adolf Hitler had time to put his thoughts about politics and destiny into order, at least as much as that was possible. The United States was part of his grand vision, and not as someplace to conquer.
“The racially pure and still unmixed German has risen to become master of the American continent,” he wrote in Mein Kampf, “and he will remain the master, as long as he does not fall victim to racial pollution.” He was encouraged on the latter score by what he had learned of American immigration policy. With its stated preference for Northern Europeans, its restrictions on those from Southern and Eastern Europe, and its outright exclusion of everyone else, the Immigration Act of 1924 impressed Hitler as exemplary. It manifested, “at least in tentative first steps,” what he and his associates saw as “the characteristic völkisch conception of the state,” as defined in some detail by the Nazi Party Program of 1920.
Revulsion is an understandable response to this little traipse through the ideological sewer, but it is wholly inadequate for assessing the full measure of the facts or their implications. The admiration for American immigration policy expressed in Mein Kampf was not a passing thought on the day’s news (Hitler had been in prison for about two months when Calvin Coolidge signed the act into law) nor a one-off remark. Its place in the full context of Nazi theory and practice comes into view in Hitler’s American Model: The United States and the Making of Nazi Race Law (Princeton University Press) by James Q. Whitman, a professor of comparative and foreign law at Yale Law School.
Many people will take the very title as an affront. But it’s the historical reality the book discloses that proves much harder to digest. The author does not seem prone to sensationalism. The argument is made in two succinct, cogent and copiously documented chapters, prefaced and followed with remarks that remain within the cooler temperatures of expressed opinion (e.g.: “American contract law, for example, is, in my opinion, exemplary in its innovativeness”).
Hitler’s American Model is scholarship, not an editorial traveling incognito. Its pages contain many deeply offensive statements about American history and its social legacy. But those statements are all from primary sources -- statements about America, made by Nazis, usually in the form of compliments.
“The most important event in the history of the states of the Second Millennium -- up until the [First World] War -- was the founding of the United States of America,” wrote a Nazi historian in 1934. “The struggle of the Aryans for world domination thereby received its strongest prop.” Another German author developed the point two years later, saying that “a conscious unity of the white race would never have emerged” without American leadership on the global stage following the war.
Examples could be multiplied. The idea of the United States as a sort of alt-Reich was a Nazi commonplace, at least in the regime’s early years. But it was not just a matter of following Hitler’s lead. The white-supremacist and eugenicist writings of Madison Grant and Lothrop Stoddard -- among the best-selling American authors of 100 years ago -- circulated in translation in the milieu that spawned Hitler. (I don’t recall Hannah Arendt mentioning Grant or Stoddard in The Origins of Totalitarianism, oddly enough.) A popular Nazi magazine praised lynching as “the natural resistance of the Volk to an alien race that is attempting to gain the upper hand.” European visitors noted the similarity between the Ku Klux Klan and fascist paramilitary groups like the Brownshirts, and they compared the post-Reconstruction order in the South to the Nazi system.
But the journalistic analogies and propaganda talking points of the day, while blatant enough, don’t convey the depth of American influence on Nazi race law. The claim of influence runs against the current of much recent scholarship arguing that Nazi references to the Jim Crow system were “few and fleeting” and that American segregation laws had little or no impact on the Nuremberg Laws. (At the Nuremberg rally of 1935, the Nazis proclaimed citizenship limited to those “of German blood, or racially related blood” and outlawed marriage or sexual relations between Jews and German citizens.)
While the Nazis did call attention to segregation in the United States -- so the argument goes -- it was to deflect criticism of German policy. The error here, as Whitman sees it, comes from treating the U.S. Supreme Court ruling in Plessy v. Ferguson as the primary or quintessential legal component of racial oppression in the United States, and presumably the one Nazi jurists would have looked to in reshaping German policy. But, according to Whitman, “American race law” in the 19th and much of the 20th century:
sprawled over a wide range of technically distinct legal areas … [including] Indian law, anti-Chinese and -Japanese legislation, and disabilities in civil procedure and election law …. Anti-miscegenation laws on the state level featured especially prominently … [as] did immigration and naturalization law on the federal level ….
Even before the outbreak of World War I, German scholars were fascinated by this teeming mass of American racist law -- with a particular interest in what one of them identified as a new category of “subjects without citizenship rights” (or second-class citizens, to put it another way) defined by race or country of ancestry. By the 1930s, the anti-miscegenation laws in most American states were another topic of great concern. While many countries regarded interracial marriage as undesirable, Nazi jurists “had a hard time uncovering non-American examples” of statutes prohibiting it.
A stenographic transcript from 1934 provides Whitman’s most impressive evidence of how closely Nazi lawyers and functionaries had studied American racial jurisprudence. A meeting of the Commission on Criminal Law Reform “involved repeated and detailed discussion of the American example, from its very opening moments,” Whitman writes, including debate between Nazi radicals and what we’d have to call, by default, Nazi moderates.
The moderates argued that legal tradition required consistency. Any new statute forbidding mixed-race marriages had to be constructed in accord with the one existing precedent for treating a marriage as criminal: the law against bigamy. This would have been a bit of a stretch, and the moderates preferred letting the propaganda experts discourage interracial romance rather than making it a police matter.
The radicals were working from a different conceptual tool kit. Juristic tradition counted for less than what Hitler had called the “völkisch conception of the state,” which demanded Aryan supremacy and racial purity. It made more sense to them to follow an example that had been tried and tested. One of the hard-core Nazis on the commission knew where to turn:
Now as far as the delineation of the race concept goes, it is interesting to take a look at the list of American states. Thirty of the states of the union have race legislation, which, it seems clear to me, is crafted from the point of view of race protection. … I believe that apart from the desire to exclude if possible a foreign political influence that is becoming too powerful, which I can imagine is the case with regard to the Japanese, this is all from the point of race protection.
The lawyers whom Whitman identifies as Nazi radicals seemed to appreciate how indifferent the American states were to German standards of rigor. True, the U.S. laws showed a lamentable indifference to Jews and Gentiles marrying. But otherwise they were as racist as anything the führer could want. “The image of America as seen through Nazi eyes in the early 1930s is not the image we cherish,” Whitman writes, “but it is hardly unrecognizable.”
A survey of 7,000 freshmen at colleges and universities around the country found just 6 percent of them able to name the 13 colonies that founded the United States. Many students thought the first president was Abraham Lincoln, also known for “emaciating the slaves.” Par for the course these days, right?
It happens that the study in question was reported in The New York Times in 1943. The paper conducted the survey again during the Bicentennial, using more up-to-date methods, and found no improvement. “Two-thirds [of students] do not have the foggiest notion of Jacksonian democracy,” one history professor told the Times in 1976. “Less than half even know that Woodrow Wilson was president during World War I.”
Reading the remark now, it’s shocking that he was shocked. After 40 years, our skins are thicker. (They have to be: asking the current resident of the White House about Jacksonian democracy would surely be taken as an invitation to reminisce about his “good friend,” Michael.)
The problem with narratives of decline is that they almost always imply, if not a golden age, then at least that things were once much better than they are now. The hard truth in this case is that they weren’t. On average, the greatest generation didn’t know any more about why The Federalist Papers were written, much less what they said, than millennials do now. The important difference is that today students can reach into their pockets and, after some quick thumb typing and a minute or two of reading, know at least something on the topic.
How to judge all this is largely a question of temperament -- of whether you see their minds as half-empty or half-full. Tom Nichols conveys the general drift of his own assessment with the title of his new book, The Death of Expertise: The Campaign Against Established Knowledge and Why It Matters, published by Oxford University Press. The author is a professor of national security affairs at the U.S. Naval War College and an adjunct professor at the Harvard Extension School.
He sees the longstanding (probably perennial) shakiness of the public’s basic political and historical knowledge as entering a new phase. The “Google-fueled, Wikipedia-based, blog-sodden collapse of any division between professionals and laymen, students and teachers” is like a lit match dropped into a gasoline tanker-sized container filled with the Dunning-Kruger effect. (It may seem comical that I just linked to Wikipedia to explain the effect, but it’s a good article, and in fact David Dunning himself cites it.)
Nichols knows better than to long for a better time before technology shattered our attention spans. He quotes Alexis de Tocqueville’s observation from 1835: “In most of the operations of the mind, each American appeals only to the individual effort of his own understanding.” This was basic to Jacksonian democracy’s operating system, in which citizens were, Tocqueville wrote, “constantly brought back to their own reason as the most obvious and proximate source of truth. It is not only confidence in this or that man which is destroyed, but the disposition to trust the authority of any man whatsoever.”
The difference between a self-reliant, rugged individualist and a full-throated, belligerent ignoramus, in other words, tends to be one of degree and not of kind. (Often it’s a matter of when you run into him and under what circumstances.) Nichols devotes most of his book to identifying how 21st-century American life undermines confidence in expert knowledge and blurs the lines between fact and opinion. Like Christopher Hayes in The Twilight of the Elites, he acknowledges that real failures and abuses of power by military, medical, economic and political authorities account for a good deal of skepticism and cynicism toward claims of expertise.
But Nichols puts much more emphasis on the mutually reinforcing effects of media saturation, confirmation bias and “a childish rejection of authority in all its forms” -- as well as the corrosive effects of credential inflation and “would-be universities” that “try to punch above their intellectual weight for all the wrong reasons, including marketing, money and faculty ego.” Unable to “support a doctoral program in an established field,” Nichols says, “they construct esoteric interdisciplinary fields that exist only to create new credentials.”
Add the effect of consumerism and entertainment on the academic ethos, and the result is a system “in which students learn, above all else, that the customer is always right,” creating a citizenry that is “undereducated but overly praised” and convinced that any claim to authoritative knowledge may be effectively disputed in the words of the Dude from The Big Lebowski: “Yeah, well, you know, that’s just, like, your opinion, man.”
As a work of cultural criticism, The Death of Expertise covers a good deal of familiar territory and rounds up the usual suspects to explain the titular homicide. But the process itself is often enjoyable. Nichols is a forceful and sometimes mordant commentator, with an eye for the apt analogy, as when he compares the current state of American public life to “a hockey game with no referees and a standing invitation for spectators to rush onto the ice.”
But one really interesting idea to take away from the book is the concept of metacognition, which Nichols defines as “the ability to know when you’re not good at something by stepping back, looking at what you’re doing, and then realizing that you’re doing it wrong.” (He gives as an example good singers: they “know when they’ve hit a sour note,” unlike terrible singers, who don’t, even if everyone else winces.)
“The lack of metacognition sets up a vicious loop, in which people who don’t know much about a subject do not know when they’re in over their head talking with an expert on that subject. An argument ensues, but people who have no idea how to make a logical argument cannot realize when they’re failing to make a logical argument …. Even more exasperating is that there is no way to educate or inform people who, when in doubt, will make stuff up.”
The implications are grave. In 2015-16, Donald Trump ran what Nichols calls “a one-man campaign against established knowledge,” and he certainly pounded the expertise of most pollsters into the dirt. He is now in a position to turn the big guns on reality itself; that, more than anything else, seems to be his main concern at present. Nichols writes that research on the Dunning-Kruger effect found that the most uninformed or incompetent people in a given area were not only “the least likely to know they were wrong or to know that the others were right” but also “the most likely to try to fake it, and the least able to learn anything.” That has been shown in the lab, but testing now continues on a much larger scale.
“All eras in a state of decline and dissolution are subjective,” said Goethe in a moment of sagely grumbling about the poets and painters of the younger generation, who, he thought, mistook wallowing in emotion for creativity. “Every healthy effort, on the contrary, is directed from the inward to the outward world.”
I didn’t make the connection with Svend Brinkmann’s book Stand Firm: Resisting the Self-Improvement Craze until a few days after writing last week’s column about it. One recommendation in particular from the Danish author’s anti-self-help manual seems in accord with Goethe’s admonition. As Brinkmann sees it, the cult of self-improvement fosters a kind of bookkeeping mentality. We end up judging experiences and relationships “by their ability to maximize utility based on personal preferences -- i.e. making the maximum number of our wishes come true.” The world becomes a means to the ego’s narrow ends, which is no way to live.
Besides offering a 21st-century guide to the Stoic ethos of disinvestment in the self, Brinkmann encourages the reader to rediscover the world in all its intrinsic value -- its fundamental indifference to anybody’s mission statement. How? By spending time in museums and forests:
“A museum is a collection of objects from the past (near or distant), e.g. art or artifacts that say something about a particular era or an aspect of the human experience. Obviously, you learn a lot from a museum visit -- but the greatest joy lies in just reveling in the experience with no thought of how to apply the knowledge and information. In other words, the trick is to learn to appreciate things that can’t be ‘used’ for some other function....
Similarly, a walk in the woods gives us a sense of being part of nature and an understanding that it shouldn’t be seen as consisting of resources that exist merely to meet human needs and desires. ... There are aspects of the world that are good, significant, and meaningful in their own right -- even though you derive nothing from them in return.”
Making similar points from a quite different angle is The Usefulness of Useless Knowledge by Abraham Flexner (1866-1959), the founding director of the Institute for Advanced Study, in an edition from Princeton University Press with a long introduction by the institute’s current director, Robbert Dijkgraaf.
The essay giving the book its title first appeared in Harper’s magazine in October 1939 -- a few months into the New York World’s Fair (theme: The World of Tomorrow) and just a few weeks into World War II. “I [am] pleading for the abolition of the word ‘use,’” Flexner wrote, “and for the freeing of the human spirit.” It must have seemed like one hell of a time for such an exercise. But the essay’s defense of the Ivory Tower was tough-minded and far-sighted, and Dijkgraaf’s introduction makes a case for Flexner as a major figure in the history of the American research university whose contribution should be remembered and revived.
The germ of The Usefulness of Useless Knowledge was a memorandum Flexner wrote as executive secretary of the General Education Board of the Rockefeller Foundation in 1921. The principles it espouses were also expressed in his work bringing Albert Einstein and other European academic refugees to the Institute at Princeton in the early 1930s. The essay defends “the cultivation of beauty ... [and] the extension of knowledge” as “useless form[s] of activity, in which men [and, as he acknowledges a few sentences earlier, women] indulge because they procure for themselves greater satisfactions than are otherwise available.”
But the impact of Flexner’s argument does not derive primarily from the lofty bits. He stresses that the pursuit of knowledge for its own sake has in fact shown itself already to be a powerful force in the world -- one that the ordinary person may not be able to recognize while swept up in “the angry currents of daily life.” The prime exhibits come from mathematics (Maxwell’s equations or Gauss’s non-Euclidean geometry took shape decades before practical uses could be found for them), though Flexner also points to the consequential but pure curiosity-driven work of Michael Faraday on electricity and magnetism, as well as Paul Ehrlich’s experiments with staining cellular tissue with dye.
“In the end, utility resulted,” Flexner writes, “but it was never a criterion to which [researchers’] ceaseless experimentation could be subjected.” Hence the need for institutions where pure research can be performed, even at the expense of pursuing ideas that prove invalid or inconsequential. “[W]hat I say is equally true of music and art and of every other expression of the untrammeled human spirit,” he adds, without, alas, pursuing the point further.
The untrammeled human spirit requires funding in any case. Although written towards the end of the Great Depression -- and published ten years to the month after the stock market crash -- The Usefulness of Useless Knowledge reads like a manifesto for the huge expansion of higher education and of research budgets in the decades to follow.
Flexner could point to the Institute for Advanced Study with justified pride as an example of money well spent. He probably corrected the page proofs for his essay around the same time Einstein was writing his letter to President Roosevelt, warning that the Germans might be developing an atomic bomb. And as Robbert Dijkgraaf reminds us in his introduction, another Flexner appointee was the mathematician John von Neumann, who “made Princeton a center for mathematical logic in the 1930s, attracting such luminaries as Kurt Gödel and Alan Turing.” That, in turn, led to the invention of an electronic version of something Turing had speculated about in an early paper: a machine that could be programmed to prove mathematical theorems.
“A healthy and balanced ecosystem would support the full spectrum of scholarship,” Dijkgraaf writes, “nourishing a complex web of interdependencies and feedback loops.” The problem now is that such a healthy and balanced intellectual ecosystem is no less dependent on a robust economy in which considerable amounts of money are directed to basic research -- without any pressing demand for a return on investment. “The time scales can be long,” he says, “much longer than the four-year periods in which governments and corporations nowadays tend to think, let alone the 24-hour news cycle.”
That would require a culture able to distinguish between value and cost. Flexner’s essay, while very much a document from eight decades ago, still has something to say about learning the difference.
In lists of winners of the Nobel Prize for Literature, an asterisk sometimes appears next to the name of the entry for 1964. That year Jean-Paul Sartre declined the award because, among other things, a writer must “refuse to let himself be transformed into an institution.” The refusal cannot be called all that effective, in part because Sartre already was an institution (on an international scale to which, so far as I know, no author today really compares) and in part because the Swedish academy did not give the award to anyone else that year. He remains on the list, marked as a sore winner.
That same year, a future Nobel laureate issued his third and fourth albums, The Times They Are a-Changin’ and Another Side of Bob Dylan. The second title in particular hints at the ambivalence that the songwriter formerly known as Robert Zimmerman was beginning to feel toward his most ambitious creation -- to wit, “Bob Dylan,” a persona shaped in part through his own borrowings from various folk-music legends (especially Woody Guthrie) and in part by the felt need of segments of the American public for someone to embody the voice of his generation. In acquiring an audience, he took on the weight of its expectations and demands. (Reasonable and otherwise: Dylan had what in the 1960s were not yet known as stalkers.) “By many accounts, he’d shed his boyish charm and had become moody, withdrawn and dismissive of those who either stood in his way or who wanted something from him,” writes Andrew McCarron in Light Come Shining: The Transformations of Bob Dylan (Oxford University Press). In public he sometimes had to wear a disguise, just to be left alone.
A connection can be drawn between Sartre and Dylan not only through their shared Nobel status (something of a coincidence almost, given the literature committee’s caprice in recent years) but because Light Come Shining belongs to a genre to which Sartre devoted a great deal of attention over the years: the psychobiography. Indeed, McCarron’s whole perspective on Dylan’s life and work shows the influence of concepts from Sartre’s “existential psychoanalysis,” especially that of the project. McCarron, who heads the religion, philosophy and ethics department at Trinity School in New York City, draws on quite a few more recent developments in psychology. But the Sartrean component is central enough -- and nowadays unusual enough -- to be striking.
Psychobiography in this sense should not be confused with the hunt for formative family relationships, childhood traumas, personal secrets, etc.: the sort of diagnosis at a distance, licensed or otherwise, practiced by many if not most biographers over the past century. It combs the available information about a subject’s life -- especially his or her own recollections and interpretations of things -- not for symptoms or concealed truths but, McCarron writes, for “the themes and structures of a life narrative that shed light on the mind and life-world behind the story.” An inaccurate memory or an outright lie may prove more revealing than what it distorts: “Appropriating, embellishing, misrepresenting, fantasizing, projecting and contradicting are all par for the course within the narrative realm. … The psychological truth that a given story conveys is considerably more valuable from a study of lives perspective than its historical truth.” The search is for the deep pattern in how the subject has understood life and tried to steer it (accurately or not) in certain directions. The psychobiographer is interested in “what [someone] succeeds in making of what he has been made,” as Sartre put it in a passage McCarron quotes.
Bob Dylan has been famous for his massive changes of direction, both in songwriting style (folk to rock to country, on to every permutation thereof) and personal identity. Early in his career he claimed to have been a carny and a hobo, among other things, and his interviews across the decades have often been performances, deflecting questions as much as answering them. More dramatic even than his shift from anti-war and civil rights balladeer to introspective surrealist -- with the two albums from 1964 marking the transition -- was Dylan’s conversion to Christianity in the late 1970s. For a while his concerts became confrontational, both from his refusal to play old songs and his impromptu fire-and-brimstone preaching. Whatever his religious affiliation now, the proselytizing phase did not last. He’s had his quota of marital and romantic drama and career downturns. Light Come Shining was finished before Dylan received the Nobel, and it’s possible he has not seen his last metamorphosis.
The psychobiographer, then, faces an excess of material with Dylan, not to mention more than 50 years of investigation, speculation and exegesis by obsessive fans. McCarron sifts through it and finds “variations on a repetitive plotline” coming to the fore with particular clarity at a number of points: “I have lost my sense of identity and purpose. I feel anxious and vulnerable to death and destruction. I turn to the songs and artists of my youth for guidance. I feel a redeemed sense of self and purpose. I reflect upon the change and understand it as the process of developing into who I’m supposed to be.”
One case of anxious and unmoored feelings was Dylan’s sense of being crushed by celebrity circa 1964 -- a period culminating in his motorcycle crash in 1966. (If that’s what really happened, rather than a stint in rehab, for which there seems to be more evidence.) McCarron identifies similar phases of great personal strain in the late 1970s and ’80s, followed by, respectively, his religious conversion and the major revival of his creative powers shown in Dylan’s songwriting in the 1990s. At each turn, he escaped desolation row by reconnecting with his musical roots: the blues, gospel, Western swing, the sounds of New Orleans, the memory of seeing Buddy Holly a few days before his death.
“All of Sartre’s studies of lives reveal features characteristic of traditional religious narratives,” wrote Stuart L. Charmé in Meaning and Myth in the Study of Lives: A Sartrean Perspective (University of Pennsylvania Press, 1984). And that makes sense insofar as what the psychobiographer looks for in a subject’s life is a kind of private mythology: the self’s innermost sense of its origins and its course. (As mentioned earlier, Sartre calls this a project; perhaps “projectile” also fits, since there’s a definite sense of movement, of throwing, or being thrown, into the future.)
If what McCarron identifies as Dylan’s psychobiographical bedrock might also be called a story of death and resurrection, that’s not necessarily because of the songwriter’s midlife experience of being “born again” and driven to evangelize. A great deal of the music that Dylan loves and immerses himself in echoes biblical language and themes, and it turns out that any number of songs about worldly pleasures and follies were written by performers who did a bit of preaching, too. Dylan absorbed musical traditions so deeply that they became part of himself, then projected them forward, in constant awareness that -- in a lyric that McCarron oddly never cites -- “he not busy being born is busy dying.”
Attributing human characteristics to animals -- as in the case of Henri the Cat, the existentialist feline -- is a case of anthropomorphism. But the word is perhaps less suitable when the creatures in question are monkeys or apes. Anthropomorphizing disregards the vast difference between an animal’s world and our own. Watching primates is another matter.
Not that the gap is smaller, but it’s tangible and fascinating in its own right. Projecting human qualities onto primates can boomerang: we are close enough on the evolutionary tree to make every point of anatomical or behavioral resemblance a challenge to our egocentricity as a species. From a certain angle, it probably looks like we’re just a species of jumped-up chimpanzee.
Two camps have formed in the study of how intelligence evolved, according to Julia Fischer’s Monkeytalk: Inside the Worlds and Minds of Primates, published in Germany five years ago and now out in translation from the University of Chicago Press. One camp takes human beings as “the analytical point of departure” and “seeks to discover which other animal groups share competencies” with us. The anthropocentric researcher then goes in search of “a plausible explanation … for when a particular trait emerged in the course of evolution.”
In contrast, what Fischer calls the “evolutionary-ecological approach” starts out from an understanding of intelligence as one aspect of how animals engage with and adapt to their environment, raising questions about how “various species solved similar problems in the course of evolution” and what circumstances foster the power to learn or to generalize from experience. (Or, conversely, what factors might inhibit that power.)
Drawing on her own work in the field and the lab as well as that of other researchers, Fischer considers it “most productive to incorporate both perspectives” -- the anthropocentric and the evolutionary-ecological -- “to develop a comprehensive understanding of animal intelligence” and of primates especially. But my impression is that she inclines more to the evolutionary-ecological camp: much of the book reflects on her observation of three species (the Barbary macaque and two kinds of baboon) in different environments, and Fischer keeps the reader aware of the natural fit between behavioral pattern or social structure and immediate issues such as predator threats and food availability.
Fischer’s recollections of field research (where “strong nerves, grit and oftentimes a morbid sense of humor are essential”) and descriptions of monkey behavior are highly engaging. The account of babysitting among Barbary macaques is especially vivid and memorable. A male will snatch a newborn (not necessarily his own progeny) from its mother for use as a status symbol and icebreaker with the guys. Then:
He can more confidently approach another male and engage in mutual grooming than if he approaches alone. When two male Barbary macaques sit together holding an infant, they often engage in a peculiar ritual, lifting the baby up high, nuzzling it and thoroughly inspecting it. They chatter their teeth, smack their lips and emit deep grunting sounds. Sometimes they will bask in the afterglow, calmly remaining beside each other, while at other times one of the males will brusquely snatch the infant up and rush off to repeat the ritual with another male.
Eventually the baby gets hungry -- and thus less amusing -- whereupon it is returned to the mother. From observation of chacma baboons, Fischer found that at the age of 10 weeks, youngsters did not respond to recordings of baboon calls. By four months, they did pay attention, without regard for what kind of call it was. And two months after that, “They reacted clearly to alarm calls and had learned to ignore contact calls, save for those produced by their mothers.” A learning process had transpired, though Fischer notes it is difficult for researchers to work out just how it happens in the wild.
Monkeytalk reports on findings concerning three dimensions of the primate mind: social behavior, cognition and communication. One of the arguments Fischer considers is “that intelligence has arisen as a consequence of life in complexly structured groups”; the other, “that intelligence and communicative ability are intimately interconnected.”
From our limb of evolutionary development, it’s tempting to consider them as all inextricably linked. The anthropocentrist would insist on a third link: one between communicative ability and social complexity, which work together like pistons in the engine of human cognition. (See Kenneth Burke’s “Definition of Man” for another formulation of this idea.) But from Fischer’s review of the evidence, the connections are much more loosely imbricated than we might think:
Primate intelligence is not limited to the social domain. Primates competently interpret objects and events in their physical surroundings and draw correct inferences about them -- or at least they do when the pertinent stimuli are not too misleading …. Yet indirect evidence and “invisible” causal connections remain completely alien to them. … While intelligence is tied to a rich representation of the social world, it by no means entails a sophisticated system of communication. At the same time, primates are evidently capable of perceiving the subtlest differences in the signaling behavior of their fellows and investing those nuances with distinctive meaning. In addition, they make use of, and adaptively respond to, a variety of information sources, such as contextual clues and signals.
Only on the final page (not counting acknowledgments and other apparatus) does Fischer make the reader fully aware of two very dark clouds hanging over the progress of knowledge concerning our fellow primates. One is that long-term research -- while necessary, since most species have long life spans -- is difficult given the scarcity of long-term funding. The other is that a majority of species are now endangered, and many are on the verge of extinction. Monkeytalk certainly leaves you with a sense of how deep that loss will run.