“Before the Freedom of Information Act,” Henry Kissinger told a gathering of diplomats in Turkey in March 1975, “I used to say at meetings, ‘The illegal we do immediately; the unconstitutional takes a little longer.’ But since the Freedom of Information Act, I'm afraid to say things like that.”
Not that afraid, obviously. The Machiavellian quip got a laugh at the time, according to the official transcript -- and clearly it merits a spot in any future collection of familiar quotations, alongside Kissinger’s remark about power being the ultimate aphrodisiac. For now, it serves as the epigraph to a press release from WikiLeaks announcing the opening of the Public Library of U.S. Diplomacy, with its first collection consisting of more than 1.7 million diplomatic cables from 1973 to ’76.
All of the material was routinely (if belatedly) declassified after 25 years, per U.S. law, and has been available from the National Archives and Records Administration. WikiLeaks made the collection searchable and is “housing” it on servers presumably beyond the reach of Big Brother. Now they can’t be reclassified.
As announcements from WikiLeaks go, it’s all fairly underwhelming. But it does make an important revelation -- however unintentional -- by reminding the public that three years have passed since the group last made a world-shaking release of information. The leaks, it seems, have been plugged. Secret documents are staying secret. Even the most ardent admirer of Bradley Manning will be understandably reluctant to share his fate. While it is too soon to pronounce WikiLeaks dead, it does appear to be in a coma.
Castronovo, a professor of English and American studies at the University of Wisconsin at Madison, links the “Cablegate” of 2010 to a Revolutionary War-era incident through the concept of “a new kind of network actor” distinct from “the traditional person of liberal democracy.” The case in question was the Thomas Hutchinson affair of 1773, when letters by the governor of the Massachusetts Bay Colony somehow found their way into the hands of the Sons of Liberty, who then circulated them via newspaper and pamphlet.
Hutchinson had borne the brunt of serving His Majesty during the Stamp Act riots a few years earlier, and was in office during the Boston Massacre. In his correspondence he referred to the need for “abridgement of what are called English liberties” among the unruly colonial subjects, which was just so much gasoline on the fire.
The source of the leak was one Benjamin Franklin, colonial postmaster. Franklin later insisted that this ethical lapse was committed in an attempt (alas! unsuccessful) to reduce American hostility towards Parliament and the Crown by documenting that the real source of trouble was someone much lower in the chain of command. Castronovo treats this claim with greater suspicion than have some historians -- and not just because Franklin was such a master of irony, pseudonymous commentary, and the fake-out.
Franklin was also a node in multiple correspondence networks, and understood perfectly well how porous they could be. Alongside the official channels of communication between Court and colony, there were informal but durable long-distance connections among merchants, officials, publishers, and so on. A letter by someone within such a network tended to have, so to speak, an implicit “cc” or “bcc” field.
“More significant than the sending and receipt of private letters between individuals,” writes Castronovo, the activity of these epistolary networks “encompassed a range of public activities, including the recitation of letters aloud, the printing of handwritten letters in newspapers, the transmission of pamphlets, and the sending of circular letters by local governments....” Such communications might be “opened by third parties and forwarded without permission, shared in social circles and reprinted in newspapers.”
By transmitting Hutchinson’s letters to figures within his own circles who were in contact with the more hot-headed American revolutionary circles, Franklin was creating a political weapon against the authorities. He was, in effect, both a whistleblower and Julian Assange at the same time.
Having put it that way, however, I must immediately backtrack to say that the analogy is not Castronovo’s point at all. “At issue,” he writes, “is how communication spreads and metastasizes, how ideas proliferate and take root, how views and opinions propagate themselves.”
The network in each case – epistolary or digital – is not just a medium or tool that individuals use to communicate or act. In it, rather, “individual agency becomes unmoored from stable locations and is set adrift along an interconnected web of tendril-like links and nodes.” This is a perspective derived from the work of Bruno Latour, among others. It rejects the familiar way of thinking of society as consisting of distinct individuals who interact and so create networks. Instead -- to put things one way – it’s networks all the way down. Society emerges from a teeming array of networks that overlap and intersect, that get knotted together or fray with use.
Franklin’s catalytic intervention in the American crisis of 1773 was as effective as it was by virtue of his ability to channel communication from one network to another. And it was effective because it was done quietly; he advanced the revolutionary process involving “a public interlinked and excited by expressions of dissent” without making himself known. “In a perhaps uncharacteristic move,” Castronovo says, “Franklin refuses to occupy the center [of public discussion], instead preferring to sit back in the shadows where, after all, the shadowy work of espionage gets done.”
But the state – however much it may use networks of its own – insists on ascribing public action to individuals possessing stable and legible identities. By 1774, the Privy Council knew about Franklin’s role in the matter and summoned him to a hearing in London, where he was denounced, in humiliating terms, for more than an hour.
Bradley Manning, of course, faces worse – while the coiner of that witticism about operating illegally and unconstitutionally has never endured the consequences of his actions. What does that imply for a Latourian theory of social ontology? I don’t know, but it surely demonstrates that not all networks are equal before the law.
Once it would have been possible to jump right into a discussion of Michael Gordin’s The Pseudoscience Wars: Immanuel Velikovsky and the Birth of the Modern Fringe (University of Chicago Press) with the reasonable assumption that readers would have at least a nodding acquaintance with the maverick psychoanalyst’s ideas.
But today -- as Gordin, a professor of history at Princeton University, notes -- few people under the age of 50 will recognize Velikovsky’s name, much less know of his theory of the traumatic impact of cosmic catastrophes on human history. It was a heated topic for discussion in the 1970s. I recall seeing a poster for a meeting of Chaos and Chronos, a student organization dedicated to Velikovskian matters that once had clubs on many U.S. college campuses. This was as late as 1980 or ’81. (Which only corroborates Gordin’s point, for I am approaching the half-century mark at an alarming speed.)
So, first, a lesson in now-dormant controversy.
Although he published several other books during his lifetime, plus a few more posthumously, Velikovsky presented his core argument in a volume called Worlds in Collision (1950). It was an attempt to formulate the key to all mythologies, or at least an explanation of some of the more striking stories and beliefs of antiquity. Drawing on sources both classical and obscure, he showed that cultures all over the world preserved narratives in which the world passed through incredible catastrophes: the earth shook, the heavens darkened, the sun stood still, floods wiped out society, fire or stones or both fell from the sky, and so on. The cultures that preserved the tales explained the events as a manifestation of God’s wrath at humanity, or as the consequence of gods’ behavior toward one another.
An orthodox Freudian, Velikovsky had no use for Jung’s nebulous ideas about archetypes in the collective unconscious. His theory was more concrete, if no less strange. The far-flung legends all made sense as distorted accounts of a series of astronomical anomalies beginning circa 1500 B.C.E., when (he argued) a huge mass of matter broke off the planet Jupiter and spun off into space. It passed dangerously close to Earth a couple of times before eventually settling into orbit as the planet we now know as Venus.
Its comet-like transit through the solar system generated a series of events, both in outer space and here below, that continued for the better part of a thousand years. Earth and proto-Venus came near enough to affect each other’s orbits, and that of Mars as well. Bewildered by the strange things happening in the sky, mankind endured terrestrial upheaval on an incredible scale -- tectonic disasters, weird weather, and shifts of the planet’s axis, for example.
Once, when proto-Venus came close to Earth, its atmosphere permeated our own long enough to precipitate a fluffy, snow-like substance made of hydrocarbons. And so it came to pass that the Israelites received the manna falling from heaven that the Lord did send to nourish them.
Well, it’s a theory, anyway. Harper’s magazine ran an article about Velikovsky’s book in advance of its publication. Other, less sober publications followed suit, presenting Worlds in Collision as demonstrating the literal (albeit distorted) truth of events recorded in the Bible. The response by scientists was less enthusiastic, to put it mildly. The word “crackpot” tended to come up. Velikovsky’s interdisciplinary erudition impressed them only as evidence that he was profoundly ignorant in a number of fields. The American people would be dumber for reading the book, and so on.
Upon seeing the early publicity for Velikovsky’s book, some scientists were so disgusted that they wrote to Velikovsky’s publisher, Macmillan, to complain. Worlds in Collision had been listed in the firm’s catalog as a scientific work. The letter-writers considered this disgraceful, and warned of the potential damage to the press’s reputation in the scientific community. After a few university science departments canceled their meetings with Macmillan’s textbook salesmen, the publisher became alarmed and sold its rights to the book to Doubleday.
Velikovsky was unhappy about this, but the deal was hard on Macmillan as well. At its peak, Worlds in Collision was selling a thousand copies a week, despite being a rather pricey hardback. The backstage furor soon died down, as did public interest in Velikovsky’s claims. By 1951, his ideas must have seemed as if they would have no more of a future than the other big fad of the previous years, L. Ron Hubbard’s Dianetics. (The scholarly literature seems to have overlooked this bit of synchronicity, though I’m sure there is a master’s thesis in it for somebody.)
The Worlds in Collision affair might have been forgotten entirely if not for a special issue of the journal American Behavioral Scientist devoted to the whole matter, published in 1963. The contributors were interested not so much in Velikovsky’s ideas as in how scientists had responded to them – with peremptory dismissals based on the Harper’s article, emotional rhetoric, and behind-the-scenes pressure on his publisher. It amounted to censorship and the repression of ideas – the assertion of scientific authority against a theory, despite the lack of serious engagement with the book itself.
Velikovsky and Albert Einstein both lived in Princeton, N.J., during the 1950s, and Velikovsky could quote remarks from the physicist’s letters and conversation suggesting that Worlds in Collision was at least interesting and worthy of a hearing. This was by no means the only thing Einstein had to say. Gordin quotes a number of occasions when Einstein described Velikovsky as “crazy” -- and clearly he regarded the man as a pest, at times. But it's not difficult to imagine why the most famous Jewish immigrant in postwar America might develop sympathetic feelings for someone of a comparable background who seemed to be facing unfair persecution. Besides, they could speak German together. That counted for a lot.
In any case, their friendship also made it easier to argue that Velikovsky just might be too far ahead of his time. In the mid-1960s, students at Princeton University formed a discussion group on Worlds in Collision, and Velikovsky himself spoke there – the first of what became many lectures to packed halls. Given the spirit of the time, having been rejected and anathematized by the scientific establishment was, in its own way, a credential. Among young people, Velikovsky enjoyed the special authority that comes when mention of one’s ideas is sufficient to annoy, very noticeably, one’s professors.
In 1972, the editors of Portland State University’s student magazine Pensée turned it into a forum defending and developing Velikovsky’s ideas. Papers were peer-reviewed, sort of: they were submitted to scholars and scientists for vetting, though most of the reviewers were sympathetic to Velikovsky (and, it sounds like, also contributors). Pensée’s first all-catastrophism issue clearly met a need. It had to be reprinted twice and sold 75,000 copies, after which the journal’s circulation settled down to a still-remarkable 10-20,000 copies per year.
And if any more evidence of his status as countercultural eminence were needed, the American Association for the Advancement of Science held a Velikovsky symposium at its annual conference in February 1974. The most famous participant was the astronomer Carl Sagan, who challenged the author’s supposed scientific evidence for the cosmic-catastrophe scenario at considerable length. Velikovsky and his supporters were angry that all of the invited speakers were critical of his work. But the organizers invited Velikovsky himself to respond, which he did, also at considerable length. The symposium may not have vindicated Velikovsky, but it gave him an unusually prominent place at the table.
He died in 1979, and five years later Henry Bauer (now an emeritus professor of chemistry and science studies at Virginia Tech) published Beyond Velikovsky: The History of a Public Controversy (University of Illinois Press). It was the first book-length analysis of the whole saga and, for a long time, the last. Most of the secondary literature on Velikovsky appearing since his death resembles the material about him published during his lifetime, in that it is polemical, for or against. The one published biography of Velikovsky that I know of, drawing on his own memoirs, is by his daughter.
So Gordin’s The Pseudoscience Wars belongs to the fairly small number of studies that do not simply pour the old controversies into new bottles. In that regard, the title is something of a fake-out. The author doesn’t treat Velikovsky’s catastrophism as a variety of pseudoscience. He is dubious about the concept, both because it is applied to too many phenomena that don’t share anything (what do astrology, cold fusion, biorhythms, and the study of how ancient astronauts shaped human evolution really have in common?) and because no one has established an epistemological “bright line” to distinguish science-proper from its pretenders. The word’s real significance lies in its use in shoring up the authority of those who use it. Calling something pseudoscience is more profoundly delegitimizing than calling it bad science.
Happily, the author spends only a little time on Sociology of Deviance 101-type labeling theory before getting down to the altogether more compelling labor of using archival material that was unavailable to Bauer 30 years ago -- especially the theorist’s personal papers, now in Princeton’s collection. Velikovsky was something of a packrat. If he ever parted with a document, it cannot have been willingly. The Pseudoscience Wars fills in the familiar outline of his career and controversy, as sketched above, with an abundance of new detail as well as insight into what Gordin calls “the development of Velikovskian auto-mythology.”
We learn, for one thing, that tales of a deliberate campaign of letter-writing and well-organized pressure on Macmillan through the threat of a boycott have little evidence to back them up. Accounts treating Velikovsky as an American Galileo typically suggest that his opponents wanted to prevent his ideas from receiving any hearing at all – that they were, in principle if not in method, book-burners.
But the existing documents suggest that the scientific community was chiefly troubled at seeing Worlds in Collision issued under the full authority of a major science and textbook publisher. A number of scientists responded by exerting pressure on Macmillan, but Gordin says the letters “were disorganized, uncoordinated, and threatened different things – some not to buy books, some not to referee manuscripts, others not to write them.”
Textbooks represented up to 70 percent of the publisher’s revenue, so professorial displeasure “had to be taken seriously. Macmillan could not afford to call it a bluff.” Once Worlds in Collision was sold to Doubleday (a trade publisher) scientists were content to mock the author’s grasp of geology, chemistry, celestial mechanics, etc. – or simply to ignore the book altogether.
Velikovsky converted the episode into a kind of moral capital, and Gordin demonstrates how shrewdly he and his admirers used it to build a scientific counter-establishment -- what one might otherwise call a full-scale pseudoscience. The analysis requires a number of detours, heading into territory where intellectual historians seldom venture – as in the sad tale of Donald Wesley Patten, author and publisher of The Biblical Flood and the Ice Epoch (1966), who ultimately proved too Velikovskian for the fundamentalists, and vice versa.
For a long time it seemed as if no one could go beyond Beyond Velikovsky. Gordin's book does not replace the earlier study, which remains an interesting and valuable book, and certainly worth the attention of anyone trying to decide whether to explore the terrain in more detail. But The Pseudoscience Wars puts the catastrophist’s ideas and aura into a wider and thicker context of ideas, people, and institutions -- a remarkable array, spanning from the Soviet genetics debates of the 1940s to today's fractious niche of (please accept my sincere apology for this next word) post-Velikovskyism.
Speaking of which, let me end with a prediction. While reading Gordin, it crossed my mind that the scenario of upheaval in Worlds in Collision might well speak to the sense of how precarious our little ball in space really is. Americans are not a thrifty people, but we do tend to recycle our cultural phenomena, and if there is one 20th-century idea that seems a likely candidate for 21st-century revival, it is probably catastrophism.
In his inaugural address, President Obama referred repeatedly to education – but exclusively to education in STEM disciplines, as if only those fields had a defensible public purpose. Sadly, this is no aberration: in December the White House issued a report entitled "Transformation and Opportunity: The Future of the U.S. Research Enterprise," which completely overlooked research in the humanities and social sciences, even in its brief history of the growth of research at American universities.
Such a narrow focus is surprising, since the president himself apparently consults historians (and probably other scholars), and it is counterproductive, whether in strict dollars-and-cents terms or broader ones. Some politicians have gone further, aggressively asserting that various humanities and social science disciplines are useless, and attempting to impose higher tuition on students who major in them. All of this makes it the more important that those who know better actively affirm the value of teaching and research beyond the STEM fields.
I will focus here on the case for history: it is what I know best, and since history straddles the line between humanities and social sciences, many arguments for its importance apply to various allied fields. One might loosely group these into three categories, ranging from the most social scientific to the most humanistic. The first applies to lessons drawn from circumstances relatively close to our own; the second to learning about times and places we know are quite different. The third applies to research showing that some currently accepted ideas are actually fairly novel, and that people not so different from us did without them; engaging the concepts they used instead may help us see additional possibilities in the world, whether for good or ill.
Examples of the first category underlie almost any sound public policy debate, as well as many private deliberations. Take, for example, the 2009 stimulus bill. By itself, no mathematical calculation could assess the relative accuracy of the more-or-less Keynesian models suggesting that the stimulus would help the economy and the "real business cycle" models, which predicted that it would be an expensive waste. The difference lay in historical research about how various modern economies had responded to historically specific policy initiatives. Other examples abound, though most are less well-known: closest to home in this regard would be evaluating options for STEM investment in light of the vast literature on what has given rise to specific clusters of innovation in the past, and which innovations proved most beneficial. One would also expect development efforts to gain from examining research on past relationships among, say, education, urbanization, birthrates, and investment.
The benefits of the second category – research that illuminates differences relevant to policy decisions – are equally abundant, emerging with special clarity in what we might call "area studies" knowledge, an enormous part of the growth of U.S. research universities after WWII. Surely we could have saved lives and money had policy-makers known more about religious differences within Iraqi society, the political and social history of Afghanistan, or class relations and popular nationalism in Vietnam before military interventions in those places. The same, I would argue, goes for using research into the evolution of Chinese notions of ethnicity, nationality, race, and geopolitics to understand likely governmental and popular reactions to possible American policies on Tibet, trade, the Diaoyu/Senkaku Islands, and so on.
Perhaps less obvious, but equally important, is the usefulness of research that shows that many ideas we may take to be "natural," or at least of very long standing, are actually relatively new. Some of these insights may be "just" a contribution to increased self-understanding, but others bear directly on public issues. Urgent debates over how fixed the concept of "marriage" has been come first to mind, but there are many more actual and potential examples. Recognizing that the term "ethnic group" is barely 75 years old reminds us how mutable are our understandings of the basis and implications of human groupings; that "gross national product" is of roughly the same vintage suggests that maximizing that particular measurement is not inevitably the paramount goal of economic policy.
It hardly seems a stretch to think that a world facing our current challenges might benefit from awareness of other ways that people have thought about the relationship of work, citizenship, adult status, "independence" and dignity, or about consumption, economic growth, leisure and the nature of progress. Or to take some narrower examples, consider the implications of learning how relatively recently life insurance went from seeming like a morally dubious gamble on death to a taken-for-granted tool for managing risk. Or that, while (as Thomas Ricks noted in a recent Atlantic) almost no U.S. generals were removed from their commands for poor performance during Vietnam, Afghanistan or Iraq, many were so removed during World War II – suggesting that the recent situation does not represent an inevitable feature of government, much less of hierarchy generally. Historical knowledge of this kind does not provide lessons as straightforward as “deficit spending can work,” but it can add significantly to our understandings of what is possible, for better or worse, and how things may become, or cease to be, unthinkable.
Research that produces these results, both testing earlier certainties and responding to new questions, thus seems a useful, even necessary complement to research in the STEM fields. Fortunately, most historical research is also relatively cheap, but it does not thrive on complete neglect.
Kenneth Pomeranz is University Professor of History at the University of Chicago and president of the American Historical Association. The views expressed here are his alone.
Submitted by Anne Hyde on December 21, 2012 - 3:00am
Giants can move. So can venerable, cautious scholarly organizations like the American Historical Association. In a recent New York Times op-ed, Kevin Carey of the New America Foundation asked "Who Will Hold Colleges Accountable?" As a professor at Colorado College, and faculty chair of the AHA’s Tuning Project, I can answer: we will. At a moment when college education, and the value it provides students, their families, and American society in general, seems continuously under attack, the American Historical Association has been quietly helping its members define and promote the value of history. Carey’s piece, pointing out the outdated notion of credit hours that grant students "credit" and eventually degrees for the act of sitting in chairs or staring at screens, thoughtfully calls for scholarly societies to "define and update what it means to be proficient in a field."
The AHA is developing just such a set of definitions. As a group of professional teachers and scholars of history, we do have standards and expectations for what it means to learn to think historically. We should be able to explain what college students who take history courses and major in history have gained from their effort. This might be risky because scholarly organizations generally avoid telling people what they should know, teach, or research in a given discipline. But with the help of a grant from the Lumina Foundation and 70 history departments and programs, the AHA Tuning Project is moving toward a "discipline core."
Now a "discipline core" is not your grandmother’s set of facts that all history students should know – the dreaded lists of dates, czars, emperors, wars, presidents or their wives – but a set of skills and habits of mind that college-educated students should have. And, it turns out, historians can agree about what people with a history degree should be able to do. The 14,000 members of the AHA don’t and won’t ever agree about what facts students should know, but we can agree about the importance of evidence in generating interpretation and the imperative of developing a rich context around those facts. For example, someone with a history major probably could have saved the Gap some money and bad PR by explaining the historical context of "Manifest Destiny" and why that phrase might not be an ideal T-shirt slogan.
History students need to be able to find and sift information, read with a critical eye, assess evidence from the past, write with precision, and be able to tell stories that analyze and narrate the past effectively. We can also agree about a variety of ways students can demonstrate such skills. None of these can be assessed with fill-in-the-bubble tests or any national standardized test; all require meaningful assignments, student responses, and attentive faculty feedback. Take number 8 from the AHA’s discipline core: "Explore multiple historical and theoretical viewpoints that provide perspective on the past." Students could demonstrate this very simply by describing, in written or oral form, a range of descriptions of a specific event. To use the Manifest Destiny example again, a student could describe how this concept emerged in the 1840s and how people in Mexico, in Washington, and in Texas or California might have perceived it as imperial ambition or dangerous racism. Understanding that these different descriptions represent different points of view is great practice in perspective-taking, a tremendously important skill.
The collaborative process that is central to "tuning" means that this set of professional standards will not be prescriptive, but rather will provide reference points to guide history departments and history teachers. Each college and university will read and use a core of professional standards to design courses and degrees that reflect the varied missions and contexts of educational institutions. Having core values and standards that define history as a discipline and the value of historical thinking can and will build programs that do far more than require students to be present for a set of credit-bearing hours.
To learn these skills, students have to practice them -- a fact that will immediately ratchet up what goes on in and out of classrooms. Time in a classroom is not, as some skeptics suggest, a waste of money and effort, but essential to real learning. Students have to speak, write, and communicate in a variety of media and to have their work assessed carefully. They need places to practice both skepticism and empathy to acquire the habits of mind required of a history student. These abilities are essential to having thoughtful leaders and citizens, and college graduates with value in the workplace and the community – the central promise of a college education.
Scholarly societies and disciplinary organizations can and should develop professional standards that insist on effective practices at colleges and universities. The AHA is betting that professional historians want to be held accountable for what their students should know and be able to do.
Anne Hyde is professor of history at Colorado College and faculty chair of the American Historical Association Tuning Project.
Fascism is alive and well in the United States, at least as an epithet. The Third Reich provides the one set of historical analogies everybody will recognize. No more damning evidence about the state of American cultural literacy can be given.
Regardless of who is in office, protesters will wave photographs of the president with the nostril-wide mustache inked in. And whenever a city council or state legislature considers a ban on public smoking, the most unlikely people start complaining about the Gestapo. (First they came for the snuff-dippers, and I did not speak out, for I was not a snuff-dipper….) That seems indicative less of ignorance than of a low threshold for frustration. If there is one thing we can all agree on about totalitarianism, it’s the inconvenience.
The tendency has grown worse over the past dozen years. That, like everything else, can probably be blamed on the Internet, though no doubt some of the responsibility belongs to the History Channel, where Poland is invaded at least twice a day. After a while, fatigue sets in. But then you read about something like the Golden Dawn in Greece -- a rapidly growing party using a streamlined swastika as its emblem – and the word “fascist” ceases to be a free-floating signifier of vituperation. It begins to mean something again. But what?
Fascism: Journal of Comparative Fascist Studies published its first, semiannual issue in October. While the journal is not particularly focused on recent developments in the streets, they echo in it even so. The tendency I’ve just complained about – the stretching of a concept so thin that it seems to have almost no substance -- has its parallel in the scholarship on fascism. And so does its return to a more substantial form.
St. Augustine said he knew what time was until someone asked him to explain it. Then the trouble started. A similarly perplexed feeling comes over someone reading historiographical efforts to get a handle on fascism.
It’s easy enough to start out with definition by ostension -- that is, by pointing to the movements and regimes of Mussolini and of Hitler. And all the more so, given that the Italian leader not only coined the term fascism but wrote an encyclopedia entry on it, or at least signed one. But for all the inspiration Hitler and his early supporters took from Mussolini’s rise to power, Nazi doctrine grew out of its own distinct set of German influences. Racism -- and in particular anti-Semitism of a special variety, bolstered by pseudoscience -- played a role in Hitler’s worldview strikingly absent from Mussolini’s doctrine.
And that doctrine itself had a paradoxical aspect. On the one hand, it was, so to speak, nationalism on steroids -- deeply hostile to internationalism, especially of the Marxist variety. (In the late '10s and early '20s, Germany and Italy alike experienced long revolutionary crises, with left-wing parties making serious bids for power.) At the same time, fascist organizations sprang up all over Europe and in North and South America, with a few also appearing in Asia. Some adherents thought of fascism as a “universal” movement: a new stage of society, of which the Italians, and later the Germans, were setting the example. In 1934, fascist delegates gathered in Switzerland for a world congress, although the effort soon foundered on ideological differences.
So even the fascists themselves couldn’t agree on how to understand their movement. Nor could historians and political scientists studying them after the defeat of the “classical” fascist regimes. A familiar dichotomy between “lumpers” and “splitters” played itself out, with the former emphasizing common elements among the fascist organizations (authoritarianism, nationalism, leader-worship, a tendency to wear uniforms) and interpreting the movement as the product of larger forces (social anomie, economic crisis, resistance to modernization, etc.).
A good précis of the splitters’ response to lumper theorizing appeared in an article by Gilbert Allardyce in The American Historical Review in 1979. Focusing just on Nazism and Italian fascism, he stressed that “one arose in the most advanced industrial nation in Western Europe; the other, in a country still largely underdeveloped. Getting both into any uniform theory is hard enough, but getting both into the same stage of modernization is impossible. Interpretations that make sense in the case of one regime often make no sense in the case of the other.”
At least one historian took the next logical step. Italy was fascist under Mussolini. Fascism involved the dictatorial push of a largely preindustrial society into the age of mechanical reproduction. That wasn’t necessary in Germany. Therefore, Hitler was not a fascist. Likely it would be possible to disprove this syllogism with a Venn diagram or two; but in any event, it feels wrong somehow.
Much of the academic literature on the Italian and German regimes – and just about all of the popular history – goes about its business without getting too bogged down in the “generic fascism” problem. The devil is truly in the details. But the new journal Fascism takes the possibility of a generic concept of the movement as its point of departure, and in ways that seem worth watching.
The field of “comparative fascist studies” as pursued in the journal takes its bearings from Roger Griffin’s understanding of fascism as an ideology defined by a core of “palingenetic ultranationalism,” which manifests itself in specific kinds of populist mobilization and charismatic leadership. Griffin, a professor of modern history at Oxford Brookes University, in Britain, first presented this argument in The Nature of Fascism (1991).
Now, before saying another word, I want to point out that calling fascism “palingenetic” is not in any way meant as a slur on the beloved former governor of Alaska, vice presidential candidate, and reality television star. Palingenesis means “regeneration, rebirth, revival, resuscitation,” according to the Oxford English Dictionary. “In philosophy and natural science, formerly applied spec[ifically] to the (supposed) regeneration of living organisms from ashes or putrefying matter, to the preformation theory of Charles Bonnet (1720–93), and to the persistence of the soul (metempsychosis) or (in Schopenhauer) of the will from one generation to another.” Let’s be perfectly clear about that. I don’t want any trouble.
In Griffin’s usage, the term carries overtones of both regeneration-from-putrefaction and a sort of reincarnation. A fascist movement seeks a rebirth of the nation’s soul by overcoming its degeneration. It promises not merely a return to “the good old days” but an extreme, and usually violent, new beginning. The national revival comes through “a ‘populist’ drive towards mobilizing the energies of all those considered authentic members of the national community,” with the charismatic aura of the leader and “the pervasive use of theatrical and ritual elements in politics.” Xenophobia and genocidal racism are typical elements but not, as such, absolutely necessary. The main thing is that there be “groups identified with physical and moral decadence,” whose ejection from the nation would be a step towards its rebirth.
It is not so much a theory of fascism as a decent set of fingerprints. Griffin’s description doesn’t explain the movement’s origin or viability in any given country, but it identifies what the various movements shared. That is significant on more than just typological grounds. The program of “comparative fascist studies” as it emerges from Griffin’s keynote essay in the first issue of Fascism, confirmed by the articles following it, includes research into how organizations in various countries influenced one another in the years between Il Duce’s march on Rome in 1922 and the Führer’s suicide in 1945 -- and since then, as well. For while the effort to create a “universal fascist” movement collapsed during the Great Depression, the project itself carries on. After a dozen years and more of incredibly trivializing and dumb references to fascism, the rolling economic crisis may yet give it some bite.
A final note. I picked up Fascism at the table of its publisher, Brill, during a conference last month. But it’s also available online, in full, as an open-access journal. The first issue is now available; the next comes out in April.