Five years ago, I took a long walk in Ireland with my husband, and when we got back, there were reports of several research scandals in which academic reputations were ruined by what appeared to be data falsification or at least substantial sloppiness. I wrote about it -- claiming, as I often do, that enforced data sharing would at least ensure that researchers tidied up their documentation.
A few weeks ago, I took another long walk in Ireland with my husband, and this time the news was filled with Ireland’s public referendum legalizing same-sex marriage and another research scandal, this one involving research about the possibility that face-to-face voter canvassing by persons identified as gay can change opinions about the rights of the LGBT community. I guess I need to be more careful about my travel plans, at least when they involve Ireland.
This time I am less sanguine about the idea that simply enforcing data sharing can improve the research process enough that sloppiness and outright fraud will be well policed. The most recent story involves a young Ph.D. student, Michael J. LaCour, who made up facts about the research process -- such as who funded it, how incentives were paid out, whether the embedded experiments were registered with a centralized registry and perhaps even which survey firm conducted the study.
The scandal began, as they often do, when someone wanted to replicate the research, and the researcher did not share all of the data. The student, in fact, despite the very sound advice of his senior co-author, had not deposited all the data with my former employer, the Inter-University Consortium for Political and Social Research (ICPSR), so that the full data file could be both found and shared. Once thwarted and confused, of course, the research team that wanted to replicate his research started pulling at the loose thread and unraveled a whole skein of lies and exaggerations. I bet Michael LaCour now profoundly wishes he had paid closer and more careful heed to the advice of his mentor -- because ICPSR could have coaxed him into the truth simply through its routine scrutiny and documentation requirements; instead he chose a self-archiving method that allowed him to upload whatever he wished.
The story of LaCour should bolster my cherished premise that full data sharing will reduce the amount of malfeasance, right? Is it possible to still be naïve in your early 50s? I am afraid so. After another five years of being head down and hip deep in data collection and file preparation, I am willing to admit that either encouraging or forcing data sharing among researchers just is not enough. These scandals result from deeper problems with our training and review of the research process. The scandals almost always erupt when someone starts to question the data used to answer a substantive question -- and then the answer to the substantive question is viewed with suspicion. The inability to replicate, or even get close, opens the door to all types of scrutiny. Mishandling data or data collection is like Al Capone not paying his taxes -- it provides an entrée for our academic Eliot Ness to bring home the investigation.
My claims about the inadequacy of research training and the peer-review process will likely raise howls of protest -- what about all of the graduate-level methods courses, the Institutional Review Board (IRB) and the peer-review process required for grants and publications? Yes, all of these checks and balances, in principle, ensure ethical, high-quality research. But they do not, in fact, in any of the disciplines I am familiar with. Graduate-level methods classes in the social sciences -- and I have taught more than a few -- carry a heavy burden, requiring an omnibus survey of data collection methods, research ethics and often a smattering of statistical methods. The section on research ethics usually focuses only on how to deal with human subjects, not on how to handle the data we collect from them. Even a two-course sequence will never get you much beyond what I always think of as the research equivalent of “happily ever after” data collection. No one tells you how to stay married to your Prince Charming or how to adequately and ethically prepare data files for sharing.
What of the IRB and peer review -- don’t they represent the bulwark against sloppiness and malfeasance? Not really, as neither has the explicit purpose of policing the research process generally. The purpose of the IRB is, in fact, the protection of human subjects -- that is, ensuring that all data collection is ethical. This may or may not ensure that the data collection is well documented, accurate and scrupulously transparent: protecting human subjects requires looking carefully at informed consent, for instance, but not necessarily at data documentation.
Unfortunately, peer review is even more narrowly focused, except when a reviewer pulls hard at a methodological thread. Journal articles and grant applications never allow for the careful description of the methods and procedures because of substantial space constraints. In the past, co-authors and fellow review panel members have rightly scolded me for my overweening and tedious attention to the details of the research process. Peer review focuses primarily on substance and research quality because it must -- we are meant to trust that our colleagues are well trained, careful, transparent and accurate, without a lot of detail about how they execute these traits. I am not entirely sure that trust is warranted -- thus, peer review also fails to ensure that the research process is as it should be.
On our walk in Ireland, my husband and I climbed Croagh Patrick, the mountain on top of which St. Patrick spent 40 days fasting in 441 AD. It is a religious pilgrimage for many Irish Catholics -- for us, it was the challenge of going straight uphill on loose slate for two hours. Croagh Patrick is famous for its miserable weather, and our walk was no exception -- 50-mile-an-hour winds, driving rain and dense fog. As my husband is fond of saying, St. Patrick’s religious visions on the top of the mountain can likely be attributed to hypothermia and the fact that he could not find his way down.
As I crawled my way up the mountain of loose, wet stone, in addition to cursing my husband, who is descended from a long line of spirited Irish men and women, I thought about the value of careful and thorough preparation. My husband, ever the Eagle Scout, always ensures that we are thoroughly prepared and carefully equipped for every eventuality -- thus, I only got soaked to the skin in the last 20 minutes instead of the first two hours, and we made it both up and down the mountain despite being on the far side of 50 years old. It strikes me that the research process is indeed like climbing Croagh Patrick -- preparation and careful attention to detail are an absolute must. The research community must find better ways to nurture and encourage these skills rather than spend time picking over the bones of those who have fallen off the trail.
Felicia B. LeClere is a senior fellow with NORC at the University of Chicago, where she works as research coordinator on multiple projects. She has 20 years of experience in survey design and practice, with particular interest in data dissemination and the support of scientific research through the development of scientific infrastructure.
Only satire can look certain horrible realities in the eye, as The Onion did with its article from last year about a lone-wolf mass shooting of random strangers. Its headline cut to the quick: “‘No Way To Prevent This,’ Says Only Nation Where This Regularly Happens.”
It’s the real American exceptionalism. Rampage shootings do take place in other countries (the 1996 Dunblane school massacre in Scotland, for example), but rarely. They remain distinct events in the public memory, rather than blurring together. In the United States the trauma is repetitive and frequent; only the location and the number of victims seem to change.
With Charleston we have the additional grotesquerie of a presidential candidate calling Dylann Roof’s extremely deliberate act an “accident” while the director of the Federal Bureau of Investigation made a point of denying that it was terrorism. (The shooter was an avowed white supremacist who attacked an African-American church and took care to leave one survivor to tell the tale. By no amount of semantic weaselry can it be described as anything but “[an] act of violence done or threaten[ed] in order to try to influence a public body or citizenry,” to quote the director's own definition of terrorism.) But American rampage shootings do not always express an ideological agenda, or even a motive intelligible to anyone but the gunman. The meaninglessness of the violence, combined with its regularity, is numbing. So with time our scars grow callused, at least until the next spree rips them open again.
A few years ago Christopher Phelps, an intellectual historian who happens to be my closest friend, moved with his family to England, where he is now a senior lecturer (i.e., an associate professor) in American and Canadian studies at the University of Nottingham. At some point the British media began turning to him for commentary on life in these United States. “I tend to be asked on radio shows when there's a need for American expertise -- and implicitly an American accent, which adds an air of authenticity,” he wrote in an email when I asked him about it.
Among the topics he’s been asked about are “the re-election of Obama, the anniversary of JFK's death, and even what comprises the East Wing of the White House, since one only ever hears about the West Wing.” Of late, America’s everyday mayhem keeps coming up. In 2013 he discussed the Trayvon Martin case. Last August, it was the girl whose training in automatic weapons on an Arizona firing range ended when she lost control and sprayed her instructor with bullets. Phelps appeared on a nationally broadcast talk show hosted by Nick Ferrari, which seems like the perfect name for a bigger-than-life radio personality.
Ferrari wasted no time: “What is it with Americans and guns?” he asked. A fair question, though exceedingly blunt.
“I should have anticipated that, I suppose,” Phelps says now, “but I froze like the proverbial deer in the headlights, stuttering away.” Since then, unfortunately, he has gained experience in answering variations of the question. “The producers need people to do it,” he explains, “the university media team work hard to set up the gigs, and you feel as an American you should step in a bit to help modulate the conversation, but it sweeps away my life for a day or two when I have other plans and some psychopath shoots up America.” (The BBC program for which he was interviewed following the Charleston shootings can be found here.)
“It is still depressing,” Phelps continues, “in fact draining, to be put in the position of explaining my people through this kind of event, but reflection has prompted some better ways of answering.”
A one-sentence question about the American pattern of free-range violence takes many more to address at all concretely. Phelps's assessment bears quoting at length:
“While I'm as drawn to generalities as anyone -- I've always thought there was something to H. Rap Brown's declaration that ‘violence is as American as cherry pie’ -- it’s important to realize that most American households do not possess guns; only a third do. So gun owners do not comprise all Americans but a particular demographic, one more white, male and conservative than the general population.
“The shooters in mass killings, likewise, tend to be white men. So we need to explain this sociologically. My shorthand is that white men have lost a supreme status of power and privilege, given a post-’60s culture claiming gender and racial equality as ideals, yet are still socialized in ways that encourage aggressiveness.
“Of course, that mix wouldn't be so dangerous if it weren't easy to amass an arsenal of submachine guns, in effect, to mow people down. Why do restrictions that polls say Americans consider reasonable always get blocked politically, if gun-owning households are a minority? For one thing, the gun manufacturing corporations subsidize a powerful lobby that doubles as a populist association of gun owners. That, combined with a fragmented federalist system of government, a strongly individualist culture and the centrality of a Constitution that seems to inscribe ‘the right to bear arms’ as a sacred right, makes reform very difficult in the United States compared to similarly positioned societies. This suggests the problem is less cultural than political.”
Following the massacre of 26 people, most of them children, at Sandy Hook Elementary School in Connecticut in 2012, National Rifle Association executive vice president Wayne LaPierre waited several days before issuing a statement. Whether he meant to let a decent interval pass or just needed time to work up the nerve, his response was to blame our culture of violence on… our culture of violence.
He condemned the American entertainment industry for its constant output of “an ever more toxic mix of reckless behavior and criminal cruelty” in the form of video games, slasher movies and so forth. The American child is exposed to “16,000 murders and 200,000 acts of violence by the time he or she reaches the ripe old age of 18” -- encouraging, if not spontaneously generating, LaPierre said, a veritable army of criminals and insane people, just waiting for unarmed victims to cross their paths. “The only way to stop a monster from killing our kids,” he said, “is to be personally involved and invested in a plan of absolute protection.”
The speech was a marvel of chutzpah and incoherence. But to give him credit, LaPierre’s call for “a plan of absolute protection” had a sort of deluded brilliance to it -- revealing a strain of magical thinking worthy of… well, when you get right down to it, a violent video game. Even in a society full of people presumably eager to act out their favorite scenes in Natural Born Killers and American Psycho, having enough firepower will, supposedly, give you “absolute protection.”
On many points, Firmin DeBrabander’s book Do Guns Make Us Free? Democracy and the Armed Society (Yale University Press) converges with the analysis quoted earlier from my discussion with Christopher Phelps. But DeBrabander, an associate professor of philosophy at Maryland Institute College of Art, places special emphasis on the corrupting effect of treating the Second Amendment as the basis for “absolute protection” of civil liberties.
The vision of democracy as something that grows out of the barrel of a gun (or better yet, a stockpile of guns, backed up with a ready supply of armor-piercing bullets) involves an incredibly impoverished understanding of freedom. And it is fed by a paranoid susceptibility to “unmanageable levels of fear,” DeBrabander writes, and “irrationalities that ripple through society.”
He turns to the ancient Stoic philosophers for a more robust and mature understanding of freedom. It is, he writes, “a state of mental resolve, not armed resolve. Coexisting with pervasive threats, Seneca would say, is the human condition. The person who lives with no proximate dangers is the exception. And it’s no sign of freedom to live always at the ready, worried and trigger-happy, against potential threats; this is the opposite of freedom.” It is, on the contrary, “a form of servitude,” and can only encourage tyranny by demagogues.
“Freedom,” DeBrabander goes on to say, “resides in the ability to live and thrive in spite of the dangers that attend our necessarily tenuous social and political existence -- dangers that are less fearsome and debilitating to the extent that we understand and acknowledge them.” It is only one of many good points the author makes. (See also his recent essay “Campus Carry vs. Faculty Rights” for Inside Higher Ed.) And the certainty that another mass shooting will take place somewhere in the United States before much longer means we need all the stoicism we can get.
Among the theories about why women are less likely than men to pursue mathematics and science degrees is that women underestimate their capabilities in mathematics. A new study, led by Shane Bench of Washington State University and appearing in the journal Sex Roles, offers a twist on that theory. Male and female undergraduates were given a mathematics test and asked to estimate how well they did. The women were fairly accurate in their predictions. But the men generally predicted better performance than they achieved. So the gender gap in mathematics and science enrollments may be based on male overconfidence, the authors suggest, not just on female lack of confidence.
If you can remember the 1960s, the old quip goes, you weren’t really part of them. By that standard, the most authentic participants ended up as what used to be called “acid casualties”: those who took spiritual guidance from Timothy Leary’s injunction to “turn on, tune in and drop out” and ended up stranded in some psychedelic heaven or hell. Not that they’ve forgotten everything, of course. But the memories aren’t linear, nor are they necessarily limited to the speaker’s current incarnation on this particular planet.
Fortunately Stephen Siff can draw on a more stable and reliable stratum of cultural memory in Acid Hype: American News Media and the Psychedelic Experience (University of Illinois Press). At the same time, communicating about the world as experienced through LSD or magic mushrooms was ultimately as difficult for a sober newspaper reporter, magazine editor or video documentarian as conversation tends to be for someone whose mind has been completely blown. The author, an assistant professor of journalism at Miami University in Ohio, is never less than shrewd and readable in his assessment of how various news media differed in method and attitude when covering the psychedelic beat. The slow and steady buildup of hype (a word Siff uses in a precise sense) precipitated an early phase of the culture wars -- sometimes in ways that partisans now might not expect.
Papers on experimentation with LSD were published in American medical journals as early as 1950, and reports on its effects from newspaper wire services began tickling the public interest by 1954. The following year, mass-circulation magazines were devoting articles to LSD research, followed in short order by a syndicated TV show’s broadcast of film footage showing someone under the influence. The program, Confidential File, sounds moderately sleazy (the episode in question was described as featuring “an insane man in a sensual trance”) but much of the early coverage was perfectly respectable, treating LSD as a potential source of insight into schizophrenia, or a potential expressway to the unconscious for psychoanalysts.
But the difference between rank sensationalism and science-boosting optimism may count for less, in Siff’s interpretation, than how sharply coverage of LSD broke with prevailing media trends that began coming into force in the 1920s.
After the First World War, with wounded soldiers coming back with a morphine habit, newspapers carried on panic-stricken anti-drug crusades (“The diligent dealer in deadly drugs is at your door!”) and any publication encouraging recreational drug use, or treating it as a fact of life, was sure to fall before J. Edgar Hoover’s watchful eye. Early movie audiences enjoyed the comic antics of Douglas Fairbanks Sr.’s detective character Coke Ennyday (always on the case, syringe at the ready), or in a more serious mood they could go to For His Son, D. W. Griffith’s touching story of a man’s addiction to Dopokoke, the cocaine-fueled soft drink that made his father rich. But by the time the talkies came around, the Motion Picture Production Code categorically prohibited any depiction of drug use or trafficking, even as a criminal enterprise. Siff notes that in the 20 years following the code’s establishment in 1930, “not a single major Hollywood film dealing with drug use was distributed to the public.”
Not that depictions of substance abuse were a forbidden fruit the public was craving, exactly. But the relative openness of the mid-1950s (emphasis on “relative”) allowed editors to risk publishing stories on what was, after all, serious research on a potential new wonder drug. Siff points out that general-assignment newspaper reporters attending a scientific or medical conference, unable to tell what sessions were worth covering, could feel reasonably confident that a title mentioning LSD would probably yield a story.
At the same time, writers for major newsmagazines and opinion journals were following the lead of Aldous Huxley, the novelist and late-life religious searcher, who wrote about mystical experiences he had while taking mescaline. In 1955, when the editors of Life magazine decided to commission a feature on hallucinogenic mushrooms, they turned to Wall Street banker and amateur mycologist R. Gordon Wasson. He traveled to Mexico and became, in his own words, one of “the first white men in recorded history to eat the divine mushroom” -- and if not, then surely the first to give an eyewitness report on “the archetypes, the Platonic ideals, that underlie the imperfect images of everyday life” in the pages of a major newsweekly.
Suffice it to say that by the time Timothy Leary and associates come on the scene (wandering around Harvard University in the early 1960s, with continuously dilated pupils and only the thinnest pretense of scientific research) it is rather late in Siff’s narrative. And Leary’s legendary status as psychedelic shaman/guru/huckster seems much diminished by contrast with the less exhibitionistic advocacy of LSD by Henry and Clare Boothe Luce. Beatniks and nonconformists of any type were mocked regularly in the pages of Time or Life, but the Luce publications were for many years very enthusiastic about the potential benefits of LSD. The power couple tripped frequently, and hard. (Some years ago, when I helped organize Mrs. Luce’s papers at the Library of Congress, the LSD notes were a confidence not to be breached, but now the experiments are a matter of public record.)
The hippies, in effect, seem like a late and entirely unintentional byproduct of industrial-strength hype. “During an episode of media hype,” Siff writes, “news coverage feeds on itself, as different news outlets follow and expand on one another’s stories, reacting among themselves and to real-world developments. Influence seems to flow from the larger news organizations to smaller ones, as editors at smaller or more marginal media operations look toward the decisions made by major outlets for ideas and confirmation of their own judgment.”
That is the process, broadly conceived. In Acid Hype, Siff charts the details -- especially how the feedback bounced around between news organizations, not just of different sizes, but with different journalistic cultures. Newspaper coverage initially stuck to the major talking points of LSD researchers; it tended to stress the potential wonder-drug angle, even when the evidence for it was weak. Major magazines wanted to cover the phenomenon in greater depth -- among other things, with firsthand reports on the psychedelic universe by people who’d gone there on assignment. Meanwhile, the art directors tried to figure out how to convey far-out experiences through imagery and layout -- as, in time, did TV producers. (Especially on Dragnet, if memory serves.)
Some magazine editors seem to have been put off by the religious undercurrents of psychedelic discourse. Siff exhibits a passage in a review that quotes Huxley’s The Doors of Perception but carefully removes any biblical or mystical references. But someone like Leary, who proselytized about psychedelic revolution, was eminently quotable -- plus he looked good on TV because (per the advice of Marshall McLuhan) he smiled constantly.
The same hype-induction processes that made hallucinogens seem like the next step toward improving the American way of life (or, conversely, the escape route for an alternative to it) also went into effect when the tide turned: just as dubious claims about LSD’s healing properties were reported without question (it’ll cure autism!), so were horror stories about side effects (it’ll make you stare at the sun until you go blind!).
The reaction seems to have been much faster and more intense than the gradual pro-psychedelic buildup. Siff ends his account of the period in 1969 -- oddly enough, without ever mentioning the figure who emerged into public view that year as the embodiment of LSD's presumed demons: Charles Manson. You didn't hear much about the drug's spiritual benefits after Charlie began explaining them. That was probably for the best.