“The Top 10 Retractions of 2014” appeared on the website of the life-sciences magazine The Scientist a couple of weeks back, garnering a little attention (mostly of the social-media, “Hey, look at this!” variety) but without making much of an impact. Comments were few and far between.
That seems unfortunate, given the stakes. Physicians, it has been said, bury their mistakes -- a grim joke that very nearly applies to some of the researchers whose work made the Hall of Shame. But most of the inductees are charged with committing malfeasance rather than error. (A number of them were covered here at Inside Higher Ed over the past year.)
The most egregious case? That would have to be the paper that the journal Retrovirology retracted, by a researcher who "spiked rabbit blood samples with human blood to make it look as though his HIV vaccine was working.” The runner-up is probably the situation that forced the Journal of Vibration and Control to retract 60 articles, which had been accepted for publication after receiving fraudulent “peer review” by scientists who manipulated the online submission system using up to 130 fake email accounts.
The good news is that the paper reporting on HIV vaccine work that had been tampered with seems not to have made much of an impression: it hadn’t been cited by other researchers. As for the phony peer-review gang, its leader was the identical twin brother of Taiwan’s minister of education, whose name appeared as a coauthor of some of the papers. Not long after the scandal broke, the minister resigned, while insisting that he had no idea of what his evil twin had been up to. (And you thought your family gatherings were awkward.)
The annual list (first compiled in 2013) is the work of the good people at Retraction Watch, who monitor and investigate the embarrassed announcements that publishers would rather not have to issue. Most of the stories they cover are from the sciences (chiefly natural, some social) although there is the occasional case from the humanities, where the main ground for retraction seems to be plagiarism. Or rather, problems involving "mistaken punctuation" and "misreferencing," since euphemism prevails. (One author charged with plagiarism admitted to "misconduct in text," which is my new favorite expression.)
Besides fraudulent labwork and efforts to game the peer-review system, RW covers breaches of ethical norms in research -- the notorious "Facebook mood experiment" made the list for 2014 -- while also keeping an eye on predators lurking in the shadows around scholarly publishing. While unscrupulous academic publishers deserve all the bad press they get, they are often so brazen that it's hard to think of them as a menace. Consider the most widely noticed example in recent months: the story of a couple of computer scientists who wrote a "paper" consisting of an obscene seven-word sentence, repeated a few hundred times and incorporated into graphs and flowcharts. They submitted it to one of the sketchier journals in their field, where it appeared once the authors paid a fee. After all, the anonymous reviewer considered the paper "excellent.”
Which, in its own way, it was, though the paper is not on the top 10 list. Using the carelessness and greed of worthless journals to embarrass them may be an entertaining way to blow a few hundred bucks, but much less amusing is the thought that there must be academic libraries paying for subscriptions to said journals.
One of the year's top 10 items involved a French computer scientist who, Retraction Watch says, “catalogued computer-generated papers that made it into more than 30 published conference proceedings between 2008 and 2013. Sixteen appeared in publications by Springer, and more than 100 were published by the Institute of Electrical and Electronic Engineers (IEEE).” The fact that the papers were computer-generated does not mean they were gibberish, since there are programs that can perform a database search and "write" a credible literature review. Still, that seems like streamlining the production of knowledge just a little too far.
The Retraction Watch site is littered with the wreckage of numerous careers, but it serves an important purpose apart from the dubious pleasures of Schadenfreude. In a recent column I wrote about Ben Goldacre's book I Think You'll Find It's a Bit More Complicated Than That (Fourth Estate), which includes a shrewd assessment of the strengths and weaknesses of the peer-review system that seems germane:
"[Peer-review] is often represented as some kind of policing system for truth, but in reality some dreadful nonsense gets published, and mercifully so: shaky material of some small value can be published into the buyer-beware professional literature of academic science; then the academic readers, who are trained to appraise critically a scientific case, can make their own judgments. And it is this second stage of review by your peers -- after publication -- that is so important in science. If there are flaws in your case, responses can be written, as letters to the academic journal, or even whole new papers. If there is merit in your work, then new ideas and research will be triggered. That is the real process of science."
I am a five-foot-tall female physicist. You hear a lot about the challenges facing women in physics. These are real, and the percentage of physics bachelor’s degrees earned by women has stagnated at just over 20 percent for more than a decade. Being a woman in physics can be hard, but being a short physicist seems even harder to me. Why don’t we ever talk about the challenges of being short?
Gender is the most prominent feature that we use to categorize ourselves, beginning from the first question asked after we are born: Is it a boy or a girl? The hypothesis that women are less intelligent or less cognitively capable of certain tasks has been around for a long time. For a while it was attributed to brain size, then the Y chromosome, then hormones circulating in the body, and now prenatal hormone exposure.
For some reason, our society wants to believe that women aren’t as smart as men. When a woman feels out of place in a male-dominated environment, she is understandably tempted to attribute it to her gender -- and she may be right.
But when I find myself feeling out of place and not quite knowing why, I tend to blame it on my height. Whether on the athletic field, in an elevator, or in the lab, I am generally the shortest person present. At my height, 19 out of every 20 women I meet are taller than I am. The average man soars 10 inches above me. High heels cannot make up 10 inches.
As kids, we all wait to grow into the world around us, and the average 12-year-old is close to my height. It wasn’t until I was an undergraduate at Yale University that I had to admit the world would never be designed for me. I was somehow happily oblivious in college to the challenges faced by women, but the challenges faced by short people were obvious to me, every day. I could not reach things on high shelves in the labs and libraries. I could not sit with my feet flat on the floor with my back supported in many classroom chairs.
The challenges continued in my graduate research lab at Harvard University. I wasn’t large enough to flip the dewar that held our cryogenic microscope. I wasn’t strong enough to loosen a bolt. When I couldn’t find where my peers had put something, I learned to get on the step stool and look at their eye level.
I looked ridiculous using all my body weight at an awkward angle to pull a liquid helium tank down the hall. The cleanroom ran out of small sizes of the “bunny suits” required to enter the fabrication facilities. Small people were expected to wear larger ones, since big people cannot physically fit into smaller ones.
The biggest safety hazard was the location of a hot plate in a fume hood. The point of a fume hood, a structure that allows you to put your hands into a space that has its own ventilation, is to keep toxic fumes on the inside, away from the air you breathe. Short people simply took a deep breath before sticking their heads into the fume hood.
My six-foot-tall female labmate didn’t have these problems.
I now work at a women’s college. The environment is eye-opening.
The brightest student in the class is a woman. The most studious student is a woman. The struggling student is a woman. The slacker is a woman. The geek is a woman. The most aggressive and most outgoing students are women. Even the student who talks the most in class is a woman.
When I need help reaching the screen at the front of the room to pull it down, it’s a 5’6’’ woman who comes to my rescue. Prizewinners are always women, and leadership positions always go to women. We may still categorize the people we meet, but it’s no longer based on gender.
I received the Presidential Early Career Award for Scientists and Engineers in 2010, considered one of the most prestigious awards bestowed upon young scientists. There were two things that statistically increased the chances of receiving the PECASE that year through the National Science Foundation: being a woman, and being named Ben. You are unlikely to hear the accusation that you won “just because your name is Ben,” yet women are told that they receive awards because of their gender, not their qualifications.
A women’s college naturally provides many female role models, but predicting effective role models is not straightforward. For some, identifying with a role model is critical to pursuing an unusual path, but a good match is not as simple as being the same gender, race, or sexuality.
I never needed or wanted female role models in physics. But I do need short role models in sports. Watching someone much larger than you excel on the field is not helpful. Seeing someone your size outcompete a larger person is motivating, and educational. One striking part of my interview at Mount Holyoke was how short the (male) dean of faculty was. I more recently met the (short, female) director of the American Association of Physics Teachers. I didn’t think I was looking for five-foot-tall role models in leadership, but maybe that’s because I hadn’t met any.
While I intellectually recognize that being a woman in physics has presented challenges, I viscerally know that being short is difficult. That I haven’t volunteered my race or sexuality suggests I’m white (which is true) and heterosexual (also true).
When someone speaks over me in a meeting or repeats my idea more loudly as their own, I assume it’s due to my physical stature, not because I’m a woman. And for all of you who are ever in a meeting and notice this happening, it’s your cue to say, “Thank you for reinforcing the point made by... .” That’s all it takes to change a frustrating environment into an affirming one, in a noncontroversial way.
If we all make an effort to do small things like that more often — to recognize that the categories by which we sort people are limited, and that talent comes in all shapes and colors and follows many different trajectories through life — then perhaps an essay like this will someday simply start with the statement: “I am a physicist.”
Katherine Aidala is an associate professor of physics at Mount Holyoke College.
British libel law is so stringent and unforgiving -- so notorious for its tendency to find in favor of the aggrieved party -- that I am reluctant to say much more about it than that. Come to think of it, “stringent and unforgiving” seems a bit harsh. As does “notorious.” No aspersions on the British judicial system are intended; please don’t sue.
Between 2003 and 2011, the epidemiologist Ben Goldacre -- currently a research fellow at the London School of Hygiene and Tropical Medicine -- wrote a column called “Bad Science” for The Guardian as well as a book about shady practices in the reporting of clinical-trial results called Bad Pharma (2012). “Writing about other people’s misdeeds,” he says in the introduction to his new book, I Think You’ll Find It’s a Bit More Complicated Than That, “collecting ever greater numbers of increasingly powerful enemies – and all under British libel law – is like doing pop science with a gun to your head.”
Actually, reporting honest research can have the same menacing consequences. A few of the items in More Complicated Than That – a 400-page collection of Goldacre’s smart and mordant investigative reporting and commentary over the past dozen years – discuss cases of scientists and doctors sued just for going public with an informed opinion, or even lab results. Goldacre’s default mode is a lot more aggressive than that. Examining news and debates in both professional journals and the mass media, he’s constantly exposing unsupported claims, analyzing doctored statistics, and explaining the limitations, as well as the necessity, of the peer-review system. And with a suitably scathing tone, much of the time. No doubt Goldacre’s solicitor keeps very busy.
Well before this point in a column, I would normally have provided a link to the press that brought out I Think You’ll Find It’s a Bit More Complicated Than That (here it is) but it may be a while yet before the book reaches stores in the United States, where the large majority of Inside Higher Ed readers live. It hit the shelves in England only a few weeks ago. Consider the lag time of Bad Pharma: Faber & Faber published the U.S. edition in April, a good two years after it drove the Association of the British Pharmaceutical Industry into damage-control mode. (In early January, the House of Commons Public Accounts Committee issued a report echoing Goldacre’s call for drug companies to increase disclosure in reporting of clinical-trial methods and findings.)
For now, American readers can order the print edition of More Complicated Than That from online bookstores. The ebook is available in England and with some pressure could probably be made available here sooner rather than later. Frankly, Goldacre deserves a much bigger following than he has in the States, where the need for continuous, accessible public tutoring in statistical literacy and logical thinking (and in how they can be suborned by the unscrupulous) remains every bit as keen as in the U.K., to put it mildly.
One of Goldacre’s specialties is challenging “science by press release” – cases of the media “cover[ing] a story even though the scientific paper doesn’t exist, assuming it’s around the corner” or rewriting statements “from dodgy organizations as if they were health news.” The blame can hardly be borne by “quacks and hacks” alone, since the “dodgy organizations” sometimes include academic institutions. In 2004, one British university issued a press release on a study showing the wonderful health benefits of “nature’s superdrug,” cod liver oil. It did something or other to cartilage (the details went unelaborated) by means of an enzyme, presumably to be named later.
When more than a year went by without a report to the scientific community, Goldacre contacted the university asking what happened to the paper. He got an evasive though patronizing response (“Mr. Goldacre is quite right in asserting that scientists have to be very certain of their facts before making public statements or publishing data”) from the researchers involved, but no offprint. He would have to be patient, they said.
In 2014, “after being patient for a decade as requested,” as he writes in an afternote, Goldacre tried again but received only “a skeletal description [of] a brief conference presentation,” running to four paragraphs. The original press release (which cannot have been written without the cooperation of the scientists, of course) had been 17 paragraphs long. “I’ll try them again in a decade,” Goldacre concludes.
It would be more amusing if not for another essay in which he explains the significance of a study (published in Annals of Internal Medicine in 2009) of 200 press releases selected at random from a pool of “one year’s worth of press releases from twenty medical research centers” at “a mixture of the most eminent universities and the most humble, as measured by their U.S. News & World Report ranking.”
Goldacre notes that half the studies were done on human subjects. But 23 percent of the press releases in that category “didn’t bother to mention the number of participants – it’s hard to imagine anything more basic – and 34 percent failed to quantify their results.”
Forty percent of the press releases on research involving human beings concerned studies “without a control group, small samples of fewer than thirty participants, studies looking at ‘surrogate primary outcomes’ (which might measure a blood-cholesterol level, for example, rather than something concrete like a heart attack) and so on.” While the results would not be as reliable as those of a study with a stronger design – involving randomization, control groups, and more subjects – Goldacre acknowledges that they might nonetheless be of value. But an astonishing 58 percent of the press releases analyzed “lacked the relevant cautions and caveats about the methods used and the results reported.”
O.K., that was quite a lot of numbers just now, and not nearly as entertaining as the pieces in which Goldacre scrutinizes claims about caffeine intake and hallucination (“Drink Coffee, See Dead People”) or explains how make-believe expertise informs public debate (“Politicians Can Define Which Policy Works Best By Using Their Special Magic Politician Beam”) or characterizes a paper in the non-peer-reviewed Elsevier journal Medical Hypotheses (“with a respectable ‘impact factor’… of 1.299”) as “almost surreally crass.”
At his best, Goldacre combines Martin Gardner’s knack for science exposition (and pseudoscience demolition) with Christopher Hitchens’s dedication to eviscerating fools gladly (a reliable source of schadenfreude, at least if you agreed with him on who counted as a fool). As the holiday season approaches, anyone with a bright young relative showing an interest in science or medicine – or a leaning toward serious journalism – should consider I Think You’ll Find It’s a Bit More Complicated Than That as a possible gift. Consider it an investment in public intelligence.