Last week, an independent investigation of the American Psychological Association found that several of its leaders aided the U.S. Department of Defense’s controversial enhanced interrogation program by loosening constraints on military psychologists. It was another bombshell in the ongoing saga of the U.S. war on terror, in which psychologists have long served as foot soldiers. Now, it appears, psychologists were among its instigators, too.
Leaders of the APA used the profession’s ethics policy to promote unethical activity, rather than to curb it. How? Between 2000 and 2008, APA leaders changed their ethics policy to match the unethical activities that some psychologists wanted to carry out -- and thus make potential torture appear ethical. “The evidence supports the conclusion that APA officials colluded with DoD officials to, at the least, adopt and maintain APA ethics policies that were not more restrictive than the guidelines that key DoD officials wanted,” the investigation found, “and that were as closely aligned as possible with DoD policies, guidelines, practices or preferences, as articulated to APA by these DoD officials.” Among the main culprits was the APA’s own ethics director.
Commentators claim that the organization is unique, and in some ways it is. The APA’s leaders had the uncommonly poor judgment and moral weakness to intentionally alter its ethics policy to aid their personal enlistment in the war on terror. Then they had the exceptionally bad luck to get caught.
Yet the focus on a few moral monsters misses a massive, systemic quirk in how the APA -- and many other organizations -- creates its code of ethics. The elite professionals who are empowered to write and change an ethics policy have tremendous influence over its content. But ethics policies are anonymous because they have force only to the extent that they appear to represent the position of an entire organization, not a few powerful people. The process is designed to erase the mark of those heavy hands who write the rules for everyone.
The APA’s current scandal may be new, but its problems on this front are decades old. The APA passed its first comprehensive code of ethics in 1973 after seven years of work by six top U.S. psychologists who had been appointed by the APA’s leadership. I have examined the records of this committee’s work housed at the Library of Congress and recently published my findings in the Journal of the History of the Behavioral Sciences. The men were given an impossible task: to write a code that represented the ethical views of all psychologists and at the same time erase their own biases and interests. The effort was prompted by worries that if the organization neglected to regulate itself, the government would do it for them. “President Nixon is moving rapidly in this area,” as one psychologist at the time put it. “Behavioral scientists must stay ahead of him or we will be in big trouble.” Among the troubles they were facing within the profession was how psychologists could continue to be employed and funded by the U.S. military and not appear to break the profession’s ethics policy -- precisely the contradiction that resulted in APA’s current imbroglio.
In an effort to appear democratic and transparent, the members of the 1973 ethics committee collected survey responses from thousands of psychologists and interviewed key stakeholders in the profession. Psychologists reported back with descriptions of activities that ranged from callous to criminal -- research with LSD, government-backed counterinsurgency efforts, neglect of informed consent. Still, the six psychologists had to boil down an ocean of responses into an ethics code that purported to fit with all psychologists’ needs and perspectives -- which included their own.
At the height of the Cold War, scores of psychologists painted a picture of a profession rife with secrecy and dodgy funding sources. They specifically told of military research that appeared to require an abdication of ethics. “These are seen as highly necessary studies,” one psychologist reported regarding research he did for the Defense Department. “Unless the research is highly realistic, it will not provoke psychological stress and hence will be useless.” In one study, the human subject was led to believe he was in an underwater chamber. “The subject sits in this chamber and performs specific tasks at an equipment console. If water rises inside the chamber one of the controls is supposed to exhaust it. At first the control operates. Later, however, it fails and the water gradually rises higher and higher around the subject’s body.” But the human subject was not really underwater and the psychologist was in control. “It is the practice to stop the experience at various points for different subjects, depending upon the amount of excitement they appear to show at different water levels.”
Studies like this were hotly disputed among psychologists at the time. Some felt that being deceived or hurt, especially by an authority figure like a psychologist, fundamentally damaged people. Humans are fragile, the line went, and can be psychologically scarred by psychologists themselves.
Yet the six members of the 1973 ethics committee were skeptical. The committee’s leader, Stuart Cook, found the position implausible based on his own experience as a researcher and in his early training as a student. “When I was a subject I expected to be deceived; I knew that performance under stress was an issue,” he reflected. After talking with colleagues about the trade-offs of tighter ethics for psychologists, Cook delivered the punch line: “We should cut down our obligation to fully inform.”
Another ethics committee member, William McGuire, regarded the “fragile self” view as ludicrous in general and its main (female) proponents as ridiculous in particular. McGuire had made a celebrated career studying persuasion -- largely funded by the U.S. government in light of its Cold War concerns about political indoctrination. McGuire is a good example of how the ethical views of the policy writers did not stray far from their own personal stakes in ethics policies. “My feeling is that the field must face up to the fact that there are a lot of moral costs in psychological research and that this can be done only by going through two steps,” McGuire told a colleague. “The first step is to admit, well, all right, there is something morally bothersome about many aspects of the research including leaning ever so slightly on people to get them to participate, or especially misleading them about the nature of the research even in minor ways, using their behavior or behavioral traces without their explicit consent, etc. But going through this first step frankly and admitting there are unpleasant aspects of the research does not mean that we cannot do it. On the contrary,” he continued, “it is necessary to go through the second step and decide whether the reasons for doing the research outweigh these reasons for not doing it.” This view fit tidily with support of military research using stress, deception, drugs and other contested methods.
In 1971, the committee published a draft of the ethics policy they had created to gauge APA members’ responses. When a few of the ethics committee members considered taking seriously the complaints from that large faction of psychologists who raised concerns about the laxity of the draft ethics code, McGuire threatened to quit. “It seems to me that there has been a change in mood in the committee in a somewhat conservative direction, which surprised me a little bit and made me worry lest I might have fallen out of tune with the other committee members,” he explained. “I do want to mention that the committee members had moved in a direction and distance that I had not quite anticipated so that perhaps I would be perceived as holding back progress or being an obstructionist.”
Instead, William McGuire, Stuart Cook and the four other psychologists stuck together and ushered in an ethics policy that corresponded to their own research needs and interests. The final version of the 1973 ethics code, for example, eased restrictions on psychologists’ use of deception that had appeared in earlier drafts. The final policy allowed researchers to lie -- for the sake of science -- despite the loudly announced disagreement from many psychologists that deception, stress and other forms of harm, however temporary, could do long-term damage to people and deserved to be controlled through the APA’s code of ethics.
In 1973, as in the events leading to the APA’s current crisis, the organization’s ethics policy bore the marks of the handful of psychologists who were empowered to write the rules. Like anyone, they had their own political and scientific interests in the content of the ethics policy. But unlike others, they were in a position to manage those interests by changing the policy to suit them.
In recent weeks, critics have rightly and roundly condemned the current APA leaders who are at fault in the recent scandal. But it is misguided to think that the APA’s problem of professional ethics can be solved by throwing out a few exceptionally bad apples.
Next month, thousands of psychologists are meeting for the APA’s annual convention. They will have plenty to discuss. It is clear that some leaders behaved condemnably -- perhaps criminally -- and three have already been forced out. Yet continuing to castigate individuals alone misses the larger problem.
The APA’s current ethics mess is a problem inherent to its method of setting professional ethics policy and a problem that faces professional organizations more broadly. Professions’ codes of ethics are made to seem anonymous, dropped into the world by some higher moral authority. But ethics codes have authors. In the long term, the APA’s problems will not be solved by repeating the same process that empowers a select elite to write ethics policy, then removes their connection to it.
All ethics codes have authors who work to erase the appearance of their influence. Personal interests are inevitable, though not unmanageable, and it may be best for the APA -- and other professional groups -- to keep visible the link between an ethics policy and its authors. Take a new lesson from the Hippocratic oath by observing its name. The APA should make its ethics policies like most other papers that scientists write: give the code of ethics a byline.
If you can remember the 1960s, the old quip goes, you weren’t really part of them. By that standard, the most authentic participants ended up as what used to be called “acid casualties”: those who took spiritual guidance from Timothy Leary’s injunction to “turn on, tune in and drop out” and ended up stranded in some psychedelic heaven or hell. Not that they’ve forgotten everything, of course. But the memories aren’t linear, nor are they necessarily limited to the speaker’s current incarnation on this particular planet.
Fortunately Stephen Siff can draw on a more stable and reliable stratum of cultural memory in Acid Hype: American News Media and the Psychedelic Experience (University of Illinois Press). At the same time, communicating about the world as experienced through LSD or magic mushrooms was ultimately as difficult for a sober newspaper reporter, magazine editor or video documentarian as conversation tends to be for someone whose mind has been completely blown. The author, an assistant professor of journalism at Miami University in Ohio, is never less than shrewd and readable in his assessment of how various news media differed in method and attitude when covering the psychedelic beat. The slow and steady buildup of hype (a word Siff uses in a precise sense) precipitated an early phase of the culture wars -- sometimes in ways that partisans now might not expect.
Papers on experimentation with LSD were published in American medical journals as early as 1950, and reports on its effects from newspaper wire services began tickling the public interest by 1954. The following year, mass-circulation magazines were devoting articles to LSD research, followed in short order by a syndicated TV show’s broadcast of film footage showing someone under the influence. The program, Confidential File, sounds moderately sleazy (the episode in question was described as featuring “an insane man in a sensual trance”) but much of the early coverage was perfectly respectable, treating LSD as a potential source of insight into schizophrenia, or a potential expressway to the unconscious for psychoanalysts.
But the difference between rank sensationalism and science-boosting optimism may count for less, in Siff’s interpretation, than how sharply coverage of LSD broke with prevailing media trends that began coming into force in the 1920s.
After the First World War, with wounded soldiers coming back with a morphine habit, newspapers carried on panic-stricken anti-drug crusades (“The diligent dealer in deadly drugs is at your door!”) and any publication encouraging recreational drug use, or treating it as a fact of life, was sure to fall before J. Edgar Hoover’s watchful eye. Early movie audiences enjoyed the comic antics of Douglas Fairbanks Sr.’s detective character Coke Ennyday (always on the case, syringe at the ready), or in a more serious mood they could go to For His Son, D. W. Griffith’s touching story of a man’s addiction to Dopokoke, the cocaine-fueled soft drink that made his father rich. But by the time the talkies came around, the Motion Picture Production Code categorically prohibited any depiction of drug use or trafficking, even as a criminal enterprise. Siff notes that in the 20 years following the code’s establishment in 1930, “not a single major Hollywood film dealing with drug use was distributed to the public.”
Not that depictions of substance abuse were a forbidden fruit the public was craving, exactly. But the relative openness of the mid-1950s (emphasis on “relative”) allowed editors to risk publishing stories on what was, after all, serious research on a potential new wonder drug. Siff points out that general-assignment newspaper reporters attending a scientific or medical conference, unable to tell what sessions were worth covering, could feel reasonably confident that a title mentioning LSD would probably yield a story.
At the same time, writers for major newsmagazines and opinion journals were following the lead of Aldous Huxley, the novelist and late-life religious searcher, who wrote about mystical experiences he had while taking mescaline. In 1955, when the editors of Life magazine decided to commission a feature on hallucinogenic mushrooms, they turned to Wall Street banker and amateur mycologist R. Gordon Wasson. He traveled to Mexico and became, in his own words, one of “the first white men in recorded history to eat the divine mushroom” -- and if not, then surely the first to give an eyewitness report on “the archetypes, the Platonic ideals, that underlie the imperfect images of everyday life” in the pages of a major newsweekly.
Suffice it to say that by the time Timothy Leary and associates come on the scene (wandering around Harvard University in the early 1960s, with continuously dilated pupils and only the thinnest pretense of scientific research) it is rather late in Siff’s narrative. And Leary’s legendary status as psychedelic shaman/guru/huckster seems much diminished by contrast with the less exhibitionistic advocacy of LSD by Henry and Clare Boothe Luce. Beatniks and nonconformists of any type were mocked regularly in the pages of Time or Life, but the Luce publications were for many years very enthusiastic about the potential benefits of LSD. The power couple tripped frequently, and hard. (Some years ago, when I helped organize Mrs. Luce’s papers at the Library of Congress, the LSD notes were a confidence not to be breached, but now the experiments are a matter of public record.)
The hippies, in effect, seem like a late and entirely unintentional byproduct of industrial-strength hype. “During an episode of media hype,” Siff writes, “news coverage feeds on itself, as different news outlets follow and expand on one another’s stories, reacting among themselves and to real-world developments. Influence seems to flow from the larger news organizations to smaller ones, as editors at smaller or more marginal media operations look toward the decisions made by major outlets for ideas and confirmation of their own judgment.”
That is the process, broadly conceived. In Acid Hype, Siff charts the details -- especially how the feedback bounced around between news organizations, not just of different sizes, but with different journalistic cultures. Newspaper coverage initially stuck to the major talking points of LSD researchers; it tended to stress the potential wonder-drug angle, even when the evidence for it was weak. Major magazines wanted to cover the phenomenon in greater depth -- among other things, with firsthand reports on the psychedelic universe by people who’d gone there on assignment. Meanwhile, the art directors tried to figure out how to convey far-out experiences through imagery and layout -- as, in time, did TV producers. (Especially on Dragnet, if memory serves.)
Some magazine editors seem to have been put off by the religious undercurrents of psychedelic discourse. Siff exhibits a passage in a review that quotes Huxley’s The Doors of Perception but carefully removes any biblical or mystical references. But someone like Leary, who proselytized about psychedelic revolution, was eminently quotable -- plus he looked good on TV because (per the advice of Marshall McLuhan) he smiled constantly.
The same hype-induction processes that made hallucinogens seem like the next step toward improving the American way of life (or, conversely, the escape route to an alternative to it) also went into effect when the tide turned: just as dubious claims about LSD’s healing properties were reported without question (it’ll cure autism!), so were horror stories about side effects (it’ll make you stare at the sun until you go blind!).
The reaction seems to have been much faster and more intense than the gradual pro-psychedelic buildup. Siff ends his account of the period in 1969 -- oddly enough, without ever mentioning the figure who emerged into public view that year as the embodiment of LSD's presumed demons: Charles Manson. You didn't hear much about the drug's spiritual benefits after Charlie began explaining them. That was probably for the best.
What happens in Wisconsin will not stay in Wisconsin. Lawmakers here are moving quickly to hollow out the definition of tenure and strip away due process rights for faculty members and academic staff. Legislators in other states who want to dismantle public higher education might look here to find new plays for their playbooks.
It is not uncommon for legislators to threaten tenure or criticize public education -- many do it for sport. But what’s unique in Wisconsin is that the proposed tenure changes are not coming from a fringe coalition: they are coming from the Joint Finance Committee, the most powerful body in the Legislature.
I am a tenure-track faculty member in the School of Education at the University of Wisconsin at Madison and have been in the state for only two years. I have a lot to learn and am naively optimistic that cooler heads will prevail and the tenure threats will blow over in time. But I cannot bring myself to a place of comfort; I am truly worried. And I am not just worried for Wisconsin, but for other states that will follow suit if this change actually happens.
Wisconsin is unique in that we are the only state (to my knowledge) to have enshrined tenure in state law. Moving this provision from state statute to University of Wisconsin Board of Regents policy would not be entirely uncommon in the national context. What is uncommon is how political our board is compared to those in other states -- the governor appoints 16 of the 18 members, and colleges don’t have their own campus boards to interact with the system.
But even less common -- and far more egregious -- is Section 39 of the Joint Finance Committee’s omnibus motion. It allows the board to “terminate any faculty or academic staff appointment… due to a budget or program decision…” So instead of using widely accepted processes, faculty and staff can be terminated for “…program discontinuance, curtailment, modification or redirection, instead of when a financial emergency exists under current law.”
This undermines the core principles of shared governance, strips away due process rights and is an obvious assault on academic freedom. The board says its members will “adopt policies that reflect existing statutory language” and ensure faculty and staff will retain the same due process protections currently under state law.
But if Section 39 of the budget bill redefines tenure, then the board must comply with the new state law, whatever its stated intentions.
This new definition extends far beyond the standard financial exigency criteria for termination of appointments and is out of line with the American Association of University Professors’ academic freedom guidelines. And the proposed change is happening without consulting the very stakeholders the law was designed to protect -- university faculty and staff members.
I know these tensions aren’t new; we are constantly justifying our existence and operating under financial stress. I get that. But this is a bridge too far. It doesn’t matter if the regents use existing statutory language, because this omnibus motion would kill it all. It trumps regents policy.
If this policy change happens, it will set a precedent for other states to follow, so watch Wisconsin closely. Keeping Section 39 could set in motion a series of events that threaten the university’s ability to recruit and retain faculty and to generate revenue, and that could even jeopardize our accreditation status.
As much as I wish this were all political theater or a simple misunderstanding, it is not. It is a very real threat and one that has been years in the making.
Instituting the $250 million budget cut will create the conditions under which the Board of Regents can exercise its new authority to fire at will. The long-term academic and financial costs will far outweigh the short-term political benefits, and I hope our elected officials have the ability to see that far down the road.
Nicholas Hillman is an assistant professor of educational leadership and policy analysis at the University of Wisconsin at Madison.
There are important issues around diversity -- notably in terms of ethnicity/race, socioeconomic class, sexual orientation and gender -- that have been of concern to institutions of higher education for a while now. The progress made in these areas may be less than impressive, but these issues have a conspicuous place on our radar screens.
There is another dimension of diversity that has yet to attract the attention it deserves: the diversity of contributions that can be made by different members of an institution’s tenured and tenure-track faculty. Faculty members in these positions are pivotal to fostering the kind of change needed in our colleges and universities if we are to better serve our students. Such change would involve how faculty members judge one another, how departments view their responsibilities, how those responsibilities can best be fulfilled and how the work of faculty members is viewed by academic administrators.
Different institutions have different missions, which should be reflected in what is reasonably expected of their respective faculties. These differences have unfortunately been eroded by status-seeking mission creep. So, for openers, there is the famous advice of Polonius (who has received insufficient respect for his wisdom, probably because he conveyed it in a way that was boring to a younger person): “To thine own self be true.”
While it may seem obvious that a one-size-fits-all approach is inappropriate and undesirable for institutions with different missions and constituencies, it may be undesirable within a single institution as well, even if that institution is a research university. While the holy trinity of research, teaching and service on the face of it provides room for flexibility, differences in how each is valued and assessed yield a generally hierarchical structure, with publication and attracting grant funds being the coin of the realm and relatively easy to quantify.
But even in research universities, not all members of a department need to balance their research and teaching contributions in exactly the same proportions. Moreover, one faculty member in his or her time plays many roles -- there may be times in between research projects when a faculty member might wish to focus more on teaching. (As an aside: the pressure to publish as much and as quickly as possible seems clearly linked to the level of retractions we have been seeing on the part of major scientific and scholarly journals when major research flaws are revealed postpublication.)
A better solution would be an understanding -- reflected in the reward structure -- that not every member of a department needs to make precisely the same contribution to the department in meeting its goals and responsibilities. Crafting such a reward structure is something that the New American Colleges and Universities consortium, for example, has been working on with funding from the Teagle Foundation.
To be sure, one expects that departments in research universities would have a sufficiently strong complement of truly distinguished scholars and scientists who are making significant contributions to the knowledge base in their fields, including some who may not be God’s gift to teaching. Fortunately, many highly distinguished scientists and scholars are also superb teachers. But there should also be room for faculty members whose teaching outdistances their research. If research universities presume to educate undergraduates, they need to consider how well they are fulfilling that responsibility. They should also feel an obligation to prepare their graduate students for occupying positions at a wide range of institutions of higher education; that is, they should be preparing graduate students seeking an academic career for their work not only as researchers, but as teachers.
There have been proposals for a separate track for faculty members who would focus on teaching, as opposed to research. This, however, is a solution that is part of the problem, since it will almost certainly perpetuate a culture of relative disdain for teaching, along with a tendency for teaching-focused appointments to be non-tenure-track. While there is a place for continuing appointments off the tenure track, viewing teaching in general as something unworthy of tenure would be unfortunate both in terms of institutional culture and how universities are viewed by the public.
It would also be desirable to recognize and reward those faculty members who have a special flair for sharing significant results of science and scholarship with a wide audience of readers -- beyond even The New York Review of Books. We already have an admirable complement of public intellectuals who earn their high position in the academic food chain by the traditional measure of research excellence -- though we could always use more of them. In addition, there are those whose contributions to public enlightenment might in and of themselves merit reward beyond what the current system offers.
Barriers to achieving a more informed citizenry may seem daunting, even at times insurmountable, especially when one figures in efforts at deliberate deception by powerful figures and opinion leaders. Indeed, we may feel the need to modify Abraham Lincoln’s famous observation that you can fool all the people some of the time and some of the people all the time by observing that those have turned out to be pretty good odds. But we should reward those who give the advancement of public knowledge their best effort -- and sometimes manage to make a difference.
Judith Shapiro is president of the Teagle Foundation and a former president of Barnard College.