Last week, an independent investigation of the American Psychological Association found that several of its leaders aided the U.S. Department of Defense’s controversial enhanced interrogation program by loosening constraints on military psychologists. It was another bombshell in the ongoing saga of the U.S. war on terror, in which psychologists have long served as foot soldiers. Now, it appears, psychologists were among its instigators, too.
Leaders of the APA used the profession’s ethics policy to promote unethical activity, rather than to curb it. How? Between 2000 and 2008, APA leaders changed their ethics policy to match the unethical activities that some psychologists wanted to carry out -- and thus make potential torture appear ethical. “The evidence supports the conclusion that APA officials colluded with DoD officials to, at the least, adopt and maintain APA ethics policies that were not more restrictive than the guidelines that key DoD officials wanted,” the investigation found, “and that were as closely aligned as possible with DoD policies, guidelines, practices or preferences, as articulated to APA by these DoD officials.” Among the main culprits was the APA’s own ethics director.
Commentators claim that the organization is unique, and in some ways it is. The APA’s leaders had the uncommonly poor judgment and moral weakness to intentionally alter its ethics policy to aid their personal enlistment into the war on terror. Then they had the exceptionally bad luck to get caught.
Yet the focus on a few moral monsters misses a massive, systemic quirk in how the APA -- and many other organizations -- creates its code of ethics. The elite professionals who are empowered to write and change an ethics policy have tremendous influence over its content. But ethics policies are anonymous because they have force only to the extent that they appear to represent the position of an entire organization, not a few powerful people. The process is designed to erase the marks of the heavy hands that write the rules for everyone.
The APA’s current scandal may be new, but its problems on this front are decades old. The APA passed its first comprehensive code of ethics in 1973 after seven years of work by six top U.S. psychologists who had been appointed by the APA’s leadership. I have examined the records of this committee’s work housed at the Library of Congress and recently published my findings in the Journal of the History of the Behavioral Sciences. The men were given an impossible task: to write a code that represented the ethical views of all psychologists while erasing their own biases and interests. The effort was prompted by worries that if the organization neglected to regulate itself, the government would do it instead. “President Nixon is moving rapidly in this area,” as one psychologist at the time put it. “Behavioral scientists must stay ahead of him or we will be in big trouble.” Among the troubles they were facing within the profession was how psychologists could continue to be employed and funded by the U.S. military without appearing to break the profession’s ethics policy -- precisely the contradiction that resulted in APA’s current imbroglio.
In an effort to appear democratic and transparent, the members of the 1973 ethics committee collected survey responses from thousands of psychologists and interviewed key stakeholders in the profession. Psychologists reported back with descriptions of activities that ranged from callous to criminal -- research with LSD, government-backed counterinsurgency efforts, neglect of informed consent. Still, the six psychologists had to boil down an ocean of responses into an ethics code that purported to fit with all psychologists’ needs and perspectives -- which included their own.
At the height of the Cold War, scores of psychologists painted a picture of a profession rife with secrecy and dodgy funding sources. They specifically told of military research that appeared to require an abdication of ethics. “These are seen as highly necessary studies,” one psychologist reported regarding research he did for the Defense Department. “Unless the research is highly realistic, it will not provoke psychological stress and hence will be useless.” In one study, the human subject was led to believe he was in an underwater chamber. “The subject sits in this chamber and performs specific tasks at an equipment console. If water rises inside the chamber one of the controls is supposed to exhaust it. At first the control operates. Later, however, it fails and the water gradually rises higher and higher around the subject’s body.” But the human subject was not really underwater and the psychologist was in control. “It is the practice to stop the experience at various points for different subjects, depending upon the amount of excitement they appear to show at different water levels.”
Studies like this were hotly disputed among psychologists at the time. Some felt that being deceived or hurt, especially by an authority figure like a psychologist, fundamentally damaged people. Humans are fragile, the line went, and can be psychologically scarred by psychologists themselves.
Yet the six members of the 1973 ethics committee were skeptical. The committee's leader, Stuart Cook, found the position implausible based on his own experience as a researcher and in his early training as a student. “When I was a subject I expected to be deceived; I knew that performance under stress was an issue,” he reflected. After talking with colleagues about the trade-offs of tighter ethics for psychologists, Cook delivered the punch line: “We should cut down our obligation to fully inform.”
Another ethics committee member, William McGuire, regarded the “fragile self” view as ludicrous in general and its main (female) proponents ridiculous in particular. McGuire had made a celebrated career studying persuasion -- largely funded by the U.S. government in light of its Cold War concerns about political indoctrination. McGuire is a good example of how the ethical views of the policy writers did not stray far from their own personal stakes in ethics policies. “My feeling is that the field must face up to the fact that there are a lot of moral costs in psychological research and that this can be done only by going through two steps,” McGuire told a colleague. “The first step is to admit, well, all right, there is something morally bothersome about many aspects of the research including leaning ever so slightly on people to get them to participate, or especially misleading them about the nature of the research even in minor ways, using their behavior or behavioral traces without their explicit consent, etc. But going through this first step frankly and admitting there are unpleasant aspects of the research does not mean that we cannot do it. On the contrary,” he continued, “it is necessary to go through the second step and decide whether the reasons for doing the research outweigh these reasons for not doing it.” This view fit tidily with support of military research using stress, deception, drugs and other contested methods.
In 1971, the committee published a draft of the ethics policy they had created to gauge APA members’ responses. When a few of the ethics committee members considered taking seriously the complaints from that large faction of psychologists who raised concerns about the laxity of the draft ethics code, McGuire threatened to quit. “It seems to me that there has been a change in mood in the committee in a somewhat conservative direction, which surprised me a little bit and made me worry lest I might have fallen out of tune with the other committee members,” he explained. “I do want to mention that the committee members had moved in a direction and distance that I had not quite anticipated so that perhaps I would be perceived as holding back progress or being an obstructionist.”
Instead, William McGuire, Stuart Cook and the four other psychologists stuck together and ushered in an ethics policy that corresponded to their own research needs and interests. The final version of the 1973 ethics code, for example, eased restrictions on psychologists’ use of deception that had appeared in earlier drafts. The final policy allowed researchers to lie -- for the sake of science -- despite the loudly announced disagreement from many psychologists that deception, stress and other forms of harm, however temporary, could do long-term damage to people and deserved to be controlled through the APA’s code of ethics.
In 1973, as in the events leading to the APA’s current crisis, the organization’s ethics policy bore the marks of the handful of psychologists who were empowered to write the rules. Like anyone, they had their own political and scientific interests in the content of the ethics policy. But unlike others, and to varying degrees, they could manage those interests by changing the policy to suit them.
In recent weeks, critics have rightly and roundly condemned the current APA leaders who are at fault in the recent scandal. But it is misguided to think that the APA’s problem of professional ethics can be solved by throwing out a few exceptionally bad apples.
Next month, thousands of psychologists are meeting for the APA’s annual convention. They will have plenty to discuss. It is clear that some leaders behaved condemnably -- perhaps criminally -- and three have already been forced out. Yet continuing to castigate individuals alone misses the larger problem.
The APA’s current ethics mess is a problem inherent to its method of setting professional ethics policy and a problem that faces professional organizations more broadly. Professions’ codes of ethics are made to seem anonymous, dropped into the world by some higher moral authority. But ethics codes have authors. In the long term, the APA’s problems will not be solved by repeating the same process that empowers a select elite to write ethics policy, then removes their connection to it.
All ethics codes have authors who work to erase the appearance of their influence. Personal interests are inevitable, if not unmanageable, and it may be best for the APA -- and other professional groups -- to keep the link between an ethics policy and its authors. Take a new lesson from the Hippocratic oath by observing its name. The APA should make its ethics policies like most other papers that scientists write: give the code of ethics a byline.
Laura Stark is assistant professor in the Center for Medicine, Health and Society at Vanderbilt University. She is the author of Behind Closed Doors: IRBs and the Making of Ethical Research (University of Chicago Press).