Humanist Perspectives: issue 151: Psychology and Ethics

things that go bump
Psychology & Ethics
by James Alcock

Based on a paper presented at the
Conference on Science and Ethics,
Toronto, May 2004

Ethics are forever tied to the Zeitgeist, the spirit of the time. Educated, decent and ‘ethical’ folk of one century may profit from slaves and burn witches without risking any stain on their ethical escutcheons. Educated, decent and ‘ethical’ folk of another may bomb cities in the name of freedom and excuse the deaths of innocent civilians as unfortunate ‘collateral damage,’ without feeling any challenge to their ethical principles. This is also true in the realm of science. In one era, it may seem no ethical outrage to test the risks of a new medication, such as birth control pills, on poor and uneducated women in undeveloped nations; in another era, the same practice may be viewed as exploitative and discriminatory.

Psychologists throughout their professional history have been concerned about ethics within their realms of research and practice. In the early days of the discipline, as psychology emerged out of philosophy, such interest was focused on ethics as part of the fabric of the human social condition, something worthy of study in its own right. However, over time, psychology assumed a less contemplative and more invasive role in society, reaching directly into people’s lives. This was the result both of applying psychological knowledge and speculation through psychotherapy and social engineering (e.g., industrial psychology), and of conducting research using human participants. Gradually, it became clear that careless or misguided psychological intervention can do harm, and that careless or misguided psychological research can also bring about harmful consequences to the individuals and groups and societies being studied.

The American Psychological Association, the Canadian Psychological Association, the Association of State and Provincial Psychology Boards, and other psychological bodies have all promulgated ethical codes. By way of example, the Code of Ethics of the Canadian Psychological Association, which is general enough to apply to research as well as to applied practice, is built around four primary ethical principles: Respect for the Dignity of Persons; Responsible Caring; Integrity in Relationships; and Responsibility to Society.

When these principles are translated into specific dos and don’ts for researchers, psychologists are given direction with regard to a number of specific concerns, including protection from harm; the right to privacy; the concern about the practice of deception; the need for informed consent and debriefing; and the social responsibility of researchers. It is recognized that researchers who carry out psychology experiments are usually in a position of some authority over the research participant. This authority, especially if age or academic position also confers a difference of status, creates a relationship of trust whereby the research participants assume that the experimenter will not harm or exploit them. Researchers must act ethically and responsibly so participants do not suffer in any way as a result of research.

past practice

In 1976, the Journal of Personality and Social Psychology, the flagship social psychology journal, published an article entitled ‘Personal space invasions in the lavatory: Suggestive evidence for arousal.’ The research was intended to study the influence of the simple presence of others on nervous system arousal, and since such arousal interferes with relaxation of the urinary sphincter muscle, the idea was to measure how long it took for a man to begin urination when another man stood at the next urinal, as compared to when the adjacent urinal was free. Unbeknownst, of course, to the unwitting subject, an upside-down periscope was mounted under the wall of a nearby cubicle, and by this means the data were gathered. As predicted, it took longer to begin urination when someone was at the adjacent urinal.

This was published without demurral by the leading social psychology journal in 1976. Was the theoretical interest behind the study justifiable? Probably, for it did show that the mere presence of others leads to autonomic arousal. Were the means justifiable? Certainly not, by today’s understanding of research ethics. Was anyone hurt by it? No; the men did not even know they were in an experiment. Could anyone have been hurt? Certainly: the man at the periscope gained intimate knowledge of a stranger’s private parts. Moreover, once published, such a study contributes to the paranoid belief that one’s privacy is always under threat.

between one-half and three-quarters of published social psychological research reports involve some element of deception

Most psychology experiments are carried out in the laboratory, not the lavatory, and in such cases participants already know that they are being studied. It is still important, however, to keep from them just what is being studied, or their behaviour will be distorted by their knowledge of what the researcher wants to find. This requires deception of one form or another. The use of deception was rare in social psychology in the 1920s and 1930s, and was not all that common even in the 1940s and 1950s, but it mushroomed during the 1960s. Currently, between one-half and three-quarters of published social psychological research reports involve some element of deception.

Not surprisingly, as we become more sensitive to ethical concerns in general, many researchers have expressed concern about the ethics of deception. Deceit presents a moral problem: to most psychologists concerned with this issue, the term deception means ‘trickery’ or ‘lying.’ It is argued not only that such dishonesty might tarnish the moral authority of professors and experimenters in the eyes of research participants (and lead to a less respectful view of science in general), but also that research participants could experience guilt or lowered self-esteem as a result of learning the true meaning of an experimental task. What if research participants approach an experiment believing that they are going to be involved in a memory study, and leave having learned that, in the face of a contrived emergency, they reacted with cowardice? Does the experimenter have the right to provide this knowledge to an unsuspecting individual who might otherwise never have been aware of such cowardice?

Consider two very significant social psychological studies: Stanley Milgram’s obedience experiments, in which participants believed they were administering increasingly severe electric shocks to another person at the urging of the experimenter, and Philip Zimbardo’s Stanford Prison Experiment, in which normal, healthy students assigned to play guards in a simulated prison quickly became abusive toward those assigned to play prisoners.

Our insight into the powerful role of the situation in determining human behaviour is much richer because of the Milgram and Zimbardo research; it has brought about an understanding of human compliance with authority on the one hand, and the ease with which some normal healthy people can become abusive and sadistic, something which had been almost inconceivable before. Yet, we would be cut off from gaining such knowledge today because our ethical codes would not allow such research to be carried out.

We of course need codes of ethics, and for reasons that are obvious. All over the Western world, professions and universities and governments are taking great strides to make sure that the rights of research participants are protected. The Canadian federal government’s new privacy legislation must be taken seriously into account when doing research with human subjects. The three major research granting agencies in Canada (Social Sciences and Humanities Research Council; Natural Sciences and Engineering Research Council; Medical Research Council) have recently issued a major policy statement (Tri-Council Policy Statement: Ethical Conduct for Research Involving Humans) that has been implemented at all Canadian universities. The new policies essentially dictate how ethical concerns will be dealt with in research involving human subjects.

As well-intended as such regulations are, and as important as they may be for our individual welfare, they have overshot the mark when it comes to much research that is clearly of minimal risk. Few psychological studies have the sort of impact on participants that those of Milgram and Zimbardo did, and in any case, no one has demonstrated any actual long-term negative consequences as a result of deception in a social psychological experiment. Indeed, most such deception is so benign that it is hard to find fault with it, except on the general principle that the researcher has no right to deceive.

Consider three examples of the presumably unintended consequences of new and well-intentioned ethics codes (specifically, the Tri-Council code mentioned earlier):


The Milgram and Zimbardo studies are two of the most significant studies in the history of social psychology, studies which have illuminated the human condition and demonstrated the awful power of the situation to influence the behaviour of healthy, normal peace-loving people. Want to understand what went wrong in the Iraqi prisons? Read the Zimbardo research. Should it concern us that these studies absolutely could not be conducted today, even though there appears to have been no long-term harm to the research participants, and none of them has ever complained?

Should it bother us as well that in the name of research ethics, researchers can no longer record observations about stroke victims without getting the consent of the victim or the legally empowered substitute decision-maker? Should we be concerned that a slavish and bureaucratic devotion to ethical codes — the fundamentalist view of ethics — is going to make some research impossible, research that we would all agree is virtually certain to be harmless to anyone?

In conclusion, we absolutely need professional ethics codes to govern research and practice. Yet we face a genuine ethical dilemma in applying them.

First horn

If we allow deception in order to carry out important research into human behaviour, we abrogate the rights of those we study. Is it, in principle, ever acceptable to deceive research participants, to decide for them what level of emotional stress is tolerable, or to observe and record people’s behaviour without their informed consent?

Second horn

If we rule out deception, and if we insist on signed, informed consent no matter how harmless the research, then we make much social psychological research all but impossible.

Ultimately, the way out of the dilemma is to weigh the potential value of the research against the potential harm to the participants, which is how we used to vet research proposals in Canadian universities. Focusing only on the first horn ignores the second. In so doing, may we not one day be judged guilty of a greater sin, that of failing to use the powerful methodology of science to understand human social behaviour so that we can learn to reduce aggression, diminish prejudice and enhance everyone’s quality of life?

James Alcock is Professor of Psychology at York University in Toronto and a Fellow of the Committee for the Scientific Investigation of Claims of the Paranormal.