Ethical Awareness and Ethical Theories
Ethical theories provide a moral framework to reflect on conflicting obligations. Unfortunately, ethical theories tend to emphasize one idea as the foundation for moral decision making, and illustrative problems are often reduced to that one idea. Given the complexity of moral reality, these frameworks are probably not mutually exclusive in their claims to moral truth (Steinbock, Arras, & London, 2003). However, awareness of the moral frameworks that might help address an ethical concern can also help clarify the values and available ethical choices (Beauchamp & Childress, 2001; Fisher, 1999; Kitchener, 1984).
Deception Research: A Case Example for the Application of Different Ethical Theories
Since Stanley Milgram (1963) published his well-known obedience experiments, the use of deception has become normative practice in some fields of psychological research and a frequent source of ethical debate (Baumrind, 1964, 1985; Fisher & Fyrberg, 1994). Deceptive techniques in research intentionally withhold information or misinform participants about the purpose of the study, the methodology, or roles of research confederates (Sieber, 1982). The methodological rationale for the use of deception is that some psychological phenomena cannot be adequately understood if research participants are aware of the purpose of the study. For example, deception has been used to study the phenomenon of “bystander apathy effect,” the tendency for people in the presence of others to observe but not help a person who is a victim of an attack, medical emergency, or other dangerous condition (Latane & Darley, 1970). In such experiments, false emergency situations are staged without the knowledge of the research participants, whose reactions to the “emergency” are recorded and analyzed.
By its very nature, the use of deception in research creates what Fisher (2005a) has termed the consent paradox. On the one hand, intentionally deceiving participants about the nature and purpose of a study conflicts with Principle C: Integrity and with enforceable standards requiring psychologists to obtain fully informed consent of research participants prior to study initiation. On the other hand, by approximating naturalistic contexts in which everyday behaviors take place, deception research can reflect Principle A: Beneficence and Nonmaleficence by enhancing the ability of psychologists to generate scientifically and socially useful knowledge that might not otherwise be obtained.
Below are examples of how different ethical theories might lead to different conclusions about the moral acceptability of deception research. Readers should refer to Chapter 11 for a more in-depth discussion of Standard 8.07, Deception in Research.
Deontology has been described as “absolutist,” “universal,” and “impersonal” (Kant, 1785/1959). It prioritizes absolute obligations over consequences. In this moral framework, ethical decision making is the rational act of applying universal principles to all situations irrespective of specific relationships, contexts, or consequences. This reflects Immanuel Kant’s conviction that ethical decisions cannot vary or be influenced by special circumstances or relationships. Rather, a decision is “moral” only if a rational person believes the act resulting from the decision should be universally followed in all situations. For Kant, respect for the worth of all persons was one such universal principle. A course of action that results in a person being used simply as a means for others’ gains would be ethically unacceptable.
From a deontological perspective, since we would not believe it moral to intentionally deceive individuals in other contexts, neither potential benefits to society nor the effectiveness of participant debriefing for a particular deception study can morally justify intentionally deceiving persons about the purpose or nature of a research study. Further, deception in research would not be ethically permissible, since intentionally disguising the nature of the study for the goals of research violates the moral obligation to respect each participant's intrinsic worth by undermining individuals' right to make rational and autonomous decisions regarding participation (Fisher & Fyrberg, 1994).
Utilitarian theory prioritizes the consequences (or utility) of an act over the application of universal principles (Mill, 1861/1957). From this perspective, an ethical decision is situation specific and must be governed by a risk–benefit calculus that determines which act will produce the greatest possible balance of good over bad consequences. An “act utilitarian” makes an ethical decision by evaluating the consequences of an act for a given situation. A “rule utilitarian” makes an ethical decision by evaluating whether following a general rule in all similar situations would create the greater good. Like deontology, utilitarianism is impersonal: It does not take into account interpersonal and relational features of ethical responsibility. From this perspective, psychologists’ obligations to those with whom they work can be superseded by an action that would produce a greater good for others (Fisher, 1999).
A psychologist adhering to act utilitarianism might decide that the potential knowledge about social behavior generated by a specific deception study could produce benefits for many members of society, thereby justifying the minimal risk of harm and violation of autonomy rights for a few research participants. A rule utilitarian might decide against the use of deception in all research studies because the unknown benefits to society would not outweigh the potential harm to the discipline of psychology if society began to see it as an untrustworthy science.
Communitarian theory assumes that right actions derive from community values, goals, traditions, and cooperative virtues. Accordingly, different populations with whom a psychologist works may require different conceptualizations of what is ethically appropriate (MacIntyre, 1989; Walzer, 1983). Unlike deontology, communitarianism rejects the elevation of individual over group rights. Whereas utilitarianism asks whether a policy will produce the greatest good for all individuals in society, communitarianism asks whether a policy will promote the kind of community we want to live in (Steinbock et al., 2003).
Scientists, as members of a community of shared values, have traditionally assumed that (a) the pursuit of knowledge is a universal good and that (b) consideration for the practical consequences of research will inhibit scientific progress (Fisher, 1999; Sarason, 1984; Scarr, 1988). From this "community of scientists" perspective, the results of deception research are intrinsically valuable, and standards or regulations prohibiting deceptive research would deprive society of this knowledge. Thus, communitarian theory may be implicitly reflected, at least in part, in the acceptance of deception research in the APA Ethics Code (Standard 8.07, Deception in Research) and in current federal regulations (Department of Health and Human Services [DHHS], 2009) as representing the values of the scientific community. At the same time, little is known about the extent to which the "community of research participants" shares the scientific community's valuing of deception methods (Fisher & Fyrberg, 1994).
Feminist ethics, or an ethics of care, sees emotional commitment to act on behalf of persons with whom one has a significant relationship as central to ethical decision making. This moral theory rejects the primacy of universal and individual rights in favor of relationally specific obligations (Baier, 1985; Brabeck, 2000; Fisher, 2000; Gilligan, 1982). Feminist ethics also focuses our attention on power imbalances and supports efforts to promote equality of power and opportunity. In evaluating the ethics of deception research, feminist psychologists might view intentional deception as a violation of interpersonal obligations of trust by investigators to participants and as reinforcing power inequities by permitting psychologists to deprive persons of information that might affect their decision to participate.