Language or prosody, digital communication will alter the ways in which we exchange information, communicate norms, and exert persuasive influence (Bargh & McKenna, 2004). Nonetheless, other studies have shown that, in certain online contexts, laws of social influence, such as the foot-in-the-door technique, still hold in purely virtual settings (Eastwick & Gardner, 2009), and merely providing participants with numerical consensus information can change prejudicial beliefs about various racial groups (Stangor, Sechrist, & Jost, 2001) and the obese (Puhl, Schwartz, & Brownell, 2005). This suggests that, while there may have been initial doubts about the extent of conformity in anonymous online contexts, these new virtual spaces remain susceptible to social influence.
Prior research has also raised questions about whether conformity operates differently within certain domains, such as moral or evaluative judgments. Traditional philosophical views (e.g., Aristotle, 1941; Kant, 1996) emphasize that moral judgments should ideally be free from social influences, depending only on one's own judgment. In line with this ideal, more recent psychological experimentation suggests that people are, at least sometimes, less likely to conform when they have a strong moral basis for an attitude (Hornsey, Majkut, Terry, & McKimmie, 2003). In contrast, however, other studies have shown that at least some moral opinions can be influenced by social pressure in small group discussions (Aramovich, Lytle, & Skitka, 2012; Kundu & Cummins, 2012; Lisciandra, Postma-Nilsenová, & Colombo, 2013), and that information about the distribution of responses elicits conformity in deontological, but not consequentialist, responses to the Trolley problem (Bostyn & Roets, 2016). Taking these ideas together, we were interested in whether the mere knowledge of others' opinions online would produce conformity regarding moral issues, particularly in online contexts.
Study 1: impersonal statistics influence moral judgments
In Study 1, we examined participants' sensitivity to anonymous moral judgments regarding ethical dilemmas. We presented participants with two stories, along with statistical information about how other participants had responded. Unlike other research providing full distributions of responses (e.g., Bostyn & Roets, 2016), this information is similar to what users might see on a social media website like Twitter, Facebook, or Reddit, where users can see numerical information about how other users reacted to some opinion (e.g., '15 users liked this post' or '35 users favorited this tweet'). While we provided no information about what proportion of participants responded this way to each scenario, this mirrors the experience of being in an online context, where we are unaware of how many users have seen a post without reacting.
Participants Participants were recruited through the online labor market Amazon Mechanical Turk (MTurk) and redirected to Qualtrics to complete an online survey. All participants provided written informed consent as part of an exemption approved by the Institutional Review Board of Duke University. Each participant rated one of two scenarios; 302 participants rated Scenario A, while 290 participants rated Scenario B. Participants were restricted to those located in the US with a task approval rating of at least 80%. Although no demographic
SOCIAL INFLUENCE 59
information was collected on our participants specifically, a typical sample of MTurk users is considerably more demographically diverse than an average American college sample (36% non-White, 55% female; mean age = 32.8 years, SD = 11.5; Buhrmester, Kwang, & Gosling, 2011). Numerous replication studies have also demonstrated that data collected on MTurk is reliable and consistent with other methods (Rand, 2012). Participants were compensated $.10 for their involvement.
Materials Participants were randomly assigned to one of two scenarios. Scenario A, one of Haidt's classic moral scenarios, describes a family that eats their dead pet dog (Haidt, Koller, & Dias, 1993). Scenario B involves the passengers of a sinking lifeboat who sacrifice an overweight, injured passenger. (See Table 1 for the full text of the scenarios.) These scenarios were chosen partly because they fall under different moral foundations (Haidt & Graham, 2007). Because the foundations have been shown to exhibit dissimilar properties in other studies (e.g., Young & Saxe, 2011), we were interested in how the degree of conformity might vary between a scenario involving harm violations and one involving purity violations.
Procedure Participants read an ethical dilemma and were asked how morally condemnable the agent’s actions were. Ratings were made on an 11-point Likert scale from 0 (completely morally acceptable) to 10 (completely morally condemnable). Participants were randomly assigned to one of three conditions in this survey. Two of the conditions contained a prime to induce conformity by providing an established opinion about the scenario. The form of that prime mirrored that seen on many social media websites (e.g., Facebook): it described the number of people who provided a given rating when viewing a similar scenario. For Scenario A, participants read the following: ‘58 people who previously took this survey rated it as morally condemnable [acceptable]’. Participants read an identical statement for Scenario B, except they were told that 65 people previously took the survey. To ensure that no deception was used, these numbers of people had indeed rated these scenarios that way in a previous experiment.
The final condition served as a baseline and contained no prime; participants merely read and rated the moral dilemma. This design was repeated in separate samples for scenarios A and B. While the core of the paradigm remained constant throughout our experiments, the survey from Study 1 Scenario B also contained a follow-up question measuring level of confidence and a catch question about details from the scenario.
Table 1. Scenarios detailing moral violations in the purity (Scenario A) and harm (Scenario B) domains.

Scenario A: A family's dog was killed by a car in front of their house. They had heard that dog meat was delicious, so they cut up the dog's body and cooked it and ate it for dinner.

Scenario B: A cruise boat sank. A group of survivors are now overcrowding a lifeboat, and a storm is coming. The lifeboat will sink, and all of its passengers will drown unless some weight is removed from the boat. Nobody volunteers. Ten passengers are so small that two of them would have to be thrown overboard to save the rest. However, one passenger is very large and seriously injured. If the ten small passengers throw the very large passenger overboard, then he will drown but the others will survive. They throw the large passenger overboard.
60 M. KELLY ET AL.
We performed a one-way ANOVA on moral ratings by condition for each scenario. In Scenario A, moral ratings differed significantly across the three conditions, F(2, 299) = 3.78, p = .024, ηp² = .025. Post-hoc Tukey tests of the three conditions indicated that the condemnable group (M = 7.09, SD = 2.98) gave significantly higher (more condemnable) ratings than the acceptable group (M = 5.80, SD = 3.67), p = .019, d = .39 (Figure 1). Comparisons between the baseline group (M = 6.26, SD = 3.47) and the other two groups were not significant. The same pattern was obtained for Scenario B: moral ratings differed significantly across the three conditions, F(2, 287) = 4.28, p = .015, ηp² = .029. Post-hoc Tukey tests of the three conditions indicated that the condemnable group (M = 6.08, SD = 2.90) gave significantly higher (more condemnable) ratings than the acceptable group (M = 4.82, SD = 3.08), p = .010, d = .42 (Figure 1). Comparisons between the baseline group (M = 5.43, SD = 2.97) and the other two groups were not significant. For illustrative purposes, all figures show the average difference from baseline for each condition.
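For readers who wish to reproduce this style of analysis, the steps above (one-way ANOVA, partial eta squared for a one-way design, and Cohen's d with a pooled standard deviation) can be sketched in Python. Note that the ratings below are randomly simulated to loosely match the means and SDs reported for Scenario A; they are illustrative stand-ins, not the actual data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Simulated 0-10 moral ratings, one array per condition (hypothetical data
# loosely matching the reported Scenario A means/SDs and group sizes).
condemnable = np.clip(rng.normal(7.1, 3.0, 100), 0, 10)
acceptable = np.clip(rng.normal(5.8, 3.7, 100), 0, 10)
baseline = np.clip(rng.normal(6.3, 3.5, 102), 0, 10)

# One-way ANOVA across the three conditions.
F, p = stats.f_oneway(condemnable, acceptable, baseline)

# Partial eta squared: in a one-way design this reduces to
# SS_between / SS_total.
grand = np.concatenate([condemnable, acceptable, baseline])
ss_between = sum(len(g) * (g.mean() - grand.mean()) ** 2
                 for g in (condemnable, acceptable, baseline))
ss_total = ((grand - grand.mean()) ** 2).sum()
eta_sq = ss_between / ss_total

def cohens_d(a, b):
    """Cohen's d using the pooled standard deviation."""
    na, nb = len(a), len(b)
    pooled = np.sqrt(((na - 1) * a.var(ddof=1) + (nb - 1) * b.var(ddof=1))
                     / (na + nb - 2))
    return (a.mean() - b.mean()) / pooled

print(f"F = {F:.2f}, p = {p:.3f}, eta_p^2 = {eta_sq:.3f}")
print(f"d (condemnable vs. acceptable) = {cohens_d(condemnable, acceptable):.2f}")
```

Post-hoc pairwise comparisons of the kind reported here are available in recent SciPy versions via `scipy.stats.tukey_hsd`.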
We found that manipulations containing sparse statistical data about other participants' attitudes were effective in inducing conformity in moral judgments. Though early research on conformity suggested that face-to-face interactions were critical, and both philosophical and psychological writing on moral judgment suggests that it should be free from social influence, these results show that providing statistical information about how others responded is sufficient to induce conformity in moral judgments. Even subtle social information in anonymous contexts seems to affect moral judgments.
Figure 1. Statistical information about other participants' moral judgments significantly influences individual responses. Note: error bars represent standard errors. *p < .05.
Having observed conformity to manipulations containing only statistical information, we were next interested in how different kinds of arguments, specifically emotional and rational arguments, might be more or less effective at influencing moral judgments.
Study 2: rational arguments elicit more conformity than emotional arguments
Having observed conformity to primes using mere statistical information, we were interested in whether the effect could be strengthened by the addition of different types of arguments: those containing emotionally charged language to appeal to participants' feelings, or those using reasoning that refers to consequences or moral principles. The distinction between emotional and rational arguments reflects some of the core predictions put forth by prominent psychological models of moral judgment. In the Social Intuitionist Model (SIM), for example, 'moral intuitions (including moral emotions) come first and directly cause moral judgments' (Haidt, 2001, p. 814), while reasoning is purely a post hoc defense of those emotional intuitions. The SIM predicts that moral conformity would only manifest by altering others' emotional intuitions; thus, in order to change what people think about a moral issue, one must first change how they feel.
This prediction is supported by a host of studies that measure changes in moral opinions after manipulating emotions and reasoning (for a review, see Avramova & Inbar, 2013). For example, inducing positive emotions through funny videos (Valdesolo & DeSteno, 2006), encouraging emotion regulation (Feinberg, Willer, Antonenko, & John, 2012), and prompting longer reflection (Paxton & Greene, 2010) all generated less harsh moral judgments. Furthermore, moral outrage from one scenario may spill over into harsher judgments of subsequent scenarios (Goldberg, Lerner, & Tetlock, 1999), and emotion drives higher ascription of intentionality in cases involving negative consequences (Ngo et al., 2015). Recent work utilizing virtual reality also demonstrates a discrepancy between hypothetical moral judgments and moral decisions taken in virtual environments, and this discrepancy seems modulated by emotional responses (Francis et al., 2016; Patil, Cogoni, Zangrando, Chittaro, & Silani, 2014). Other work suggests that emotions are instrumental in driving moral behavior (for a review, see Teper, Zhong, & Inzlicht, 2015). This literature therefore suggests that emotional manipulations would be particularly effective in swaying moral attitudes.
In accordance with these findings, we hypothesized that arguments appealing to participants' emotions would affect their judgments more than arguments citing abstract principles, rights, or reasons. To test this hypothesis, we gave participants emotional or rational justifications for why the dilemma was either morally acceptable or morally condemnable according to previous participants.