Chapter 5 Quiz: The Social Psychology of Belief and Group Conformity

Instructions: Answer all questions. For multiple choice, select the best answer. For short answer questions, responses should be 1-3 sentences. Answers are hidden — click to reveal after attempting each question.


Section I: Multiple Choice (Questions 1–15)

Question 1

In Asch's (1956) line-matching experiments, approximately what percentage of responses on critical trials showed conformity to the incorrect group consensus?

A) 5–10% B) 20–25% C) 35–40% D) 55–60% E) 75–80%

Reveal Answer **Correct answer: C) 35–40%** Asch found that approximately 37% of responses on critical trials (when confederates gave the unanimous wrong answer) showed conformity. While about 75% of participants conformed at least once, the overall rate across all trials was approximately 37%. This figure represents both normative influence (fear of social consequences) and, in some cases, genuine perceptual distortion under social pressure (as shown by later fMRI research by Berns et al., 2005).

Question 2

Deutsch and Gerard (1955) distinguished two types of social influence. A student who adopts a professor's view on a contested issue because they believe the professor has expert knowledge they lack is primarily demonstrating:

A) Normative social influence B) Deindividuation C) Informational social influence D) Reference group conformity E) Identity-protective cognition

Reveal Answer **Correct answer: C) Informational social influence** Informational social influence occurs when people use others' beliefs as evidence about what is true. Deferring to a professor's expertise is an epistemic strategy — using the professor's superior knowledge as a guide to truth. This is rational under conditions of genuine uncertainty or information asymmetry. Normative influence, by contrast, would operate through social costs of disagreement rather than epistemic deference.

Question 3

Tajfel and Turner's Social Identity Theory predicts that when a person's group membership is threatened by unflattering information about their group, they will most likely:

A) Update their beliefs to accept the unflattering information B) Seek additional sources to verify the information C) Engage in motivated reasoning to protect the group's positive image D) Leave the group and find a more positive identity E) Apply equally high standards to both flattering and unflattering information

Reveal Answer **Correct answer: C) Engage in motivated reasoning to protect the group's positive image** SIT predicts that individuals are motivated to maintain positive distinctiveness for their groups. When unflattering information threatens this, people are motivated to restore the positive evaluation through various strategies — most commonly by dismissing, reinterpreting, or seeking to refute the threatening information. While leaving the group (D) is one possible strategy, it is typically the last resort; motivated cognition to protect in-group evaluation is the primary default response.

Question 4

Kahan et al.'s research on "identity-protective cognition" found that on politically contested empirical questions, higher scientific literacy and numeracy were associated with:

A) Less polarization on the contested questions B) Greater accuracy in evaluating scientific consensus C) Greater polarization on the contested questions D) More nuanced, less certain beliefs E) Greater openness to changing positions

Reveal Answer **Correct answer: C) Greater polarization on the contested questions** This counterintuitive finding — that more analytically sophisticated individuals show greater partisan polarization on contested empirical questions — is one of the most important and replicated findings in the study of politically motivated reasoning. More sophisticated thinkers are better at finding reasons to accept identity-consistent information and reasons to reject identity-inconsistent information. This directly contradicts the "deficit model" which assumes that more knowledge and reasoning skill leads to less polarization.

Question 5

Which of the following is NOT one of Janis's eight symptoms of groupthink?

A) Illusion of unanimity B) Collective rationalization C) Self-appointed mindguards D) Principled disagreement protocols E) Belief in the group's inherent morality

Reveal Answer **Correct answer: D) Principled disagreement protocols** Principled disagreement protocols — established procedures for systematically raising and evaluating objections — are precisely what Janis recommended as a *remedy* for groupthink, not a symptom of it. The eight symptoms Janis identified are: illusion of invulnerability, collective rationalization, belief in inherent morality, stereotyped views of out-groups, direct pressure on dissenters, self-censorship, illusion of unanimity, and self-appointed mindguards.

Question 6

In the Elaboration Likelihood Model, attitude changes produced through the central route are characterized by which of the following?

A) Fast formation, easily reversed by counter-persuasion B) Durable, resistant to counter-persuasion, and predictive of behavior C) Based primarily on source attractiveness and social proof D) Produced by emotional arousal rather than argument evaluation E) Less correlated with actual behavior than peripheral-route attitude changes

Reveal Answer **Correct answer: B) Durable, resistant to counter-persuasion, and predictive of behavior** A key practical distinction between central and peripheral route persuasion is the durability and behavioral relevance of the attitude change produced. Central route processing produces attitude changes that are stable over time, resistant to reversal by counter-arguments, and strongly predictive of subsequent behavior, because the attitude is grounded in actual evaluation of the evidence. Peripheral route attitude changes are shallower and more easily reversed, because they are based on heuristic cues rather than substantive processing.

Question 7

The "illusory truth effect" refers to:

A) The tendency to believe that common beliefs must be true B) Increased perceived truth of statements that have been repeatedly encountered C) The belief that one's own group cannot be deceived D) The persuasive effect of authoritative-seeming language E) The false sense of certainty produced by confirmation bias

Reveal Answer **Correct answer: B) Increased perceived truth of statements that have been repeatedly encountered** The illusory truth effect is the empirical finding that repeated exposure to a statement increases its subjective feeling of truth, independent of reflective evaluation. This occurs because repeated processing fluency (the statement is easier to process because it is familiar) is misattributed to truth. Critically, the effect has been demonstrated even when participants know the statement is contested or even labeled as false, because familiarity affects intuitive truth assessment independently of deliberative judgment.

Question 8

Robert Cialdini's principle of "scarcity" is exploited in misinformation most commonly through:

A) Claims that many respected authorities endorse the misinformation B) Creating feelings of personal obligation to those who share information C) "Suppression narratives" suggesting the true information is being hidden or censored D) Presenting misinformation sources as similar to the audience E) Making audiences feel they have previously committed to the misinformation's premise

Reveal Answer **Correct answer: C) "Suppression narratives" suggesting the true information is being hidden or censored** The scarcity principle holds that things become more valued when they are rare or being taken away. In misinformation, this is exploited through narratives that claim the "real truth" is being suppressed by governments, corporations, or media elites — making the misinformation feel valuable precisely because it presents itself as rare, hidden, or censored. "Watch before it gets taken down" and "what they don't want you to know" are characteristic rhetorical signals of scarcity exploitation.

Question 9

Surowiecki's "wisdom of crowds" concept holds that large groups can produce better judgments than individual experts, but only under specific conditions. Which of the following conditions is NOT required for collective intelligence?

A) Diversity of opinion B) Independence of judgments C) Existence of a mechanism for aggregating judgments D) Majority-rule voting on the final answer E) Decentralization allowing access to local knowledge

Reveal Answer **Correct answer: D) Majority-rule voting on the final answer** Surowiecki's four conditions are: diversity of opinion, independence, decentralization, and aggregation. Majority-rule voting is one possible aggregation mechanism but is not itself required — indeed, market mechanisms such as prediction markets (which aggregate through prices) are often more accurate than majority votes. The key insight is that aggregation must allow independent signals to combine; majority voting can work but can also be dominated by correlated errors if diversity and independence conditions are not met.

Question 10

Haidt's Moral Foundations Theory proposes that politically liberal individuals typically:

A) Weight all six moral foundations approximately equally B) Emphasize Care and Fairness while treating other foundations as less morally compelling C) Show higher sensitivity to Authority and Sanctity foundations D) Base moral judgments primarily on Loyalty and group identity E) Weight Liberty/Oppression above all other foundations

Reveal Answer **Correct answer: B) Emphasize Care and Fairness while treating other foundations as less morally compelling** Haidt's cross-cultural research consistently finds that politically liberal individuals place high moral weight on Care/Harm (preventing suffering) and Fairness/Cheating (reciprocity and justice) while treating Loyalty, Authority, and Sanctity as less morally compelling or even as potential rationalizations for prejudice. Conservative individuals, by contrast, tend to distribute moral weight more evenly across the full set of foundations. This difference in moral vocabulary has important implications for cross-partisan communication.

Question 11

Brady et al.'s (2017) research on moral-emotional language on Twitter found that:

A) Moral-emotional language reduced sharing because it seemed extreme B) Each additional moral-emotional word increased retweet probability by approximately 20% C) Moral-emotional language was effective mainly for cross-partisan sharing D) Liberal and conservative posts differed markedly in their use of moral language E) Moral language effects were confined to political content

Reveal Answer **Correct answer: B) Each additional moral-emotional word increased retweet probability by approximately 20%** Brady and colleagues analyzed over half a million tweets and found a robust positive relationship between the density of moral-emotional language (words that are simultaneously morally judgmental and emotionally arousing) and sharing rates. The effect held across political affiliations and topic domains, and — crucially — was stronger within ideologically homogeneous networks than across ideological lines, suggesting that moralized content is optimized for within-group amplification.
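Note that a per-word effect of roughly 20% compounds multiplicatively across words. A minimal sketch of this arithmetic (the baseline rate and the exact 1.2 multiplier are illustrative assumptions, not Brady et al.'s fitted model):

```python
# Illustrative compounding of a ~20% per-word increase in retweet rate.
# baseline and per_word are assumed values for illustration, not
# parameters estimated by Brady et al. (2017).

def expected_retweets(n_moral_emotional_words, baseline=100.0, per_word=1.2):
    """Each additional moral-emotional word multiplies the expected rate by ~1.2."""
    return baseline * per_word ** n_moral_emotional_words

for n in range(4):
    print(n, round(expected_retweets(n), 1))  # 0→100.0, 1→120.0, 2→144.0, 3→172.8
```

The point of the sketch is simply that a 20% relative effect per word grows geometrically, so a densely moralized message can plausibly out-share a neutral one by a wide margin.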

Question 12

The concept of "group polarization" in social psychology refers to:

A) The phenomenon of groups splitting into irreconcilable factions B) The tendency for groups of like-minded individuals to converge on more extreme positions than any member initially held C) The exclusion of minority viewpoints from group discussions D) The adoption of opposing positions by groups on either side of a controversy E) The influence of group leaders on moderating extreme positions

Reveal Answer **Correct answer: B) The tendency for groups of like-minded individuals to converge on more extreme positions than any member initially held** Group polarization is the well-documented finding that deliberation among like-minded individuals tends to shift group opinion toward more extreme versions of the initial central tendency. The effect has been replicated across cultures, political orientations, and issue domains. Two main mechanisms explain it: persuasive arguments theory (deliberation generates disproportionately many arguments for the initially favored position) and social comparison theory (members shift position to appear at least as committed as other in-group members to the valued stance).

Question 13

The Asch conformity experiments found that the MOST critical variable in reducing conformity rates was:

A) Having the task be easier and less ambiguous B) The presence of just one person who disagreed with the majority C) Having the participant make their judgment privately (in writing) D) Using prestigious, high-status confederates E) Repeating the task multiple times to build confidence

Reveal Answer **Correct answer: B) The presence of just one person who disagreed with the majority** Unanimity was the critical variable in Asch's experiments. When just one confederate broke from the majority — even giving a different wrong answer, not necessarily the correct one — conformity rates dropped dramatically from ~37% to under 10%. This finding has profound implications for combating misinformation: visible, socially safe dissent provides "social permission" to trust one's own perception and resist majority pressure.

Question 14

Dan Kahan and colleagues' "cultural cognition" research demonstrates identity-protective cognition most powerfully through which finding?

A) Uneducated individuals are more susceptible to partisan bias than educated ones B) Numeracy skills improve climate change acceptance across all groups C) More numerate individuals show greater partisan divergence on contested scientific questions D) Identity-motivated reasoning is confined to low-information voters E) Cultural worldviews predict attitudes toward science generally, not just specific issues

Reveal Answer **Correct answer: C) More numerate individuals show greater partisan divergence on contested scientific questions** Kahan et al.'s key finding is that on questions like climate change and gun control, higher numeracy is associated with greater — not lesser — partisan polarization. High-numeracy individuals who hold hierarchical/individualist worldviews are more likely to dismiss climate science than their low-numeracy counterparts with the same worldview, because they have greater analytical resources to find supporting evidence for their identity-consistent position. This is the definitive empirical challenge to the "knowledge deficit" model of science communication.

Question 15

Psychological inoculation, as researched by van der Linden and Roozenbeek, works primarily by:

A) Providing accurate factual information to counter specific false claims B) Pre-exposing individuals to weakened forms of misinformation while teaching them to recognize manipulation techniques C) Shaming individuals who have shared misinformation D) Restricting access to sources of misinformation E) Building general scientific literacy as protection against specific false claims

Reveal Answer **Correct answer: B) Pre-exposing individuals to weakened forms of misinformation while teaching them to recognize manipulation techniques** Inoculation theory, borrowed from immunology, proposes that pre-exposure to weakened forms of misinformation — accompanied by explicit explanation of the manipulation technique being used — builds "cognitive antibodies" that resist subsequent exposure to stronger versions. Unlike debunking (correcting specific false beliefs after the fact), inoculation works prospectively and generalizes to novel instances of the same technique. Research has shown that brief gamified inoculation interventions (Bad News, Go Viral!) can significantly improve detection of misinformation at scale.

Section II: True/False (Questions 16–20)

Question 16

True or False: The "filter bubble" thesis — that algorithms create perfectly personalized information environments — has been fully confirmed by large-scale empirical research on social media usage.

Reveal Answer **FALSE** While Eli Pariser's filter bubble concept was highly influential, large-scale empirical research has produced a more nuanced picture. Studies (including Bakshy et al.'s 2015 Facebook study) show that most users are exposed to ideologically diverse content, and that individual choice to engage selectively with identity-consistent content often exceeds algorithmic filtering in reducing ideological diversity. The concern about echo chambers is empirically supported, but the mechanism is not primarily algorithmic personalization alone — individual selective engagement and social network structure play at least as large a role.

Question 17

True or False: Normative social influence only operates when people are aware that they are conforming to avoid social rejection.

Reveal Answer **FALSE** Normative social influence can operate unconsciously. Research (including Berns et al.'s 2005 fMRI study) suggests that social pressure can alter perception at a processing level that precedes conscious awareness. Participants may genuinely experience a shift in what they perceive or believe, not merely in what they report. Additionally, the process of gradually adopting beliefs originally endorsed only publicly (internalization through repeated performance) often occurs without awareness of the mechanism.

Question 18

True or False: Cialdini's "commitment and consistency" principle predicts that people who have publicly shared a piece of misinformation will be harder to correct than those who have privately held the same belief.

Reveal Answer **TRUE** Cialdini's commitment and consistency principle predicts that people feel psychological pressure to remain consistent with their prior public commitments. Someone who has publicly shared a piece of misinformation (especially on social media, where the share is documented and visible to their network) has made a public commitment to the claim. Correcting such a person requires them to acknowledge a publicly visible error, which threatens self-consistency. Research on correction after public endorsement supports this prediction — social sharing significantly reduces correction receptivity.

Question 19

True or False: Mass psychogenic illness (historically called "mass hysteria") typically spreads through random community contact rather than following the structure of social networks.

Reveal Answer **FALSE** Research on mass psychogenic illness consistently finds that transmission follows social network structure. Socially central individuals develop symptoms before peripheral individuals; people who know affected individuals are significantly more likely to develop symptoms than those with no social connection to affected community members. This pattern mirrors the network dynamics of rumor and misinformation spread, suggesting that the same social mechanisms underlie both types of contagion.

Question 20

True or False: According to Feinberg and Willer's research on moral reframing, conservative Americans were more persuaded by pro-environmental messages framed in terms of harm prevention than messages framed in terms of purity.

Reveal Answer **FALSE** Feinberg and Willer (2013) found the opposite: pro-environmental messages framed in terms of Purity ("a clean and unpolluted America") were more persuasive to conservatives than messages framed in terms of Harm/Care (which is the foundation that environmentalists typically use). This is because Purity is a foundation that conservatives weight highly, while Care is already a strong liberal foundation — preaching to the choir on Care does not increase persuasion for conservatives. The finding illustrates that effective cross-partisan communication requires using the moral vocabulary of the target audience, not the messenger's.

Section III: Short Answer (Questions 21–25)

Question 21

Define "tribal epistemics" and explain why digital social media platforms might intensify this phenomenon compared to pre-digital social environments.

Reveal Answer **Model Answer** Tribal epistemics refers to the subordination of independent epistemic evaluation to group loyalty — accepting or rejecting claims based primarily on their implications for one's social identity rather than their evidential support. Information is filtered through the question "would believing this be consistent with who I am and who my group is?" rather than "what is the evidence for this?" Digital social media intensifies tribal epistemics through several mechanisms: (1) Identity salience is chronically high — social media platforms are fundamentally identity-presentation environments where users constantly signal group membership; (2) Engagement algorithms favor content that activates identity-relevant responses, meaning users are disproportionately exposed to identity-politicized information; (3) Public social approval mechanisms (likes, shares) create social rewards for identity-consistent sharing; and (4) The removal of cross-domain social contact (neighborhood, workplace, civic organization) that historically created exposure to diverse others reduces the moderating effect of non-partisan social ties.

Question 22

Explain the difference between "public compliance without private acceptance" and genuine belief change in the context of conformity. Why does this distinction matter for understanding how misinformation spreads?

Reveal Answer **Model Answer** Public compliance without private acceptance occurs when a person expresses a belief publicly (to conform to social expectations or avoid social costs) while privately holding a different view. This is sometimes called "preference falsification" (Kuran, 1995). Genuine belief change occurs when the person's actual credence in a proposition changes as a result of social influence. This distinction matters for misinformation spread in several ways. First, compliance can over time shade into genuine acceptance through mechanisms like self-perception theory (we infer our private beliefs from our public behavior) and the illusory truth effect (repeated public expression increases familiarity, which increases perceived truth). Second, public compliance even without private acceptance contributes to the appearance of consensus, which then influences others through informational social influence — making the false consensus appear more robust than it is. Third, preference falsification can produce sudden opinion cascades: when conditions change that make dissent safe (a visible dissenter appears), previously compliant individuals may simultaneously reveal their private doubts, shifting apparent consensus rapidly.

Question 23

A misinformation researcher observes that a particular health claim is spreading rapidly in a specific online community even though it is factually false. Using concepts from this chapter, propose four distinct explanatory mechanisms that might account for the rapid spread.

Reveal Answer **Model Answer** Four distinct explanatory mechanisms: 1. **Moral-emotional amplification (Brady et al.)**: If the health claim is framed in morally charged terms — implicating corporate greed, government deception, threat to vulnerable populations — it activates moral-emotional processing that dramatically increases sharing rate. The moral framing creates intrinsic sharing motivation independent of epistemic evaluation. 2. **Identity-protective cognition (Kahan)**: If the claim is consistent with this community's identity-relevant beliefs (e.g., skepticism of pharmaceutical industry, distrust of government health authorities), members will apply low evidential standards to accepting it while applying high standards to counter-evidence, producing rapid within-community propagation. 3. **Social proof exploitation (Cialdini)**: As the claim accumulates likes and shares, subsequent viewers interpret the high engagement as evidence of credibility, triggering the social proof heuristic and accelerating further sharing in a cascade dynamic. 4. **Illusory truth through network repetition**: As the claim circulates repeatedly within the homophilous network, each exposure increases processing fluency and therefore perceived truth for individual members, even without any additional evidential support — entrenching the claim through familiarity rather than evaluation.

Question 24

Surowiecki argued that crowds are sometimes wiser than experts. However, social media trending topics are rarely accurate indicators of important news. Reconcile these observations using Surowiecki's four conditions.

Reveal Answer **Model Answer** Surowiecki's four conditions for collective intelligence are: diversity of opinion, independence, decentralization, and aggregation. Social media trending topics systematically violate at least three: **Independence is violated**: Social media is specifically designed to show users what others are engaging with, making individual decisions to engage heavily dependent on prior engagement by others. Trending is partly a self-fulfilling mechanism — content trends partly because it is shown as trending. The cascade dynamics this creates produce correlated errors rather than independent signals. **Diversity is compromised**: Content tends to trend within algorithmically defined communities before crossing to broader audiences, and the content that crosses community boundaries tends to be emotionally extreme rather than epistemically significant. **The aggregation mechanism is wrong**: Social media aggregates engagement — emotional reactions, curiosity, outrage — rather than epistemic judgments about importance or truth. The metric being aggregated (engagement) is not well-correlated with informational value. Thus trending topics reflect emotional salience and network amplification dynamics, not diverse, independent, expert judgments about what is important — violating the conditions under which Surowiecki's argument applies.
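The independence condition in the model answer can be made concrete with a toy simulation. This is a hedged sketch (the judge count, noise levels, and the shared-bias device are illustrative assumptions, not a model of any real platform): when every judge errs independently, averaging cancels the noise; when a cascade injects a common error into every judge, the crowd average inherits that error and cannot average it away.

```python
import random

random.seed(0)
TRUE_VALUE = 50.0
N_JUDGES, N_TRIALS = 100, 2000

def crowd_error(shared_bias_sd):
    """Mean absolute error of the crowd's average estimate across trials.

    shared_bias_sd > 0 injects a common error into every judge on a trial,
    mimicking non-independent (cascade-driven) judgments."""
    total = 0.0
    for _ in range(N_TRIALS):
        bias = random.gauss(0, shared_bias_sd)  # error shared by all judges
        estimates = [TRUE_VALUE + bias + random.gauss(0, 10) for _ in range(N_JUDGES)]
        total += abs(sum(estimates) / N_JUDGES - TRUE_VALUE)
    return total / N_TRIALS

independent = crowd_error(shared_bias_sd=0)   # iid errors: averaging works
correlated = crowd_error(shared_bias_sd=10)   # shared errors persist in the mean
print(independent, correlated)
```

Under these assumptions the correlated crowd's error is roughly an order of magnitude larger than the independent crowd's, which is the core of why engagement-driven trending (highly correlated attention) fails the conditions that make crowds wise.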

Question 25

Explain what "epistemic authority" is, why its decline is particularly problematic for misinformation resilience, and propose two approaches to rebuilding it.

Reveal Answer **Model Answer** Epistemic authority refers to the recognized capacity of certain individuals or institutions to produce reliable knowledge in a domain — authority that justifies deference to their assessments without requiring each person to independently verify every claim. Functional epistemic authority requires both perceived competence (the authority actually knows what they're talking about) and perceived benevolence (the authority is serving the audience's interests rather than their own). Its decline is problematic because epistemic communities require reference points to anchor collective belief. Without trusted epistemic authorities, people are thrown back on their own judgment for all questions — including those requiring extensive expertise to evaluate accurately. This creates vulnerability to authoritative-seeming but untrustworthy alternatives: once people distrust scientists on climate, they become more susceptible to industry-funded "experts" who use the trappings of authority without the substance. Two approaches to rebuilding epistemic authority: 1. **Demonstrated accountability and transparency**: Authorities rebuild trust through transparent acknowledgment of uncertainty and past error, public explanation of reasoning processes, and genuine accountability mechanisms when errors occur. This requires institutional reform, not just communication strategy — the perceived benevolence component requires evidence that institutions actually serve public rather than institutional interests. 2. **Pre-bunking trust attacks**: Since much of the trust decline results from strategic delegitimization campaigns, inoculation approaches targeting the specific techniques used to undermine authority (cherry-picking, "experts disagree," credential misrepresentation) can build resistance to these campaigns. Teaching people to recognize manufactured doubt as a technique can reduce its effectiveness at eroding legitimate epistemic authority.