Chapter 17: Exercises — Social Proof and Manufactured Consensus
Section I: Comprehension and Recall
Exercise 1 [Short Answer] Define social proof in your own words, drawing on Cialdini's framework. Under what two conditions, according to Cialdini, is social proof most powerful?
Exercise 2 [Short Answer] Explain the evolutionary basis for social proof as a cognitive heuristic. Why is the concept of "information cascades" relevant to understanding both the adaptive value of social proof and its potential for exploitation?
Exercise 3 [Identification] Identify and briefly describe four distinct social proof signals on social media platforms (e.g., like counts, follower counts, share counts, trending designations). For each, describe what the signal ostensibly represents and one way it can be artificially inflated.
Exercise 4 [Short Answer] Describe the Muchnik et al. (2013) experiment: what was the experimental design, and what did the results show? Why are the findings significant for understanding social influence on digital platforms?
Exercise 5 [Short Answer] What is "coordinated inauthentic behavior" and how does it differ from simple bot account manipulation? Why is this distinction important for detection efforts?
Section II: Analysis and Application
Exercise 6 [Analysis] Explain the "virality cascade" mechanism. Walk through the full sequence of events, from an initial social proof signal to widespread apparent consensus, identifying at each step whether the engagement is genuinely independent or algorithmically/artificially amplified.
Exercise 7 [Scenario Analysis] You see a post in your social feed with 500,000 likes and 80,000 shares claiming that a popular medication has been quietly found to be harmful in a leaked study. Using the concepts from this chapter, analyze the social proof dynamics at play. What would you need to know to assess whether the social proof signals are reliable in this case?
Exercise 8 [Comparative Analysis] Compare the social proof dynamics of (a) a restaurant that is always packed with customers, (b) a book with 50,000 Amazon reviews, and (c) a tweet with 500,000 likes. In which context is social proof most reliable? In which is it most susceptible to manipulation? What features of each context explain your answer?
Exercise 9 [Application] Apply the social proof framework to the Instagram influencer economy. Trace the chain of incentives that makes purchased followers economically rational for influencers, the resulting distortion of social proof signals for consumers, and the detection arms race that followed. Who is harmed and who benefits from this ecosystem?
Exercise 10 [Critical Analysis] The chapter describes the circular logic of trending algorithms: content trends because it is popular; it becomes more popular because it is trending. Is this circularity necessarily a design flaw, or could it reflect genuine organic consensus in some cases? How would you distinguish genuine trending from manufactured trending?
Exercise 11 [Case Application] Apply the social proof framework to analyze the political information environment during the 2016 U.S. presidential election. What specific mechanisms — bot accounts, coordinated posting, engagement amplification — contributed to manufactured consensus? What was the estimated scale of artificial engagement relative to genuine engagement?
Exercise 12 [Psychological Analysis] Using the evolutionary account of social proof, explain why the heuristic is poorly calibrated for digital environments. Specifically: what assumptions does social proof make about others' behavior that are violated by (a) bot accounts, (b) engagement pods, and (c) algorithmic amplification?
Section III: Empirical Engagement
Exercise 13 [Research Review] Summarize the key findings of Vosoughi, Roy, and Aral's 2018 Science paper on the spread of true and false information on Twitter. What mechanisms did the researchers identify as driving the differential spread? How do these mechanisms interact with social proof dynamics?
Exercise 14 [Research Design] Design an experiment to test whether visible like counts influence users' assessments of the truth or quality of news articles. Specify: (a) your hypothesis, (b) your experimental design, (c) your dependent measures, (d) your controls, and (e) potential confounds. What ethical considerations apply?
Exercise 15 [Data Interpretation] The Instagram like count removal experiment found mixed results: reduced social comparison anxiety in some users, but disorientation and opposition in others. How should researchers and policymakers interpret mixed findings when evaluating a platform intervention? What additional data would you want before recommending global implementation?
Exercise 16 [Platform Analysis] Conduct a social proof audit of a social media platform of your choice. Identify: (a) all visible social proof signals in the interface, (b) the behavioral actions these signals encourage, (c) any evidence that signals may be inflated or manufactured, and (d) any design features that attempt to address social proof manipulation. Write a 400-word analysis of your findings.
Section IV: Ethical Reasoning
Exercise 17 [Ethical Analysis] Evaluate the ethics of three actors in the manufactured consensus ecosystem: (a) the bot network operator who sells fake followers; (b) the influencer who purchases followers knowing they are fake; (c) the platform that fails to remove fake followers despite having the technical capability to do so. How does moral responsibility differ across these three actors?
Exercise 18 [Stakeholder Analysis] Identify five stakeholders affected by manufactured social proof in political contexts: (1) voters who encounter manufactured consensus, (2) legitimate political campaigns competing with manufactured consensus, (3) platforms that profit from engagement generated by inauthentic content, (4) foreign state actors who use manufactured consensus as an influence tool, and (5) democratic societies whose information environments are manipulated. Describe the interests and harms of each.
Exercise 19 [Policy Design] Design a regulatory framework for social proof signals on social media platforms. Your framework should address: (a) mandatory disclosure of what engagement metrics include and exclude; (b) auditing requirements for influencer engagement; (c) requirements for platforms to detect and remove inauthentic engagement; (d) transparency about algorithmic amplification. Be specific about enforcement mechanisms.
Exercise 20 [Moral Responsibility] The Velocity Media audit found that 12% of trending content showed early engagement patterns consistent with coordinated inauthentic behavior. Marcus Webb argued for improving detection rather than eliminating the feature; Dr. Johnson argued that the structural problem (engagement metrics as proxies for quality) would persist even with better detection. Which position is more defensible? What standard of care should platforms apply to features that institutionalize potentially manipulable social proof?
Exercise 21 [Comparative Ethics] Compare the ethics of the following: (a) a book publisher buying copies of a book to get it on the bestseller list (a known practice); (b) a political campaign buying social media followers to appear more popular; (c) a health product company buying likes for content that makes false health claims. Are these ethically equivalent? What makes the digital cases more or less problematic than the physical case?
Section V: Design Challenges
Exercise 22 [Redesign Exercise] Redesign Facebook's News Feed to reduce the systematic amplification of emotionally provocative content while maintaining user engagement. What specific changes to the algorithm would you propose? What trade-offs would these changes involve, and how would you measure whether the redesign succeeded?
Exercise 23 [Design Analysis] Instagram's like count removal experiment ultimately led to an opt-in implementation rather than a platform-wide default. Design a study to test whether opt-in or opt-out (hiding like counts by default, with the option to show them) produces better outcomes for user wellbeing. What would the study need to measure? What baseline data would you need?
Exercise 24 [Interface Design] Design an alternative to the like count that would provide users with useful information about content quality without being as susceptible to manipulation as simple engagement counts. Consider what information would genuinely help users assess content value and how that information could be gathered and displayed.
Exercise 25 [Trending Redesign] Redesign the trending topics feature to reduce its susceptibility to coordinated manipulation while preserving its function of surfacing genuinely significant events and conversations. What criteria would you use to determine what is "trending" if not raw engagement counts? How would your redesign address the circularity problem identified in the chapter?
Section VI: Personal Reflection
Exercise 26 [Reflection] Think of a time when you changed your behavior or beliefs based on apparent social consensus on a digital platform — perhaps you shared something because many others had shared it, believed something because it had high engagement, or purchased something because of an influencer endorsement. Reflecting on this experience: were the social proof signals you responded to reliable? What would you do differently knowing what you now know?
Exercise 27 [Self-Assessment] Rate yourself on the following: how often do you (a) check like counts before deciding whether to read content, (b) evaluate news credibility based on how many people have shared it, (c) form opinions about celebrities or public figures based on their follower counts? For each behavior you recognize in yourself, explain the social proof mechanism at work.
Exercise 28 [Observation Exercise] For one week, before engaging with any piece of content that has visible engagement metrics, note: (a) the specific metrics visible, (b) your initial reaction to those metrics, (c) whether those metrics influenced your decision to engage with the content, and (d) after engaging, whether the content quality seemed commensurate with its engagement signals. Report your observations and analyze them.
Section VII: Integration and Synthesis
Exercise 29 [Cross-Chapter Integration] Chapter 16 examined loss aversion; this chapter examines social proof. Both are evolutionary cognitive heuristics that social media platforms exploit. Compare and contrast these two mechanisms: how do they differ in their psychological basis? How do they interact in the context of a feature like the like count, which can register as a loss (when the count declines) and as social proof (through its absolute number)?
Exercise 30 [Synthesis Essay] In 600-800 words, argue for or against the following proposition: "Because social proof signals on social media are systematically corrupted by manufactured engagement, regulators should require platforms to hide quantified engagement metrics from users." Address both the evidence of psychological harm that supports this position and the epistemic loss (the removal of genuine social information) that counts against it.
Exercise 31 [Historical Comparison] Identify a historical example of manufactured consensus — the engineering of apparent popular opinion through means other than genuine public persuasion — from before the social media era. Analyze how the mechanisms compare to digital manufactured consensus: what was similar, what was different, and what the comparison reveals about the novelty (or continuity) of the social media problem.
Exercise 32 [Industry Perspective] Write a 300-word statement from the perspective of Meta's communications team defending the decision not to remove like counts globally after Instagram's test. Address the specific concerns raised in the chapter while maintaining that the company is acting responsibly. Then write a 200-word response from a mental health researcher who participated in the test.
Exercise 33 [Legal Analysis] Several jurisdictions have considered legislation requiring platforms to disclose the proportion of accounts interacting with their trending content that have been flagged as potentially inauthentic. Analyze this proposal: what would it require technically, who would it protect, and what enforcement challenges would it face?
Exercise 34 [Theory Application] Information cascade theory predicts that under uncertainty, people rationally defer to others' apparent choices. But this rationality assumption breaks down when others' choices are manufactured. Formalize the conditions under which social proof is epistemically rational versus epistemically irrational in digital environments. What information would a user need to make social proof rational?
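One possible starting point for the formalization asked for in Exercise 34, under assumed notation (none of these symbols appear in the chapter): let $\alpha$ be the fraction of observed engagements that are authentic, let an authentic engagement reflect an independent judgment that is correct with probability $q > \tfrac{1}{2}$, and let an inauthentic engagement be adversarial, pointing the right way with probability $p < \tfrac{1}{2}$.

```latex
% Effective informativeness of a single observed engagement:
q_{\mathrm{eff}} = \alpha q + (1 - \alpha)\, p
% Deferring to observed engagement is epistemically rational only when
% q_{\mathrm{eff}} > \tfrac{1}{2}, i.e.
\alpha > \frac{\tfrac{1}{2} - p}{\,q - p\,}
```

For example, with $q = 0.7$ and $p = 0.2$, the threshold is $\alpha > 0.6$: more than 60% of engagements must be authentic before a single observed engagement carries any net information. This sketch also makes the independence assumption explicit: under cascade dynamics, engagements are correlated, so the effective sample size is far smaller than the raw engagement count, which is a separate way social proof can fail to be rational even when $\alpha$ is high.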
Exercise 35 [Capstone Project] Choose a major social media platform and conduct a comprehensive analysis of its social proof ecosystem. Your analysis should address: (a) what social proof signals are visible to users and how they are presented; (b) documented evidence of social proof manipulation on that platform; (c) the platform's stated policies and actual enforcement regarding inauthentic engagement; (d) the implications for political information, influencer marketing, and user wellbeing; and (e) your recommendations for platform redesign and regulatory intervention. Your analysis should be 1,000-1,500 words and include specific examples and cited evidence.