Exercises: Misinformation, Disinformation, and Platform Governance

These exercises progress from concept checks to challenging applications. Estimated completion time: 3-4 hours.

Difficulty Guide:

  • Star-1: Foundational (5-10 min each)
  • Star-2: Intermediate (10-20 min each)
  • Star-3: Challenging (20-40 min each)
  • Star-4: Advanced/Research (40+ min each)


Part A: Conceptual Understanding (Star-1)

Test your grasp of core concepts from Chapter 31.

A.1. Section 31.1.1 distinguishes between misinformation, disinformation, and malinformation. For each of the following scenarios, classify the information type and explain your reasoning:

  • (a) A political operative leaks an authentic but private email from a rival candidate, removing context to make the rival appear corrupt.
  • (b) A well-meaning neighbor shares a Facebook post claiming that 5G towers cause cancer, genuinely believing the claim is true.
  • (c) A state-sponsored troll farm creates fake accounts to spread fabricated stories about election fraud.
  • (d) A journalist reports preliminary findings from a study that is later retracted due to methodological flaws.
  • (e) An activist shares real statistics about police misconduct but selectively omits data showing recent reforms.

A.2. Explain the "information disorder spectrum" as described by Wardle and Derakhshan (Section 31.1.2). Why do the three elements of their framework (agent, message, interpreter) make content moderation more difficult than a simple true/false classification system?

A.3. Section 31.2.1 describes the Vosoughi, Roy, and Aral (2018) study on the spread of false information. Summarize the three most significant findings. Then explain Dr. Adeyemi's observation about why the finding on novelty is particularly troubling for platform governance.

A.4. Define the "content moderation trilemma" (Section 31.3.3). Provide a concrete example illustrating each of the three trade-off combinations (fast + scalable, fast + accurate, scalable + accurate).

A.5. In your own words, explain the difference between Section 230(c)(1) and Section 230(c)(2) of the Communications Decency Act. Why was Section 230(c)(2) included, and what problem was it intended to solve?

A.6. Section 31.5.2 introduces "prebunking" (inoculation theory). How does prebunking differ from fact-checking, and why do researchers believe it may be more effective across political ideologies?

A.7. Describe the "amplification distinction" introduced in Section 31.6.2. How does Sofia Reyes's loudspeaker analogy illustrate why this distinction matters for platform governance?


Part B: Applied Analysis (Star-2)

Analyze scenarios, arguments, and real-world situations using concepts from Chapter 31.

B.1. Consider the following scenario:

A health technology company launches a community forum on its platform where patients can discuss their conditions and share advice. One user, who identifies themselves as a nurse, begins posting recommendations for herbal supplements as alternatives to prescribed medications. The posts are written in professional-sounding language, cite real (but misinterpreted) studies, and attract enthusiastic engagement from other users. Several patients report following the advice and reducing their prescribed medications.

Using the Misinformation Response Framework from Section 31.8, walk through all five steps to analyze this situation. At which step is the governance challenge most difficult, and why?

B.2. The chapter describes how the VitraMed data breach (Section 31.7) generated three distinct types of false information: disinformation about data selling, malinformation about algorithmic discrimination, and misinformation created through the mixing of true and false elements. Analyze why the malinformation claim (partially true, about model bias) was the most damaging. In your analysis, explain what Mira means when she says "the truth is trapped."

B.3. Compare the two regulatory models summarized in the callout box in Section 31.4.2 (Section 230 vs. the EU DSA) across the following dimensions:

  • (a) Who bears the burden of proof for content moderation decisions?
  • (b) What mechanisms exist for transparency?
  • (c) How is algorithmic amplification addressed?
  • (d) What recourse does an individual user have if their content is wrongly removed?

B.4. Section 31.2.3 describes how Facebook's algorithm weighted "angry" reactions at five times the weight of other reactions. Explain the mechanism by which this design decision would amplify divisive content. Then propose an alternative weighting scheme that might reduce amplification of misinformation while still providing useful engagement signals to the recommendation algorithm.
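The amplification mechanism in B.4 can be made concrete with a toy ranking function. This is a sketch only: the reaction names, weights, and post counts are illustrative assumptions, not Facebook's actual implementation, and the "flat" scheme is just one candidate answer to the exercise.

```python
# Hypothetical, simplified feed-ranking score. A post's rank is the
# weighted sum of the reactions it receives; weighting "angry" (and other
# high-arousal reactions) at 5x gives divisive content a ranking advantage.
REACTION_WEIGHTS_ORIGINAL = {
    "like": 1, "love": 5, "haha": 5, "wow": 5, "sad": 5, "angry": 5,
}

# One possible alternative: weight every reaction equally, so emotional
# arousal by itself no longer boosts a post's rank.
REACTION_WEIGHTS_FLAT = {
    "like": 1, "love": 1, "haha": 1, "wow": 1, "sad": 1, "angry": 1,
}

def ranking_score(reactions: dict[str, int], weights: dict[str, int]) -> int:
    """Sum of reaction counts multiplied by their per-reaction weights."""
    return sum(weights[r] * count for r, count in reactions.items())

# Two hypothetical posts: one divisive (many angry reactions), one not.
divisive_post = {"like": 100, "angry": 400}
friendly_post = {"like": 500, "love": 100}

# Under the original weighting the divisive post outranks the friendly
# one (2100 vs 1000); under the flat weighting the ordering reverses
# (500 vs 600), even though the underlying engagement is unchanged.
```

Note that the flat scheme still passes an engagement signal to the recommender (total reactions); what it removes is the premium on outrage. Your own proposal in B.4 might instead down-weight "angry" below 1, or weight reactions differently per content category.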

B.5. Eli observes that his grandmother sees more health misinformation in her Facebook feed than his roommate does (Section 31.2.3). Using concepts from both Chapter 31 and Chapter 32 (digital divide), analyze why engagement algorithms might disproportionately target certain demographic groups with misinformation. Include at least two specific mechanisms.

B.6. Section 31.5.1 identifies the "implied truth effect" — where unlabeled content is perceived as more credible because other content has been labeled. Design a content labeling system that addresses this problem. Explain what changes you would make to current labeling practices and what evidence supports your proposal.


Part C: Real-World Application Challenges (Star-2 to Star-3)

These exercises ask you to investigate the information environment around you.

C.1. (Star-2) Misinformation Audit. Spend 30 minutes scrolling through a social media platform you use regularly. Identify at least three posts that contain claims that could be misinformation, disinformation, or malinformation. For each post, classify it using the Wardle and Derakhshan framework, identify the likely mechanism of spread (emotional arousal, identity affirmation, social currency, or cognitive shortcuts), and assess whether any platform intervention (label, demotion, removal) has been applied. Document your findings in a table.

C.2. (Star-2) Fact-Check Challenge. Select a health-related claim you have encountered on social media in the past month. Attempt to verify or debunk it using at least three different sources (a fact-checking organization, a primary scientific source, and a reputable news outlet). Document: (a) the original claim, (b) what each source says, (c) your assessment of the claim's accuracy, and (d) your reflection on how much time and effort fact-checking requires relative to the effort of sharing the original claim.

C.3. (Star-3) Platform Policy Comparison. Compare the content moderation policies of two social media platforms (e.g., Meta/Instagram, X/Twitter, TikTok, YouTube). For each platform: (a) What is their stated policy on misinformation? (b) What enforcement mechanisms do they describe? (c) What transparency reports are publicly available? (d) What appeal mechanisms exist for users whose content is removed? Write a one-page comparison identifying which platform provides more effective governance and why.

C.4. (Star-3) Algorithmic Feed Experiment. If possible, create a new social media account on a platform that serves algorithmic recommendations to accounts with no prior history. Over the course of one week, interact only with content related to a specific topic (e.g., health, politics, climate). Document what the algorithm recommends to you over time. Do you observe any patterns consistent with the "rabbit hole" dynamic described in Section 31.2.3? Describe your methodology and findings.


Part D: Synthesis & Critical Thinking (Star-3)

These questions require you to integrate multiple concepts from Chapter 31 and think beyond the material presented.

D.1. The chapter identifies a tension between encryption (which protects privacy) and content moderation (which requires the ability to observe content). WhatsApp messages are end-to-end encrypted, meaning that the platform cannot read message content — but also cannot moderate misinformation spreading within group chats.

Write a two-paragraph analysis of this tension. In the first paragraph, argue that encryption should be maintained even at the cost of reduced content moderation. In the second paragraph, argue the opposite. Then, in a third paragraph, propose a governance approach that attempts to address both concerns. Reference specific concepts from this chapter and from Chapter 8 (surveillance).

D.2. Dr. Adeyemi states: "You can fact-check a million claims and still not address the structural incentive to produce them" (Section 31.1.3). Construct an argument that addresses the structural incentives rather than individual claims. Your argument should include: (a) what structural incentive you would target, (b) what policy mechanism you would use, (c) what trade-offs your proposal involves, and (d) how you would measure its effectiveness.

D.3. Mira observes that VitraMed "can't address the real bias issue without it being interpreted through the lens of the false narratives" (Section 31.7.1). This describes a situation in which the information ecosystem prevents honest acknowledgment of real problems. Identify another real-world example where the misinformation environment has made it harder for an organization or government to acknowledge and address a genuine problem. Analyze the dynamics using concepts from this chapter.

D.4. The chapter presents three proposals for reforming platform accountability: algorithmic accountability, data access for researchers, and business model reform (Section 31.6.3). Rank these three proposals from most to least likely to produce meaningful change, and justify your ranking. Consider feasibility, political opposition, potential unintended consequences, and the structural incentives of existing platform business models.


Part E: Research & Extension (Star-4)

These are open-ended projects for students seeking deeper engagement. Each requires independent research beyond the textbook.

E.1. The Infodemic in Depth. The WHO declared a "COVID-19 infodemic" alongside the pandemic (Section 31.7.2). Research the infodemic response in one specific country of your choice. Write a 1,000-word report covering: (a) what types of health misinformation were most prevalent, (b) what interventions the government and platforms deployed, (c) what evidence exists about the effectiveness of those interventions, (d) what role algorithmic amplification played, and (e) what lessons the experience offers for future health crises. Use at least four sources beyond this textbook.

E.2. Comparative Regulatory Analysis. The chapter identifies platform governance approaches in Australia, Brazil, India, and Singapore (Section 31.4.3). Choose one of these jurisdictions and research its approach in depth. Write a comparative analysis (800-1,200 words) that evaluates the approach against the EU DSA framework. Consider: What does the jurisdiction's approach do better? What does it do worse? What human rights concerns does it raise? Would its approach be effective if adopted in a different cultural or political context?

E.3. Prebunking Experiment Design. Drawing on the research described in Section 31.5.2, design a prebunking intervention for a specific population (e.g., college students, older adults, healthcare workers) targeting a specific misinformation technique (e.g., emotional manipulation, false authority, scapegoating). Write a research proposal (600-1,000 words) including: (a) the target population and misinformation technique, (b) the intervention design, (c) how you would measure effectiveness, (d) what control conditions you would use, and (e) what ethical considerations apply.

E.4. Platform Worker Investigation. Research the working conditions of content moderators (referenced in Section 31.3.1 and Chapter 31's discussion of the "Behind the Scenes" workforce). Write a report (800-1,200 words) examining: (a) where content moderators are typically located, (b) what working conditions they face, (c) what psychological harms have been documented, (d) what legal protections exist, and (e) how the harms experienced by content moderators connect to the chapter's themes of Power Asymmetry and Accountability Gap.


Solutions

Selected solutions are available in appendices/answers-to-selected.md.