Chapter 33: Exercises — Policy Responses to Misinformation: Global Perspectives

Section A: Conceptual Understanding

Exercise 1: The Definitional Challenge Define and distinguish the following three terms: misinformation, disinformation, and malinformation. For each, provide one concrete example from a real or realistic scenario. Then explain: why does the distinction between misinformation and disinformation matter for legal regulation? What intent requirements would apply to each?

Exercise 2: The Four Structural Problems Section 33.1 identified four structural problems that make misinformation hard to regulate: the definitional problem, the scale problem, the speed problem, and the cross-border problem. For each problem:
- Briefly restate the core challenge in your own words.
- Identify a policy intervention that would reduce (but not eliminate) the problem.
- Identify a potential negative consequence of that intervention.

Exercise 3: Constitutional Mapping Read the following three hypothetical laws. For each, analyze whether it would be constitutional under (a) the US First Amendment framework, and (b) the ECHR Article 10 framework. Explain your reasoning.
- A law requiring social media platforms to remove "demonstrably false" claims about election results within 24 hours of a government agency's determination.
- A law requiring political advertisers to provide a source for any factual claim in an online advertisement.
- A law imposing criminal liability on individuals who knowingly spread false health information during a declared national emergency.

Exercise 4: Section 230 Analysis Read the text of Section 230(c)(1) and (c)(2). For each of the following scenarios, determine whether Section 230 immunity would apply:

a) Facebook fails to remove a defamatory post about a private citizen after being notified by the subject.

b) YouTube's algorithm recommends a conspiracy theory video to a user based on their viewing history.

c) Twitter suspends an account for violating its misinformation policy, and the account holder sues for wrongful suspension.

d) A platform publishes its own editorial commentary alongside a user's post, amplifying the user's message.

Exercise 5: Comparative Policy Table Complete the following comparison table for the five regulatory frameworks discussed in the chapter. Use your own analysis where the chapter provides information, and note where information is unclear or contested.

| Framework | Year | Jurisdiction | Enforcement body | Penalty | Free speech safeguards | Scope |
|---|---|---|---|---|---|---|
| Section 230 (US) | | | | | | |
| DSA (EU) | | | | | | |
| NetzDG (Germany) | | | | | | |
| POFMA (Singapore) | | | | | | |
| Online Safety Act (Australia) | | | | | | |

Section B: Applied Analysis

Exercise 6: NetzDG Compliance Decision You are the trust and safety director for a mid-sized social media platform with 5 million registered users in Germany. Your platform has received the following complaints under NetzDG:

a) A user has complained that another user's post calls them "a worthless immigrant who should go back to their country." German law prohibits incitement to hatred based on national origin.

b) A user has complained about a news article shared from a German newspaper alleging that a local politician is corrupt. The politician claims the allegation is false and defamatory.

c) A user has complained about a satirical meme depicting Chancellor Scholz in a compromising situation, labeled "SATIRE" in large text.

d) A user has complained that a video falsely claims that a particular COVID-19 vaccine causes infertility. German law does not specifically criminalize health misinformation.

For each complaint: (1) Does NetzDG require action? (2) What is the applicable timeline? (3) What would you do and why?

Exercise 7: DSA Risk Assessment You are a researcher contracted to conduct a DSA-mandated risk assessment for a Very Large Online Platform. The platform's recommendation algorithm analyzes user engagement data to surface content. Your task is to assess systemic risks to "civic discourse" during an upcoming national election.

Design a risk assessment framework addressing: a) What data would you need to collect? b) What specific risks would you evaluate? c) What mitigation measures might you recommend? d) How would you measure whether mitigations are effective? e) What limitations would your assessment have?

Exercise 8: POFMA Case Study Analysis In 2020, during the Singapore general election, the Singapore Democratic Party posted on its Facebook page claiming that the government had cut public healthcare spending. The Ministry of Health issued a POFMA correction direction, asserting the claim was false and that healthcare spending had in fact increased. The SDP was required to attach a correction notice to its post.

Analyze this case from three perspectives: a) The Singaporean government's perspective: Why was the direction appropriate? b) The SDP's and civil liberties perspective: Why was the direction problematic? c) An independent analyst's perspective: How would you evaluate whether the direction was appropriate? What information would you need?

Exercise 9: India's IT Rules — Tracing Encryption Tradeoffs India's IT Rules 2021 require significant social media intermediaries to be able to identify the "first originator" of messages on end-to-end encrypted platforms.

a) Explain technically why providing this capability would require breaking end-to-end encryption. b) Identify three legitimate uses of end-to-end encryption that would be compromised. c) Identify the government's stated justification for the requirement. d) Propose an alternative mechanism that might address the government's concern while preserving encryption. e) Evaluate your proposed alternative: does it fully satisfy the government's justification? Does it create new risks?
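To ground part (a), the following toy sketch (not any platform's real protocol, and not the text of the IT Rules) shows how a hash-based "traceability" scheme of the kind floated for first-originator tracing would work, and why it erodes end-to-end encryption's guarantees: the server can link every forward of a message without ever reading it, and once any copy of the plaintext surfaces, every past sender of that message is exposed.

```python
import hashlib

def trace_tag(plaintext: bytes) -> str:
    """Deterministic tag the server could store alongside sender metadata."""
    return hashlib.sha256(plaintext).hexdigest()

# Hypothetical server-side log of (tag, sender, sequence) tuples. Note the
# server never needs the plaintext to build the forwarding chain.
log = [
    (trace_tag(b"claim X"), "alice", 1),   # first originator
    (trace_tag(b"claim X"), "bob", 2),     # forward
    (trace_tag(b"claim X"), "carol", 3),   # forward
]

# Identical plaintexts produce identical tags. So once ANY copy of the
# plaintext becomes known (e.g., via a complaint), the log reveals every
# sender of that message and the earliest one -- undermining the
# confidentiality and deniability that E2EE is designed to provide.
target = trace_tag(b"claim X")
senders = [who for tag, who, _ in log if tag == target]
first_originator = min(log, key=lambda entry: entry[2])[1]
```

The sketch also previews part (d): any alternative that preserves a linkable per-message identifier reproduces this exposure, which is why "traceability without breaking encryption" is widely regarded as technically incoherent.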

Exercise 10: Self-Regulation Failure Analysis The "Facebook Papers," leaked in 2021, revealed internal research documenting that the platform's algorithms amplified divisive content and that proposed changes to reduce this amplification were resisted because of engagement concerns.

Using the self-regulation theory and failures discussed in Section 33.7, analyze: a) What incentive structures explain the gap between Facebook's public commitments and internal behavior? b) What would a more effective self-regulatory structure look like? c) What external accountability mechanisms — government, civil society, or market-based — might have produced different outcomes?


Section C: Policy Design

Exercise 11: Draft a Misinformation Law Draft the key provisions of a national misinformation law for a democracy of your choice. Your draft should:
- Define what content is regulated and how "false" is determined
- Specify enforcement mechanisms and who makes enforcement decisions
- Include appeals and due process provisions
- Specify penalties
- Include safeguards against political abuse

Then write a 1-page assessment of your own draft: what are its weaknesses? How might it be misused?

Exercise 12: The Santa Clara Principles Applied The Santa Clara Principles specify minimum standards for platform content moderation: publication, notice, appeals, and data.

Evaluate the following platform behaviors against each principle:

a) A platform publishes a 50-page Community Standards document but does not specify which provision was violated when notifying users of removals.

b) A platform offers an appeals button but does not provide any information about how appeals are decided or how long they take.

c) A platform publishes an annual transparency report with aggregate removal numbers but does not break them down by category or country.

d) A platform notifies users of removals only when the content was removed following a government request, not when removed for community standards violations.

Exercise 13: Policy Effectiveness Metrics A government has enacted a mandatory removal law requiring platforms to remove election misinformation within 48 hours. Two years after enactment, it wants to evaluate the law's effectiveness.

Design an evaluation framework: a) What outcome metrics should be measured? b) What process metrics should be measured? c) What comparison group would you use (if any)? d) What data sources are available? e) What alternative explanations for changes in misinformation prevalence would you need to rule out?
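For part (c), one common comparison-group design is difference-in-differences: compare the change in misinformation prevalence in the regulated country against the change in a comparable unregulated country over the same period. The sketch below uses invented prevalence figures purely for illustration; its validity rests on the parallel-trends assumption the exercise asks you to probe in part (e).

```python
# Hypothetical shares of sampled election-related posts flagged as
# misinformation (all numbers invented for illustration).
before_regulated, after_regulated = 0.080, 0.050  # country with the removal law
before_control, after_control = 0.070, 0.065      # comparable country without it

# Difference-in-differences: the change in the regulated country minus the
# change in the control country, netting out trends common to both.
did = (after_regulated - before_regulated) - (after_control - before_control)

# Here did is about -0.025: a 2.5-point drop plausibly attributable to the
# law -- but only IF the two countries would otherwise have trended alike.
```

Note that this addresses only one alternative explanation (shared secular trends); platform-wide policy changes, election-cycle effects, and measurement drift still need to be ruled out separately.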

Exercise 14: Designing an Appeals Process A major social media platform processes approximately 10 million content removal decisions per month. It currently does not offer meaningful appeals — users receive a notification that their content was removed for violating a specific policy, but have no ability to contest the decision.

Design a scalable appeals system that: a) Provides meaningful review of disputed decisions b) Can handle an appeal rate of at least 1% of removals (100,000 appeals per month) c) Uses both automated and human review appropriately d) Produces and publishes data on appeal outcomes e) Fits within a reasonable operational budget

What compromises did you need to make? What does "meaningful review" require at this scale?
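A back-of-envelope staffing model is a useful starting point for the budget constraint. In the sketch below, only the 10-million-removal volume and the 1% appeal rate come from the exercise; the automated-resolution share, minutes per review, and productive hours per reviewer are assumptions you should vary in your own design.

```python
# Given by the exercise:
removals_per_month = 10_000_000
appeal_rate = 0.01                                    # 1% of removals appealed
appeals = removals_per_month * appeal_rate            # 100,000 appeals/month

# Assumptions (vary these to test sensitivity):
auto_resolved_share = 0.70        # appeals closed by automated re-review
minutes_per_human_review = 5      # reviewer time per escalated appeal
productive_hours_per_fte = 140    # review hours per reviewer per month

human_appeals = appeals * (1 - auto_resolved_share)   # ~30,000/month
review_hours = human_appeals * minutes_per_human_review / 60
reviewers_needed = review_hours / productive_hours_per_fte  # roughly 18 FTE
```

Even under these generous assumptions, five minutes per case is far short of what "meaningful review" of a contested speech decision might require, which is exactly the compromise the closing question asks you to confront.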

Exercise 15: Co-Regulation Design The chapter distinguishes among self-regulation (platforms govern themselves), co-regulation (platforms adopt self-regulatory codes that become enforceable through government oversight), and hard law (direct government mandates).

Design a co-regulatory framework for health misinformation on social media platforms. Your framework should: a) Specify the substantive standards platforms must meet b) Specify the governance process (who sets standards? who monitors? who enforces?) c) Create accountability mechanisms that give the self-regulatory code legal force d) Include safeguards against capture by regulated industries e) Explain how it compares to a hard law alternative in terms of effectiveness, speed, flexibility, and free speech risk


Section D: Research and Investigation

Exercise 16: Regulatory Impact Research Research one of the following policy interventions and write a 1,000-word evidence-based assessment of its effectiveness: a) The EU Code of Practice on Disinformation (any year's implementation report) b) Germany's NetzDG (using published transparency reports and academic studies) c) Twitter/X's election integrity policies during any national election d) YouTube's strikes system applied to COVID-19 misinformation

Your assessment should: identify claimed goals, identify measurable outcomes, assess the evidence for each claimed goal, and identify limitations in the available evidence.

Exercise 17: Cross-Border Disinformation Case Research a documented disinformation campaign that operated across multiple national borders (suggestions: Internet Research Agency operations 2016, COVID-19 infodemic, election interference in France 2017, Brazil 2018 election).

Analyze the jurisdictional challenges it posed: a) Where was the content produced? b) Where was it distributed? c) Where were the audiences? d) Which national laws potentially applied? e) What enforcement actually occurred? f) What regulatory gap did it reveal?

Exercise 18: Platform Transparency Report Analysis Download a recent transparency report from one major platform (Meta, Google, TikTok, or X). Answer: a) What categories of content removal are reported? b) What geographic breakdown is provided? c) What government requests for content removal are reported? d) What information is missing that would be necessary for meaningful accountability? e) Compare this report's disclosures to the Santa Clara Principles' data requirements.

Exercise 19: Civil Society Monitor Methodology Examine either the Global Disinformation Index (GDI) or NewsGuard's rating methodology. a) What criteria do they use to rate outlets? b) What is the methodology for applying these criteria? c) What safeguards against political bias do they claim? d) Evaluate the plausibility of these safeguards. e) What are the legal, ethical, or practical limits of this type of private accountability mechanism?

Exercise 20: The Spread of NetzDG Research which countries have enacted legislation similar to Germany's NetzDG, using reports from Freedom House, Article 19, or Access Now. a) Identify at least three countries that have adopted NetzDG-style legislation. b) For each, note: the political context at enactment, the scope of the law, and documented uses since enactment. c) Assess: have any of these NetzDG-inspired laws been used in ways that NetzDG's architects would likely have considered misuse?


Section E: Argumentation and Debate

Exercise 21: The Case for and Against Section 230 Reform Write two 500-word op-eds: a) "Section 230 Must Be Reformed" — arguing from a progressive perspective that the law's immunity for algorithmic amplification of harmful content is unjustifiable b) "Section 230 Must Be Preserved" — arguing that reforms would threaten free expression and the open internet

After writing both, write a 200-word reflection on which argument you found easier to make and why.

Exercise 22: Mock Legislative Hearing Your class will conduct a mock legislative hearing on a proposed law requiring social media platforms to label, within 24 hours of posting, all health claims that have not been verified by a national health authority.

Prepare testimony for one of the following roles: a) A platform trust and safety executive opposing the bill b) A public health official supporting the bill c) A civil liberties attorney raising constitutional concerns d) A misinformation researcher offering an evidence-based assessment e) A small health news outlet concerned about competitive harm

Exercise 23: The Dual-Use Dilemma A colleague argues: "Anti-misinformation laws are always going to be misused against dissent, so democratic governments simply should not enact them — the cure is worse than the disease."

Write a structured response that: a) Acknowledges the strongest version of your colleague's argument b) Argues that some forms of anti-misinformation regulation can be appropriately designed to minimize dual-use risk c) Identifies specific design features that provide meaningful protection against political misuse d) Concedes what you cannot refute

Exercise 24: The AI Deepfake Scenario Hyper-realistic AI-generated video (deepfake) technology now makes it trivially easy to create convincing video of a political candidate saying things they never said. Before a national election, a deepfake video of Candidate X "admitting" to corruption goes viral.

Design a policy response addressing: a) What existing legal framework (in any jurisdiction of your choice) would apply? b) What gaps does the existing framework leave? c) Propose a specific new policy intervention. Address: definition, scope, enforcement, appeals, penalties, and free speech implications. d) Evaluate your proposal against the evidence-based principles in Section 33.10.

Exercise 25: International Treaty Negotiation Misinformation campaigns operate across national borders, but each country has different laws, different free speech traditions, and different political interests. You are a delegate to an international negotiation aimed at establishing a multilateral framework for misinformation governance.

a) What provisions could plausibly achieve consensus among democratic nations (US, EU members, Canada, Australia, Japan)? b) What provisions would the United States likely resist? Why? c) What provisions would authoritarian states attempt to weaken or eliminate? d) Draft three treaty articles that could realistically attract broad support.


Reflection Questions

Exercise 26: Personal Application You have just consumed a piece of information on social media that you suspect might be misinformation. Walk through the policy landscape you have studied: a) What legal obligations (if any) does the platform have to label or remove this content? b) What can you do if you believe the content should be removed and the platform declines? c) What can you do if you believe the content has been removed unjustifiably? d) How does your answer change if you are in the US versus Germany versus Singapore?

Exercise 27: Values Clarification This chapter has described significant trade-offs between free expression and information quality. Where do you personally draw the line? Answer the following: a) Should governments be able to require platforms to remove provably false claims about election results? What safeguards would be necessary? b) Should platforms be allowed to remove true information that is being spread in a manipulative context (malinformation)? c) Should there be any category of false information that is categorically prohibited regardless of demonstrated harm? d) Who should be the arbiter of truth in each of the above scenarios?