Chapter 11 Exercises: Taxonomy of Information Disorder
Part A: Conceptual Foundations (Questions 1–8)
Exercise 11.1 — Framework Analysis
Wardle and Derakhshan's framework organizes information disorder along two axes: veracity (true/false) and intent (harmful/not harmful). Draw the resulting 2×2 matrix and place each of the following into the appropriate quadrant. Explain your reasoning for each.
a) A satirical article from The Onion about a politician, shared by someone who doesn't realize it's satire
b) A fabricated document designed by a foreign intelligence service to discredit a domestic politician
c) A journalist's accurate story about a private individual's past bankruptcy, published solely to humiliate them during a public dispute
d) A social media post accurately describing a celebrity's health diagnosis without their consent, to satisfy public curiosity
e) A shared Facebook post containing false vaccination statistics that the person who shared it believed to be true
f) A government-produced report containing deliberately misleading economic projections presented as genuine forecasts
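The quadrant logic can be expressed as a small lookup. The sketch below is a hypothetical helper (the `classify` function is not part of the chapter), useful for checking your placements after you have reasoned them out:

```python
# Hypothetical sketch of the Wardle-Derakhshan 2x2 quadrant logic.
# Axes: veracity (is the content false?) and intent (is it meant to harm?).
def classify(is_false: bool, intends_harm: bool) -> str:
    if is_false and not intends_harm:
        return "misinformation"    # false, but shared without harmful intent
    if is_false and intends_harm:
        return "disinformation"    # false and deliberately harmful
    if not is_false and intends_harm:
        return "malinformation"    # genuine information weaponized to harm
    return "ordinary information"  # true and not intended to harm

# Example (e): false vaccination statistics shared in good faith
print(classify(is_false=True, intends_harm=False))  # misinformation
```

Note that the hard part of the exercise is deciding the inputs — whether content is false and whether intent is harmful — not applying the lookup; several of the scenarios above are designed to make those judgments contestable.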
Exercise 11.2 — Definitions and Distinctions
Write a clear definition, in your own words (not copied from the chapter), for each of the following terms. Then construct one original example — not drawn from the chapter — illustrating each term.
a) Misinformation
b) Disinformation
c) Malinformation
d) False context
e) Manipulated content
f) Coordinated inauthentic behavior
Exercise 11.3 — Why "Fake News" Falls Short
The chapter argues that the term "fake news" is analytically inadequate. Write a 400-word essay that:
1. Explains three specific shortcomings of "fake news" as a classification category
2. Provides a concrete example of a real information disorder problem that "fake news" fails to describe accurately
3. Argues why precision in terminology matters for practical responses to information disorder
Exercise 11.4 — The Intent Problem
Consider the following scenario: A state senator shares a social media post claiming that a proposed local water project will contaminate drinking water with a dangerous chemical. The claim is false. Later investigation reveals that the senator genuinely believed the claim after reading it on a community website, but the community website had originally received the claim from a political operative who fabricated it to derail the project.
a) Is the senator's sharing of the post an act of misinformation or disinformation? Defend your answer.
b) Is the political operative's original fabrication misinformation or disinformation? Defend your answer.
c) What category of information disorder actor is the community website? What about the senator?
d) How does the intent of different actors in this chain affect how we should respond to the incident?
Exercise 11.5 — Malinformation Ethics
The malinformation category creates significant ethical complexity because it involves true information. For each of the following scenarios, decide whether you believe the disclosure constitutes legitimate journalism/whistleblowing or malinformation. Justify your reasoning with reference to the criteria discussed in the chapter.
a) A newspaper publishes the home address of a CEO whose company has been cited for safety violations, as part of an investigative story.
b) A political opposition group publishes accurate private text messages between two government ministers discussing personal matters (not policy), obtained through hacking, one week before an election.
c) A former employee posts accurate screenshots of a company's internal Slack messages showing a toxic work culture, after being dismissed.
d) An advocacy organization publishes the names and addresses of physicians who perform legal medical procedures the organization opposes.
e) A news organization publishes accurate information about a public official's past criminal conviction that the official had successfully expunged under a rehabilitation statute.
Exercise 11.6 — The Seven Types in Sequence
Consider a hypothetical disinformation campaign targeting a pharmaceutical company. Design a narrative arc showing how information about a fictional drug side effect could flow through all seven types of mis/disinformation content (satire/parody, misleading content, imposter content, fabricated content, false context, manipulated content, false connection) at different stages of the campaign. For each type, describe:
- What specific content would be created
- Who might create it
- How it would spread
- What harm it might cause
Exercise 11.7 — The Agents-Messages-Interpreters Model
Apply the Agents-Messages-Interpreters model to the following historical episode:
During the COVID-19 pandemic, claims circulated widely that 5G cell towers were causing COVID-19 symptoms and that telecom workers were deliberately spreading disease. The claims originated on fringe internet forums, were amplified by celebrity social media accounts with large followings, were discussed (and debunked) by mainstream news media, and led to dozens of physical attacks on cell towers in the UK and other countries.
For each of the three components of the model (Agents, Messages, Interpreters):
a) Who were the relevant agents, and what were their motivations?
b) What properties of the message contributed to its spread?
c) What factors in the interpretive context made some audiences receptive to the claims?
Exercise 11.8 — Taxonomy Comparison
The Wardle-Derakhshan framework is not the only taxonomy of information disorder. Research ONE of the following alternative frameworks and write a 500-word comparative analysis:
a) The First Draft typology of visual misinformation
b) The NATO Strategic Communications Centre of Excellence taxonomy
c) The Reuters Institute typology of trust and misinformation
d) The European Commission's High Level Expert Group (HLEG) framework on disinformation
Your analysis should identify: areas of overlap with Wardle-Derakhshan; areas where the alternative framework adds categories or dimensions not captured by Wardle-Derakhshan; and the practical contexts for which each framework might be best suited.
Part B: Classification Exercises (Questions 9–18)
Instructions: For each of the following descriptions, identify:
1. The primary category (misinformation, disinformation, or malinformation)
2. The most applicable of the seven content types
3. A brief justification (2–3 sentences)
Exercise 11.9
A widely shared video titled "Politician admits to corruption" shows a politician saying, "I took money from lobbyists." In context — visible in the full video — the politician was quoting a charge made against a political opponent, not confessing to their own conduct. The clip was created by a partisan media outlet that trimmed away the surrounding context.
Exercise 11.10
A website called "ABCNewsReport.net" (distinct from the legitimate ABCNews.com) publishes a story claiming that a presidential candidate failed a drug test before a debate. The story is entirely fabricated. The website's design closely mimics the visual style of the real ABC News website.
Exercise 11.11
A social media post shows a striking photograph of flood damage with the caption "Unprecedented flooding hits [Country X] after government's failed infrastructure policies." The photograph is real, but it was taken in a different country five years earlier during a different flood event unrelated to the government being criticized.
Exercise 11.12
A local politician shares an article claiming that crime rates in a neighboring district rose 200% last year, citing it as evidence of policy failure. The article's statistic is technically accurate: crime rose from 1 incident to 3 incidents — a 200% increase — but the district is so small that the absolute numbers are meaningless. The politician seems to genuinely believe the statistic supports their argument.
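The base-rate trap in this scenario is plain arithmetic. A quick illustrative sketch (the `pct_change` helper is ours, not the chapter's) shows why an identical percent change can describe very different realities:

```python
# Percent change hides absolute scale: 1 -> 3 incidents is a 200% jump,
# but so is 1,000 -> 3,000, and only the second describes a meaningful trend.
def pct_change(old: float, new: float) -> float:
    return (new - old) / old * 100

small = pct_change(1, 3)        # 200.0 -- absolute change: 2 incidents
large = pct_change(1000, 3000)  # 200.0 -- absolute change: 2,000 incidents
print(small, large)
```

When classifying this exercise, consider that the misleading effect comes from the choice of presentation (percentage without base rate), not from any false number.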
Exercise 11.13
An activist group publishes the names, addresses, and daily routines of scientists conducting legally permitted animal research at a university. All information is factually accurate, gathered from public records. The publication is accompanied by language characterizing the scientists as "targets" of the campaign.
Exercise 11.14
A major news outlet publishes a headline: "New Study Links Coffee to 40% Reduction in Cancer Risk." The underlying study is a preliminary observational study with significant methodological limitations, which found a weak correlation in a specific population, with multiple caveats — none of which are mentioned in the headline. The outlet's science reporter wrote the headline without reading the study's limitations section.
Exercise 11.15
An audio recording circulates on social media that appears to show a prominent politician making racist remarks. Audio forensics experts later determine that the recording was assembled from genuine voice samples of the politician, digitally stitched together to create a statement the politician never actually made.
Exercise 11.16
The satirical website The Babylon Bee publishes an article: "CDC Recommends Americans Wear Hazmat Suits Before Reading News." The article is clearly labeled as satire on the website. Three weeks later, screenshots of the article appear on Facebook without the satirical context, and thousands of users engage with it as if it were genuine CDC guidance.
Exercise 11.17
A foreign state intelligence service creates 200 social media accounts posing as American citizens across the political spectrum. The accounts share genuine news articles but add inflammatory commentary designed to maximize outrage and division. The news articles themselves are accurate; the added commentary consists of opinion, not false factual claims.
Exercise 11.18
A financial analyst publishes a detailed report predicting the collapse of a competitor company, citing specific internal documents that prove financial irregularities. The analyst's firm holds short positions in the competitor's stock and stands to profit significantly from its collapse. The documents cited are genuine, and the irregularities are real.
Part C: Applied Analysis (Questions 19–25)
Exercise 11.19 — Content Audit
Select a public social media account (a political figure, news organization, or prominent influencer) and conduct a "content audit" over a 48-hour period. For every post that makes factual claims:
a) Identify the claim being made
b) Attempt to verify the claim using at least two independent sources
c) If false or misleading, classify it according to the seven-type taxonomy
d) Note whether the content appears to have been shared with harmful intent
Present your findings in a structured table. Discuss any methodological challenges you encountered (e.g., operationalizing definitions, limitations of verification sources, difficulty of determining intent).
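One possible way to structure the audit table is as a list of records. The scaffold below is a hypothetical format, not one prescribed by the chapter; all field names and allowed values are illustrative assumptions:

```python
from collections import Counter
from dataclasses import dataclass

# Hypothetical record structure for the 48-hour content audit.
# Field names and allowed values are illustrative, not prescribed by the chapter.
@dataclass
class AuditEntry:
    post_id: str
    claim: str
    sources_checked: list            # at least two independent sources
    verdict: str                     # "true", "false", "misleading", "unverifiable"
    content_type: str = ""           # one of the seven types, if false/misleading
    harmful_intent: str = "unclear"  # "likely", "unlikely", "unclear"

entries = [
    AuditEntry(
        post_id="001",
        claim="City crime rose 200% last year",
        sources_checked=["police open-data portal", "local newspaper archive"],
        verdict="misleading",
        content_type="misleading content",
    ),
]

# Tally verdicts across the audit window for the write-up
print(Counter(e.verdict for e in entries))
```

Recording the verification sources per claim also makes the "at least two independent sources" requirement auditable in its own right.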
Exercise 11.20 — Platform Policy Analysis
Review the publicly available Community Standards or Content Policy of ONE major platform (Facebook/Meta, Twitter/X, YouTube, or TikTok). Analyze:
a) Which of the three information disorder categories (misinformation, disinformation, malinformation) does the platform's policy address?
b) Which of the seven content types does the policy specifically target?
c) What interventions does the policy describe (removal, labeling, demotion, user notification)?
d) Are there types of information disorder that the policy leaves unaddressed?
e) What are the policy's stated exceptions or edge cases?
Write a 600-word critical assessment of the policy's comprehensiveness and effectiveness.
Exercise 11.21 — Measuring Misinformation: A Methodological Critique
The chapter discusses several studies measuring the prevalence and spread of misinformation. For ONE of the following studies, write a 400-word methodological critique:
a) Vosoughi, Roy, and Aral (2018), "The Spread of True and False News Online," Science
b) Guess, Nagler, and Tucker (2019), "Less Than You Think: Prevalence and Predictors of Fake News Dissemination on Facebook," Science Advances
c) Pennycook and Rand (2019), "Lazy, Not Biased: Susceptibility to Partisan Fake News Is Better Explained by Lack of Reasoning Than by Motivated Reasoning," Cognition
Your critique should address: sampling methodology, operationalization of "misinformation," generalizability of findings, potential confounders, and what alternative methodologies might address the study's limitations.
Exercise 11.22 — Policy Design
You have been hired as a consultant to advise a hypothetical government on developing a national strategy to address information disorder. The government is committed to freedom of expression and democratic values. Design a policy framework that:
a) Addresses each of the three information disorder categories with appropriate, distinct interventions
b) Incorporates both regulatory and non-regulatory approaches
c) Establishes safeguards against the policy being misused to suppress legitimate speech
d) Includes a monitoring and evaluation component
Present your framework in a structured memo format (800 words maximum).
Exercise 11.23 — Historical Case Classification
For each of the following historical information disorder episodes, identify the category (misinformation/disinformation/malinformation) and the most applicable content type(s). Then write 2–3 sentences explaining the historical significance of the episode and what it reveals about the nature of information disorder.
a) The "War of the Worlds" radio broadcast panic (1938) — was there actually widespread panic, and how does the historical record change your classification?
b) The "Stab-in-the-back" myth (Dolchstoßlegende) in post-WWI Germany — a deliberately false narrative that Germany lost WWI due to betrayal by Jews and socialists
c) Operation INFEKTION (Soviet active measures, 1983–1987) — the KGB-originated disinformation campaign claiming the US government created the AIDS virus
d) The Satanic Panic of the 1980s — widespread false beliefs about Satanic ritual abuse in daycare centers, amplified by media and prosecutorial misconduct
Exercise 11.24 — Prebunking Design
The concept of "prebunking" (or "inoculation theory") suggests that exposing audiences to weakened forms of misinformation techniques, before they encounter actual misinformation, builds resistance to manipulation. Design a short prebunking intervention for ONE of the seven content types. Your intervention should:
a) Explain the technique clearly and accessibly for a general audience
b) Provide 2–3 illustrative examples
c) Give concrete detection strategies
d) Explain how the technique exploits cognitive vulnerabilities
The intervention should be no longer than 400 words (it will be used in a short educational video).
Exercise 11.25 — Taxonomy Reflection
Taxonomies are not neutral — they reflect the values and perspectives of those who construct them. Write a 500-word critical reflection on the Wardle-Derakhshan taxonomy that considers:
a) What assumptions about harmful speech underlie the taxonomy?
b) Are there cultural or political contexts where the taxonomy's categories might operate differently or carry different implications?
c) Whose voices or perspectives may be underrepresented in how the taxonomy was developed?
d) What aspects of information disorder does the taxonomy leave unmapped?
This is an analytical exercise, not an endorsement or rejection of the framework. Strong answers will acknowledge both the framework's value and its limitations.
Solutions for selected exercises are available in the code/exercise-solutions.py file, and worked examples are provided for the classification exercises.