
Chapter 11: Taxonomy — Disinformation, Misinformation, and Malinformation

"We are living in a time when the very notion of truth is under attack." — Claire Wardle and Hossein Derakhshan, Information Disorder (2017)


Learning Objectives

By the end of this chapter, students will be able to:

  1. Distinguish between misinformation, disinformation, and malinformation using the Wardle-Derakhshan information disorder framework.
  2. Apply the seven-type taxonomy to classify real-world examples of problematic content.
  3. Identify the actors, messages, and interpreters involved in information disorder episodes.
  4. Explain why taxonomic precision matters for policy design, platform governance, and individual media literacy.
  5. Critically evaluate methodological challenges in measuring the prevalence of misinformation.
  6. Analyze how different types of information disorder require fundamentally different interventions.

Introduction

Imagine you share a post on social media claiming that a popular medication causes a dangerous side effect. You believed it was true — you read it on what seemed like a legitimate health website. A week later, you learn the story was fabricated by a disgruntled former employee of the pharmaceutical company, designed specifically to damage the company's stock price. Another week passes, and you discover the "legitimate" website was itself a front created by political operatives hoping to undermine public trust in the healthcare system more broadly.

In each stage of that scenario, something different is happening. In the first stage, you are an unwitting participant in misinformation — you spread false content without harmful intent. In the second stage, the creator of the story was engaging in disinformation — deliberately false content created to deceive. In the third stage, the political operatives were engaged in a sophisticated campaign that blends disinformation with strategic malinformation — using a false story to corrode trust in legitimate institutions.

These distinctions are not merely academic. They determine who is morally responsible, what legal frameworks might apply, which platform interventions are appropriate, and how individuals should calibrate their skepticism. Without a rigorous taxonomy — a systematic classification system — we lack the language to think clearly about these distinctions, let alone act on them.

This chapter introduces the foundational taxonomy of information disorder, grounded primarily in the landmark 2017 Council of Europe report by Claire Wardle and Hossein Derakhshan. We will explore each category in depth, examine the actors and processes involved, grapple with the challenge of measuring misinformation's scale, and consider why getting the taxonomy right has real stakes for democratic life.


Section 11.1: The Information Disorder Framework — Wardle and Derakhshan's 2×2 Typology

11.1.1 Origins of the Framework

In 2017, researchers Claire Wardle and Hossein Derakhshan published Information Disorder: Toward an Interdisciplinary Framework for Research and Policy Making for the Council of Europe. The report emerged from a growing recognition among scholars, journalists, and policymakers that the term "fake news" — then proliferating in political discourse — was simultaneously too narrow, too broad, and too politically charged to serve as a useful analytical category.

"Fake news," Wardle and Derakhshan argued, captures only one corner of a far more complex information ecosystem. A satirical article misread as fact, a genuine photo stripped of its original context, a factual story about a private individual published to destroy their reputation — none of these fit neatly into "fake news," yet all cause genuine harm. What was needed was a framework that could systematically classify information disorder according to its core dimensions.

Wardle and Derakhshan proposed organizing problematic information along two primary axes:

  1. The veracity dimension: Is the content false, or is it true?
  2. The intent dimension: Was it created and shared with the intent to cause harm, or not?

This yields a conceptual matrix from which three broad categories emerge: misinformation, disinformation, and malinformation. The power of this framework lies not in the categories themselves — researchers had used these terms before — but in the systematic way it maps veracity against intent, and in the subsequent elaboration of seven discrete types of problematic content.

11.1.2 The 2×2 Matrix

The framework can be visualized as a matrix:

                      INTENT TO HARM
                    Low              High
                ┌────────────────┬────────────────┐
FALSE CONTENT   │ Misinformation │ Disinformation │
                ├────────────────┼────────────────┤
TRUE CONTENT    │ (Neutral)      │ Malinformation │
                └────────────────┴────────────────┘

This is a simplification — in practice, intent exists on a spectrum, veracity is rarely binary, and many real-world cases occupy ambiguous middle ground. Nevertheless, the framework provides a valuable starting structure.

Misinformation: False or inaccurate information spread without malicious intent. The person sharing it may believe it to be true, or may not have considered whether it is true. Misinformation causes harm as a consequence of its falseness, not through any deliberate design.

Disinformation: False information deliberately created and spread with the intent to deceive. Disinformation is a weapon — it serves someone's strategic interest. The falseness is not incidental but essential.

Malinformation: Genuine information — factually accurate — used with the intent to cause harm. This category is perhaps the most counterintuitive, since we tend to think truth-telling is inherently beneficial. But the strategic deployment of true information can destroy reputations, endanger lives, or erode institutional trust.
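The mapping from the two axes to the three broad categories can be sketched as a small decision function. This is an illustration of the matrix above, not anything from the report itself, and it inherits the simplification already noted: both axes are treated as booleans even though, in practice, veracity and intent are spectra.

```python
def classify(is_false: bool, intends_harm: bool) -> str:
    """Map the two Wardle-Derakhshan axes onto the three broad categories.

    A deliberate simplification: in reality intent is rarely knowable and
    veracity is rarely binary, so real cases occupy ambiguous middle ground.
    """
    if is_false and intends_harm:
        return "disinformation"   # deliberately false, meant to deceive
    if is_false:
        return "misinformation"   # false, but shared without malice
    if intends_harm:
        return "malinformation"   # true, but deployed to cause harm
    return "neutral"              # true content, no intent to harm


print(classify(is_false=True, intends_harm=False))   # misinformation
print(classify(is_false=True, intends_harm=True))    # disinformation
print(classify(is_false=False, intends_harm=True))   # malinformation
```

Note that the function classifies an act of sharing, not a piece of content: as Section 11.2 explains, the same fabricated story is disinformation in its creator's hands and misinformation in the hands of someone who sincerely believes it.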

11.1.3 Why "Fake News" Is Inadequate

The term "fake news" entered widespread usage during the 2016 US presidential election cycle and proliferated thereafter. Its inadequacy as an analytical category is multi-dimensional.

First, "fake news" implies a format — "news" — that excludes vast categories of problematic content: social media posts, academic-sounding websites, viral images, manipulated videos, and out-of-context quotations. These often cause more harm than traditional "news" formats.

Second, "fake news" implies total fabrication, missing the far more common phenomenon of content that is partially true, misleadingly framed, or selectively presented. As Chapter 15 explores in depth, political manipulation more often works through strategic emphasis and omission than through outright fabrication.

Third, and critically, "fake news" became weaponized as a political epithet, deployed by politicians across the ideological spectrum to dismiss coverage they disliked. When US President Donald Trump, UK Prime Minister Boris Johnson, and authoritarian governments from Hungary to the Philippines all deployed "fake news" to attack credible journalism, the term lost whatever descriptive utility it once had.

Wardle and Derakhshan's framework deliberately sidesteps "fake news" in favor of the more neutral "information disorder" — a term that can encompass the full range of problematic information phenomena without carrying the political baggage that "fake news" accumulated.


Section 11.2: Misinformation — False Content Without Harmful Intent

11.2.1 Defining the Category

Misinformation, in the Wardle-Derakhshan framework, is false or inaccurate information spread without malicious intent on the part of the person spreading it. This definition contains an important qualifier: it refers to the intent of the person spreading the information, not necessarily the person who originally created it. A person who sincerely shares a fabricated story because they believe it to be true is engaging in misinformation, even if the original creator was a disinformation actor.

This distinction has significant implications. It means that misinformation is enormously more prevalent than disinformation, because most people who share false content are not doing so maliciously. Research consistently finds that the vast majority of problematic content circulation involves ordinary people spreading things they believe to be true. The professional disinformation actors are a small but consequential upstream node in a much larger downstream misinformation network.

11.2.2 Sources of Misinformation

Misinformation arises from multiple sources:

Honest mistakes: Journalists, experts, and ordinary individuals make errors. These errors, once published or shared, can propagate widely before correction. The early days of the COVID-19 pandemic saw numerous honest mistakes from credentialed health organizations — including the World Health Organization's initial uncertainty about aerosol transmission — that spread widely and caused real confusion.

Cognitive biases: Humans are systematically prone to misperceiving information. Confirmation bias leads us to uncritically accept information that confirms our existing beliefs. The availability heuristic makes vivid, easily recalled examples feel more statistically common than they are. Pattern recognition — evolutionarily useful — causes us to see meaningful connections in random data. These cognitive tendencies generate a constant stream of individually innocuous misinformation.

Satire misread as fact: Satirical content, clearly labeled as such on its original platform, is frequently stripped of its satirical context and recirculated as factual reporting. The Onion, The Babylon Bee, The Daily Mash, and similar outlets produce content that regularly escapes its satirical frame. Studies have documented cases of politicians, journalists, and millions of social media users treating obviously satirical content as factual reporting.

Outdated information: Accurate information that has since been superseded continues to circulate. Medical advice changes; scientific understanding evolves; political situations shift. Content that was accurate when created may become misinformation as circumstances change, yet continues to be shared as if current.

Out-of-context accurate information: A statistic cited without its denominator; a photograph captioned with the wrong location; a quotation attributed without its surrounding context that changes its meaning. This type of misinformation is technically based in fact but creates a false impression. (Note: when done deliberately, this becomes disinformation; when done innocently, it remains misinformation.)

11.2.3 The Misinformation-Disinformation Interface

The boundary between misinformation and disinformation is blurry in practice, because intent is not always knowable, and because the same content can shift categories as it travels. A deliberately fabricated story (disinformation) shared by someone who believes it to be true becomes misinformation in their hands. This is precisely why disinformation campaigns so often succeed — they are designed to generate misinformation as their primary mode of spread. The professional disinformation actor creates content intended to deceive; thousands of ordinary people share it innocently.

This asymmetry has important policy implications. Holding individuals legally or morally responsible for sharing misinformation — absent evidence of intent to deceive — risks criminalizing honest error, which would have severe chilling effects on public discourse. Most legal frameworks and platform policies focus on the creators of disinformation rather than those who spread misinformation innocently.

11.2.4 Case Examples

  • The "Great Moon Hoax" of 1835: The New York Sun published a series of articles claiming astronomers had discovered life on the moon. The articles were intended as satire/hoax, but many readers — including other newspapers — took them at face value. This early example illustrates how misinformation can originate in deliberate fabrication but spread through sincere credulity.

  • Early COVID-19 claims about surface transmission: In early 2020, public health messaging emphasized surface (fomite) transmission as a primary risk. People spread this guidance sincerely and in good faith. As the science evolved to emphasize aerosol transmission, the earlier messaging became misinformation in context, though those who spread it had no harmful intent.

  • Misidentification of suspects after tragedies: After mass casualty events, well-meaning users frequently misidentify suspects on social media. They are genuinely trying to help; the information is genuinely false; the harm to misidentified individuals can be severe.


Section 11.3: Disinformation — False Content Created to Deceive

11.3.1 Defining the Category

Disinformation is deliberately false information created with the intent to deceive a target audience. Unlike misinformation, disinformation involves intentionality: someone knows the information is false (or is deliberately indifferent to its truth) and creates or spreads it anyway, for strategic purposes.

The concept has deep roots. The term dezinformatsiya was used by Soviet intelligence services to describe a category of active measures — operations designed to spread false information among adversary populations. The KGB's Department D, later reorganized as Service A, was tasked specifically with disinformation operations, including the fabrication of documents, the planting of false stories in foreign media, and the creation of front organizations.

Modern academic usage broadly follows this intuition: disinformation is the deliberate creation and dissemination of false content to further an agenda.

11.3.2 Disinformation Actors

Disinformation is produced by a range of actors with varying motivations:

State actors: Governments and intelligence services have historically been the most sophisticated producers of disinformation. Russian information operations — including Internet Research Agency social media campaigns, GRU hack-and-leak operations, and the RT/Sputnik ecosystem — represent the most extensively documented contemporary example. China's "50 Cent Army" (wumao) produces pro-government content domestically and, increasingly, internationally. Iran, North Korea, and numerous other state actors operate disinformation programs of varying sophistication.

Political operatives: Domestic political campaigns, party organizations, and political consultants produce disinformation content targeting electoral outcomes. The Cambridge Analytica scandal (examined in Chapter 12) documented how sophisticated psychographic targeting could be used to deliver disinformation tailored to individual psychological vulnerabilities. Astroturfing — the creation of fake grassroots movements — is a domestic political disinformation technique as well as a state-level one.

Commercial actors: Financially motivated entities produce disinformation for profit. The Macedonian teenagers who created hundreds of pro-Trump fake news websites during the 2016 US election were primarily motivated by advertising revenue generated by viral content. Some pharmaceutical companies have been documented funding disinformation campaigns to suppress evidence of drug side effects. Tobacco companies pioneered the model of industry-funded disinformation to create "manufactured controversy" around scientific consensus.

Ideological actors: Non-state actors motivated by ideological commitments — extremist groups, conspiracy theory networks, religious fundamentalist organizations — produce disinformation to advance their worldviews. These actors may genuinely believe some of the content they create, making the line between disinformation and sincere but false belief difficult to draw.

11.3.3 Structural Features of Disinformation Campaigns

Sophisticated disinformation operations share several structural features:

Platform diversification: Content is seeded across multiple platforms simultaneously to avoid platform-specific removal and to create the appearance of independent corroboration. A story originating on a fringe forum might be amplified by partisan news sites, picked up by social media bots, and eventually legitimized by mainstream media covering the social media reaction.

Laundering through intermediaries: Disinformation actors often avoid direct association with their content by routing it through intermediaries — legitimate-seeming media outlets, influential social media accounts, or real journalists who can be manipulated into spreading the content. Once "laundered" through a credible source, the content becomes harder to dismiss.

Timing and information environment exploitation: Disinformation campaigns exploit high-information-demand moments — breaking news events, crises, elections — when audiences are hungry for information and fact-checking is slowest. Fabricated content released in the immediate aftermath of a terrorist attack or natural disaster spreads far faster than corrections that emerge hours or days later.

Emotional targeting: Disinformation is typically crafted to maximize emotional response. Content that provokes anger, fear, or outrage spreads faster on social media platforms than neutral content, because emotional content triggers more immediate sharing behavior. Disinformation actors understand this dynamic and craft content accordingly.

11.3.4 The Challenge of Attribution

Attributing disinformation to specific state or non-state actors is genuinely difficult. State actors use layers of proxies and technical obfuscation. The Internet Research Agency operated behind dozens of false persona accounts and domain names. Attributing operations with high confidence requires access to technical evidence (server logs, financial records, metadata) that is rarely publicly available, and even government intelligence agencies with access to classified data frequently hedge their attribution assessments.

This attribution difficulty has policy implications. Sanctions, diplomatic responses, and counter-operations require confident attribution. The intelligence community's assessment that Russia interfered in the 2016 US election was based on classified evidence that was only partially shared publicly, making democratic deliberation about the appropriate response difficult.


Section 11.4: Malinformation — True Information Used to Harm

11.4.1 Defining the Category

Malinformation is the third vertex of the Wardle-Derakhshan triangle: genuine, factually accurate information deployed with the intent to cause harm to individuals, organizations, or social institutions. The category is counterintuitive because our default assumption is that true information is beneficial — that the cure for bad speech is more speech, and that facts ultimately serve truth-seeking.

But this assumption breaks down when accurate information is selected, contextualized, and deployed strategically to cause harm. The information may be true; the act of deploying it may nonetheless be deeply harmful.

11.4.2 Types of Malinformation

Doxxing: The publication of private, factually accurate information about an individual — home address, workplace, family members' identities, phone numbers — to expose them to harassment, threats, or violence. The information published in doxxing is typically true. Its publication is harmful not because it is false but because it arms hostile actors with actionable targeting information.

Strategic leaks: Factually accurate documents or communications leaked to cause political harm. The distinction between a public-interest leak (whistleblowing) and malinformation is complex and contested, but strategic leaks designed primarily to damage a political adversary — particularly when timed for maximum political damage — fit the malinformation category. The publication of Hillary Clinton campaign chairman John Podesta's emails by WikiLeaks in 2016 involved genuine emails, selectively published and timed to cause maximum political damage.

Out-of-context real content: A genuine video clip, accurately reproduced, presented in a misleading context that creates a false overall impression. A politician filmed losing their temper in a private, extremely stressful situation; a factual statistic about a minority group cited in a way designed to stoke prejudice; a scientist's words accurately quoted but stripped of the qualifications that would change the listener's interpretation. The underlying content is real; the overall impression created is false.

Outing: The deliberate revelation of private personal information — sexual orientation, health status, immigration status, past legal trouble — that a person has chosen not to disclose publicly. Even when the revealed information is accurate, its revelation can cause severe harm: loss of employment, family rupture, physical danger, or psychological trauma.

Historical weaponization: The selective citation of accurate historical records to construct misleading narratives about current events or groups. Accurate historical quotes by a politician, selectively assembled, can create a composite portrait that misrepresents their actual record. Genuine historical crimes committed by a group's ancestors, weaponized to justify contemporary prejudice, represent malinformation.

11.4.3 The Ethical Complexity of Malinformation

Malinformation poses the most profound ethical challenges of the three categories, because distinguishing it from legitimate journalism and whistleblowing is genuinely difficult.

Journalists routinely publish information that sources would prefer to remain private — financial records, internal communications, embarrassing personal conduct. This is a cornerstone of accountability journalism. What distinguishes legitimate accountability journalism from malinformation?

Several criteria are relevant:

  1. Public interest: Does the information reveal matters of genuine public concern — corruption, abuse of public trust, public safety risks — or does it merely satisfy prurient curiosity or serve partisan goals?

  2. Proportionality: Is the privacy harm proportionate to the public interest served? Revealing a politician's expense account fraud serves a public interest; revealing their medical history unrelated to their public duties typically does not.

  3. Minimization of harm: Did the publisher take reasonable steps to minimize harm to innocent third parties — redacting names of witnesses, protecting confidential sources, considering the safety implications of publication?

  4. Process and verification: Was the information obtained and verified through rigorous journalistic processes, or through hacking, theft, or manipulation?

These criteria do not always yield clear answers. They are contested in individual cases and across different journalistic traditions. The malinformation category forces us to confront the tension between the values of transparency and privacy, and between the public's right to know and individuals' rights to control their own narratives.


Section 11.5: Seven Types of Mis/Disinformation Content

Beyond the three-category framework, Wardle elaborated a more granular taxonomy of seven types of content, based on the degree of falseness of the content and the intent to deceive. These types range from content that is not false at all (satire/parody) to content that is entirely fabricated, and from content created without harmful intent to content specifically designed to deceive.

Type 1: Satire and Parody

Definition: Content that uses irony, exaggeration, or humor to comment on public figures, events, or social phenomena. Satire and parody are not false in the sense of making sincere truth claims — their creators know the content is not literally true, and the content is typically understood as non-literal by its intended audience.

How it causes harm: Satire causes information disorder when it escapes its original context and is received by audiences who do not recognize it as satirical. A headline from The Onion, screenshotted and shared without attribution, may be indistinguishable from genuine news to audiences unfamiliar with the outlet. The harm is not in the satire itself but in the reception.

Examples: "Pope Francis Shocks World, Endorses Donald Trump for President" (satirical, spread as fact in 2016); "No, Tide Did Not Release an Ad Encouraging Kids to Eat Tide Pods" (satire misread as fact in 2018).

Key distinguishing feature: The creator does not intend to deceive; the harm is a consequence of reception, not creation.

Type 2: Misleading Content

Definition: Content that uses genuine information in a misleading way — selective statistics, misleading framing, omission of essential context — to create a false overall impression. The individual facts cited may be accurate, but the overall message is deceptive.

Examples: A politician's unemployment statistics that count only full-time employment and omit part-time workers and discouraged workers; a headline that accurately quotes the title of a study but misrepresents its conclusions; crime statistics for a minority community presented without demographic context.

Key distinguishing feature: The content is not entirely false, but creates a false impression through selective presentation, framing, or omission.

Type 3: Imposter Content

Definition: Content that mimics or impersonates legitimate sources — news organizations, government agencies, academic institutions, individual experts — to borrow their credibility. The source label is false even if some content is genuine.

Examples: Websites with domain names nearly identical to major news outlets ("ABCnews.com.co" vs. "ABCnews.com"); social media accounts impersonating scientific journals; email communications fraudulently attributed to government agencies; deepfake videos appearing to show real politicians.

Key distinguishing feature: The falseness is in the attributed source rather than (necessarily) the content itself.

Type 4: Fabricated Content

Definition: Entirely invented content presented as factual. This is the most straightforward type of disinformation: the content is wholly false, and its creator knows it.

Examples: Invented quotations attributed to real politicians; false claims about political candidates' personal conduct; invented scientific studies; fabricated video of events that did not occur.

Key distinguishing feature: The content is entirely false and is presented as factual. This is the "fake news" of popular usage, though it represents only one type among seven.

Type 5: False Context

Definition: Genuine content — real photographs, real videos, real statements — presented with false contextual claims about when, where, or why it was created. The content itself is authentic; the contextual information is fabricated.

Examples: A genuine photograph from a 2010 flood in Pakistan recirculated in 2017 as showing "recent" flooding; a real video of street protests in one country presented as evidence of unrest in another; a politician's statement from one context, presented as if made in a different context where it means something different.

Key distinguishing feature: The underlying content is real; the false element is the contextual claims surrounding it. Often detectable through reverse image search.

Type 6: Manipulated Content

Definition: Genuine content that has been digitally altered to change its meaning. Photographs edited to add or remove elements; audio recordings spliced to create statements that were never made; videos edited to remove context or change sequence.

Examples: A photograph of a politician with a crowd, digitally altered to make the crowd appear larger or smaller; a video of a speech edited to remove qualifying statements; audio recordings manipulated to change emotional tone or meaning.

Key distinguishing feature: The original content is genuine, but has been altered. Distinguished from fabricated content (which is wholly invented) and from false context (where the content itself is unaltered).

Type 7: False Connection

Definition: Content where headlines, captions, or other framing elements do not accurately represent the actual content — often called "clickbait." The headline makes a claim that the body of the article does not support, or the image used to accompany a story does not represent what the story describes.

Examples: "Scientists Discover Cure for Cancer" (article describes a very preliminary cell culture study); image of a crying child used to illustrate a story about a country other than where the photograph was taken; video thumbnails that misrepresent the content.

Key distinguishing feature: The disconnect is between different elements of the same content package — typically headline and body, or image and caption — rather than between the content and external reality.

11.5.1 Applying the Seven-Type Taxonomy

These seven types are not mutually exclusive. A single piece of content may exhibit multiple types simultaneously. A manipulated video (Type 6) may also be presented with a false headline (Type 7) and attributed to an impersonated news organization (Type 3). More complex information disorder operations routinely combine multiple types.

The taxonomy also does not map perfectly onto the three-category framework. Satire/parody (Type 1) typically produces misinformation rather than disinformation, since the creators do not intend to deceive. Types 2-4 and 6-7, when created deliberately, are typically disinformation; when spread innocently, misinformation. False context (Type 5) can shade into malinformation when genuine content is deliberately decontextualized to harm a real person.
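The mapping sketched in the paragraph above can be made explicit. The function below is a hypothetical illustration of how the seven types shade into the three broad categories; the numbering follows Section 11.5, and the mapping is indicative only, since the chapter stresses that these boundaries are blurry and that intent is often unknowable.

```python
def broad_category(content_type: int, deliberate: bool,
                   targets_real_subject: bool = False) -> str:
    """Indicative mapping from the seven content types (Section 11.5)
    to the three broad categories of the information disorder framework.

    `deliberate` = the sharer intends to deceive or harm;
    `targets_real_subject` = genuine content is aimed at a real person
    or institution (relevant only to Type 5, false context).
    """
    if content_type == 1:
        # Satire/parody: creators do not intend to deceive, so harm
        # typically arises as misinformation at the point of reception.
        return "misinformation"
    if content_type == 5 and deliberate and targets_real_subject:
        # Genuine content deliberately decontextualized to harm someone
        # shades into malinformation.
        return "malinformation"
    # Types 2-4 and 6-7 (and innocent Type 5): disinformation when
    # created/spread deliberately, misinformation when spread sincerely.
    return "disinformation" if deliberate else "misinformation"
```

As with any such sketch, the hard part in practice is supplying the arguments: content type is observable, but intent usually has to be inferred.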


Section 11.6: The Agents, Messages, and Interpreters Model

11.6.1 Beyond Content Classification

Classification of content types is necessary but not sufficient for understanding information disorder. We also need to understand the process by which information disorder is created, transmitted, and received. Wardle and Derakhshan proposed a complementary model that maps the information disorder process across three stages: Agents, Messages, and Interpreters.

11.6.2 Agents (Creators and Amplifiers)

Who creates information disorder?

Agents of information disorder range from the highly sophisticated to the entirely naive. State intelligence services with multi-million dollar budgets; political operatives with professional communications expertise; commercial actors with financial incentives; ideological activists with passionate commitments; individual trolls motivated by chaos or attention; and ordinary people innocently sharing content they believe to be true.

Crucially, the original creator of disinformation is often not the most important agent. Disinformation campaigns are typically designed to be amplified — to be picked up by real people, influential accounts, and eventually mainstream media. The Russian Internet Research Agency's social media operations depended not on the IRA's own reach (which was relatively modest) but on the amplification provided by real American social media users who engaged with, shared, and reacted to IRA content without knowing its origin.

Motivations of agents:

  • Political influence: Shaping electoral outcomes, legislative debates, or public opinion on policy issues
  • Financial gain: Advertising revenue, short-selling stocks, commercial sabotage of competitors
  • Ideological goals: Advancing religious, political, or social agendas
  • Strategic advantage: Destabilizing adversaries, undermining institutional trust, creating social division
  • Entertainment/chaos: Trolling, lulz, attention-seeking
  • Sincere belief: Agents who genuinely believe the content they create and spread, even if that content is false

Creator vs. Amplifier: The Wardle-Derakhshan model distinguishes between agents who create information disorder content and agents who amplify it. Amplifiers may be paid (bot networks, coordinated inauthentic behavior), motivated (partisan activists, ideological communities), or innocent (ordinary users who share content because they believe it). The distinction matters for both attribution and intervention: addressing paid amplification requires different tools than addressing sincere but mistaken amplification.

11.6.3 Messages (Content Attributes)

What properties of the message affect its spread?

Not all false content spreads equally. The characteristics of a message significantly influence its propagation through information networks:

Format: Video and image content spreads faster and more widely than text on most platforms. Emotional visuals — particularly photographs of human suffering, joy, or outrage — generate higher engagement than equivalent text-based content.

Emotional valence: Content that provokes strong emotional responses — particularly anger and anxiety — spreads faster than neutral content. This has been demonstrated empirically in studies of Twitter/X sharing behavior and in experimental studies of information contagion. Disinformation designers understand this dynamic and craft emotionally charged content deliberately.

Narrative coherence: False content that fits into pre-existing narrative templates ("corrupt elite victimizes ordinary people"; "out-group threatens in-group"; "trusted institution covers up important truth") is more readily accepted and shared than content that contradicts these templates. Disinformation that confirms what people already believe — confirmation bias in action — requires less evidential support to gain acceptance.

Apparent source credibility: Content presented as coming from authoritative sources — scientific institutions, government agencies, credentialed experts — spreads more readily than content from obviously low-credibility sources. This is why imposter content (Type 3) and source fabrication are such common disinformation tactics.

Novelty: Novel information spreads faster than familiar information on social media, independent of its truth value. Research by Vosoughi, Roy, and Aral (2018) found that false news spreads significantly faster than true news on Twitter, in part because false news is more novel.

11.6.4 Interpreters (Audiences and Reception)

How do audiences receive and process information disorder?

The final stage of the Wardle-Derakhshan model concerns the audiences who receive information disorder content. Reception is not passive — audiences bring cognitive frameworks, social contexts, and identity commitments that shape how they interpret incoming information.

Prior beliefs and identity: Information that is consistent with a person's existing beliefs and group identities is more readily accepted than information that challenges those beliefs. This is the basis of motivated reasoning — the tendency to evaluate evidence not by its epistemic quality but by whether it supports what we already believe or want to believe. Political identity is particularly powerful: partisans systematically rate identical information as more credible when attributed to their own party than to the opposing party.

Information environment: The information environment in which content is received matters enormously. Content shared by trusted friends or family members is received differently than content from strangers. Content encountered on a platform perceived as credible generates different reception than identical content on a platform associated with low-quality information. The social context of information receipt — who else is seen to believe it, who endorses or challenges it — shapes individual reception.

Literacy and cognitive resources: Individuals differ in their general cognitive resources and in their specific media literacy skills. Research suggests that analytical thinking style (reflective rather than intuitive) is associated with lower susceptibility to misinformation, independent of education level. Specific skills — source evaluation, lateral reading, reverse image search — reduce susceptibility to specific types of misinformation.

Repetition effects: Repeated exposure to false claims increases their perceived truth, even when people know the claims are false — the "illusory truth effect." Corrections and rebuttals do not fully neutralize this effect, because the repeated exposure to the original false claim, even in a rebuttal context, reinforces the mental association between the claim and truth-feeling.


Section 11.7: Measuring the Scale — Prevalence and Methodological Challenges

11.7.1 How Much Misinformation Is There?

Establishing the actual prevalence of misinformation in the information environment is one of the most methodologically vexed questions in the field. Estimates vary enormously depending on how misinformation is defined, what platforms are studied, what time periods are examined, and what measurement methods are employed.

Several significant findings have emerged from the research literature:

On social media during elections: Studies of the 2016 US election found that the 20 most-shared fabricated election stories on Facebook generated more social media engagement than the 20 most-shared genuine election stories from mainstream news outlets (BuzzFeed analysis). However, this finding measures top-line engagement, not consumption, and does not establish whether misinformation reached more people than genuine news overall.

Twitter false news spread: The landmark Vosoughi, Roy, and Aral (2018) study in Science analyzed 126,000 Twitter rumor cascades and found that false stories were 70% more likely to be retweeted than true stories, and reached audiences of 1,500 people six times faster than true stories. This finding has been widely cited, though critics note that the study examined only stories fact-checked by specific organizations, which may not be representative.

Actual consumption of misinformation: Importantly, Guess, Nagler, and Tucker (2019) and subsequent studies have found that actual consumption of misinformation (as opposed to potential exposure through spreading) may be concentrated among a relatively small portion of the population — particularly older, highly partisan users. A large majority of Americans, in pre-election surveys, reported having seen little or no misinformation. This finding does not mean misinformation is harmless, but it complicates the narrative of universal exposure.

Platform prevalence: The proportion of content on any given platform that constitutes misinformation varies enormously by platform, topic area, time period, and analytical method. Estimates from Twitter content analysis suggest that less than 1% of content is outright fabricated, but much larger percentages may be misleading or decontextualized. WhatsApp and other private messaging platforms, which are harder to study, may host higher concentrations of misinformation in some contexts.

11.7.2 Methodological Challenges

Several deep methodological challenges complicate efforts to measure misinformation prevalence:

Definition operationalization: No research project can measure "misinformation" without first operationalizing a definition. Operationalizations vary enormously across studies, making direct comparison difficult. Some studies focus only on fabricated content; others include misleading content; others include conspiracy theories; others include content rated false by fact-checkers. These different operationalizations yield radically different prevalence estimates.

Selection bias in fact-checking: Studies that measure misinformation by checking claims against fact-checker databases are limited by the fact that fact-checkers do not check a random sample of claims — they check claims that are already prominent, already politically contentious, or already flagged by users. This selection bias means fact-checker-based studies systematically miss misinformation that flies below the radar of fact-checkers.
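The selection-bias problem can be made concrete with a toy simulation (all numbers below are invented for illustration, not empirical estimates): if false claims are far more likely than true ones to be selected for review, the share of false verdicts in a fact-checker database will greatly overstate the share of false claims in the full population.

```python
import random

random.seed(0)

# Hypothetical population: assume 2% of claims are false.
claims = [{"false": random.random() < 0.02} for _ in range(100_000)]

# Fact-checkers review prominent, contested claims; assume false claims
# are selected for review at 20x the rate of true claims.
def selected(claim):
    p = 0.20 if claim["false"] else 0.01
    return random.random() < p

reviewed = [c for c in claims if selected(c)]

true_rate = sum(c["false"] for c in claims) / len(claims)
reviewed_rate = sum(c["false"] for c in reviewed) / len(reviewed)

print(f"false share in population:          {true_rate:.1%}")
print(f"false share among reviewed claims:  {reviewed_rate:.1%}")
# The reviewed sample wildly overstates population prevalence.
```

Under these assumed parameters, the fact-checked sample shows a false-claim rate more than ten times the population rate, which is why prevalence estimates built on fact-checker databases cannot be read as population figures.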

Platform access limitations: The most rigorous research requires access to platform data — not just what is publicly visible, but data on what content is shown to which users, at what frequency, through what algorithmic amplification. Platforms have historically been reluctant to provide such access, and when they have, the research conditions have often been tightly controlled in ways that limit scientific validity.

The denominator problem: Measuring the prevalence of misinformation requires knowing not just how much misinformation exists, but how much information exists in total. Without a reliable denominator, prevalence estimates are unreliable. Given the enormous and constantly growing volume of online content, establishing this denominator is essentially impossible.
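The denominator problem is, at bottom, arithmetic: the same count of misinformation items yields very different "prevalence" figures depending on what universe of content is used as the denominator. A minimal sketch, with all figures invented for illustration:

```python
misinfo_items = 50_000  # hypothetical count of identified misinformation posts

# Three equally defensible denominators give three different prevalence figures:
denominators = {
    "all posts on the platform": 500_000_000,
    "all news-related posts": 25_000_000,
    "all posts on the contested topic": 1_000_000,
}

for label, total in denominators.items():
    print(f"{label}: {misinfo_items / total:.3%}")
```

The same numerator reads as 0.010%, 0.200%, or 5.000% of content depending on the denominator chosen, so any prevalence claim is only as meaningful as its stated denominator.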

Causal inference challenges: Even establishing that people were exposed to misinformation does not establish that the misinformation caused changes in their beliefs or behaviors. Disentangling the causal effect of misinformation from pre-existing beliefs, other information sources, and social influences requires experimental designs that are difficult to implement at scale.

11.7.3 What We Know with Reasonable Confidence

Despite these methodological challenges, several conclusions can be stated with reasonable confidence:

  1. False content spreads faster than true content on social media platforms, and spreads further.
  2. Consumption of misinformation is unequally distributed — some individuals and communities are significantly more exposed than others.
  3. Corrections reduce belief in false claims, but rarely eliminate it entirely, and often do not prevent future spreading of the same claim.
  4. During high-stakes moments (elections, crises, pandemics), misinformation surges and can reach large audiences rapidly.
  5. Emotionally charged misinformation spreads faster than emotionally neutral misinformation.

Section 11.8: Why Taxonomy Matters — Policy and Intervention Implications

11.8.1 Different Problems Require Different Solutions

The most important practical implication of the Wardle-Derakhshan taxonomy is that the three categories — and the seven sub-types — require fundamentally different responses. A policy designed to address disinformation may be actively harmful if applied to misinformation, and vice versa.
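The two-dimensional logic of the framework can be sketched in a few lines of code (an illustrative encoding, not part of the original framework; the fourth category label for true, non-harmful content is our own placeholder):

```python
from enum import Enum

class Category(Enum):
    MISINFORMATION = "misinformation"    # false, no harmful intent
    DISINFORMATION = "disinformation"    # false, harmful intent
    MALINFORMATION = "malinformation"    # true, harmful intent
    ORDINARY = "ordinary information"    # true, no harmful intent

def classify(is_false: bool, harmful_intent: bool) -> Category:
    """Map the two Wardle-Derakhshan dimensions (veracity, intent)
    onto the three master categories of information disorder."""
    if is_false:
        return Category.DISINFORMATION if harmful_intent else Category.MISINFORMATION
    return Category.MALINFORMATION if harmful_intent else Category.ORDINARY

# Example: a fabricated story created to damage a company's stock price
print(classify(is_false=True, harmful_intent=True).value)  # disinformation
```

The point of the sketch is that the category, and therefore the appropriate response, changes when either dimension flips, even though the content itself may look identical.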

Responding to misinformation (false content spread without harmful intent) appropriately focuses on:

  • Education and media literacy, helping people evaluate information more critically
  • Friction-based design interventions, slowing the sharing process to allow for reflection
  • Correction prompts that help people assess accuracy before sharing
  • Prebunking approaches that inoculate audiences against common misinformation techniques

Responding to disinformation (deliberately created false content) appropriately focuses on:

  • Platform enforcement against coordinated inauthentic behavior
  • Attribution and exposure of disinformation actors
  • Regulatory frameworks addressing political advertising transparency
  • Diplomatic and law enforcement responses to state-sponsored operations
  • Source labeling and transparency requirements for content

Responding to malinformation (true content used to harm) appropriately focuses on:

  • Privacy law frameworks
  • Anti-harassment and anti-doxxing enforcement
  • Platform policies around private information publication
  • Supporting victims of targeted harassment campaigns

Conflating these categories produces policy errors. A "fake news" law designed to prohibit false content would be inappropriately applied to malinformation (which is true) and might be weaponized against satire and opinion. Platform removal policies designed for fabricated content might not address misleading content that is technically accurate.

11.8.2 Legal Framework Implications

Legal approaches to information disorder are necessarily shaped by constitutional constraints on speech regulation, which vary significantly across jurisdictions. In the United States, the First Amendment significantly constrains government regulation of false speech, with the Supreme Court having held in United States v. Alvarez (2012) that the government cannot prohibit false statements of fact without narrowly tailored justification.

In Europe, broader defamation laws, privacy regulations (including GDPR), and the EU Digital Services Act provide more regulatory tools. Article 16 of the Digital Services Act requires platforms to implement notice-and-action mechanisms for reporting illegal content, and the Code of Practice on Disinformation creates voluntary commitments for platforms operating in the EU.

The taxonomy matters for legal frameworks because legal tools that work for one category may not work — or may cause harm — when applied to another. Defamation law addresses false statements that harm individuals' reputations; it does not address malinformation (which involves true statements) or pure disinformation (which may not target specific individuals). Privacy law addresses some forms of malinformation but is not designed for disinformation involving public figures.

11.8.3 Platform Governance Implications

Social media platforms have developed their own governance frameworks for information disorder, and the taxonomy shapes these frameworks in important ways. Most major platforms maintain policies against:

  • False information (addressing misinformation and disinformation): Platforms like Facebook/Meta, Twitter/X, and YouTube have developed systems for labeling disputed health and election information, removing demonstrably false content in specific categories (e.g., COVID-19 vaccine safety), and demoting content identified as misinformation by fact-checking partners.

  • Coordinated inauthentic behavior (a disinformation-specific category): Platforms remove networks of accounts operating in a coordinated way while misrepresenting their true identity or origin — essentially, bot networks and fake persona operations characteristic of disinformation campaigns.

  • Privacy violations (addressing some malinformation): Platforms restrict the sharing of private personal information, including home addresses and financial information, and have anti-doxxing policies.

The taxonomy reveals gaps in platform governance. The "misleading content" type (Type 2) is particularly difficult to address through platform enforcement, because the individual facts cited may be accurate and the misleading element lies in framing and omission that is difficult to automate. Similarly, the "false connection" type (Type 7) — misleading headlines — is often caught only after viral spread, when the platform detects the discrepancy between headline engagement and body-content engagement.

11.8.4 Implications for Individual Media Literacy

For individual media consumers, the taxonomy provides a framework for more sophisticated information evaluation. Rather than a binary true/false judgment, the taxonomy suggests a series of questions:

  1. Is this content accurate? (Addressing fabricated content)
  2. Does the content accurately represent the source it's attributed to? (Addressing imposter content)
  3. Does the headline/caption accurately represent the content? (Addressing false connection)
  4. Is the context provided for this content accurate? (Addressing false context)
  5. Has this content been altered from its original form? (Addressing manipulated content)
  6. Does this content, while perhaps technically accurate, create a misleading overall impression? (Addressing misleading content)
  7. Is this content intended to be satirical? (Addressing satire/parody)

These questions are the operational form of the taxonomy applied at the individual level. Media literacy curricula that teach students to ask these specific questions — rather than simply asking "is this true or false?" — provide a more sophisticated and practically useful framework for navigating information disorder.
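As a purely hypothetical encoding, the seven questions above could be written as a triage routine that maps each "problem" answer to the content type it flags (question phrasings are normalized here so that True always means "no problem"; type names follow the list above):

```python
# Each entry pairs a normalized question with the content type a False answer flags.
CHECKLIST = [
    ("Is the content accurate?", "fabricated content"),
    ("Does it accurately represent its attributed source?", "imposter content"),
    ("Does the headline/caption accurately represent the content?", "false connection"),
    ("Is the context provided for the content accurate?", "false context"),
    ("Is the content unaltered from its original form?", "manipulated content"),
    ("Does it avoid creating a misleading overall impression?", "misleading content"),
    ("Is it clearly non-satirical?", "satire/parody out of context"),
]

def triage(answers):
    """Given seven True/False answers (True = no problem on that question),
    return the content types the checklist flags."""
    return [flag for (_, flag), ok in zip(CHECKLIST, answers) if not ok]

# Example: accurate facts under a sensational headline, with misleading framing
print(triage([True, True, False, True, True, False, True]))
# -> ['false connection', 'misleading content']
```

The sketch makes the chapter's point in miniature: a single piece of content can trigger more than one type at once, which a binary true/false judgment would miss.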


Key Terms

Information Disorder: The umbrella term coined by Wardle and Derakhshan to encompass the full range of problems in the information ecosystem, including but not limited to "fake news."

Misinformation: False or inaccurate information spread without harmful intent on the part of the spreader.

Disinformation: Deliberately false information created and spread with the intent to deceive.

Malinformation: Factually accurate information deployed with the intent to cause harm to individuals, organizations, or social institutions.

Dezinformatsiya: Soviet/Russian intelligence term for active measures operations involving the spread of false information; the root of the modern term "disinformation."

Satire/Parody: Content that uses irony and exaggeration for social commentary; can function as misinformation when received out of context.

Fabricated Content: Entirely invented content presented as factual.

False Context: Genuine content presented with false contextual claims.

Manipulated Content: Genuine content that has been digitally altered.

Imposter Content: Content falsely attributed to legitimate sources.

Coordinated Inauthentic Behavior: Platform policy term for organized disinformation operations using fake accounts and coordinated activity.

Illusory Truth Effect: The psychological phenomenon whereby repeated exposure to a false claim increases its perceived truth.

Motivated Reasoning: The tendency to evaluate evidence based on whether it supports pre-existing beliefs rather than on its epistemic quality.

Lateral Reading: A fact-checking technique, recommended by fact-checkers and librarians, involving searching for information about a source from other sources rather than reading only within the original source.


Callout Boxes

Critical Thinking Prompt 11.1: The Intent Problem The Wardle-Derakhshan framework classifies content partly by the intent of the creator or spreader. But intent is often unverifiable. A politician who spreads a false claim about a political opponent might argue they sincerely believed it was true, placing their behavior in the "misinformation" category rather than "disinformation." How should we handle cases where intent cannot be established? Does intent matter for assessing harm? Should it matter for determining legal or platform responses?

Research Spotlight 11.1: Vosoughi, Roy, and Aral (2018) The study "The Spread of True and False News Online," published in Science in 2018, analyzed 126,000 news stories shared by approximately 3 million people on Twitter over a decade. The findings were striking: false stories were 70% more likely to be retweeted than true stories, and false stories spread faster, further, deeper, and more broadly than true stories in every category of information. The researchers found that these effects were driven primarily by human behavior rather than bots — real people were more likely to share false news because it was more novel and emotionally provocative. This finding challenged the assumption that algorithmic amplification was the primary driver of misinformation spread.

Policy Brief 11.1: The EU Digital Services Act and Information Disorder The European Union's Digital Services Act (DSA), fully applicable from February 2024, creates new obligations for very large online platforms (VLOPs) and very large online search engines (VLOSEs) regarding systemic risks, including "the dissemination of illegal content" and "actual or foreseeable negative effects for civic discourse and electoral processes, public security, and public health." Platforms must conduct annual risk assessments and implement risk mitigation measures. The DSA's approach is notable for treating disinformation as a systemic risk to be managed through platform design and governance, rather than purely a content moderation problem.


Discussion Questions

  1. The Wardle-Derakhshan framework classifies content along two primary dimensions: veracity and intent. What other dimensions might be relevant to a comprehensive taxonomy of information disorder? Consider, for example, the harm caused, the scale of spread, the format, or the target audience.

  2. The malinformation category suggests that true information can cause harm. Does this challenge your assumptions about free speech? How should liberal democracies committed to freedom of expression handle the publication of true but harmful information?

  3. Consider the seven content types. Which do you believe is most harmful? Which is most difficult to address through platform governance? Which is most amenable to individual media literacy interventions?

  4. The section on misinformation emphasizes that most people who spread false content do so without harmful intent. How does this change your moral assessment of people who spread misinformation? Should there be a duty of care before sharing information on social media?

  5. The "illusory truth effect" suggests that even corrections can reinforce false beliefs by re-exposing people to the original false claim. If corrections are partially counterproductive, what are the alternatives? Is there a correction strategy that avoids this problem?

  6. Taxonomy-building is a political as well as scientific act — the categories we use shape what we see and what remains invisible. Whose interests are served by the Wardle-Derakhshan taxonomy? Are there aspects of information disorder that it obscures or marginalizes?


Summary

This chapter introduced the Wardle-Derakhshan information disorder framework, the most influential taxonomy in the academic study of misinformation. The framework organizes information disorder along two primary dimensions — veracity and intent — yielding three master categories: misinformation (false, no harmful intent), disinformation (false, harmful intent), and malinformation (true, harmful intent). Within these categories, seven more granular content types — satire/parody, misleading content, imposter content, fabricated content, false context, manipulated content, and false connection — allow for more precise classification of specific content.

The Actors-Messages-Interpreters model complements the content taxonomy by mapping the process through which information disorder is created, transmitted, and received, emphasizing that understanding information disorder requires analyzing not just content but the agents who create it, the properties that drive its spread, and the cognitive and social contexts in which audiences receive it.

Measuring the scale of misinformation remains methodologically challenging, but several robust findings have emerged: false content spreads faster than true content; consumption is unevenly distributed; and corrections are only partially effective.

Most importantly, the taxonomy has practical stakes: different types of information disorder require fundamentally different responses from individuals, platforms, and policymakers. The precision offered by rigorous taxonomy is not an academic luxury but a practical necessity for effective intervention.


Next: Chapter 12 examines propaganda — its historical development, classic techniques, and modern digital manifestations — building on the foundational taxonomy established in this chapter.