Appendix E: Glossary

This glossary defines 150+ key terms used throughout the textbook, organized alphabetically. Each entry includes the term, a definition of two to four sentences, and a chapter reference indicating where the term is first introduced or most fully discussed. Terms appearing in multiple chapters are referenced at their primary treatment.


A

Accuracy nudge: A brief prompt asking readers to consider the accuracy of content before sharing it online. Studies by Pennycook, Rand, and colleagues found that accuracy nudges reduce the sharing of misinformation without suppressing sharing overall. (Chapter 35)

Ad hominem fallacy: A logical fallacy in which the character, motives, or circumstances of the person making an argument are attacked instead of engaging with the argument itself. Though relevant character information can sometimes be germane, pure ad hominem attacks are a form of distraction that circumvents rational discourse. (Chapter 25)

Adjacency matrix: A square matrix representation of a graph G = (V, E) where entry A_ij = 1 if an edge exists between node i and node j, and A_ij = 0 otherwise. Weighted variants allow A_ij to represent edge weight (e.g., retweet frequency). (Chapter 23)
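
As a minimal sketch of this definition, the following Python builds an adjacency matrix for a small undirected graph. The node and edge lists are hypothetical, for illustration only:

```python
# Build the adjacency matrix A for a small undirected graph G = (V, E).
# Nodes and edges are hypothetical, for illustration only.
nodes = ["a", "b", "c", "d"]
edges = [("a", "b"), ("b", "c"), ("c", "d")]

idx = {node: i for i, node in enumerate(nodes)}
n = len(nodes)
A = [[0] * n for _ in range(n)]
for u, v in edges:
    A[idx[u]][idx[v]] = 1
    A[idx[v]][idx[u]] = 1  # undirected graphs yield a symmetric matrix

print(A)
```

A weighted variant would store, for example, a retweet count in place of 1.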

Algorithmic amplification: The process by which platform recommendation algorithms preferentially surface and promote certain content, increasing its reach beyond what organic sharing alone would achieve. Amplification can disproportionately benefit emotionally arousing or outrage-inducing content. (Chapter 8)

Anchoring bias: A cognitive bias in which individuals rely too heavily on the first piece of information encountered (the "anchor") when making subsequent judgments. Initial exposure to a false claim can anchor subsequent evaluations even after correction. (Chapter 4)

Astroturfing: The practice of creating the false impression of grassroots public support for a position, product, or person. Named after the artificial turf brand, astroturfing disguises the organized or corporate origin of what appears to be spontaneous citizen activity. (Chapter 20)

Attribution error (fundamental): The tendency to overattribute others' behavior to character or disposition rather than situational factors. In misinformation contexts, this appears as attributing belief in false claims entirely to character flaws rather than recognizing the situational and cognitive pressures that make anyone susceptible. (Chapter 4)

Availability heuristic: A mental shortcut in which the ease of recalling examples is used as a proxy for the frequency or likelihood of events. Vivid, emotionally charged misinformation can make false scenarios cognitively available, distorting risk perception. (Chapter 4)


B

Backfire effect: The originally reported but now contested finding that corrections to misinformation could paradoxically strengthen belief in the original falsehood among strongly committed believers. More recent and rigorous replications suggest the backfire effect is rare; most people do update toward truth when corrected. (Chapter 3)

Base rate neglect: The tendency to ignore statistical background frequency (base rate) when evaluating individual-case evidence. Even a highly accurate test for a rare disease yields mostly false positives among its positive results, because the test's small false-positive rate, applied to the large unaffected population, outweighs the true positives. (Chapter 28)
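
The arithmetic behind this entry is easy to verify with Bayes' rule. The prevalence, sensitivity, and specificity figures below are hypothetical:

```python
# Bayes' rule with hypothetical figures: prevalence 0.1%,
# sensitivity 99%, specificity 95% (false-positive rate 5%).
prevalence = 0.001
sensitivity = 0.99          # P(positive | disease)
false_positive_rate = 0.05  # 1 - specificity

p_positive = (sensitivity * prevalence
              + false_positive_rate * (1 - prevalence))
p_disease_given_positive = sensitivity * prevalence / p_positive

# Despite the impressive test statistics, under 2% of positives are real.
print(round(p_disease_given_positive, 3))
```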

BERT (Bidirectional Encoder Representations from Transformers): A transformer-based language model introduced by Google in 2018, pre-trained on vast text corpora using masked language modeling and next-sentence prediction. BERT's bidirectional context processing made it transformative for NLP classification tasks, including misinformation detection. (Chapter 22)

Betweenness centrality: A graph centrality measure quantifying how often a node lies on the shortest path between other pairs of nodes. Nodes with high betweenness centrality act as brokers or bridges between communities and are critical for information flow control. (Chapter 23)

Bots (social media): Automated accounts on social media platforms that post, like, retweet, or follow at rates impossible for humans, often to artificially amplify messages, create the illusion of consensus, or harass users. Sophisticated bots hybridize automation with human oversight ("cyborgs"). (Chapter 21)

Bounded rationality: The concept from Herbert Simon that human reasoning is limited by cognitive constraints, available information, and time pressure. Bounded rationality helps explain why intelligent people make predictable reasoning errors, including susceptibility to misinformation. (Chapter 3)

Brandolini's Law: An informal observation (also called the "Bullshit Asymmetry Principle") stating that the effort required to refute misinformation is an order of magnitude greater than the effort required to produce it. This asymmetry is a fundamental challenge for fact-checkers and corrections researchers. (Chapter 19)


C

Cherry-picking: The selective presentation of evidence that supports a conclusion while ignoring contradictory evidence. Cherry-picking is a common technique in climate change denial, vaccine skepticism, and other pseudo-scientific claims. (Chapter 25)

CIB (Coordinated Inauthentic Behavior): Meta's term for campaigns using fake accounts, pages, or groups to manipulate public discourse in a deceptive manner. CIB violates platform policies regardless of whether the underlying message is true or false. (Chapter 21)

Clickbait: Headlines or thumbnails crafted to generate curiosity or emotional arousal, maximizing clicks at the potential expense of accuracy or relevance. Clickbait exploits the "curiosity gap" between what is known and unknown. (Chapter 7)

ClaimBuster: An automated system that scores sentences by their "check-worthiness" — the likelihood that they contain factual claims warranting verification. ClaimBuster is used in automated fact-check triage systems. (Chapter 22)

Cognitive dissonance: The psychological discomfort experienced when holding two conflicting beliefs or when new information contradicts existing beliefs. People reduce dissonance by rejecting new information rather than updating beliefs, contributing to misinformation persistence. (Chapter 3)

Community notes (formerly Birdwatch): Twitter/X's community-driven fact-checking system in which users collaboratively write and evaluate contextual notes on potentially misleading tweets. Notes require agreement from users with diverse political viewpoints before being shown publicly. (Chapter 34)

Confirmation bias: The tendency to search for, interpret, and favor information that confirms pre-existing beliefs. Confirmation bias is among the most extensively documented cognitive biases and is a primary driver of partisan misinformation acceptance. (Chapter 4)

Conspiracy theory: An explanatory framework attributing events to secret plots by powerful, malevolent actors. Conspiracy theories often share structural features including unfalsifiability, proportionality bias (big events require big causes), and persecution narratives. (Chapter 13)

Content moderation: The practice of monitoring, reviewing, and enforcing community standards on user-generated content. Content moderation can be human-reviewed, automated, or hybrid, and involves complex trade-offs between harmful content removal and freedom of expression. (Chapter 33)

Correlation: A statistical measure of the degree to which two variables change together. Correlation does not imply causation; many observed correlations in social science reflect confounding variables or reverse causation. (Chapter 28; Appendix A)

Curation (algorithmic): The process by which platforms algorithmically select which content to show each user from the total available pool. Algorithmic curation shapes information exposure more powerfully than user choices alone. (Chapter 8)


D

Dark patterns: User interface design choices that nudge users toward actions (e.g., sharing before reading, staying on platform) not necessarily in their interest. Dark patterns in social media design can inadvertently facilitate misinformation spread. (Chapter 8)

Deepfake: Synthetic media — images, audio, or video — generated using deep learning techniques (particularly generative adversarial networks) to depict events or statements that did not occur. Detection of deepfakes is an active area of AI research. (Chapter 10)

Degree centrality: A basic graph centrality measure equal to the number of edges incident to a node, normalized by the maximum possible. In directed networks, in-degree and out-degree are distinguished. (Chapter 23)

Deliberation: A form of public reasoning in which participants exchange reasons, listen to others, and potentially revise their views. Deliberative democracy theory holds that legitimate collective decisions emerge from authentic deliberative processes, which misinformation can corrupt. (Chapter 30)

Disinformation: False information that is deliberately created and spread with the intent to deceive, often for political, financial, or strategic advantage. Disinformation is distinct from misinformation (unintentionally false) and malinformation (true but used to harm). (Chapter 1)

DSA (Digital Services Act): European Union legislation enacted in 2022 that regulates digital intermediaries and platforms, imposing transparency requirements, due diligence obligations, and crisis response protocols on very large online platforms. The DSA establishes systemic accountability for content moderation and algorithmic systems. (Chapter 36)

Dual-process theory: A framework in cognitive psychology distinguishing between fast, automatic, intuitive reasoning (System 1) and slow, deliberate, analytical reasoning (System 2). Both systems can produce errors; misinformation often exploits System 1 processing. (Chapter 3)


E

Echo chamber: An informational environment in which people are predominantly exposed to views aligned with their own, reinforcing existing beliefs through repetition and social validation. Echo chambers may be formed through algorithmic curation, social network homophily, or active choice. (Chapter 9)

Effect size: A quantitative measure of the practical significance of a research finding, independent of sample size. Common effect sizes include Cohen's d (for mean differences), Pearson r (for correlations), and odds ratio (for categorical outcomes). (Chapter 28; Appendix A)
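
Cohen's d, the first effect size named above, can be computed directly from two samples using the pooled standard deviation. The two groups below are hypothetical data, for illustration only:

```python
import statistics

# Cohen's d: standardized mean difference, divided by the pooled
# standard deviation. Both groups are hypothetical.
group_a = [4.1, 5.0, 4.7, 5.3, 4.9]
group_b = [3.2, 3.9, 3.5, 4.0, 3.6]

mean_a, mean_b = statistics.mean(group_a), statistics.mean(group_b)
var_a, var_b = statistics.variance(group_a), statistics.variance(group_b)
n_a, n_b = len(group_a), len(group_b)

pooled_sd = (((n_a - 1) * var_a + (n_b - 1) * var_b)
             / (n_a + n_b - 2)) ** 0.5
d = (mean_a - mean_b) / pooled_sd
print(round(d, 2))
```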

Emotional reasoning: Making logical inferences from emotional states rather than evidence — "I feel strongly that this is true, therefore it must be." Emotionally charged content exploits this tendency to promote belief without adequate evidence. (Chapter 25)

Entropy (information): A measure from information theory of the uncertainty or unpredictability of a random variable. In media research, entropy quantifies the diversity of an information diet — low entropy indicates narrow, predictable content exposure. (Chapter 23; Appendix A)
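
Shannon entropy in bits, as applied in the media-diet sense above, takes only a few lines. The outlet distributions are hypothetical:

```python
import math

# Shannon entropy (bits) of a probability distribution over news outlets.
def entropy(p):
    return -sum(x * math.log2(x) for x in p if x > 0)

narrow = [0.97, 0.01, 0.01, 0.01]   # nearly all attention on one outlet
diverse = [0.25, 0.25, 0.25, 0.25]  # attention spread evenly

print(round(entropy(narrow), 3))   # low entropy: a predictable diet
print(entropy(diverse))            # 2.0 bits, the maximum for 4 outlets
```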

Epistemic authority: The recognized right or legitimacy of a source to make credible knowledge claims in a domain. Misinformation often undermines legitimate epistemic authorities (scientists, journalists, public health officials) or exploits the appearance of authority through false credentials. (Chapter 1)

Epistemic cowardice: The practice of communicating in a deliberately vague or noncommittal way to avoid controversy, withholding relevant knowledge or judgment when the situation calls for it. Journalists and academics can exhibit epistemic cowardice through excessive "both sides" framing. (Chapter 41)


F

Fact-checking: The practice of investigating claims made in public discourse and publishing verdicts on their accuracy. Professional fact-checkers typically use primary sources, expert consultation, and transparent methodology. IFCN signatories commit to nonpartisanship and transparency. (Chapter 19)

False balance: The journalistic practice of giving equal coverage to two sides of an issue regardless of the actual weight of evidence, thereby creating the misleading impression that a scientific or factual question is more contested than it is. (Chapter 18)

False consensus effect: The tendency to overestimate the proportion of people who share one's own beliefs, attitudes, or behaviors. People who believe false claims often assume their beliefs are widely held, which can reinforce confidence in those beliefs. (Chapter 4)

False dilemma: A logical fallacy presenting only two options as if they are the only possibilities when other alternatives exist. Also called a "false dichotomy" or "either-or fallacy." (Chapter 25)

FEVER: Fact Extraction and VERification — a benchmark dataset for automated claim verification against Wikipedia evidence. See Appendix D. (Chapter 22)

Filter bubble: A state of intellectual isolation resulting from personalized algorithmic curation, in which a user encounters only content that reinforces pre-existing views. Coined by Eli Pariser (2011); the empirical evidence for strong filter bubble effects from algorithms is more mixed than popular discourse suggests. (Chapter 9)

Framing effect: The phenomenon in which the same information presented in different ways produces different responses. A vaccine presented as "95% effective" is evaluated more favorably than one presented as "failing in 5% of cases" despite identical meaning. (Chapter 3)


G

GAN (Generative Adversarial Network): A class of deep learning architectures in which a generator network creates synthetic content and a discriminator network attempts to distinguish real from synthetic content. The two networks train simultaneously in an adversarial process, producing increasingly realistic synthetic text, images, audio, and video. (Chapter 10)

Gish gallop: A debate tactic of overwhelming opponents with a rapid succession of many weak arguments, making comprehensive rebuttal impractical in the available time. Named after creationist debater Duane Gish. (Chapter 25)

GDELT: The Global Database of Events, Language, and Tone — a petabyte-scale open dataset monitoring global news media continuously. (Appendix D)

Graph theory: The mathematical study of graphs (networks of nodes and edges). Graph theory provides the formal foundation for network analysis of information ecosystems, including centrality, community structure, and diffusion modeling. (Chapter 23)


H

Hasty generalization: A logical fallacy in which a broad conclusion is drawn from an insufficient or unrepresentative sample. Misinformation frequently uses single anecdotes to generate sweeping generalizations about vaccines, immigrants, or other groups. (Chapter 25)

Heuristic: A mental shortcut or rule of thumb that enables fast decision-making at the cost of occasional error. Heuristics are adaptive in many environments but create systematic vulnerabilities to misinformation. (Chapter 3)

Homophily: The tendency of individuals to associate with others who are similar in attributes such as political views, demographics, or interests. Network homophily contributes to ideological segregation and echo chamber formation. (Chapter 9)

Hyperpolitics: A media environment in which an increasingly wide range of topics become politicized, causing audiences to evaluate information primarily through partisan identity rather than evidence. (Chapter 30)


I

IFCN (International Fact-Checking Network): A network of fact-checking organizations housed at the Poynter Institute that certifies organizations meeting standards of nonpartisanship, transparency, and corrections policy. IFCN certification is used by platforms to identify trusted fact-checking partners. (Chapter 19)

Illusory truth effect: The finding that repeated exposure to a statement increases its perceived truth, even when the statement is labeled as false and even among people who initially knew it to be false. The effect is driven by processing fluency — familiar information feels true. (Chapter 3)

Inoculation theory: A persuasion theory analogizing cognitive resistance to biological vaccination — exposing people to weakened forms of misinformation arguments, along with refutation, builds resilience against subsequent exposure to the full-strength claim. Pioneered by McGuire (1964) and extended by van der Linden and colleagues. (Chapter 35)

Information cascade: A social dynamics phenomenon in which individuals observe others' behavior (sharing, liking, endorsing content) and rationally imitate it, potentially spreading misinformation independent of its truth value. (Chapter 9)

Information disorder: A framework developed by Claire Wardle and Hossein Derakhshan distinguishing three types of problematic information: misinformation (false, no harmful intent), disinformation (false, harmful intent), and malinformation (true, harmful intent). (Chapter 1)

IRA (Internet Research Agency): A Russian government-linked organization based in St. Petersburg that conducted large-scale social media influence operations targeting the United States and other countries between at least 2014 and 2018. The IRA's operations were documented in Mueller Report Vol. I and subsequent Senate Intelligence Committee reports. (Chapter 21)


J

Journalistic norms: The professional standards and values guiding responsible journalism, including accuracy, verification, balance, fairness, independence, and minimizing harm. Misinformation often mimics the surface appearance of journalism while violating its norms. (Chapter 18)


K

KL divergence (Kullback-Leibler divergence): A measure of how much one probability distribution differs from a reference distribution. In information ecosystem research, KL divergence can measure how far an individual's media diet diverges from some reference norm. (Chapter 23; Appendix A)
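
A minimal sketch of the media-diet use described above, with both distributions hypothetical and defined over the same four outlets:

```python
import math

# D(P || Q) in bits: how far diet P diverges from reference Q.
def kl_divergence(p, q):
    return sum(pi * math.log2(pi / qi) for pi, qi in zip(p, q) if pi > 0)

user_diet = [0.7, 0.2, 0.05, 0.05]
reference = [0.25, 0.25, 0.25, 0.25]

print(round(kl_divergence(user_diet, reference), 3))
```

Note that KL divergence is asymmetric: D(P||Q) and D(Q||P) generally differ, which is why the reference distribution must be chosen deliberately.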


L

Laplacian matrix: In graph theory, the matrix L = D − A, where D is the degree matrix and A is the adjacency matrix. The Laplacian's eigenvalues reveal structural properties of the graph including the number of connected components and information about community structure. (Chapter 23; Appendix F)

Lateral reading: A verification strategy in which readers leave a website to search for information about it from other sources, rather than reading deeper within the site. Adopted from professional fact-checkers, lateral reading is among the most effective media literacy strategies identified by the Stanford History Education Group. (Chapter 19)

LIAR dataset: A benchmark dataset of 12,836 political statements from PolitiFact with six-level truthfulness labels, widely used in automated misinformation detection research. (Chapter 22; Appendix D)

Logical fallacy: A pattern of reasoning that appears valid but contains a flaw that invalidates the conclusion. Recognizing logical fallacies is a core component of critical thinking education. (Chapter 25)


M

Malinformation: True information used with intent to cause harm, such as doxxing, sharing private information, or weaponizing accurate but damaging information about individuals. One of the three categories in the information disorder framework. (Chapter 1)

Media literacy: The ability to access, analyze, evaluate, create, and act using all forms of communication. Modern media literacy frameworks extend beyond traditional print to digital, social, and algorithmic media. (Chapter 17)

MIL (Media and Information Literacy): UNESCO's framework for media literacy that encompasses both media literacy (traditional) and information literacy (digital/library science). MIL emphasizes civic competencies and human rights dimensions of information access. (Chapter 17)

Misinformation: False or inaccurate information spread without deliberate intent to deceive. The lack of intent distinguishes misinformation from disinformation, though in practice the intent of the original creator may differ from the intent of secondary sharers. (Chapter 1)

Modularity (Q): A measure of the quality of a partition of a network into communities. Modularity compares the density of edges within communities to what would be expected in a random graph with the same degree sequence. Higher modularity indicates stronger community structure; in empirical networks, values above roughly 0.3 are typically considered substantial. (Chapter 23)

Motivated reasoning: The tendency to process information in a manner that arrives at desired conclusions rather than accurate ones. Motivated reasoning is not merely wishful thinking — it employs real logical operations, but selectively. (Chapter 3)

Mutual information: An information-theoretic measure of the amount of information shared between two random variables. In media research, mutual information quantifies how predictable one variable is from another — e.g., how predictable a person's news diet is from their political affiliation. (Chapter 23; Appendix A)


N

NAMLE (National Association for Media Literacy Education): The leading US organization for media literacy education, providing frameworks, standards, and professional development. NAMLE's core principles emphasize inquiry, context, and constructivist learning. (Chapter 17)

NER (Named Entity Recognition): An NLP task in which a model identifies and classifies named entities in text — persons, organizations, locations, dates, etc. NER is used in claim extraction and political actor tracking. (Chapter 22)

NetzDG (Netzwerkdurchsetzungsgesetz): Germany's Network Enforcement Act (2017), which requires social media platforms with over two million users in Germany to remove clearly illegal content within 24 hours (or 7 days for complex cases). NetzDG was among the first national content moderation laws and has influenced subsequent EU-level regulation. (Chapter 36)

Network centrality: A family of measures quantifying the importance of a node within a network, including degree centrality, betweenness centrality, closeness centrality, and PageRank. Different centrality measures capture different notions of structural importance. (Chapter 23)

NLP (Natural Language Processing): The field of computer science and linguistics concerned with enabling computers to understand, interpret, and generate human language. NLP techniques underpin most automated misinformation detection systems. (Chapter 22)

Nudge: A behavioral intervention that alters people's behavior in a predictable way without restricting options or significantly changing economic incentives. Accuracy nudges, friction, and warning labels are examples used in misinformation contexts. (Chapter 35)


O

Online harassment: Sustained, targeted abusive behavior toward individuals online. Disinformation campaigns often weaponize harassment to silence journalists, researchers, and public health officials who counter false narratives. (Chapter 39)


P

PageRank: An algorithm developed by Larry Page and Sergey Brin that ranks nodes by the number and quality of incoming links, iteratively treating each node's rank as a function of its neighbors' ranks. Used to model information authority in networks and as a feature in misinformation detection. (Chapter 23)
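
The iterative idea in this entry can be sketched as power iteration on a tiny directed graph. The link structure is hypothetical, and the sketch assumes every node has at least one outgoing link (no dangling nodes):

```python
# PageRank by power iteration on a small hypothetical link graph.
links = {"a": ["b", "c"], "b": ["c"], "c": ["a"], "d": ["c"]}
nodes = list(links)
damping = 0.85  # standard damping factor

rank = {node: 1 / len(nodes) for node in nodes}
for _ in range(50):  # iterate until ranks stabilize
    new_rank = {node: (1 - damping) / len(nodes) for node in nodes}
    for src, outs in links.items():
        share = damping * rank[src] / len(outs)  # split rank among out-links
        for dst in outs:
            new_rank[dst] += share
    rank = new_rank

print({node: round(r, 3) for node, r in rank.items()})
```

Node "c" ends up highest (three incoming links), while "d", which nothing links to, receives only the baseline rank.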

Pareidolia: The tendency to perceive meaningful patterns (faces, figures) in random or ambiguous data. Extended to pattern-seeking generally, it explains why people find significance in coincidental events, supporting conspiracy thinking. (Chapter 13)

Partisan media: News organizations that deliberately frame coverage to favor a particular political party, ideology, or worldview. Partisan media ecosystems can create self-reinforcing information environments where misinformation aligned with partisan interests spreads without correction. (Chapter 7)

Phishing: A cyber-attack technique using deceptive messages (often email) designed to trick recipients into revealing credentials or installing malware. Phishing campaigns frequently incorporate disinformation narratives to increase believability. (Chapter 39)

PHEME dataset: A Twitter dataset of rumour conversations around nine breaking news events with stance and veracity labels. (Chapter 22; Appendix D)

POFMA (Protection from Online Falsehoods and Manipulation Act): Singapore's 2019 law giving government ministers the power to issue correction and takedown orders to online platforms and individuals spreading falsehoods deemed contrary to the public interest. POFMA has been criticized by press freedom organizations for its breadth of discretion. (Chapter 36)

Post-truth: A cultural and epistemic condition in which objective facts are less influential in shaping public opinion than appeals to emotion and personal belief. The term was Oxford Dictionaries' Word of the Year in 2016. (Chapter 1)

Prebunking: The proactive delivery of inoculation content before exposure to misinformation, in contrast to debunking (correcting after exposure). Prebunking gives individuals advance warning of manipulation techniques. (Chapter 35)

Precision (NLP/ML): The proportion of items classified as positive that are truly positive. High precision means few false positives — important when the cost of incorrectly labeling true content as misinformation is high. (Chapter 22)

Proportionality bias: The cognitive tendency to assume that major events must have major causes, making conspiracy explanations for large events more intuitively appealing than small causes. (Chapter 13)

Proxy war (information): The use of third-party actors, media outlets, or unwitting citizens to amplify strategic narratives, obscuring the original sponsor. Russia's use of RT and Sputnik and the activities of the Internet Research Agency exemplify proxy information operations. (Chapter 21)


R

Radicalization: The process by which individuals adopt increasingly extreme ideological positions, often associated with exposure to radical content online. Recommendation algorithms may contribute to radicalization through incremental escalation toward more extreme content. (Chapter 15)

Recall (sensitivity): In classification, the proportion of actual positives that are correctly identified. High recall means few false negatives — important when failing to detect misinformation carries high social costs. (Chapter 22)

Refutation: The explicit correction of a false belief, providing the accurate information. Effective refutation explains not only what is false but why it is false and what the truth is. (Chapter 35)

Relative risk: The ratio of the probability of an outcome in an exposed group to the probability in an unexposed group. Misinformation frequently misrepresents relative risks, particularly in health contexts where absolute risk differences are more meaningful. (Chapter 28; Appendix A)

ROC AUC (Receiver Operating Characteristic — Area Under the Curve): A metric for binary classifier performance measuring the probability that a randomly chosen positive case is ranked higher than a randomly chosen negative case. AUC = 0.5 is chance; AUC = 1.0 is perfect. (Chapter 22)
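
The three classifier metrics defined in this section (precision, recall, and ROC AUC) can all be computed directly from labels and scores. The labels and scores below are hypothetical:

```python
# Precision, recall, and ROC AUC for a hypothetical misinformation
# classifier. labels: 1 = misinformation, 0 = legitimate content.
labels = [1, 1, 1, 0, 0, 0, 0, 1]
scores = [0.9, 0.8, 0.4, 0.7, 0.3, 0.2, 0.1, 0.6]
threshold = 0.5
preds = [1 if s >= threshold else 0 for s in scores]

tp = sum(1 for y, p in zip(labels, preds) if y == 1 and p == 1)
fp = sum(1 for y, p in zip(labels, preds) if y == 0 and p == 1)
fn = sum(1 for y, p in zip(labels, preds) if y == 1 and p == 0)

precision = tp / (tp + fp)  # high when false positives are few
recall = tp / (tp + fn)     # high when false negatives are few

# AUC as the probability that a random positive outscores a random
# negative (no tied scores in this toy data).
pos = [s for y, s in zip(labels, scores) if y == 1]
neg = [s for y, s in zip(labels, scores) if y == 0]
auc = sum(1 for p in pos for n in neg if p > n) / (len(pos) * len(neg))

print(precision, recall, auc)
```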


S

Section 230: A provision of the US Communications Decency Act (1996) providing internet platforms immunity from liability for most user-generated content, and protecting good-faith content moderation decisions. Section 230 has been debated as both enabling and constraining platform responses to misinformation. (Chapter 36)

SIFT method: A four-step lateral reading framework developed by Mike Caulfield: Stop, Investigate the source, Find better coverage, and Trace claims to their original context. SIFT provides a practical heuristic for digital information verification. (Chapter 19)

SIR model: A compartmental epidemiological model dividing a population into Susceptible (S), Infected (I), and Recovered (R) compartments. The SIR model has been adapted to model misinformation spread, with "infected" representing those who believe a false claim. (Chapter 16)
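
A minimal discrete-time sketch of the adaptation described above, where "infected" means actively sharing a false claim. The transmission and recovery parameters are hypothetical, for illustration only:

```python
# Discrete-time SIR sketch: S = not yet exposed, I = actively sharing
# a false claim, R = stopped sharing. Parameters are hypothetical.
beta, gamma = 0.3, 0.1     # transmission and recovery rates per step
s, i, r = 0.99, 0.01, 0.0  # fractions of the population

for _ in range(200):
    new_infections = beta * s * i
    recoveries = gamma * i
    s -= new_infections
    i += new_infections - recoveries
    r += recoveries

print(round(s, 3), round(i, 3), round(r, 3))  # s + i + r stays 1
```

With these parameters the "epidemic" burns through most of the population before dying out, illustrating why early intervention matters in both disease and rumor models.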

Slippery slope fallacy: The claim that a relatively small first step will inevitably lead to a chain of events resulting in a significant negative outcome, without evidence that this chain is likely. (Chapter 25)

Social proof: A psychological phenomenon in which people assume that others' actions reflect correct behavior, particularly in uncertain situations. Displaying high share or like counts leverages social proof to make content appear credible. (Chapter 4)

Source monitoring error: A memory failure in which the origin of information is forgotten while the information itself is retained. This contributes to the persistence of fact-checked misinformation, since the correction may be forgotten while the original claim lingers. (Chapter 3)

Stochastic parrots: A term from Bender et al. (2021) describing large language models that generate plausible text by pattern-matching without genuine understanding. The metaphor highlights risks of treating LLM output as authoritative knowledge. (Chapter 10)

Strawman fallacy: Misrepresenting an opponent's argument in a weaker or more extreme form, then refuting that distorted version. Strawman arguments are common in political discourse and misinformation campaigns that caricature opposing positions. (Chapter 25)


T

TF-IDF (Term Frequency-Inverse Document Frequency): A weighting scheme for text features that rewards terms frequent in a document but rare across the corpus, capturing distinctive content. TF-IDF is a foundational feature engineering technique for NLP classification tasks. (Chapter 22)
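
A minimal from-scratch sketch of the weighting scheme, using a toy corpus of hypothetical documents:

```python
import math

# Minimal TF-IDF: term frequency in a document times the log-scaled
# inverse document frequency across the corpus. Documents are hypothetical.
docs = [
    "vaccine claim spreads online",
    "vaccine study published online",
    "network analysis of claim spread",
]
tokenized = [d.split() for d in docs]

def tf_idf(term, doc):
    tf = doc.count(term) / len(doc)
    df = sum(1 for d in tokenized if term in d)
    idf = math.log(len(tokenized) / df)  # rarer across corpus => heavier
    return tf * idf

# "vaccine" appears in 2 of 3 docs; "study" in only 1, so it weighs more.
weights = {w: round(tf_idf(w, tokenized[1]), 3) for w in tokenized[1]}
print(weights)
```

Production systems typically add smoothing and normalization on top of this basic formula.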

Transparency: The quality of being open and observable about processes, methods, funding, ownership, and decisions. Transparency is a core norm for both journalism and AI systems, as opacity facilitates manipulation. (Chapter 41)

Trolling: Deliberately inflammatory, offensive, or disruptive online behavior intended to provoke emotional reactions and derail productive discourse. Troll farms employ paid workers to systematically troll discussions of political or social topics. (Chapter 21)

Troll farm: An organization employing workers to create and operate fake online personas for purposes of political influence, propaganda, or harassment. The IRA is the most documented example. (Chapter 21)

Trust: The willingness to be vulnerable to another party based on positive expectations about their behavior. Epistemic trust — trust in others as sources of knowledge — is the foundation of functional information ecosystems. (Chapter 6)

Truth sandwich: A corrective communication strategy that leads and ends with accurate information, mentioning the false claim only briefly in between, so that repetition does not reinforce the falsehood. The strategy was popularized by linguist George Lakoff. (Chapter 35)


U

Uncertainty: The state of having incomplete or imprecise information about a state of the world. Science communicators face the challenge of conveying genuine scientific uncertainty without enabling the exploitation of that uncertainty by denialists. (Chapter 18)


V

Verification: The process of confirming the accuracy of a claim against available evidence. Professional verification involves triangulating across independent sources, consulting domain experts, and applying methods appropriate to the claim type. (Chapter 19)

Viral misinformation: False information that spreads rapidly through social networks at rates far exceeding average content, often driven by emotional resonance, novelty, or partisan identity. Vosoughi et al. (2018) documented that false news spreads faster and farther than true news on Twitter. (Chapter 9)

Vulnerability: In the context of misinformation susceptibility, individual or situational factors that increase the likelihood of accepting false claims, including cognitive load, emotional arousal, time pressure, and identity threats. (Chapter 38)


W

Wardle-Derakhshan framework: See "Information disorder" above. The framework proposed by Claire Wardle and Hossein Derakhshan in their 2017 Council of Europe report, distinguishing three types of information disorder by falseness and intent. (Chapter 1)

Whataboutism: A rhetorical strategy of deflecting criticism by pointing to a comparable action of an opponent — "What about when you/they did X?" Whataboutism avoids engaging with the original criticism by substituting a counter-accusation. (Chapter 25)

Wisdom of crowds: The phenomenon in which the aggregated judgments of many independent individuals can exceed expert performance. Crowd-sourced fact-checking (e.g., Community Notes) attempts to harness wisdom of crowds while mitigating coordinated manipulation. (Chapter 34)


Z

Zero-shot learning: A machine learning paradigm in which a model classifies items from categories not seen during training, using only class descriptions. Large language models enable zero-shot claim classification without labeled training data. (Chapter 22)

Z-score: A standardized score indicating the number of standard deviations a value lies from the mean of its distribution. Z-scores enable comparison of values across different scales. (Chapter 28; Appendix A)
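
As a small worked example of this definition, the daily share counts below are hypothetical, with one viral outlier day:

```python
import statistics

# Z-scores for hypothetical daily share counts: how unusual is each
# day relative to the sample mean, in standard-deviation units?
shares = [120, 135, 110, 128, 640]  # one viral outlier day

mean = statistics.mean(shares)
sd = statistics.stdev(shares)  # sample standard deviation (n - 1)
z_scores = [round((x - mean) / sd, 2) for x in shares]
print(z_scores)  # the outlier stands well above the rest
```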