Glossary

296 terms from Misinformation, Media Literacy, and Critical Thinking in the Digital Age

# A B C D E F G H I K L M N P Q R S T V W Y

#

"Slightly"
conveys effect size, preventing misinterpretation of small effects as large - **"Heavy smokers"** — specifies the population in which the effect was found, preventing overgeneralization - **"Observational"** — correctly signals the study design and its limitations for causal inference - **"Res → Chapter 28 Quiz: Probabilistic Thinking and Uncertainty
(a)
P1: All peer-reviewed studies are reliable. - P2: This study is peer-reviewed. - C: This study is reliable. → Chapter 25: Exercises — Logic, Argumentation, and Fallacy Recognition
(Applied)
Misleading Axes: Recreating the Deception → Chapter 21 Exercises: Data Journalism and Statistical Literacy
(b)
P1: If a person spreads misinformation, they are irresponsible. - P2: This person is irresponsible. - C: This person spreads misinformation. → Chapter 25: Exercises — Logic, Argumentation, and Fallacy Recognition
(c)
P1: Either the vaccine is safe or there is a government cover-up. - P2: The vaccine is not safe. - C: There is a government cover-up. → Chapter 25: Exercises — Logic, Argumentation, and Fallacy Recognition
(Coding)
Creating Honest Visualizations → Chapter 21 Exercises: Data Journalism and Statistical Literacy
(d)
P1: If social media companies cared about truth, they would remove all false content. - P2: Social media companies do not remove all false content. - C: Social media companies do not care about truth. → Chapter 25: Exercises — Logic, Argumentation, and Fallacy Recognition
(e)
P1: Some conspiracy theories turn out to be true. - P2: This claim is a conspiracy theory. - C: This claim might be true. → Chapter 25: Exercises — Logic, Argumentation, and Fallacy Recognition
(Research)
GDP and Human Wellbeing → Chapter 21 Exercises: Data Journalism and Statistical Literacy
1. Privatization
assigning control of information spaces to private entities who then have incentives to maintain quality. Strength: private controllers have direct incentives to maintain quality if their audience values it. Weakness: private controllers may maximize engagement rather than accuracy; they may be unac → Chapter 41 Quiz
10. Key Research Findings
False news was 70% more likely to be retweeted than true news on Twitter, and spread farther, faster, and more broadly (Vosoughi, Roy, and Aral, 2018) - The difference is driven primarily by human sharing behavior, not bots - Consumption of misinformation is concentrated among a relatively small share of the population, particularly older, highly pa → Chapter 11 Key Takeaways: Taxonomy of Information Disorder
11. Methodological Challenges
**Operationalization**: Definition determines what gets measured; different definitions yield different prevalence estimates - **Selection bias in fact-checking**: Fact-checkers target prominent claims, not a random sample - **Platform access limitations**: Internal platform data required for rigoro → Chapter 11 Key Takeaways: Taxonomy of Information Disorder
13. Why Visual Propaganda Works
**Speed**: Images processed before conscious evaluation can intervene - **Indexicality**: Cultural assumption that photos show "what happened" — automatic credibility - **Non-propositional**: Creates impressions without explicit claims that can be rebutted - **Aesthetic experience**: Beauty and subl → Chapter 12 Key Takeaways: Propaganda Techniques
14. Classic Visual Techniques
Juxtaposition: Enemy imagery alongside disgust/danger imagery — no explicit claim needed - Selective cropping: Context removal changes meaning (visual false context) - Scale and perspective: Architecture and staging create visual rhetoric of power - Idealized typification: Depicting types rather tha → Chapter 12 Key Takeaways: Propaganda Techniques
US First Amendment significantly limits government regulation of false speech (*United States v. Alvarez*, 2012) - European frameworks (GDPR, EU Digital Services Act) provide more regulatory tools - Defamation law covers false statements harming individuals but not all disinformation - Privacy law c → Chapter 11 Key Takeaways: Taxonomy of Information Disorder
15. Platform Governance Implications
Platforms apply different mechanisms to different types: removal (fabricated content), labeling (disputed content), demotion (misleading content), account removal (coordinated inauthentic behavior), privacy enforcement (malinformation) - Misleading content (Type 2) is the most difficult to address: → Chapter 11 Key Takeaways: Taxonomy of Information Disorder
17. Plandemic (Case Study 1)
A single piece of health misinformation can contain elements of all three categories simultaneously - Sincere belief by a primary spokesperson does not preclude the overall product being structured as disinformation - Platform removal can trigger the "Streisand Effect" — increased interest due to pe → Chapter 11 Key Takeaways: Taxonomy of Information Disorder
18. Operation Secondary Infektion (Case Study 2)
State-sponsored disinformation can run for six-plus years before comprehensive identification - The signature technique — media laundering — exploits journalistic norms rather than audience credulity - Most content achieved low amplification, but strategic impact may exceed reach - Attributi → Chapter 11 Key Takeaways: Taxonomy of Information Disorder
18. Structural Complements to Individual Literacy
Platform algorithmic transparency requirements - Dark ad disclosure mandates for political advertising - Friction design (accuracy prompts before sharing) - Political advertising disclosure requirements - Researcher data access for accountability - Prebunking/inoculation programs at scale → Chapter 12 Key Takeaways: Propaganda Techniques
2. Regulation
external rules limiting uses of the epistemic commons that degrade its quality (truth-in-advertising laws, misinformation liability, platform accountability standards). Strength: external requirements can mandate quality without depending on market incentives. Weakness: regulation can suppress legit → Chapter 41 Quiz
21. Nazi Propaganda (Case Study 1)
Propaganda requires total information environment control to achieve totalitarian effects - Dehumanization is a necessary (if not sufficient) psychological prerequisite for mass atrocity - Spectacle and aesthetic experience are powerful propaganda vectors independent of propositional content - The " → Chapter 12 Key Takeaways: Propaganda Techniques
22. Cambridge Analytica (Case Study 2)
The data harvest violated privacy norms and platform policies for tens of millions of people - The claimed effectiveness of psychographic targeting is substantially overstated by the available evidence - CA represents classical propaganda techniques (bandwagon, card stacking, fear appeals) evolved i → Chapter 12 Key Takeaways: Propaganda Techniques
3. Cooperation through norms and institutions
developing shared epistemic norms and institutions maintained through social rather than governmental enforcement (equivalent to Ostrom's common-pool resource management). Strength: norm-based governance is flexible, resistant to political capture, and can operate without coercive enforcement. Weakn → Chapter 41 Quiz
4. Malinformation: True Content Used to Harm
The most counterintuitive category: factually accurate information can cause serious harm when deployed strategically - Examples: doxxing, strategic leaks of private communications, outing, historical weaponization - Creates tension between press freedom/transparency and privacy/dignity - Distinguis → Chapter 11 Key Takeaways: Taxonomy of Information Disorder
6. Key Properties of the Seven Techniques
**Not mutually exclusive**: Most propaganda combines multiple techniques simultaneously - **Not inherently false**: Card stacking and Plain Folks can use entirely accurate information - **Not inherently partisan**: The techniques are deployed across the ideological spectrum - **Awareness ≠ immunity* → Chapter 12 Key Takeaways: Propaganda Techniques
7. Types Map Differently to Categories
Satire/Parody (Type 1) typically produces misinformation (no harmful intent) - Fabricated Content (Type 4) created deliberately is disinformation; shared innocently becomes misinformation - False Context (Type 5) used to harm a real person shades into malinformation - Intent of the specific actor in → Chapter 11 Key Takeaways: Taxonomy of Information Disorder
9. Soviet Dezinformatsiya Techniques
Document forgery and black propaganda (falsely attributed) - Media laundering: plant stories in developing-world media, amplify to Western outlets - Front organizations: apparently independent groups secretly controlled by intelligence - Conspiracy narratives: muddying information environment even w → Chapter 12 Key Takeaways: Propaganda Techniques

A

A) Affective polarization roughly doubled
Iyengar and colleagues documented that the gap between how partisans feel about their own party versus the opposing party, measured through feeling thermometers, approximately doubled between 1994 and 2016. This represents a dramatic increase in inter-partisan animosity, not merely disagreement. → Chapter 30: Quiz — Democracy, Polarization, and the Misinformation Crisis
Adversarial red-teaming
proactively attempting to evade one's own detection systems to identify vulnerabilities - **Network-level analysis** that detects coordination regardless of individual account sophistication - **Infrastructure analysis** (IP ranges, hosting providers, payment methods) that detects operations at th → Chapter 24: Computational Propaganda and Bot Detection
aktivnyye meropriyatiya
"active measures." The term encompassed a wide range of operations conducted by the KGB's Service A (active measures) and later the International Information Department of the Communist Party Central Committee. Active measures included: → Chapter 31: State-Sponsored Disinformation and Information Warfare
Answer: (A)
**Precision** = True Positives / (True Positives + False Positives) = fraction of flagged items that are actually misinformation. At 95% precision: 5% of flagged items are false positives (legitimate speech incorrectly flagged). → Chapter 22 Quiz: Natural Language Processing for Misinformation Detection
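The definition can be checked directly in code; the counts below are hypothetical, chosen to match the 95% precision in the answer:

```python
def precision(true_positives, false_positives):
    """Fraction of flagged items that are actually misinformation."""
    return true_positives / (true_positives + false_positives)

# Hypothetical counts: of 1,000 flagged items, 950 are actually misinformation.
print(precision(950, 50))  # 0.95 -> the remaining 5% are legitimate speech, wrongly flagged
```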
apophenia
the perception of meaningful patterns in random or ambiguous data. → Chapter 3: How the Human Mind Processes Information
Architectural markers:
Building construction style (concrete block versus mud brick versus modern prefab) varies by region and era - Window and door shapes, balcony configurations, and roof styles vary regionally - Specific buildings may be identifiable from their distinctive features → Case Study 27-1: Geolocating a Conflict Photo — The Syria White Helmets Controversy
Attribution
You must give appropriate credit and indicate if changes were made - **NonCommercial** — You may not use the material for commercial purposes - **ShareAlike** — If you remix or transform the material, you must distribute your contributions under the same license → Misinformation, Media Literacy, and Critical Thinking in the Digital Age

B

B) Antonio Gramsci
Antonio Gramsci developed the concept of hegemony to describe how dominant groups maintain power through cultural consensus rather than direct coercion. Critical media literacy applies this concept to analyze how media naturalize particular social arrangements and make them appear inevitable or comm → Chapter 29: Quiz — Media Literacy Frameworks
B) d = 0.37 (small to moderate)
The Jeong et al. systematic review found an average effect size of approximately 0.37 across 51 media literacy intervention studies, representing a small to moderate effect. Knowledge gains were more robust than attitude or behavioral changes. → Chapter 29: Quiz — Media Literacy Frameworks
B) S.R. Ranganathan's Five Laws of Library Science
UNESCO explicitly modeled its Five Laws of MIL on Ranganathan's famous laws, which guided library science for decades. Ranganathan's laws emphasized access, efficiency, and the living nature of libraries; UNESCO's laws similarly emphasize access, universal citizenship, and the dynamic nature of MIL. → Chapter 29: Quiz — Media Literacy Frameworks
Bakshy, Messing, and Adamic (2015)
a study by Facebook researchers — found that Facebook's News Feed algorithm did reduce exposure to cross-cutting content, but that user choice (who people chose to friend and what they chose to click) was a larger driver of exposure restriction than the algorithm itself. → Chapter 8: Platform Algorithms and the Attention Economy
Base rate neglect
The failure to incorporate the prior probability of an event when evaluating new evidence; often leads to overestimation of the probability of rare events after positive test results. → Chapter 28: Probabilistic Thinking and Uncertainty
base rate problem
occurs because the false positive rate, applied to the large population of people who don't have the condition, generates as many false positives as true positives. → Chapter 28: Probabilistic Thinking and Uncertainty
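A toy population (all numbers hypothetical) makes the mechanism concrete: even a 1% false positive rate, applied to the large majority who don't have the condition, yields roughly as many false positives as true positives.

```python
population = 100_000
prevalence = 0.01           # 1% actually have the condition
false_positive_rate = 0.01  # the test wrongly flags 1% of healthy people

true_cases = population * prevalence                               # 1,000
# Simplifying assumption: the test catches every true case.
true_positives = true_cases
false_positives = (population - true_cases) * false_positive_rate  # 990

print(true_positives, false_positives)  # 1000.0 990.0 -> a nearly even split
```

So a positive result here is close to a coin flip, despite the test's impressive-sounding accuracy.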
Bayes' Theorem
fully derived and explained intuitively in Chapter 28 and Appendix A → Prerequisites
below the diagonal
specifically, it bows toward the lower-right. When this forecaster says 90%, they are actually correct only 70-75% of the time. Their stated confidence systematically exceeds their actual accuracy. The curve typically shows little deviation at low confidence levels (around 50-60%) but increasingly l → Chapter 28 Quiz: Probabilistic Thinking and Uncertainty
Best practices for uncertainty communication:
Pair verbal expressions with numerical estimates - Use natural frequencies rather than conditional probabilities - Report absolute risk alongside relative risk - Specify the confidence interval alongside the point estimate → Chapter 28 Key Takeaways: Probabilistic Thinking and Uncertainty
Brier score
A mathematical measure of forecast accuracy: (probability - outcome)²; lower is better; 0 is perfect. → Chapter 28: Probabilistic Thinking and Uncertainty
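The formula is small enough to sketch directly; the forecasts below are hypothetical:

```python
def brier_score(forecasts):
    """Mean squared gap between stated probability and outcome (1 or 0).
    Lower is better; 0 is a perfect forecast."""
    return sum((p - outcome) ** 2 for p, outcome in forecasts) / len(forecasts)

# A confident forecast scores near 0 when right and near 1 when wrong.
print(brier_score([(0.9, 1)]))  # ~0.01
print(brier_score([(0.9, 0)]))  # ~0.81
```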

C

C) 2015
ACRL adopted the new Framework in 2015, replacing the 2000 Standards. The shift from "standards" to "frames" reflected a move from a competency checklist model to a threshold concepts model emphasizing conceptual understanding and disciplinary context. → Chapter 29: Quiz — Media Literacy Frameworks
C) 2016
"Post-truth" was named Word of the Year for 2016 by Oxford Dictionaries, reflecting the prominence of concerns about factual standards in public discourse during that year's political events, including Brexit and the U.S. presidential election. → Chapter 30: Quiz — Democracy, Polarization, and the Misinformation Crisis
C) Checkology
Checkology is the News Literacy Project's online learning platform for middle and high school students. MediaWise is a similar initiative by the Poynter Institute; AllSides is a media bias rating service; iCivics is a civics education platform. → Chapter 29: Quiz — Media Literacy Frameworks
C) Five
DigComp organizes digital competence into five areas: Information and Data Literacy; Communication and Collaboration; Digital Content Creation; Safety; and Problem Solving. → Chapter 29: Quiz — Media Literacy Frameworks
C) Media as Social Institution
This is not part of the ACRL Framework. The six frames are: Authority Is Constructed and Contextual; Information Creation as a Process; Information Has Value; Research as Inquiry; Scholarship as Conversation; and Searching as Strategic Exploration. → Chapter 29: Quiz — Media Literacy Frameworks
C) NAMLE
The National Association for Media Literacy Education articulated this six-part definition in its Core Principles document. UNESCO's earlier definition used four parts (access, analyze, evaluate, create), and the addition of "reflect" and "act" is distinctive to NAMLE's framework. → Chapter 29: Quiz — Media Literacy Frameworks
Calculation:
P(COVID) = 0.003; P(no COVID) = 0.997; P(positive | COVID) = 0.97; P(positive | no COVID) = 0.005 → Case Study 28-1: Bayesian Reasoning and COVID-19 Testing — Understanding False Positives
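Plugging these numbers into Bayes' Theorem, a minimal sketch of the calculation:

```python
p_covid = 0.003               # prior: prevalence in the tested population
p_pos_given_covid = 0.97      # sensitivity
p_pos_given_no_covid = 0.005  # false positive rate

numerator = p_pos_given_covid * p_covid
evidence = numerator + p_pos_given_no_covid * (1 - p_covid)
p_covid_given_pos = numerator / evidence

print(round(p_covid_given_pos, 3))  # ~0.369: far from the 97% many people intuit
```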
calibration
the practice of assigning degrees of confidence to beliefs that correspond to how well-supported those beliefs actually are. A well-calibrated person who says they are "80% confident" in a claim is right approximately 80% of the time when they say that. Their expressed uncertainty tracks their actua → Chapter 28: Probabilistic Thinking and Uncertainty
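Calibration can be checked empirically by comparing stated confidence with the observed hit rate at each level; a minimal sketch over a hypothetical record of judgments:

```python
from collections import defaultdict

def hit_rates(judgments):
    """Map each stated confidence level to the fraction of those
    claims that turned out to be correct."""
    buckets = defaultdict(list)
    for confidence, correct in judgments:
        buckets[confidence].append(correct)
    return {c: sum(v) / len(v) for c, v in buckets.items()}

# Hypothetical: right 8 of 10 times when "80% confident" -> well calibrated there.
print(hit_rates([(0.8, True)] * 8 + [(0.8, False)] * 2))  # {0.8: 0.8}
```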
Chronolocation
The process of determining when a photograph or video was taken using shadow analysis, vegetation cues, and other temporal signals. → Chapter 27: Lateral Reading and Advanced Web Literacy
ClaimReview
A structured data schema allowing fact-checkers to mark up their findings so they appear in search engine results. → Chapter 27: Lateral Reading and Advanced Web Literacy
clickbait farms
content operations that produce large quantities of low-quality, often misleading, but not necessarily false content optimized for social media sharing. Clickbait farms exist across the political spectrum and across topical niches (health, celebrity gossip, personal finance, politics) and represent → Chapter 10: The Business Model of Outrage — Engagement Over Truth
Co-regulation
exemplified by the EU Code of Practice — combines platform self-governance with government framework-setting and monitoring. It can be more effective than pure self-regulation when the threat of harder regulation is credible, and more flexible than hard law, but remains vulnerable to industry captur → Chapter 33: Key Takeaways — Policy Responses to Misinformation: Global Perspectives
Complementary approaches the evidence suggests:
**Environmental design**: Changing information presentation (accuracy nudges before sharing, base rate visibility, frequency vs. probability formats) reduces bias impact without requiring analytical improvement. - **Specific technique training**: Teaching specific, evidence-based techniques (conside → Chapter 4 Quiz: Cognitive Biases and Heuristics That Make Us Vulnerable
Confirmation bias
the tendency to search for, interpret, favor, and recall information in a way that confirms one's preexisting beliefs — is perhaps the most extensively documented cognitive bias. Epistemologically, it violates norms of impartial inquiry: one should weigh evidence that speaks against one's hypothesis → Chapter 1: What Is Truth? Epistemological Foundations
conjunction fallacy
arises because the description of Linda is highly representative of a feminist, making "feminist" feel like it adds plausibility to the overall description rather than reducing its probability. The representativeness heuristic (Chapter 4) overrides proper probability reasoning. → Chapter 28: Probabilistic Thinking and Uncertainty
Correct Matching:
A → 3 (Rathergate: the 2004 CBS/Bush document controversy) - B → 1 (Network effects: value increases with participants) - C → 4 (Autoplay: automatic next video loading) - D → 2 (Private virality: spread through encrypted channels) - E → 5 (Narrative transportation: story absorption that suspends cri → Chapter 7 Quiz: The Rise of Digital and Social Media
correspondence standard
does this claim accurately describe facts in the world? — is the most practically applicable. → Chapter 1 Key Takeaways: What Is Truth? Epistemological Foundations
CPM
cost per mille, or cost per thousand impressions. Advertisers pay a CPM rate for every 1,000 times their advertisement is shown to users. CPM rates vary enormously depending on the audience (demographic characteristics, purchasing intent, engagement level), the platform, the content context, and the → Chapter 10: The Business Model of Outrage — Engagement Over Truth
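The arithmetic behind the definition is simple; the rate and impression count below are hypothetical:

```python
def ad_revenue(impressions, cpm):
    """Advertising revenue at a given CPM (price per 1,000 impressions)."""
    return impressions / 1_000 * cpm

# Hypothetical: 2.5 million impressions sold at a $4.00 CPM.
print(ad_revenue(2_500_000, 4.00))  # 10000.0
```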
Cultural relativism
the anthropological position that moral and social practices should be understood within their cultural context rather than judged by external standards — has legitimate methodological uses. But it is frequently misapplied as a justification for **moral relativism**: the view that moral claims have → Chapter 1: What Is Truth? Epistemological Foundations

D

D) Media economics
Masterman's four key concepts were: media languages, media representations, media institutions, and media audiences. While economic analysis of media is relevant to the "media institutions" concept, "media economics" was not listed as a separate category. → Chapter 29: Quiz — Media Literacy Frameworks
D) Passive reading
Hall's three positions were: the dominant/preferred reading (accepting the text's intended meaning), the negotiated reading (partially accepting while partially resisting), and the oppositional reading (rejecting the preferred meaning and substituting an alternative). Hall was explicitly opposed to → Chapter 29: Quiz — Media Literacy Frameworks
deliberative democracy
formal processes in which citizens discuss political issues under structured conditions — consistently finds more positive effects than incidental cross-cutting exposure. Deliberative mini-publics, citizen assemblies, and structured dialogue programs create the conditions (equal status, good faith e → Chapter 9: Filter Bubbles, Echo Chambers, and Algorithmic Curation
disinformation
false or misleading information that is deliberately created and spread to deceive. This distinguishes state-sponsored operations from the broader ecosystem of **misinformation** (false information spread without intent to deceive, as when someone shares an inaccurate article in good faith) and **ma → Chapter 31: State-Sponsored Disinformation and Information Warfare

E

elaboration likelihood
the motivation and ability to think carefully about a message. Motivation is reduced by low personal relevance, positive mood, and high information load. Ability is reduced by distraction, time pressure, and lack of background knowledge. → Chapter 5: The Social Psychology of Belief and Group Conformity
engagement
the set of behaviors (clicks, watches, likes, shares, comments, return visits) that indicate that a user's attention has been captured. Engagement is the proxy for the platform's core product: attention. → Chapter 8: Platform Algorithms and the Attention Economy
epistemic commons
the shared information environment on which all members of a community depend for accurate beliefs about the world — provides a moral framework for thinking about information sharing as more than an individual act. → Chapter 38: Building Personal Resilience Against Misinformation
Evidence set:
Official FBI crime statistics show violent crime rates declined from 2015 to 2022 in the United States overall. - Several major cities (including New York City, Los Angeles, and Chicago) saw violent crime rate increases in 2020-2021, followed by partial declines in 2022. - The FBI changed its crime → Chapter 19 Exercises: Fact-Checking Methods, Organizations, and Limitations
Exercise 1.2
*Classify each of the following as misinformation, disinformation, or malinformation, and explain your reasoning:* *(a) A parent shares a Facebook post claiming vaccines cause autism, having genuinely read and believed it.* *(b) A foreign intelligence agency creates fake health advisories to undermi → Appendix G: Answers to Selected Exercises
Exercise 1.5
*Wardle and Derakhshan argue that "information disorder" is a more precise term than "fake news." Construct an argument for this position, then construct the strongest counter-argument.* → Appendix G: Answers to Selected Exercises
Exercise 11.1
*Classify the following claims using the full taxonomy of false information types (fabricated content, manipulated content, imposter content, misleading content, false context, satire/parody). Explain each classification.* → Appendix G: Answers to Selected Exercises
Exercise 13.3
*Using the CONSPIRE framework, analyze the appeal of a specific contemporary conspiracy theory.* → Appendix G: Answers to Selected Exercises
Exercise 19.2
*Apply the SIFT method to the following claim: "A new study shows that drinking coffee reduces Alzheimer's risk by 65%."* → Appendix G: Answers to Selected Exercises
Exercise 21.1
Mean vs. Median: The Distortion of Averages → Chapter 21 Exercises: Data Journalism and Statistical Literacy
Exercise 21.10
Question Wording Effects: Designing Biased and Unbiased Questions → Chapter 21 Exercises: Data Journalism and Statistical Literacy
Exercise 21.11
Margin of Error and Significance → Chapter 21 Exercises: Data Journalism and Statistical Literacy
Exercise 21.14
Reading Study Abstracts → Chapter 21 Exercises: Data Journalism and Statistical Literacy
Exercise 21.15
Confounders and Study Design → Chapter 21 Exercises: Data Journalism and Statistical Literacy
Exercise 21.17
Unemployment Rate Definitions → Chapter 21 Exercises: Data Journalism and Statistical Literacy
Exercise 21.2
Absolute vs. Relative Risk: Drug Trial Analysis → Chapter 21 Exercises: Data Journalism and Statistical Literacy
Exercise 21.21
Evaluating Data Journalism → Chapter 21 Exercises: Data Journalism and Statistical Literacy
Exercise 21.22
Constructing a Misleading Statistic → Chapter 21 Exercises: Data Journalism and Statistical Literacy
Exercise 21.23
Historical Case: Tobacco Industry Statistics → Chapter 21 Exercises: Data Journalism and Statistical Literacy
Exercise 21.26
Economic Statistics Audit → Chapter 21 Exercises: Data Journalism and Statistical Literacy
Exercise 21.27
Polling Audit: 2020 US Presidential Election → Chapter 21 Exercises: Data Journalism and Statistical Literacy
Exercise 21.28
The Replication Crisis and Its Implications → Chapter 21 Exercises: Data Journalism and Statistical Literacy
Exercise 21.3
Base Rate Neglect: Airport Security Screening → Chapter 21 Exercises: Data Journalism and Statistical Literacy
Exercise 21.30
Capstone: Statistical Analysis of a Public Health Claim → Chapter 21 Exercises: Data Journalism and Statistical Literacy
Exercise 21.4
Sample Size and Precision → Chapter 21 Exercises: Data Journalism and Statistical Literacy
Exercise 21.5
Cherry-Picking Timeframes: Crime Statistics → Chapter 21 Exercises: Data Journalism and Statistical Literacy
Exercise 21.7
Correlation and Causation: Identifying Causal Structures → Chapter 21 Exercises: Data Journalism and Statistical Literacy
Exercise 21.8
The Replication Crisis: Understanding p-Values → Chapter 21 Exercises: Data Journalism and Statistical Literacy
Exercise 21.9
Evaluating Poll Quality → Chapter 21 Exercises: Data Journalism and Statistical Literacy
Exercise 22.1
*Write Python code to compute TF-IDF vectors for three short documents and identify the most distinctive term in each. Explain the output.* → Appendix G: Answers to Selected Exercises
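One way the exercise might be approached — a bare-bones TF-IDF (tf = count / document length, idf = log(N / documents containing the term)), with hypothetical documents; real answers may use a library vectorizer instead:

```python
import math
from collections import Counter

def tfidf(docs):
    """Per-document TF-IDF weights for whitespace-tokenized text."""
    tokenized = [d.lower().split() for d in docs]
    n = len(tokenized)
    df = Counter(term for doc in tokenized for term in set(doc))
    return [{t: (c / len(doc)) * math.log(n / df[t])
             for t, c in Counter(doc).items()}
            for doc in tokenized]

docs = ["coffee reduces alzheimer risk says the study",
        "the study was retracted",
        "the crime rate declined"]

for weights in tfidf(docs):
    # the most distinctive term carries the highest TF-IDF weight;
    # "the", present in every document, scores exactly zero
    print(max(weights, key=weights.get))
```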
Exercise 22.10
(Coding) SVM vs. Naive Bayes Comparison → Chapter 22 Exercises: Natural Language Processing for Misinformation Detection
Exercise 22.11
(Coding) Random Forest with Mixed Features → Chapter 22 Exercises: Natural Language Processing for Misinformation Detection
Exercise 22.12
(Conceptual) LIAR Dataset Analysis → Chapter 22 Exercises: Natural Language Processing for Misinformation Detection
Exercise 22.13
(Coding) Word2Vec Exploration → Chapter 22 Exercises: Natural Language Processing for Misinformation Detection
Exercise 22.14
(Coding) Document Embedding for Claim Matching → Chapter 22 Exercises: Natural Language Processing for Misinformation Detection
Exercise 22.15
(Conceptual) Bias in Word Embeddings → Chapter 22 Exercises: Natural Language Processing for Misinformation Detection
Exercise 22.16
(Coding) Fine-Tuning DistilBERT for Text Classification → Chapter 22 Exercises: Natural Language Processing for Misinformation Detection
Exercise 22.17
(Conceptual) Understanding Self-Attention → Chapter 22 Exercises: Natural Language Processing for Misinformation Detection
Exercise 22.18
(Coding) BERT Attention Visualization → Chapter 22 Exercises: Natural Language Processing for Misinformation Detection
Exercise 22.19
(Coding) Build a Simple Claim Matching System → Chapter 22 Exercises: Natural Language Processing for Misinformation Detection
Exercise 22.2
(Conceptual) The Human Oversight Continuum → Chapter 22 Exercises: Natural Language Processing for Misinformation Detection
Exercise 22.20
(Conceptual) Stance Detection and Its Limits → Chapter 22 Exercises: Natural Language Processing for Misinformation Detection
Exercise 22.21
(Conceptual) FEVER Benchmark Analysis → Chapter 22 Exercises: Natural Language Processing for Misinformation Detection
Exercise 22.22
(Coding) Adversarial Example Generation → Chapter 22 Exercises: Natural Language Processing for Misinformation Detection
Exercise 22.23
(Coding) Label Leakage Investigation → Chapter 22 Exercises: Natural Language Processing for Misinformation Detection
Exercise 22.24
(Conceptual) Dataset Bias Audit → Chapter 22 Exercises: Natural Language Processing for Misinformation Detection
Exercise 22.25
(Conceptual) False Positive / False Negative Ethics → Chapter 22 Exercises: Natural Language Processing for Misinformation Detection
Exercise 22.26
(Research) Audit a Real Platform's Automated Moderation → Chapter 22 Exercises: Natural Language Processing for Misinformation Detection
Exercise 22.27
(Coding) Model Card for a Fake News Classifier → Chapter 22 Exercises: Natural Language Processing for Misinformation Detection
Exercise 22.28
(Research) The FEVER Shared Task → Chapter 22 Exercises: Natural Language Processing for Misinformation Detection
Exercise 22.29
(Coding) End-to-End Misinformation Detection Pipeline → Chapter 22 Exercises: Natural Language Processing for Misinformation Detection
Exercise 22.3
*Evaluate the following classifier output using precision, recall, F1, and ROC AUC. Discuss what the metrics imply about the classifier's practical utility.* → Appendix G: Answers to Selected Exercises
Exercise 22.30
(Conceptual) Capstone: Design a Responsible Misinformation Detection System → Chapter 22 Exercises: Natural Language Processing for Misinformation Detection
Exercise 22.4
(Coding) Tokenization Comparison → Chapter 22 Exercises: Natural Language Processing for Misinformation Detection
Exercise 22.5
(Conceptual) Stopword Removal and Misinformation → Chapter 22 Exercises: Natural Language Processing for Misinformation Detection
Exercise 22.6
(Coding) TF-IDF Feature Analysis → Chapter 22 Exercises: Natural Language Processing for Misinformation Detection
Exercise 22.7
(Coding) Stylometric Feature Extraction → Chapter 22 Exercises: Natural Language Processing for Misinformation Detection
Exercise 22.8
(Conceptual) Feature Engineering Trade-offs → Chapter 22 Exercises: Natural Language Processing for Misinformation Detection
Exercise 22.9
(Coding) Naive Bayes Classifier for Fake News → Chapter 22 Exercises: Natural Language Processing for Misinformation Detection
Exercise 23.2
*A retweet network has the following properties: 1,000 nodes, 5,000 edges, and a modularity score Q = 0.72. Interpret these statistics and explain what they suggest about the information ecosystem represented.* → Appendix G: Answers to Selected Exercises
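The first two statistics unpack with simple arithmetic (treating the network as undirected, which is an assumption):

```python
n_nodes, n_edges, q = 1_000, 5_000, 0.72

avg_degree = 2 * n_edges / n_nodes                 # each node touches ~10 edges
density = 2 * n_edges / (n_nodes * (n_nodes - 1))  # ~1% of possible ties exist

# Modularity Q near 0 means no community structure; values above roughly 0.3
# are usually read as meaningful clustering, so Q = 0.72 suggests sharply
# separated retweet communities, consistent with echo-chamber dynamics.
print(avg_degree, round(density, 3))  # 10.0 0.01
```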
Exercise 25.1
*Identify the logical fallacy in each of the following arguments and explain why it fails.* → Appendix G: Answers to Selected Exercises
Exercise 28.1
*A misinformation detection classifier has 85% sensitivity and 90% specificity. In a corpus where 15% of articles are actually false, compute the probability that an article flagged as false is actually false. Then compute what the false-article base rate would need to be for this probability to rea → Appendix G: Answers to Selected Exercises
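The first computation follows the same Bayesian pattern as the chapter's testing examples; a minimal sketch:

```python
def p_false_given_flagged(base_rate, sensitivity, specificity):
    """P(article is actually false | classifier flags it), via Bayes' Theorem."""
    true_pos = sensitivity * base_rate
    false_pos = (1 - specificity) * (1 - base_rate)
    return true_pos / (true_pos + false_pos)

# Sensitivity 85%, specificity 90%, base rate of false articles 15%.
print(round(p_false_given_flagged(0.15, 0.85, 0.90), 2))  # 0.6: 40% of flags are wrong
```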
Exercise 3.1
*A study finds that people who scored higher on the Cognitive Reflection Test (CRT) were better at identifying false news headlines. Does this mean that System 2 thinking is the primary protection against misinformation? What alternative explanations exist?* → Appendix G: Answers to Selected Exercises
Exercise 30.2
*Assess the claim that social media misinformation was the primary cause of unexpected electoral outcomes in 2016. What does the empirical evidence support?* → Appendix G: Answers to Selected Exercises
Exercise 35.1
*Design an inoculation intervention targeting the "appeal to false authority" manipulation technique used in health misinformation. Include: the refutational preemption, the "microdose" of the technique, and a plan for evaluating effectiveness.* → Appendix G: Answers to Selected Exercises
Exercise 38.3
*Reflect on three specific cognitive biases that you believe most influence your own information processing. For each, describe a concrete behavioral strategy to counteract it.* → Appendix G: Answers to Selected Exercises
Exercise 4.3
*Design a study to test whether the illusory truth effect operates even when participants are told that the repeated statement has been fact-checked and found to be false.* → Appendix G: Answers to Selected Exercises
Exercise 41.2
*A researcher proposes to deploy a prebunking intervention to the entire user base of a social media platform without obtaining individual informed consent. They argue that (a) the intervention is low-risk, (b) obtaining consent would introduce selection bias and (c) public health benefit justifies → Appendix G: Answers to Selected Exercises
Exercise 8.2
*Explain the engagement-accuracy trade-off in platform recommendation algorithms. Why do platforms have economic incentives that may conflict with information quality?* → Appendix G: Answers to Selected Exercises
Expected value
The probability-weighted sum of possible outcomes of an action; the central concept in rational decision-making under uncertainty. → Chapter 28: Probabilistic Thinking and Uncertainty

F

Fabricated or heavily embellished reporting
events that did not happen, quotes that were invented - **Manufactured stories** — reporters sent overseas with instructions to find or create newsworthy events - **Crusading campaigns** designed to position the paper as a moral champion - **Graphic imagery** of violence, crime, and suffering → Chapter 6: The Evolution of Traditional Media
Fact-based inoculation is preferable when:
The specific false claims are known in advance (e.g., predictable myths about a new vaccine or a recurring seasonal health topic). - The target audience is likely to encounter primarily one form of the false claim. - The topic is specific and bounded, making it possible to address the relevant claim → Chapter 35: Prebunking and Inoculation Theory
FALSE
with important qualifications. → Chapter 4 Quiz: Cognitive Biases and Heuristics That Make Us Vulnerable
False context
a genuine photograph (the content is real and unaltered) presented with false contextual claims about when, where, and under what circumstances it was taken. → Chapter 11 Quiz: Taxonomy of Information Disorder
false flag operations
operations deliberately designed to appear to originate from a different actor. → Chapter 31: State-Sponsored Disinformation and Information Warfare
food frequency questionnaires (FFQs)
asking participants to recall and estimate their average consumption of dozens or hundreds of food items over the past year. This measurement method is afflicted by: → Case Study 26.1: How Press Releases Became Fake Science News — The Case of Nutrition Research
For analytical essays (10.21, 10.23):
Quality of argument and engagement with chapter concepts: 40% - Use of evidence and specific examples: 30% - Consideration of counter-arguments: 20% - Clarity of writing: 10% → Chapter 10 Exercises: The Business Model of Outrage — Engagement Over Truth
For analytical exercises (9.1, 9.3, 9.4):
Accuracy of conceptual definitions: 30% - Quality of reasoning and argument: 40% - Use of evidence from the chapter: 20% - Clarity of expression: 10% → Chapter 9 Exercises: Filter Bubbles, Echo Chambers, and Algorithmic Curation
For case analyses (10.5, 10.9, 10.13):
Accuracy of economic analysis: 30% - Depth of case research: 30% - Connection to chapter concepts: 25% - Clarity and organization: 15% → Chapter 10 Exercises: The Business Model of Outrage — Engagement Over Truth
For civic and political engagement:
Recognize that state-sponsored operations primarily target existing divisions and grievances — the emotional intensity you feel around certain political issues may have been deliberately amplified. - Understand that even genuine grievances can be exploited by foreign influence operations, without th → Chapter 31 Key Takeaways: State-Sponsored Disinformation and Information Warfare
For civic participation:
Recognize that voter suppression messaging specifically targets your participation — the claim that your vote doesn't matter or that participation is futile is itself a political intervention. - Know your jurisdiction's official election information resources — election commission websites, official → Chapter 32 Key Takeaways: Election Interference — Case Studies and Countermeasures
For design exercises (9.13, 9.14, 9.16):
Creativity and feasibility of design: 30% - Grounding in evidence and theory: 30% - Consideration of trade-offs and unintended consequences: 25% - Clarity of presentation: 15% → Chapter 9 Exercises: Filter Bubbles, Echo Chambers, and Algorithmic Curation
For empirical exercises (9.5, 9.6, 9.7):
Understanding of research methods: 35% - Critical evaluation of evidence: 35% - Identification of limitations: 20% - Clarity and organization: 10% → Chapter 9 Exercises: Filter Bubbles, Echo Chambers, and Algorithmic Curation
For evaluating election claims:
Apply the source question: Who is claiming election irregularities? Do they have specific evidence, or general suspicion? - Check whether specific claims have been litigated and adjudicated — courts represent the highest evidentiary standard for election fraud claims. - Distinguish between general " → Chapter 32 Key Takeaways: Election Interference — Case Studies and Countermeasures
For evaluating media and information:
Apply the source origin question to all information: who created this, for what audience, with what likely objectives? - Recognize "firehose" patterns: when confronted with contradictory claims in rapid succession, this may signal a confusion strategy rather than genuine uncertainty. - Understand th → Chapter 31 Key Takeaways: State-Sponsored Disinformation and Information Warfare
For policy analysis:
Evaluate counter-disinformation proposals against both their claimed effectiveness and their potential for government overreach. - Recognize that platform-based enforcement actions, while necessary, are insufficient responses to the demand-side conditions that make influence operations effective. - → Chapter 31 Key Takeaways: State-Sponsored Disinformation and Information Warfare
For policy proposals (10.12, 10.18):
Problem identification and framing: 25% - Policy mechanism specificity and feasibility: 35% - Analysis of trade-offs and unintended consequences: 25% - Writing quality and professional format: 15% → Chapter 10 Exercises: The Business Model of Outrage — Engagement Over Truth
For the media/platform context (Part II):
Tim Wu, *The Attention Merchants* — the history of attention capture - Or any recent journalism about social media platforms → Prerequisites
For the philosophical foundations (Part I):
Plato's *Meno* (short, freely available) — on the nature of knowledge - Carl Sagan, *The Demon-Haunted World* (1995) — on scientific thinking → Prerequisites
For the political context (Part VI):
Steven Levitsky & Daniel Ziblatt, *How Democracies Die* — democratic backsliding context → Prerequisites
For the psychological foundations (Chapters 3–5):
Daniel Kahneman, *Thinking, Fast and Slow* — particularly Chapters 1, 11, and 12 → Prerequisites
Forecaster A:
(0.9 - 1)² = 0.01 - (0.9 - 1)² = 0.01 - (0.9 - 0)² = 0.81 - (0.6 - 1)² = 0.16 - (0.6 - 0)² = 0.36 Sum = 1.35; Average Brier score = **0.27** → Chapter 28 Quiz: Probabilistic Thinking and Uncertainty
Forecaster B:
(0.7 - 1)² = 0.09 - (0.7 - 1)² = 0.09 - (0.3 - 0)² = 0.09 - (0.65 - 1)² = 0.1225 - (0.35 - 0)² = 0.1225 Sum = 0.515; Average Brier score = **0.103** → Chapter 28 Quiz: Probabilistic Thinking and Uncertainty
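Both averages follow from the Brier score formula, the mean of (forecast − outcome)²; a minimal sketch using the forecasts and outcomes implied by the terms shown above:

```python
def brier(forecasts, outcomes):
    """Mean squared error between forecast probabilities and 0/1 outcomes."""
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

outcomes     = [1, 1, 0, 1, 0]  # the same five events for both forecasters
forecaster_a = [0.9, 0.9, 0.9, 0.6, 0.6]
forecaster_b = [0.7, 0.7, 0.3, 0.65, 0.35]

print(round(brier(forecaster_a, outcomes), 3))  # 0.27  (worse: overconfident)
print(round(brier(forecaster_b, outcomes), 3))  # 0.103 (better: lower is better)
```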

G

Geolocation
The process of determining where a photograph or video was taken using visual features, maps, and geographic analysis. → Chapter 27: Lateral Reading and Advanced Web Literacy
Google Images
broadest coverage; use "Tools > Time" filter to identify earliest indexed appearance 2. **TinEye** — best for finding earliest exact copy; sort by "Oldest" 3. **Yandex Images** — catches matches that Google misses, especially faces and Eastern European/Russian content → Chapter 27 Key Takeaways: Lateral Reading and Advanced Web Literacy
Group 2: Professional historians
PhDs with faculty positions in history departments, with extensive experience evaluating historical sources as part of their academic work. → Case Study 20.1: The Stanford History Education Group's SHEG Study — How Experts Evaluate Sources vs. How Students Do
Group 3: Professional fact-checkers
journalists employed at major fact-checking organizations (PolitiFact, FactCheck.org, others) as their primary job function. → Case Study 20.1: The Stanford History Education Group's SHEG Study — How Experts Evaluate Sources vs. How Students Do

H

Habermas's public sphere ideal
inclusive, rationally organized, publicly oriented discourse — provides a normative standard against which contemporary digital information environments can be measured. On virtually every dimension, contemporary platforms fall short of this ideal, though they also enable forms of participation the → Chapter 30: Key Takeaways — Democracy, Polarization, and the Misinformation Crisis
Hedgehogs
experts who explained everything through one big idea or theoretical framework, and who gave confident, definitive predictions. These experts were frequently sought by media precisely because their certainty made for compelling content. They were systematically less accurate. - **Foxes** — experts w → Case Study 28-2: Superforecasting and Political Uncertainty — What Tetlock's Research Teaches About Expert Prediction
homophily
the well-documented social tendency for people to associate with others who share their demographic characteristics, political views, and cultural tastes. Homophily operates at the level of relationship formation: we are more likely to befriend, follow, and listen to people like ourselves. This crea → Chapter 9: Filter Bubbles, Echo Chambers, and Algorithmic Curation

I

ideological radicalization
users are progressively recommended more extreme versions of content they have already engaged with, regardless of the specific political direction. → Chapter 9: Filter Bubbles, Echo Chambers, and Algorithmic Curation
implicit premises
premises that the argument requires but does not state. → Chapter 25: Logic, Argumentation, and Fallacy Recognition
In 1,000 employees tested:
50 have COVID: 48.5 test positive (true positives), 1.5 test negative - 950 don't have COVID: 4.75 test positive (false positives), 945.25 test negative - Total positive tests: 53.25 - True positives: 48.5/53.25 ≈ 91.1% - False positives: 4.75 (less than 5 people wrongly excluded) → Case Study 28-1: Bayesian Reasoning and COVID-19 Testing — Understanding False Positives
In 1,000 symptomatic patients tested:
300 have COVID: 225 test positive (true positives), 75 test negative (false negatives — missed!) - 700 don't have COVID: 14 test positive (false positives), 686 test negative - Total positive tests: 239 - True positives among positive tests: 225/239 ≈ 94.1% → Case Study 28-1: Bayesian Reasoning and COVID-19 Testing — Understanding False Positives
In 10,000 people tested:
30 have COVID: 29.1 test positive (true positives), 0.9 test negative - 9,970 don't have COVID: 49.9 test positive (false positives), 9,920.1 test negative - Total positive tests: 79 - True positives among positive tests: 29.1/79 ≈ 36.9% → Case Study 28-1: Bayesian Reasoning and COVID-19 Testing — Understanding False Positives
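All three scenarios apply the same positive-predictive-value formula; a minimal sketch (sensitivity and specificity are back-calculated from the counts above: roughly 97% / 99.5% for the two screening scenarios and 75% / 98% for the symptomatic-patient scenario):

```python
def ppv(prior, sensitivity, specificity):
    """P(infected | positive test), by Bayes' theorem."""
    tp = prior * sensitivity                # infected and correctly positive
    fp = (1 - prior) * (1 - specificity)    # uninfected but falsely positive
    return tp / (tp + fp)

# Workplace screening: 5% prevalence
print(round(ppv(0.05, 0.97, 0.995), 3))   # 0.911
# Symptomatic patients: 30% prevalence, less accurate test
print(round(ppv(0.30, 0.75, 0.98), 3))    # 0.941
# Mass screening: 0.3% prevalence, same accurate test, yet PPV collapses
print(round(ppv(0.003, 0.97, 0.995), 3))  # 0.369
```

The comparison makes the base-rate effect explicit: identical test accuracy yields wildly different probabilities that a positive result is real.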
inferential indicator
a linguistic cue that signals the conclusion follows from what precedes it. Common conclusion indicators include: *therefore, thus, hence, consequently, it follows that, so, which shows that, which means that, which implies that*. → Chapter 25: Logic, Argumentation, and Fallacy Recognition
information asymmetry
where one party to a transaction has information the other lacks — is foundational to understanding why financial misinformation is so damaging. George Akerlof's Nobel Prize-winning work on "lemons" (low-quality goods that look identical to high-quality goods from the outside) demonstrated that seve → Chapter 17: Financial Misinformation and Market Manipulation
Information disorder
Umbrella term for the full range of problematic information phenomena - **Misinformation** — False content spread without harmful intent - **Disinformation** — False content created and spread with harmful intent - **Malinformation** — True content deployed to cause harm - **Dezinformatsiya** — Sovi → Chapter 11 Key Takeaways: Taxonomy of Information Disorder
Infrastructure markers:
Electricity pole configurations (wooden pole vs. concrete, single arm vs. multiple) - Street width and paving type - Vehicle types and license plates (Syrian license plates have distinctive regional codes) - Telephone infrastructure visible in the background → Case Study 27-1: Geolocating a Conflict Photo — The Syria White Helmets Controversy
inoculation theory
teaching people to recognize manipulation tactics in general, so that they are resistant to specific applications of those tactics when they encounter them. → Chapter 31: State-Sponsored Disinformation and Information Warfare
Intermediate outcomes:
Performance on source evaluation tasks using real (not constructed) content - Observed information-seeking behavior in simulated or naturalistic contexts - Social media behavior (sharing, liking, commenting on accurate vs. inaccurate content) → Chapter 36: Education-Based Interventions and Media Literacy Programs
InVID/WeVerify
A browser extension tool for video verification, supporting keyframe extraction, metadata analysis, and geolocation. → Chapter 27: Lateral Reading and Advanced Web Literacy

K

Key findings:
97% of original studies had reported significant results (p < 0.05) - Only 36–39% of replication studies achieved significance at p < 0.05 in the same direction - The average effect size in replications was roughly half the average effect size in original studies - Cognitive psychology studies repli → Case Study 21-2: The Replication Crisis in Social Psychology — What It Means for Misinformation Research
Key principles:
Active, hands-on exercises are more effective than lectures - Distributed practice (regular brief exercises) builds habits better than occasional intensive sessions - Retrieval practice (recalling and applying techniques) builds more durable skills than re-study - Age-appropriate scaffolding: younge → Chapter 27 Key Takeaways: Lateral Reading and Advanced Web Literacy
Key rules to internalize:
P(not A) = 1 - P(A) — the complement rule - P(A or B) = P(A) + P(B) - P(A and B) — the addition rule - P(A and B) = P(A) × P(B|A) — the multiplication rule - P(B|A) = P(A and B) / P(A) — the definition of conditional probability → Chapter 28 Key Takeaways: Probabilistic Thinking and Uncertainty
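These rules can be verified exhaustively on a small sample space; a minimal sketch using two fair dice (the events A and B are illustrative choices):

```python
from fractions import Fraction
from itertools import product

# Sample space: all 36 equally likely rolls of two fair dice.
space = list(product(range(1, 7), repeat=2))

def p(event):
    """Exact probability of an event, given as a predicate over outcomes."""
    return Fraction(sum(1 for o in space if event(o)), len(space))

def A(o): return o[0] % 2 == 0    # first die is even
def B(o): return o[0] + o[1] > 7  # the two dice sum to more than 7

# Complement rule: P(not A) = 1 - P(A)
assert p(lambda o: not A(o)) == 1 - p(A)
# Addition rule: P(A or B) = P(A) + P(B) - P(A and B)
assert p(lambda o: A(o) or B(o)) == p(A) + p(B) - p(lambda o: A(o) and B(o))
# Multiplication rule and conditional probability: P(A and B) = P(A) * P(B|A)
p_b_given_a = p(lambda o: A(o) and B(o)) / p(A)
assert p(lambda o: A(o) and B(o)) == p(A) * p_b_given_a
```

Exact fractions avoid the floating-point noise that would otherwise make the equality checks fragile.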
Key signals of potential inauthenticity:
Very recent account creation (especially during a news event) - Posting frequency far exceeding human capacity - High follower count with very low engagement - Exclusively single-topic posting - Synchronized activity with other accounts posting identical content - Profile photo that reverse image se → Chapter 27 Key Takeaways: Lateral Reading and Advanced Web Literacy

L

lateral reading
accumulating evidence from multiple sources about a source itself. → Appendix A: Mathematical Foundations
Legal standards
required for criminal prosecution or civil liability — are the most demanding, requiring evidence admissible in court and proof beyond reasonable doubt (in criminal proceedings). The Mueller indictments of IRA and GRU operatives were legally sound but are effectively unenforceable since the indicted → Chapter 31: State-Sponsored Disinformation and Information Warfare
Likelihood ratio
The ratio P(E|hypothesis true) / P(E|hypothesis false); measures how much evidence E increases or decreases the odds of a hypothesis. → Chapter 28: Probabilistic Thinking and Uncertainty
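In odds form, Bayes' theorem reads: posterior odds = prior odds × likelihood ratio. A minimal sketch with illustrative numbers:

```python
def update(prior, likelihood_ratio):
    """Posterior probability after one piece of evidence, via the odds form."""
    prior_odds = prior / (1 - prior)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1 + posterior_odds)

# Evidence nine times more likely if the hypothesis is true (LR = 9)
# lifts a 10% prior exactly to 50%: prior odds of 1:9 become 1:1.
print(round(update(0.10, 9), 3))  # 0.5
```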
Limitations of state criminal law:
Prosecution requires identifying an often anonymous perpetrator - Cross-border production and distribution complicate jurisdiction - Prosecution resources are limited and prosecutors prioritize other crimes - Criminal prosecution occurs after the harm — it cannot prevent distribution - Sentences for → Case Study 18-2: Non-Consensual Intimate Deepfakes — Scale, Harm, and Legal Responses
Logic-based inoculation is preferable when:
The misinformation landscape is varied and rapidly evolving. - A diverse audience will encounter misinformation in different forms and across different topics. - The goal is long-term resilience rather than protection against a specific current threat. - Scalable delivery is a priority (e.g., a soci → Chapter 35: Prebunking and Inoculation Theory

M

malinformation
true information disclosed with the intent or effect of causing harm — was introduced in the information disorder literature by Claire Wardle and Hossein Derakhshan in 2017. Malinformation sits at the opposite end of the truth spectrum from misinformation: it is accurate, but its disclosure serves h → Case Study 2: Whistleblowing as Epistemic Justice
Manufactured uncertainty
The deliberate creation of doubt about scientific consensus by interested parties, used historically by tobacco and fossil fuel industries. → Chapter 28: Probabilistic Thinking and Uncertainty
Miranda Fricker's epistemic injustice framework
testimonial injustice (credibility deficits based on identity) and hermeneutical injustice (conceptual gaps that make experience inarticulable) — provides tools for understanding whose voices are discounted in democratic information environments, and why some communities' skepticism of institutions → Chapter 30: Key Takeaways — Democracy, Polarization, and the Misinformation Crisis
Misinformation exploitation:
Relative risk inflation (50% relative risk reduction from a tiny base = tiny absolute effect) - Cherry-picked denominators (numerators without relevant base rate comparators) - Misused raw counts (larger vaccinated population produces more breakthrough cases even if the rate is far lower) → Chapter 28 Key Takeaways: Probabilistic Thinking and Uncertainty
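The first pattern above is easy to make concrete; a sketch with hypothetical risk numbers:

```python
# Hypothetical illustration: a "50% relative risk reduction" from a tiny base.
baseline_risk = 0.002  # 2 in 1,000 experience the outcome without treatment
treated_risk  = 0.001  # 1 in 1,000 experience it with treatment

relative_reduction = (baseline_risk - treated_risk) / baseline_risk  # 0.5, the headline "50%!"
absolute_reduction = baseline_risk - treated_risk                    # 0.001, one person per 1,000
number_needed_to_treat = 1 / absolute_reduction                      # 1,000 treated per event avoided

print(f"{relative_reduction:.0%} relative, {absolute_reduction:.3f} absolute, NNT = {number_needed_to_treat:.0f}")
```

The same relative figure sounds dramatic while the absolute effect, the quantity that matters for an individual decision, stays tiny.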
moral contagion
the tendency for morally loaded emotional language to spread disproportionately rapidly in social networks. → Chapter 10: The Business Model of Outrage — Engagement Over Truth
Multiplication rule (AND):
If A and B are independent (occurrence of one doesn't affect the other): P(A and B) = P(A) × P(B) - In general: P(A and B) = P(A) × P(B|A), where P(B|A) is the conditional probability of B given A → Chapter 28: Probabilistic Thinking and Uncertainty
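A minimal sketch contrasting the independent and dependent cases (coin flips vs. drawing aces without replacement, both standard textbook examples):

```python
from fractions import Fraction

# Independent events: two coin flips both landing heads.
p_two_heads = Fraction(1, 2) * Fraction(1, 2)  # P(A) * P(B) = 1/4

# Dependent events: drawing two aces without replacement needs P(B|A),
# because the first draw changes the composition of the deck.
p_two_aces = Fraction(4, 52) * Fraction(3, 51)  # P(A) * P(B|A) = 1/221

# Wrongly treating the draws as independent overstates the probability:
wrong = Fraction(4, 52) * Fraction(4, 52)  # 1/169
assert p_two_aces < wrong
print(p_two_aces)  # 1/221
```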

N

Narrative laundering
also called the "fringe to mainstream pipeline" — is the process by which narratives that originate in state-sponsored or fringe-extremist sources acquire mainstream credibility through a series of intermediate amplification steps. The typical pathway: → Chapter 31: State-Sponsored Disinformation and Information Warfare
Native advertising
paid content designed to resemble the editorial content of the publication in which it appears — has existed in print form since at least the 19th century. The "advertorial" was a standard feature of 20th-century magazines: a paid advertisement formatted to look like an editorial article, typically → Chapter 10: The Business Model of Outrage — Engagement Over Truth
network effects
the increase in value of a network as more people join — explains why social media platforms grew so rapidly and consolidated so dramatically. A social network where all your friends are present is more valuable than one where only some are, which creates powerful incentives to join the dominant pla → Chapter 7: The Rise of Digital and Social Media
New additions beyond traditional literacy:
**Synthetic media recognition**: Knowing specific artifacts to look for in AI-generated images (unusual hands, background inconsistencies, text errors) and video (unnatural blinking, temporal artifacts) — a moving target as AI improves. - **Provenance checking**: Knowing how to check content credent → Chapter 39 Quiz: AI, Generative Models, and the Future of Synthetic Media
Nyhan et al. (2019)
including Brendan Nyhan himself, one of the original backfire effect researchers — published a study in *Science* examining the effects of fact-checking labels on Facebook during the 2016 election. Their finding: fact-check labels significantly reduced belief in false content and increased accurate → Case Study 4.1: The Backfire Effect — Does Correcting Misinformation Always Help?

P

parasocial relationships
first theorized by sociologists Horton and Wohl in 1956 — describes the one-sided relationships that audiences develop with media figures. Television viewers who watched Johnny Carson for decades felt they knew him personally, despite having never met him. This feeling of personal relationship, even → Chapter 7: The Rise of Digital and Social Media
Part I (Chapters 1–5): Foundations
Why do we believe what we believe? What makes a source trustworthy? How do cognitive biases and social dynamics make us vulnerable to misinformation? Start here if you're new to these questions. → How to Use This Book
Part II (Chapters 6–10): The Information Ecosystem
How did we get here? From the penny press to the attention economy, this part traces the structural conditions that make misinformation so pervasive today. → How to Use This Book
Part III (Chapters 11–18): Types and Mechanisms
What kinds of misinformation exist, and how does each work? This part is a field guide to the specific forms misinformation takes, from propaganda to deepfakes. → How to Use This Book
Part IV (Chapters 19–24): Detection and Analysis
How do we identify misinformation? Professional fact-checking, source evaluation, data literacy, and three Python-intensive chapters on computational detection methods. → How to Use This Book
Part V (Chapters 25–29): Critical Thinking Skills
The cognitive toolkit for resistance: logic, scientific thinking, probabilistic reasoning, lateral reading, and established media literacy frameworks. → How to Use This Book
Part VI (Chapters 30–33): Political Dimensions
Democracy, polarization, state-sponsored disinformation, election interference, and global policy responses. → How to Use This Book
Part VII (Chapters 34–38): Countermeasures
Platform content moderation, inoculation theory, education-based interventions, regulatory approaches, and building personal resilience. → How to Use This Book
Part VIII (Chapters 39–41): Advanced Topics
Generative AI, global perspectives, and the ethics of truth. Graduate-level extensions of the core material. → How to Use This Book
Phase 1: Establishing distrust (minutes 0-5)
Attack the credibility of mainstream medicine and government health agencies - Invoke financial conflicts of interest - Appeal to personal autonomy and parental authority → Case Study 25.2: The Gish Gallop in Practice — Analyzing a Misinformation Presentation
Phase 2: The Gallop itself (minutes 5-25)
Rapid-fire presentation of 30-50 specific claims about vaccine ingredients, adverse events, studies, statistics, and anecdotes - Claims shift rapidly between domains (chemistry, history, statistics, law, anecdote) - Each claim is presented with apparent confidence and specificity → Case Study 25.2: The Gish Gallop in Practice — Analyzing a Misinformation Presentation
Phase 3: The alternative (minutes 25-30)
Brief, positive presentation of an alternative worldview (natural immunity, holistic health) - Emotional appeal to parental love and protective instinct → Case Study 25.2: The Gish Gallop in Practice — Analyzing a Misinformation Presentation
Phase 4: Call to action (final minutes)
Encourage research ("do your own research") - Provide alternative information sources - Community solidarity ("you're not alone") → Case Study 25.2: The Gish Gallop in Practice — Analyzing a Misinformation Presentation
Platform voluntary commitments
including the EU Code of Practice on Disinformation, third-party fact-checking programs, and labeling systems — represent the primary approach to misinformation governance in the United States and an important complement to hard law in Europe. The "Facebook Papers" documents revealed the gap between → Chapter 33: Key Takeaways — Policy Responses to Misinformation: Global Perspectives
Policy standards
the standard appropriate for government action in response to foreign interference — are contested. How much confidence is needed to justify sanctions, diplomatic expulsions, or counter-operations? Different actors answer this question differently, and the answer has significant strategic implicatio → Chapter 31: State-Sponsored Disinformation and Information Warfare
Populism's thin-centered ideology
dividing society into "the pure people" and "the corrupt elite" — creates structural anti-expert and anti-media sentiment that makes populist audiences more receptive to misinformation that positions itself against institutional authority. This is not a property of any specific partisan group but a → Chapter 30: Key Takeaways — Democracy, Polarization, and the Misinformation Crisis
Posterior probability
The probability assigned to a hypothesis after incorporating new evidence. → Chapter 28: Probabilistic Thinking and Uncertainty
Prebunking / inoculation
explaining manipulation techniques *before* exposure - **Motivational interviewing** — non-confrontational questioning of the person's own reasoning - Correcting false social norm perceptions → Chapter 1 Key Takeaways: What Is Truth? Epistemological Foundations
Precautionary principle
The principle that uncertain risks of serious harm should be given priority even in the absence of certainty. → Chapter 28: Probabilistic Thinking and Uncertainty
Preprints
manuscripts posted to public servers like arXiv, bioRxiv, or medRxiv before peer review — played an enormous role in COVID-19 research communication, where the speed of peer review was genuinely too slow for a rapidly evolving public health crisis. But preprints that were widely covered in media bef → Chapter 21: Data Journalism and Statistical Literacy
Prior probability
The probability assigned to a hypothesis before new evidence is considered. → Chapter 28: Probabilistic Thinking and Uncertainty
Processing fluency
the subjective ease with which cognitive operations are performed — functions as a metacognitive signal that the brain uses as a proxy for truth and familiarity. → Chapter 3 Key Takeaways: How the Human Mind Processes Information
Prosecutor's fallacy
Confusing P(evidence|innocent) with P(innocent|evidence); assuming that low probability of evidence given innocence implies high probability of guilt. → Chapter 28: Probabilistic Thinking and Uncertainty
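The gap between the two conditional probabilities can be made concrete with hypothetical numbers: assume a 1-in-1,000 chance of an incidental forensic match, a pool of 10,000 innocent people, and one true perpetrator.

```python
# P(match | innocent): tiny, which tempts the fallacious leap to "guilty".
p_match_given_innocent = 1 / 1000

innocent_pool = 10_000
# Expected number of innocent people who match by chance alone:
expected_innocent_matches = innocent_pool * p_match_given_innocent  # 10

# The true perpetrator also matches, so among all matching individuals:
p_guilty_given_match = 1 / (1 + expected_innocent_matches)  # ~0.091, not 0.999
print(round(p_guilty_given_match, 3))  # 0.091
```

A 0.1% match probability given innocence coexists with only a 9% probability of guilt given a match, once the base rate is accounted for.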
Publication bias
the tendency of journals to publish positive, significant results while rejecting null results — meant that the published literature was a selected sample biased toward false positives. If ten research teams independently study a hypothesis, and by chance one finds p < 0.05 while nine find null resu → Case Study 21-2: The Replication Crisis in Social Psychology — What It Means for Misinformation Research

Q

Q10: (B)
Abductive reasoning identifies the hypothesis that best explains available evidence; it yields probable rather than necessary conclusions. Deductive reasoning guarantees conclusions given true premises. Abduction is foundational to scientific hypothesis formation. → Chapter 25: Quiz — Logic, Argumentation, and Fallacy Recognition
Q11: (B)
Appeal to ignorance (argumentum ad ignorantiam): arguing that because something cannot be disproved, it might be true (or is true). The inability to prove a universal negative does not validate a positive claim. → Chapter 25: Quiz — Logic, Argumentation, and Fallacy Recognition
Q12: (B)
Cherry picking is most accurately described as the suppressed evidence fallacy: presenting a selective portion of the evidence that supports one's conclusion while omitting disconfirming evidence. While it can involve misrepresentation (like straw man), the defining feature is selective evidence pre → Chapter 25: Quiz — Logic, Argumentation, and Fallacy Recognition
Q12: (C)
A Cochrane systematic review comprehensively and systematically synthesizes all available RCT evidence using pre-specified inclusion criteria and rigorous methodology. It is generally the most reliable guide to evidence on medical interventions. Individual expert opinion, media coverage, and popular → Chapter 26: Quiz — Scientific Thinking and Evidence Evaluation
Q13: (B)
The winner's curse describes how published significant results from underpowered studies tend to overestimate true effect sizes. When a small study produces a significant result by chance, the observed effect must be large enough to exceed the significance threshold despite the large sampling variab → Chapter 26: Quiz — Scientific Thinking and Evidence Evaluation
Q13: (C)
Modus ponens: If P then Q; P; therefore Q. Option A is modus tollens; option B is disjunctive syllogism; option D is hypothetical syllogism. → Chapter 25: Quiz — Logic, Argumentation, and Fallacy Recognition
Q14: (A)
Appeal to tradition: arguing that something is correct or beneficial because it has been practiced for a long time. Longevity of practice does not establish efficacy — many harmful practices persisted for centuries. → Chapter 25: Quiz — Logic, Argumentation, and Fallacy Recognition
Q14: (C)
While all Bradford Hill criteria are useful, temporality is the only strictly necessary condition for causation — exposure must precede outcome. Without establishing that coffee consumption preceded rather than followed liver disease protection, causation cannot be inferred. Biological mechanism (pl → Chapter 26: Quiz — Scientific Thinking and Evidence Evaluation
Q15: FALSE
P-values are strongly influenced by sample size. With a very large sample, even a tiny (practically insignificant) effect produces a very small p-value. A study with n = 1,000,000 might find p = 0.0001 for an effect of Cohen's d = 0.01 (trivially small), while a study with n = 50 might find p = 0.04 → Chapter 26: Quiz — Scientific Thinking and Evidence Evaluation
Q15: TRUE
A valid argument guarantees that IF the premises are true, the conclusion must be true. But validity says nothing about whether the premises actually are true. Example: "All cats are robots. My pet is a cat. Therefore, my pet is a robot." The argument is valid, yet its first premise is false, so it is valid without being sound. → Chapter 25: Quiz — Logic, Argumentation, and Fallacy Recognition
Q16: FALSE
Peer reviewers do not have access to the original raw data in standard peer review. They evaluate the methods and results as reported. Sophisticated data fabrication (which produces internally consistent data) is essentially impossible to detect through peer review alone. Major fraud cases (Hwang, S → Chapter 26: Quiz — Scientific Thinking and Evidence Evaluation
Q16: TRUE
Denying the antecedent is a formal fallacy: If P then Q; not-P; therefore not-Q. This is invalid because Q might be true via other pathways besides P. → Chapter 25: Quiz — Logic, Argumentation, and Fallacy Recognition
Q17: FALSE
The burden of proof lies with whoever makes a positive claim, not with the skeptic. Skeptics are not required to disprove every speculative claim; the burden is on those asserting that something is the case. (Context and prior plausibility matter, but the basic burden principle favors the null posit → Chapter 25: Quiz — Logic, Argumentation, and Fallacy Recognition
Q18: TRUE
Cogency is to inductive arguments what soundness is to deductive arguments. An inductive argument is cogent if and only if it is strong (premises support conclusion probabilistically) and all premises are true. → Chapter 25: Quiz — Logic, Argumentation, and Fallacy Recognition
Q19: FALSE
The Gish Gallop is equally effective in written media, YouTube videos, blog posts, and social media threads. A written article making 40 claims in rapid succession creates the same asymmetry: readers encounter a cascade of assertions, and addressing each thoroughly would require a much longer rebuttal. → Chapter 25: Quiz — Logic, Argumentation, and Fallacy Recognition
Q1: (B)
Popper's criterion is that a hypothesis is scientific only if there exists possible evidence that would show it to be false — i.e., it generates testable predictions that could be refuted by observation. This demarcates science from non-falsifiable claims like "God did it" or "the universe was creat → Chapter 26: Quiz — Scientific Thinking and Evidence Evaluation
Q1: (C)
Validity is defined purely by the impossibility of true premises and a false conclusion simultaneously. It says nothing about whether premises are actually true (that's soundness) or whether the argument is persuasive. → Chapter 25: Quiz — Logic, Argumentation, and Fallacy Recognition
Q20: FALSE
Moving the goalposts is a distinct fallacy involving arbitrary escalation of evidential standards to prevent a conclusion from being established. Appeal to ignorance argues from inability to disprove. Though related (both can protect beliefs from evidence), they are structurally different tactics. → Chapter 25: Quiz — Logic, Argumentation, and Fallacy Recognition
Q2: (C)
Modus tollens: If A then B; not-B; therefore not-A. Option A is modus ponens; option B is the fallacy of affirming the consequent; option D is hypothetical syllogism. → Chapter 25: Quiz — Logic, Argumentation, and Fallacy Recognition
Q3: (B)
Tu quoque ("you too") is a variant of ad hominem that deflects criticism by pointing out the critic's own alleged hypocrisy. The claim "you eat fast food" doesn't address the validity of the argument about food additives. → Chapter 25: Quiz — Logic, Argumentation, and Fallacy Recognition
Q4: (B)
The Gish Gallop's core mechanism is the asymmetry: producing a fallacious or weak claim takes seconds; properly refuting it requires time, expertise, and evidence. A presenter deploying 50 claims in 20 minutes cannot be adequately rebutted in the same timeframe. → Chapter 25: Quiz — Logic, Argumentation, and Fallacy Recognition
Q5: (C)
Affirming the consequent is a formal fallacy because it is an error in logical form: If P then Q; Q; therefore P. Ad hominem, slippery slope, and appeal to nature are all informal fallacies arising from content and context rather than logical structure. → Chapter 25: Quiz — Logic, Argumentation, and Fallacy Recognition
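Because formal fallacies are errors of logical form, they can be detected mechanically by enumerating truth assignments. A minimal sketch in Python (the helper names `valid` and `impl` are my own, not from the chapter):

```python
from itertools import product

def valid(premises, conclusion):
    """An argument form is valid iff no row of the truth table
    makes all premises true while the conclusion is false."""
    for p, q in product([True, False], repeat=2):
        if all(prem(p, q) for prem in premises) and not conclusion(p, q):
            return False
    return True

impl = lambda a, b: (not a) or b  # material conditional "if a then b"

# Modus tollens: If P then Q; not Q; therefore not P  -> valid
print(valid([lambda p, q: impl(p, q), lambda p, q: not q],
            lambda p, q: not p))   # True

# Affirming the consequent: If P then Q; Q; therefore P  -> invalid
print(valid([lambda p, q: impl(p, q), lambda p, q: q],
            lambda p, q: p))       # False

# Denying the antecedent: If P then Q; not P; therefore not Q  -> invalid
print(valid([lambda p, q: impl(p, q), lambda p, q: not p],
            lambda p, q: not q))   # False
```

The invalid forms fail on the row where P is false and Q is true: Q holds via some pathway other than P, exactly the loophole the glossary entries describe.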
Q6: (C)
Soundness requires both validity (correct logical form) and truth of all premises. A true conclusion is not sufficient — the argument could have a coincidentally true conclusion without valid inference. Absence of fallacies is related but not definitionally sufficient. → Chapter 25: Quiz — Logic, Argumentation, and Fallacy Recognition
Q7: (B)
False dichotomy (false dilemma). The statement presents only two extreme options (ban all social media vs. accept child damage) when many intermediate regulatory, educational, and technical options exist. → Chapter 25: Quiz — Logic, Argumentation, and Fallacy Recognition
Q8: (B)
Post hoc ergo propter hoc ("after this, therefore because of this") is the fallacy of concluding causation from temporal sequence. A precedes B, therefore A caused B — which overlooks the vast number of other events that also preceded B. → Chapter 25: Quiz — Logic, Argumentation, and Fallacy Recognition
Q8: (C)
A 95% confidence interval comes from a procedure that, repeated many times, produces intervals containing the true value in 95% of repetitions. It does NOT say there is a 95% probability that the true value lies within this specific interval (that would be a Bayesian credible interval). The confidence level is a property of the procedure, not of any particular interval. → Chapter 26: Quiz — Scientific Thinking and Evidence Evaluation
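The "property of the procedure" reading is easy to verify by simulation. A sketch using only the standard library (the 95% normal interval with known sigma is an illustrative simplification):

```python
import random

def coverage(n_reps=10_000, n=30, mu=0.0, sigma=1.0, z=1.96):
    """Fraction of nominal 95% CIs (normal approximation, known sigma)
    that contain the true mean mu across repeated samples."""
    hits = 0
    for _ in range(n_reps):
        sample = [random.gauss(mu, sigma) for _ in range(n)]
        mean = sum(sample) / n
        half = z * sigma / n ** 0.5       # half-width of the interval
        if mean - half <= mu <= mean + half:
            hits += 1
    return hits / n_reps

random.seed(0)
print(coverage())  # close to 0.95: the 95% describes the long-run procedure
```

Any single interval either contains mu or it doesn't; only the long-run hit rate is 95%.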
Q9: (B)
The no true Scotsman fallacy involves protecting a universal claim from counterexamples by definitionally redefining the category to exclude the counterexample, rather than revising the claim. Option B illustrates this perfectly. → Chapter 25: Quiz — Logic, Argumentation, and Fallacy Recognition
Q9: (C)
Relative risk reduction of 25% on a 2% baseline = 0.25 × 2% = 0.5 percentage points absolute risk reduction. This means the drug reduces stroke risk from 2% per year to 1.5% per year. While the relative reduction sounds impressive (25%), the absolute reduction is modest. The number needed to treat (NNT) is 1/0.005 = 200. → Chapter 26: Quiz — Scientific Thinking and Evidence Evaluation
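The arithmetic generalizes to any headline "X% risk reduction" claim. A small helper (the function name is illustrative) makes the relative-versus-absolute conversion explicit:

```python
def risk_summary(baseline_risk, relative_risk_reduction):
    """Convert a relative risk reduction into absolute terms."""
    arr = baseline_risk * relative_risk_reduction   # absolute risk reduction
    treated_risk = baseline_risk - arr              # risk under treatment
    nnt = 1 / arr                                   # number needed to treat
    return treated_risk, arr, nnt

# The quiz's numbers: 2% baseline, 25% relative reduction
treated, arr, nnt = risk_summary(0.02, 0.25)
print(treated)  # risk falls from 2% to 1.5% per year
print(arr)      # 0.5 percentage points in absolute terms
print(nnt)      # treat about 200 people for one year to prevent one stroke
```

Running the same function on a rarer outcome (say a 0.1% baseline) shows how the identical "25% reduction" headline can correspond to a vanishingly small absolute benefit.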
Quote Investigator
A research website and methodology for tracing the actual historical origins of attributed quotations. → Chapter 27: Lateral Reading and Advanced Web Literacy

R

repetition increases perceived credibility
the "illusory truth effect" — and that **epistemic confusion** (making people unsure what to believe) can be as strategically valuable as persuasion to specific false beliefs. The firehose strategy aims not primarily to convince but to exhaust: to make critical evaluation so cognitively taxing that → Chapter 31: State-Sponsored Disinformation and Information Warfare
Researcher degrees of freedom
the legitimate choices researchers make about data collection, analysis, and reporting — create opportunities for inflation of false positive rates. Uri Simonsohn, Leif Nelson, and Joseph Simmons demonstrated in a 2011 paper in *Psychological Science* that these degrees of freedom, used flexibly, co → Case Study 21-2: The Replication Crisis in Social Psychology — What It Means for Misinformation Research
Reverse image search
A technique for finding other occurrences of an image on the web, used to identify the earliest and original context of a photograph. → Chapter 27: Lateral Reading and Advanced Web Literacy

S

scaled to epistemic influence
those who have large platforms exercise power over others' beliefs at scale, and this power generates commensurate responsibilities. → Chapter 41 Quiz
Selection bias
the systematic tendency to sample non-representatively — is one of the most common ways inductive generalizations fail. Online polls, Twitter responses, and convenience samples are notorious for selection bias. → Chapter 25: Logic, Argumentation, and Fallacy Recognition
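A toy simulation makes the mechanism concrete. The numbers below are hypothetical (30% true support, supporters three times as likely to answer an online poll), chosen only to illustrate the skew:

```python
import random

random.seed(1)

# Hypothetical population: 30% support a policy (1 = supporter).
population = [1] * 3000 + [0] * 7000
random.shuffle(population)

# Simple random sample: every person equally likely to respond.
srs = random.sample(population, 500)
print(sum(srs) / len(srs))  # close to the true 0.30

# Online-poll style sample: supporters are (say) 3x as likely to respond.
weights = [3 if person else 1 for person in population]
poll = random.choices(population, weights=weights, k=500)
print(sum(poll) / len(poll))  # inflated well above 0.30
```

With these assumed weights the poll converges on roughly 0.9/(0.9 + 0.7) ≈ 56% support, nearly doubling the true rate without anyone fabricating a single response.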
selection problem
fact-checkers must choose which claims to investigate from among far more claims than they can check — directly interacts with epistemic justice concerns. Selection decisions are not epistemically neutral: by choosing to check powerful politicians' claims more often than equivalent claims by less po → Chapter 41 Quiz
Shadow analysis
Using the direction and length of shadows in images to determine approximate time of day and season. → Chapter 27: Lateral Reading and Advanced Web Literacy
Share
copy and redistribute the material in any medium or format - **Adapt** — remix, transform, and build upon the material → Misinformation, Media Literacy, and Critical Thinking in the Digital Age
shared informational infrastructure
common facts, sources, and experiences that people across the political spectrum can engage with. Public broadcasting systems (BBC, PBS, NPR) have historically served this function, and contemporary initiatives like The Correspondent (a member-funded, non-profit news platform) aim to reconstruct shared informational infrastructure. → Chapter 9: Filter Bubbles, Echo Chambers, and Algorithmic Curation
SIFT method
A four-move web verification framework: Stop, Investigate the source, Find better coverage, Trace claims. → Chapter 27: Lateral Reading and Advanced Web Literacy
social graph
the network of nodes (people) and edges (connections between people) that defines the structure of a social network. Facebook's core innovation was not the social networking concept (MySpace preceded it, Friendster preceded MySpace) but the systematization and mining of the social graph at scale. By → Chapter 7: The Rise of Digital and Social Media
Standard form:
P1: Social media platforms have financial incentives to promote engagement over accuracy. - P2: Sensational misinformation generates more engagement than boring truth. - C: We should not trust social media for news. → Chapter 25: Logic, Argumentation, and Fallacy Recognition
Stop
interrupts the emotional automaticity that drives impulsive sharing. → Chapter 38: Building Personal Resilience Against Misinformation
Strategies to improve calibration:
Reference class forecasting (base rate before case-specific reasoning) - Premortem analysis (imagine being wrong; why would that be?) - Explicit track record keeping (write down predictions, check them later) - Distinguish outside view from inside view; default to outside view first → Chapter 28 Key Takeaways: Probabilistic Thinking and Uncertainty
Subject Areas:
Media Studies - Information Science - Critical Thinking - Political Science - Cognitive Psychology - Digital Humanities → Misinformation, Media Literacy, and Critical Thinking in the Digital Age
Superforecaster
A person demonstrating significantly above-average calibration in probabilistic forecasting across diverse domains, as identified in Tetlock's Good Judgment Project. → Chapter 28: Probabilistic Thinking and Uncertainty
synthetic personas
entirely fabricated online identities whose biographical details, photographs, and digital histories are manufactured. The widespread availability of **generative adversarial network (GAN)** technology since approximately 2018 has made it trivially easy to produce photorealistic faces of people who do not exist. → Chapter 31: State-Sponsored Disinformation and Information Warfare

T

Technique 1: False equivalence
treating two things as equivalent when they are not (e.g., "One study shows X, another shows Y, so the science is 50-50.") → Chapter 38: Exercises — Building Personal Resilience Against Misinformation
Technique 2: Anecdote as data
using a single case to generalize (e.g., "I know someone who was fully vaccinated and still got COVID, so vaccines don't work.") → Chapter 38: Exercises — Building Personal Resilience Against Misinformation
Technique 3: Cherry-picking
selecting supporting evidence while ignoring contradicting evidence. → Chapter 38: Exercises — Building Personal Resilience Against Misinformation
Testimonials from satisfied users
the anecdotal case that substitutes for clinical evidence > - **Pseudoscientific language** — "detox," "alkaline," "quantum healing," "energy fields" — that sounds scientific without being meaningful > - **The appeal to nature** — "all natural," "chemical-free," "the body's own healing power" → Chapter 14: Health Misinformation — From Snake Oil to Anti-Vax
Testimony
communicating information from one person to another — is the primary mechanism by which humans transmit knowledge. When a friend tells you the restaurant on the corner is closed, when a doctor explains your diagnosis, when a journalist reports events you didn't witness, you acquire beliefs through testimony. → Chapter 1: What Is Truth? Epistemological Foundations
Tetlock's key findings:
Most expert pundits are barely better than chance at political prediction - A small subset of people ("superforecasters") is substantially more accurate - The difference is cognitive style, not domain expertise or information access - Superforecasters consistently outperform intelligence analysts with access to classified information → Chapter 28 Key Takeaways: Probabilistic Thinking and Uncertainty
Text:
Arabic script on buildings, street signs, and vehicles - Shop names and advertisements often reference local businesses - Graffiti and spray-painted markings sometimes include dates or slogans with dating information → Case Study 27-1: Geolocating a Conflict Photo — The Syria White Helmets Controversy
The DSA's systemic risk approach
requiring risk assessments and mitigation measures rather than content removal mandates — represents the most promising regulatory direction for addressing algorithmic amplification of misinformation while preserving editorial discretion. → Chapter 33: Key Takeaways — Policy Responses to Misinformation: Global Perspectives
The transfer problem
the difficulty of getting classroom-learned skills to transfer to real-world media behavior — is one of the most significant challenges in media literacy education. Habits, authentic practice, and metacognitive reflection are key to improving transfer. → Chapter 29: Key Takeaways — Media Literacy Frameworks
Topography:
Mountain silhouettes visible in background - Terrain slope and orientation - River valley configurations → Case Study 27-1: Geolocating a Conflict Photo — The Syria White Helmets Controversy
Total Points: 40
## Part I: Multiple Choice (1 point each) → Chapter 1 Quiz: What Is Truth? Epistemological Foundations
Total Points: 42
## Part I: Multiple Choice (1 point each) → Chapter 2 Quiz: The History of Misinformation
transferable strategies
habits of lateral reading, source investigation, claim tracing, and uncertainty acknowledgment — that can be applied to novel situations not yet encountered. → Chapter 27 Key Takeaways: Lateral Reading and Advanced Web Literacy
Tribal epistemics
a term used by philosophers and psychologists to describe the subordination of epistemic evaluation to group loyalty — is arguably the dominant epistemic pathology of the digital age. Social media platforms maximize engagement by making identity-relevant content salient; the result is that users enc → Chapter 5: The Social Psychology of Belief and Group Conformity
true
You actually **believe** it - Your belief is **justified** by appropriate evidence or reliable reasoning → Chapter 1 Key Takeaways: What Is Truth? Epistemological Foundations
Trust in major American institutions
Congress, media, government, science — has declined dramatically since the mid-20th century. Distinguishing justified from unjustified distrust is analytically and practically essential: some trust decline reflects accurate responses to genuine institutional failures; some reflects deliberate disinformation. → Chapter 30: Key Takeaways — Democracy, Polarization, and the Misinformation Crisis
Type 1: Satire/Parody
Not applicable. The video presents itself as a sincere documentary investigation. → Case Study 11.1: The "Plandemic" Video — A Taxonomy Analysis
Typosquatting
Registering domain names that differ by small typographical variations from legitimate domains, in order to deceive users. → Chapter 27: Lateral Reading and Advanced Web Literacy

V

Vegetation:
Trees that are identifiable to species can constrain latitude and climate zone - Olive trees (common across Syria) behave differently from agricultural crops in indicating season - Mountain vegetation patterns vary with elevation and region → Case Study 27-1: Geolocating a Conflict Photo — The Syria White Helmets Controversy
Verification resources:
Quote Investigator (quoteinvestigator.com) — traces earliest documented print occurrence - Wikiquote — maintains "Misattributed" sections for major figures - Google Books — for searching historical appearance of exact phrases - Primary source archives — Einstein Papers, Lincoln digitized papers, etc. → Chapter 27 Key Takeaways: Lateral Reading and Advanced Web Literacy
Vertical reading
Reading deeply within a single source, examining its internal evidence; characteristic of non-expert web users. → Chapter 27: Lateral Reading and Advanced Web Literacy

W

What documentation includes:
Screenshot of each step of the reverse image search with dates - Annotated comparison images showing feature matching between the disputed image and the verified location - SunCalc screenshots showing solar geometry calculations - Links to satellite imagery and Street View where used - Final confidence assessment → Case Study 27-1: Geolocating a Conflict Photo — The Syria White Helmets Controversy
What is limited or ineffective:
Simple information provision without identity threat reduction, for identity-laden topics - Corrections that center and repeat the false claim - Relying on analytical ability as a general protection against misinformation susceptibility - Post-hoc corrections after wide circulation of false claims → Chapter 3 Key Takeaways: How the Human Mind Processes Information
What shows promise:
Deliberate engagement of System 2 before sharing or accepting information (accuracy nudges, slowing down prompts) - Lateral reading: investigating what others say about a source rather than evaluating the source's own claims - Inoculation/pre-bunking: exposure to weakened forms of misinformation with refutations → Chapter 3 Key Takeaways: How the Human Mind Processes Information
WHOIS
A protocol that returns publicly available registration information about a domain name, including registration date and registrant details. → Chapter 27: Lateral Reading and Advanced Web Literacy
wumao dang
the "50-cent party" or "50-cent army" — so named for the rumored payment of 50 Chinese yuan cents per post. Research by Gary King, Jennifer Pan, and Margaret Roberts (published in the *American Political Science Review* in 2017) obtained leaked Chinese government documents revealing the actual scale of the operation: an estimated 448 million fabricated posts per year. → Chapter 31: State-Sponsored Disinformation and Information Warfare

Y

YouTube's borderline content policy
reducing algorithmic recommendation for content approaching but not clearly violating guidelines — is among the most consequential and least transparent moderation interventions at scale. Its existence illustrates both the potential for soft moderation to reduce harmful content reach and the transparency problems such undisclosed interventions raise. → Chapter 34: Key Takeaways — Platform Content Moderation: Policies, Challenges, Trade-offs