Appendix G: Media Literacy Toolkit

A practical reference for source evaluation, propaganda analysis, inoculation message design, and community audit. Use this appendix in class, in the field, and as the foundation for your Inoculation Campaign project.


Section 1: Quick Reference — Source Evaluation

The SIFT Method

SIFT is a four-step check you can run in under two minutes before sharing, acting on, or citing any piece of information.

S — Stop

Before you like, share, or react: pause. Ask yourself: Do I actually know whether this is reliable? The urge to share quickly is itself a vulnerability. Strong emotional reactions — outrage, triumph, disbelief — are a reliable signal that you should slow down, not speed up.

I — Investigate the Source

Do not read the article yet. First, find out who made it.

  • Open a new tab and search the outlet or author name.
  • Look for: ownership, funding, editorial history, track record of corrections, and known political alignment.
  • Ask: Is this a known outlet? Does it have a Wikipedia entry? What do other sources say about it?

F — Find Better Coverage

If the claim is significant, search for it independently.

  • Google the headline or core claim in quotes.
  • Do multiple reliable outlets cover it? If only one source reports a dramatic claim, treat it with skepticism.
  • If better coverage exists, use it. You do not need to return to the original source.

T — Trace Claims, Quotes, and Media

Many viral claims are distorted versions of real events.

  • Trace the claim upstream: Who published it first? What was the original context?
  • For quotes: search the exact wording in quotes to find the original source.
  • For images and video: use reverse image search (see Section 9) to verify origin and date.

The CRAAP Test

Use CRAAP for deeper evaluation of academic, institutional, or unfamiliar sources.

  • Currency: When was it published or last updated? Is currency relevant for this topic?
  • Relevance: Does it directly address your question? Is the audience appropriate?
  • Authority: Who is the author? What are their credentials? Who publishes this?
  • Accuracy: Are claims supported by evidence? Are sources cited? Can claims be verified?
  • Purpose: Why was this created? To inform, sell, persuade, or entertain? Who benefits?

Why Lateral Reading Beats Vertical Reading

Vertical reading means reading deeply within a source — evaluating its text, citations, and design. The problem: sophisticated disinformation mimics the look of credible sources. Design and citations can be fabricated.

Lateral reading means leaving the source immediately and reading about it from independent sites. This is what professional fact-checkers do. It is faster and more reliable.

How to do it:

  1. Note the source's name or URL.
  2. Open two or three new browser tabs.
  3. Search: [source name] credibility, [source name] bias, [source name] funding.
  4. Check Wikipedia, Media Bias/Fact Check, and Ad Fontes Media.
  5. Return to the original only after you have a baseline assessment.

Rule of thumb: spend more time reading about sources than reading within them.
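
For students who want to make this a habit, the three searches in step 3 can be opened in one go with a short script. A minimal sketch in Python, standard library only; the query wordings simply mirror step 3, and the Google search URL is one common choice, not the only one:

    import webbrowser
    from urllib.parse import quote_plus

    def read_laterally(source_name: str) -> None:
        """Open one browser tab per lateral-reading query from step 3."""
        queries = [
            f"{source_name} credibility",
            f"{source_name} bias",
            f"{source_name} funding",
        ]
        for query in queries:
            # quote_plus() makes the query safe to embed in a URL.
            webbrowser.open_new_tab(
                "https://www.google.com/search?q=" + quote_plus(query)
            )

    read_laterally("Example Daily News")  # placeholder outlet name

Running this for an unfamiliar outlet opens the searches side by side, mirroring the multi-tab workflow described above.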


Section 2: Propaganda Technique Identification Worksheet

Use this worksheet to analyze any propaganda artifact — an advertisement, political speech, social media post, documentary, or news segment.


ARTIFACT IDENTIFICATION

Field Your Notes
Source (who created this?)
Date of creation or publication
Target audience
Distribution channel / platform
Format (video, text, image, audio, other)
Stated purpose (if any)

TECHNIQUE IDENTIFICATION CHECKLIST

Check all that apply. (See Appendix F for full definitions.)

  • [ ] Appeal to fear
  • [ ] Appeal to authority (legitimate vs. false authority)
  • [ ] Bandwagon / social proof
  • [ ] Name-calling / ad hominem
  • [ ] Glittering generalities
  • [ ] Plain folks appeal
  • [ ] Transfer (symbolic association)
  • [ ] Testimonial
  • [ ] Card stacking / cherry picking
  • [ ] Black-and-white / false dilemma
  • [ ] Repetition / slogan
  • [ ] Scapegoating
  • [ ] Dog whistle / coded language
  • [ ] Firehose of falsehood
  • [ ] Manufactured consensus
  • [ ] Conspiracy framing
  • [ ] False equivalence
  • [ ] Emotional override (designed to prevent analysis)
  • [ ] Other: ___

EMOTIONAL TRIGGER ANALYSIS

Question Your Notes
What primary emotion does this target? (fear, anger, pride, disgust, hope)
What secondary emotion(s)?
How is the emotion activated? (imagery, music, language, framing)
Does the emotional intensity seem designed to bypass analysis?
What action does the emotion push the audience toward?

EVIDENCE QUALITY ASSESSMENT

Question Your Notes
What specific factual claims are made?
What evidence is provided for each?
Is the evidence verifiable? Verified?
Are statistics used? Are they accurately sourced and contextualized?
Are expert sources cited? Are they credible and relevant?

MISSING INFORMATION AUDIT

Question Your Notes
What relevant context is omitted?
What counterevidence exists that is not mentioned?
Whose perspective is absent?
What would a skeptic ask that this does not address?

SOURCE AND FUNDING TRACE

Question Your Notes
Who ultimately funds or controls this source?
Does the creator have a disclosed interest in this message?
Is funding transparent?
Does this source have a track record of accuracy or inaccuracy?

PRELIMINARY VERDICT

Place an X on the spectrum:

Legitimate Advocacy ----|----|----|----|---- Propaganda
                        1    2    3    4    5

Score: ___

Brief justification (2–3 sentences):


Section 3: Five-Step Propaganda Analysis Protocol

Use this protocol for in-depth written analysis — for papers, assignments, and the Inoculation Campaign.


Step 1: Identify the Artifact

Document the basics before interpreting anything.

  • Full citation: source, creator, date, platform, format
  • Brief description of what the artifact is (not what it means)
  • Note your initial emotional reaction — then set it aside

Step 2: Map the Message

Separate what the artifact says explicitly from what it implies.

  • Explicit claim: What is directly stated? (e.g., "Candidate X raised taxes seventeen times.")
  • Implicit message: What is the artifact designed to make you feel or believe that it does not state directly? (e.g., "Candidate X cannot be trusted with your money.")
  • Call to action: What behavior or attitude change is the artifact designed to produce?

Step 3: Analyze the Techniques

Apply the taxonomy from Appendix F systematically.

  • List every technique present (use the checklist in Section 2 as a starting point).
  • For each technique, quote or describe the specific element that instantiates it.
  • Identify the primary technique (the one doing the most work).
  • Note any techniques that work in combination.

Step 4: Evaluate the Evidence

Assess factual claims against available evidence.

  • List each specific factual claim made.
  • Search for independent verification of each.
  • Rate each claim: Verified / Partially true / Missing context / Unverified / False.
  • Note what evidence would be needed to evaluate unverified claims.
  • Identify the "missing middle" — what the artifact omits that a fair account would include.

Step 5: Assess the Intended Effects

Think about what the artifact is designed to do in its target audience.

  • What beliefs is it designed to create or reinforce?
  • What attitudes toward a group, institution, or policy does it promote?
  • What actions does it encourage — voting, donating, sharing, avoiding, fearing?
  • Is the effect likely to be short-term (behavioral) or long-term (attitudinal)?
  • Who benefits if these effects occur?

Section 4: Source Quality Assessment Guide

Use this sequence for rapid source evaluation.

1. Check the About page

Every credible outlet has one. Look for: ownership, founding date, stated mission, named editorial leadership, physical address. Absence of any of these is a warning sign.

2. Check the URL

Mimicry sites deliberately resemble real outlets.

  • Watch for: .com.co, -news, or "usa"/"daily" added to familiar names
  • Examples: ABCnews.com.co (not ABC News), NationalReport.net (a satire site frequently mistaken for news)
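
These string patterns can be turned into a crude programmatic filter, sketched below. The pattern list is illustrative, not exhaustive, and substring matching produces false positives (usatoday.com would trip the "usa" check), so treat a match as a prompt for lateral reading, never a verdict:

    def looks_like_mimicry(domain: str) -> bool:
        """Flag domain patterns often used by look-alike news sites."""
        suspicious_patterns = [
            ".com.co",   # double-TLD trick, e.g. ABCnews.com.co
            "-news",     # "-news" bolted onto a familiar name
            "usa",       # "usa" added to a familiar name
            "daily",     # "daily" added to a familiar name
        ]
        domain = domain.lower()
        return any(pattern in domain for pattern in suspicious_patterns)

    for site in ("abcnews.go.com", "abcnews.com.co"):
        verdict = "check laterally" if looks_like_mimicry(site) else "no flag"
        print(site, "->", verdict)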

3. Check known bias databases

  • Media Bias/Fact Check (mediabiasfactcheck.com): political lean and factual-reporting rating
  • Ad Fontes Media (adfontesmedia.com): reliability and bias on a two-axis grid
  • AllSides (allsides.com): left-center-right bias ratings with methodology notes

Use these as starting points, not final verdicts. Cross-reference at least two.

4. Check funding transparency

Who pays for this outlet? Look for:

  • Disclosed donors or owners on the About page
  • IRS Form 990 (for nonprofits) via ProPublica Nonprofit Explorer
  • OpenSecrets.org for organizational political spending

Undisclosed funding is not disqualifying, but it raises questions.

5. Check correction policies

Credible outlets publish corrections prominently. Search site:[outlet.com] correction or site:[outlet.com] editor's note. An outlet that has never published a correction either has perfect accuracy or no accountability culture; the first is unlikely.

6. Use WHOIS for unfamiliar domains

Search WHOIS [domain] to find the registration date, registrant, and hosting country. Recently registered domains (under six months old) promoting dramatic news warrant additional scrutiny.
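
The registration-age check can also be run in a few lines of Python. A sketch, assuming the third-party python-whois package (pip install python-whois); the 180-day threshold restates the six-month rule of thumb above, not any standard:

    from datetime import datetime

    import whois  # third-party: pip install python-whois

    def domain_age_days(domain: str) -> int:
        """Days since the domain was first registered, per WHOIS."""
        record = whois.whois(domain)
        created = record.creation_date
        if created is None:
            raise ValueError(f"No creation date in WHOIS record for {domain}")
        # Some registries return a list of dates; take the earliest.
        if isinstance(created, list):
            created = min(created)
        return (datetime.now() - created).days

    if domain_age_days("example.com") < 180:  # placeholder domain
        print("Recently registered: apply extra scrutiny.")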


Section 5: The FLICC Identification Guide

FLICC identifies five techniques used to manufacture doubt and resist scientific or factual consensus. Originally developed in the context of climate denial, it applies broadly to disinformation.


F — Fake Experts

What it looks like: - "As a doctor / scientist / former official, I can tell you..." - Citing credentials that are real but irrelevant to the claim - Groups with authoritative-sounding names but no institutional affiliation - Lists of "signatories" to contrarian petitions (often outdated, unverifiable, or unrelated to the field)

Phrases to watch: "Thousands of scientists disagree..." / "Former [agency] insider reveals..." / "Experts the media won't interview..."

Prebunking message: "When someone challenges scientific consensus, check whether their expert credentials are actually relevant to the specific claim. A cardiologist is not an expert on climate science. A list of signatures is not evidence."

What to say in conversation: "That's an interesting source — what's their specific expertise in this area? Are they published in peer-reviewed journals on this topic?"


L — Logical Fallacies

What it looks like:

  • Straw man: misrepresenting the opposing view to make it easier to attack
  • False dilemma: "Either you support X or you support Y" (ignoring other options)
  • Ad hominem: attacking the person rather than the argument
  • Slippery slope: claiming one step leads inevitably to an extreme outcome

Phrases to watch: "If we allow X, next they'll want Y..." / "So you're saying..." / "Of course they would say that, they're funded by..."

Prebunking message: "Logical fallacies are argument errors that sound convincing but don't hold up under examination. Learning to name them makes them less powerful."

What to say in conversation: "I think that might be a straw man — that's not quite what [X] is actually arguing. What's your response to what they're actually saying?"


I — Impossible Expectations

What it looks like:

  • Demanding a level of certainty that science never provides
  • Treating any uncertainty as equivalent to total ignorance
  • "They keep changing their recommendations, so how can we trust them?"
  • Holding scientific evidence to a different standard than anecdote or ideology

Phrases to watch: "Science doesn't know for sure..." / "They said X last year and Y this year..." / "If the experts can't agree, why should I listen to them?"

Prebunking message: "Science communicates in probabilities, not certainties. Updated recommendations reflect new evidence — that's the system working correctly, not failing."

What to say in conversation: "What level of evidence would you need to change your mind? That's a useful question for both of us to answer."


C — Cherry Picking

What it looks like:

  • Citing one study that contradicts a large body of evidence
  • Focusing on short-term data trends that contradict long-term patterns
  • Highlighting exceptions while ignoring the rule
  • Presenting carefully selected quotes stripped of context

Phrases to watch: "This one study shows..." / "Even NASA admitted in 1975 that..." / "Here's what they don't want you to see..."

Prebunking message: "Any large body of evidence contains individual studies that point in different directions. Scientific consensus is based on the weight of evidence, not any single study."

What to say in conversation: "That's one data point — what does the broader research literature say? Does this study represent the majority view or an outlier?"


C — Conspiracy Theories

What it looks like:

  • Claims that require coordinated silence across thousands of independent actors
  • Treating absence of evidence as evidence of cover-up
  • "Official" explanations dismissed by definition as part of the conspiracy
  • Unfalsifiability: any counter-evidence is reframed as further proof

Phrases to watch: "They don't want you to know..." / "The fact that they're denying it proves..." / "Connect the dots..." / "Do your own research..."

Prebunking message: "Conspiracy theories become unfalsifiable when any counter-evidence is framed as further proof of the conspiracy. Real conspiracies get exposed — and the evidence for them grows over time."

What to say in conversation: "What evidence would convince you this theory is wrong? If there's no possible answer, that's worth examining."


Section 6: Inoculation Message Design Template

Use this template to design an inoculation message for your campaign. Each message targets a technique, not a specific claim; this gives it broad applicability against any future message that uses the same technique.


Step 1: Select the Target Technique

Name the technique your message will inoculate against: _______________________________________________

Why this technique? (Who uses it in your target community, and about what topics?) _______________________________________________


Step 2: Write the Forewarning (1–2 sentences)

Alert the audience that a specific manipulation technique exists and may be used on them. Name it without naming a specific actor.

Example (for false authority): "Propagandists sometimes use people with impressive-sounding credentials to make false claims seem credible — even when those credentials have nothing to do with the subject."

Your forewarning: _______________________________________________


Step 3: Write the Weakened Example (2–4 sentences)

Demonstrate the technique in a low-stakes, clearly artificial context. The example should be just strong enough to be recognizable, but obviously false, so the example itself cannot persuade anyone of the claim it illustrates.

Example: "Imagine an advertisement that says: 'As a celebrated chef with three Michelin stars, I can tell you that this herbal supplement cured my arthritis.' The chef's credentials are real — but they tell us nothing about medicine."

Your weakened example: _______________________________________________


Step 4: Write the Refutation (2–3 sentences)

Explain specifically why the technique does not work as evidence. Do not just assert that it's wrong — explain the mechanism by which it fails.

Example: "Culinary expertise doesn't transfer to medical claims. When evaluating health claims, ask: Is this person's training in the relevant field? Are they citing peer-reviewed evidence?"

Your refutation: _______________________________________________


Step 5: Test for Clarity, Brevity, and Emotional Resonance

Test Pass / Revise
Can someone with no background understand it in one reading?
Is it under 150 words for the core message?
Does it engage the audience without triggering defensiveness?
Does it end with a concrete action (a question to ask, a check to perform)?

Step 6: Select Delivery Mechanism and Format

Decision Your Choice
Primary format (video, graphic, text post, podcast clip, printed card)
Primary platform or venue
Messenger (who will deliver it — peer, teacher, influencer, none)
Timing (before anticipated exposure, or just-in-time?)
Language and register (formal, informal, age-appropriate)

Section 7: Counter-Messaging Template

Use the truth sandwich structure when correcting misinformation in public communication. The goal is to reinforce accurate information rather than amplify the false claim through repetition.


1. Lead with truth

State the accurate information clearly and directly. This should be the strongest sentence — it is both your opening and the most-repeated element.

Example: "Vaccines do not cause autism. The largest studies ever conducted, involving millions of children, have found no link."


2. Briefly acknowledge the false claim

Name the misinformation once, without repeating the specific false phrase more than necessary. Do not quote it directly if possible.

Example: "A 1998 study claimed to find a link, but it was retracted after its data was found to be fraudulent and its author lost his medical license."


3. Explain the technique

Name what the propagandist did and why it works psychologically. This helps audiences recognize the technique in future encounters.

Example: "This is a common pattern: a discredited single study is kept alive long after the scientific community moved on, exploiting the fact that most people don't follow scientific retractions."


4. Return to truth

End on the accurate information. This is the last thing the audience hears — make it count.

Example: "The evidence is clear and has been replicated across dozens of countries: childhood vaccines are safe and do not cause autism."


Checklist before publishing a counter-message:

  • [ ] Does it lead with truth, not with the false claim?
  • [ ] Does it repeat the false phrase as few times as possible?
  • [ ] Does it name the technique (not just the actor)?
  • [ ] Does it end with the accurate information?
  • [ ] Is it appropriate in tone for the target audience?
  • [ ] Is it brief enough to be shared easily?
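
For campaigns that produce counter-messages at scale, the structural rule (truth opens and closes; the false claim never leads or ends) can be encoded directly. A minimal sketch; the class and field names are illustrative, not a published standard, and the sample text abbreviates the examples above:

    from dataclasses import dataclass

    @dataclass
    class TruthSandwich:
        lead_truth: str       # 1. Lead with truth
        acknowledgment: str   # 2. Briefly acknowledge the false claim
        technique: str        # 3. Explain the technique
        closing_truth: str    # 4. Return to truth

        def render(self) -> str:
            """Assemble the message with truth first and last."""
            return "\n\n".join(
                [self.lead_truth, self.acknowledgment,
                 self.technique, self.closing_truth]
            )

    message = TruthSandwich(
        lead_truth="Vaccines do not cause autism. The largest studies ever "
                   "conducted, involving millions of children, found no link.",
        acknowledgment="A 1998 study claimed to find a link, but it was "
                       "retracted after its data was found to be fraudulent.",
        technique="A discredited single study is being kept alive long after "
                  "the scientific community has moved on.",
        closing_truth="The evidence is clear: childhood vaccines are safe "
                      "and do not cause autism.",
    )
    print(message.render())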

Section 8: Community Propaganda Audit Template

Use this template during the research phase of your Inoculation Campaign to map the information environment in your target community.


Part A: Community Profile

Field Your Notes
Community name / description
Approximate size and geographic scope
Age range of primary audience
Primary languages
Education level (general)
Economic context
Political context (if relevant)
Key community identities or affiliations

Part B: Information Environment Map

Platform / Channel Usage Level (High / Med / Low) Primary Content Type Key Trusted Voices
Facebook
YouTube
WhatsApp / group chats
Local TV news
Local newspapers
Radio
TikTok / Instagram
Podcasts
Community organizations
Religious institutions
Other:

Part C: Threat Identification Worksheet

For each identified threat, complete one row:

Technique Where It Appears Who Uses It Topic / Issue Frequency

Part D: Vulnerability Assessment

Rate the community's susceptibility to each cognitive bias (1 = low, 3 = high):

Bias Rating Evidence or Reasoning
Confirmation bias (seeking confirming information)
In-group loyalty (accepting claims from the in-group uncritically)
Authority bias (deferring to credentials regardless of relevance)
Illusory truth effect (believing repeated claims)
Emotional susceptibility (high-emotion content bypasses analysis)
Novelty bias (sharing surprising information without checking)
Distrust of mainstream media

Part E: Counter-Resource Inventory

Resource Format Access Trustworthiness in This Community

List trusted local voices, organizations, or institutions that could serve as messengers for accurate information in this community.


Section 9: Resource Directory

Fact-Checking Organizations

  • PolitiFact (politifact.com): U.S. political claims
  • FactCheck.org (factcheck.org): U.S. political and policy claims
  • Snopes (snopes.com): viral claims, urban legends, social media
  • Full Fact (fullfact.org): UK-focused
  • AFP Fact Check (factcheck.afp.com): international, multilingual
  • DW Fact Check (dw.com/en/fact-check): global, strong on European disinformation

Open-Source Investigation Tools

  • Bellingcat (bellingcat.com): OSINT guides and investigations
  • TinEye (tineye.com): reverse image search
  • Google Reverse Image Search (images.google.com): reverse image search
  • InVID / WeVerify (invid-project.eu): video verification browser extension
  • Yandex Images (yandex.com/images): reverse image search (often stronger for faces and locations)
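
As with the lateral-reading sketch in Section 1, these lookups can be launched from a script. A minimal example for TinEye; the search?url= query pattern is an assumption based on TinEye's public search page, and it works only for images already hosted at a public URL:

    import webbrowser
    from urllib.parse import quote_plus

    def reverse_image_search(image_url: str) -> None:
        """Open a TinEye reverse image search for a hosted image."""
        # Assumed URL pattern for TinEye's search-by-URL feature.
        webbrowser.open_new_tab(
            "https://tineye.com/search?url=" + quote_plus(image_url)
        )

    reverse_image_search("https://example.com/photo.jpg")  # placeholder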

Research Databases and Investigative Reports

  • Stanford Internet Observatory (cyber.fsi.stanford.edu): platform disinformation, influence operations
  • DFRLab, the Digital Forensic Research Lab (dfrlab.org): state-sponsored disinformation
  • Graphika (graphika.com): network analysis of influence campaigns
  • ProPublica (propublica.org): data-driven investigative journalism

Media Bias Resources

  • Ad Fontes Media (adfontesmedia.com): two-axis reliability/bias chart
  • Media Bias/Fact Check (mediabiasfactcheck.com): categorical ratings with sourcing
  • AllSides (allsides.com): crowd-sourced and editorial ratings

Platform Transparency Reports

  • Meta: transparency.meta.com
  • Google: transparencyreport.google.com
  • TikTok: tiktok.com/transparency
  • Twitter / X: transparency.twitter.com

Inoculation Games (Free, Browser-Based)

  • Bad News (getbadnews.com): multiple propaganda techniques
  • Go Viral! (goviralgame.com): COVID-19 misinformation techniques
  • Harmony Square (harmonysquare.game): election manipulation techniques
  • Cranky Uncle (crankyuncle.com): science denial / FLICC

These games are designed for classroom use. Playing them before encountering real disinformation reduces susceptibility.

Primary Source Archives

  • UCSF Industry Documents Library (industrydocuments.ucsf.edu): internal documents from the tobacco, pharmaceutical, and fossil fuel industries
  • Internet Archive / Wayback Machine (web.archive.org): historical snapshots of any website
  • National Security Archive (nsarchive.gwu.edu): declassified U.S. government documents

Section 10: Glossary Quick Reference

Astroturfing: Manufacturing the appearance of grassroots support using paid or coordinated actors who conceal their true origin.

Backfire effect: The tendency for corrections to strengthen rather than weaken a false belief when the belief is tied to identity (note: replications of the original study have shown mixed results).

Cherry picking: Selectively presenting evidence that supports a desired conclusion while ignoring contradicting evidence.

Cognitive bias: A systematic pattern of deviation from rational judgment, often exploited by propaganda.

Confirmation bias: The tendency to search for, favor, and recall information that confirms existing beliefs.

Deepfake: AI-generated synthetic media — video, audio, or images — designed to depict events or statements that did not occur.

Dog whistle: Coded language that communicates one message to a general audience and a different, targeted message to a specific in-group.

Epistemological crisis: A breakdown in the shared standards by which a society determines what counts as credible evidence — often described as a precondition for authoritarian consolidation.

Firehose of falsehood: A propaganda technique that overwhelms audiences with high-volume, high-speed false claims, making systematic rebuttal impossible.

FLICC: An acronym identifying five science-denial techniques: Fake experts, Logical fallacies, Impossible expectations, Cherry picking, Conspiracy theories.

Illusory truth effect: The tendency to rate statements as more true after repeated exposure, regardless of their actual accuracy.

Inoculation theory: A psychological model predicting that exposure to a weakened form of a persuasion attempt — with explicit refutation — increases resistance to future attempts.

Lateral reading: The practice of evaluating a source by reading other sources about it rather than reading only within it.

Manufactured consensus: Creating a false impression that a fringe view is widely held, typically through coordinated fake accounts, bots, or astroturfing.

Prebunking: Delivering inoculation messages before audiences are exposed to disinformation, rather than correcting it afterward.

Propaganda: Communication designed primarily to influence attitudes or actions by exploiting psychological vulnerabilities rather than engaging rational deliberation.

Scapegoating: Directing blame for a complex problem onto a specific group, typically a marginalized or out-group population.

SIFT: A four-step source evaluation method: Stop, Investigate the source, Find better coverage, Trace claims.

Source heuristic: A mental shortcut in which audiences accept or reject information based on who produced it rather than evaluating the information itself.

Truth sandwich: A counter-messaging structure that leads and ends with accurate information, briefly acknowledging the false claim in between to minimize its reinforcement.


This toolkit is a living document. Tools and resources change: fact-checking organizations emerge and close, platforms alter their transparency reporting, and new manipulation techniques develop. Revisit and update these resources each semester.