In This Chapter
- Learning Objectives
- Introduction
- Section 20.1: The Credibility Assessment Problem
- Section 20.2: The SIFT Method
- Section 20.3: Lateral Reading
- Section 20.4: Investigating Sources
- Section 20.5: Finding Better Coverage
- Section 20.6: Tracing Claims
- Section 20.7: The SIFT Method for Visual Content
- Section 20.8: Domain Credibility Assessment
- Section 20.9: Building Verification Habits
- Key Terms
- Callout Box: The Three Moves for Checking Viral Claims
- Callout Box: The "Too Good to Be True" Heuristic
- Discussion Questions
- Summary
Chapter 20: Source Evaluation and the SIFT Method
Learning Objectives
By the end of this chapter, students will be able to:
- Explain why traditional source evaluation frameworks like the CRAAP test are inadequate for the contemporary digital information environment.
- Apply the SIFT method (Stop, Investigate the source, Find better coverage, Trace claims) to unfamiliar sources and claims encountered online.
- Describe the practice of lateral reading — checking what sources outside a website say about that website — and explain why it is more effective than deep reading of a source's self-presentation.
- Use Wikipedia, WHOIS lookups, the Wayback Machine, and About page analysis as starting points for source investigation.
- Apply advanced search operators and evidence-triangulation strategies to find better and more authoritative coverage of claims.
- Trace claims back to their original sources, including using reverse image search (TinEye, Google Images) and video verification tools (InVID/WeVerify).
- Apply the SIFT method and related workflows to visual content, including basic deepfake detection heuristics, metadata analysis, and geolocation verification.
- Use domain credibility assessment resources (MediaBiasFactCheck, AllSides, Ad Fontes Media) appropriately, understanding their value and their limitations.
- Articulate the behavioral dimensions of verification — the 30-second pause, building verification habits, and teaching SIFT to others.
Introduction
In April 2014, a photograph circulated widely on social media purporting to show a massive protest march in Washington, D.C., against government surveillance. The image was striking and widely shared by journalists, politicians, and activists. It had one problem: the photograph was not taken in 2014. It was taken during a different event, in a different year, and repurposed with new captions to make it appear relevant to current events.
This episode illustrates a challenge that has become routine in the digital information environment: the challenge of source evaluation in an ecosystem where images can be extracted from their original context, websites can be created in minutes, and the visual credibility signals that once distinguished authoritative from non-authoritative sources have been democratized to the point of uselessness.
The traditional response to this challenge — the development of structured source evaluation frameworks for students and citizens — has not kept pace with the sophistication of the problem. The CRAAP test, for decades the dominant framework for teaching source evaluation, was designed for an information environment that no longer exists. Its prescriptions, however well-intentioned, guide users to perform exactly the kind of evaluation that is least effective in the current environment.
This chapter examines source evaluation as it must be practiced today. It begins with the failure of traditional frameworks, proceeds through Mike Caulfield's SIFT method as a more effective alternative, examines the specific verification skills that SIFT requires — lateral reading, source investigation, better coverage finding, and claim tracing — and extends the framework to visual content, domain credibility, and the building of verification habits.
Throughout, the chapter emphasizes a core insight from research on expert information behavior: the most effective verifiers do not read sources deeply to determine their credibility. They read them laterally — quickly navigating to external sources to check what the broader information ecosystem says about a source — and they do so efficiently, making quick, decisive judgments about where to invest further attention.
Section 20.1: The Credibility Assessment Problem
Why Traditional Source Evaluation Fails
For much of the late twentieth century, source evaluation in educational contexts was organized around the concept of "authoritative sources." Authoritative sources had certain markers: they were produced by recognizable institutions (universities, government agencies, major news organizations), they went through editorial review processes, and they carried observable signals of quality (peer review, editorial oversight, institutional affiliation, bylines from credentialed authors). Teaching source evaluation meant teaching students to look for these markers.
The CRAAP test — Currency, Relevance, Authority, Accuracy, Purpose — became the dominant operationalization of this approach in library and information science pedagogy. Developed by Sarah Blakeslee at California State University, Chico, the CRAAP test provided a checklist of questions that students could apply to any source: How recent is it? How relevant is it to my question? Who is the authority? Is the information accurate? What is the purpose of the source?
These are reasonable questions for the information environment of the 1990s and early 2000s. In that environment, the effort required to establish a credible-seeming institutional presence was substantial. Creating a website that appeared authoritative required real resources, and the markers of authority — institutional affiliation, professional design, editorial polish — were genuinely correlated with actual authority.
That environment no longer exists. Creating a website that appears credible, institutional, and authoritative now requires minimal resources, time, or expertise. Professional-looking website templates are available for free. Domain names that closely resemble credible institutions can be purchased for a few dollars. Stock photos of "editorial staff" can be licensed or generated by AI. Plausible-sounding organizational names can be invented. The design and language signals that previously distinguished authoritative from non-authoritative sources have been thoroughly democratized.
In this environment, evaluating a source by reading it carefully — CRAAP's implicit model of "investigate from within" — is precisely what effective misinformers want us to do. A well-designed disinformation operation will satisfy CRAAP checklist criteria for Currency, Relevance, apparent Authority, apparent Accuracy, and stated Purpose. The CRAAP test cannot distinguish between a genuine research institution and a think tank that uses research-institutional language to lend credibility to advocacy.
The Expert-Novice Gap in Source Evaluation
Research by Sam Wineburg and colleagues at the Stanford History Education Group (documented in detail in Case Study 20.1) found a striking pattern: when college students, professional historians, and fact-checkers were all asked to evaluate the credibility of unfamiliar websites, the fact-checkers dramatically outperformed both other groups. The professional historians barely outperformed the college students.
The key difference was not expertise in the domain — the fact-checkers were not subject matter experts in the topics covered by the websites they evaluated. The key difference was in verification strategy. The professional historians tended to read websites deeply and carefully, scrutinizing the source's self-presentation to assess its credibility. The fact-checkers left the website almost immediately, opening multiple new tabs to check what other sources said about the website in question. They used the techniques of lateral reading.
This finding overturns an assumption embedded in traditional source evaluation pedagogy: that careful, attentive reading of a source is the path to accurate credibility assessment. In the current environment, careful reading of a source's self-presentation is often exactly what disinformation producers want — it exposes readers to the persuasive content while providing little useful information about the source's actual credibility. The path to accurate credibility assessment runs outside the source, not deeper into it.
Section 20.2: The SIFT Method
Mike Caulfield, a digital literacy researcher at the University of Washington's Center for an Informed Public, developed the SIFT method as a practical, evidence-based alternative to traditional source evaluation frameworks. SIFT stands for:
- S — Stop
- I — Investigate the source
- F — Find better coverage
- T — Trace claims, quotes, and media to their original context
Each element addresses a specific failure mode of uncritical information consumption. Together, they constitute a workflow rather than a checklist — a series of moves that, practiced habitually, dramatically improve the accuracy and efficiency of source evaluation.
Stop
The first move is the simplest and in some ways the most important: stop before sharing, liking, or even continuing to read content that provokes a strong reaction. Stopping matters precisely because strong emotional reactions — surprise, outrage, vindication, moral disgust — are the primary triggers that drive viral misinformation spread. Content that makes us feel strongly is the content we are most likely to share before verifying.
The "stop" move is not an instruction to be skeptical about everything or to never trust any information. It is an instruction to pause when a piece of content triggers a strong reaction, to notice the reaction, and to begin the verification process before acting on the reaction. The pause is the space in which verification can occur.
Research on the psychology of misinformation sharing suggests that people generally want to share accurate information — they do not deliberately share false content. Errors occur when the automatic, fast-processing System 1 mode of cognition drives sharing behavior without engaging the slower, more deliberate System 2 evaluation. The "stop" instruction is an attempt to activate System 2 evaluation at the moment when System 1 is most strongly engaged.
Investigate the Source
Before reading an article or consuming content, take a moment to investigate the source. Who is making this claim? What do we know about this organization or individual? What is their track record? Do they have relevant expertise or institutional standing?
The crucial word here is "investigate," not "examine." Investigating a source means going outside the source to check what others say about it — lateral reading. Examining a source means reading its self-presentation carefully to assess credibility — vertical reading. The research evidence clearly favors lateral reading as more effective.
Lateral reading for source investigation typically begins with a quick Google or DuckDuckGo search for the name of the publication, organization, or author, before reading their content. What does Wikipedia say about this organization? What do other credible outlets say about it? Has it been identified as a partisan operation, a fringe publication, or a misinformation source by any credible fact-checking organization?
Wikipedia plays a specific and important role in this process. Wikipedia is not reliable as a primary source for contested factual claims, but it is generally reliable as a starting point for understanding what an organization is — its founding, its funding, its track record, its controversies. A Wikipedia article about a news organization typically contains exactly the information needed for a quick credibility assessment: funding sources, political orientation, notable controversies, connections to larger networks.
The investigate-the-source move also involves checking sources you already know. If you trust The Atlantic to be a credible general interest magazine, you can assess a new source partly by asking: how does it relate to sources you already know? Is it covered in The Atlantic? Do credible sources link to it? Is it part of a recognizable institutional ecosystem?
Find Better Coverage
For specific factual claims, the most efficient verification move is often not to investigate the source that made the claim but to find better coverage of the same event or claim from other sources.
Finding better coverage means asking: Who else is reporting on this? What do authoritative sources say about this specific claim or event? Is there consensus among independent sources, or does this claim appear only in a single, unfamiliar publication?
The "better" in "find better coverage" does not necessarily mean more favorable — it means more authoritative, more independently verified, and more representative of what the broader information ecosystem says. A claim that appears in a single unfamiliar outlet but is not reported by any other outlet should prompt immediate skepticism: either the claim is false, or it is accurate but has been ignored by all other coverage for some reason. Both possibilities warrant further investigation.
Finding better coverage typically involves a quick search using the key factual elements of the claim — the names, numbers, dates, and events at the heart of the assertion. If the claim is true and significant, multiple independent credible sources should be covering it. If the claim appears only in the source you encountered it in, this is important information about its credibility.
Advanced search operators — site:, filetype:, "exact phrase" in quotes, date range filters — can help find better coverage more efficiently. These techniques are examined in detail in Section 20.5.
Trace Claims, Quotes, and Media to Their Original Context
The fourth SIFT move addresses the specific problem of decontextualization — the extraction of claims, quotes, images, or videos from their original context to make them appear to support different claims than they actually do.
Tracing means finding the original source of a claim, quote, statistic, or image, and checking whether the claim accurately represents what the original source actually says. Many forms of online misinformation involve legitimate source material that has been decontextualized: a genuine statistic cited out of context, a real quote from a credible expert that means something different in its original setting, an actual photograph that was taken at a different time or place than claimed.
Tracing involves reverse image search (Google Images, TinEye, Yandex Images) for visual content, quote verification through direct search for the attributed source, and statistical tracing to the original dataset or study. These specific techniques are examined in detail in Sections 20.5 and 20.6.
Section 20.3: Lateral Reading
What Is Lateral Reading?
Lateral reading is the practice of evaluating a source by reading what sources outside that source say about it, rather than reading the source itself deeply. The term was coined by Sam Wineburg and Sarah McGrew in their research at the Stanford History Education Group.
In contrast to vertical reading — scrolling down through a source's content to assess its credibility from within — lateral reading involves immediately opening new browser tabs to check what Wikipedia, established news organizations, or other credible sources say about the source in question. Fact-checkers do this automatically and efficiently; they have developed it as a habitual professional practice. The research finding is that even brief lateral reading (a minute or two) is typically more informative about source credibility than extended vertical reading of the source itself.
The metaphor is helpful: if you want to know whether a new restaurant is trustworthy, you do not study the restaurant's own menu and décor to assess its trustworthiness. You check Yelp, you ask people who have been there, you read reviews from independent critics. Reading the restaurant's own descriptions of its food tells you what the restaurant wants you to know, not what independent observers have found. The same logic applies to evaluating information sources.
The Stanford Web Credibility Study
The Stanford History Education Group's landmark web credibility study (Wineburg et al., 2022, drawing on earlier 2019 research) tested three groups on their ability to assess the credibility of online sources: college students at Stanford University, professional historians, and professional fact-checkers. The study used three specific tasks that required participants to assess the credibility of particular websites and claims.
The results were striking:
- Professional fact-checkers dramatically outperformed both other groups on all three tasks.
- Professional historians barely outperformed college students and performed substantially worse than fact-checkers.
- The key behavioral difference was reading strategy: fact-checkers left the source almost immediately and performed lateral reading; historians tended to read the source carefully (vertical reading); students used a mixture of strategies but generally performed poorly.
The finding that professional historians — domain experts with decades of experience evaluating sources — barely outperformed undergraduates using traditional source evaluation methods is particularly important. It suggests that the problem with traditional source evaluation pedagogy is not that students are poorly trained in it; it is that the approach itself is inadequate for the current information environment. Even highly skilled practitioners using traditional methods are less effective than non-expert fact-checkers using lateral reading.
Teaching Lateral Reading
Lateral reading can be taught and practiced. Several experimental studies have found that brief interventions — as short as one class period — that teach lateral reading strategies produce meaningful improvements in students' ability to assess source credibility. Caulfield's own research with students using the SIFT method found that students who practiced lateral reading showed significantly improved credibility judgment compared to students who received traditional source evaluation instruction.
The key elements of lateral reading instruction include:
- Teaching the concept of reading about sources rather than reading sources.
- Providing practice with the browser behaviors involved — opening new tabs, using Wikipedia as a starting point, interpreting search results.
- Teaching students to notice and trust external consensus signals rather than internal source signals.
- Practice recognizing when a source passes lateral reading and when it fails.
Section 20.4: Investigating Sources
Wikipedia as Starting Point
Wikipedia's role in source investigation is specific and important: it is the best available starting point for quick source investigation, but it is not an ending point. Wikipedia articles on news organizations, advocacy groups, think tanks, government agencies, and public figures typically contain exactly the information useful for credibility assessment: founding history, funding sources, editorial orientation, notable controversies, and connections to larger institutional ecosystems.
The reason Wikipedia is valuable for this purpose, despite its limitations as a primary source, is that its articles are aggregations of what secondary sources say about their subjects. A Wikipedia article about a news organization synthesizes what mainstream news organizations and other secondary sources have reported about that organization — which is precisely the information you need for a lateral reading assessment.
Wikipedia's limitations — primarily that it can contain errors, can be manipulated by interested parties, and is less reliable on contested political and scientific topics — matter less for source investigation than for primary fact-checking. When using Wikipedia to check whether an unfamiliar website is a legitimate news organization or a partisan operation, you are not relying on Wikipedia for precise factual claims; you are using it to get a quick, rough assessment of the source's standing in the broader information ecosystem.
WHOIS Lookups
WHOIS is a publicly accessible database that records domain registration information for websites. A WHOIS lookup — performed through services like ICANN's WHOIS lookup tool, whois.domaintools.com, or similar services — reveals when a domain was registered, who registered it (sometimes, though privacy protection services frequently obscure registrant identity), where it is registered, and when it expires.
WHOIS information is useful for source investigation in several ways:
Domain age: A website claiming to be an established news organization but registered last week is immediately suspicious. Domain registration dates are verifiable facts that can contradict a source's self-presentation.
Registrant information: When registrant privacy protection is not used, the organization or individual who registered the domain is publicly visible. This can reveal connections between ostensibly independent sources that are actually controlled by the same actor.
Registration patterns: Multiple domains registered on the same date by the same registrant, or domains registered through the same obscure registrar, can indicate coordinated inauthentic behavior — the creation of a network of artificial sources designed to appear independent.
The Wayback Machine
The Internet Archive's Wayback Machine (web.archive.org) stores historical snapshots of websites, allowing investigators to see what a website looked like at various points in the past. The Wayback Machine is useful for source investigation in several scenarios:
Checking for identity changes: A website that recently rebranded itself as a news organization may have previously been an overtly political or commercial operation. The Wayback Machine can reveal this history.
Verifying claimed history: A source that claims to have operated since a certain year can be checked against Wayback Machine records to verify whether the website actually existed at the claimed dates.
Finding deleted content: When a source deletes or modifies content, the Wayback Machine may preserve the original version, which can be relevant to investigations of deceptive editing or claim modification.
About Page Analysis
Most legitimate news organizations, research institutions, and advocacy groups publish "About" pages that describe their mission, history, funding, editorial leadership, and institutional affiliations. Careful analysis of About pages can yield useful credibility signals — though these signals must be interpreted with awareness that bad actors can write credible-sounding About pages.
Useful questions for About page analysis:
- Is leadership identified? Can those individuals be independently verified to exist and have the credentials claimed?
- Is funding disclosed? If so, who are the funders, and do they have obvious interests in the content the organization produces?
- Is there a physical address? Can it be verified as a real location (via Google Maps, Street View)?
- Is the editorial process described? Does the description match recognized professional standards?
- Are there contact details? Are they working contact details?
Section 20.5: Finding Better Coverage
The "Best Claim" Search Strategy
When evaluating a specific factual claim, the most efficient verification strategy is often to search for the claim itself using carefully chosen search terms rather than to investigate the source that made the claim. If the claim is accurate and significant, multiple authoritative sources will be covering it, and a simple search for the key factual elements will surface that coverage.
The "best claim" strategy involves identifying the most specific, verifiable elements of a claim — a specific number, a specific person, a specific date or event — and searching for those elements. The specificity of the claim should guide the specificity of the search. A vague claim that "taxes are rising" is harder to search for productively than a specific claim that "the top marginal income tax rate was raised from 37 to 39.6 percent."
Advanced Search Operators
Major search engines support advanced search operators that can dramatically improve the efficiency of finding authoritative coverage:
Exact phrase search ("phrase in quotes"): Forces the search engine to return only results containing the exact phrase, rather than results containing the words in any order. Essential for finding specific quotes and specific numerical claims.
Site operator (site:domain.com query): Restricts results to a specific website or domain. Use site:.gov for government sources, site:.edu for academic sources, or site:reuters.com for Reuters coverage.
Exclude operator (-word): Excludes results containing a specific word. Useful for filtering out low-quality sources.
Date range filtering: Most search engines allow filtering by publication date. Filtering to recent results can find fresh coverage; filtering to earlier dates can check whether a claim predates the current discussion.
OR operator: Returns results containing either of two terms, useful for checking alternative formulations of the same claim.
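The operators above compose mechanically, so a repeated verification routine can be scripted. The helpers below are a small sketch of that composition — the function names are ours, and the output strings are ordinary queries you would paste into any major search engine.

```python
def exact_phrase(phrase: str) -> str:
    """Wrap a claim's key phrase in quotes for exact-match search."""
    return f'"{phrase}"'

def restrict_to_sites(query: str, domains: list[str]) -> str:
    """Limit a query to chosen domains using site: joined by OR."""
    sites = " OR ".join(f"site:{d}" for d in domains)
    return f"{query} ({sites})"

def exclude_terms(query: str, terms: list[str]) -> str:
    """Drop results mentioning noise terms via the - operator."""
    return query + "".join(f" -{t}" for t in terms)
```

For example, checking a specific tax claim against wire services: `restrict_to_sites(exact_phrase("top marginal income tax rate"), ["reuters.com", "apnews.com"])`.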
Identifying the Original Source
Many online claims derive from a single original source — a study, a government report, a survey — that has been cited, summarized, paraphrased, and sometimes distorted through multiple rounds of secondary coverage. Finding better coverage for such claims means finding the original primary source, not just finding additional secondary sources that cite the same original.
Original sources can typically be found by:
- Following citation chains through secondary sources (which studies or reports are cited?)
- Searching for the specific data or findings attributed to a named study or organization
- Going directly to the websites of the most likely original sources (government statistical agencies, academic journals, named research institutions)
Consensus Checking
For claims in empirical domains — particularly scientific claims — finding better coverage includes checking for scientific or expert consensus. A single study finding a health effect, for instance, should be evaluated against the larger body of evidence. The consensus of expert bodies (medical societies, national science academies, major public health agencies) is typically more reliable for assessing the current state of scientific knowledge than any individual study.
Section 20.6: Tracing Claims
Reverse Image Search
Images are among the most frequently decontextualized content on the internet. A photograph taken at one event is frequently repurposed with new captions to suggest it depicts a different event. Reverse image search — uploading an image or providing its URL to a search engine that searches by image content rather than by text — is the primary tool for detecting this form of decontextualization.
Three major reverse image search tools are in wide use:
Google Images: Accessed by clicking the camera icon in Google's image search interface or by dragging an image to the search bar. Google's image recognition identifies visually similar images across its index, often surfacing the original source of a repurposed image.
TinEye: A dedicated reverse image search engine that specializes in finding exact matches (or near-exact copies) of images across the web. TinEye is particularly useful for finding the earliest known occurrence of an image, which can establish whether an image predates the event it is claimed to depict.
Yandex Images: The Russian search engine Yandex has a powerful reverse image search that often finds images not indexed by Google, particularly for images that have spread primarily in Eastern European contexts.
The workflow for reverse image verification is:
1. Save the image or copy its URL.
2. Upload it to, or drag it into, all three reverse image search tools (results vary between tools).
3. Check the earliest known occurrence: does it predate the event the image is claimed to depict?
4. Check the original context: does the original caption or context differ from the current claim?
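Reverse image engines match recompressed and lightly edited copies by comparing compact image fingerprints rather than raw pixels. The toy average hash below illustrates the idea on an already-downsampled grayscale grid (real pipelines first resize the full image to something like 8×8, typically with an image library); it is a teaching sketch, not any particular engine's algorithm.

```python
def average_hash(pixels: list[list[int]]) -> int:
    """Toy average hash: one bit per pixel, set when the pixel is
    brighter than the grid's mean brightness. Assumes a small,
    already-downsampled grayscale grid."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    bits = 0
    for p in flat:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming(a: int, b: int) -> int:
    """Differing bits between two hashes; a small distance
    flags a likely near-duplicate image."""
    return bin(a ^ b).count("1")
```

Because each bit depends only on whether a region is brighter than average, small caption overlays or JPEG recompression leave the hash nearly unchanged — which is why a repurposed photo still surfaces its original.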
Video Verification: InVID/WeVerify
Video content is harder to reverse-search than images because its size and format require different tools. The InVID/WeVerify browser extension, developed with European Union research funding and maintained by the WeVerify project, is the primary tool for video verification. InVID breaks videos into component keyframes and allows reverse image searching of individual frames, which can identify whether a video was recorded at the place and time claimed or was repurposed from another context.
InVID also provides tools for checking video metadata and for detecting digital manipulation artifacts. It is used by professional fact-checkers at organizations including AFP Fact Check, Reuters Fact Check, and Bellingcat.
Quote Verification
Claims attributed to public figures — especially claims that "X said Y" where the quote is surprising or outrageous — are particularly susceptible to fabrication or decontextualization. Quote verification involves:
- Searching for the exact quoted phrase in quotation marks.
- Checking whether credible outlets have reported the attributed speaker saying this.
- If the quote is found, checking the original source to verify context.
- For public speeches and statements, checking official transcripts or video archives.
Many viral "quotes" attributed to public figures are fabricated, misattributed, or taken dramatically out of context. Quotes claiming that a political figure said something outrageous are especially likely to be fabricated or decontextualized, precisely because outrage drives sharing behavior.
Section 20.7: The SIFT Method for Visual Content
Deepfake Detection Workflow
Deepfakes — AI-generated or AI-manipulated videos in which a person's likeness is placed in fabricated situations — represent a novel challenge for visual verification. While detection technology is developing alongside generation technology in a continuing arms race, several practical heuristics can help identify potential deepfakes:
Unnatural eye blinking: Early deepfakes frequently showed abnormal blinking patterns — too infrequent, too regular, or absent. More sophisticated models have addressed this, but blinking anomalies remain a detection signal in lower-quality fakes.
Facial boundary artifacts: Deepfakes produced by swapping one person's face onto another's body often show artifacts at the facial boundary — blurring, color discontinuities, or unnaturally smooth edges where the swapped face meets the neck or hairline.
Lighting inconsistencies: The lighting on a deepfaked face may not match the environmental lighting in the video, creating subtle shadows or highlights that differ from the physical environment.
Temporal inconsistencies: Deepfaked faces may show temporal flickering or instability — particularly around facial hair, glasses, or jewelry — that is not present in genuine video.
Audio-visual desynchronization: In audio deepfakes or low-quality video deepfakes, lip movements may not perfectly sync with audio, particularly for fricative sounds (f, v, th) that require specific lip shapes.
These heuristics are helpful but not definitive. Sophisticated deepfakes may avoid all of them. The appropriate response to suspected deepfakes is to treat them as check-worthy claims requiring verification through other means — not to conclude definitively that a video is fake based on visual inspection alone.
Metadata Analysis
Digital media files often contain metadata — information embedded within the file by the device or software that created it — that can assist verification. Image files in JPEG format commonly contain EXIF data that may include:
- Camera make and model
- Date and time of capture (device clock time, which may be wrong)
- GPS coordinates of where the image was taken (if location services were enabled)
- Software used to process the image
EXIF data can corroborate or contradict claims about when and where an image was taken. However, EXIF data is easily modified and should be treated as one data point among several, not as definitive evidence.
Importantly, the absence of EXIF data — or EXIF data that has been stripped — is itself a potential signal. Many social media platforms strip EXIF data from uploaded images as a privacy measure, so stripped metadata on social media is not suspicious. But EXIF data stripped from an image presented as an original unmodified photograph may warrant further investigation.
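EXIF stores GPS coordinates as degrees, minutes, and seconds plus a hemisphere reference, so verification usually involves converting them to the signed decimal degrees that mapping tools expect. The sketch below shows that conversion; the field names follow the EXIF GPS tags, but the input dictionary is a simplified stand-in, not a real parser (in practice a library such as Pillow or exifread would read the file).

```python
# Sketch: convert EXIF-style GPS data (degrees, minutes, seconds
# plus an N/S or E/W reference) into signed decimal degrees.
# The dict below is a simplified stand-in for the EXIF GPS IFD.

def dms_to_decimal(dms, ref):
    """dms is a (degrees, minutes, seconds) triple; ref is one of
    'N', 'S', 'E', 'W'. Southern and western values become negative."""
    degrees, minutes, seconds = dms
    decimal = degrees + minutes / 60.0 + seconds / 3600.0
    return -decimal if ref in ("S", "W") else decimal

# Example: EXIF-style coordinates for a point in Washington, D.C.
gps = {
    "GPSLatitude": (38, 53, 22.0), "GPSLatitudeRef": "N",
    "GPSLongitude": (77, 2, 7.0), "GPSLongitudeRef": "W",
}
lat = dms_to_decimal(gps["GPSLatitude"], gps["GPSLatitudeRef"])
lon = dms_to_decimal(gps["GPSLongitude"], gps["GPSLongitudeRef"])
print(round(lat, 4), round(lon, 4))  # approx 38.8894 -77.0353
```

Plugging the resulting decimal coordinates into a mapping service is a quick way to check whether an image's embedded location matches the claimed one, bearing in mind that the coordinates themselves may have been edited.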
Geolocation Verification
For images and videos that claim to depict specific locations, geolocation verification — identifying the actual location depicted using visual cues — is a powerful verification technique. It works by comparing visual elements in an image (distinctive buildings, road markings, signs, terrain features, vegetation) against satellite imagery, street-level photography, and other geographic resources to determine whether the claimed location matches what is actually depicted.
Key tools for geolocation:
- Google Maps / Google Earth: for visual comparison and measuring scale
- Google Street View: for ground-level comparison of distinctive features
- Sentinel Hub and other satellite imagery providers: for conflict zones or remote areas
- SunCalc (suncalc.org): for determining the sun's position at a specific location and time, enabling checks based on shadow angles
Geolocation was pioneered as a systematic verification practice by the investigative research organization Bellingcat, which has documented its methodology extensively in freely available online guides.
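The shadow-angle technique behind tools like SunCalc rests on standard solar-position geometry. The sketch below uses a simplified textbook approximation (a cosine model of solar declination plus the hour angle), accurate to within a degree or so, which is enough to sanity-check whether the shadows in a photo are consistent with its claimed time and place; SunCalc itself uses more precise astronomical calculations.

```python
import math

# Rough solar-elevation estimate for a given latitude, day of year,
# and local solar time, using a simplified declination model.
# Good enough to sanity-check a claimed capture time; not a
# replacement for a precise tool such as SunCalc.

def solar_elevation_deg(latitude_deg, day_of_year, solar_hour):
    # Approximate solar declination in degrees.
    declination = -23.44 * math.cos(
        math.radians(360.0 / 365.0 * (day_of_year + 10)))
    # Hour angle: 15 degrees per hour from local solar noon.
    hour_angle = 15.0 * (solar_hour - 12.0)
    lat = math.radians(latitude_deg)
    dec = math.radians(declination)
    ha = math.radians(hour_angle)
    sin_elev = (math.sin(lat) * math.sin(dec)
                + math.cos(lat) * math.cos(dec) * math.cos(ha))
    return math.degrees(math.asin(sin_elev))

# Shadow length relates to elevation: shadow = height / tan(elevation).
# A photo whose shadows imply roughly 30 degrees of solar elevation,
# but whose claimed time and place yield roughly 60 degrees,
# warrants a closer look.
```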
Section 20.8: Domain Credibility Assessment
Available Tools
Several organizations have developed tools for assessing the credibility and political orientation of news sources at the domain level:
Media Bias/Fact Check (MBFC): Evaluates news sources on two dimensions: political bias (from "Extreme Left" to "Extreme Right") and factual reporting quality (from "Very High" to "Very Low"). MBFC also identifies sources as conspiracy/pseudoscience or as satire. Founded by Dave Van Zandt, MBFC has become one of the most widely referenced tools for domain credibility assessment.
AllSides: Provides a political bias rating for news sources and curates news coverage by showing how different outlets cover the same story. AllSides uses a multi-rater methodology including community ratings, editorial review, and blind surveys.
Ad Fontes Media: Produces the Media Bias Chart, which rates news sources on two dimensions: political bias (horizontal axis) and reliability/quality (vertical axis). The chart is updated regularly and uses trained raters who follow a methodology designed to minimize rater political bias.
Uses of Domain Credibility Tools
Domain credibility tools are useful for:
- Getting a quick first assessment of an unfamiliar source
- Checking whether a source known to be highly reliable for general news is also reliable for specific coverage
- Identifying sources at the extremes of political bias that may be unsuitable for balanced coverage
- Helping audiences understand the media landscape and how different outlets relate to each other
Limitations
These tools have significant limitations that must be understood before using them:
Subjectivity: Assessments of political bias are inherently contested. A rating that MBFC considers "Right" may be considered "Center" by conservatives and "Far Right" by progressives. These tools offer one perspective on media bias, not an objective measurement.
Coverage limitations: These tools primarily cover major English-language news organizations. Their coverage of local, specialized, or international sources is limited and uneven.
Static ratings: Media organizations change over time, but ratings may not be updated promptly to reflect these changes.
Conflation of bias and quality: Political bias and factual accuracy are different dimensions that may be correlated but are not identical. A clearly left-leaning source may nonetheless report accurately; a centrist source may show factual deficiencies. These dimensions should be evaluated separately.
Selection bias in what gets rated: Only sources that become prominent enough to attract evaluator attention get rated, which skews coverage toward mainstream sources.
The appropriate use of domain credibility tools is as a first filter, not as a definitive credibility judgment. A source rated as highly credible by MBFC still deserves lateral reading for consequential claims; a source rated as low credibility may still contain accurate information on specific stories.
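The "first filter, not verdict" posture can be sketched as a small routing rule: a rating determines how much scrutiny a claim gets, never whether it is accepted or rejected outright. The rating labels below are invented placeholders, not real MBFC scores, and the rule deliberately ignores the bias dimension, since bias and factual quality are separate dimensions.

```python
# Sketch of "first filter, not verdict": a domain rating routes a
# source to a level of scrutiny rather than settling its credibility.
# The rating labels are invented placeholders, not real MBFC scores.

FACTUAL_LEVELS = ["very low", "low", "mixed", "mostly factual",
                  "high", "very high"]

def scrutiny_level(factual_rating, claim_is_consequential):
    """Map a factual-reporting rating to a verification posture.
    Bias is deliberately ignored: it is a separate dimension and
    does not by itself predict accuracy."""
    rank = FACTUAL_LEVELS.index(factual_rating)
    if rank <= 2:                      # very low / low / mixed
        return "trace every claim to an independent primary source"
    if claim_is_consequential:         # even highly rated sources
        return "lateral read: check what other coverage says"
    return "quick check: confirm at least one other outlet reports it"

print(scrutiny_level("very high", claim_is_consequential=True))
```

Note that even the highest rating still routes consequential claims to lateral reading, which is exactly the point: the rating narrows attention but never replaces verification.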
Section 20.9: Building Verification Habits
The 30-Second Pause
One of the most powerful, underappreciated facts about online misinformation is that much of its spread occurs in the first 30 seconds after a person encounters content that triggers a strong emotional reaction. The sharing impulse, research suggests, is strongest at the moment of peak emotional arousal — before reflection has had time to operate. A simple behavioral intervention — pausing for 30 seconds before sharing, and using that pause to begin a quick verification check — can substantially reduce sharing of false information.
Research by Gordon Pennycook and colleagues found that individuals who were briefly asked to think about accuracy before sharing news on social media showed reduced sharing of misinformation. The intervention worked even when the accuracy prompt was completely general — not tied to specific content — suggesting that activating accuracy motivation even briefly is sufficient to engage more reflective evaluation.
The 30-second pause is not about being skeptical about everything. It is about building a slight temporal buffer between emotional stimulus and behavioral response — enough time to ask the minimal verification questions that SIFT recommends.
Verification as Routine
The goal of SIFT instruction is not to train students to follow an explicit checklist every time they encounter any piece of information. The goal is to develop verification behaviors that become automatic enough to be performed quickly and habitually, without requiring effortful conscious application of the framework.
Experts — professional fact-checkers, librarians with extensive reference experience, experienced journalists — perform many SIFT-type behaviors automatically. They naturally open new tabs to check what other sources say about an unfamiliar source. They naturally search for the original source of a claim. They naturally perform reverse image searches on photographs that feel "off." These behaviors are habitual, fast, and integrated into their information consumption routine.
Developing this level of automaticity requires practice. Research on skill development consistently finds that expert-level automatic behavior develops through extensive deliberate practice — initially conscious, effortful application of explicit techniques that gradually becomes faster and more automatic with repetition. SIFT instruction should include substantial practice opportunities that allow students to develop not just knowledge of the SIFT framework but fluency in its application.
Teaching SIFT
SIFT has been taught effectively in a variety of educational contexts: university first-year courses, high school media literacy curricula, library instruction, professional development for journalists and educators, and online self-directed learning modules.
Key principles for effective SIFT instruction:
Authenticity: SIFT is most effectively taught using real-world examples — actual websites, actual viral claims, actual images — rather than hypothetical or clearly labeled examples. The judgment calls involved in real-world source evaluation are what make SIFT challenging; practice on sanitized examples does not develop the skills needed for the messy real information environment.
Active practice over passive exposure: Knowing that lateral reading is better than vertical reading is not the same as being able to perform lateral reading quickly and accurately. Instruction must include extensive practice opportunities.
Reflection and feedback: Students should have opportunities to compare their evaluation processes with those of more experienced evaluators, to understand where their reasoning diverged and why.
Integration over isolated instruction: SIFT is most effectively taught as an integrated component of courses across disciplines — embedded within the natural context of using information for learning — rather than as an isolated media literacy unit delivered once.
Key Terms
CRAAP test: A traditional source evaluation framework (Currency, Relevance, Authority, Accuracy, Purpose) developed for an information environment with different credibility challenges than the current digital environment.
SIFT method: Mike Caulfield's framework for digital source evaluation: Stop, Investigate the source, Find better coverage, Trace claims. Emphasizes lateral reading and efficiency over deep reading.
Lateral reading: The practice of evaluating a source by checking what outside sources say about it, rather than reading the source itself deeply.
Vertical reading: The practice of reading a source carefully from within to assess its credibility — an approach shown to be less effective than lateral reading in the current information environment.
Hostile media effect: (Cross-reference from Chapter 19) The tendency for partisans to perceive neutral media as hostile to their side.
WHOIS: A publicly accessible database of domain registration information, including registration date and registrant identity (when not privacy-protected).
Wayback Machine: The Internet Archive's service (web.archive.org) that stores historical snapshots of websites, allowing investigators to check how a website looked in the past.
Reverse image search: A search technique that uses an image as the query to find visually similar images, enabling identification of the original context of repurposed photographs.
EXIF data: Metadata embedded in digital image files by the capturing device, potentially including camera model, date and time of capture, GPS coordinates, and software used.
Geolocation verification: The practice of identifying where an image or video was actually taken by comparing visual cues to geographic databases, satellite imagery, and street-level photography.
InVID/WeVerify: A browser extension providing tools for video verification, including keyframe extraction for reverse image search.
Deepfake: AI-generated or AI-manipulated video in which a person's likeness is placed in fabricated situations.
Media Bias/Fact Check (MBFC): A widely used service that rates news organizations on political bias and factual reporting quality.
Prebunking: (Cross-reference from Chapter 19) Inoculating audiences against misinformation before they encounter it.
Callout Box: The Three Moves for Checking Viral Claims
When a specific claim is circulating virally, the three most efficient verification moves are:
- Search for the claim: Use the most specific factual element (a specific number, specific name, specific event) as your search query. If the claim is true and significant, authoritative sources will be covering it.
- Check the original source: Who first made this claim, and based on what evidence? Trace back through any citation chain to the primary source.
- Look for consensus: What do domain experts and major news organizations say about this specific claim? Is there consensus that it is accurate, or is it contested?
These three moves take one to three minutes for most viral claims. They will not resolve all uncertainty, but they will quickly distinguish claims with strong evidentiary support from claims that appear only in a single source or that contradict authoritative consensus.
Callout Box: The "Too Good to Be True" Heuristic
Content that perfectly confirms your prior beliefs — that makes you feel vindicated, that seems to prove exactly what you already suspected — deserves special verification attention. Research on misinformation sharing finds that content that provides "identity-relevant" confirmation is shared more uncritically than neutral content, precisely because the emotional reward of confirmation is strong.
The appropriate response to content that feels like strong confirmation is not to distrust your beliefs. It is to recognize that the emotional reward of confirmation can suppress the skeptical evaluation that would normally occur. Apply the 30-second pause with particular discipline to content that feels satisfying. The most effective disinformation is the kind that aligns with what its target audience already wants to believe.
Discussion Questions
- The CRAAP test asks students to evaluate "Authority" by examining whether an author has credentials relevant to the content. Why is this criterion less useful in the current information environment than it was twenty years ago?
- Research shows that professional historians — domain experts who have spent careers evaluating sources — barely outperform undergraduates on web credibility tasks, while professional fact-checkers dramatically outperform both groups. What does this finding imply about the nature of source evaluation expertise?
- Wikipedia is often prohibited as a source in academic papers, yet SIFT recommends it as a starting point for source investigation. Are these recommendations in conflict? Explain the distinction between Wikipedia as a primary source and Wikipedia as a starting point for lateral reading.
- What are the ethical dimensions of geolocation verification of conflict imagery? When is it appropriate to geolocate and publish the location of an image taken during an active conflict? When might it be harmful?
- Domain credibility assessment tools (MBFC, AllSides, Ad Fontes) all face the challenge that political bias assessment is itself potentially biased. Is it possible to design a politically unbiased tool for measuring political bias? What methodology would such a tool require?
- The SIFT method emphasizes efficiency — performing verification quickly rather than comprehensively. What is the cost of this efficiency emphasis? Under what circumstances would you want to spend more time on verification than SIFT's quick moves suggest?
- Building verification habits requires that students practice SIFT behaviors enough that they become automatic. What structural features of social media platforms make this automaticity difficult to develop and maintain?
Summary
This chapter has examined source evaluation as it must be practiced in the contemporary digital information environment. Key conclusions include:
- Traditional source evaluation frameworks, including the CRAAP test, were designed for an information environment where establishing credible-seeming institutional presence required substantial resources. In the current environment, where professional-appearing websites can be created cheaply and quickly, reading sources deeply from within is often precisely what disinformation producers want.
- Research by the Stanford History Education Group found that professional fact-checkers dramatically outperform both professional historians and college students on web credibility tasks, with the key difference being verification strategy: fact-checkers use lateral reading (checking what outside sources say about a source) while historians use vertical reading (reading the source carefully from within).
- The SIFT method — Stop, Investigate the source, Find better coverage, Trace claims — provides a structured, evidence-based framework for source evaluation that emphasizes lateral reading and efficient verification moves over comprehensive deep reading.
- Specific verification skills include: Wikipedia-based source investigation, WHOIS lookups, Wayback Machine historical checking, reverse image search (Google Images, TinEye, Yandex), video verification (InVID/WeVerify), EXIF metadata analysis, and geolocation verification.
- Domain credibility tools (MBFC, AllSides, Ad Fontes Media) provide useful first-filter assessments but have significant limitations related to subjectivity, coverage, and the conflation of political bias with factual accuracy.
- Building verification habits requires practice sufficient to make SIFT behaviors automatic, integrated across disciplines and contexts rather than taught as isolated media literacy instruction.
This chapter is part of "Misinformation, Media Literacy, and Critical Thinking in the Digital Age," Part IV: Detection and Analysis.