Case Study 37-2: Lateral Reading — What Expert Fact-Checkers Do That Novices Don't
The Stanford History Education Group Research
The Puzzle That Launched the Research
Sam Wineburg is a professor of education and history at Stanford who has spent his career studying how people evaluate historical evidence and how this capacity can be taught. His Stanford History Education Group (SHEG) has been at the forefront of civic education research for two decades.
In 2016, as concerns about online misinformation intensified around the US presidential election, SHEG released a large-scale assessment of how well Americans across age groups evaluated online sources. The results, published in the 2016 report "Evaluating Information: The Cornerstone of Civic Online Reasoning," were alarming: students at every level, from middle school through college, showed a striking inability to evaluate the reliability of online sources. They were routinely fooled by fake news websites, confused sponsored content with news, and trusted information on the basis of visual appeal and claimed credentials without investigating further.
But this raised a follow-up question. If students were bad at evaluating sources, were there people who were good at it? And if so, what did they do differently?
The answer to this question, when it arrived, was counterintuitive enough to significantly change how Wineburg and colleagues thought about the entire project of source evaluation education.
The Study: Three Groups, Same Tasks
The core comparison study (Wineburg & McGrew, published in Teachers College Record in 2019) recruited three groups:
Professional fact-checkers: Experienced staff at American fact-checking organizations, with professional experience evaluating online sources for public publication.
Professional historians: PhD historians, primarily at research universities, with extensive training in evaluating historical evidence and sources.
University students: Students at a highly selective US university, presumably among the most academically capable undergraduates in the country.
All participants were given the same set of source evaluation tasks involving real websites, social media posts, and online content. They were asked to assess the reliability of unfamiliar sources and to think aloud while doing so, providing a window into their reasoning processes. All sessions were recorded and analyzed.
The expectation going in was that the historians would perform best: the evaluation of primary and secondary sources is a core skill of historical scholarship, and they had years of professional training in source criticism. The students would perform worst. The fact-checkers, with specific professional experience, would presumably fall somewhere in between, or perhaps lead on online-specific sources.
What actually happened confounded every expectation.
The Findings: An Unexpected Hierarchy
Fact-checkers vastly outperformed both historians and students on nearly every task: they were both faster and more accurate in their assessments.
Historians and students performed at roughly the same level. The professional historians — with PhDs and years of experience evaluating historical sources — were not significantly better than the university students at evaluating online sources.
The performance gap between fact-checkers and the other two groups was enormous. On tasks involving unfamiliar websites, fact-checkers identified problematic sources with accuracy rates substantially higher than historians and students, who were frequently misled by exactly the sources that fact-checkers quickly identified as unreliable.
Why would trained historians perform no better than students? Why would professional fact-checkers dramatically outperform people with ostensibly more rigorous training in source evaluation?
The answer lay in the strategy each group used.
The Critical Difference: Vertical vs. Lateral Reading
Analysis of the think-aloud protocols revealed a stark strategic difference:
Historians and students used vertical reading. When presented with an unfamiliar source, they stayed within it. They carefully read the About page. They examined internal citations to see if sources were reputable. They assessed visual design quality. They looked for logical consistency. They read the content carefully and evaluated it against their prior knowledge.
Vertical reading is exactly what source evaluation curricula typically teach. It is the dominant approach in media literacy education. And it consistently failed.
Fact-checkers used lateral reading. When presented with an unfamiliar source, they almost immediately left it. Within fifteen to thirty seconds of landing on an unfamiliar website, a fact-checker would open multiple new tabs and begin searching for external information: the organization's name, its funders, what news organizations had written about it, whether it appeared on any reliability assessments, what Wikipedia said about it.
The fact-checkers were not reading the source; they were reading about the source, using the broader information ecosystem to triangulate its reliability rather than trying to assess it in isolation.
Why Vertical Reading Fails
The failure of vertical reading makes sense once you understand what it's competing against. Modern misinformation websites are designed to survive vertical reading. They:
- Have professional visual designs (equal to or better than legitimate sources)
- Include About pages with impressive-sounding organizational descriptions
- Cite real sources (sometimes accurately, sometimes misleadingly, but formally)
- Write with professional prose quality
- Avoid factual errors in secondary details (only the central claim may be false or misleading)
- Include contact information and other markers of legitimacy
A reader who stays within a well-designed misinformation site and reads carefully will, in many cases, come away more confident in its reliability, not less. The site was designed for exactly this reader.
Additionally, vertical reading is slow. Spending five minutes carefully evaluating a single source is not practical when a person encounters dozens of sources per day.
Why Lateral Reading Works
Lateral reading exploits a simple fact: it is very hard to be both systematically unreliable and well-regarded by independent, established sources. A misinformation website may look credible in isolation, but it is much harder to simultaneously:
- Have been fact-checked multiple times without finding significant problems
- Be recommended by librarians, journalists, and academics as a reliable source
- Have a Wikipedia entry that doesn't mention concerns about reliability
- Be cited by news organizations as a source for accurate information
The broader information ecosystem contains distributed reliability signals that the source itself cannot fake without corrupting a much larger number of independent actors. Lateral reading taps these distributed signals rather than trying to reconstruct them through solitary inspection of the source.
Lateral reading is also fast. A fact-checker can scan three or four external assessments of a source in the time it takes a historian to read one About page. Speed and accuracy both favor the lateral approach.
Teaching Lateral Reading: The Curriculum Evidence
Wineburg's group didn't stop at describing what fact-checkers do. They developed a curriculum to teach lateral reading to students and evaluated it experimentally.
The curriculum is structured around a simple, explicit procedure taught as a skill to practice:
- When you encounter an unfamiliar source, before reading it deeply, open a new tab.
- Search for the name of the source or organization combined with terms such as "reliability," "bias," or "funding," or simply the organization's name plus "Wikipedia."
- Read what independent, established sources say about the organization or outlet.
- Use this external assessment to calibrate your confidence in the source's claims.
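The procedure above is mechanical enough to sketch in code. The following Python sketch is purely illustrative: the function name, query templates, and example organization are our own assumptions, not part of the SHEG curriculum. It simply generates the external searches a lateral reader would open in new tabs before reading a source deeply:

```python
# Illustrative sketch of the lateral-reading step: build queries that ask
# the wider web ABOUT a source, rather than reading the source itself.
# The query templates and example name below are assumptions for
# illustration, not part of the SHEG curriculum.

def lateral_queries(source_name: str) -> list[str]:
    """Return search queries a lateral reader would run on an
    unfamiliar source before reading it in depth."""
    probe_terms = ["reliability", "bias", "funding"]
    queries = [f'"{source_name}" {term}' for term in probe_terms]
    # What do independent, established sources (e.g. Wikipedia) say about it?
    queries.append(f'"{source_name}" Wikipedia')
    return queries

# Hypothetical organization name, used only to show the output shape.
for query in lateral_queries("Example Policy Institute"):
    print(query)
```

The point of the sketch is only to make the procedure concrete: the queries leave the source entirely and interrogate the surrounding information ecosystem, which is what distinguishes lateral from vertical reading.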
The randomized evaluation (conducted with high school students) found that students taught lateral reading significantly outperformed control students on source evaluation tasks. The effect held at a six-week follow-up, suggesting that the skill, once taught and practiced, is retained.
Importantly, the evaluation found that students could learn lateral reading fairly quickly — it does not require extensive training. But it does require explicit teaching: students do not spontaneously adopt lateral reading without instruction. Left to their own devices, they default to vertical reading, exactly as the historians did.
Implications for Education
The lateral reading research has several important implications for how media literacy and civic education should be designed:
Skills over dispositions. Telling students to "be skeptical" or "evaluate sources critically" doesn't produce the behavior change that teaching a specific procedure (lateral reading) does. General dispositions without specific procedures fail because people can't translate the disposition into action when they encounter a specific source.
Unteach vertical reading. Much source evaluation education teaches vertical reading skills: how to read an About page, how to look for bias markers, how to assess internal consistency. The research suggests this education may be worse than useless if it gives students false confidence in skills that don't actually work. Curricula need to teach that vertical reading fails, and why.
Habit formation through practice. Like any skill, lateral reading atrophies without practice. A one-semester course that teaches lateral reading but doesn't provide ongoing opportunities to practice it will not produce durable behavior change. Effective education embeds skill practice throughout the curriculum, not in a single unit.
The speed advantage. Fact-checkers were not more careful than historians — they were faster and more accurate. This matters for practical implementation: a source evaluation strategy that only works when applied slowly is not viable in the real information environment. The fact that lateral reading is both faster and more accurate means the behavior is self-reinforcing — it doesn't require a sacrifice of efficiency.
Connection to the Larger Argument
The lateral reading research connects to several themes throughout this textbook:
Platform design creates a vertical reading environment. When you're inside a social media platform and an algorithm serves content, you are, by default, in a vertical reading context: you're reading content inside the platform, which is itself not a neutral environment but one designed to maximize engagement. Lateral reading — opening a new tab, searching outside the platform — is literally moving outside the platform's designed information environment.
This is why platforms tend not to facilitate lateral reading. An information environment designed to maximize dwell time has structural incentives against sending users elsewhere. The fact that platforms don't prominently feature external fact-checking links for viral content is not an oversight; it's consistent with the economic logic of engagement maximization.
Lateral reading is teachable but requires teaching. The single most important finding from the research is that lateral reading is not spontaneous. Sophisticated, educated people do not naturally adopt it. It requires explicit instruction and habituated practice. This is both an opportunity (it can be taught) and a challenge (it must be taught — it won't happen on its own).
Conclusion
The Stanford History Education Group's research on lateral reading is among the most practically important findings in the media literacy field. It identifies a specific, teachable behavior that professional evaluators actually use and that substantially outperforms the intuitive alternatives. It connects the mechanism to the structure of the information environment (why vertical reading fails on deceptive sites). And it provides curriculum evidence that students can learn the skill and retain it.
For educators, the message is clear: teach lateral reading explicitly, provide practice, and correct the false confidence that vertical reading curricula may have already instilled. For individuals, the message is equally clear: when you encounter an unfamiliar source that matters, don't read it first — read about it first.
This case study draws on:
- Wineburg, S., & McGrew, S. (2019). Lateral reading and the nature of expertise: Reading less and learning more when evaluating digital information. Teachers College Record, 121(11).
- Wineburg, S., McGrew, S., Breakstone, J., & Ortega, T. (2016). Evaluating information: The cornerstone of civic online reasoning. Stanford Digital Repository.
- Breakstone, J., et al. (2021). Students' civic online reasoning: A national portrait. Educational Researcher.