Chapter 8 Further Reading
Sampling Theory and Methods
1. Kish, Leslie. Survey Sampling. New York: Wiley, 1965. (Wiley Classics reprint, 1995.)
The foundational text of modern survey sampling theory. Kish, who spent decades at the University of Michigan Survey Research Center, developed many of the practical sampling methods used in political research today. Not light reading — it is a technical reference — but Chapter 1 (introduction to probability sampling) and Chapters 2-5 (the major sampling designs) are accessible to graduate students with basic statistics. If you work seriously with complex survey data, you need this book on your shelf.
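One of those practical tools still carries Kish's name: the approximate design effect of unequal weighting, and the "effective sample size" it implies. A minimal sketch (the function name and example weights are illustrative, not from the book):

```python
def kish_effective_n(weights):
    """Kish's approximation to the effective sample size under unequal
    weights: n_eff = (sum w)^2 / sum(w^2). Equal weights give n_eff = n."""
    total = sum(weights)
    total_sq = sum(w * w for w in weights)
    return total * total / total_sq

# Hypothetical survey: 500 respondents at weight 1, 100 upweighted to 3.
weights = [1.0] * 500 + [3.0] * 100
n = len(weights)
n_eff = kish_effective_n(weights)   # noticeably smaller than n = 600
deff = n / n_eff                    # design effect from weighting alone
```

The point of the approximation is that weighting to fix representativeness is not free: variance behaves as if the sample were of size n_eff, not n.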
2. Groves, Robert M., Floyd J. Fowler Jr., Mick P. Couper, James M. Lepkowski, Eleanor Singer, and Roger Tourangeau. Survey Methodology. 2nd ed. New York: Wiley, 2009.
The comprehensive graduate-level reference on all aspects of survey methodology — sampling, nonresponse, measurement, and mode effects. More accessible than Kish while covering more ground than Fowler's undergraduate text. Chapters 4-6 on survey sampling and coverage errors are directly relevant to this chapter. Essential for anyone who needs a single, rigorous technical reference.
The Literary Digest and Polling History
3. Squire, Peverill. "Why the 1936 Literary Digest Poll Failed." Public Opinion Quarterly 52, no. 1 (1988): 125-133.
The definitive analysis of the Literary Digest failure. Squire carefully separates the contributions of coverage bias and nonresponse bias to the overall error, using secondary data to reconstruct the magnitude of each. Essential reading for understanding why the standard "coverage bias" explanation, while correct, is incomplete — nonresponse bias was also substantial. Short and accessible.
4. Gallup, George. The Pulse of Democracy: The Public-Opinion Poll and How It Works. New York: Simon & Schuster, 1940.
Gallup's own account of the development of his polling method, written shortly after his 1936 triumph. Readable and historically important. Gallup explains his quota sampling approach, his theory of public opinion, and his philosophy of democratic polling. Reading it makes clear how his methodological claims were as much rhetorical (positioning the new scientific polling as democratic and egalitarian) as technical.
Sampling Frames and Coverage
5. Link, Michael W., and Ali Mokdad. "Alternative Modes for Health Surveillance Surveys: An Experiment with Web, Mail, and Telephone." Epidemiology 16, no. 5 (2005): 701-704.
A comparison of address-based sampling, web, mail, and telephone modes for population surveillance. Though not specifically focused on political polling, it provides one of the cleanest comparative studies of mode-related coverage differences. The finding that ABS achieves substantially better coverage for low-income and minority populations than telephone frames has direct implications for political polling in demographically diverse states.
6. Callegaro, Mario, Reg Baker, Jelke Bethlehem, Anja S. Göritz, Jon A. Krosnick, and Paul J. Lavrakas, eds. Online Panel Research: A Data Quality Perspective. New York: Wiley, 2014.
The most comprehensive academic assessment of online opt-in panel quality. Multiple chapters address coverage, nonresponse, and the validity of inferences from convenience samples. Chapter 2 (the opt-in panel ecosystem) and Chapter 5 (coverage and representativeness) are most directly relevant to the issues in this chapter. The overall picture is more nuanced than simple "panels are bad" — quality varies substantially across panel providers and weighting approaches.
Nonresponse and Response Rates
7. Groves, Robert M., and Lars Lyberg. "Total Survey Error: Past, Present, and Future." Public Opinion Quarterly 74, no. 5 (2010): 849-879.
An authoritative review of the total survey error framework — the idea that the quality of a poll result depends not just on sampling error (MOE) but on all sources of error including coverage, nonresponse, and measurement. Groves and Lyberg document the decline in response rates and assess its implications for data quality. Essential for understanding why the MOE is the floor on uncertainty, not the ceiling.
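To make the "floor, not ceiling" point concrete, the sampling-only MOE is easy to compute; everything Groves and Lyberg catalog sits on top of it. A minimal sketch (function name and figures are illustrative, not from the paper):

```python
import math

def moe_95(p, n):
    """Classical 95% margin of error for a proportion from a simple
    random sample of size n -- sampling error only."""
    return 1.96 * math.sqrt(p * (1.0 - p) / n)

# A 1,000-person poll at p = 0.5: sampling MOE of roughly +/-3 points.
# Coverage, nonresponse, and measurement error all add to this, which is
# the sense in which the reported MOE is a floor on total uncertainty.
moe = moe_95(0.5, 1000)
```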
8. Pew Research Center. "Assessing the Representativeness of Public Opinion Surveys." Pew Research Center Methods Report, May 2012.
Pew's empirical comparison of a high-response-rate probability sample (their standard methodology) to a lower-response-rate version and to an online panel. The finding that demographic weighting largely eliminates differences between the high- and low-response-rate probability samples was widely cited as reassuring evidence that response rate declines need not compromise data quality — though critics have noted important caveats about the conditions under which this holds.
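The demographic weighting at issue is typically done by raking (iterative proportional fitting) to known population margins. A minimal sketch of the algorithm, with hypothetical sample data and targets (the function and the numbers are mine, not Pew's):

```python
import numpy as np

def rake(weights, dims, targets, iters=50):
    """Iterative proportional fitting: cycle over the weighting dimensions,
    scaling each group's weights so its weighted share matches the target."""
    w = np.asarray(weights, dtype=float).copy()
    for _ in range(iters):
        for dim, target in zip(dims, targets):
            dim = np.asarray(dim)
            total = w.sum()
            adjusted = w.copy()
            for group, share in target.items():
                mask = dim == group
                current = w[mask].sum() / total
                if current > 0:
                    adjusted[mask] = w[mask] * (share / current)
            w = adjusted
    return w / w.sum()

# Hypothetical sample of 100 that over-represents college graduates and women,
# raked to (made-up) population margins.
educ = ["college"] * 70 + ["no_college"] * 30
sex = ["f"] * 60 + ["m"] * 10 + ["f"] * 10 + ["m"] * 20
w = rake(np.ones(100), [educ, sex],
         [{"college": 0.35, "no_college": 0.65},
          {"f": 0.51, "m": 0.49}])
```

Raking only matches the margins you rake on; variables correlated with response but left out of the targets (the critics' caveat) remain unadjusted.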
Weighting and MRP
9. Gelman, Andrew, and Thomas C. Little. "Poststratification into Many Categories Using Hierarchical Logistic Regression." Survey Methodology 23, no. 2 (1997): 127-135.
The original paper introducing multilevel regression and poststratification (MRP, or "Mister P"). Gelman and Little demonstrate how combining multilevel regression with Census poststratification can improve estimates for small geographic units from national surveys. Moderately technical but readable for students familiar with regression. The starting point for understanding what MRP is and how it works.
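The core logic can be caricatured in a few lines: partially pool noisy cell-level estimates toward the overall mean, then weight the pooled cells by Census counts. This is a stylized stand-in (real MRP replaces the ad hoc shrinkage below with a multilevel logistic regression), and all names and numbers are hypothetical:

```python
import numpy as np

def partial_pool(successes, trials, prior_strength=20.0):
    """Shrink each cell's proportion toward the grand mean; small cells
    are pulled hard, large cells mostly keep their own data."""
    grand = successes.sum() / trials.sum()
    return (successes + prior_strength * grand) / (trials + prior_strength)

def poststratify(cell_estimates, census_shares):
    """Weight cell estimates by known population cell shares."""
    return np.average(cell_estimates, weights=census_shares)

# Hypothetical poll: 4 demographic cells with very uneven sample sizes.
successes = np.array([8, 40, 3, 120])    # respondents supporting candidate
trials = np.array([10, 100, 5, 300])     # respondents per cell
census = np.array([0.3, 0.2, 0.3, 0.2])  # population share of each cell

estimate = poststratify(partial_pool(successes, trials), census)
```

The pooling step is what lets small cells (and, in Gelman and Little's setting, small states) borrow strength from the rest of the sample; poststratification then corrects for the sample's demographic imbalance.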
10. Wang, Wei, David Rothschild, Sharad Goel, and Andrew Gelman. "Forecasting Elections with Non-Representative Polls." International Journal of Forecasting 31, no. 3 (2015): 980-991.
The paper that demonstrated that MRP, applied to a non-representative survey of Xbox game console users, could produce accurate state-level presidential election forecasts. A striking proof of concept for the claim that heavy statistical adjustment can make non-representative samples useful for political estimation. Should be read alongside critiques that question how broadly the result generalizes.
11. AAPOR Task Force on Non-Probability Sampling. "Report of the AAPOR Task Force on Non-Probability Sampling." AAPOR, 2013. Available at aapor.org.
AAPOR's comprehensive assessment of the validity and appropriate use of non-probability samples. Covers the range of approaches (opt-in panels, river samples, social media data) and the conditions under which they can and cannot support valid inference. Essential for understanding the professional standards debate around online polling and for knowing what questions to ask when evaluating any poll conducted with a non-probability sample.
12. Kennedy, Courtney, et al. "An Evaluation of the 2016 Election Polls in the United States." Public Opinion Quarterly 82, no. 1 (2018): 1-33.
AAPOR's post-mortem on 2016 polling errors, which identified differential nonresponse by educational attainment as a primary driver of the systematic underestimation of Trump support. This report is both a diagnosis of a specific error and a model for how methodological post-mortems should be conducted. Readable and directly relevant to contemporary debates about what went wrong in 2016 and what can be done to prevent similar errors.