Chapter 9 Further Reading: Fielding and Data Collection
Foundational Texts
Groves, R. M., Fowler, F. J., Couper, M. P., Lepkowski, J. M., Singer, E., & Tourangeau, R. (2009). Survey Methodology (2nd ed.). John Wiley & Sons. The definitive reference work on survey methodology. Chapters 5–8 cover nonresponse, interviewer effects, mode effects, and data quality in comprehensive technical detail. Essential for any serious student of survey research.
American Association for Public Opinion Research (AAPOR). (2023). Standard Definitions: Final Dispositions of Case Codes and Outcome Rates for Surveys (10th ed.). AAPOR. The authoritative source for response rate definitions (RR1–RR6), cooperation rates, contact rates, and refusal rates. Available for free download from aapor.org. Every practitioner should have this document.
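Because AAPOR's outcome rates are simple ratios of case-disposition counts, a short sketch can make the definitions concrete. The disposition counts below are invented for illustration; the formulas for RR1 and RR3 follow the Standard Definitions document, and e (the estimated eligibility rate among unknown cases) must be justified survey by survey.

```python
# Sketch of AAPOR outcome-rate arithmetic. Variable names follow AAPOR's
# notation; the counts are hypothetical, for illustration only.

I = 800    # complete interviews
P = 50     # partial interviews
R = 400    # refusals and break-offs
NC = 300   # non-contacts
O = 100    # other eligible non-interviews
UH = 200   # unknown if household/occupied housing unit
UO = 150   # unknown, other
e = 0.5    # estimated share of unknown-eligibility cases that are eligible

# RR1: the most conservative rate -- partials count as non-respondents and
# all unknown-eligibility cases are assumed eligible.
rr1 = I / ((I + P) + (R + NC + O) + (UH + UO))

# RR3: unknown-eligibility cases discounted by the eligibility estimate e.
rr3 = I / ((I + P) + (R + NC + O) + e * (UH + UO))

print(f"RR1 = {rr1:.3f}")
print(f"RR3 = {rr3:.3f}")
```

Note that RR3 is always at least as high as RR1 for the same dispositions, which is why reporting the rate's AAPOR number alongside the figure matters for comparability.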
Couper, M. P. (2000). "Web surveys: A review of issues and approaches." Public Opinion Quarterly, 64(4), 464–494. The foundational article establishing the methodological terrain of online survey research. Still relevant for understanding coverage, nonresponse, and measurement issues in web modes.
Response Rate Decline
Keeter, S., Kennedy, C., Dimock, M., Best, J., & Craighill, P. (2006). "Gauging the impact of growing nonresponse on estimates from a national RDD telephone survey." Public Opinion Quarterly, 70(5), 759–779. Classic empirical study comparing estimates from surveys with very different response rates and finding surprisingly small differences — important evidence that low response rates do not automatically mean high bias.
Pew Research Center. (2012). Assessing the Representativeness of Public Opinion Surveys. Pew Research Center. Pew's own analysis of how declining response rates affect survey accuracy, with detailed comparisons of survey estimates to external benchmarks. Available free at pewresearch.org.
Tourangeau, R., Conrad, F. G., & Couper, M. P. (2013). The Science of Web Surveys. Oxford University Press. Comprehensive treatment of online survey methodology, covering mode effects, response quality, visual design effects, and coverage issues with opt-in panels versus probability samples.
The 2020 Polling Error
American Association for Public Opinion Research. (2021). 2020 Pre-Election Polling: An Evaluation of the Polls and the Reasons for Missing the Results. AAPOR Task Force Report. The definitive post-mortem on the 2020 polling error. Evaluates multiple hypotheses for the systematic miss and concludes differential nonresponse by party is the most credible primary explanation. Required reading.
Gelman, A., & Azari, J. (2017). "19 things we learned from the 2016 election." Statistics and Public Policy, 4(1), 1–10. Accessible analysis of 2016 polling errors; useful counterpart to the 2020 post-mortem for understanding the pattern across election cycles.
Nonresponse Bias
Groves, R. M. (2006). "Nonresponse rates and nonresponse bias in household surveys." Public Opinion Quarterly, 70(5), 646–675. The most important single article on the relationship between response rates and nonresponse bias. Demonstrates empirically that the correlation between the two is weak, challenging the common assumption that high response rates guarantee low bias.
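Groves's core argument rests on the deterministic expression for nonresponse bias: the bias of the unadjusted respondent mean is the nonresponse rate times the difference between respondents and nonrespondents on the statistic of interest. A small sketch with hypothetical numbers shows why a low response rate alone need not produce large bias.

```python
# Deterministic nonresponse-bias arithmetic, as discussed in Groves (2006):
# bias = (nonresponse rate) * (respondent mean - nonrespondent mean).
# All numbers below are hypothetical, for illustration only.

def nonresponse_bias(response_rate, mean_respondents, mean_nonrespondents):
    """Bias of the unadjusted respondent mean relative to the full population."""
    return (1 - response_rate) * (mean_respondents - mean_nonrespondents)

# Low response rate, but respondents barely differ from nonrespondents:
low_rr = nonresponse_bias(0.10, mean_respondents=0.52, mean_nonrespondents=0.51)

# Higher response rate, but respondents differ sharply from nonrespondents:
high_rr = nonresponse_bias(0.70, mean_respondents=0.60, mean_nonrespondents=0.40)

print(f"10% response rate, similar groups:   bias = {low_rr:.3f}")
print(f"70% response rate, divergent groups: bias = {high_rr:.3f}")
```

In this invented example the 70% response-rate survey carries more bias than the 10% one, because bias depends on the product of both terms, not on the response rate alone — which is exactly the weak correlation Groves documents empirically.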
Bethlehem, J., Cobben, F., & Schouten, B. (2011). Handbook of Nonresponse in Household Surveys. John Wiley & Sons. Technical reference on nonresponse theory, measurement, and correction methods. More advanced than the Groves (2006) article; useful for graduate-level research.
Interviewer Effects and Social Desirability
Schuman, H., & Converse, J. M. (1971). "The effects of Black and White interviewers on Black responses in 1968." Public Opinion Quarterly, 35(1), 44–68. The seminal study establishing race-of-interviewer effects in survey research. Foundational for understanding how interviewer characteristics shape responses to sensitive questions.
Tourangeau, R., & Yan, T. (2007). "Sensitive questions in surveys." Psychological Bulletin, 133(5), 859–883. Comprehensive review of social desirability bias and sensitive question measurement, covering item types, modes, and correction strategies. Excellent for understanding when and how SDB distorts political surveys.
Panel Conditioning
Sturgis, P., Allum, N., & Brunton-Smith, I. (2009). "Attitudes over time: The psychology of panel conditioning." In P. Lynn (Ed.), Methodology of Longitudinal Surveys. John Wiley & Sons. Careful empirical and theoretical treatment of panel conditioning effects, including evidence on which question types are most susceptible and how to detect conditioning in longitudinal data.
Multi-Mode Survey Design
de Leeuw, E. D. (2005). "To mix or not to mix data collection modes in surveys." Journal of Official Statistics, 21(2), 233–255. Authoritative review of multi-mode survey design strategies, covering both sequential mixed-mode (where the same respondent is contacted through multiple modes) and concurrent mixed-mode (where different respondents are assigned to different modes).
Dillman, D. A., Smyth, J. D., & Christian, L. M. (2014). Internet, Phone, Mail, and Mixed-Mode Surveys: The Tailored Design Method (4th ed.). John Wiley & Sons. The most comprehensive practical guide to multi-mode survey design. Essential for practitioners designing real field operations.
Online Databases and Ongoing Resources
AAPOR Transparency Initiative (transparencyinitiative.aapor.org): Database of survey organizations that have committed to AAPOR transparency standards, with methodology disclosures searchable by pollster and election cycle.
Roper Center for Public Opinion Research (ropercenter.cornell.edu): Archive of American and international public opinion data spanning more than half a century, with accompanying methodology documentation. Essential for longitudinal research on public opinion trends.
Pew Research Center Methods (pewresearch.org/methods): Regular methodological reports, including annual response rate trend data, comparisons of online and telephone survey methods, and reports on specific methodological challenges. Free and updated regularly.