Chapter 9 Exercises: Fielding and Data Collection

Section A: Conceptual Questions

Exercise 9.1 — Mode Comparison Matrix

Create a comparison matrix with the six survey modes covered in this chapter (CATI, IVR, online probability panel, online opt-in panel, mail, face-to-face) as rows and the following attributes as columns: coverage, cost per complete, average response rate, maximum questionnaire length, social desirability reduction, and TCPA constraints. Fill in each cell with brief, specific information drawn from the chapter. Then write a 200-word paragraph explaining which two modes you would combine for a poll of registered voters in a mid-sized Midwestern state with a $25,000 budget and a 10-day field window.


Exercise 9.2 — AAPOR Response Rate Calculations

A CATI survey dials 10,000 telephone numbers. The outcomes are:

- Completed interviews: 480
- Partial interviews (broke off after screener but before completion): 60
- Screened ineligible (not registered voters): 320
- Refusals (identified as eligible, refused to participate): 890
- Non-contacts (no answer, busy, answering machine): 5,800
- Non-working numbers, fax lines, businesses: 2,450

Using AAPOR definitions, calculate:

a. RR1 (minimum response rate, assuming all non-contacts are eligible)
b. RR3 (using the estimated eligibility rate from resolved cases)
c. The cooperation rate (among all contacted and eligible respondents)
d. The contact rate (among all eligible residential numbers)

Show your work for each calculation and write one sentence interpreting each result.
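As a starting point, the AAPOR rate formulas can be expressed as short functions. This is a minimal sketch with made-up disposition counts, not the exercise's figures, and it assumes a particular mapping of dispositions to AAPOR categories; deciding which of the exercise's outcome categories count as unknown-eligibility is itself part of the problem.

```python
# Sketch of AAPOR outcome-rate formulas. Category names follow the
# Standard Definitions conventions: I = completes, P = partials,
# R = refusals, NC = non-contacts among known-eligible cases,
# O = other eligible non-interviews, UH = unknown-eligibility cases,
# e = estimated eligibility rate among the unknowns.

def rr1(I, P, R, NC, O, UH):
    """Minimum response rate: every unknown case assumed eligible."""
    return I / (I + P + R + NC + O + UH)

def rr3(I, P, R, NC, O, UH, e):
    """Response rate with unknowns discounted by eligibility rate e."""
    return I / (I + P + R + NC + O + e * UH)

def coop1(I, P, R, O):
    """Cooperation rate among contacted, known-eligible cases."""
    return I / (I + P + R + O)

def con1(I, P, R, NC, O, UH, e):
    """Contact rate among estimated-eligible cases."""
    return (I + P + R + O) / (I + P + R + NC + O + e * UH)

# Hypothetical example (NOT the exercise's numbers):
example_rr3 = rr3(500, 50, 900, 5000, 0, 2000, 0.30)
```

Note that RR1 treats all 2,000 hypothetical unknowns as eligible, while RR3 counts only the fraction `e` of them, so RR3 ≥ RR1 whenever `e` < 1.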


Exercise 9.3 — Nonresponse Bias Analysis

Suppose a statewide poll of likely voters produces an unweighted sample with the following education distribution: 28% no college, 38% some college, 34% college degree or higher. The voter file for the state shows: 36% no college, 33% some college, 31% college degree or higher.

a. Describe the direction and likely source of this nonresponse pattern.
b. If education is positively correlated with support for Candidate A in this state, in which direction is the unweighted poll likely biased? Explain your reasoning.
c. What weighting approach would you use to correct this, and what additional information would you need?
d. Is weighting a complete solution to this problem? What residual concerns remain?
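The core weighting logic involved here can be illustrated with cell-based post-stratification on a single variable. This is a minimal sketch with hypothetical cell-level support rates (the poll in the exercise does not report them); a production poll would typically rake over several variables at once.

```python
# Cell-based post-stratification on education alone.
# Sample and target shares are taken from the exercise; the per-cell
# support rates below are invented purely for illustration.

sample_share = {"no_college": 0.28, "some_college": 0.38, "degree": 0.34}
target_share = {"no_college": 0.36, "some_college": 0.33, "degree": 0.31}

# Each cell's weight = population share / sample share.
weights = {k: target_share[k] / sample_share[k] for k in sample_share}

# Hypothetical support for Candidate A within each education cell.
support = {"no_college": 0.40, "some_college": 0.50, "degree": 0.60}

unweighted = sum(sample_share[k] * support[k] for k in support)
weighted = sum(target_share[k] * support[k] for k in support)
```

With support rising in education and college-educated respondents overrepresented, the weighted estimate comes out lower than the unweighted one, which is the direction of correction part (b) asks you to reason about.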


Exercise 9.4 — Interviewer Effects

A survey organization is measuring support for a ballot initiative on police reform. They have a diverse interviewing staff. A quality control audit finds that:

- Interviews conducted by Black interviewers show 73% support for the initiative among white respondents
- Interviews conducted by white interviewers show 64% support for the initiative among white respondents
- Both groups of interviewers administered the same script verbatim

a. What is the most likely explanation for this 9-point gap?
b. What are two plausible mechanisms through which this interviewer effect could operate?
c. What methodological approaches could reduce this effect?
d. Should the organization report a weighted average of both groups' results, or report them separately? Defend your answer.


Exercise 9.5 — Panel Conditioning Design

A research organization wants to track registered voters' presidential approval rating monthly for 24 months using a probability-based online panel. Design a protocol that minimizes panel conditioning effects. Your protocol should address:

a. Maximum survey frequency per respondent
b. Topic diversification across survey waves
c. Detection and monitoring of conditioning effects
d. Sample rotation strategy
e. How you would report uncertainty created by potential conditioning in published results


Section B: Applied Problems

Exercise 9.6 — Field Operation Planning

You are Trish McGovern and have just received a contract to poll the Garza-Whitfield Senate race. Client requirements: 600 likely voters, margin of error ≤ ±4%, results in 6 days, budget $35,000.

Design a complete field operation including:

a. Mode mix and rationale for each mode
b. Sample sizes allocated to each mode (with reasoning)
c. Field period structure (which days, which hours for each mode)
d. Quota targets for key demographic groups
e. Refusal conversion protocol
f. Daily monitoring metrics and intervention triggers
g. Projected final AAPOR RR3 for each mode component


Exercise 9.7 — Data Quality Audit

You receive an online panel dataset of 1,200 completed surveys. Initial inspection reveals:

- 47 records with completion times under 90 seconds (survey designed to take 12 minutes)
- 83 records with identical responses across a 10-item grid question
- 12 records where open-ended responses are random character strings
- 31 records where the same IP address appears with different panel member IDs
- 22 records with reported ages below 18 (screener required 18+)

For each category:

a. Identify the data quality problem it represents
b. Describe the likely cause
c. State your recommended disposition (delete outright, flag for review, retain with note)
d. Estimate the impact on topline political results if all problematic records were from one party's supporters
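Several of these checks can be automated as a rule-based screen. The sketch below is illustrative only, with invented field names (`ip`, `duration_sec`, `grid`, `age`) and thresholds taken from the exercise; detecting gibberish open-ends would need additional logic not shown here.

```python
# Rule-based quality screen for an online panel file, represented as a
# list of dicts. Field names and record contents are hypothetical.
from collections import Counter

def audit(records, min_seconds=90, grid_items=10):
    """Return a list of flag-lists, one per record."""
    flags = []
    ip_counts = Counter(r["ip"] for r in records)
    for r in records:
        f = []
        if r["duration_sec"] < min_seconds:
            f.append("speeder")            # implausibly fast complete
        if len(r["grid"]) == grid_items and len(set(r["grid"])) == 1:
            f.append("straightliner")      # identical grid responses
        if ip_counts[r["ip"]] > 1:
            f.append("duplicate_ip")       # same IP, multiple IDs
        if r["age"] < 18:
            f.append("underage")           # failed the 18+ screener
        flags.append(f)
    return flags

# Two made-up records exercising the rules.
records = [
    {"ip": "203.0.113.5", "duration_sec": 45, "grid": [3] * 10, "age": 34},
    {"ip": "203.0.113.5", "duration_sec": 700, "grid": list(range(10)), "age": 17},
]
flags = audit(records)
```

Flagging rather than silently deleting keeps the disposition decision (part c) explicit and auditable.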


Exercise 9.8 — Social Desirability Research Design

You are designing a study to measure the magnitude of social desirability bias in a statewide poll on a ballot initiative to ban assault-style weapons. You have access to a probability panel and can deploy both telephone and online versions.

a. What is your hypothesis about the direction of SDB for this question in this survey context?
b. Design an experiment to estimate the size of the SDB effect.
c. What confounds must you control for in this design?
d. If you find a 7-point SDB effect (online shows 7 points higher support than telephone), how would you advise a client who received only the telephone results?


Section C: Critical Analysis

Exercise 9.9 — Response Rate and Bias

Using the formula: Nonresponse Bias = (1 − RR) × (Ȳ_respondents − Ȳ_nonrespondents), explain why:

a. A survey with a 5% response rate could, in theory, be unbiased
b. A survey with a 60% response rate could, in theory, be severely biased
c. What this implies for how we should evaluate poll quality
d. What information, other than response rate, would you need to assess nonresponse bias in a published poll?
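A small numeric illustration of the deterministic decomposition, bias = (1 − RR) × (Ȳ_respondents − Ȳ_nonrespondents), using made-up support rates, shows how parts (a) and (b) can both hold:

```python
# Nonresponse bias of the respondent mean under the deterministic
# decomposition. All support rates below are hypothetical.

def nonresponse_bias(rr, ybar_resp, ybar_nonresp):
    """Bias of the respondent mean relative to the full population."""
    return (1 - rr) * (ybar_resp - ybar_nonresp)

# (a) 5% response rate, but respondents mirror nonrespondents -> no bias.
low_rr_unbiased = nonresponse_bias(0.05, 0.52, 0.52)

# (b) 60% response rate with an 18-point respondent/nonrespondent gap
# -> roughly a 7-point bias despite the much higher response rate.
high_rr_biased = nonresponse_bias(0.60, 0.58, 0.40)
```

The response rate only scales the bias; the respondent/nonrespondent gap determines whether there is any bias to scale.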


Exercise 9.10 — The Coverage Problem

A state election poll is conducted entirely by landline CATI, reaching only households with landline phones. Using data from Pew Research or similar sources, estimate:

a. What percentage of state residents are excluded by this design
b. How the excluded group's political characteristics likely differ from those of landline households
c. In a hypothetical race where the cell-phone-only population supports Candidate A by 15 points more than landline respondents, how large would the coverage bias be in a poll showing 50/50 for a statewide sample?
d. What is the minimum adjustment Meridian would recommend to address this coverage gap?
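The arithmetic in part (c) reduces to a weighted average of the covered and excluded groups. The sketch below uses a hypothetical excluded share (the exercise asks you to look up the real figure) and the 15-point gap from the prompt:

```python
# Back-of-envelope coverage-bias arithmetic. The excluded share below
# (60% cell-only) is a placeholder, not a figure from the chapter.

def coverage_bias(p_excluded, covered_support, excluded_support):
    """True population support minus what a covered-only poll shows."""
    true_support = ((1 - p_excluded) * covered_support
                    + p_excluded * excluded_support)
    return true_support - covered_support

# Landline sample shows 50% for A; cell-only group supports A by
# 15 points more (65%); assume 60% of adults are cell-only.
bias = coverage_bias(0.60, 0.50, 0.65)
```

Algebraically the bias is just `p_excluded * (excluded_support - covered_support)`, so it grows linearly in both the excluded share and the attitude gap.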


Section D: Reflection and Synthesis

Exercise 9.11 — Short Essay

In 500–700 words, respond to the following prompt:

"The decline in response rates from 70% in the 1970s to under 10% today is a methodological crisis that has fundamentally compromised the validity of public opinion research."

Agree, disagree, or take a nuanced position, drawing on evidence from the chapter about the relationship between response rates and nonresponse bias, callback study findings, and the distinction between coverage, nonresponse, and measurement error.


Exercise 9.12 — Who Gets Counted

Identify three populations that are systematically underrepresented or excluded in standard political polling by CATI and opt-in online methods. For each:

a. Identify the coverage or participation mechanism that creates the exclusion
b. Estimate the size of the excluded population in your state (use Census or similar data)
c. Describe how the political attitudes of this population may differ from those included
d. Propose one methodological adjustment that would partially address the exclusion and estimate its cost implications