Chapter 9 Quiz: Fielding and Data Collection
30 questions — multiple choice, true/false, and short answer
Part I: Multiple Choice (2 points each)
1. Which survey mode requires that cell phones be dialed manually rather than by auto-dialer under the Telephone Consumer Protection Act?
a) Mail surveys b) Online opt-in panels c) CATI (both landline and cell) d) IVR (Interactive Voice Response)
Answer: c. Under the TCPA, CATI cell-phone numbers must be dialed manually rather than by auto-dialer; IVR auto-dialing of cell phones is prohibited without the respondent's prior express consent.
2. The AAPOR RR1 formula produces the _______ response rate by assuming all cases of unknown eligibility are eligible.
a) highest possible b) lowest possible c) median estimated d) mode-adjusted
Answer: b. By placing all unknowns in the denominator as eligible, RR1 maximizes the denominator, producing the most conservative (lowest) estimate.
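For reference, AAPOR's RR1 places every case of unknown eligibility (UH, UO) in the denominator alongside completes (I), partials (P), refusals (R), non-contacts (NC), and other eligible non-interviews (O):

```latex
% AAPOR Response Rate 1 (the minimum response rate)
RR1 = \frac{I}{(I + P) + (R + NC + O) + (UH + UO)}
```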
3. A survey that interviews only landline telephone households in 2024 suffers primarily from:
a) Sampling error b) Coverage error c) Measurement error d) Interviewer error
Answer: b. Excluding cell-phone-only households creates a systematic coverage gap — a failure to include a definable population segment in the sampling frame.
4. Panel conditioning refers to:
a) The training of telephone interviewers before data collection begins b) Changes in respondents produced by repeated survey participation c) The process of weighting panel data to population benchmarks d) The attrition of panel members over time
Answer: b.
5. In a dual-frame RDD telephone survey, what are the two frames?
a) Cell phones and smartphones b) Landline and cell phone numbers c) Registered and unregistered voters d) Urban and rural area codes
Answer: b.
6. Which of the following is the most commonly cited mechanism for the long-term decline in telephone survey response rates?
a) Growing distrust of survey researchers' data security practices b) Widespread adoption of caller ID and call-screening technology c) Increased rates of landline cancellation among younger respondents d) Legislative restrictions on survey research under the TCPA
Answer: b.
7. Social desirability bias in survey research refers to:
a) Interviewers subtly leading respondents toward expected answers b) Respondents reporting views they believe are more socially acceptable rather than their true opinions c) The tendency for online samples to overrepresent engaged, opinion-holding adults d) Questions that are framed to make one answer seem more reasonable than another
Answer: b.
8. A "speeder" in an online survey is typically identified by:
a) Submitting the survey without completing all required questions b) Reporting implausibly extreme political views c) Completing the survey in a fraction of the typical completion time d) Giving identical answers to all questions on a grid
Answer: c.
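A minimal sketch of how a fielding team might flag speeders, assuming interview durations in seconds are available; the 30%-of-median cutoff is a common rule of thumb rather than a fixed standard, and the function name is invented for illustration:

```python
from statistics import median

def flag_speeders(durations_sec, cutoff_fraction=0.3):
    """Return indices of interviews completed in less than
    cutoff_fraction of the sample's median duration."""
    threshold = cutoff_fraction * median(durations_sec)
    return [i for i, d in enumerate(durations_sec) if d < threshold]

# A 90-second complete against a 420-second median gets flagged.
print(flag_speeders([410, 435, 420, 90, 460]))  # -> [3]
```

Flagged cases are typically reviewed against other quality indicators rather than dropped automatically.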
9. The cooperation rate measures what proportion of:
a) All sampled numbers that are successfully contacted b) All contacted, eligible respondents who complete the survey c) All completed surveys that pass quality control review d) All refusals that are converted to completions
Answer: b.
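The corresponding AAPOR formula, COOP1, restricts the denominator to contacted eligible cases, so non-contacts drop out:

```latex
% AAPOR Cooperation Rate 1: completes over all contacted eligible cases
COOP1 = \frac{I}{(I + P) + R + O}
```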
10. Which formula correctly represents the relationship between nonresponse bias, response rate, and the difference between respondents and nonrespondents?
a) NRB = RR × (Ȳ_respondents - Ȳ_nonrespondents) b) NRB = (1 - RR) / (Ȳ_nonrespondents - Ȳ_total) c) NRB = (1 - RR) × (Ȳ_respondents - Ȳ_nonrespondents) d) NRB = RR / (Ȳ_total - Ȳ_respondents)
Answer: c. The bias in the respondent mean equals the nonresponse rate (1 - RR) times the respondent-nonrespondent difference; the derivation below shows why.
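Decomposing the population mean by response status yields the formula in option c:

```latex
% Decompose the population mean by response status, with RR the response rate:
\bar{Y}_{total} = RR \cdot \bar{Y}_{resp} + (1 - RR)\,\bar{Y}_{nonresp}
% Subtracting from the respondent mean gives the nonresponse bias:
NRB = \bar{Y}_{resp} - \bar{Y}_{total} = (1 - RR)\,(\bar{Y}_{resp} - \bar{Y}_{nonresp})
```

Bias vanishes as RR approaches 1 or as the respondent-nonrespondent gap approaches zero.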
11. IVR surveys are most limited relative to CATI because:
a) They cannot match CATI's question complexity and cannot legally auto-dial cell phones b) They are more expensive per complete c) They require longer field periods d) They are more vulnerable to social desirability bias
Answer: a.
12. An interviewer reads a question about immigration policy with slightly more emphasis on the "restrict" response option. This is an example of:
a) Social desirability bias b) Expectancy effects in interviewer administration c) Panel conditioning d) Coverage error
Answer: b.
13. Probability-based online panels differ from opt-in panels primarily in that they:
a) Offer faster turnaround and lower cost b) Recruit members through probability methods allowing statistical inference c) Include only respondents who have opted in to political surveys d) Use adaptive questioning based on previous responses
Answer: b.
14. Which of the following is NOT a standard component of Meridian Research Group's survey documentation package?
a) Field period log with daily sample sizes and response rates b) Weighting documentation including target margins and design effect c) Individual-level response records linked to respondent names d) Codebook with variable names, question text, and response codes
Answer: c. Individual-level data should never be linked to personal identifiers in a documentation package; respondent anonymity is a fundamental ethical requirement.
15. Straightlining in a survey grid is detected by examining:
a) Response time relative to survey median b) Variance in individual respondents' answers across grid items c) Agreement rate between two independent coders d) Intraclass correlation across interviewer assignments
Answer: b.
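A minimal sketch of the variance check, assuming each respondent's grid answers are numeric codes; the zero-variance test and toy data are illustrative only:

```python
from statistics import pvariance

def flag_straightliners(grid_responses):
    """Return indices of respondents whose answers show zero
    variance across grid items (identical answer to every item)."""
    return [i for i, answers in enumerate(grid_responses)
            if pvariance(answers) == 0]

# Respondent 1 answers "3" to every item in a five-item grid.
grids = [[1, 4, 2, 5, 3], [3, 3, 3, 3, 3], [2, 2, 3, 2, 4]]
print(flag_straightliners(grids))  # -> [1]
```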
Part II: True/False (1 point each)
16. A survey with a 4% response rate cannot produce unbiased estimates.
Answer: FALSE. Response rate and nonresponse bias are related but distinct. If nonresponse is random with respect to variables of interest, a 4% response rate produces unbiased (though imprecise) estimates.
17. Face-to-face surveys typically achieve the highest response rates of any survey mode.
Answer: TRUE. In-person contact, the ability to address concerns directly, and the social norm of hospitality to a visitor combine to produce higher response rates than remote modes.
18. Under AAPOR RR3, the eligibility rate for cases of unknown disposition is assumed to be 100%.
Answer: FALSE. RR3 applies the estimated eligibility rate from resolved cases to unknown cases, producing a less conservative denominator than RR1.
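RR3's denominator scales the unknown-eligibility cases by e, the eligibility rate estimated from resolved cases:

```latex
% AAPOR Response Rate 3, with e = estimated eligibility rate among unknowns
RR3 = \frac{I}{(I + P) + (R + NC + O) + e\,(UH + UO)}
```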
19. The "Bradley Effect" refers to the hypothesis that Black voters are less likely to respond to telephone surveys than white voters.
Answer: FALSE. The Bradley Effect refers to the hypothesis that white voters are reluctant to admit to interviewers that they plan to vote against a Black candidate, leading polls to overestimate support for Black candidates.
20. Mail surveys are better suited to tracking rapidly changing opinion during a campaign than CATI surveys.
Answer: FALSE. Mail surveys require 2–3 weeks for delivery and return, making them poorly suited to fast-moving campaign environments. CATI produces faster results.
21. Item nonresponse on income questions typically runs 10–20% even in otherwise cooperative survey samples.
Answer: TRUE. Income is among the most sensitive standard survey items; refusal and don't-know rates of 10–20% are routine even among respondents who answer everything else.
22. Refusal conversion always improves data quality by increasing the response rate.
Answer: FALSE. Converted refusals sometimes differ systematically from initial cooperators and can introduce new forms of bias. Refusal conversion must be disclosed and its effects monitored.
23. Callback studies suggest that hard-to-reach respondents tend to be older and more politically engaged than easy-to-reach respondents.
Answer: FALSE. Hard-to-reach respondents (those requiring many contact attempts) tend to be younger, more likely to be employed full-time, and somewhat less politically engaged.
24. An intercoder reliability (Cohen's kappa) of 0.50 indicates adequate agreement for most social science coding tasks.
Answer: FALSE. Kappa below 0.70 typically indicates insufficient agreement, suggesting the coding scheme requires clarification or coders need additional training.
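For illustration, a self-contained computation of two-coder kappa on invented labels: observed agreement corrected for the agreement expected from each coder's marginal category rates.

```python
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Cohen's kappa for two coders: observed agreement corrected
    for chance agreement implied by each coder's marginals."""
    n = len(coder_a)
    observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / n**2
    return (observed - expected) / (1 - expected)

a = ["pos", "neg", "pos", "neu", "pos", "neg"]
b = ["pos", "neg", "neu", "neu", "pos", "pos"]
print(round(cohens_kappa(a, b), 2))  # -> 0.48, below the 0.70 threshold
```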
25. Online surveys generally produce higher endorsement of socially sensitive attitudes than telephone surveys with live interviewers.
Answer: TRUE. Reduced social desirability pressure in self-administered online modes typically produces more candid responses to sensitive items.
Part III: Short Answer (5 points each)
26. Explain in 3–5 sentences why a survey with a 60% response rate could theoretically be more biased than a survey with a 6% response rate.
Model Answer: A survey with a 60% response rate could be severely biased if the 40% who refused differ dramatically from respondents on the outcome of interest — for example, if all refusers were strong supporters of one candidate. Conversely, a 6% response survey would be unbiased if the 94% who didn't participate were similar to those who did on the measured variables. The key determinant of bias is not the response rate itself but the correlation between nonresponse and the outcome variable. A high response rate only guarantees low bias if the reasons for nonresponse are unrelated to the survey's subject matter — an assumption that becomes harder to sustain as rates fall but is not guaranteed at high rates.
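To make the comparison concrete, here is the bias formula from question 10 applied to invented shares of support for Garza; the numbers are illustrative only:

```latex
% 60% response rate, but a 20-point respondent/nonrespondent gap:
NRB = (1 - 0.60)(0.55 - 0.35) = 0.40 \times 0.20 = 0.08 \quad \text{(8 points)}
% 6% response rate, but a 1-point gap:
NRB = (1 - 0.06)(0.48 - 0.47) = 0.94 \times 0.01 \approx 0.01 \quad \text{(1 point)}
```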
27. What is the "dual-frame" approach in telephone survey design, and why is it necessary?
Model Answer: Dual-frame telephone survey design combines two separate sampling frames, one for landline numbers and one for cell phone numbers, and interviews respondents from both. This design is necessary because a large majority of American adults (more than 70% by recent CDC National Health Interview Survey estimates) live in cell-phone-only households without landline access. A landline-only design would systematically exclude this population, which tends to be younger, more likely to rent, and politically distinct from landline households. The two frames must be merged using statistical weights that account for the probability of selection from each frame and the proportion of the population reached by each; one standard approach is sketched below.
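One common merging approach is a Hartley-style composite estimator, which splits the overlap (dual-service) domain between the two samples; the mixing parameter lambda is a design choice, and this is a sketch of the general form rather than any one firm's weighting protocol:

```latex
% Hartley-style dual-frame composite: landline-only, composited overlap, cell-only
\hat{Y} = \hat{Y}_{\text{LL-only}}
        + \lambda\,\hat{Y}_{\text{overlap}}^{(\text{LL})}
        + (1 - \lambda)\,\hat{Y}_{\text{overlap}}^{(\text{cell})}
        + \hat{Y}_{\text{cell-only}}, \qquad 0 \le \lambda \le 1
```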
28. Describe three specific ways that Trish McGovern's multi-mode field operation for Meridian Research Group improves on single-mode designs.
Model Answer: First, combining CATI and online modes improves coverage: CATI reaches people who don't participate in online panels, while online captures those who don't answer phone calls, reducing the exclusion of either group. Second, using multiple modes allows within-study assessment of mode effects: if CATI and online components produce systematically different toplines on the same questions, this flags potential mode bias that a single-mode design could not detect. Third, multi-mode designs allow quota-filling flexibility: if Black cell-phone respondents are underrepresented in the CATI component by midfield, the online component can be targeted to fill that demographic gap without violating the overall sampling plan.
29. What is panel conditioning, and why is it a particular concern for probability-based online panels used in political tracking research?
Model Answer: Panel conditioning describes changes in respondents' attitudes, behaviors, or awareness produced by the act of repeated survey participation itself. In political research, this is particularly concerning because political panel surveys ask respondents repeatedly about candidates, issues, and institutions, which may increase political knowledge, crystallize attitudes, and shift reported opinion in ways that reflect the survey experience rather than actual public opinion. Long-term panelists asked monthly about the Garza-Whitfield race will have more developed opinions on that race than fresh respondents drawn from the same population. Probability-based panels manage this by limiting survey frequency per respondent and monitoring conditioning effects, but cannot eliminate it entirely.
30. Explain why cleaning raw survey data is not merely a technical step but also a consequential analytic decision.
Model Answer: Data cleaning decisions — which speeders to remove, how to handle item nonresponse on the vote-intention question, how to code ambiguous open-ended responses — directly shape what the final dataset says. If all 47 speeder records from a survey happen to be disproportionately from one partisan group, removing them shifts the topline in the other direction. If the coding protocol for "leaning" undecideds allocates them proportionally rather than keeping them separate, it produces a different topline than treating them as truly undecided. These decisions are not pre-specified by any universal standard — they require judgment calls by the analyst. Good practice requires pre-registering cleaning decisions before seeing data, documenting all decisions in the codebook, and conducting sensitivity analyses to assess how different cleaning choices would change conclusions.
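A minimal sketch of the sensitivity-analysis idea, assuming records of (duration, vote choice); the cutoffs, function name, and toy data are all invented for illustration:

```python
from statistics import median

def topline_by_speeder_cutoff(records, cutoffs=(0.0, 0.2, 0.3, 0.5)):
    """Recompute one candidate's topline share after dropping speeders
    at several cutoff fractions of the median duration, showing how
    much the cleaning decision alone moves the estimate."""
    med = median(duration for duration, _ in records)
    results = {}
    for c in cutoffs:
        kept = [vote for duration, vote in records if duration >= c * med]
        results[c] = sum(v == "Garza" for v in kept) / len(kept)
    return results

# Invented records: the two fast completes happen to be Whitfield voters.
records = [(420, "Garza"), (95, "Whitfield"), (455, "Garza"),
           (100, "Whitfield"), (430, "Whitfield"), (410, "Garza")]
print(topline_by_speeder_cutoff(records))
# Dropping sub-30%-of-median completes moves the topline from 0.50 to 0.75.
```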