Chapter 30 Quiz: Field Experiments in Politics
Multiple Choice
1. The fundamental reason that randomized experiments can support causal inference, while observational studies typically cannot, is that:
- A) Experiments use larger samples than observational studies
- B) Random assignment ensures that treatment and control groups are, in expectation, equivalent on all characteristics before the experiment begins
- C) Experiments are conducted by academics while observational studies are conducted by campaigns
- D) Experiments control for all possible confounding variables by measuring them
2. The Gerber and Green (2000) field experiment in New Haven found that:
- A) All forms of voter contact produced large, similar turnout effects
- B) Phone banking was the most cost-effective GOTV method
- C) Personal canvassing produced much larger turnout effects than phone contact or mail
- D) Mail was the most effective GOTV technique per dollar spent
3. The intent-to-treat (ITT) estimate in a canvassing experiment measures:
- A) The effect of contact on voters who were actually reached by canvassers
- B) The effect of being assigned to the treatment group, regardless of whether contact was achieved
- C) The intended effect of the campaign's GOTV program
- D) The effect of the campaign's outreach on voters who intended to vote anyway
4. Blocked randomization is used primarily to:
- A) Prevent spillover between treatment and control groups
- B) Reduce ethical concerns about withholding treatment from control voters
- C) Ensure balance on key background variables and increase precision
- D) Make the experiment faster to implement
5. Cluster randomization is necessary when:
- A) The experiment involves digital advertising rather than in-person contact
- B) Individual-level randomization would risk contaminating control-group members through spillover
- C) The researcher lacks access to individual-level voter data
- D) The sample size is too small for individual-level analysis
6. The "social pressure" GOTV mailer, as tested by Gerber, Green, and Larimer (2008), works primarily by:
- A) Providing factual information about candidates' policy positions
- B) Invoking social norms and accountability by showing voters their own and their neighbors' turnout histories
- C) Creating a sense of electoral urgency through competitive race information
- D) Personalizing campaign appeals based on individual issue priorities
7. The local average treatment effect (LATE) is calculated by:
- A) Multiplying the ITT by the contact rate
- B) Dividing the ITT by the contact rate
- C) Subtracting the control group turnout from the treatment group turnout
- D) Averaging the ITT across all subgroups
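The ITT-to-LATE relationship in questions 3 and 7 can be checked with a quick back-of-envelope calculation. The turnout rates and contact rate below are hypothetical numbers chosen only to illustrate the arithmetic:

```python
# Hypothetical figures illustrating the ITT -> LATE calculation.
treat_turnout = 0.47    # turnout among everyone ASSIGNED to treatment
control_turnout = 0.44  # turnout among everyone assigned to control
contact_rate = 0.30     # share of the treatment group actually reached

# ITT: effect of assignment, diluted by the voters canvassers never reached.
itt = treat_turnout - control_turnout

# LATE: rescale the ITT by the contact rate to estimate
# the effect per voter actually contacted.
late = itt / contact_rate

print(f"ITT = {itt:.3f}, LATE = {late:.3f}")
```

Note how a modest 3-point assignment effect implies a much larger 10-point effect among the 30% of voters actually reached.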
8. Which of the following non-experimental methods exploits the similarity of observations just above and just below a treatment threshold?
- A) Matching
- B) Difference-in-differences
- C) Instrumental variables
- D) Regression discontinuity
9. The "parallel trends" assumption is most central to which non-experimental design?
- A) Regression discontinuity
- B) Difference-in-differences
- C) Propensity score matching
- D) Natural experiments
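The difference-in-differences logic behind question 9 reduces to simple arithmetic on group means. The turnout rates below are hypothetical, used only to show how the control group's trend stands in for the treated group's counterfactual:

```python
# Hypothetical before/after turnout rates for a DiD comparison.
treated_before, treated_after = 0.40, 0.48
control_before, control_after = 0.42, 0.45

# DiD: the treated group's change, net of the change the control
# group experienced over the same period. Parallel trends assumes
# the treated group would have followed the control trend absent treatment.
did = (treated_after - treated_before) - (control_after - control_before)

print(round(did, 3))
```

Here the treated group rose 8 points but the control trend accounts for 3 of them, leaving a 5-point estimated effect, valid only if the parallel-trends assumption holds.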
10. Trish McGovern's approach to maintaining canvasser protocol integrity in the Meridian experiment includes:
- A) Paying canvassers a bonus for successful contacts in the treatment group
- B) Hiding the control group from canvassers' lists and monitoring flag rates for out-of-protocol approaches
- C) Assigning only experienced canvassers to the experimental protocol
- D) Requiring canvassers to sign legal agreements about following the randomization
True/False
11. In a canvassing field experiment, the control group typically receives no campaign contact of any kind from any source.
12. Statistical power in political field experiments is generally lower (harder to achieve) than in clinical drug trials because political behavior effects are typically small relative to their variance.
13. Difference-in-differences requires only cross-sectional data from a single time point.
14. The original Gerber-Green (2000) study found that telephone contact produced large, significant turnout effects comparable to personal canvassing.
15. IRB review of political field experiments is universally required under US federal research ethics regulations.
Short Answer
16. Explain the fundamental problem of causal inference in the context of evaluating a GOTV canvassing program. Why can't we simply compare the turnout of contacted and non-contacted voters?
17. Why is statistical power a particular challenge for political field experiments? Name two factors that make it difficult to achieve adequate power.
18. Describe the spillover problem in a political field experiment and explain one specific design approach Trish McGovern might use to minimize it in the Meridian canvassing experiment.
19. What is the distinction between prediction and explanation in political analytics, and why does this distinction justify treating field experiments as irreplaceable despite their cost and complexity?
20. A meta-analysis of GOTV canvassing experiments finds an average effect of 2.5 percentage points per contact. What cautions would you apply before using this estimate to plan a canvassing program for a specific local race?
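One concrete caution relevant to question 20 is statistical power: even a true 2.5-point effect requires thousands of voters per arm to detect reliably. The sketch below uses the standard two-proportion sample-size formula; the 45% baseline turnout is a hypothetical assumption, not a figure from the chapter:

```python
import math

# Back-of-envelope sample size for a two-arm turnout experiment.
# Assumes (hypothetically) 45% baseline turnout, aiming to detect the
# meta-analytic 2.5-point effect with 80% power at a two-sided alpha of 0.05.
p0, effect = 0.45, 0.025
p1 = p0 + effect
z_alpha, z_beta = 1.96, 0.84   # critical values for alpha = 0.05, power = 0.80
p_bar = (p0 + p1) / 2

# Standard two-proportion sample-size formula (per arm).
n_per_arm = ((z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
              + z_beta * math.sqrt(p0 * (1 - p0) + p1 * (1 - p1))) ** 2
             / effect ** 2)

print(math.ceil(n_per_arm))  # several thousand voters per arm
```

The result, on the order of six thousand voters per arm, illustrates why small local races often cannot achieve adequate power on their own, one reason to lean on meta-analytic estimates cautiously rather than re-estimating effects in-house.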
Essay Questions
21. The chapter argues that field experiments are "irreplaceable" for answering causal questions about what makes voters turn out and what moves their vote choices. A skeptic responds: "We have massive voter file data and sophisticated machine learning models. Why do we need expensive experiments?" Write a response to this skeptic that explains what experiments provide that predictive modeling cannot, and vice versa.
22. Describe the Meridian Research Group's canvassing experiment in the Garza-Whitfield race: its design rationale, its key operational challenges, and the complications that arose during execution. What does the Meridian experiment illustrate about the gap between experimental textbooks and operational realities? How did Vivian Park's team navigate that gap?