Chapter 40 Quiz: AI, Automation, and the Future of Political Analytics
Multiple Choice
1. The research finding that most directly accelerated campaign adoption of LLMs for political communication was that:
a) LLMs can write political ad copy faster than human copywriters
b) LLMs can generate persuasion messages as effective as — or more effective than — messages written by experienced human political consultants, at fractions of a cent per message
c) LLMs can automate the entire voter contact workflow without human review
d) LLMs can predict voter turnout more accurately than logistic regression models
2. The "hallucination" problem in large language models is most directly relevant to political applications because:
a) LLMs sometimes refuse to write partisan political content
b) LLMs produce fluent, confident text whether or not the underlying claims are accurate, creating risks for political advertising accuracy
c) LLMs have difficulty maintaining consistent political ideology across many messages
d) LLMs cannot process voter file data in the format campaigns use
3. The "liar's dividend," as articulated by Citron and Chesney, refers to:
a) The financial benefit to AI companies from political advertising contracts
b) The ability of deepfake technology to create false candidate statements
c) The ability of political actors to dismiss authentic content as AI-generated, because synthetic media has made such claims plausible
d) The economic surplus that political campaigns capture from AI-powered ad optimization
4. Which of the following is the most significant methodological concern about synthetic respondent polling for electoral applications?
a) Synthetic respondents take longer to generate than real survey interviews
b) Synthetic respondents are most unreliable for the locality-specific questions that matter most in electoral contexts
c) Synthetic respondents cannot process hypothetical policy proposals
d) Synthetic respondents tend to overrepresent the views of younger voters
5. Platform recommendation algorithms' documented political effects most clearly include:
a) Systematic promotion of candidates from the party that purchases the most advertising
b) Suppression of content from political candidates who have received negative press coverage
c) Systematic exposure of users to emotionally intense political content because engagement correlates with emotional intensity
d) Algorithmic detection and demotion of false political advertising
6. The AI disclosure regulatory landscape in the United States as of 2025-2026 is best described as:
a) Comprehensively regulated by a single federal AI disclosure law enacted in 2024
b) Unregulated, with no federal or state requirements of any kind
c) Fragmented, with some state laws and platform policies but no comprehensive federal standard
d) Governed primarily by FEC regulations that were updated in 2022 to specifically address AI
7. Which of the following decisions would best warrant using an interpretable logistic regression model rather than a high-accuracy black-box model?
a) Real-time ad spend allocation across digital platforms
b) Setting the initial priority ranking for a 2-million-voter contact list
c) Explaining to a field director which neighborhoods to prioritize and why, so they can make real-time adjustments
d) Generating variant text for A/B testing of email subject lines
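The interpretability premise behind question 7 can be made concrete: in a logistic regression, each coefficient maps to an odds ratio that a non-technical stakeholder (such as a field director) can read directly. The sketch below uses purely hypothetical coefficients and feature names for an illustrative turnout model, not values from any real campaign model, and relies only on the Python standard library:

```python
import math

# Hypothetical coefficients from a fitted turnout logistic regression
# (illustrative values only, not drawn from any real model).
coefficients = {
    "voted_in_last_midterm": 1.10,
    "contacted_by_canvasser": 0.35,
    "miles_from_polling_place": -0.08,
}
intercept = -0.60

def predict_probability(features):
    """Logistic response: p = 1 / (1 + exp(-(b0 + sum(b_i * x_i))))."""
    z = intercept + sum(coefficients[k] * v for k, v in features.items())
    return 1.0 / (1.0 + math.exp(-z))

# The interpretable part: exp(beta) is the multiplicative change in the
# odds of turnout for a one-unit change in that feature, holding the
# others fixed. A field director can read these directly.
for name, beta in coefficients.items():
    print(f"{name}: odds ratio = {math.exp(beta):.2f}")

p = predict_probability({
    "voted_in_last_midterm": 1,
    "contacted_by_canvasser": 1,
    "miles_from_polling_place": 2.0,
})
print(f"Predicted turnout probability: {p:.2f}")
```

A black-box model might rank the same voters more accurately, but it offers no analogue to the odds-ratio readout above, which is the accountability gap questions 7 and 10 both probe.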
8. Differential access to AI political analytics tools currently advantages primarily:
a) Smaller campaigns that can use low-cost SaaS tools without technical staff
b) Academic researchers who have priority access to API systems
c) Well-funded major campaigns, large political parties, and — critically — sophisticated foreign interference operations
d) Incumbent officials who have established relationships with AI vendors
9. In the context of AI-generated political communication, "personalization at scale" refers to:
a) Hiring more digital staff to manage individualized voter outreach
b) True individual-level message generation that incorporates specific voter profile data, at no additional marginal cost per message
c) Segmenting voters into more than 100 distinct demographic categories for targeted advertising
d) Using A/B testing to optimize messages for different geographic regions simultaneously
10. The prediction vs. explanation tension becomes more acute in AI-driven political analytics because:
a) AI models are required by law to provide explanations for their predictions
b) High-performance AI models can produce accurate predictions without interpretable reasoning, creating accountability gaps and limiting strategic learning
c) Political campaigns prefer explanations over predictions because they are more actionable
d) AI predictions are less accurate than those from traditional statistical models and require more explanation
Short Answer
11. What is the difference between AI-generated individualized messaging and traditional microtargeting segments? Why does this difference matter for the persuasion-manipulation distinction discussed in Chapter 38? (3-4 sentences)
12. Explain why the liar's dividend is, in some respects, more democratically corrosive than deepfakes themselves. Your answer should engage with the concept of political accountability. (2-3 sentences)
13. What are two affirmative data practices (from Chapter 39) that are specifically important to apply when deploying AI-driven political targeting? Explain why each practice is important in the AI context specifically. (3-4 sentences)
True/False with Justification
For each statement, indicate True or False and provide a one-sentence justification.
14. Large language models are reliable for political advertising copy because they are trained to be factually accurate.
15. The Content Authenticity Initiative (C2PA) is a regulatory body with legal enforcement authority over AI-generated political content.
16. AI-assisted survey interviewing systems have been shown to consistently produce equivalent response quality to human interviewing for all demographic groups tested.
17. Platform algorithmic recommendation systems optimize primarily for engagement, which correlates with emotional intensity in political content.
18. A political campaign that uses an LLM only to select which of ten pre-written human-authored messages to send to each voter would generally be covered by current AI disclosure requirements.
Applied Analysis
19. A campaign manager says: "AI-generated personalized messaging is just a more efficient version of what we've always done — targeting specific voter segments with tailored messages. It's a quantitative change, not a qualitative one. The ethics are the same." Using the persuasion-manipulation framework from Chapter 38 and the capabilities described in Chapter 40, evaluate this claim. Is the manager right that the ethics are the same? What, if anything, is qualitatively different about AI-generated individualized messaging? (200-300 words)
20. You are advising ODA's Adaeze Nwosu on whether to adopt AI-assisted survey interviewing for their work with minority communities in hard-to-count areas. Drawing on Chapter 39's affirmative data practices and Chapter 40's analysis of AI polling methods, develop a recommendation that addresses: (a) the potential benefits of AI-assisted fielding for hard-to-reach populations; (b) the equity concerns that could make AI-assisted fielding worse for minority communities; (c) what validation research ODA should require before adopting the technology; (d) what disclosure obligations ODA would have to its clients and research subjects. (300-400 words)