Chapter 37 Quiz: AI-Generated Content and Synthetic Media


Multiple Choice

1. The term "next-token prediction" refers to which core mechanism of large language models?

A) The process by which AI retrieves relevant passages from a database
B) The mechanism by which the model predicts the most likely next word or sub-word unit given the preceding text
C) A filtering process that checks generated content against a list of banned terms
D) The method by which LLMs verify factual claims against their training data


2. What is the key distinction that separates large language models from earlier automated content systems like template-based article generators?

A) LLMs are more expensive to operate than template-based systems
B) Template-based systems required human oversight while LLMs operate independently
C) LLMs generate new text rather than retrieving and filling in templates, enabling fluent output across any domain
D) LLMs can only produce content in the domains they were specifically trained on


3. According to the chapter, what made the IRA's 2016 influence operation qualitatively different from earlier automated bot operations?

A) The IRA used more sophisticated AI tools than previous operations
B) IRA content was human-produced, making it qualitatively superior to bot-generated text
C) The IRA focused on social media while earlier operations used email
D) The IRA operated in fewer countries than earlier bot networks


4. Which of the following categories of AI propaganda application involves generating individually tailored persuasive messages for specific target individuals?

A) Content farm automation
B) Comment flooding
C) Targeted message personalization
D) Sockpuppet network management


5. The chapter describes the emergence of AI-generated local news sites as particularly dangerous for which reason?

A) They are more expensive to produce than traditional disinformation
B) Local news deserts make communities vulnerable to fabricated local journalism they have few tools to evaluate
C) Local news sites have larger audiences than national news outlets
D) AI-generated local news is impossible to distinguish from authentic journalism


6. What is the fundamental detection problem described in Section 37.6?

A) Detection tools are too expensive for most researchers to use
B) AI-generated text is impossible to distinguish from human text at any level of analysis
C) Any detectable characteristic of AI-generated text is also an optimization target for improving generation to defeat detection
D) Detection tools can identify AI text but cannot identify the specific model that generated it


7. The "liar's dividend" concept, developed by Chesney and Citron (2019), refers to:

A) The financial profit that content farms derive from AI-generated advertising revenue
B) The way that propaganda operators benefit economically from using AI rather than human writers
C) The plausible deniability available to those facing authentic damaging content, who can claim it is AI-generated
D) The advantage that democratic governments have in using AI for counter-propaganda


8. The chapter identifies "hallucinated citations" as particularly useful for AI content detection because:

A) AI systems always produce the same set of hallucinated citations, making them easy to catalog
B) Hallucinated citations can be readily verified as nonexistent through standard research tools, and producing them is a consistent LLM behavior
C) Hallucinated citations always appear in the same position in AI-generated text
D) Detection algorithms can identify hallucinated citation formatting automatically


9. According to the chapter's analysis of AI and scientific misinformation, which of the following describes the "pre-print server vulnerability"?

A) Academic pre-print servers are funded by industries that benefit from scientific misinformation
B) Pre-print servers lack the technical infrastructure to host large volumes of content
C) AI-generated papers can be posted to pre-print servers before peer review, enabling citation in media before authenticity is verified
D) Pre-print servers actively promote AI-generated content for advertising revenue


10. The C2PA (Coalition for Content Provenance and Authenticity) standard addresses the AI-generated content problem by:

A) Building detection algorithms into social media platforms
B) Cryptographically recording content origin and transformation history in a machine-readable format
C) Requiring AI companies to watermark all outputs with a visible marker
D) Establishing a database of known AI-generated images for platform comparison


11. The chapter argues that technique-based inoculation (FLICC framework) transfers to AI-generated propaganda because:

A) AI systems have been trained specifically to avoid FLICC techniques
B) AI-generated propaganda uses the same psychological manipulation techniques as human-generated propaganda
C) FLICC detection tools are automated and therefore scale with AI production rates
D) AI-generated content is easier to identify as propaganda than human-generated content


12. Which of the following does the chapter identify as a new AI-era propaganda technique that existing FLICC-based inoculation does not directly address?

A) Cherry-picking selectively chosen data
B) The false dilemma fallacy
C) The synthetic consensus technique, in which AI-generated comments manufacture apparent public opinion
D) Ad hominem attacks on credible sources


13. The chapter describes the watermarking approach to AI content detection as limited primarily because:

A) Watermarks are visible to readers and reduce the effectiveness of propaganda
B) Watermarking requires too much computational power to implement at scale
C) Watermarks can be removed through simple text transformations, and open-source models can be run without watermarking
D) Watermarks only work for images, not text


14. The EU AI Act's Article 50 provisions on AI-generated content are described in the chapter as:

A) The most comprehensive and enforceable solution to AI-generated disinformation globally
B) Meaningless because the EU has no enforcement capability outside its borders
C) Substantive regulation that applies to compliant actors but cannot reach covert influence operations
D) Focused exclusively on election-related AI content


15. In the debate framework (Section 37.13), the Position B argument (AI-generated propaganda as an accelerated old threat rather than a new category) rests primarily on:

A) The claim that AI cannot produce content of sufficient quality to deceive audiences
B) The argument that the psychological mechanisms are unchanged and existing institutional responses scale appropriately
C) Evidence that AI-generated disinformation campaigns have not yet produced documented real-world effects
D) The argument that regulatory responses will quickly catch up to AI capabilities


Short Answer

16. In two to three sentences, explain why the economics of AI-generated content production represent a qualitative change rather than merely an incremental efficiency improvement over the IRA's 2016 human-labor-based operation. (4 points)


17. Section 37.8 argues that AI capabilities would have significantly amplified the tobacco industry's manufactured doubt campaign. Identify two specific ways that LLMs would have enhanced that campaign, and explain why each would have been effective. (6 points)


18. The chapter's action checklist (Section 37.14) identifies citation verification as an important tool for identifying AI-generated content. What are the practical limitations of using citation verification as a population-level counter-disinformation strategy, and what do these limitations suggest about how counter-disinformation resources should be deployed? (5 points)


Essay

19. The chapter's debate framework poses the question: Does AI-generated content represent a fundamentally new propaganda threat or an accelerated old one?

Drawing on at least three specific sections of Chapter 37 and at least two historical examples from earlier in the course (Chapters 1–36), construct a well-reasoned argument for one of the two positions. Your essay should: (a) clearly state your position, (b) present the strongest version of your chosen argument using evidence from the chapter, (c) acknowledge and respond to the strongest counter-argument, and (d) discuss the practical implications of your position for counter-propaganda efforts.

Recommended length: 600-800 words. Graded on argument clarity, use of evidence, engagement with counterargument, and quality of practical implications discussion.


Answer key and scoring rubric available in the Instructor's Manual. Short answer and essay responses should be evaluated for accuracy of course concept application, quality of analysis, and specificity of evidence.