Chapter 37 Exercises: AI-Generated Content and Synthetic Media


Exercise 37.1 — The Forty-Five-Minute Experiment (Observation and Analysis)

Purpose: To directly observe AI text generation capabilities and apply the chapter's analytical framework to AI-generated content.

Instructions:

Using a publicly available LLM tool (ChatGPT, Claude, Gemini, or a comparable system), conduct a structured generation experiment:

Step 1 — Generate: Prompt the LLM to write a local news article about a fictional community event in your actual city or town. Specify that the article should include: a named local official making a statement, a specific location, a vote or decision outcome, and at least one concerned community member quoted by name. Do not spend more than ten minutes on prompting and light editing.

Step 2 — Analyze: Apply the propaganda analysis framework from Chapter 10 to your own generated article:

  - What techniques does the article use, if any?
  - How does it establish source credibility?
  - What would a reader need to know to identify it as fabricated?
  - What would a reader not know that would lead them to accept it as authentic?

Step 3 — Test: Share your article with at least two people outside the course (family members, friends, roommates) without identifying it as AI-generated. Ask them to rate its credibility on a 1-10 scale and explain what led to their assessment.

Step 4 — Reflect: In 400-500 words, discuss: (a) how long the generation actually took, (b) what your test readers' responses suggest about the plausibility of the content, and (c) what specific knowledge or context would have been needed to identify the article as fabricated.

Submission: Written reflection (400-500 words) plus the generated article text.


Exercise 37.2 — Citation Verification as AI Detection

Purpose: To practice the single most reliable and accessible technique for identifying AI-generated content: verification of citations.

Background: Hallucinated citations — references to studies, papers, or sources that do not exist — are among the most consistent artifacts of LLM-generated content, particularly in genres that conventionally cite authority (news articles, academic writing, advocacy documents). Citation verification requires no specialized technical tools and is achievable by anyone with internet access.

Instructions:

Part A — Generate with citations: Prompt an LLM to write a 400-word article on a health, environmental, or political topic of your choice, instructing it to include at least four specific citations to academic studies (with journal names, years, and author names). Do not ask it to use real studies — allow it to generate the citations without restriction.

Part B — Verify: Attempt to locate each cited source using:

  - Google Scholar
  - PubMed (for health-related research)
  - Direct journal website search
  - General web search for author name plus topic

Record what you find for each citation:

  - Does the journal exist?
  - Does the volume/year combination exist?
  - Does the paper title exist?
  - Do the authors exist?
  - If the authors exist, are they associated with this topic?
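Parts of this lookup can be automated. The sketch below queries the public Crossref REST API (a free bibliographic index covering most, though not all, academic journals) for a cited title and uses fuzzy string matching to flag titles with no close match. The 3-result limit and the use of `difflib` similarity are illustrative choices for this exercise, not a standard verification protocol; a low score is a red flag to investigate by hand, not proof of fabrication.

```python
import json
import urllib.parse
import urllib.request
from difflib import SequenceMatcher

def title_similarity(a: str, b: str) -> float:
    """Case-insensitive similarity ratio between two titles (0.0 to 1.0)."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def best_match(cited_title: str, candidate_titles: list[str]) -> tuple[str, float]:
    """Return the candidate title closest to the cited title, with its score."""
    if not candidate_titles:
        return ("", 0.0)
    scored = [(t, title_similarity(cited_title, t)) for t in candidate_titles]
    return max(scored, key=lambda pair: pair[1])

def crossref_lookup(cited_title: str, rows: int = 3) -> list[str]:
    """Fetch the top candidate titles from Crossref for a cited title.
    Requires network access; Crossref asks clients to identify themselves."""
    url = ("https://api.crossref.org/works?rows=%d&query.bibliographic=%s"
           % (rows, urllib.parse.quote(cited_title)))
    req = urllib.request.Request(
        url, headers={"User-Agent": "citation-check-exercise"})
    with urllib.request.urlopen(req, timeout=10) as resp:
        data = json.load(resp)
    return [item["title"][0] for item in data["message"]["items"]
            if item.get("title")]

# Offline illustration of the matching step (hypothetical titles):
candidates = ["Effects of Dietary Fiber on Cardiovascular Outcomes",
              "Unrelated Paper on Quantum Dots"]
match, score = best_match("Dietary Fiber and Cardiovascular Outcomes",
                          candidates)
# A low best score across all Crossref candidates suggests the citation
# may be fabricated; confirm manually before drawing that conclusion.
```

Note that absence from Crossref is weaker evidence than presence: some legitimate journals are not indexed, so treat automated results as a triage step before the manual checks listed above.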

Part C — Analysis: In 300-400 words, discuss:

  - How many of your four citations were fully fabricated? Partially fabricated (real journal, invented paper)?
  - How long did verification take per citation?
  - Could this verification process be scaled to the volume of AI-generated content being produced? What are the implications of your answer?

Discussion question: If citation verification is the most reliable detection technique and citation fabrication is a consistent LLM behavior, why haven't academic and news organizations implemented systematic citation verification programs for suspect content?


Exercise 37.3 — The Liar's Dividend: Historical Case Analysis

Purpose: To apply the liar's dividend concept to historical propaganda cases examined in the course.

Background: Section 37.7 introduced the liar's dividend: the way that AI generation capability provides plausible deniability for authentic content, allowing genuinely damaging material to be dismissed as potentially synthetic.

Instructions:

Select one of the following historical cases from earlier in the course:

  - The Pentagon Papers (Chapter 6 or Chapter 19)
  - The IRA-produced "Blacktivist" and "Being Patriotic" content (Chapter 20)
  - The tobacco industry's use of front organizations (Chapter 26)
  - The Nazi propaganda ministry's use of forged documents (Chapter 12)

In a structured analysis of 500-600 words, address:

  1. The original evidence: What authentic documentation, recording, or material served as damaging evidence in this case? What made it convincing or difficult to deny?

  2. The counterfactual: If the liar's dividend had been available at the time — if the subjects of this evidence could credibly claim the material was AI-generated — how effective would that defense have been? What would have needed to be true for audiences to accept or reject the "it's synthetic" claim?

  3. Institutional response: What institutions (courts, press, academic analysis) provided the authentication of the genuine evidence? Would those same institutions be able to authenticate equivalent evidence against a liar's dividend defense today?

  4. Contemporary implication: What does this case suggest about the vulnerability of accountability journalism and documentary evidence in the AI era?


Exercise 37.4 — AI-Era Inoculation Message Design

Purpose: To design an inoculation message specifically targeting one of the three AI-era technique categories identified in Section 37.10.

Background: Section 37.10 identified three AI-specific propaganda techniques that existing FLICC-based inoculation does not directly address:

  - The AI authority appeal
  - The synthetic consensus technique
  - The AI manufactured doubt factory

Effective inoculation messages (Chapter 33) require three components: (1) a forewarning that an attempt to manipulate will occur, (2) a weakened form of the manipulative content, and (3) a refutation of the technique.

Instructions:

Select one of the three AI-era techniques. Design a complete inoculation message (800-1,000 words as final product, preceded by 200-300 words of design rationale) that:

  1. Names the technique in accessible, non-jargon language appropriate for a non-expert audience
  2. Forewarns that this type of manipulation exists and is being deployed
  3. Demonstrates with a weakened example — a clearly labeled, simplified illustration of the technique in action
  4. Refutes — explains why the technique works, what makes it misleading, and what a more accurate evaluation would look like
  5. Provides a practical action step the reader can take when they encounter this technique

Design rationale questions to address:

  - Who is your target audience, and why did you select this technique as most relevant for them?
  - What is the appropriate emotional register for your message — alarming, neutral, empowering?
  - What analogies from pre-AI propaganda techniques help translate this concept?

Evaluation criteria: Clarity, adherence to inoculation design principles, appropriateness for stated audience, accuracy of technical description, and persuasive effectiveness of the demonstration.


Exercise 37.5 — Platform Policy Evaluation

Purpose: To critically evaluate existing platform and regulatory responses to AI-generated content.

Instructions:

Research and compare the current AI-generated content policies of two of the following platforms or regulatory frameworks:

  - Meta (Facebook/Instagram)
  - YouTube
  - Twitter/X
  - The EU AI Act (Article 50 provisions)
  - The U.S. FTC's guidelines on AI-generated endorsements (if applicable)

For each, document:

  - What the policy requires
  - Who must comply (platforms, advertisers, users, model operators)
  - What enforcement mechanism exists
  - What exceptions or limitations are written into the policy

Then write a 600-700 word analysis addressing:

  1. Scope: What categories of AI-generated content does the policy address, and what categories are outside its scope?

  2. Enforcement gap: What is the gap between what the policy requires and what it can actually enforce, given the detection limitations discussed in Section 37.6?

  3. Bad actor applicability: Would a foreign state actor conducting a covert AI-generated disinformation operation be meaningfully constrained by this policy? Why or why not?

  4. What you would change: If you were advising the platform or regulatory body, what one specific change would make the policy meaningfully more effective?


Exercise 37.6 — The Scale Calculation

Purpose: To make the economics of the AI content production shift concrete and testable.

Background: Section 37.4 argued that the shift from human-labor content production to LLM-assisted production represents a change in the economics of propaganda, not merely its efficiency. This exercise asks you to make that argument quantitative.

Instructions:

Step 1 — Establish the IRA baseline: Research and document the IRA's 2016 operation:

  - Approximate number of employees
  - Estimated number of posts, articles, and social media interactions produced
  - Estimated budget of the operation
  - Time period of operation

Sources for this research include the Senate Intelligence Committee reports (publicly available), the Mueller Report's Internet Research Agency indictment, and academic analyses by Renée DiResta and colleagues at the Stanford Internet Observatory.

Step 2 — Calculate LLM equivalence: Using publicly available pricing from an LLM API provider (OpenAI, Anthropic, or comparable), calculate:

  - The cost to generate the equivalent volume of content (posts, articles) using LLM API pricing
  - The time required to generate that content with one human operator managing the API
  - The human labor cost saving as a percentage of the original estimated IRA budget
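The arithmetic for Step 2 can be laid out as a small script. Every number below is a placeholder: substitute the post counts and budget figures from your Step 1 research and the current per-token prices published by your chosen API provider. None of the values shown are real rates or real operation statistics.

```python
# Back-of-envelope cost model for Step 2. All constants are hypothetical
# placeholders; replace them with your researched figures and current prices.

POSTS = 80_000                 # hypothetical: total posts to replicate
PROMPT_TOKENS_PER_POST = 200   # hypothetical: average input tokens per post
OUTPUT_TOKENS_PER_POST = 400   # hypothetical: average output tokens per post
PRICE_IN_PER_MTOK = 3.00       # hypothetical: USD per million input tokens
PRICE_OUT_PER_MTOK = 15.00     # hypothetical: USD per million output tokens
IRA_BUDGET_USD = 12_000_000    # hypothetical: researched operation budget

def api_cost(posts, in_tok, out_tok, price_in, price_out):
    """Total API cost in USD to generate `posts` pieces of content."""
    total_in = posts * in_tok
    total_out = posts * out_tok
    return (total_in / 1e6) * price_in + (total_out / 1e6) * price_out

cost = api_cost(POSTS, PROMPT_TOKENS_PER_POST, OUTPUT_TOKENS_PER_POST,
                PRICE_IN_PER_MTOK, PRICE_OUT_PER_MTOK)
saving_pct = 100 * (1 - cost / IRA_BUDGET_USD)

print(f"API generation cost: ${cost:,.2f}")
print(f"Saving vs. researched budget: {saving_pct:.2f}%")
```

Keep the model's limits in mind when you interpret the output: it prices only text generation, which is exactly the point of Step 3's question about the categories of effort (distribution, accounts, coordination) the calculation leaves out.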

Step 3 — Analyze: In 400-500 words, discuss:

  - What does this calculation suggest about the accessibility of IRA-scale influence operations to non-state actors?
  - What categories of effort are not reduced by LLM assistance (distribution, account creation, platform evasion, strategic coordination)?
  - Is the economic transformation you've calculated best described as a quantitative improvement or a qualitative change? Defend your position with reference to the debate framework in Section 37.13.


For instructor use: Exercises 37.1 and 37.2 are appropriate for individual completion in a single session; 37.3 and 37.4 are designed for independent out-of-class work; 37.5 and 37.6 may be assigned as paired or small-group projects. The Progressive Project component (Section 37.15) is tracked separately.