Part Seven: Emerging Frontiers
Chapters 37–40
Everything in Parts One through Six was true before large language models existed.
The techniques in Part Two — emotional appeals, repetition, false authority — predate the internet. The channels in Part Three were all operating before social media. The historical cases in Part Four include events a century old. The inoculation theory in Part Six was developed in the 1960s.
Part Seven examines what is genuinely new: the set of technological capabilities that have emerged in the past decade that change the scale, cost, and nature of propaganda in ways that are not merely extensions of existing patterns.
Chapter 37: AI-Generated Content and Synthetic Media — Large language models can now produce convincing text at a cost that is effectively zero, at a speed that is effectively instant, at a scale that is effectively unlimited. This changes the economics of content farms, astroturfing, and disinformation campaigns in fundamental ways. It also raises a detection challenge: if AI-generated content cannot be reliably identified, what does that mean for the credibility of any text? This chapter examines documented cases, detection methods, and the "liar's dividend" — the use of deepfake awareness to discredit authentic footage.
Chapter 38: Deepfakes, Computational Propaganda, and Influence Operations — Deepfake technology allows the creation of synthetic audio and video that can be, under normal viewing conditions, indistinguishable from authentic recordings. This chapter examines what deepfakes can and cannot do (most effective disinformation still uses simple methods, not cutting-edge AI), documents major state-linked computational influence operations, and examines the forensic and policy responses.
Chapter 39: Information Warfare and the Future of Truth — The "firehose of falsehood" doctrine — producing so much conflicting content that audiences give up trying to determine what is true — represents a strategic goal different from that of classical propaganda. Classical propaganda tries to convince. Information warfare tries to destroy the possibility of shared conviction. This chapter asks whether that goal can be achieved, and what the responses look like in societies that have confronted it most directly.
Chapter 40: Democratic Resilience and the Inoculated Society — The closing chapter of the book is deliberately optimistic about what the evidence supports and honest about what it does not support. Estonia's digital society, Finland's media literacy infrastructure, Taiwan's rapid response to disinformation, and the behavioral research on epistemic humility and tolerance for ambiguity all point toward a version of democratic resilience that is achievable. It is not easy. It is not guaranteed. But the evidence suggests it is possible. Sophia Marin's completed Inoculation Campaign appears here as the closing illustration of what individual action looks like at scale.
Inoculation Campaign: The final component of your project is a future-proofing analysis — identifying which AI-generated or synthetic media threats are most likely to affect your target community in the next three to five years, and incorporating at least one response to that threat into your campaign brief. The completed brief is due with the Capstone.