Chapter 18: Exercises — Deepfakes, Synthetic Media, and Emerging Threats
Part A: Conceptual and Definitional Exercises
Exercise 1: Taxonomy Application
For each of the following, classify the manipulation type (cheap fake / shallowfake / deepfake / not manipulation) and explain your reasoning:
a) A 2019 video of House Speaker Nancy Pelosi that appears to show her slurring her words and speaking incoherently, which was created by slowing the playback speed of a genuine video.
b) A political ad that uses genuine footage of an opposing candidate but edits together clips from different speeches and contexts to suggest a position the candidate does not hold.
c) A video in which a public figure's face has been replaced with AI-generated imagery synchronized to audio of the figure's actual voice.
d) A TikTok filter that maps a cartoon face over the user's face in real time using facial recognition — used only for entertainment, not to deceive.
e) A synthetic video in which an AI-generated "newscaster" — not based on any real person — reads a fabricated news script.
f) An AI-generated image used as a fake profile picture for a social media account spreading political disinformation, where the image was created by a GAN and depicts no real person.
Exercise 2: GAN Architecture Conceptual Questions
Without requiring mathematics, answer the following questions about GANs:
a) Why must the generator and discriminator be trained simultaneously rather than separately? What would happen if you trained a perfect discriminator first and then tried to train the generator against it?
b) The GAN training process aims for a "Nash equilibrium" — a state where neither the generator nor the discriminator can improve by changing its strategy alone. Describe what this would look like concretely in terms of the generator's output and the discriminator's classification behavior.
c) Mode collapse is a common GAN training failure where the generator produces only a limited variety of outputs rather than diverse realistic samples. Why would a generator "choose" to collapse in this way given its training objective?
d) Explain how a face-swap deepfake (replacing Person A's face with Person B's face in a video) differs from a face generation deepfake (generating a completely new face). What information is used in each case?
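For question (b), the equilibrium condition can be made concrete with a small numerical sketch. At the optimum, the discriminator's best response is D*(x) = p_data(x) / (p_data(x) + p_gen(x)); when the generator's distribution matches the data distribution, this collapses to 0.5 everywhere, i.e. a coin flip. The Gaussian parameters below are purely illustrative:

```python
import numpy as np

def gaussian_pdf(x, mu, sigma):
    """Density of a normal distribution, evaluated pointwise."""
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

xs = np.linspace(-3, 3, 101)

# Before convergence: the generator's distribution is offset from the data,
# so the optimal discriminator is confidently right on much of the support.
d_early = gaussian_pdf(xs, 0, 1) / (gaussian_pdf(xs, 0, 1) + gaussian_pdf(xs, 1.5, 1))

# At the Nash equilibrium the two distributions coincide, and the optimal
# discriminator degenerates to 0.5 everywhere.
d_equil = gaussian_pdf(xs, 0, 1) / (gaussian_pdf(xs, 0, 1) + gaussian_pdf(xs, 0, 1))

print(d_early.min(), d_early.max())  # far from 0.5 at the edges of the support
print(d_equil.min(), d_equil.max())  # exactly 0.5 everywhere
```

This is why "the discriminator outputs 0.5 on everything" is the standard concrete description of GAN convergence: no change to either player's strategy alone improves its objective.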
Exercise 3: Historical Parallels
The chapter traces a history of image and video manipulation from darkroom techniques through Photoshop to deepfakes.
a) Research one specific pre-digital example of photographic manipulation used for political purposes (besides Stalin's photo retouching, which is mentioned in the chapter). Describe the technique, the intent, and the historical context.
b) In what ways did the Photoshop era change the epistemology of still photography? Did people's trust in photographs decline after widespread Photoshop awareness? Research any surveys or studies on this question.
c) The chapter argues that the deepfake transition represents a "qualitative shift" from the Photoshop era, not just a quantitative improvement. State the strongest version of this argument in your own words, then write the strongest counter-argument (that deepfakes are just the latest version of an old problem).
Exercise 4: The Liar's Dividend Case Studies
Research and analyze the following cases where the Liar's Dividend dynamic appears:
a) In August 2022, a Republican candidate for governor in Arizona claimed that authentic video footage of him making controversial statements was a "deepfake." Research this claim. Was it credible? What methods were used to authenticate the footage?
b) International Context: Research the "Gabon Bongo video" controversy described in the chapter. What specific visual and technical features did deepfake researchers analyze when evaluating whether the video was authentic? What was the tentative conclusion?
c) In what legal proceeding has the "this video is a deepfake" defense been raised by a defendant? Research any documented examples. What was the outcome, and what authentication procedures did the court use?
Exercise 5: NCII Harms Assessment
Non-consensual intimate imagery represents the most common current deepfake harm.
a) Research the 2023 case of deepfake intimate images distributed at Westfield High School in New Jersey, or a similar documented case involving non-consensual deepfakes targeting minors. Describe the incident, the school's and legal system's response, and the outcomes.
b) The Sensity/Deeptrace research found approximately 96% of deepfakes were NCII. What does this distribution tell us about where regulatory and platform enforcement resources should be prioritized? Are current policy debates appropriately reflecting this distribution?
c) A target of NCII deepfakes takes legal action in a U.S. state with comprehensive NCII legislation. Trace the available legal remedies: (1) Criminal prosecution of the creator; (2) Civil action against the creator; (3) Platform takedown mechanisms; (4) Federal law options. What are the realistic limitations of each?
Exercise 6: Detection Technology Evaluation
For each of the following deepfake detection methods, evaluate: (1) the underlying detection mechanism, (2) the conditions under which it works best, and (3) its known limitations and vulnerabilities to countermeasures:
a) Eye blinking frequency analysis
b) GAN fingerprint detection via frequency domain analysis
c) Physiological signal detection (remote photoplethysmography)
d) Facial boundary artifact analysis
e) C2PA content credentials verification
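To build intuition for frequency-domain fingerprint detection (method b), the sketch below contrasts the 2-D spectrum of a smooth synthetic "image" with one carrying a periodic checkerboard artifact of the kind that transposed-convolution upsampling can leave. This is a toy illustration on synthetic arrays, not a real detector:

```python
import numpy as np

rng = np.random.default_rng(0)
size = 64

# Stand-in for a natural image: integrated noise is dominated by low frequencies.
smooth = rng.normal(size=(size, size)).cumsum(0).cumsum(1)

# Add a 2-pixel checkerboard, a crude model of an upsampling grid artifact.
artifact = smooth + 0.5 * (np.indices((size, size)).sum(0) % 2)

def nyquist_energy(img):
    """Magnitude of the spectrum at the highest spatial frequency in both axes,
    where a 2-pixel checkerboard concentrates its energy."""
    f = np.fft.fftshift(np.fft.fft2(img - img.mean()))
    return np.abs(f)[0, 0]  # after fftshift, index 0 is the Nyquist frequency

print(nyquist_energy(smooth) < nyquist_energy(artifact))  # True
```

Real GAN fingerprints are subtler and model-specific, but the principle is the same: synthesis pipelines leave statistical regularities in the spectrum that camera sensors do not.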
Exercise 7: Voice Cloning Threat Assessment
The chapter describes voice cloning as a distinct threat from visual deepfakes.
a) Using publicly available information (news reports, academic papers), find and describe three documented cases of voice cloning used for financial fraud. For each: (1) describe the method, (2) identify the target and perpetrators if known, (3) assess the financial damage, and (4) explain whether the fraud was detected and how.
b) Design a "voice authentication protocol" for a business — specifically, a set of procedures for verifying that a voice call requesting a financial transaction is genuine. What steps would you include? What are the limitations?
c) Compare the regulatory frameworks for preventing voice cloning fraud in financial services to existing frameworks for preventing identity fraud. What do existing identity fraud protections cover? What gaps exist for voice cloning?
Exercise 8: Legal Framework Analysis
Analyze the legal landscape for synthetic media.
a) Research California's AB 730 (2019) and AB 602 (2019), addressing deepfakes in elections and NCII respectively. What conduct does each prohibit? What are the penalties? How have these laws been enforced?
b) The TAKE IT DOWN Act was introduced in the U.S. Senate to require platforms to remove non-consensual deepfake intimate imagery. Research the act's provisions. What does it require of platforms? How does it balance speech interests with victim protection?
c) Evaluate the First Amendment implications of laws restricting political deepfakes. Under what conditions might such laws be constitutional? What is the "strict scrutiny" standard, and does it apply to deepfake regulation?
Part B: Applied Analysis Exercises
Exercise 9: Deepfake Detection Practice
Visit one or more of the following platforms that host practice deepfake detection challenges (as of 2024):
- The MIT Media Lab's "Detect Fakes" challenge: detectfakes.media.mit.edu
- Microsoft Video Authenticator (if publicly accessible)
- Any current deepfake detection challenge hosted by DARPA or similar
a) Complete the available detection challenge. Record your accuracy rate.
b) What characteristics did you use to identify deepfakes? Were they consistent with the forensic indicators described in the chapter?
c) What types of deepfakes were most difficult to detect? What does this tell you about the current state of the technology?
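When recording your accuracy rate, keep in mind that a percentage from a small number of clips is noisy. A Wilson score interval gives a more honest range; the counts below (14 correct out of 20) are example numbers only:

```python
import math

def wilson_interval(correct, total, z=1.96):
    """95% Wilson score confidence interval for a binomial proportion."""
    p = correct / total
    denom = 1 + z * z / total
    center = (p + z * z / (2 * total)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / total + z * z / (4 * total * total))
    return center - half, center + half

lo, hi = wilson_interval(14, 20)
print(f"observed accuracy 70%, 95% CI roughly ({lo:.0%}, {hi:.0%})")
```

Note how wide the interval is at this sample size: an observed 70% is statistically hard to distinguish from coin-flipping, which matters when interpreting your own results in part (a).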
Exercise 10: C2PA Hands-On
Adobe's "Content Credentials" feature implements C2PA in Adobe Photoshop and other products.
a) Find a publicly available image with C2PA Content Credentials attached (Adobe has published example images). Examine the credentials: what information is included? What operations are logged?
b) Using the Content Credentials Verify tool at contentcredentials.org, verify the credentials on an image and interpret the output.
c) What happens to the C2PA credentials when an image is: (1) re-saved as a new JPEG, (2) screenshotted, (3) posted and re-downloaded from Twitter/X, (4) uploaded and downloaded from a C2PA-aware platform? What are the implications for metadata persistence?
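For part (c), a crude first check of whether a file still carries an embedded manifest is a byte scan: C2PA stores its manifest in a JUMBF box (in JPEG, inside APP11 segments), so the byte signature "c2pa" typically appears in files that retain it and vanishes after operations that discard metadata, such as screenshotting. This is a heuristic sketch only; use the Content Credentials Verify tool for real verification. The file contents below are illustrative stand-ins, not real images:

```python
def has_c2pa_marker(data: bytes) -> bool:
    """Heuristic: does the file contain the C2PA/JUMBF byte signature?"""
    return b"c2pa" in data

# Hypothetical stand-ins for file contents:
with_manifest = b"\xff\xd8...jumb...c2pa...\xff\xd9"   # JPEG retaining its manifest
screenshot = b"\x89PNG\r\n\x1a\n...pixels only..."      # screenshot: metadata gone

print(has_c2pa_marker(with_manifest))  # True
print(has_c2pa_marker(screenshot))     # False
```

A passing byte scan does not mean the credentials are valid (the signature could be damaged or forged); cryptographic verification is what the Verify tool adds.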
Exercise 11: Synthetic Media Audit
Select one major platform (YouTube, TikTok, Instagram, or X/Twitter) and conduct a structured audit of its synthetic media policies.
a) Locate the platform's official policy on AI-generated content and deepfakes. What specific conduct is prohibited? What is permitted?
b) Assess the enforcement mechanisms: how does the platform detect policy violations? What are the consequences for violators?
c) Research documented cases where the platform was criticized for either over-removing content (false positives) or failing to remove harmful deepfakes. What do these cases reveal about the challenges of platform enforcement?
d) Compare the platform's policy to the recommendations made in the EU AI Act's requirements for synthetic media labeling. What gaps exist?
Exercise 12: Regulatory Comparative Analysis
Compare the regulatory approaches to deepfakes in three jurisdictions: the United States, the European Union, and China.
a) United States: Characterize the current U.S. approach as primarily federal, state, or platform-based, and evaluate its effectiveness and gaps.
b) European Union: The EU AI Act (2024) includes provisions on synthetic media. Research these provisions: what are AI systems required to do regarding disclosure of AI-generated content? What are the enforcement mechanisms?
c) China: China enacted regulations on "deep synthesis" (深度合成) technologies in 2022, requiring labeling of all AI-generated content. Research these regulations. What is the scope? How is enforcement structured?
d) Which approach is likely to be most effective? Most protective of legitimate speech? Most enforceable internationally?
Exercise 13: The News Media Response
Major news organizations have developed policies for using or reporting on synthetic media.
a) Research the Associated Press's policies on AI-generated imagery. When will AP use or distribute AI-generated images? What verification standards apply?
b) The BBC has participated in developing the C2PA standard and has implemented Content Credentials. Research how the BBC is implementing provenance standards in its journalism.
c) Design a comprehensive synthetic media policy for a mid-sized local newspaper. Address: use of AI tools in production, verification standards for user-submitted video, disclosure requirements, and procedures for reporting on deepfakes.
Exercise 14: Deepfake Economics
Analyze the economic incentives driving deepfake creation and distribution.
a) Research the business models of websites hosting non-consensual deepfake intimate imagery. How do they generate revenue? What financial intermediaries enable their operation?
b) Some jurisdictions have pursued payment processors (Visa, Mastercard) to cut off services to NCII distribution platforms, analogous to the FOSTA-SESTA approach to online sex trafficking. Evaluate this approach: how effective has it been when applied to similar platforms?
c) On the production side: estimate the current cost of producing a convincing political deepfake video. Consider cloud computing costs, software (much is free), and labor. How has this cost changed since 2018? What does the cost trajectory imply for future threat levels?
Exercise 15: Designing a Verification Protocol
You are advising a national television news organization on how to handle video evidence received from external sources — citizen journalists, whistleblowers, and wire services.
Design a comprehensive video verification protocol that:
a) Identifies the minimum metadata and provenance information that should accompany any externally sourced video.
b) Specifies the technical analysis steps that should be applied to each received video before broadcast.
c) Establishes editorial standards for how uncertainty about video authenticity should be disclosed to viewers.
d) Addresses the speed/accuracy tradeoff — how to maintain competitive news speed while not broadcasting deepfakes.
Exercise 16: Psychological Effects Research
Research the psychological literature on how people respond to evidence of deepfakes.
a) The "truth default theory" (Levine, 2014) suggests humans tend to presume truth in communication by default. How does this theory predict people will respond to deepfakes? Does existing research support or challenge this prediction?
b) Research the "illusory truth effect" — the phenomenon where repeated exposure to a claim increases belief in it regardless of truth. How might this interact with deepfake disinformation campaigns that repeatedly expose audiences to false claims?
c) Research any studies on how awareness of deepfakes affects trust in genuine video. Does knowing deepfakes exist reduce appropriate trust in authentic footage? What does the evidence say?
Exercise 17: Audio Deepfake Detection
The chapter describes technical approaches to detecting audio deepfakes.
a) Research the ASVspoof challenge — a recurring competition on detecting spoofed and synthetic speech in automatic speaker verification systems. What are the current state-of-the-art countermeasure methods? What detection rates are achieved on the benchmark?
b) Design a simple audio authentication test for phone-based identity verification. What questions or prompts would be most effective at distinguishing live human speakers from voice clone playback? What are the limitations of your approach?
c) Audio deepfake detector research papers on arXiv (search "audio deepfake detection") frequently report accuracy rates above 95% on test sets. Why do researchers caution that these numbers may not reflect real-world performance? What is the difference between in-distribution and out-of-distribution performance?
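The in-distribution versus out-of-distribution gap in part (c) can be demonstrated with a toy model: a detector that learns a threshold on one generator's artifact statistic fails when a new generator shifts that statistic. All distributions below are synthetic stand-ins, not real detector scores:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5000

real = rng.normal(0.0, 1.0, n)        # artifact scores of genuine audio
fake_seen = rng.normal(3.0, 1.0, n)   # generator represented in the training set
fake_new = rng.normal(0.5, 1.0, n)    # unseen generator with subtler artifacts

threshold = 1.5  # roughly optimal for separating real from fake_seen

def accuracy(real_scores, fake_scores, thr):
    """Fraction correctly classified by the fixed threshold."""
    correct = (real_scores < thr).sum() + (fake_scores >= thr).sum()
    return correct / (len(real_scores) + len(fake_scores))

acc_in = accuracy(real, fake_seen, threshold)
acc_ood = accuracy(real, fake_new, threshold)
print(f"in-distribution:     {acc_in:.1%}")
print(f"out-of-distribution: {acc_ood:.1%}")
```

The same fixed decision rule goes from strong performance to near chance, which is exactly the pattern the research literature cautions about when benchmark numbers are quoted without cross-dataset evaluation.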
Exercise 18: Deepfake Literacy Curriculum Design
You have been asked to design a one-hour deepfake literacy module for high school students (ages 15-18).
a) What three core learning outcomes would you prioritize for this age group?
b) Design one hands-on activity that would help students develop deepfake detection skills without requiring specialized software.
c) How would you address the Liar's Dividend in language accessible to high school students? What examples would you use?
d) What would you identify as the three most important "deepfake warning signs" to teach students to look for?
Exercise 19: Conflict Zone Synthetic Media
The Zelensky surrender video (Case Study 18-1) illustrates deepfake use in armed conflict.
a) Research documented examples of synthetic media use in other armed conflicts since 2020 (consider the Israel-Hamas conflict, Ethiopia-Tigray conflict, or Myanmar civil conflict). For each example: (1) what was the synthetic media? (2) who created it? (3) what was the target audience? (4) what was the actual impact?
b) The use of deepfakes in armed conflict may implicate international humanitarian law (IHL). Research whether the laws of war — specifically the prohibition on perfidy (deception that betrays the enemy's confidence in legal protections) — could apply to state-sponsored deepfakes in armed conflict.
c) International humanitarian organizations (ICRC, UN OCHA) respond to conflict-related disinformation. Research how these organizations are addressing synthetic media in conflict contexts. What frameworks have they developed?
Exercise 20: Platform Accountability
The chapter notes that platform policies on synthetic media vary significantly.
a) Compare the deepfake policies of YouTube, TikTok, Meta (Facebook/Instagram), and X/Twitter across five dimensions: (1) prohibition scope, (2) detection mechanisms, (3) labeling requirements, (4) enforcement penalties, and (5) transparency reporting.
b) The Digital Services Act (DSA) in the EU requires large platforms to assess and mitigate systemic risks, including disinformation. How does this framework apply to synthetic media risks? What specific mitigation measures would the DSA require?
c) Design a model deepfake policy for a hypothetical new social media platform. Include: prohibited conduct, required disclosures, detection commitments, enforcement procedures, appeals processes, and transparency reporting requirements.
Exercise 21: Generative AI Company Policies
Major generative AI companies have implemented policies to prevent misuse of their systems.
a) Research OpenAI's usage policies regarding deepfakes and synthetic media. What specific uses are prohibited? How are these restrictions enforced technically? What happens when users attempt to circumvent them?
b) Evaluate the effectiveness of "safety training" approaches — training AI models to refuse requests for harmful content. Research documented examples of "jailbreaks" that circumvent these restrictions. What does the pattern of successful and unsuccessful jailbreaks reveal about the limitations of this approach?
c) The open-source AI community has released models without safety restrictions (models like "uncensored" LLaMA variants). What are the implications for deepfake policy when the technology is freely available without restrictions?
Exercise 22: Physiological Detection Research
Research the physiological signal detection approach to deepfake detection.
a) The technique of remote photoplethysmography (rPPG) detects blood pulse from subtle color variations in skin visible on camera. Research the original academic work on this detection method for deepfakes. What accuracy rates were achieved? Under what conditions?
b) Can this detection approach be defeated by generating artificial physiological signals in the synthetic video? Research whether any work exists on this adversarial approach.
c) Other physiological signals that might be detectable include pupil dilation, subtle facial muscle movements (FACS action units), and micro-expressions. Are any of these currently used in deepfake detection research? What are the prospects?
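The core rPPG computation in part (a) can be sketched in a few lines: average the green channel over the face region in each frame, then look for a dominant spectral peak in the plausible heart-rate band (roughly 0.7–4 Hz). The time series below is synthetic, with a 1.2 Hz (72 bpm) pulse buried in noise:

```python
import numpy as np

fps = 30.0
t = np.arange(0, 10, 1 / fps)  # 10 seconds of video at 30 fps
rng = np.random.default_rng(2)

# Stand-in for per-frame mean green-channel intensity over the face region.
green = 0.5 * np.sin(2 * np.pi * 1.2 * t) + 0.2 * rng.normal(size=t.size)

freqs = np.fft.rfftfreq(t.size, d=1 / fps)
power = np.abs(np.fft.rfft(green - green.mean())) ** 2

band = (freqs >= 0.7) & (freqs <= 4.0)  # restrict to the heart-rate band
peak_hz = freqs[band][np.argmax(power[band])]

print(f"estimated pulse: {peak_hz * 60:.0f} bpm")
```

A deepfake that does not reproduce this subtle periodic skin-color variation yields no coherent in-band peak, which is the signal the detection papers exploit; part (b) asks whether a generator could simply synthesize one.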
Exercise 23: Economic Consequences of Deepfakes
The chapter focuses on reputational, psychological, and political harms. Assess the economic dimensions.
a) Estimate the total annual economic damage from voice cloning financial fraud globally. What sources and methodologies would you use for this estimate? What is the range of reasonable estimates?
b) Research the litigation costs imposed on companies and individuals by reputation-based deepfake attacks. What does it cost to defend against a deepfake-based reputation attack? Are legal remedies economically practical for ordinary individuals?
c) The deepfake detection industry has grown as a commercial sector. Research three commercial deepfake detection companies, their technology approaches, and their target markets. What does the commercialization of detection tell us about the perceived scale of the threat?
Exercise 24: Epistemic Infrastructure
The chapter argues that appropriate verification cultures may be more durable than purely technical solutions.
a) What social and institutional structures currently perform the function of verifying audiovisual evidence? How are journalists, courts, intelligence agencies, and academic researchers currently equipped to authenticate video and audio evidence?
b) As deepfake technology improves, these institutions will need to adapt. For each of the following, identify one specific capability that should be developed: (1) journalism schools, (2) law schools/legal practitioners, (3) courts (evidence rules), (4) intelligence agencies, (5) high school media literacy curricula.
c) "Epistemic inequality" — the unequal distribution of verification tools and skills — is identified as a concern. Who currently has access to professional deepfake detection tools? How might this inequality be addressed through policy or technology?
Exercise 25: Future Scenario Analysis
It is 2030. Generative AI can produce real-time, indistinguishable deepfakes of any public figure, requiring only a photograph and a text prompt. Voice cloning requires 5 seconds of audio.
a) Write a two-page scenario analysis describing how political campaigning has changed in this environment. What new norms, technologies, and regulations have emerged?
b) Write a two-page scenario analysis of how legal proceedings have adapted. How do courts handle audiovisual evidence? What new authentication requirements have been established?
c) Write a one-page argument for why this scenario, while challenging, would not necessarily destroy democratic discourse — specifically, what institutions and practices might prevent the worst outcomes.
Exercise 26: Content Provenance Implementation
You are advising a smartphone manufacturer on implementing C2PA in their camera application.
a) Describe the technical steps required to implement C2PA at the hardware/firmware level in a smartphone camera. What cryptographic operations are required?
b) What threat model does C2PA protect against? Specifically, which attacks can C2PA definitively counter, and which can it not?
c) Some privacy advocates have raised concerns about C2PA: that requiring cameras to sign images creates a permanent, auditable record of what people photograph, which could be used for surveillance. Evaluate this concern. How should it be balanced against the anti-deepfake benefits?
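The essential flow in part (a) is hash-then-sign at capture time. The sketch below shows the shape of that flow under simplifying assumptions: real C2PA uses COSE signatures with X.509 device certificates, whereas the HMAC here is a symmetric stand-in so the example stays self-contained, and all names and byte strings are hypothetical:

```python
import hashlib
import hmac

DEVICE_KEY = b"per-device secret provisioned at manufacture (stand-in)"

def sign_capture(image_bytes: bytes, assertions: str) -> bytes:
    """Hash the image plus its assertion record, then sign the digest."""
    digest = hashlib.sha256(image_bytes + assertions.encode()).digest()
    return hmac.new(DEVICE_KEY, digest, hashlib.sha256).digest()

def verify_capture(image_bytes: bytes, assertions: str, sig: bytes) -> bool:
    """Recompute the signature and compare in constant time."""
    return hmac.compare_digest(sign_capture(image_bytes, assertions), sig)

photo = b"\xff\xd8 raw jpeg bytes \xff\xd9"
claims = '{"captured": "2024-01-01T12:00:00Z", "device": "example"}'

sig = sign_capture(photo, claims)
print(verify_capture(photo, claims, sig))            # True: untouched file
print(verify_capture(photo + b"edit", claims, sig))  # False: any change breaks the hash
```

Even this toy makes the threat model in part (b) concrete: the scheme proves the bytes are unchanged since signing, but it cannot prove the scene in front of the lens was genuine (e.g., photographing a screen displaying a deepfake).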
Exercise 27: Deepfakes in Commercial and Creative Contexts
Not all deepfakes are malicious. The technology has legitimate commercial and creative applications.
a) Research how the film industry uses facial de-aging, posthumous appearance (e.g., Peter Cushing in Rogue One), and digital doubles. What ethical standards have emerged in the industry?
b) The entertainment industry has been negotiating with AI companies and labor unions about synthetic likenesses. Research the SAG-AFTRA AI agreements of 2023-2024. What protections for actors' synthetic likenesses were negotiated?
c) Design a framework for "ethical deepfakes" — synthetic media use cases that you would permit, require to be labeled, or prohibit outright. Justify your framework.
Exercise 28: Psychological Research Methods
Researchers study how people perceive and respond to deepfakes using experiments.
a) Design an experiment to measure how exposure to deepfake content affects trust in subsequently seen authentic video. What would be your experimental design? What ethical considerations apply?
b) Research the concept of "motivated skepticism" — the tendency to be more skeptical of information that contradicts one's beliefs. How might motivated skepticism interact with deepfakes in a polarized information environment?
c) Research any published studies on individual differences in deepfake detection ability. Are some people systematically better at detecting deepfakes? What demographic or cognitive characteristics are associated with better detection?
Exercise 29: International Governance
Deepfakes present challenges for international governance because generation and distribution are frequently cross-border.
a) Research the Budapest Convention on Cybercrime. Does it cover deepfake-related crimes? What would be required to extend it to cover NCII deepfakes internationally?
b) UNESCO has issued guidelines on AI and information integrity. Research these guidelines and evaluate how they address synthetic media.
c) Design a model international treaty provision on deepfakes in electoral contexts. What conduct would it prohibit? How would enforcement be structured between sovereign states?
Exercise 30: Critical Evaluation of Detection Research
The academic deepfake detection literature has been criticized for methodological limitations.
a) Research the FaceForensics++ benchmark — one of the most widely used datasets for training and evaluating deepfake detectors. What manipulation methods does it include? What are its known limitations?
b) Many detection papers report accuracy rates above 95% but acknowledge poor generalization. Research the concept of "cross-dataset generalization" in deepfake detection. Why do detectors fail to generalize, and what research approaches address this problem?
c) Write a critical evaluation (one page) of what the high in-distribution accuracy rates reported in detection papers actually mean for real-world deployment. What questions would you want answered before trusting a detection system with an accuracy rate of 98% on a benchmark?
Exercise 31: Synthetic Audio in Music
Voice cloning and AI music generation have created specific controversies in the music industry.
a) Research the case of the AI-generated "Drake and The Weeknd" song that circulated in April 2023. What was the song? Who created it? What was the response from the artists, their label (Universal Music Group), and streaming platforms?
b) The legal status of cloning a musician's voice is unclear under current copyright law — a person's voice is not copyrightable, but specific recordings are. Research the Right of Publicity doctrine and evaluate how it applies to AI voice cloning of musicians.
c) Research any proposed legislation specifically addressing AI-generated music or voice cloning in the entertainment industry. What has been proposed, and what is the status of any legislation?
Exercise 32: Detection Tool Evaluation
Research and evaluate three publicly available deepfake detection tools.
For each tool, assess:
a) What types of synthetic media does it claim to detect?
b) What is its reported accuracy on standard benchmarks?
c) What independent evaluations, if any, exist of its performance?
d) Is it accessible to ordinary users without technical expertise?
e) What are its known limitations?
Conclude with an assessment of the current practical utility of publicly available detection tools for ordinary users.