Chapter 35: Exercises — Generative AI Ethics

25 Exercises for Business and Policy Professionals


Foundational Understanding

Exercise 1: Hallucination Audit Select a professional topic relevant to your field (a legal standard, a regulatory requirement, a medical protocol, a financial regulation). Use a publicly available LLM to generate five factual claims about the topic. Then verify each claim against authoritative sources. Document: which claims were accurate, which were inaccurate, which were partially accurate, and how confident the model appeared in each case. Write a 500-word reflection on what this exercise reveals about verification requirements for professional AI use.
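If you want to keep the audit systematic, a minimal Python sketch of an audit log follows. The field names and example entries are illustrative assumptions, not a format prescribed by the exercise:

```python
from dataclasses import dataclass
from collections import Counter

@dataclass
class ClaimRecord:
    claim: str             # the model's factual claim, verbatim
    verdict: str           # "accurate" | "inaccurate" | "partial"
    model_confidence: str  # how confident the model sounded ("high" / "hedged")
    source: str            # the authoritative source used to verify

def summarize(records):
    """Tally verdicts so the written reflection can cite exact counts."""
    return Counter(r.verdict for r in records)

# Hypothetical audit of two claims:
audit = [
    ClaimRecord("Claim 1 ...", "accurate", "high", "Statute X"),
    ClaimRecord("Claim 2 ...", "inaccurate", "high", "Agency guidance Y"),
]
print(summarize(audit))
```

Recording the model's apparent confidence alongside the verdict is what lets you test the chapter's central point: confidence and accuracy are uncorrelated.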

Exercise 2: The Confident-Wrong Taxonomy Research three documented cases of professional harm caused by AI hallucination (the Schwartz case is one; find two others in different domains). For each case, identify: (a) the professional context, (b) the nature of the hallucination, (c) the harm that resulted, (d) the verification step that would have prevented harm, and (e) the governance failure that allowed the error to reach a harmful stage. What common patterns do you identify across cases?

Exercise 3: Generative vs. Discriminative AI Create a table distinguishing five common enterprise AI applications as either primarily generative or primarily discriminative. For each, describe the primary ethical risk — and explain why the generative/discriminative distinction matters for that risk. Discuss in class: does this taxonomy help organizations think clearly about AI governance?

Exercise 4: The Emergent Capability Problem Read about a capability of a large language model that was not anticipated by its developers (several examples have been documented in published AI research). Write a brief case analysis: What was the capability? How was it discovered? What harm potential does it carry? What does its emergence reveal about the limits of pre-deployment safety evaluation?


Deepfakes and Synthetic Media

Exercise 5: Deepfake Spectrum Analysis List ten uses of synthetic media technology and place them on a spectrum from clearly ethical to clearly harmful. For the cases in the middle of the spectrum — satire, historical recreation, entertainment — identify the factors that determine whether a particular use is acceptable or not. Draft a framework of three to five principles that could guide institutional decision-making about synthetic media use.

Exercise 6: NCII Legislative Comparison Research the non-consensual intimate imagery (NCII) law currently in force in your jurisdiction and in two other states. Compare: (a) whether AI-generated NCII is explicitly covered, (b) whether the law creates criminal liability, civil liability, or both, (c) what remedies are available to victims, and (d) what defenses are available to perpetrators. Based on your analysis, draft a model state statute that addresses AI-generated NCII comprehensively.

Exercise 7: Platform Governance Design You are a Trust and Safety executive at a major social media platform. Design a comprehensive policy framework for addressing AI-generated NCII on your platform. Your framework should address: detection (how will you identify NCII?), removal (what process will you use?), speed (what timelines will you commit to?), appeals (how will content be restored if wrongly removed?), and victim support. What resources would implementing this framework require?

Exercise 8: The Verification Crisis Consider the following scenario: A video surfaces during an election campaign showing a candidate making a statement they deny making. The candidate claims it is a deepfake. The campaign that released the video claims it is authentic. No one with access to the original footage is cooperative. Write a 600-word analysis of: how journalists should cover this scenario; how courts might handle it as evidence; and what technical and governance mechanisms would be necessary to resolve disputes of this type.


Copyright and Creative Labor

Exercise 9: The Fair Use Analysis The four-factor fair use test requires analyzing purpose and character, nature of the copyrighted work, amount used, and market effect. Apply this analysis to the following scenario: A company trains a generative AI model on 500,000 news articles scraped from various news publications without license. The resulting model can generate news-style articles on current events and is sold commercially. Does the training use constitute fair use? Present arguments on both sides, then give your assessment.

Exercise 10: Artist Impact Interview Interview a working creative professional — a writer, visual artist, photographer, musician, or voice actor — about their experience of generative AI. Prepare at least ten questions covering: direct commercial impacts they have experienced; their understanding of how AI systems were trained; their views on fair compensation frameworks; and what governance mechanisms they believe would protect their interests. Write a 750-word case analysis based on the interview.

Exercise 11: Creative Labor Displacement Modeling Research available data on the economic impact of generative AI on at least two creative labor markets (for example, stock photography, commercial copywriting, or voice-over work). What does the available evidence show about demand changes? How is this comparable to prior technological labor disruptions (digitization of music, desktop publishing)? What is different about the generative AI disruption, if anything? What policy responses, if any, are appropriate?

Exercise 12: The Style Question Copyright protects specific expression, not style. Research the ongoing artist class actions against AI image generation companies and identify: (a) what specific legal claims are being made; (b) what evidence the plaintiffs have offered that style imitation constitutes harm; and (c) what the defendants' primary counterarguments are. Draft a judicial opinion — 500 words — that resolves the central legal question in one direction and explains the reasoning.


Bias and Representation

Exercise 13: Image Generation Bias Audit Using an accessible AI image generation tool, generate twenty images using prompts that do not specify demographic characteristics (for example: "a doctor in a hospital," "a software engineer at a desk," "a judge at a bench," "a criminal defendant," "a CEO at a meeting," "a nurse," "a construction worker"). Analyze the results for patterns in gender presentation, racial presentation, and age. Document your findings in a research memo. What do the patterns reveal about the model's training data and the representational choices embedded in the model?
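One way to organize the analysis is to annotate each generated image by hand and tally the annotations per prompt. A Python sketch follows; the labels are your own coding choices for this exercise, not output from any image tool:

```python
from collections import Counter, defaultdict

# Manually annotate each generated image; every row here is illustrative.
# Each row: (prompt, perceived_gender, perceived_race, perceived_age_band)
annotations = [
    ("a doctor in a hospital", "man", "white", "30-50"),
    ("a doctor in a hospital", "man", "white", "30-50"),
    ("a nurse", "woman", "white", "20-40"),
    ("a CEO at a meeting", "man", "white", "40-60"),
]

def breakdown(rows, attribute_index):
    """Per-prompt distribution of one annotated attribute."""
    per_prompt = defaultdict(Counter)
    for row in rows:
        per_prompt[row[0]][row[attribute_index]] += 1
    return dict(per_prompt)

gender_by_prompt = breakdown(annotations, 1)
```

Twenty images is a small sample, so report counts rather than percentages, and note in the memo that perceived demographic labels are themselves a judgment call worth discussing.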

Exercise 14: The Gemini Controversy Analysis Research the February 2024 controversy surrounding Google Gemini's image generation. What specifically did the system produce that generated controversy? What competing values were in tension (historical accuracy vs. representation, diversity vs. contextual appropriateness)? How did Google respond? Assess Google's response. What would you have done differently in designing or correcting the system?

Exercise 15: Stereotype Amplification Documentation Design a systematic prompt study to test whether a specific LLM amplifies stereotypes in a domain you specify (professional roles and gender, crime and race, intellectual ability and nationality, etc.). Develop a methodology, conduct the study if tools are accessible, and write up findings and implications for professional deployment of LLMs in human resources or content creation contexts.
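A methodology sketch in Python: cross role templates with a fixed completion task, then count gendered pronouns in whatever text the model returns. Here `query_model` is a hypothetical stand-in for your LLM client, and the roles and template are illustrative:

```python
import re

ROLES = ["engineer", "nurse", "judge", "kindergarten teacher"]
TEMPLATE = "Complete the sentence: The {role} picked up the phone because"

MASCULINE = {"he", "him", "his"}
FEMININE = {"she", "her", "hers"}

def gender_counts(text):
    """Count masculine vs. feminine pronouns in a model completion."""
    words = re.findall(r"[a-z']+", text.lower())
    return (sum(w in MASCULINE for w in words),
            sum(w in FEMININE for w in words))

def run_study(query_model, n_samples=25):
    """query_model(prompt) -> completion string (hypothetical client).
    Returns, per role, a list of (masculine, feminine) pronoun tallies."""
    results = {}
    for role in ROLES:
        prompt = TEMPLATE.format(role=role)
        results[role] = [gender_counts(query_model(prompt))
                         for _ in range(n_samples)]
    return results
```

To distinguish amplification from mere reflection, compare the observed pronoun ratios per role against a real-world base rate (for example, labor statistics for that occupation): amplification means the model's skew exceeds the world's.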


Privacy and Data

Exercise 16: GDPR Training Data Analysis The GDPR's lawfulness requirements apply to the personal data used to train AI models. Identify the three legal bases under Article 6 GDPR that an AI company is most likely to claim for training data containing personal information (consent and legitimate interests are the most commonly invoked). For each, analyze: (a) what conditions must be met; (b) whether those conditions are plausibly met by typical web-scraping practices; and (c) what documentation an AI company would need to demonstrate compliance. Consult available guidance from European data protection authorities.

Exercise 17: The Right to Erasure Problem An individual discovers that a photograph of them and details of their personal life were included in the training data of a commercial LLM. They submit a data deletion request under GDPR. The AI company responds that technical deletion from the model is not currently feasible. Write a legal analysis: (a) Is the company's response legally adequate under GDPR? (b) What remedies might the individual seek? (c) What governance obligations should AI companies have established before deploying models trained on personal data?

Exercise 18: Memorization Risk Assessment Research published papers on training data extraction attacks against LLMs (several published papers from research groups at Google, DeepMind, and academic institutions are publicly available). Describe the memorization risk: What types of data are most likely to be memorized? How can memorized data be extracted? What harm can result? What technical mitigations reduce memorization risk? Assess whether current mitigation approaches are adequate.
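The core measurement in the extraction literature can be sketched as follows. The `complete` function is a hypothetical stand-in for a model API; the scoring logic, not the client, is the point:

```python
def shared_prefix_len(a, b):
    """Length of the common prefix of two strings."""
    n = 0
    for x, y in zip(a, b):
        if x != y:
            break
        n += 1
    return n

def memorization_score(complete, known_text, split=50):
    """Feed the model the first `split` characters of a string suspected
    to be in its training data, then measure how much of the true
    continuation it reproduces verbatim. complete(prompt) -> str is a
    hypothetical model client. Returns a fraction in [0, 1]."""
    prefix, true_suffix = known_text[:split], known_text[split:]
    generated = complete(prefix)
    if not true_suffix:
        return 0.0
    return shared_prefix_len(generated, true_suffix) / len(true_suffix)
```

A score near 1.0 on a unique string (a phone number, an email address) suggests verbatim memorization; the published studies repeat this probe over many strings, prefix lengths, and decoding settings rather than trusting any single hit.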


Manipulation, Transparency, and Governance

Exercise 19: Political Deepfake Response Planning You are the communications director for a political campaign during an election year. A deepfake video depicting your candidate making a damaging statement begins circulating on social media four days before the election. Design a response plan: Whom do you notify first? What public statement do you issue and when? How do you seek platform intervention? What technical evidence do you pursue? What legal options do you consider? How do you communicate with voters? Evaluate the constraints you face and the tradeoffs in your response.

Exercise 20: FTC Enforcement Scenario Research the FTC's published guidance on AI-generated endorsements and deceptive advertising. Analyze the following scenario: A direct-to-consumer health supplement company uses AI to generate thousands of personalized health testimonials, presented as authentic customer reviews, across its social media accounts. The testimonials are tailored to recipients based on their browsing histories and use emotionally resonant language. (a) What FTC regulations or guidance apply? (b) What elements of the conduct are likely to constitute violations? (c) What remedies might the FTC seek? (d) What disclosure framework would bring this practice into compliance?

Exercise 21: Enterprise AI Governance Framework Development You are the Chief Ethics Officer of a mid-sized professional services firm (5,000 employees) that has decided to implement a comprehensive generative AI governance framework. Draft the framework, addressing: approved tools and access controls; prohibited uses; information classification and what may be entered into AI systems; human review requirements by risk category; training requirements for employees; vendor evaluation standards; incident reporting procedures; and ongoing monitoring and review. Present the framework in a format suitable for distribution to all employees.
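The "human review requirements by risk category" element can be drafted as a machine-checkable policy table rather than prose. A Python sketch follows; the category names and rules are illustrative assumptions a real framework would tailor to the firm:

```python
# Illustrative risk tiers for generative AI outputs; every name and
# threshold here is an assumption, not a standard.
REVIEW_POLICY = {
    "client_deliverable": {"human_review": "mandatory", "reviewers": 2,
                           "citation_check": True},
    "internal_draft":     {"human_review": "mandatory", "reviewers": 1,
                           "citation_check": True},
    "brainstorming":      {"human_review": "optional",  "reviewers": 0,
                           "citation_check": False},
}

FAIL_CLOSED = {"human_review": "mandatory", "reviewers": 2,
               "citation_check": True}

def review_requirement(category):
    """Look up the review rule; unknown categories fail closed to the
    strictest tier rather than slipping through unreviewed."""
    return REVIEW_POLICY.get(category, FAIL_CLOSED)
```

Encoding the policy this way forces the framework to answer the hard question explicitly: what happens to a use case nobody anticipated? Failing closed is one defensible answer.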

Exercise 22: C2PA Adoption Analysis Research the current state of C2PA adoption: which camera manufacturers, AI image generation tools, and platforms have committed to implementing the standard. Analyze: (a) What percentage of images produced and distributed today carry C2PA metadata? (b) What are the main technical obstacles to broader adoption? (c) What are the main business obstacles? (d) Even if adoption were universal, what adversarial techniques could circumvent C2PA's effectiveness? (e) What governance mechanisms beyond technical standards are necessary to address the verification crisis?
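For part (a) there is no registry to query, but you can screen a sample of JPEGs yourself: C2PA manifests in JPEG files are carried in APP11 (0xFFEB) segments as JUMBF boxes. A crude byte-level heuristic in Python follows; a real validator (such as the open-source c2patool) is far more reliable, and this only flags candidates:

```python
def maybe_has_c2pa(jpeg_bytes):
    """Crude screen: JPEG magic bytes, plus an APP11 marker and a JUMBF
    box signature somewhere in the file. False positives are possible
    (these byte patterns can occur by chance in compressed image data),
    so treat a hit as 'inspect with a real C2PA validator', not proof."""
    return (jpeg_bytes[:2] == b"\xff\xd8"
            and b"\xff\xeb" in jpeg_bytes
            and b"jumb" in jpeg_bytes)
```

Running a screen like this over images saved from a social feed also previews part (d): most platforms strip metadata on upload, so even universally adopted provenance can vanish in distribution.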

Exercise 23: Ethics Washing Detection Research the published AI ethics principles, responsible AI frameworks, or safety commitments of three AI companies of your choice. For each, identify: (a) the specific commitments made; (b) documented cases in which those commitments were tested by deployment decisions; (c) whether the company's actions were consistent with its stated principles; and (d) what accountability mechanisms, if any, would allow external verification of compliance. Write a comparative assessment of the substantiveness of each company's ethical commitments.


Integration and Application

Exercise 24: The Liability Map A company deploys a customer-facing chatbot built on a third-party foundation model (Foundation Model Company), developed by an application developer (App Dev Inc.), and operated by the company (Deployer Corp.) as a customer service interface. A customer relies on the chatbot's advice about medication interactions and suffers harm because the chatbot provided incorrect medical information. Map the potential liability of each party: Foundation Model Company, App Dev Inc., and Deployer Corp. What legal theories might apply? What contractual arrangements between the parties are relevant? What governance measures would have reduced the risk?

Exercise 25: Regulatory Horizon Scan Survey the current state of generative AI regulation across three jurisdictions: the European Union, the United States, and one other jurisdiction of your choice (China, the United Kingdom, Brazil, or Canada are good options). For each, describe: (a) what binding legal requirements currently apply to generative AI; (b) what requirements are proposed or under development; (c) what enforcement mechanisms exist; and (d) how a company operating in all three jurisdictions would need to structure its generative AI governance to comply. Identify where the regulatory requirements conflict or are incompatible, and propose a compliance strategy.