Chapter 35: Quiz — Generative AI Ethics
20 Questions
Instructions: Select the best answer for each multiple-choice question. For short-answer questions, write two to four sentences.
Question 1: In the context of large language models, "hallucination" refers to:
A) The model generating content that is intentionally misleading
B) The model producing factually incorrect or fabricated content presented with the same confidence as accurate content
C) The model refusing to answer questions it cannot verify
D) The model generating content that is biased against certain demographic groups
Correct Answer: B
Explanation: Hallucination in LLMs refers to the generation of confident-seeming but factually incorrect or entirely fabricated content. It is not intentional deception — it is a structural feature of how language models generate text by predicting statistically plausible completions, without a reliable internal fact-checking mechanism.
Question 2: The Schwartz case (Mata v. Avianca) is significant to AI ethics primarily because it demonstrated:
A) That AI companies can be held liable for hallucinations in professional contexts
B) That professional responsibility frameworks apply to AI-assisted work, and that LLM self-confirmation is not verification
C) That courts will dismiss cases in which attorneys use AI tools
D) That ChatGPT is unreliable for all legal research purposes
Correct Answer: B
Explanation: The case established that professional responsibility standards (Rule 11, ABA competence requirements) apply to AI-assisted work product, and critically revealed that asking an LLM to verify its own outputs does not constitute independent verification — the system simply generated additional fabricated content when asked to confirm the false citations.
Question 3: Which of the following best describes the "dual-use" nature of generative AI?
A) Generative AI can be used by both technical and non-technical users
B) Generative AI can operate in both online and offline environments
C) The same capabilities that enable legitimate uses also enable harmful applications
D) Generative AI can generate both text and images
Correct Answer: C
Explanation: Dual-use refers to technology whose core capabilities serve both legitimate and harmful purposes. The ability to generate realistic text enables writing assistance and sophisticated phishing; the ability to generate realistic images enables creative expression and non-consensual intimate imagery. This is a fundamental characteristic that shapes governance approaches.
Question 4: The term "liar's dividend" in the context of deepfakes refers to:
A) The financial benefit that deepfake creators derive from their content
B) The ability to dismiss authentic video as a deepfake, eroding the evidentiary value of all video
C) The profit that platforms derive from hosting deepfake content
D) The advantage that liars gain when they can use AI to generate supporting evidence
Correct Answer: B
Explanation: The liar's dividend describes a second-order effect of deepfake proliferation: even authentic video evidence can be dismissed by bad actors claiming it is a deepfake, degrading the epistemic authority of video as a medium.
Question 5: In Thaler v. Perlmutter, the federal court held that:
A) AI-generated content automatically receives copyright protection under U.S. law
B) Copyright protection requires human authorship; entirely AI-generated content is not copyrightable
C) AI companies that train on copyrighted data infringe copyright
D) Artists' styles are protected by copyright law
Correct Answer: B
Explanation: The court held that copyright protection requires human authorship and that content created entirely by an AI system without human creative input falls outside copyright protection.
Question 6: Retrieval-augmented generation (RAG) addresses hallucination by:
A) Training models on higher-quality data to reduce error rates
B) Grounding model outputs in specific retrieved documents relevant to the query
C) Using multiple models to cross-check each other's outputs
D) Requiring human review of all model outputs before display
Correct Answer: B
Explanation: RAG reduces hallucination by retrieving relevant documents and providing them as context for the model's response, encouraging outputs grounded in the retrieved material rather than pure parametric memory. It reduces but does not eliminate hallucination.
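To make the mechanism in Question 6 concrete, here is a minimal, self-contained Python sketch of the RAG pattern: a toy keyword-overlap retriever selects passages from a small in-memory corpus, and the prompt sent to the model is built around that retrieved context. The corpus, the scoring heuristic, and the prompt wording are illustrative assumptions, not any particular vendor's retrieval API.

```python
# Minimal illustration of retrieval-augmented generation (RAG):
# retrieve passages relevant to the query, then ground the prompt in them.
# The corpus and the overlap-based retriever are toy stand-ins for a real
# vector database and embedding model.

CORPUS = [
    "C2PA attaches signed provenance metadata to media files.",
    "Mata v. Avianca involved fabricated case citations generated by an LLM.",
    "The EU AI Act requires labeling of AI-generated content.",
]

def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    """Rank documents by naive word overlap with the query (toy retriever)."""
    q_words = set(query.lower().split())
    scored = sorted(
        corpus,
        key=lambda doc: len(q_words & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_grounded_prompt(query: str, passages: list[str]) -> str:
    """Instruct the model to answer only from the retrieved passages."""
    context = "\n".join(f"- {p}" for p in passages)
    return (
        "Answer using ONLY the context below. "
        "If the context is insufficient, say so.\n"
        f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"
    )

if __name__ == "__main__":
    query = "What does C2PA attach to media files?"
    prompt = build_grounded_prompt(query, retrieve(query, CORPUS))
    print(prompt)  # this grounded prompt is what would be sent to the LLM
```

The point relevant to the quiz answer is that the model completes a prompt that already contains the retrieved evidence, which constrains (but does not guarantee) factual grounding.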
Question 7: The C2PA standard is primarily designed to:
A) Detect AI-generated images using hidden watermarks
B) Attach cryptographic provenance metadata to media to enable verification of origin
C) Create a database of known deepfakes for platform reference
D) Require disclosure labels on AI-generated content
Correct Answer: B
Explanation: C2PA (Coalition for Content Provenance and Authenticity) attaches cryptographically signed metadata to media files, recording their origin and editing history. This enables downstream verification of where content came from and whether it was AI-generated.
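The sign-then-verify workflow behind provenance metadata can be illustrated with a short Python sketch. This is a deliberately simplified stand-in: real C2PA manifests use certificate-based (X.509/COSE) signatures embedded in the media file, whereas this example uses a standard-library HMAC over a JSON record purely to show why tampering with either the file or its claims breaks verification.

```python
# Toy illustration of the provenance idea behind C2PA: origin claims are bound
# to the exact bytes of a media file and signed, so later edits to the file or
# to the claims cause verification to fail. The symmetric key below is a
# hypothetical stand-in for C2PA's certificate-based signing infrastructure.
import hashlib
import hmac
import json

SIGNING_KEY = b"issuer-secret"  # hypothetical key, for illustration only

def sign_provenance(media_bytes: bytes, claims: dict) -> dict:
    """Bind origin claims to the media content and sign the combined record."""
    record = {"sha256": hashlib.sha256(media_bytes).hexdigest(), "claims": claims}
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return record

def verify_provenance(media_bytes: bytes, record: dict) -> bool:
    """Recompute hash and signature; any change to file or claims fails."""
    expected = {"sha256": hashlib.sha256(media_bytes).hexdigest(), "claims": record["claims"]}
    payload = json.dumps(expected, sort_keys=True).encode()
    sig = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(sig, record.get("signature", ""))

if __name__ == "__main__":
    image = b"\x89PNG...original pixels"
    record = sign_provenance(image, {"generator": "example-model", "ai_generated": True})
    print(verify_provenance(image, record))             # True: untouched content
    print(verify_provenance(image + b"edit", record))   # False: content changed
```

Note that this is verification of origin and integrity, not detection: provenance metadata only helps for content that carries it, which is why answer A (watermark-based detection) describes a different approach from C2PA's.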
Question 8: Research on AI-generated non-consensual intimate imagery has found that approximately what percentage of NCII targets are women?
A) 50%
B) 65%
C) 80%
D) 90%
Correct Answer: D
Explanation: Research has consistently found that approximately 90% of those targeted by NCII — both authentic and AI-generated — are women, reflecting the gendered nature of this form of technology-facilitated sexual harm.
Question 9: The "shadow AI" problem in organizational governance refers to:
A) The use of AI to conduct surveillance of employees without their knowledge
B) Employee use of unapproved AI tools that the organization cannot monitor or govern
C) The unknown capabilities of AI models that have not been discovered by developers
D) AI systems that operate in the background without visible user interfaces
Correct Answer: B
Explanation: Shadow AI is the organizational governance challenge created when employees use personal accounts with commercial AI tools — entering confidential information into unapproved systems — without organizational knowledge or oversight, creating security, data protection, and compliance risks.
Question 10: The EU AI Act's transparency obligations for generative AI include:
A) Mandatory human review of all AI-generated outputs before publication
B) Disclosure that a system is AI when it interacts with humans, and labeling of AI-generated content
C) Publication of all training data used to train generative models
D) Registration of all generative AI models with a central EU authority before deployment
Correct Answer: B
Explanation: The EU AI Act requires disclosure of AI identity in human interactions and labeling of AI-generated text, audio, images, and video. It also requires providers of general-purpose AI models to publish summaries of training data, but not the full training data itself.
Question 11: Which of the following best describes "stereotype amplification" in large language models?
A) The tendency of LLMs to refuse to generate content involving stereotyped groups
B) The pattern by which LLMs may produce outputs more stereotyped than the average of their training data
C) The ability of LLMs to detect and correct stereotypes in user inputs
D) The systematic exclusion of minority groups from LLM training data
Correct Answer: B
Explanation: Stereotype amplification occurs because models generate statistically probable completions. The most probable completion of a prompt involving a stereotyped group may be a maximally stereotyped one, so models can produce outputs that exceed the average level of stereotyping in their training data.
Question 12: The New York Times v. OpenAI lawsuit centers primarily on which claim?
A) That OpenAI's models discriminate against Times journalists
B) That OpenAI trained its models on Times articles without authorization and can reproduce near-verbatim Times content
C) That OpenAI's models generate misinformation that harms the Times's reputation
D) That OpenAI uses Times content in its responses without attribution
Correct Answer: B
Explanation: The Times lawsuit alleges unauthorized training on millions of Times articles and documents specific instances in which ChatGPT reproduced lengthy Times articles near-verbatim, raising both training data fair use questions and memorization concerns.
Question 13: Which of the following is NOT a recognized approach to reducing AI hallucination?
A) Retrieval-augmented generation (RAG)
B) Reinforcement learning from human feedback (RLHF)
C) Chain-of-thought prompting
D) Larger training datasets exclusively from social media
Correct Answer: D
Explanation: Simply increasing training data size from social media does not reliably reduce hallucination and may increase it if social media content contains more errors, speculation, and misinformation than curated sources. The other options — RAG, RLHF, and chain-of-thought prompting — are recognized (if imperfect) approaches to hallucination reduction.
Question 14: The WGA and SAG-AFTRA strikes in 2023 addressed AI in the entertainment industry by:
A) Achieving a complete ban on AI-generated scripts and performances in Hollywood
B) Obtaining contract language requiring consent and compensation for AI use of writers' work and performers' likenesses
C) Establishing an industry-wide ethics board to review AI use in productions
D) Creating mandatory disclosure requirements for AI-assisted content
Correct Answer: B
Explanation: The settlements established consent and compensation requirements — not bans — requiring studios to obtain union member consent and provide compensation before using AI to generate scripts resembling a writer's work or AI performances imitating a specific actor's likeness.
Question 15: "Ethics washing" in the generative AI context refers to:
A) Using AI tools to identify and remove unethical content from training data
B) The practice of publishing ethical principles without implementing substantive governance that constrains harmful practices
C) The legal process by which AI companies protect themselves from liability through terms of service
D) A technical process for removing bias from generative AI outputs
Correct Answer: B
Explanation: Ethics washing refers to the gap between stated ethical commitments — principles documents, ethics boards, responsible AI teams — and operational practices that may not reflect those commitments. It describes the use of ethical language primarily for reputational benefit rather than substantive harm reduction.
Question 16 (Short Answer): Why is it insufficient for an organization to verify AI-generated legal or medical information by asking the same AI system to confirm it?
Model Answer: Asking an LLM to verify its own outputs does not constitute verification because the system simply generates additional AI output — another statistically plausible completion — rather than checking against any authoritative external source. As the Schwartz case demonstrated, ChatGPT confirmed the existence of fabricated case citations when asked, because generating a confirmation is as statistically plausible as generating the original fabrication. Verification must involve comparison against authoritative external sources (legal databases, peer-reviewed literature, official publications) — not AI self-reference.
Question 17 (Short Answer): What is the "right to erasure" problem for large language models, and why is it technically challenging?
Model Answer: The GDPR's right to erasure (Article 17) entitles individuals to request deletion of their personal data. Traditional databases can delete records. LLMs, however, do not store training data as discrete retrievable records — they encode statistical patterns from training data across billions of model parameters. Deleting specific information from a trained LLM without retraining the entire model is not currently technically feasible in a clean, verifiable way. This creates a gap between legal requirements and technical capabilities that AI operators and regulators are actively working to address through emerging "machine unlearning" research.
Question 18 (Short Answer): How does emergence create governance challenges for generative AI that did not exist for prior AI systems?
Model Answer: Emergent capabilities are abilities that appear in large AI models without explicit training — the models develop them as a consequence of scale. Developers cannot fully enumerate a model's capabilities before deployment because some capabilities only appear at certain scales and are discovered through use. This means pre-deployment safety evaluation cannot be comprehensive: the space of possible harmful applications includes capabilities that developers have not yet identified. Standard product safety approaches (test for all failure modes before release) are therefore inadequate, making ongoing post-deployment monitoring a governance necessity rather than an optional add-on.
Question 19 (Short Answer): What makes the labor displacement caused by generative AI ethically distinct from prior technological labor displacements (such as the displacement of typographers by desktop publishing)?
Model Answer: Prior technological labor displacements typically automated repetitive, lower-skill tasks while creating new demand for higher-skill work (typographers were displaced, but graphic designers proliferated). Generative AI displaces creative, cognitively demanding, and high-skill work — writing, visual art, music composition, voice performance — that was previously assumed to be protected from automation. Additionally, creative workers' own work is being used, without compensation, to train the systems that displace them — creating a harm that is both economic and normative. The question of whether new roles will emerge at comparable scale and compensation remains genuinely uncertain.
Question 20 (Short Answer): Explain the collective action problem that makes AI-generated NCII difficult to address through industry self-governance alone.
Model Answer: Responsible AI image generation developers implement safety filters that prevent NCII generation, incurring costs and losing users who want that capability. Irresponsible developers or open-source model releases without safety filters serve those users, gaining market share from responsible actors. The market rewards the less responsible actors, meaning safety costs are private while benefits are social. Individual developers cannot unilaterally solve this because responsible actors face competitive disadvantage relative to irresponsible ones. This collective action problem argues for regulatory intervention — mandatory minimum safety requirements applicable to all developers — to level the competitive playing field and ensure that responsible behavior is not economically punished.