Chapter 39: Quiz

Multiple Choice

1. The Collingridge dilemma holds that:

a) AI systems are inherently too complex to govern effectively b) A technology's impacts cannot be predicted until it is widely adopted, but are then difficult to control c) Democratic governance is too slow to keep pace with technological change d) Proactive ethics is impossible because we cannot know what harms AI will cause

Answer: b The Collingridge dilemma describes the paradox of technological governance: prediction requires adoption, but adoption makes change costly. This is an argument for proactive ethics, not fatalism.


2. Agentic AI systems differ from conventional AI systems primarily in that they:

a) Use more advanced machine learning algorithms b) Take sequences of actions in pursuit of goals with limited step-by-step human oversight c) Have greater accuracy in generating text and images d) Require less computational power to operate

Answer: b The defining feature of agentic AI is autonomous action — pursuing goals through sequences of steps without human approval at each step. This is a qualitative shift from AI-as-tool to AI-as-agent.


3. The "principal-agent problem" in the context of AI agents refers to:

a) The difficulty of training AI systems to pursue human-specified goals b) The legal question of who is responsible for AI agent actions c) The misalignment between an AI agent's behavior and the goals of those it is supposed to serve d) The challenge of designing AI agents that can work autonomously

Answer: c The principal-agent problem describes the misalignment that arises when an agent (the AI) has different information or objectives than the principal (the human who set its goals). At scale, AI agents face this problem in ways that accumulate into consequential outcomes.


4. Cognitive liberty, as described by Nita Farahany, primarily refers to:

a) The right to use AI tools for cognitive enhancement b) The right to mental self-determination, including freedom from unwanted access to and manipulation of one's cognitive states c) The legal right to have AI systems explain their decisions affecting you d) The academic freedom to conduct AI research without government interference

Answer: b Cognitive liberty is the right to mental self-determination: freedom from unwanted surveillance of cognitive states, manipulation through cognitive means, and access to neural data without consent.


5. The "Brussels Effect" in the context of AI regulation refers to:

a) The EU's tendency to over-regulate AI relative to other jurisdictions b) The phenomenon whereby EU regulations effectively become global standards because companies apply them worldwide c) The coordination among EU member states on AI governance d) The influence of Brussels-based lobbying on EU AI policy

Answer: b The Brussels Effect describes how EU regulations become de facto global standards because multinational companies find it more efficient to apply consistent global practices rather than jurisdiction-specific ones. GDPR is the canonical example; the EU AI Act may replicate this effect.


6. The Jevons paradox, as applied to AI and climate, suggests that:

a) AI efficiency improvements will inevitably reduce total energy consumption b) Efficiency gains from AI may increase total energy consumption by making economic activity cheaper and more extensive c) The environmental costs of AI are outweighed by its contributions to climate science d) AI systems cannot achieve both efficiency and environmental sustainability

Answer: b The Jevons paradox holds that efficiency improvements tend to increase total resource consumption by reducing the cost of the relevant activity and thereby increasing demand. Applied to AI, this means that efficiency gains might not reduce total energy consumption if they increase total economic activity.


7. "Prompt injection" in the context of AI agent security refers to:

a) A technique for improving AI agent performance through better initial instructions b) A method for training AI agents on domain-specific data c) An adversarial attack where external content manipulates an AI agent's behavior by embedding override instructions d) The process of providing an AI agent with its initial task specifications

Answer: c Prompt injection attacks embed instructions in external content (websites, emails, documents) that an AI agent reads, attempting to override its original instructions. This is a specific security risk for AI agents that interact with external data sources.


8. The 2024 New Hampshire primary deepfake involved:

a) AI-generated images of candidates in false scenarios b) AI-generated voice calls impersonating President Biden, advising Democratic voters not to vote c) AI-generated video of a candidate making fabricated statements d) AI-generated campaign advertising without disclosure of its AI origin

Answer: b In January 2024, AI-generated robocalls using a synthetic version of President Biden's voice were sent to New Hampshire Democratic voters, advising them not to vote in the primary. The incident represented one of the clearest documented cases of AI being used for voter suppression.


9. According to the chapter, what is the "epistemic corrosion" risk of AI-generated disinformation?

a) That AI systems will be trained on disinformation and generate more of it b) That the mere existence of AI-generated content undermines trust in all content, including authentic content c) That AI systems will become better at generating disinformation than humans are at detecting it d) That disinformation will corrupt the training data of future AI systems

Answer: b Epistemic corrosion refers to the erosion of shared epistemic ground: when any piece of content might be AI-generated and difficult to verify, rational actors apply skepticism uniformly, including to authentic content. This undermines the shared information environment on which democratic deliberation depends.


10. AI ethics as a "practice" rather than a "compliance exercise" primarily means:

a) That AI ethics should be the responsibility of dedicated ethics teams rather than all employees b) That organizations should develop ongoing ethical judgment rather than applying rules mechanically c) That ethics audits should be replaced by continuous monitoring d) That AI ethics principles should be tailored to each specific deployment context

Answer: b Ethics as practice means cultivating practical wisdom — the ongoing capacity for ethical judgment in new situations — rather than treating ethics as a checklist to be completed. Rules and frameworks are tools for ethical reasoning, not substitutes for it.


True/False with Explanation

11. Meaningful human oversight of AI agent actions necessarily means humans must approve each individual action the agent takes.

False. Meaningful oversight does not require approval of every individual action, which would eliminate the efficiency benefits of agentic AI. It means thoughtful identification of which decision types require human review — those with high potential for harm, outside normal parameters, or irreversible — and system design that enforces human review for those specific decisions.


12. The 2024 electoral cycle demonstrated that AI-generated disinformation did not significantly affect any major election results.

False (with nuance). The 2024 cycle did not produce the catastrophic collapse of electoral integrity that some predicted, and attributing specific electoral outcomes to AI-generated disinformation is methodologically extremely difficult. However, multiple documented incidents occurred — including the New Hampshire voter suppression robocalls, deepfakes in India and elsewhere, and AI-generated campaign content at scale — that represent real harms and precedents for more serious future challenges.


13. The economics of frontier AI development tend toward oligopoly because of the enormous capital requirements for training frontier models.

True. Training state-of-the-art AI models requires capital at the scale of hundreds of millions to billions of dollars, as well as proprietary data assets, specialized hardware supply chains, and large teams of skilled researchers. These barriers to entry are financial and logistical, not primarily technical, and they produce market concentration among a small number of highly capitalized actors.


14. De-skilling from AI use is inevitable and cannot be mitigated by organizational or educational choices.

False. De-skilling is a real risk but not an inevitable outcome. Educational systems can respond to changing technological environments by emphasizing capabilities that AI does not replace. Organizations can make explicit choices to maintain human expertise in critical areas as a resilience measure. These are choices that require deliberate attention, but they are achievable.


15. The EU AI Act's risk-based framework is the model most likely to be adopted by other jurisdictions as a template for AI regulation.

True (with hedging). The risk-based framework — categorizing AI uses by risk level and imposing proportionate requirements — is the most comprehensive AI regulatory approach yet adopted by a major jurisdiction. The Brussels Effect has made EU frameworks global standards in data protection; it is plausible that the same dynamic will apply to AI regulation, though the specific provisions will be contested and adapted in other jurisdictions.


Short Answer

16. Explain the accountability gap in agentic AI and describe two specific governance mechanisms that could help close it.

The accountability gap arises because AI agents take many actions with limited human oversight, making it difficult to identify who is responsible when an agent causes harm — the developer, the deploying organization, the manager who set the task, or someone else. Current legal frameworks do not clearly assign liability for AI agent actions, and the technical opacity of agent decision-making makes post-hoc accountability difficult. Governance mechanisms that could help close this gap include: (1) comprehensive logging requirements, so that agent action sequences can be reconstructed after the fact; and (2) clear organizational ownership requirements, designating a named accountable person for each deployed agent's behavior.


17. What is the "detection gap" in AI-generated electoral content, and why is it described as structural rather than temporary?

The detection gap refers to the inability to reliably identify AI-generated content at scale, creating an asymmetry where generating convincing AI content is much easier than detecting it. It is described as structural rather than temporary because the underlying dynamics favor generation over detection: detection tools require reliable signals that can be engineered away; watermarking approaches can be stripped by lossy compression; and human detection is unreliable. This is not primarily a problem of inadequate current technology that will be solved with better tools — it reflects a fundamental asymmetry in the task.


18. What is the difference between AI ethics as a compliance exercise and AI ethics as a practice? Why does the distinction matter practically?

AI ethics as compliance means checking boxes against a list of prohibitions, obtaining certifications, and publishing policies — producing paper conformance. AI ethics as practice means cultivating ongoing ethical judgment: the capacity to reason well about new situations in light of general principles, to recognize ethical dimensions of decisions that don't fit established frameworks, and to integrate ethical reasoning into day-to-day work. The distinction matters practically because compliance produces only the behaviors that are specifically required and monitored, while genuine practice produces ethical judgment that can navigate novel situations. Given the pace of AI development, novel situations are the norm.


19. The chapter describes "epistemic corrosion" as potentially more damaging than specific disinformation incidents. Explain why.

Specific disinformation incidents cause identifiable harms — a false claim believed, a vote suppressed, a candidate's reputation damaged. These harms are serious but bounded. Epistemic corrosion is the systematic erosion of the shared epistemic ground on which democratic deliberation depends. When any piece of content might be AI-generated and difficult to verify, the rational response is pervasive skepticism — including toward authentic content. This undermines the ability of citizens to form shared beliefs about facts, to evaluate political claims, and to engage in productive democratic deliberation. It is a systemic harm to the conditions of democracy rather than a specific harm within a functioning democratic system.


20. What does the chapter mean by "healthy human-AI relationships," and what conditions or practices does it identify as necessary to achieve them?

Healthy human-AI relationships are characterized by: (1) maintained human accountability — humans who use AI remain responsible for the outcomes of AI-assisted decisions; (2) maintained human competence — humans retain the skills and judgment to evaluate AI outputs and function without AI when necessary; (3) transparency — humans who use AI are honest about that use with those affected by their decisions; and (4) appropriate scope — AI is used for tasks where it genuinely improves outcomes, not as a substitute for human judgment where human judgment is essential. These characteristics require active cultivation through individual habits, organizational design choices, and educational investment, because the default market incentives tend toward dependency and de-skilling rather than healthy integration.