Exercises: Accountability and Audit
These exercises progress from concept checks to challenging applications. Estimated completion time: 3-4 hours.
Difficulty Guide:
- ⭐ Foundational (5-10 min each)
- ⭐⭐ Intermediate (10-20 min each)
- ⭐⭐⭐ Challenging (20-40 min each)
- ⭐⭐⭐⭐ Advanced/Research (40+ min each)
Part A: Conceptual Understanding ⭐
Test your grasp of core concepts from Chapter 17.
A.1. Section 17.1.1 defines accountability as comprising three elements: answerability, attributability, and enforceability. In your own words, explain each element. Then identify which element is most disrupted by machine learning systems that cannot explain their reasoning, and explain why.
A.2. Explain the accountability gap as described in Section 17.1.3. Why is the gap not merely a temporary technical limitation that will be resolved once AI systems become more transparent?
A.3. Section 17.1.2 draws a comparison between algorithmic accountability challenges and Max Weber's analysis of bureaucratic rationality. Identify two similarities and two differences between the accountability challenges posed by bureaucracies and those posed by algorithmic systems.
A.4. Define the "many hands problem" as presented in Section 17.4. In one sentence, explain why this problem is particularly acute for algorithmic systems compared to traditional organizational decision-making.
A.5. Section 17.2.1 distinguishes between internal and external algorithmic audits. Summarize the key differences. Why does the chapter argue that both are necessary and neither is sufficient alone?
A.6. Explain the difference between a code audit, an outcome audit, and a user experience audit as described in Section 17.2. For each type, give one example of a finding that only that audit method would likely uncover.
A.7. Section 17.3 introduces Algorithmic Impact Assessments (AIAs). What is the relationship between an AIA and an algorithmic audit? Can you have one without the other?
Part B: Applied Analysis ⭐⭐
Analyze scenarios, arguments, and real-world situations using concepts from Chapter 17.
B.1. Consider the VitraMed accountability chain described in Section 17.1.4. Each actor in the chain — the developer, the deployer, the hospital administrator, the physician, and the data providers — offers a plausible claim of non-liability. For each actor, evaluate whether their claim is fully convincing, partially convincing, or unconvincing, and explain your reasoning. Then propose a governance structure that would prevent this chain of non-liability from forming.
B.2. A city government uses a predictive policing algorithm to allocate police patrols. The algorithm sends more officers to neighborhoods that historically have higher arrest rates. An audit reveals that these neighborhoods are disproportionately communities of color, and that the higher arrest rates partly reflect historical over-policing rather than higher crime rates. Apply the three audit methods from Section 17.2 — code audit, outcome audit, and user experience audit — to this system. What would each method reveal? Which method is most essential in this case?
B.3. Section 17.5 discusses the emerging institutional landscape for algorithmic accountability, including audit firms, regulatory sandboxes, and legislative proposals. A critic argues: "We don't need new institutions. We just need to enforce existing anti-discrimination laws more vigorously." Evaluate this argument using at least three specific points from the chapter.
B.4. Ray Zhao's company, NovaCorp, deploys a customer service chatbot that uses a machine learning model to prioritize support tickets. After six months, a pattern emerges: tickets from customers with non-English names are consistently deprioritized. NovaCorp conducts an internal audit and finds that the model learned this pattern from historical data in which human agents, on average, took longer to resolve tickets from non-English-speaking customers (because of language barriers), and the model interpreted "longer resolution time" as "lower priority."
Analyze this scenario using:
- (a) The accountability framework from Section 17.1
- (b) The audit methodology from Section 17.2
- (c) The liability frameworks from Section 17.4
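As a starting point for part (b), an outcome audit of the NovaCorp system might simply compare the model's average priority score across name groups. The data, field names, and scores below are invented for illustration; a real audit would use production logs and a much larger sample.

```python
# Hypothetical outcome-audit sketch for the NovaCorp scenario in B.4:
# compare the model's mean priority score across name groups.
# All data and field names are invented for illustration.
from statistics import mean

tickets = [
    {"name_group": "english", "priority": 0.82},
    {"name_group": "english", "priority": 0.74},
    {"name_group": "english", "priority": 0.79},
    {"name_group": "non_english", "priority": 0.55},
    {"name_group": "non_english", "priority": 0.61},
    {"name_group": "non_english", "priority": 0.48},
]

by_group = {}
for t in tickets:
    by_group.setdefault(t["name_group"], []).append(t["priority"])

for group, scoresores in ():
    pass  # placeholder removed below

for group, scores in by_group.items():
    print(f"{group}: mean priority {mean(scores):.2f}")

# A large gap is the audit flag; it does not by itself identify the
# cause (here, resolution time acting as a proxy for priority).
gap = mean(by_group["english"]) - mean(by_group["non_english"])
print(f"gap: {gap:.2f}")
```

The point of the sketch is the limit of the method: the outcome audit detects the disparity, but only the code/data audit described in the scenario explains *why* the model learned it.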
B.5. Section 17.3 describes the limitations of Algorithmic Impact Assessments. One limitation is "capture" — the risk that AIAs become a box-checking exercise rather than a genuine governance tool. Identify three specific design choices that would make an AIA process more resistant to capture and three that would make it more vulnerable. Draw on the Environmental Impact Assessment analogy discussed in the chapter.
B.6. Sofia Reyes argues in Section 17.5 that the audit ecosystem needs both technical auditors (who can evaluate code and models) and domain experts (who understand the social context of deployment). Construct a scenario where a technically flawless audit misses a significant harm because it lacks domain expertise. Then construct a scenario where domain experts identify a problem but lack the technical capacity to verify it.
Part C: Real-World Application Challenges ⭐⭐-⭐⭐⭐
These exercises ask you to investigate real-world systems and apply the chapter's frameworks.
C.1. ⭐⭐ Audit an Algorithm in Your Life. Identify an algorithmic system that affects you personally — a recommendation algorithm, a grading system, a credit scoring tool, a content moderation system, or a hiring platform. Using the six-question Applied Framework from Chapter 1 and the audit methodology from Section 17.2, conduct a preliminary "audit" of this system. What data does it use? What outcomes does it produce? Can you identify any disparate impacts? What accountability mechanisms exist? Document your findings in a one-page report.
C.2. ⭐⭐ Legislative Analysis. Research one of the following legislative proposals discussed in Section 17.5: the Algorithmic Accountability Act (US), the EU AI Act's audit requirements, or New York City's Local Law 144 (bias audits for automated employment decision tools). Summarize what the law requires, who it applies to, and what enforcement mechanisms exist. Then evaluate: Does this law adequately address the accountability gap described in Section 17.1?
C.3. ⭐⭐⭐ Design an AIA. Select a specific algorithmic system — real or hypothetical — and design a complete Algorithmic Impact Assessment using the framework from Section 17.3. Your AIA should include: (a) a system description, (b) a stakeholder analysis, (c) an assessment of potential harms, (d) a disparate impact analysis, (e) proposed mitigation measures, and (f) a monitoring plan. Be explicit about the limitations of your assessment.
C.4. ⭐⭐⭐ The Audit Report. You are an external auditor hired to evaluate a health insurance company's algorithmic claims processing system. The system approves or denies claims automatically for approximately 70% of submissions; the remainder are routed to human reviewers. Your audit reveals that the denial rate for claims from patients in low-income ZIP codes is 23% higher than for comparable claims from high-income ZIP codes, even after controlling for diagnosis, treatment type, and provider. Write a two-page audit report summarizing your findings, identifying possible causes, and recommending corrective actions. Address the report to the company's board of directors.
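Before writing the C.4 report, it may help to sketch the statistical check behind the headline finding: is the denial-rate difference between ZIP-code groups larger than chance would explain? The counts below are hypothetical, and a simple two-proportion test omits the controls (diagnosis, treatment type, provider) that the audit applied, which would require stratification or regression in practice.

```python
# Sketch of a two-proportion z-test for the C.4 denial-rate disparity.
# Claim counts are hypothetical, chosen to match the ~23% relative gap.
from math import sqrt, erf

denied_low, total_low = 1230, 5000     # low-income ZIP claims
denied_high, total_high = 1000, 5000   # high-income ZIP claims

p_low = denied_low / total_low         # 0.246
p_high = denied_high / total_high      # 0.200
p_pool = (denied_low + denied_high) / (total_low + total_high)

# Standard error under the pooled null hypothesis of equal rates.
se = sqrt(p_pool * (1 - p_pool) * (1 / total_low + 1 / total_high))
z = (p_low - p_high) / se
# One-sided p-value via the normal CDF.
p_value = 0.5 * (1 - erf(z / sqrt(2)))

print(f"relative disparity: {(p_low / p_high - 1):.0%}")
print(f"z = {z:.2f}, one-sided p = {p_value:.2g}")
```

A significant z-statistic establishes that the disparity is real, not that the algorithm caused it; the report still has to reason about mechanisms (training data, proxy variables, appeal rates) before recommending corrective action.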
Part D: Synthesis & Critical Thinking ⭐⭐⭐
These questions require you to integrate multiple concepts and think beyond the material presented.
D.1. The chapter presents three liability frameworks for algorithmic harm: strict liability, negligence, and product liability (Section 17.4). Each has advantages and disadvantages. Write a comparative analysis (400-600 words) evaluating which framework is most appropriate for algorithmic systems used in (a) healthcare, (b) criminal justice, and (c) consumer lending. You may conclude that different frameworks are appropriate for different domains — if so, explain what factors drive the difference.
D.2. Dr. Adeyemi states in Section 17.1.3: "We have built systems that make consequential decisions about people's lives while simultaneously making it harder to identify who is responsible, harder to explain the reasoning, and harder to challenge the outcome. That is not a technical glitch. That is a governance crisis." Unpack this statement. Why does Dr. Adeyemi call it a governance crisis rather than a technical problem? What does the distinction imply for the kinds of solutions that are needed? Write a response (300-500 words) that references at least three specific concepts from the chapter.
D.3. The "many hands problem" (Section 17.4) describes a situation in which responsibility is distributed across so many actors that no one is effectively accountable. Some scholars argue that the solution is to designate a single "accountability holder" for every algorithmic system — an identifiable person or entity that bears ultimate responsibility regardless of the contributions of other actors. Evaluate this proposal. What are its strengths? What are its practical challenges? Could it create perverse incentives? Use the VitraMed accountability chain as a test case.
D.4. Mira observes in Section 17.1.4 that the accountability gap is "not that nobody is responsible" but "that the responsibility is so distributed that it's functionally the same as if nobody is." This raises a deeper question: Is the distribution of responsibility a deliberate design choice — one that benefits certain actors — or an unintended consequence of system complexity? Drawing on the entire chapter, argue for one position or the other (or a synthesis). Support your argument with at least two examples.
Part E: Research & Extension ⭐⭐⭐⭐
These are open-ended projects for students seeking deeper engagement. Each requires independent research beyond the textbook.
E.1. Algorithmic Auditing in Practice. Research one of the following real-world algorithmic audits: (a) the ORCAA audit of HireVue's hiring algorithms, (b) ProPublica's investigation of the COMPAS recidivism algorithm, (c) The Markup's audit of Facebook's ad delivery system, or (d) the Stanford Internet Observatory's audit of content moderation systems. Write a 1,000-word report covering: what was audited, what methodology was used, what was found, what action resulted, and how the audit connects to the frameworks presented in this chapter.
E.2. Comparative Governance. Compare the approaches to algorithmic accountability in three jurisdictions: the European Union (EU AI Act), the United States (federal and state-level proposals), and one additional jurisdiction of your choice (e.g., Canada, Brazil, Singapore, or China). For each, identify: (a) whether algorithmic audits are required, (b) who bears liability for algorithmic harm, (c) what enforcement mechanisms exist, and (d) how the many hands problem is addressed. Write a comparative analysis (800-1,200 words) identifying convergences and divergences.
E.3. The Audit Industry. Research the emerging algorithmic audit industry — companies like ORCAA, Holistic AI, Credo AI, and others. Write a critical assessment (600-1,000 words) evaluating: (a) what services they offer, (b) what methodologies they use, (c) what conflicts of interest they face (parallels to financial auditing are relevant), (d) whether current market incentives are sufficient to produce rigorous, independent audits, and (e) what regulatory structures might be needed to ensure audit quality.
Solutions
Selected solutions are available in appendices/answers-to-selected.md.