Chapter 13: Exercises
Difficulty Scale:
- ⭐ Comprehension
- ⭐⭐ Application
- ⭐⭐⭐ Analysis
- ⭐⭐⭐⭐ Synthesis/Evaluation
† = Recommended for assessment or deeper engagement
Part A: Comprehension Exercises
Exercise 1 ⭐ Define the three types of opacity described in Chapter 13 — technical opacity, institutional opacity, and opacity to users — and provide an original example of each (not drawn from the chapter). For each example, identify which stakeholders are affected and why the opacity matters.
Exercise 2 ⭐ Create a simple diagram showing the spectrum of AI interpretability described in Section 13.3, from fully interpretable models to fully opaque systems. Place at least six specific model types or systems on the spectrum and briefly explain why each is placed where it is.
Exercise 3 ⭐ What is the "Rashomon effect" in statistical modeling, and what are its implications for the accuracy-interpretability trade-off debate? In your response, explain what Cynthia Rudin's 2019 paper argues and identify at least one domain where she believes her argument is strongest and one where it is more qualified.
Exercise 4 ⭐ Summarize the key facts and holding of State v. Loomis (2016) in 150–200 words, as if explaining the case to a business colleague who has no legal background.
Exercise 5 ⭐ What does GDPR Article 22 provide, and how does it differ from the legal protections available to individuals subject to automated decisions in the United States? Be specific about what Article 22 requires and where its limitations lie.
Part B: Application Exercises
Exercise 6 ⭐⭐ † Scenario: You are a compliance officer at a regional bank that uses a machine learning model to make initial credit card approval decisions. The model was purchased from a vendor and the bank does not have access to its source code. A denied applicant complains that the adverse action notice they received is not meaningful — it lists "payment history" and "credit utilization" as factors but does not explain how these were weighted or combined.
Write a 400-word memo to the bank's Chief Risk Officer identifying: (1) the legal obligations at stake; (2) the ethical problems with the current approach; and (3) three specific steps the bank could take to improve compliance and fairness.
Exercise 7 ⭐⭐ Mapping the accountability gap. Identify a real-world sector — not criminal justice or social media (both covered in the case studies) — where AI opacity creates an accountability gap. Describe: (a) what decisions AI is making in that sector; (b) who is affected and how; (c) where the accountability gap appears; and (d) what a specific organizational or regulatory intervention could do to close it. (Suggested sectors: immigration, education, healthcare, public housing, insurance.)
Exercise 8 ⭐⭐ Review the following fictional adverse action notice for a denied mortgage application:
"We are unable to approve your application. The primary reasons are: (1) Insufficient income; (2) Credit score below threshold; (3) Debt-to-income ratio; (4) Other factors considered by our decision model."
Evaluate this notice against the requirements of the Equal Credit Opportunity Act (ECOA) and the ethical standards discussed in Chapter 13. What information is missing? Rewrite the notice to be more explanatory and ethically adequate, inventing plausible specific details as needed.
Exercise 9 ⭐⭐ The procurement decision. A county government is considering deploying an automated system to screen applicants for emergency housing assistance. The vendor proposes a proprietary AI system with high predictive accuracy on a validation dataset. An alternative is available: a simpler logistic regression model with documented methodology, slightly lower accuracy on the validation data, and no cost for the algorithm.
Write a 500-word recommendation to the county's director of social services, arguing for one approach over the other. Draw explicitly on at least three concepts from Chapter 13.
Exercise 10 ⭐⭐ Identify a company that has made public statements about its AI ethics commitments (look for published AI principles, ethics statements, or public testimony). Then identify at least two cases or documented reports suggesting a gap between those statements and the company's actual practice. Analyze the gap using the "ethics washing" concept from Chapter 13. What conditions enabled the ethics washing, and what transparency requirements might have prevented it?
Part C: Analysis Exercises
Exercise 11 ⭐⭐⭐ † The auditor's problem. You have been hired to audit the fairness of a commercial recidivism risk assessment tool used by a state corrections department. The vendor has refused to provide access to the model or training data, citing trade secrecy. The state has provided you with a dataset of 15,000 individuals who received risk scores over the past three years, along with their race, gender, age, criminal history summary, risk score, and two-year recidivism outcome.
Write a detailed research design (600–800 words) for conducting the most informative audit possible under these constraints. Address: (a) what you can learn from output data alone; (b) what questions remain unanswerable; (c) how you would present the findings to the state; and (d) what access you would recommend the state require from the vendor as a condition of continued contract.
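As a starting point for part (a), note that an output-only audit can still compute group-level error rates directly from the scored dataset. Here is a minimal sketch in Python; the records, group labels, and threshold are all invented for illustration, not drawn from any real tool:

```python
from collections import defaultdict

# Hypothetical records: (group, risk_score, recidivated) — invented for illustration.
records = [
    ("A", 8, False), ("A", 7, True), ("A", 9, False), ("A", 3, False),
    ("B", 8, True),  ("B", 2, False), ("B", 4, False), ("B", 9, True),
]

HIGH_RISK = 7  # scores at or above this threshold count as "high risk" (assumed)

# Tally false positives (labeled high risk, did not reoffend) and
# total non-reoffenders, per group.
fp = defaultdict(int)
neg = defaultdict(int)
for group, score, recidivated in records:
    if not recidivated:
        neg[group] += 1
        if score >= HIGH_RISK:
            fp[group] += 1

# False positive rate per group: of those who did not reoffend,
# what fraction were labeled high risk?
fpr = {group: fp[group] / neg[group] for group in neg}
print(fpr)
```

Disparities in these rates across groups are observable without any access to the model's internals; part (b) of the exercise asks what this kind of analysis cannot tell you, such as which inputs drove the scores.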
Exercise 12 ⭐⭐⭐ Compare and contrast the transparency approaches of two real jurisdictions — one US state and one non-US country — in their regulation of AI use in criminal justice. What has each jurisdiction required? What has each left unregulated? Which approach better serves the due process and equal protection values described in Chapter 13, and why?
Exercise 13 ⭐⭐⭐ The defense attorney's dilemma. A defense attorney is representing a client who has been sentenced partly on the basis of a commercial risk assessment tool. The attorney wants to challenge the use of the tool but has no access to its algorithm. Research real cases in which defense attorneys have challenged algorithmic sentencing tools and analyze the legal strategies that have succeeded and failed. Based on your research and Chapter 13, what is the most promising legal avenue for challenging the use of a proprietary risk assessment tool in sentencing, given current US law?
Exercise 14 ⭐⭐⭐ The chapter discusses the "Rashomon effect" — the existence of many models with similar accuracy but very different logic. Find a published empirical study that demonstrates this effect in a high-stakes application domain (criminal justice, healthcare, or credit). Summarize the study's findings and explain their implications for the policy question of whether accuracy gains justify using opaque models over interpretable alternatives.
Exercise 15 ⭐⭐⭐ † The SHAP explanation problem. SHAP values are widely used to "explain" predictions from complex ML models. A healthcare company uses a deep neural network to predict patient readmission risk, and provides physicians with SHAP-based explanations showing which factors most influenced each patient's score.
Identify at least four ways that SHAP-based explanations could mislead a physician who takes them at face value. For each, explain the mechanism by which the explanation misleads, and suggest what a physician or hospital administrator should do to mitigate the risk.
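One mechanism worth probing in your answer can be demonstrated with an exact Shapley computation on a toy model: when two features are redundant, the attribution splits credit between them, so each appears half as important as it would be on its own. A sketch in pure Python (the model and feature values are invented; real SHAP tooling approximates this computation for large models):

```python
from itertools import combinations
from math import factorial

# Hypothetical readmission model: risk is driven by whichever of two
# highly correlated flags is present. Model and values are invented.
def model(chf_flag, chf_history_flag):
    return max(chf_flag, chf_history_flag)  # either flag alone suffices

BASELINE = (0, 0)  # "feature absent" reference values
x = (1, 1)         # this patient has both correlated flags set
N = 2

def value(subset):
    """Model output with features in `subset` at the patient's values
    and all other features at baseline."""
    args = [x[i] if i in subset else BASELINE[i] for i in range(N)]
    return model(*args)

def shapley(i):
    """Exact Shapley value of feature i: its weighted average marginal
    contribution over all subsets of the remaining features."""
    total = 0.0
    others = [j for j in range(N) if j != i]
    for size in range(N):
        for combo in combinations(others, size):
            s = set(combo)
            weight = factorial(size) * factorial(N - size - 1) / factorial(N)
            total += weight * (value(s | {i}) - value(s))
    return total

attributions = [shapley(i) for i in range(N)]
print(attributions)  # credit is split between the redundant flags
```

Each flag receives an attribution of 0.5 even though either flag alone would produce the full risk score of 1.0; a physician reading the attribution as "this factor contributes only half the risk" would be misled.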
Part D: Synthesis and Evaluation Exercises
Exercise 16 ⭐⭐⭐⭐ Design an AI transparency standard. Imagine you are advising the Federal Trade Commission on developing a national standard for AI transparency in "high-stakes" applications — those that significantly affect a person's employment, credit, housing, healthcare, education, criminal justice status, or public benefits eligibility.
Write a 1,000–1,200 word policy recommendation covering: (1) which systems should be covered; (2) what documentation and disclosure should be required; (3) what rights affected individuals should have; (4) what enforcement mechanisms would be effective; and (5) how the standard should handle the tension between transparency and trade secrecy. Draw on the chapter's analysis of existing frameworks (GDPR Article 22, EU AI Act, sector-specific US requirements) to justify your choices.
Exercise 17 ⭐⭐⭐⭐ † The ethics of the Frances Haugen disclosure. Frances Haugen disclosed confidential Facebook research documents to Congress and journalists. Facebook argued that she violated her employment agreement and disclosed proprietary business information. Haugen argued that she had an ethical obligation to disclose information about serious public harm.
Analyze this ethical conflict from at least two distinct ethical frameworks (e.g., consequentialism and deontology; stakeholder theory and virtue ethics). Reach a reasoned conclusion about whether Haugen's disclosure was ethically justified, and identify what conditions would need to be satisfied for similar disclosures to be ethically justified in the future. Consider also what role corporate transparency obligations should play in the analysis — that is, whether Haugen would have needed to act if Facebook had been required to disclose the relevant information through regulatory channels.
Exercise 18 ⭐⭐⭐⭐ Interpretability in tension with performance — a sector analysis. Choose a sector other than criminal justice where AI systems are used for consequential decisions (suggestions: clinical diagnosis, credit underwriting, child welfare risk assessment, college admissions screening, fraud detection). For your chosen sector:
(a) Describe the AI systems in use and their level of interpretability. (b) Assess the evidence that more complex, less interpretable models provide meaningful performance advantages over interpretable alternatives in this domain. (c) Argue for a specific interpretability standard that should apply to AI systems in this sector — justifying both the level you choose and the mechanism for enforcing it. (d) Identify which stakeholders would support and oppose the standard you propose, and explain why.
Exercise 19 ⭐⭐⭐⭐ Global comparison and multinational governance. A multinational consumer financial services company operates in the United States, Germany, Brazil, and India, using AI models for credit decisions in each market. Each jurisdiction has different requirements for AI transparency and explainability.
Write a 900–1,100 word analysis identifying: (1) the transparency and explainability requirements that apply in each jurisdiction; (2) the gaps between requirements across jurisdictions; (3) whether the company should adopt a single global standard or jurisdiction-specific standards; and (4) what the chosen approach would mean for the company's model development and governance processes.
Exercise 20 ⭐⭐⭐⭐ A day in the life of algorithmic opacity. Map all the AI-driven decisions that affect a hypothetical person — call her Maria, a 35-year-old Black woman living in a major US city — over the course of a single day: from her social media feed in the morning, to her commute (rideshare pricing), to a job application she submits, to a credit card application she makes, to the health insurance pre-authorization she seeks for a prescription, to the news she sees in the evening. For each decision:
(a) Describe the AI system likely involved. (b) Assess the degree of opacity Maria experiences. (c) Identify the potential consequences if the system makes a biased or erroneous decision. (d) Evaluate the legal and practical recourse available to Maria.
Conclude with a reflection on what this mapping reveals about the cumulative weight of algorithmic opacity in a single person's life.
Part E: Applied Research Exercises
Exercise 21 ⭐⭐ Find and read a real algorithmic audit report — produced by a third party, a regulatory body, or an investigative journalism organization — for any AI system (suggestions: search for audit reports on hiring algorithms, criminal risk assessment, social media content moderation, or financial AI). Summarize: (a) what the audit examined; (b) what access the auditors had to the system; (c) what methods were used; (d) what findings were reported; and (e) what limitations the auditors acknowledged.
Exercise 22 ⭐⭐ Research NYC Local Law 144, which requires employers using automated employment decision tools to conduct bias audits and publish results. Find at least two published bias audit reports that have been made available under the law. Evaluate: (a) what these audits measured; (b) what they did not measure; (c) what the results showed; and (d) whether the audit requirement as designed provides meaningful accountability. What reforms would make it more effective?
Exercise 23 ⭐⭐ The explainability interface. Find three real-world examples of organizations attempting to explain AI decisions to affected individuals (suggestions: Amazon showing why a recommendation was made, a credit bureau explaining a score, a social media platform explaining why a post was removed). Evaluate each explanation against the criteria discussed in Chapter 13 (specific, actionable, accurate, honest about uncertainty). Which comes closest to meaningful transparency and why?
Exercise 24 ⭐⭐⭐ † Designing the opt-out. The EU Digital Services Act requires large platforms to offer users the option to opt out of recommendation systems based on profiling and to receive recommendations based on other criteria. Design a user experience for implementing this requirement for a hypothetical social media platform. What alternative ranking criteria would you offer? How would you explain the difference between the profile-based and alternative feeds? What challenges do you anticipate in implementation, and how would you address them?
Exercise 25 ⭐⭐⭐⭐ The interpretability researcher's argument. Cynthia Rudin's 2019 paper "Stop Explaining Black Box Machine Learning Models for High Stakes Decisions and Use Interpretable Models Instead" makes a strong claim that has been both celebrated and challenged. Find two peer-reviewed responses to Rudin's argument — either supporting or challenging her core claim that the accuracy-interpretability trade-off is exaggerated in high-stakes domains. Write a 700-word synthesis that: (a) fairly presents Rudin's argument; (b) presents the strongest counterarguments; (c) identifies the empirical question that, if resolved, would settle the debate; and (d) explains what the debate implies for practitioners and policymakers who must make decisions now, before the debate is settled.
For discussion of grading criteria and suggested assessment approaches, see the Instructor's Guide for Chapter 13.