Chapter 30 Exercises: Responsible AI in Practice


Section A: Recall and Comprehension

Exercise 30.1 Define the "principles-to-practice gap" in your own words. Identify the three reasons Professor Okonkwo gives for why AI ethics principles fail to translate into operational practice.

Exercise 30.2 Describe the three layers of the responsible AI stack (people, process, technology). For each layer, identify two specific components and explain how they contribute to closing the principles-to-practice gap.

Exercise 30.3 Compare and contrast the three organizational models for responsible AI teams (centralized, embedded, hybrid). For each model, identify one type of organization where it would be the best fit and explain why.

Exercise 30.4 Explain the difference between input metrics, process metrics, and outcome metrics for responsible AI. Provide one example of each that is not mentioned in the chapter.

Exercise 30.5 List the five levels of the responsible AI maturity model. For each level, write one sentence describing the key differentiator that separates it from the level below.

Exercise 30.6 Describe the sustainability paradox of AI. Why is it ironic that AI is used to address climate change, and what can organizations do to mitigate the environmental impact of their AI operations?


Section B: Application

Exercise 30.7: Red Team Design Exercise

You are the VP of AI at a mid-sized online lending company. Your company uses an AI model to make initial loan approval/denial decisions, which are then reviewed by human underwriters. Design a red-teaming exercise for this system:

  • (a) Define the scope of the exercise (what failure modes will you test for).
  • (b) Identify the composition of your red team (at least five roles, with justification for each).
  • (c) Develop five specific test cases, including the input, the expected failure, and the potential business or social impact.
  • (d) Define the severity classification system you will use for findings (at least three levels).
  • (e) Outline the process for remediating findings and retesting.

Exercise 30.8: Bias Bounty Program Design

Design a bias bounty program for a company of your choice (your employer, a well-known company, or a hypothetical company). Address each of the following:

  • (a) Scope: Which AI systems are included? What types of bias are in scope?
  • (b) Eligibility: Internal only, external, or both? Justify your choice.
  • (c) Submission requirements: What must a participant provide for a valid submission?
  • (d) Incentive structure: What rewards will you offer? How will they scale with severity?
  • (e) Triage process: Who reviews submissions, and what is the timeline for response?
  • (f) Transparency: What will you disclose publicly about the program's results?
  • (g) Success metrics: How will you measure whether the program is working?

Exercise 30.9: Responsible AI Maturity Assessment

Assess an organization you are familiar with (current or former employer, or a well-known public company) against the five-level responsible AI maturity model.

  • (a) Place the organization at a specific level. Provide at least four pieces of evidence supporting your assessment.
  • (b) Identify the two most significant gaps preventing the organization from reaching the next level.
  • (c) Develop a six-month action plan to address those gaps, including specific deliverables, responsible parties, and success criteria.
  • (d) Estimate the budget required for your action plan and justify the investment in terms of risk reduction, regulatory readiness, and/or competitive advantage.

Exercise 30.10: Vendor AI Assessment

You are evaluating a vendor's AI-powered customer service chatbot for deployment at your organization. Using the vendor AI assessment framework from the chapter, create a completed procurement scorecard for the following (hypothetical) vendor:

  • The vendor provides a one-page product description but no model card or detailed documentation.
  • The vendor says they have "tested for bias" but cannot provide specific fairness audit results.
  • The chatbot can provide confidence scores for its responses but cannot explain individual decisions.
  • The vendor is GDPR-compliant and conducts annual privacy audits.
  • The vendor does not offer ongoing fairness monitoring; updates are released quarterly.
  • The contract includes a right to audit but no specific bias remediation SLAs.
  • The vendor has no publicly known responsible AI incidents.

Score each criterion on the 1-5 scale, calculate the weighted total, and make a recommendation: approve, approve with conditions, or reject. Justify your recommendation.
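One way to keep the weighted-total arithmetic honest is to sketch it in code. The criteria, weights, and scores below are illustrative placeholders, not the chapter's actual scorecard: substitute the framework's own criteria and weights, and your own 1-5 judgments based on the vendor facts above.

```python
# Illustrative procurement scorecard: each criterion maps to
# (weight, score). Weights are assumptions and must sum to 1.0;
# scores are one possible reading of the hypothetical vendor facts.
CRITERIA = {
    "documentation":    (0.20, 1),  # no model card or detailed docs
    "fairness_testing": (0.20, 2),  # claims testing, no audit results
    "explainability":   (0.15, 2),  # confidence scores, no explanations
    "privacy":          (0.15, 5),  # GDPR-compliant, annual audits
    "monitoring":       (0.15, 2),  # no ongoing fairness monitoring
    "contract_terms":   (0.10, 3),  # audit right, no remediation SLAs
    "track_record":     (0.05, 4),  # no known incidents
}

def weighted_total(criteria):
    """Sum of weight x score across all criteria (maximum 5.0)."""
    return sum(weight * score for weight, score in criteria.values())

print(f"Weighted total: {weighted_total(CRITERIA):.2f} / 5.00")
```

A mid-range total like this one is typically where the "approve with conditions" recommendation lives; your justification should tie the low-scoring criteria to the specific conditions you would impose.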

Exercise 30.11: Sustainability Calculation

Estimate the carbon footprint of an AI system using the following data:

  • Training: The model was trained on 64 NVIDIA A100 GPUs for 30 days. Each GPU consumes approximately 400 watts.
  • The training was conducted in a data center in Virginia, USA, where the grid carbon intensity is approximately 0.35 kg CO2 per kWh.
  • Inference: The model processes 500,000 requests per day. Each request requires approximately 0.001 kWh of energy.
  • The inference workload runs on a cloud provider whose Virginia data centers have a carbon intensity of 0.28 kg CO2 per kWh (due to partial renewable energy procurement).

Calculate:

  • (a) Total energy consumed during training (in kWh).
  • (b) Total CO2 emissions from training (in metric tons).
  • (c) Daily energy consumed during inference (in kWh).
  • (d) Annual CO2 emissions from inference (in metric tons).
  • (e) Total first-year carbon footprint (training + inference).
  • (f) If the model generates $10 million in annual revenue, what is its carbon efficiency ratio (revenue per metric ton of CO2)?
  • (g) If the training were moved to a data center in Iceland (grid carbon intensity of 0.02 kg CO2 per kWh), how would the training emissions change?
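Every part of this exercise follows the same two-step pattern: energy in kWh first, then kWh times grid carbon intensity for emissions. A minimal Python sketch of that pattern, applied only to part (a) with the figures given above; the remaining parts reuse the same two helpers with different inputs:

```python
# kWh = GPUs x (watts / 1000) x hours; CO2 (tonnes) = kWh x intensity / 1000.
def gpu_energy_kwh(num_gpus, watts_per_gpu, hours):
    """Total electrical energy drawn by a GPU fleet, in kWh."""
    return num_gpus * (watts_per_gpu / 1000) * hours

def emissions_tonnes(kwh, kg_co2_per_kwh):
    """CO2 emissions in metric tons for a given energy draw."""
    return kwh * kg_co2_per_kwh / 1000

# Part (a): 64 A100s at ~400 W each, running for 30 days (30 x 24 hours).
training_kwh = gpu_energy_kwh(64, 400, 30 * 24)
print(f"(a) Training energy: {training_kwh:,.0f} kWh")
```

Note that this counts only the GPUs' direct draw; a fuller estimate would multiply by the data center's PUE (power usage effectiveness) to capture cooling and overhead.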

Exercise 30.12: Stakeholder Engagement Plan

Your organization is deploying an AI system that scores job applicants and provides a ranked list to human recruiters. Design a stakeholder engagement plan covering:

  • (a) Employees: What will you communicate to HR staff, recruiters, and hiring managers? How will you train them to use the system responsibly?
  • (b) Applicants: What transparency will you provide to job applicants about the use of AI in the screening process? What recourse will you offer?
  • (c) Communities: How will you engage with workforce development organizations, educational institutions, and civil rights organizations that may be concerned about algorithmic hiring?
  • (d) Regulators: What proactive steps will you take to ensure compliance with employment discrimination laws and emerging AI regulations?

Section C: Analysis and Evaluation

Exercise 30.13: The Google AI Ethics Team Dissolution

Google dissolved its AI ethics team in 2023, approximately five years after publishing its AI Principles. Analyze this decision:

  • (a) Using the principles-to-practice gap framework, what does this decision suggest about Google's responsible AI maturity level?
  • (b) What organizational, financial, and competitive pressures might have contributed to the decision?
  • (c) Is it possible to operationalize responsible AI effectively without a dedicated ethics team? What alternative structures might work?
  • (d) What signal does this decision send to other organizations about the sustainability of responsible AI investments?

Exercise 30.14: The NovaMart Dilemma

Ravi describes NovaMart as a competitor that deploys AI aggressively with fewer ethical guardrails. Analyze the competitive dynamics:

  • (a) In what specific ways can less-governed AI deployment provide short-term competitive advantages? Be concrete.
  • (b) What are the risks NovaMart is accepting? Categorize them as legal, regulatory, reputational, talent-related, and operational.
  • (c) Under what market conditions would NovaMart's approach be sustainable long-term? Under what conditions would it be unsustainable?
  • (d) If you were advising Athena's board, how would you frame the competitive threat from NovaMart? Would you recommend any changes to Athena's responsible AI approach?
  • (e) Is there a middle ground between Athena's comprehensive approach and NovaMart's aggressive approach? What would it look like?

Exercise 30.15: Responsible AI Metrics Critique

A company reports the following responsible AI metrics in its annual report:

  • "100% of our data scientists have completed responsible AI training."
  • "We conducted 12 model reviews this year."
  • "Our AI Ethics Board met quarterly."
  • "We published our first AI Transparency Report."

Critique these metrics:

  • (a) Which of these are input metrics, process metrics, and outcome metrics?
  • (b) What information is missing that you would need to assess the effectiveness of this company's responsible AI program?
  • (c) Propose five additional metrics that would provide a more complete picture of responsible AI performance.
  • (d) Could a company report all four of these metrics accurately and still have a deeply problematic responsible AI practice? Explain.

Exercise 30.16: Inclusive Design Audit

Select an AI-powered consumer product (a voice assistant, a photo app, a translation tool, a navigation app, etc.). Conduct a brief inclusive design audit:

  • (a) Identify three user groups who might be underserved or disadvantaged by the product's AI features. For each, explain the specific failure mode.
  • (b) For each failure mode, propose a design change that would address it.
  • (c) Assess whether the design changes would improve the product for all users (not just the underserved groups).
  • (d) What data or user research would you need to validate your assessment?

Section D: Research

Exercise 30.17: Responsible AI at a Major Tech Company

Select one of the following companies: Microsoft, Google, Meta, Salesforce, IBM, or Amazon. Research their responsible AI program:

  • (a) What are the company's published AI ethics principles? When were they published?
  • (b) What organizational structures (teams, boards, review processes) has the company established for responsible AI?
  • (c) Has the company experienced any notable responsible AI incidents or controversies? How did it respond?
  • (d) Has the company made any significant changes to its responsible AI program (expansions, reductions, restructurings) in the past two years?
  • (e) Using the responsible AI maturity model from the chapter, at what level would you place this company? Justify your assessment.

Exercise 30.18: AI Sustainability Data

Research the environmental impact of AI:

  • (a) Find the most recent estimates of energy consumption by major AI data centers (Google, Microsoft, Amazon, Meta). How have these estimates changed over the past three years?
  • (b) What specific commitments have these companies made regarding renewable energy for AI workloads?
  • (c) Find a recent academic paper on the carbon footprint of training large language models. Summarize the key findings and compare them to the estimates cited in the chapter.
  • (d) What policy proposals have been made to address AI's environmental impact? Evaluate their feasibility.

Exercise 30.19: AI Red-Teaming in Practice

Research a publicly documented AI red-teaming exercise (e.g., DEFCON's AI Village, Microsoft's red-teaming of GPT-4, or Anthropic's red-teaming practices):

  • (a) What system was tested?
  • (b) Who conducted the testing, and what expertise did they bring?
  • (c) What were the most significant findings?
  • (d) How were the findings addressed?
  • (e) What lessons from this exercise are transferable to a non-AI-native company like Athena?

Section E: Discussion and Debate

Exercise 30.20: Is Responsible AI a Luxury?

For classroom debate or written argument. Position A: Responsible AI is a competitive advantage that builds trust, attracts talent, and reduces risk. Every organization should invest in it. Position B: Responsible AI is a luxury that only well-funded organizations can afford. For a startup competing against incumbents, responsible AI is a competitive disadvantage that slows deployment and increases costs.

Choose a position and argue it persuasively. Address the strongest arguments from the opposing position.

Exercise 30.21: Who Should Be on the Red Team?

For classroom discussion. An AI system is being developed to assist judges in making bail decisions. Who should be on the red team? Consider:

  • Technical experts (data scientists, ML engineers)
  • Legal experts (prosecutors, defense attorneys, judges)
  • Social scientists (criminologists, sociologists)
  • Community representatives (formerly incarcerated individuals, families, advocacy organizations)
  • Ethicists and philosophers
  • Journalists

Should all of these groups be represented? Are there groups that should be excluded? What power dynamics should be considered? How should disagreements among red team members be resolved?

Exercise 30.22: The Maturity Model Paradox

For classroom discussion or written reflection. The chapter notes that the responsible AI maturity model "is not a scorecard where higher is always better in every dimension." Discuss:

  • Can an organization be at Level 4 (Culture) in some dimensions and Level 1 (Awareness) in others?
  • Should organizations aim for Level 5 (Leadership), or is Level 3 (Practice) sufficient for most organizations?
  • What are the risks of organizations claiming a higher maturity level than their actual practice supports?
  • How should external stakeholders (investors, regulators, customers) evaluate an organization's responsible AI maturity claims?

Section F: Integrative Application

Exercise 30.23: Athena's Responsible AI Report

Imagine you are Ravi Mehta. Draft Athena's first annual Responsible AI Report (3-5 pages). Include:

  • An executive summary of Athena's responsible AI journey (from the HR bias crisis in Chapter 25 through the governance framework in Chapter 27, the data breach in Chapter 29, and the current responsible AI program in this chapter)
  • A description of Athena's responsible AI governance structure
  • Key responsible AI metrics and their trends
  • Case examples of responsible AI in practice (red-teaming findings, bias bounty results, transparency portal)
  • Sustainability data (AI carbon footprint, carbon efficiency ratios)
  • Goals and commitments for the next year
  • An honest discussion of challenges and areas for improvement

Exercise 30.24: Board Presentation on Responsible AI

You are the Chief Risk Officer of a $5 billion financial services company. The board of directors has requested a 15-minute briefing on the company's responsible AI risk profile. Prepare:

  • (a) A one-page executive summary of the company's responsible AI maturity (using the five-level model)
  • (b) The top three responsible AI risks the board should be aware of, with severity ratings and mitigation plans
  • (c) A responsible AI dashboard mockup with five key metrics, including current values and trends
  • (d) A budget request for responsible AI investments in the coming year, with expected risk reduction impact
  • (e) Three questions the board is likely to ask, with prepared responses

Exercise 30.25: Part 5 Integration

This is the final chapter of Part 5. Write a two-page synthesis memo that connects the six chapters of Part 5 (Chapters 25-30) into a coherent narrative:

  • What is the central argument of Part 5?
  • How does each chapter build on the previous one?
  • What is the relationship between bias detection (Ch. 25), fairness and explainability (Ch. 26), governance (Ch. 27), regulation (Ch. 28), privacy and security (Ch. 29), and responsible AI in practice (Ch. 30)?
  • If you could only implement three recommendations from Part 5 at your organization, what would they be and why?

Answers to selected exercises are available in Appendix B.