Chapter 30 Quiz: Responsible AI in Practice
Multiple Choice
Question 1. According to the chapter, what percentage of large companies have adopted AI ethics principles, and fewer than what percentage have operationalized them?
- (a) 72% have principles; fewer than 40% have operationalized them
- (b) 92% have principles; fewer than 25% have operationalized them
- (c) 85% have principles; fewer than 50% have operationalized them
- (d) 65% have principles; fewer than 15% have operationalized them
Question 2. Which of the following is NOT one of the three reasons Professor Okonkwo gives for the principles-to-practice gap?
- (a) Principles are abstract while practice requires specificity
- (b) Principles lack accountability structures
- (c) Principles are written by people who do not understand AI technology
- (d) Practice is expensive and inconvenient
Question 3. The responsible AI stack consists of three mutually reinforcing layers. Which of the following correctly identifies all three?
- (a) Legal, Technical, Organizational
- (b) People, Process, Technology
- (c) Strategy, Governance, Compliance
- (d) Detection, Prevention, Remediation
Question 4. In the context of AI red-teaming, what is the primary advantage of including non-technical staff on the red team?
- (a) They are less expensive than data scientists
- (b) They bring different perspectives and can surface failure modes that lie in technical staff's shared blind spots
- (c) They can write better reports for executive audiences
- (d) Regulations require non-technical participation in red-teaming exercises
Question 5. During Tom's red-teaming exercise on Athena's recommendation engine, which of the following was NOT among the findings?
- (a) Plus-size customers saw trending results dominated by standard-size items
- (b) ZIP code data correlated with purchase history led to price steering by location
- (c) The recommendation engine systematically promoted higher-margin products regardless of customer fit
- (d) Spanish-language users received fewer personalized recommendations than English-language users
Question 6. A bias bounty program is modeled after which type of existing program?
- (a) Employee referral programs
- (b) Corporate whistleblower programs
- (c) Cybersecurity bug bounty programs
- (d) Customer loyalty programs
Question 7. In Athena's internal bias bounty program, a bilingual store manager in Phoenix discovered a bias in which AI system?
- (a) The HR screening model
- (b) The customer service chatbot
- (c) The inventory optimization model
- (d) The dynamic pricing system
Question 8. Which principle of inclusive AI design states that designing for users with the most extreme needs creates systems that work for everyone?
- (a) Universal accessibility
- (b) Design for the margins
- (c) Diverse-by-default
- (d) Minimum viable fairness
Question 9. Strubell et al. (2019) estimated that training a single large transformer model can emit carbon dioxide equivalent to approximately:
- (a) One transatlantic flight
- (b) The annual energy consumption of an average American household
- (c) Five cars over their entire lifetimes
- (d) A small manufacturing plant operating for one month
Question 10. What is the "sustainability paradox" of AI as described in the chapter?
- (a) AI companies claim to be sustainable but invest more in marketing than in environmental initiatives
- (b) AI is simultaneously one of the most promising tools for addressing climate change and a significant contributor to environmental harm
- (c) Sustainable AI practices reduce model accuracy, making AI less useful for environmental applications
- (d) Government subsidies for AI development are diverted from sustainability research
Question 11. At which level of the responsible AI maturity model does responsible AI become embedded in organizational culture rather than just processes?
- (a) Level 2: Policy
- (b) Level 3: Practice
- (c) Level 4: Culture
- (d) Level 5: Leadership
Question 12. According to the chapter, which organizational model for responsible AI teams is most common among organizations that have operationalized responsible AI at scale?
- (a) Centralized model
- (b) Embedded model
- (c) Hybrid model
- (d) Outsourced model
Question 13. When reporting responsible AI metrics to the board, the chapter recommends leading with which frame?
- (a) Fairness definitions and technical metrics
- (b) Legal, regulatory, reputational, and financial risk
- (c) Comparison to competitor responsible AI programs
- (d) Number of models reviewed and training sessions delivered
Question 14. In the vendor AI assessment framework, which criterion receives the highest individual weight?
- (a) Vendor track record
- (b) Data practices and privacy
- (c) Transparency and documentation (tied with fairness testing evidence)
- (d) Contractual protections
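To make the scoring mechanics behind this question concrete, here is a minimal sketch of how a weighted vendor assessment score on a 1-to-5 scale might be computed and compared against a minimum threshold. The chapter assigns differentiated weights to its criteria; the equal weights and sample ratings below are neutral, hypothetical placeholders, not the chapter's actual values.

```python
# Minimal sketch of a weighted vendor AI assessment score on a 1-5 scale.
# The chapter weights its criteria differently; equal weights are used
# here as a neutral, hypothetical illustration.
CRITERIA_WEIGHTS = {
    "transparency_and_documentation": 0.20,
    "fairness_testing_evidence": 0.20,
    "data_practices_and_privacy": 0.20,
    "contractual_protections": 0.20,
    "vendor_track_record": 0.20,
}

def vendor_score(ratings: dict[str, float]) -> float:
    """Return the weighted average of per-criterion ratings (1-5)."""
    assert abs(sum(CRITERIA_WEIGHTS.values()) - 1.0) < 1e-9
    return sum(CRITERIA_WEIGHTS[c] * ratings[c] for c in CRITERIA_WEIGHTS)

ratings = {  # hypothetical ratings for one vendor
    "transparency_and_documentation": 4,
    "fairness_testing_evidence": 3,
    "data_practices_and_privacy": 4,
    "contractual_protections": 2,
    "vendor_track_record": 3,
}

MINIMUM_SCORE = 3.0  # the 5-point-scale threshold revisited in Question 24
score = vendor_score(ratings)
print(f"Score {score:.2f} -> {'pass' if score >= MINIMUM_SCORE else 'fail'}")
```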
Question 15. According to a Kaggle survey cited in the chapter, what percentage of data scientists said an employer's approach to responsible AI was a "significant factor" in their job choice?
- (a) 45 percent
- (b) 62 percent
- (c) 78 percent
- (d) 91 percent
Short Answer
Question 16. Explain the difference between a red-teaming exercise and standard model evaluation (accuracy, precision, recall). Why is standard evaluation insufficient for identifying all AI failure modes?
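Before answering, it may help to see one dimension of the contrast in miniature. The sketch below uses hypothetical data to show how an aggregate metric can look acceptable while one subgroup fails half the time; red-teaming exists precisely to hunt for failures that a fixed test set and a single summary number do not expose.

```python
# Minimal sketch (hypothetical data): aggregate accuracy can mask
# subgroup-specific failures that red-teaming is designed to surface.
from collections import defaultdict

# Each record: (group, true_label, predicted_label) -- illustrative only.
predictions = [
    ("A", 1, 1), ("A", 0, 0), ("A", 1, 1), ("A", 0, 0),
    ("B", 1, 0), ("B", 1, 0), ("B", 0, 0), ("B", 1, 1),
]

def accuracy(rows):
    return sum(y == yhat for _, y, yhat in rows) / len(rows)

by_group = defaultdict(list)
for row in predictions:
    by_group[row[0]].append(row)

print(f"Overall accuracy: {accuracy(predictions):.2f}")  # 0.75, looks fine
for group, rows in sorted(by_group.items()):
    print(f"Group {group} accuracy: {accuracy(rows):.2f}")  # B scores 0.50
```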
Question 17. The chapter presents five arguments for the business case for responsible AI: trust as competitive advantage, regulatory readiness, talent attraction and retention, risk reduction, and innovation quality. In three to four sentences, explain the argument you find most compelling and why.
Question 18. Athena's competitor NovaMart deploys AI aggressively with fewer ethical guardrails. Ravi acknowledges that this gives NovaMart short-term competitive advantages. In three to four sentences, explain why Ravi believes Athena's approach will be advantageous in the long term.
Question 19. The chapter describes NK's customer-facing Transparency Portal. Identify the four areas the portal covers and explain why NK says: "If we can't explain it clearly enough for a customer to understand, we haven't understood it clearly enough ourselves."
Question 20. Explain why the chapter argues that "Everyone owns ethics" is a recipe for nobody owning ethics. What organizational structure does the chapter recommend instead?
True or False (with Justification)
For each statement, indicate whether it is true or false and provide a one-sentence justification citing evidence from the chapter.
Question 21. According to the chapter, a company at Level 2 (Policy) of the responsible AI maturity model has embedded fairness testing into its model development pipeline.
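For context on what "embedded fairness testing" looks like in a development pipeline, here is a minimal sketch of an automated fairness gate that blocks a build when group outcomes diverge too far. The demographic-parity metric, the threshold, and the data are hypothetical illustrations, not the chapter's specific implementation.

```python
# Minimal sketch of a fairness gate embedded in a model development
# pipeline: the build fails when positive-decision rates diverge across
# groups. Metric choice, threshold, and data are hypothetical.
def selection_rate(preds: list[int]) -> float:
    return sum(preds) / len(preds)

def demographic_parity_gap(preds_by_group: dict[str, list[int]]) -> float:
    rates = [selection_rate(p) for p in preds_by_group.values()]
    return max(rates) - min(rates)

preds_by_group = {  # hypothetical model outputs (1 = positive decision)
    "group_a": [1, 1, 0, 1, 0, 1],
    "group_b": [0, 1, 0, 0, 0, 1],
}

GAP_THRESHOLD = 0.2  # hypothetical tolerance set by governance policy
gap = demographic_parity_gap(preds_by_group)
if gap > GAP_THRESHOLD:
    raise SystemExit(f"Fairness gate failed: gap {gap:.2f} > {GAP_THRESHOLD}")
print(f"Fairness gate passed: gap {gap:.2f}")
```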
Question 22. The chapter argues that the accuracy-fairness tradeoff in responsible AI typically involves a large sacrifice in model performance.
Question 23. Red-teaming for generative AI systems is particularly important because the output space is effectively infinite and standard test sets cannot cover all possible outputs.
Question 24. The chapter recommends that a minimum vendor AI assessment score of 3.0 (on a 5-point scale) should be required for any vendor AI product, regardless of the application's risk level.
Question 25. According to the chapter, proactive regulatory engagement is a strategic advantage because organizations that participate in shaping AI regulations understand the regulatory direction earlier and can influence rules to align with their existing practices.
Answer key available in Appendix B.