Chapter 27 Further Reading: AI Governance Frameworks


Foundational Governance Frameworks

1. National Institute of Standards and Technology. (2023). AI Risk Management Framework (AI RMF 1.0). NIST AI 100-1. US Department of Commerce. The primary document for the NIST AI RMF, including the four core functions (Govern, Map, Measure, Manage), categories, and subcategories. Essential reading for anyone implementing AI governance in a US context. The companion document, the NIST AI RMF Playbook, provides practical guidance and suggested actions for each subcategory. Both are available for free at ai.nist.gov.

2. International Organization for Standardization. (2023). ISO/IEC 42001:2023 — Information technology — Artificial intelligence — Management system. ISO. The first certifiable international standard for AI management systems. The standard itself is available for purchase from ISO. For a more accessible overview, see the ISO/IEC 42001 guidance documents and the interpretive materials published by major certification bodies (BSI, Bureau Veritas, TÜV). Particularly valuable for organizations already operating within the ISO management system framework (ISO 27001, ISO 9001).

3. Organisation for Economic Co-operation and Development. (2019). Recommendation of the Council on Artificial Intelligence. OECD/LEGAL/0449. The foundational international policy document for AI governance, establishing the five OECD AI Principles. Endorsed by more than 40 countries, including all OECD members. The OECD AI Policy Observatory (oecd.ai) provides ongoing tracking of how these principles are being translated into national policy — an invaluable resource for organizations operating internationally.

4. Cihon, P., Schuett, J., & Baum, S. (2021). "Corporate Governance of Artificial Intelligence in the Public Interest." Information, 12(7), 275. An academic analysis of how corporate governance structures can be adapted to address AI-specific challenges. The authors propose a framework for integrating AI governance into existing corporate governance mechanisms — board oversight, risk committees, and shareholder engagement. Useful for connecting the AI governance concepts in this chapter to broader corporate governance theory.


Model Risk Management

5. Board of Governors of the Federal Reserve System & Office of the Comptroller of the Currency. (2011). Supervisory Guidance on Model Risk Management. Federal Reserve SR Letter 11-7 / OCC Bulletin 2011-12. The foundational regulatory document for model risk management in financial services, issued jointly by the Federal Reserve and the OCC. While the guidance predates the widespread deployment of AI models, its principles — the three lines of defense, independent validation, comprehensive documentation, and ongoing monitoring — are directly applicable to AI governance. Required reading for anyone in financial services; recommended for anyone managing models in any industry.

6. Board of Governors of the Federal Reserve System. (2024). Supervisory Guidance on Artificial Intelligence and Machine Learning Model Risk Management. Supplemental to SR 11-7. The Federal Reserve's extension of SR 11-7 to explicitly cover AI and machine learning models, including generative AI. The guidance addresses the specific challenges of AI model validation — complexity, data dependency, emergent behavior — and provides practical recommendations for adapting traditional model risk management practices. The most authoritative regulatory statement on AI model risk management as of 2025.

7. Raz, G., & Saha, S. (2023). "Model Risk Management for AI/ML Models." Journal of Risk Management in Financial Institutions, 16(3), 245-261. A practitioner-oriented analysis of how to extend model risk management frameworks to accommodate the unique characteristics of AI and machine learning models. Covers validation challenges, monitoring approaches, and documentation requirements specific to AI. More accessible than the regulatory documents and particularly useful for organizations outside financial services that want to adopt SR 11-7 principles voluntarily.
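The ongoing-monitoring practices these sources describe can be made concrete with a simple drift metric. The sketch below computes the Population Stability Index (PSI), a statistic commonly used in model risk monitoring to detect when an input or score distribution has shifted from its validation baseline. The thresholds mentioned in the comments are industry rules of thumb, not requirements drawn from SR 11-7 or any of the works above.

```python
import math

def psi(expected_counts, actual_counts, eps=1e-6):
    """Population Stability Index: a common drift metric in model risk monitoring.

    Compares a binned baseline distribution (e.g., from validation) against the
    same binning of current production data. Larger values mean more drift.
    """
    e_total, a_total = sum(expected_counts), sum(actual_counts)
    value = 0.0
    for e, a in zip(expected_counts, actual_counts):
        # Clamp proportions away from zero so empty bins don't blow up the log.
        e_pct = max(e / e_total, eps)
        a_pct = max(a / a_total, eps)
        value += (a_pct - e_pct) * math.log(a_pct / e_pct)
    return value

# Identical distributions yield PSI of 0; a common rule of thumb investigates
# PSI above 0.1 and escalates above 0.25.
print(round(psi([50, 30, 20], [50, 30, 20]), 6))  # 0.0
```

In a governance context, a metric like this would feed the monitoring reports that the first line produces and the second line (independent validation) reviews, with escalation thresholds set in the model's documentation.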


Ethics Committees and Organizational Design

8. Morley, J., Floridi, L., Kinsey, L., & Elhalal, A. (2020). "From What to How: An Initial Review of Publicly Available AI Ethics Tools, Methods and Research to Translate Principles into Practices." Science and Engineering Ethics, 26, 2141-2168. A comprehensive review of tools and methods for translating AI ethics principles into organizational practice — the exact challenge addressed in this chapter. Morley et al. catalog available approaches across five categories (tools, methods, frameworks, standards, and governance structures) and assess their maturity and applicability. Essential for organizations moving from principle to practice.

9. Metcalf, J., Moss, E., & boyd, d. (2019). "Owning Ethics: Corporate Logics, Silicon Valley, and the Institutionalization of Ethics." Social Research: An International Quarterly, 86(2), 449-476. A critical analysis of how technology companies have institutionalized ethics — including the creation of ethics committees, ethics boards, and responsible AI teams. The authors argue that internal ethics functions, while necessary, can also serve to contain and manage ethical criticism rather than genuinely addressing it. An important counterpoint to the chapter's largely prescriptive approach — a reminder that governance structures can be performative as well as substantive.

10. Raji, I. D., Smart, A., White, R. N., Mitchell, M., Gebru, T., Hutchinson, B., Smith-Loud, J., Theron, D., & Barnes, P. (2020). "Closing the AI Accountability Gap: Defining an End-to-End Framework for Internal Algorithmic Auditing." Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency (FAT*), 33-44. A practical framework for internal algorithmic auditing — a key governance mechanism. Raji et al. propose a structured approach to auditing AI systems throughout their lifecycle, from design through deployment and monitoring. The paper draws on the authors' experience at Google and proposes concrete audit stages, documentation requirements, and accountability mechanisms. Directly relevant to the impact assessment and monitoring sections of this chapter.


AI Governance in Practice

11. Microsoft. (2024). Responsible AI Transparency Report 2024. Microsoft Corporation. Microsoft's annual report on its responsible AI practices, including the Responsible AI Standard, the Office of Responsible AI, and the governance tools it has developed and deployed. The report provides an unusually detailed look inside a major technology company's AI governance infrastructure — including candid acknowledgment of challenges and limitations. Essential companion reading to Case Study 1 in this chapter.

12. Fjeld, J., Achten, N., Hilligoss, H., Nagy, A., & Srikumar, M. (2020). "Principled Artificial Intelligence: Mapping Consensus in Ethical and Rights-Based Approaches to Principles for AI." Berkman Klein Center Research Publication, No. 2020-1. Harvard University. A comprehensive mapping of AI ethics principles across 36 prominent frameworks from governments, companies, and civil society organizations. The authors identify eight key themes — privacy, accountability, safety and security, transparency and explainability, fairness and non-discrimination, human control of technology, professional responsibility, and promotion of human values — and trace how each theme appears across frameworks. Invaluable for organizations developing their own AI principles and wanting to align with emerging global consensus.

13. Mökander, J., & Floridi, L. (2023). "Operationalising AI Governance through Ethics-Based Auditing: An Industry Case Study." AI and Ethics, 3, 451-468. A detailed case study of how a European financial services company operationalized its AI governance framework through ethics-based auditing. The paper bridges the gap between governance theory and governance practice — showing how abstract principles translate into specific audit criteria, assessment procedures, and organizational workflows. Particularly useful for organizations in the early stages of operationalizing their governance frameworks.


Risk Assessment and Impact Assessments

14. Selbst, A. D. (2021). "An Institutional View of Algorithmic Impact Assessments." Harvard Journal of Law & Technology, 35(1), 117-191. A legal scholar's analysis of algorithmic impact assessments — what they are, how they should be structured, and what institutional conditions are necessary for them to be effective. Selbst argues that impact assessments are valuable but only if they are embedded in institutional structures that ensure accountability — a point strongly reinforced by this chapter's discussion of governance operating models and enforcement.

15. Ada Lovelace Institute. (2022). Algorithmic Impact Assessment: A Case Study in Healthcare. Ada Lovelace Institute. A practical case study demonstrating how algorithmic impact assessment can be applied in a specific high-stakes domain. The report walks through the assessment process step by step, from system description through stakeholder engagement to risk mitigation. Provides a concrete template that organizations can adapt for their own impact assessment processes.

16. European Commission. (2021). Proposal for a Regulation Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act). COM(2021) 206 final. (Final text adopted 2024.) While primarily a regulatory document (covered in depth in Chapter 28), the EU AI Act's risk classification framework — unacceptable, high, limited, and minimal risk — provides a useful model for organizational risk tiering. Annex III, which enumerates specific high-risk AI use cases, is a practical reference for organizations attempting to classify their own AI systems.
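The Act's four-tier classification can double as a template for internal risk triage. The following sketch is purely illustrative: the tier names come from the Act, but the triage questions and the mapping logic are hypothetical simplifications for internal screening, not the Act's legal tests.

```python
from enum import Enum

class RiskTier(Enum):
    """Risk tiers modeled on the EU AI Act's classification (illustrative only)."""
    UNACCEPTABLE = 4  # prohibited practices; do not build or deploy
    HIGH = 3          # Annex III-style use cases; full governance controls
    LIMITED = 2       # transparency obligations (e.g., disclose AI interaction)
    MINIMAL = 1       # baseline organizational controls

def classify(prohibited_use: bool, annex_iii_use: bool,
             interacts_with_humans: bool) -> RiskTier:
    """Hypothetical internal triage rule mapping self-assessment answers to a tier."""
    if prohibited_use:
        return RiskTier.UNACCEPTABLE
    if annex_iii_use:
        return RiskTier.HIGH
    if interacts_with_humans:
        return RiskTier.LIMITED
    return RiskTier.MINIMAL

print(classify(False, True, True).name)  # HIGH
```

In practice, an intake questionnaire of this kind is only the first gate; the resulting tier then determines which review, documentation, and monitoring requirements from the governance framework apply.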


Governance Failures and Lessons Learned

17. US House Committee on Transportation and Infrastructure. (2020). The Design, Development & Certification of the Boeing 737 MAX: Final Committee Report. US House of Representatives. The definitive congressional report on the 737 MAX MCAS failure — the subject of Case Study 2 in this chapter. At 238 pages, the report provides exhaustive documentation of the design decisions, regulatory failures, and organizational dynamics that led to two fatal crashes. Essential reading for anyone who wants to understand what governance failure looks like in practice — and why the structures described in this chapter matter.

18. Nader, R. (2020). "The Boeing 737 MAX: Lessons for AI Governance." Ralph Nader Radio Hour, Episode 311 (transcript available). While not an academic source, Nader's analysis draws explicit parallels between the 737 MAX governance failure and the challenges of governing AI systems — including regulatory capture, documentation failures, and the diffusion of accountability. A useful perspective from the consumer safety movement for framing AI governance as a public interest issue.

19. Selbst, A. D., boyd, d., Friedler, S. A., Venkatasubramanian, S., & Vertesi, J. (2019). "Fairness and Abstraction in Sociotechnical Systems." Proceedings of the 2019 Conference on Fairness, Accountability, and Transparency (FAT*), 59-68. An influential paper arguing that AI fairness cannot be addressed through technical solutions alone — it requires attention to the social, organizational, and institutional context in which AI systems operate. The authors identify five "traps" that arise from abstracting fairness from its context. Directly relevant to the chapter's argument that governance is not just a technical exercise but an organizational and cultural one.


Building Governance Culture

20. Hagendorff, T. (2020). "The Ethics of AI Ethics: An Evaluation of Guidelines." Minds and Machines, 30, 99-120. A critical evaluation of AI ethics guidelines, finding that most focus on high-level principles while neglecting implementation, enforcement, and the organizational structures needed to translate principle into practice. Hagendorff argues for moving beyond "ethics washing" to genuine governance — a theme central to this chapter's discussion of governance culture versus governance theater.

21. Reddy, S., Allan, S., Coghlan, S., & Cooper, P. (2020). "A Governance Model for the Application of AI in Health Care." Journal of the American Medical Informatics Association, 27(3), 491-497. A governance model specifically designed for healthcare AI, incorporating clinical ethics, patient safety, and regulatory compliance. While sector-specific, the model's approach to integrating governance into clinical workflows — making governance part of the care process rather than a separate administrative burden — offers lessons for any organization seeking to embed governance into operational workflows.

22. Shneiderman, B. (2020). "Bridging the Gap Between Ethics and Practice: Guidelines for Reliable, Safe, and Trustworthy Human-Centered AI Systems." ACM Transactions on Interactive Intelligent Systems, 10(4), 1-31. A framework for connecting AI ethics principles to engineering practices, organized around reliability, safety, and trustworthiness. Shneiderman, a pioneering researcher in human-computer interaction, argues for governance approaches that combine internal self-regulation with external oversight — a position consistent with the chapter's emphasis on both internal governance structures and external validation.


Industry Reports and Reference Materials

23. Stanford Institute for Human-Centered Artificial Intelligence. (2025). AI Index Report. Stanford HAI. The most comprehensive annual compilation of data on AI governance globally — tracking the adoption of governance frameworks, the proliferation of AI regulations, corporate governance practices, and public attitudes toward AI oversight. Chapter 7 of the 2025 report ("AI Policy and Governance") is particularly relevant to this chapter and Chapter 28.

24. World Economic Forum. (2024). AI Governance Alliance: Briefing Paper Series. WEF. A series of briefing papers from the WEF's AI Governance Alliance covering governance frameworks, risk management, industry-specific governance, and international coordination. The papers are written for a business audience and provide practical recommendations alongside strategic analysis. Useful for executives seeking a high-level overview of the governance landscape.

25. Anthropic. (2025). "Responsible Scaling Policy." Anthropic Research. An example of an AI developer's self-governance framework, defining commitment levels tied to model capability thresholds. The Responsible Scaling Policy illustrates how frontier AI developers are approaching governance of increasingly capable systems — a perspective that complements the enterprise governance focus of this chapter. Worth reading alongside Microsoft's Responsible AI Standard to understand how governance frameworks differ between AI developers and AI deployers.


Each item in this reading list was selected because it directly supports concepts introduced in Chapter 27 and developed throughout Part 5. Entries marked with specific chapter references connect to more detailed treatment elsewhere in the textbook.