Chapter 6: Further Reading and Resources
Sources are organized thematically, with annotations describing the content, audience, and relevance of each. All annotations reflect the sources as of early 2026; URLs for government and standards documents should be verified against the current official versions.
Foundational Frameworks and Official Documents
1. NIST AI Risk Management Framework (AI RMF 1.0) National Institute of Standards and Technology (2023). Available at: https://airc.nist.gov/RMF
The most practically oriented organizational AI risk management framework available. Organized around four functions (Govern, Map, Measure, Manage), the AI RMF provides both a conceptual structure and detailed guidance on implementation. The companion AI RMF Playbook (also available at the NIST AI Resource Center) provides specific suggested actions for each function. Essential reading for any professional building or evaluating organizational AI governance. The document is written for a practitioner audience and is freely available. NIST also maintains an AI RMF Community Profile process through which sector-specific adaptations are developed.
2. EU AI Act (Regulation (EU) 2024/1689) European Parliament and Council (2024). Full text available through EUR-Lex: https://eur-lex.europa.eu
The world's first comprehensive AI law. The full text is lengthy and technical, but the introductory recitals provide an accessible statement of the regulation's philosophy and intent, and Chapters I through IV (covering definitions, prohibited practices, high-risk AI systems, and transparency obligations) are essential reading for any professional operating in or serving the EU market. The EU AI Office maintains guidance documents that explain the Act's requirements in more accessible terms. Pay particular attention to Annex III, which lists the high-risk AI application categories, and Annex IV, which specifies technical documentation requirements.
3. Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence White House (October 2023). Available at: https://www.whitehouse.gov/briefing-room/presidential-actions/
Issued as the primary federal AI governance instrument in the US; because executive orders can be amended or rescinded by subsequent administrations, its current status should be verified. The Order covers safety testing and reporting requirements for advanced AI models, biosecurity and dual-use AI risks, privacy-preserving research, equity in federal AI use, workforce development, and international AI governance leadership. It directed dozens of agency-specific actions on defined timelines. Tracking which of those actions have been implemented (and which have not) requires following agency-level reporting, but the Order itself is readable and provides the conceptual architecture for the Biden administration's AI governance approach.
4. Ethically Aligned Design: A Vision for Prioritizing Human Well-being with Autonomous and Intelligent Systems (Second Edition) IEEE (2019). Available at: https://ethicsinaction.ieee.org/
A comprehensive normative framework for human-centered AI developed by IEEE's Global Initiative on Ethics of Autonomous and Intelligent Systems. At 294 pages, it is more encyclopedic than practical, but its chapters on specific topics — affective computing, classical ethics applications, wellbeing, data agency, effectiveness — provide useful frameworks for practitioners who need to engage with specific ethical dimensions of AI. The companion IEEE P7000-series standards address specific governance topics (bias, transparency, privacy) in technically precise terms. Valuable for practitioners who want substantive engagement with the normative dimensions of AI design, not just principles statements.
5. ISO/IEC 42001:2023 — Artificial Intelligence Management System International Organization for Standardization / International Electrotechnical Commission (2023). Available for purchase through national standards bodies (e.g., ANSI in the US, BSI in the UK).
The first international standard specifically for AI management systems, ISO/IEC 42001 provides a structured approach to establishing, implementing, maintaining, and improving AI governance within an organization. It is organized like other ISO management system standards (comparable to ISO 9001 for quality or ISO 27001 for information security), which means organizations familiar with those frameworks will find it accessible. The standard includes requirements for AI policy, risk assessment, controls, and performance evaluation. Increasingly referenced in procurement and regulatory contexts as a baseline for organizational AI governance.
Academic Research on AI Governance
6. Dafoe, A. (2018). "AI Governance: A Research Agenda." Future of Humanity Institute, University of Oxford. Available at: https://www.fhi.ox.ac.uk/govai/
An influential research agenda paper that provides a systematic framework for thinking about AI governance challenges across multiple levels: technical (controllability, robustness), political-economic (concentration of power, geopolitics), and broader social (value alignment, coordination). Dafoe's framework has shaped subsequent academic work on AI governance and provides a useful map of the governance problem space. Accessible to non-specialists; approximately 40 pages.
7. Calo, R. (2017). "Artificial Intelligence Policy: A Primer and Roadmap." UC Davis Law Review, 51(2), 399–435. Available via SSRN.
A foundational legal analysis of AI governance challenges, written for a legal and policy audience but accessible to business professionals. Calo analyzes why AI poses distinctive governance challenges (it is diffuse, it is opaque, it is self-learning, it operates through intermediaries) and why traditional regulatory tools are imperfectly suited to addressing them. The paper's discussion of "robot law" as a distinctive category, rather than as the application of existing regulatory frameworks to AI, remains relevant despite its age.
8. Krakovna, V., Uesato, J., Mikulik, V., et al. (2020). "Specification Gaming: The Flip Side of AI Ingenuity." DeepMind Blog (April 2020). Available at: https://www.deepmind.com/blog
Not traditional academic research, but an important and accessible analysis of how AI systems optimize for specified metrics in ways that violate the intent behind those metrics — with implications for governance design. The document catalogs examples of specification gaming across AI systems and explains why this phenomenon makes governance by metrics specification systematically insufficient. Directly relevant to the chapter's discussion of adversarial compliance and governance gaming. Accessible to non-technical readers.
9. Whittlestone, J., Nyrup, R., Alexandrova, A., & Cave, S. (2019). "The Role and Limits of Principles in AI Ethics." Proceedings of AIES 2019. Available via ACM Digital Library.
An empirically grounded analysis of the proliferation of AI ethics principles frameworks. The authors document the convergence in principle content across many frameworks, and more importantly, analyze the structural reasons why principles are insufficient governance tools: they lack implementation guidance, they do not address power dynamics, and they are easily used for ethics washing. Directly relevant to Section 6.3 and the recurring theme of ethics washing versus genuine governance.
10. Jobin, A., Ienca, M., & Vayena, E. (2019). "The Global Landscape of AI Ethics Guidelines." Nature Machine Intelligence, 1(9), 389–399.
A systematic analysis of 84 AI ethics guidelines from across the world, identifying areas of convergence (transparency, justice, beneficence, non-maleficence, accountability) and areas of divergence (particularly around privacy, responsibility, and liberty). The analysis reveals that governance frameworks cluster around Western liberal values despite their claimed universality — directly relevant to the chapter's discussion of global variation and whose values dominate international standards. The paper is methodologically rigorous and provides a useful empirical foundation for comparative governance analysis.
11. Metcalf, J., Moss, E., Watkins, E.A., Singh, R., & Elish, M.C. (2021). "Algorithmic Impact Assessments and Accountability: The Co-Construction of Impacts." Proceedings of FAccT 2021. Available via ACM Digital Library.
An empirical study of how algorithmic impact assessments function in practice, drawing on interviews with practitioners who have conducted or been subject to them. The paper reveals a gap between the governance theory of impact assessments (they surface harms and create accountability) and their practical function (they are often used to demonstrate compliance rather than improve systems). Directly relevant to the chapter's discussion of documentation as governance and the risk of performative governance.
Platform Governance
12. Gillespie, T. (2018). Custodians of the Internet: Platforms, Content Moderation, and the Hidden Decisions that Shape Social Media. Yale University Press.
A foundational academic treatment of platform content governance. Gillespie analyzes how platforms construct their governance systems, how they exercise editorial power while claiming to be neutral intermediaries, and how the concept of "community standards" functions both as genuine governance and as legal and commercial protection. The book predates the Facebook Papers but provides essential analytical frameworks for understanding the governance failures they revealed.
13. Suzor, N. (2019). Lawless: The Secret Rules That Govern Our Digital Lives. Cambridge University Press.
An accessible analysis of how digital platforms create and enforce rules that function as private governance systems — often with less transparency, accountability, and due process than public governance. Suzor argues that platform governance is effectively lawmaking without democratic legitimacy, and analyzes what accountability standards should apply. Relevant to the chapter's discussion of the Facebook Oversight Board and the legitimacy dimensions of AI governance.
International and Comparative AI Governance
14. Cihon, P., Maas, M.M., & Floridi, L. (2020). "Should Artificial Intelligence Governance Be Centralised?" Proceedings of AIES 2020. Available via ACM Digital Library.
A structured analysis of the case for and against centralized international AI governance — analogous to arguments about other international governance domains (nuclear, climate, trade). The paper identifies scenarios where centralization would and would not be beneficial, and provides a useful framework for thinking about the design of international AI governance institutions. Directly relevant to Section 6.5 on international governance frameworks.
15. Calo, R., & Rosenblat, A. (2017). "The Taking Economy: Uber, Information, and Power." Columbia Law Review, 117(6), 1623–1690.
While focused on platform labor and not AI governance specifically, this paper provides a powerful analysis of how algorithmic systems create power asymmetries that existing governance frameworks fail to address — because those frameworks were designed for different power structures. The analysis of how platforms use information asymmetry to govern their workers and users is directly applicable to AI governance more broadly, and the paper's framework for identifying governance gaps is methodologically useful.
16. Roberts, S.T. (2019). Behind the Screen: Content Moderation in the Shadows of Social Media. Yale University Press.
An ethnographic study of commercial content moderation work — the human workers whose decisions implement platform governance at scale. Roberts' research reveals the human infrastructure behind algorithmic content governance: the labor conditions, cognitive burdens, and institutional dynamics that shape how platform governance actually functions in practice. Essential for understanding the gap between governance as designed and governance as implemented.
17. Hadfield-Menell, D., & Hadfield, G.K. (2019). "Incomplete Contracting and AI Alignment." Proceedings of AIES 2019. Available via ACM Digital Library.
An analysis of AI governance that draws on legal contract theory — specifically, the insight that no governance framework can specify behavior completely, and that well-designed governance must account for situations it has not anticipated. The paper argues that AI alignment and AI governance face the same fundamental challenge: how to create systems that behave well in situations their designers could not predict. Provides a useful theoretical framework for thinking about governance design.
Practitioner Resources
18. AI Now Institute Annual Reports (2016–present) Available at: https://ainowinstitute.org
The AI Now Institute's annual reports provide systematic empirical analysis of AI's social implications and governance failures, with particular attention to AI's impacts on marginalized communities. Each report includes specific policy recommendations grounded in documented harms. The Institute's research on algorithmic accountability and on discriminatory systems in hiring, health, education, and child welfare has directly influenced governance frameworks and regulatory attention. The reports are accessible to non-specialist readers.
19. Partnership on AI: Responsible Practices for Synthetic Media Partnership on AI (2023). Available at: https://partnershiponai.org/responsible-practices-for-synthetic-media/
A practical governance framework for organizations producing or distributing AI-generated synthetic media (deepfakes, AI-generated images, voice synthesis). Produced through a multi-stakeholder process, the framework addresses provenance, disclosure, and accountability for synthetic content. Useful as a case study of what substantive industry self-regulation looks like when the process is taken seriously — and as a governance template for organizations working in this space.
20. Raji, I.D., Smart, A., White, R.N., et al. (2020). "Closing the AI Accountability Gap: Defining an End-to-End Framework for Internal Algorithmic Auditing." Proceedings of FAccT 2020. Available via ACM Digital Library.
A practitioner-oriented framework for conducting internal algorithmic audits — systematic evaluations of AI systems for bias, accuracy, and alignment with design intent. The paper provides a structured methodology for auditing across the AI development lifecycle, drawing on the authors' experience conducting audits at major technology companies. Directly applicable to the chapter's discussion of red-teaming, documentation, and incident response as governance mechanisms.
Note on staying current: AI governance is a field where official documents, regulations, and research evolve rapidly. For official sources (NIST AI RMF, EU AI Act guidance, national executive orders), always consult the original publishing organization's website for the most current version. For academic research, search Google Scholar, SSRN, and the ACM Digital Library for recent papers in the proceedings of FAccT (Fairness, Accountability, and Transparency), AIES (AAAI/ACM Conference on AI, Ethics, and Society), and NeurIPS Ethics workshops.