Preface

Why This Book, Why Now

In the spring of 2023, a lawyer named Steven Schwartz filed a legal brief in a United States federal court. The brief cited six case precedents — cases with names, docket numbers, and specific legal holdings. There was one problem: none of them existed. Schwartz had used ChatGPT to research the law, and ChatGPT had confidently fabricated case law out of nothing. Schwartz faced sanctions. His client's case suffered. And the legal profession had its first major reckoning with the question that would come to define the decade: What happens when we trust AI systems without understanding them?

This book is an attempt to answer that question — not for lawyers specifically, but for anyone whose organization is being transformed by artificial intelligence. That means virtually everyone.

AI ethics has become one of the most urgent fields of our time, yet it remains poorly understood outside a relatively small circle of researchers and activists. The conversation is often too technical for non-specialists, or too abstract to connect to the organizational realities that managers actually face. Meanwhile, the technology advances at a pace that leaves governance frameworks perpetually behind.

This book tries to bridge that gap. It is written for business professionals — not because business is the only lens through which AI ethics matters, but because business leaders make consequential decisions about AI systems every day, and they need a framework for doing so responsibly.


What This Book Is

This is a comprehensive textbook on AI ethics — comprehensive in scope, rigorous in substance, and accessible in tone. It covers:

  • Conceptual foundations: What AI ethics is, where it came from, and what philosophical frameworks inform it
  • Empirical reality: What we know, from rigorous research, about bias, fairness, transparency, and harm in deployed AI systems
  • Institutional structures: How organizations, governments, and international bodies are attempting to govern AI
  • Practical application: What business leaders, product teams, compliance officers, and policymakers can actually do

The book deliberately avoids two failure modes that afflict much writing in this space. The first is solutionism — the assumption that technology problems have technology solutions, and that a new fairness metric or explainability technique will resolve fundamentally social and political tensions. The second is paralysis — the conclusion that because AI is risky and complex, organizations should simply wait for someone else to figure it out. Neither serves readers well.

What this book offers instead is informed judgment: the capacity to reason carefully about difficult trade-offs, understand what is known and what is contested, take your organization's specific context seriously, and act with appropriate humility about what you don't yet know.


How This Book Is Organized

The book is divided into eight parts, each addressing a major dimension of AI ethics:

Part 1: Foundations establishes the conceptual and institutional terrain. What is AI ethics? How did we get here? What philosophical frameworks do ethicists use, and which are most useful for business contexts? Who are the stakeholders, and how do organizations build the business case for taking ethics seriously? What does governance look like in practice?

Part 2: Bias and Fairness provides a thorough treatment of algorithmic bias — one of the most studied and most consequential problems in applied AI ethics. We examine sources of bias in data and models, the surprisingly difficult mathematics of fairness measurement, and the specific challenges that arise in high-stakes domains including hiring, financial services, and healthcare.

Part 3: Transparency and Explainability tackles the "black box" problem — the opacity that characterizes many modern AI systems, including large language models and deep neural networks. We examine the technical landscape of explainable AI (XAI), the organizational challenge of communicating AI decisions to affected parties, and the emerging legal right to explanation under European law.

Part 4: Accountability and Responsibility addresses governance from the inside out — who bears responsibility when AI systems cause harm, how auditing works in practice, the evolving landscape of AI liability law, and the difficult question of corporate governance. A dedicated chapter examines the important and underappreciated role of ethical dissent within AI organizations.

Part 5: Privacy and Security examines AI's relationship with personal data — from foundational privacy law through the business model of surveillance capitalism, the specific risks of biometric and facial recognition systems, and technical approaches to privacy-preserving AI.

Part 6: Societal Impact and Governance zooms out to examine AI's effects at the societal level. How is AI reshaping labor markets? What does it mean for democratic institutions and elections? How is it being used in criminal justice, and with what consequences? What is AI's environmental footprint? And how are nations and international institutions attempting to regulate a technology that doesn't respect borders?

Part 7: Emerging Issues examines the frontier of AI ethics — the special ethical challenges posed by generative AI, autonomous weapons systems, the question of whether AI systems can have moral status, and the challenge of anticipating tomorrow's ethical dilemmas before they become today's crises.

Part 8: Capstone Projects offers three integrative projects that require readers to synthesize knowledge across parts of the book in realistic organizational contexts.


Five Themes That Run Through Every Chapter

Five themes recur across all 39 chapters, weaving together what might otherwise feel like discrete topics into a coherent intellectual project:

Power and accountability. AI systems are not neutral. They encode choices about whose interests matter, and they concentrate power in the hands of those who build and deploy them. Every chapter asks: who holds power here, and who answers for it?

The tension between innovation and harm prevention. There is genuine value in deploying AI systems quickly — in healthcare, in climate response, in education. And there are genuine risks in deploying them prematurely. This book takes both sides of this tension seriously, refusing easy answers.

Ethics washing versus genuine ethics. The language of AI ethics has become ubiquitous in corporate communications. The practice has not kept pace. Throughout this book, we distinguish between organizations that are genuinely grappling with ethical AI and those that are using the vocabulary of ethics as a shield against accountability.

Diversity and inclusion. Many of AI's most serious problems trace back to the same root cause: the people building these systems do not represent the full range of people they affect. This theme appears in every part of the book, from the demographics of training data to the composition of AI governance boards.

Global variation. AI ethics is not a Western or American project, and the ethical frameworks, regulatory approaches, and power dynamics vary enormously across the world. Each chapter attends to this global variation, drawing examples from multiple regions and resisting the assumption that one society's approach should be universal.


A Note on the Scope of "AI"

Throughout this book, "AI" is used broadly to refer to machine learning systems, deep learning models, large language models, algorithmic decision systems, and related technologies. We are less concerned with precise technical definitions than with the ethical implications of systems that automate consequential decisions at scale. When technical specificity matters, it is provided. When it doesn't, we prioritize clarity over precision.


Acknowledgments and Invitation

A work of this scope is built on the labor of hundreds of researchers, journalists, activists, and affected communities who have documented AI ethics problems, developed frameworks for addressing them, and refused to accept "move fast and break things" as an adequate standard for systems that shape human lives. Their work is cited throughout; the bibliography and further reading sections are an invitation to go deeper.

This textbook is a living document. AI ethics evolves as the technology evolves, and some of what is written here will require updating as new research, cases, and regulations emerge. Readers who identify errors, important omissions, or promising developments are invited to contribute.

The stakes are high. Let's get to work.


The Author
February 2026