In This Chapter
- Beginning at the End
- Learning Objectives
- Section 1: Why Anticipation Matters
- Section 2: Agentic AI and Autonomous Action
- Section 3: AI and Cognitive Liberty
- Section 4: Power Concentration and AI Monopoly
- Section 5: Human-AI Relationship Evolution
- Section 6: Climate Change and AI — The Compound Risk
- Section 7: Global Power Dynamics and AI
- Section 8: The Regulatory Trajectory
- Section 9: AI Ethics as a Practice
- Section 10: A Call to Action
- Conclusion: Staying Humble, Staying Engaged
- Key Terms
Chapter 39: The Future of AI Ethics: Anticipating Tomorrow's Challenges
Beginning at the End
This book began with a lawyer submitting fabricated case citations. A ChatGPT user had asked the system to find relevant cases; it produced plausible-sounding citations that did not exist; the lawyer submitted them in federal court without verifying them; a judge was not amused. The episode was embarrassing, instructive, and in retrospect, historically minor. It illustrated a well-understood limitation of current AI systems. The legal profession adapted. The story moved on.
This chapter ends the book with an honest admission that the lawyer's case will not be the model for the hardest AI ethics problems of 2035. We do not know what those problems will be. The technology is advancing faster than our ability to predict its trajectory, and the ethical implications of capabilities that do not yet exist are genuinely uncertain. Any specific prediction made in this book about what AI will be able to do in ten years should be held lightly.
But uncertainty is not the same as helplessness. The discipline of anticipatory ethics — of thinking systematically about emerging ethical challenges before they become crises — is both possible and necessary. We know some things with confidence. We know that AI capabilities are expanding rapidly across multiple dimensions simultaneously. We know that the economic and political stakes are high enough to create intense pressure toward deployment before adequate governance exists. We know that the populations most affected by AI decisions are often the least represented in decisions about AI development. We know that the hard questions are genuinely hard.
What this chapter offers is not a prediction. It is a set of frameworks for ongoing ethical engagement with a technology that will continue to evolve in ways that surprise us. The goal is not to arrive at chapter's end with the answers, but to arrive with better questions — questions that are specific enough to guide action, honest enough to acknowledge uncertainty, and serious enough to be worth the effort.
Learning Objectives
By the end of this chapter, you should be able to:
- Apply the Collingridge dilemma to understand why proactive ethics is both difficult and necessary, and explain what "ethical foresight" is and what it requires.
- Describe the emerging landscape of agentic AI systems, their enterprise applications, and the accountability gaps they create.
- Articulate the concept of cognitive liberty and analyze what an emerging right to mental privacy would protect against in AI-powered contexts.
- Explain the economics of frontier AI development and analyze the governance implications of the emerging AI oligopoly.
- Evaluate the risks of human-AI dependency, de-skilling, and identity change as AI systems become more embedded in daily life.
- Analyze the compound risks of AI's climate impact, including both direct energy costs and indirect economic effects.
- Describe the current landscape of international AI competition and identify what international AI governance needs to accomplish.
- Assess the likely regulatory trajectory of AI law and evaluate the role of democratic deliberation in shaping that trajectory.
- Articulate what AI ethics as an ongoing practice — rather than a one-time compliance exercise — requires at the individual, organizational, and political level.
Section 1: Why Anticipation Matters
The Collingridge dilemma, articulated by David Collingridge in 1980, describes a fundamental paradox in technological governance: the impacts of a technology cannot be easily predicted until it is widely adopted, but once it is widely adopted, controlling its impacts becomes difficult and expensive. We cannot know enough about a technology's consequences to govern it well until after deployment — by which time intervention is costly, politically contentious, and often ineffective.
The history of AI ethics is largely a history of reacting to the Collingridge dilemma after the fact. Facial recognition technology was deployed in public spaces before robust governance frameworks existed. Predictive policing algorithms were in use in dozens of American cities before their discriminatory effects were documented. Social media recommendation systems optimized for engagement before the connection between engagement-maximization and radicalization was understood. In each case, by the time the harms were visible, the systems were entrenched — in business models, in organizational processes, in public expectations.
The case for proactive ethics is precisely that this pattern is predictable and that alternatives exist. The "horizon-scanning" tradition in foresight research — used by governments, intelligence agencies, and large organizations — involves systematic identification of emerging trends and their potential consequences before those consequences materialize. Applied to AI ethics, horizon scanning means identifying the ethical implications of current AI trajectories before specific systems are deployed and before specific harms occur.
This is harder than it sounds. Horizon scanning for AI ethics requires expertise across multiple domains simultaneously: the technical trajectory of AI development, the economic and political incentives shaping that trajectory, the social and institutional contexts into which AI systems will be deployed, and the ethical frameworks needed to evaluate the implications. No single discipline has this breadth. Effective anticipatory AI ethics requires interdisciplinary collaboration of a kind that is still rare and underfunded.
What "ethical foresight" is not is prediction. The purpose of scenario-based ethical foresight is not to forecast the future accurately but to expand the range of futures organizations and policymakers are prepared to navigate. A scenario that turns out to be inaccurate in its specifics may still have been valuable if it identified a class of ethical challenge that a different future also presents. The scenario of AI-generated political disinformation may have evolved differently than anyone predicted in 2019, but governments that were thinking about it were better prepared for the 2024 election cycle than those that were not.
This chapter's structure reflects the foresight orientation: it moves through a set of emerging challenges, not as a list of predictions but as an organized examination of ethical terrain that is already taking shape and will require active engagement.
Section 2: Agentic AI and Autonomous Action
The most significant near-term shift in AI deployment may be the move from AI systems that advise to AI systems that act. Current AI systems are largely tools that respond to human queries and generate outputs that humans then use or reject. The emerging category of "agentic AI" involves systems that take sequences of actions in pursuit of goals, with limited step-by-step human oversight.
The distinction is consequential. An AI system that answers the question "how do I handle this contract dispute?" is doing something fundamentally different from an AI system tasked with "handle this contract dispute" that then takes actions in the world — sends emails, schedules calls, reviews documents, drafts correspondence — on its own initiative. The first system requires human judgment at each step. The second substitutes its own judgment for human judgment across many steps.
Agentic AI systems are already in deployment. Enterprise AI agents from companies like Salesforce, ServiceNow, and a growing ecosystem of startups automate multi-step workflows across domains including sales, customer service, software development, and HR. "Coding agents" like GitHub Copilot Workspace and systems in the Devin family can take a problem statement and produce working code through a multi-step process that involves reading documentation, writing code, running tests, debugging, and iterating — without step-by-step human oversight. Supply chain AI agents make purchasing and logistics decisions autonomously within defined parameters.
The ethical challenges of agentic AI have a different character from those of AI advice systems. When an AI system advises and a human acts, accountability is relatively clear: the human bears responsibility for the action taken on the advice. When an AI system acts autonomously, accountability becomes murky in ways that are not merely theoretical.
The principal-agent problem at scale: In economics, the principal-agent problem refers to situations where an agent (someone acting on behalf of a principal) has different information or different interests than the principal, leading to potential misalignment. AI agents face this problem at unprecedented scale. An enterprise AI agent may take hundreds or thousands of actions per day, each representing a micro-decision that accumulates into consequential outcomes. The humans nominally responsible for overseeing the agent cannot review each action and in practice rely on aggregate monitoring. When something goes wrong — and in complex systems, things will go wrong — identifying which action in a long sequence was the point of failure is difficult, and attributing responsibility to a human decision is harder still.
Cascading failures: Agentic AI systems that interact with the world, including with other agentic AI systems, can produce cascading failures of a kind that are genuinely novel. In 2010, the "Flash Crash" — a brief but dramatic collapse in US stock prices — was partly attributed to interactions between automated trading algorithms that produced feedback loops none of the individual algorithms' designers anticipated. Agentic AI systems operating in enterprise environments, financial markets, or infrastructure management could produce analogous cascades, with consequences that are difficult to trace and difficult to prevent without understanding the full system.
Scope creep and goal generalization: Agentic AI systems are given goals and constraints. But goals are specified in language that may not anticipate every situation the agent encounters, and constraints may have edge cases the designer did not foresee. An agent tasked with "increasing sales" might discover that certain customer segments are more susceptible to upselling and concentrate its efforts there in ways that discriminate against protected groups. An agent tasked with "optimize our supply chain" might discover that certain suppliers are vulnerable to pressure and exploit that vulnerability in ways the organization did not intend. These are not hypotheticals: they are the predictable consequences of giving goal-oriented agents operating space in complex, real-world environments.
The accountability gap: The accountability gap in agentic AI is perhaps the most urgent governance challenge the technology presents. Who is responsible when an AI agent causes harm? The developer who built the system? The enterprise that deployed it? The manager who set its goals? The executive who approved the deployment? Current legal frameworks have not answered this question, and the answer will shape the incentives for all parties.
The least dangerous approach is clear in principle, if not always in practice: human accountability must be preserved by design. This means maintaining meaningful human review of consequential decisions, even in high-automation systems; designing agentic systems with built-in escalation pathways for decisions outside normal parameters; maintaining audit logs that make post-hoc accountability possible; and assigning clear organizational ownership for AI agent behavior. What organizations cannot do is deploy agentic systems that effectively make consequential decisions without human accountability and claim, after harm occurs, that the AI was responsible.
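The design principles above — escalation pathways, audit logs, named human ownership — can be sketched in code. This is a minimal illustration, not a production pattern; the threshold, function names, and log format are all hypothetical assumptions for the sake of the example.

```python
import time

# Illustrative policy value -- a real threshold would come from
# organizational risk policy, not a hard-coded constant.
ESCALATION_THRESHOLD_USD = 10_000
AUDIT_LOG = []  # append-only record enabling post-hoc accountability


def execute_agent_action(action, estimated_impact_usd, approver=None):
    """Run an agent action, escalating consequential decisions to a human.

    Every decision -- executed or escalated -- is appended to an audit
    log so that responsibility can be traced after the fact.
    """
    record = {
        "timestamp": time.time(),
        "action": action,
        "estimated_impact_usd": estimated_impact_usd,
    }
    if estimated_impact_usd >= ESCALATION_THRESHOLD_USD:
        if approver is None:
            # Outside normal parameters and no human sign-off: hold the action.
            record["outcome"] = "escalated_pending_human_review"
            AUDIT_LOG.append(record)
            return "escalated"
        # Consequential action proceeds only with a named human approver.
        record["approver"] = approver
    record["outcome"] = "executed"
    AUDIT_LOG.append(record)
    return "executed"


# A routine action executes; a consequential one is held for human review.
print(execute_agent_action("send renewal reminder email", 50))
print(execute_agent_action("approve supplier contract", 250_000))
```

The point of the sketch is structural: the escalation check and the audit log are built into the execution path itself, so no action can bypass them, and the `approver` field ties each consequential decision to an accountable person.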
Section 3: AI and Cognitive Liberty
Among the most philosophically interesting and practically important emerging rights concepts is cognitive liberty — the right to mental self-determination, including freedom from unwanted manipulation of one's cognitive and emotional states. The concept, developed most extensively by Nita Farahany at Duke University, is not yet established in law in most jurisdictions, but it is gaining traction as brain-computer interface technology and AI-powered cognitive assessment tools move from research to deployment.
Brain-computer interfaces (BCIs) are devices that establish direct communication pathways between brains and computers. Current BCIs range from non-invasive EEG headsets used in research to implanted devices used to restore motor function in paralyzed patients. Neuralink, Elon Musk's neural interface company, received FDA approval for human trials in 2023 and implanted its first human subject in early 2024. The stated goal is initially therapeutic — restoring mobility and communication for people with neurological conditions — but the long-term commercial vision extends to cognitive enhancement and eventually to a symbiotic relationship between human cognition and AI.
BCIs raise cognitive liberty questions of distinctive gravity. An implanted BCI that reads neural signals potentially gives its operators access to the content of thought — to intentions before they are expressed, to emotional responses before they are conscious, to cognitive states that a person has not chosen to disclose. Safeguards against the unauthorized use of this data are not merely a privacy matter; they concern the integrity of the mental space that is the most intimate domain of personhood.
AI-powered cognitive assessment tools that do not require BCIs are already in wide deployment. Emotion AI (sometimes called affective computing) uses facial expression analysis, voice analysis, and other behavioral signals to infer emotional states. These tools are used in job interviews, in customer service evaluation, in educational assessment, and in security screening. Neuromarketing services analyze consumer neural responses to advertising stimuli. Predictive behavior analysis tools attempt to infer intentions from digital behavior patterns.
These technologies raise cognitive liberty concerns even without BCIs. If your employer uses emotion AI to assess your emotional state during a performance review, it is accessing a dimension of your inner life that you have not chosen to disclose. If a marketing system infers your emotional vulnerabilities from your digital behavior and uses them to target advertising, it is exploiting your cognitive states for commercial purposes without consent. If a government uses predictive behavior analysis to identify individuals as security risks based on inferred intentions, it is treating unverified inferences about mental states as grounds for action.
A right to cognitive liberty, as Farahany articulates it, would protect: (1) freedom from unwanted access to one's neural or cognitive data; (2) freedom from cognitive manipulation through direct neural or psychological means; and (3) the right to mental privacy — to have an inner life that is not subject to surveillance, disclosure, or exploitation without consent. These protections would need to extend to AI-mediated cognitive surveillance as well as direct neural access.
For business leaders, cognitive liberty has immediate implications. The use of emotion AI in employment contexts — in hiring, in performance management, in customer service — involves accessing dimensions of employee inner life that employment relationships have not historically reached. The deployment of these tools without explicit disclosure and consent may violate cognitive liberty norms, and regulatory frameworks in this area are developing. The EU AI Act's prohibition on real-time biometric identification in public spaces is an early step toward cognitive liberty protections, though it does not yet address the broader landscape of AI-mediated cognitive surveillance.
Section 4: Power Concentration and AI Monopoly
One of the most important structural features of current AI development — and one with profound implications for every other ethical question in this chapter — is the enormous capital requirement for frontier AI training. Training a state-of-the-art large language model requires massive amounts of compute, typically provided by expensive GPU clusters, and vast amounts of curated training data, along with teams of highly skilled researchers, engineers, and operators. The capital expenditure for a single training run of a frontier model is, as of this writing, on the order of hundreds of millions to billions of dollars.
This capital requirement has a straightforward structural implication: frontier AI is an oligopoly. A small number of organizations — primarily OpenAI, Google DeepMind, Anthropic, Meta, and a handful of others, alongside sovereign programs in China — have the resources to train and maintain frontier AI systems. The barriers to entry are not primarily technical; they are financial and logistical. This is different from earlier waves of internet and software innovation, where a small team with modest resources could build a world-changing product. Frontier AI requires capital at the scale of major infrastructure projects.
The governance implications of this oligopoly are underappreciated. When a small number of actors control transformative capabilities, several distinct risks emerge:
Democratic accountability deficit: Decisions about how frontier AI systems are designed, what values they embed, what uses are permitted, and how they are priced are being made by the executives and boards of a handful of private companies. These decisions affect billions of people, but the decision-makers are accountable primarily to shareholders and, secondarily, to the regulators of their home jurisdictions. The populations most affected — including those in countries with less regulatory capacity — have minimal voice.
Market power in downstream industries: Whoever controls frontier AI has enormous leverage over industries that use AI capabilities. If AI-powered legal analysis, medical diagnosis, financial modeling, and educational content all flow through a small number of providers, those providers have market power that extends far beyond their direct businesses. Antitrust law has historically struggled with platform monopoly; AI monopoly will be even more challenging.
Regulatory capture risk: Organizations with sufficient economic significance can shape the regulatory environments in which they operate. The AI oligopoly has significant lobbying resources and employs many of the people who understand frontier AI best — including many who move between industry and regulatory positions. This creates structural incentives for regulatory frameworks that serve incumbent interests rather than public interest.
Geopolitical concentration: The current frontier AI oligopoly is dominated by US-based companies and Chinese sovereign programs. This means that a technology of profound strategic and economic importance is concentrated in the hands of two geopolitical competitors, with the rest of the world dependent on access. Countries and regions that cannot participate in frontier AI development are technologically dependent in a way that has both economic and security implications.
The antitrust question: Whether current AI concentration constitutes an antitrust violation is actively debated by scholars and regulators. Traditional antitrust frameworks focus on price and output effects; AI concentration raises concerns about control of foundational capabilities that are better analyzed as infrastructure monopoly. Some economists argue for treating frontier AI as a natural monopoly subject to public utility regulation; others argue for structural remedies that would reduce concentration.
The nationalization question: In some circles, the question of whether frontier AI should be nationalized — treated as public infrastructure rather than private property — is gaining serious attention. The argument is that capabilities of sufficient strategic and societal importance should not be controlled by private actors accountable only to shareholders. The counterargument is that nationalization would likely slow innovation and concentrate power in states rather than corporations, which may not be an improvement. This debate does not have a clean answer, but it is worth engaging seriously rather than treating it as beyond the pale.
Section 5: Human-AI Relationship Evolution
As AI systems become more capable and more embedded in daily life, the relationship between humans and AI will evolve in ways that raise important questions about human identity, cognition, and social life. These are not questions about individual AI decisions — they are questions about the cumulative effect of AI on what it means to be human.
The dependency risk: Humans routinely form dependencies on tools that extend our cognitive capabilities. Writing is a cognitive prosthetic that externalizes memory. Calculators externalize arithmetic. GPS navigation externalizes spatial reasoning. These dependencies are not, in themselves, harmful — they free human cognition for other tasks and expand what individuals and societies can accomplish. But dependencies also have costs: skills that are externalized are skills that may atrophy, and systems that are depended upon become vulnerabilities if they fail or are withdrawn.
The dependency risks of AI are larger in scale and more complex in character than previous cognitive prosthetics. AI systems are being deployed not just as tools for specific cognitive tasks but as general-purpose cognitive partners — systems that can advise on almost any decision, draft almost any communication, and synthesize almost any information. The dependency created by such systems is diffuse and difficult to manage. Unlike GPS, which creates dependency on one specific cognitive function (spatial navigation), general AI assistance may create dependency on a much broader range of cognitive capacities.
The de-skilling concern: De-skilling refers to the reduction in human skill that can follow automation of tasks that previously required human expertise. The classic case is the replacement of skilled craftwork by mechanized production in the industrial revolution; a contemporary example is the reduction in arithmetic skills that has followed the ubiquity of calculators and computing devices. AI de-skilling concerns are substantial. If AI systems routinely handle complex legal analysis, financial modeling, medical diagnosis, and creative work, what happens to the human expertise that performed those functions? The optimistic view is that humans are freed for higher-level tasks; the pessimistic view is that the capacity for those tasks also erodes if AI handles them routinely.
De-skilling is not inevitable. Educational systems can respond to changing technological environments. Human organizations can make explicit choices to maintain human expertise in critical areas as a resilience measure. But these choices require deliberate attention; by default, economic pressures toward efficiency favor automation, and skills erode accordingly.
Cognitive and social identity: There is a subtler question about what heavy reliance on AI for cognitive tasks does to human identity and social relationships. If your professional judgments are substantially mediated by AI, to what extent are they your judgments? If your communications are substantially written or improved by AI, to what extent are they your communications? These questions are not merely philosophical; they bear on accountability, on professional responsibility, and on the quality of human relationships.
What healthy human-AI relationships look like: Not all human-AI relationships are problematic. The question is not whether to use AI but how. Healthy human-AI relationships are characterized by: (1) maintained human accountability — humans who use AI remain responsible for the outcomes of AI-assisted decisions; (2) maintained human competence — humans retain the skills and judgment to evaluate AI outputs and to function without AI when necessary; (3) transparency — humans who use AI are honest about that use with those affected by their decisions; and (4) appropriate scope — AI is used for tasks where AI assistance genuinely improves outcomes, not as a substitute for human judgment in tasks where human judgment is essential.
These characteristics are aspirational; they require active cultivation by individuals, organizations, and educational systems. They will not emerge automatically from a market that rewards efficiency over resilience and novelty over continuity.
Section 6: Climate Change and AI — The Compound Risk
The environmental costs of AI are discussed in Chapter 31 in the context of direct resource consumption: the energy required for training and inference, the water used to cool data centers, the raw materials required for the hardware. These are significant and growing. AI training runs are among the most energy-intensive computational tasks ever performed, and the exponential growth in AI deployment means that aggregate energy consumption is growing faster than efficiency improvements can offset.
But there is a more complex version of the climate-AI relationship that goes beyond direct resource consumption. AI may be accelerating climate change not only by consuming energy but by accelerating the economic activity that produces emissions. Historically, productivity gains from technology have tended to increase overall economic output and therefore overall energy consumption, even when they improve energy efficiency per unit of output — the so-called Jevons paradox. If AI substantially increases economic productivity, it may increase total energy consumption and total emissions, even if energy efficiency per unit of AI-mediated activity improves.
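The Jevons paradox is easiest to see with a small worked calculation. The numbers below are hypothetical, chosen only to illustrate the arithmetic; they are not measurements of any real AI system.

```python
# Illustrative figures: per-task efficiency improves 2.5x, but because the
# task becomes cheaper, demand for it quadruples.
energy_per_task_before = 1.0   # kWh per unit of work, pre-AI
energy_per_task_after = 0.4    # kWh per unit of work, with AI
tasks_before = 1_000_000       # units of work performed per year
tasks_after = 4_000_000        # units of work after demand expands

total_before = energy_per_task_before * tasks_before
total_after = energy_per_task_after * tasks_after

# Efficiency per task improved, yet total consumption rose.
print(f"Total before: {total_before:,.0f} kWh")
print(f"Total after:  {total_after:,.0f} kWh")
print(f"Change: {total_after / total_before - 1:+.0%}")
```

In this stylized case, a 2.5x efficiency gain paired with a 4x expansion in activity yields a 60 percent increase in total energy consumption — efficiency per unit and aggregate impact move in opposite directions.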
There is also a question about what AI optimizes for. When AI systems optimize supply chains, they optimize for cost and speed. When they optimize marketing, they optimize for conversion rates and revenue. When they optimize financial markets, they optimize for returns. None of these optimization objectives have climate as a primary consideration, and all of them, at scale, may produce choices that are carbon-intensive. The fact that AI can be used for climate optimization — modeling climate systems, optimizing renewable energy grids, accelerating materials research — does not offset this without deliberate design choices that align AI objectives with climate goals.
The governance question is what institutions and incentives would be required to ensure that AI's net effect on climate is positive rather than negative. The obvious lever is carbon pricing — if energy is priced to reflect its climate cost, market incentives would push AI development and deployment toward energy efficiency. But carbon pricing is politically difficult, and the AI industry is not currently required to account for its climate costs in the way that, for instance, heavy industry is.
There is also a question about the geography of AI infrastructure. Data centers tend to locate where energy is cheap, which often means where it is also carbon-intensive. Governance frameworks that require AI infrastructure to use renewable energy — as some large cloud providers have begun doing voluntarily — could substantially reduce the sector's direct climate impact.
For business leaders, the climate-AI relationship raises practical questions. What is the carbon cost of your organization's AI deployment? Is that cost offset by AI's contributions to efficiency and climate-relevant optimization? Are your AI vendors' energy sources aligned with your climate commitments? These questions are becoming material to corporate climate reporting and to stakeholder expectations.
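A first-pass answer to "what is the carbon cost of our AI deployment?" can come from a back-of-the-envelope estimate. Every figure below is an illustrative assumption, not a vendor-reported number; a real assessment would use measured energy data and the actual carbon intensity of the grid serving the data center.

```python
# Assumed inputs -- replace with measured values for a real assessment.
queries_per_day = 50_000
energy_per_query_kwh = 0.003      # assumed energy per inference request
grid_intensity_kg_per_kwh = 0.4   # assumed grid carbon intensity (kg CO2e/kWh)

daily_kwh = queries_per_day * energy_per_query_kwh
annual_tonnes_co2 = daily_kwh * grid_intensity_kg_per_kwh * 365 / 1000

print(f"Daily energy: {daily_kwh:,.0f} kWh")
print(f"Annual emissions: {annual_tonnes_co2:,.1f} tonnes CO2e")
```

Even a rough estimate like this makes the questions in the paragraph above concrete: it identifies which variables the organization controls (query volume, model efficiency, data center location) and which lever — here, grid carbon intensity — a renewable energy commitment from a vendor would change.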
Section 7: Global Power Dynamics and AI
The geopolitical dimension of AI development cannot be separated from its ethical dimension. AI capabilities are increasingly understood by states as instruments of national power — economic, military, and informational. The US-China AI competition is the most consequential dimension of this, but the dynamics are global and multi-polar in ways that a US-China frame can obscure.
The US-China AI competition: The United States and China are engaged in a sustained competition for AI dominance that involves investment in domestic AI research and development, restrictions on technology transfer (including US export controls on advanced semiconductors), efforts to attract and retain AI talent, and competition to set international AI governance standards. Both countries have explicitly identified AI leadership as a national strategic priority.
The ethical implications of this competition are layered. At the most direct level, AI systems developed in the US and China reflect different values, different legal frameworks, and different assumptions about the relationship between individual privacy and state authority. AI systems trained and deployed under Chinese state oversight may be designed to serve state surveillance objectives in ways that US-origin systems are not (and vice versa in ways relevant to different contexts). When these systems are adopted in third countries, they import their underlying value structures.
At a systemic level, the US-China AI competition creates dynamics that are familiar from other arms races: pressure to deploy faster than is safe, resistance to governance arrangements that might advantage competitors, and escalation spirals that make cooperative regulation difficult. The history of nuclear weapons governance is informative but not perfectly analogous: AI capabilities are more diffuse and dual-use than nuclear weapons, making arms control frameworks harder to design and verify.
The decisive advantage question: Some analysts raise the prospect of one country achieving "decisive advantage" in AI — capabilities substantially superior to all competitors that translate into military, economic, and political dominance. This is, at present, speculative. AI capabilities are not cleanly hierarchical in the way that some military technologies are, and the advantages of superior AI depend on the specific applications. But the fear of decisive advantage creates incentives for the behaviors — rapid deployment, governance resistance, aggressive acquisition of resources and talent — that make the competition most dangerous.
What international AI governance needs to accomplish: The fundamental requirements for international AI governance are tractable in principle, even if politically difficult in practice. They include: (1) information sharing about AI capabilities and incidents, so that the international community can assess risks without requiring full transparency about competitive capabilities; (2) agreed limits on high-risk AI applications, particularly in military and security contexts; (3) technical standards for AI safety and security that apply across jurisdictions; (4) mechanisms for addressing AI-enabled harms that cross borders; and (5) institutional capacity in developing countries to participate meaningfully in AI governance.
None of this is easy. The closest analogy may not be nuclear weapons governance but financial regulation: another domain where the relevant actors are a combination of states and private entities, where the technology is dual-use, and where international coordination is necessary but politically contentious. The Financial Stability Board, which coordinates global financial regulation, is an imperfect model but a more achievable one than the International Atomic Energy Agency.
Section 8: The Regulatory Trajectory
AI regulation is in active formation in most major jurisdictions, and the frameworks being established now will shape the AI landscape for decades. Understanding where regulation is heading — and where its limits are — is practically important for organizations operating in the AI space and for citizens who have stakes in how AI is governed.
The EU AI Act as Model and Precedent: The EU AI Act, which came into force in 2024, is the most comprehensive AI regulation yet adopted by a major jurisdiction. Its risk-based framework — categorizing AI uses by risk level and imposing requirements proportionate to risk — is the most likely template for AI regulation globally, in the same way that GDPR became a template for data protection regulation. The Act's prohibitions (including real-time biometric surveillance and social scoring), its requirements for high-risk AI systems, and its transparency requirements for general-purpose AI will shape how companies develop and deploy AI in the EU and, through the Brussels Effect, in markets beyond it.
The EU AI Act is not without critics. Industry argues that some requirements are unworkable and will disadvantage EU-based AI development relative to competitors. Civil society critics argue that the Act's exceptions for national security and law enforcement create significant gaps. Academics debate whether the risk classification framework adequately captures the most concerning AI uses. These critiques are legitimate, and the Act will certainly require revision as the technology and its applications evolve.
What the next generation of AI law will address: Beyond existing frameworks, several areas of AI law are in early development and likely to mature over the next decade. These include:
- Liability for AI harms: Who bears legal responsibility when AI systems cause harm? Current product liability and tort frameworks are inadequate for AI's complexity and opacity. New liability frameworks — potentially including strict liability for high-risk AI applications — are under development in multiple jurisdictions.
- AI and employment law: The use of AI in hiring, management, and termination raises employment discrimination concerns that existing anti-discrimination law does not fully address. New frameworks for AI-mediated employment decisions are emerging in multiple jurisdictions, including New York City's Local Law 144.
- AI and intellectual property: Who owns outputs generated by AI systems trained on copyrighted material? Cases working through courts in the US and EU will eventually produce legal clarity, but the landscape is in flux.
- AI in critical infrastructure: Healthcare, financial services, transportation, and energy each operate under distinct regulatory regimes. Each will need AI-specific governance provisions, and coordination across these regimes is a major governance challenge.
The role of democratic deliberation: There is a genuine question about who shapes the AI regulatory future. The people most technically qualified to understand AI capabilities are often employed by the companies being regulated. The regulatory agencies with jurisdiction over AI are often under-resourced for the technical complexity of the task. Democratic legislatures move slowly and their members have highly variable technical expertise. Civil society AI expertise is growing but remains substantially smaller than industry expertise.
The result is a governance process in which technical experts employed by the AI industry have disproportionate influence, despite the existence of formal democratic oversight. Addressing this requires investment in public AI expertise — in regulatory agencies, in academic institutions, in civil society organizations — as well as in deliberative mechanisms that can incorporate the perspectives of affected communities who may not have technical expertise but do have direct stakes in governance outcomes.
Section 9: AI Ethics as a Practice
Thirty-nine chapters of an AI ethics textbook could be mistaken for a comprehensive set of rules to follow. That would be a misreading. Ethics is not a rule book; it is a practice — a discipline of ongoing reflection, judgment, and action that must be cultivated and exercised rather than taken off a shelf and applied.
The distinction matters practically. Organizations that approach AI ethics as a compliance exercise — checking boxes against a list of prohibitions, obtaining audits that certify they have met requirements, publishing policies that are more aspirational than operational — are doing something importantly different from organizations that are genuinely cultivating ethical AI culture. The former produces paper compliance; the latter produces actual change in how AI decisions are made.
What does AI ethics as a practice require?
Diverse voices and perspectives: The homogeneity of AI development teams — in terms of gender, race, national origin, socioeconomic background, and professional background — has been consistently documented as a source of ethical blind spots. Systems are built to solve the problems their designers recognize, for users who resemble their designers, using data that reflects the world their designers inhabit. Expanding the perspectives represented in AI development is not a diversity and inclusion exercise separate from AI ethics; it is a core requirement for doing AI ethics well.
This is not only about hiring. It requires actively seeking out the perspectives of communities that will be affected by AI systems in designing, testing, and evaluating those systems. It requires organizational structures that give those perspectives genuine influence, not merely token representation. It requires paying specific attention to global variation in the populations affected by AI and the regulatory environments in which AI operates — a system that works well for American users may work poorly or harmfully for users in contexts where the training data, language, and social assumptions are different.
The long-term cultivation of ethical AI culture: Ethical AI culture is not built through compliance programs. It is built through the daily practice of ethical decision-making at every level of an organization — from the researchers designing algorithms to the product managers setting requirements to the executives approving deployment. It requires mechanisms for raising ethical concerns safely and for taking those concerns seriously. It requires leadership that models ethical reasoning, not just ethical pronouncements.
Organizational culture is slow to build and fragile. The AI companies that claim the most sophisticated ethical cultures have also produced some of the most consequential ethical failures — a fact that should induce humility about the gap between culture and practice. Building ethical culture in AI organizations is a long-term project that requires sustained attention, honest assessment, and willingness to accept short-term costs for long-term ethical integrity.
Beyond frameworks: developing ethical judgment: Rules and frameworks are tools for ethical reasoning, not substitutes for it. The ethics of a specific AI deployment decision cannot usually be resolved by consulting a framework; it requires judgment — practical wisdom in the classical sense — that combines general principles with specific contextual knowledge. Developing this judgment requires practice, reflection, and exposure to diverse cases and perspectives.
This is part of why AI ethics education matters: not to produce people who have memorized frameworks, but to develop people who can reason well about new situations. The cases in this book are not just examples; they are occasions for developing the kind of judgment that will be needed for the cases that don't yet exist.
Section 10: A Call to Action
This book has covered an enormous amount of ground. It has examined the technical foundations of AI decision-making, the history of AI ethics failures, the legal frameworks emerging to govern AI, and the philosophical questions at the edge of what we know. The final obligation of any serious treatment of AI ethics is to connect what has been learned to what should be done.
At the individual level: The most important thing any individual can do is take the questions seriously. This means being an informed user of AI systems — understanding their limitations and biases, questioning outputs that deserve questioning, maintaining the cognitive independence that AI dependency threatens. It means being honest, in professional contexts, about what AI did and did not contribute to one's work. It means speaking up — using the whistleblowing frameworks discussed in Chapter 22, the organizational voice mechanisms discussed throughout the book — when AI deployments raise ethical concerns.
For those in technical roles, individual responsibility extends to design choices: refusing to build systems that are designed to deceive, discriminate, or harm, even when instructed to do so. The history of technology ethics is a history of individuals who drew lines and individuals who did not. The lines that were drawn matter.
At the organizational level: Organizations that take AI ethics seriously — not as compliance but as genuine ethical practice — can build competitive advantage. Trust is increasingly a scarce resource in a world of AI-mediated decisions, and organizations that have demonstrably earned trust have something that cannot easily be replicated. Building this requires investment: in AI ethics infrastructure, in diverse teams, in honest assessment of AI impacts, and in the organizational courage to slow or halt AI deployments that don't meet ethical standards.
The argument for urgency is real. AI capabilities are advancing faster than governance, and the window for establishing good norms is narrowing. Organizations that establish ethical AI practices now, when the costs are manageable, will be better positioned than those forced to respond to regulatory action, legal liability, or public scandal.
At the political level: The governance frameworks that will shape AI for decades are being designed now, in legislative bodies, regulatory agencies, and international forums. The organizations and individuals that engage with these processes — that contribute to public comment periods, that fund civil society AI policy work, that participate in democratic deliberation about AI governance — are helping to shape those frameworks. Those who do not engage are ceding that influence to those who do.
The argument for hope is also real. The history of technology and society is not only a history of harm; it is also a history of democratic societies successfully governing powerful technologies — aviation, nuclear power, pharmaceuticals, environmental hazards. These governance efforts were imperfect and slow and contested, but they worked. AI governance can work too. What it requires is the sustained political will that comes from citizens, organizations, and leaders who understand the stakes and are willing to act on that understanding.
What a good outcome looks like: A good outcome for AI is not a world without AI risk. It is a world in which AI capabilities are harnessed for broad human benefit; in which the benefits and burdens of AI are distributed equitably across populations and across the world; in which meaningful human accountability is maintained for AI-mediated decisions; in which diverse perspectives shape AI development and governance; and in which the development of AI capabilities is matched by the development of governance capacity.
This is achievable. It requires work — technical work, institutional work, political work, and the harder work of sustained ethical practice. This book is part of that work. So, now, is whatever you do next.
Conclusion: Staying Humble, Staying Engaged
The most important intellectual virtue for navigating AI ethics in the years ahead is epistemic humility: the willingness to hold one's views with appropriate uncertainty, to update in response to evidence, and to acknowledge what is genuinely unknown. This is not comfortable. The public discourse about AI, on all sides, tends toward confidence: confident warnings of catastrophe, confident predictions of utopia, confident dismissals of concern. The evidence base rarely supports this confidence.
What we know is that AI is a powerful technology whose applications are expanding rapidly, whose governance frameworks are inadequate and in formation, and whose effects on human society are large and incompletely understood. What we don't know is how the technology will develop, what its most significant effects will be, or what governance approaches will prove most effective.
What follows from this is not paralysis but engaged uncertainty — the disposition to act carefully and thoughtfully in conditions of genuine ignorance, to maintain multiple hypotheses rather than premature closure, to learn from what happens and adjust accordingly. This is what ethics in conditions of uncertainty looks like. It is harder than following rules. It is the only thing that actually works.
Key Terms
Collingridge Dilemma: The paradox that a technology's impacts cannot be predicted until it is widely adopted, but once widely adopted, controlling its impacts is difficult.
Agentic AI: AI systems that take sequences of actions in pursuit of goals, with limited step-by-step human oversight.
Principal-Agent Problem: The misalignment between a principal (who sets goals) and an agent (who acts on their behalf) when they have different information or interests.
Cognitive Liberty: The right to mental self-determination, including freedom from unwanted access to, manipulation of, or surveillance of one's cognitive states.
Brussels Effect: The phenomenon whereby EU regulations effectively set global standards because multinational companies must comply and find it more efficient to apply the same standards globally.
Jevons Paradox: The historical tendency for efficiency improvements in resource use to increase total resource consumption by making the activity more economical.
Decisive Advantage: The hypothetical achievement by one actor of AI capabilities substantially superior to all competitors, translating into strategic dominance.
Epistemic Humility: The intellectual virtue of holding one's views with appropriate uncertainty and being willing to update in response to evidence.
Horizon Scanning: A systematic method for identifying emerging trends and their potential consequences before those consequences materialize.
Ethical Foresight: The practice of identifying the ethical implications of emerging technologies before specific systems are deployed and harms occur.