Part 6: Societal Impact and Governance — AI at the Scale of Society
Introduction
Every part of this book up to this point has examined AI ethics primarily at the level of organizations and individuals: the companies that build and deploy AI systems, the professionals who make decisions about them, and the people directly affected by specific algorithmic decisions. Part 6 zooms out. It asks what happens when AI's effects are not isolated incidents affecting individual people but structural changes affecting entire societies — the labor market, democratic institutions, the justice system, the natural environment, the global distribution of economic and political power.
At this scale, the ethical questions change in character. The harms at stake are not always traceable to specific decisions by specific actors. The timelines are longer. The affected populations include people who are not yet alive. The feedback loops between AI systems and social structures run in multiple directions: AI shapes institutions that shape AI. And the governance responses that are adequate at the organizational level — an ethics board, an audit process, a disclosure policy — are plainly insufficient when the question is whether AI will concentrate or distribute economic power at a global scale, or whether algorithmic content curation will strengthen or erode the epistemic foundations of self-governance.
Part 6 is not pessimistic. It does not treat AI's societal effects as predetermined or its governance challenges as insurmountable. What it does insist upon is honest confrontation with the scale of the stakes. The chapters in this part address questions that organizations cannot answer alone, that existing national regulatory frameworks are inadequate to address, and that will shape human societies for generations. Business professionals working on AI systems are participating in these larger dynamics whether they know it or not. Understanding them is part of the responsibility that comes with that participation.
From Organizational to Societal Ethics
The transition from organizational to societal ethics requires a shift in analytical frame. At the organizational level, the primary questions are: what should this organization do, what can it be required to do, and who is accountable when it does the wrong thing? At the societal level, the primary questions are: what are the structural dynamics that AI creates or accelerates, what collective responses are adequate to those dynamics, and how should governance authority be distributed across national governments, international institutions, civil society, and the private sector?
These questions do not replace the organizational questions. Organizations remain the primary sites where AI is built and deployed, and organizational ethics remains essential. But organizational ethics is insufficient to address effects that emerge from the interaction of many organizations' decisions and that fall on populations that no single organization's accountability framework encompasses. Part 6 develops the analytical vocabulary and empirical grounding for engaging with AI ethics at the societal scale.
Chapter Previews
Chapter 28: AI and the Future of Employment
The relationship between AI and employment is one of the most consequential and most contested questions in contemporary economics and policy. This chapter surveys the evidence on AI's effects on labor markets — which jobs are being automated, which are being augmented, what the distributional consequences of displacement are, and what policy responses (retraining, social insurance, new labor market institutions) are on the table. It argues that the relevant questions are not just economic but ethical: who bears the transition costs, who captures the productivity gains, and what obligations do organizations deploying labor-displacing AI have to the workers affected?
Chapter 29: AI and Democracy
AI is reshaping the informational environment in which democratic politics takes place. This chapter examines the documented effects of algorithmic content curation on political polarization, the use of AI-generated disinformation and synthetic media to manipulate political discourse, the deployment of AI in political advertising and voter targeting, and the emerging use of AI in election administration. It also examines more speculative but consequential possibilities: what happens to democratic deliberation when AI systems can generate persuasive political content at scale and at negligible cost?
Chapter 30: AI in Criminal Justice
The use of AI in criminal justice — predictive policing, risk assessment at bail and sentencing, automated parole decisions, facial recognition in law enforcement — is among the most ethically fraught applications in contemporary AI. This chapter examines the documented racial disparities in these systems, the due process concerns raised by algorithmic decision-making in legal proceedings, and the fundamental tension between the efficiency claims advanced for criminal justice AI and the individual rights those systems affect. It also examines the movement for restrictions on specific criminal justice AI applications and the regulatory and legislative responses in multiple jurisdictions.
Chapter 31: AI and the Environment
AI's environmental footprint is large and growing. Training large AI models consumes enormous quantities of energy and water. Cryptocurrency mining, which shares infrastructure with some AI workloads, has measurable local and global environmental effects. At the same time, AI is being used for environmental monitoring, energy optimization, and climate modeling in ways that may produce significant environmental benefits. This chapter examines both sides of the AI-environment relationship, with attention to the distributional questions of who bears the environmental costs and who captures the environmental benefits of AI deployment.
Chapter 32: Global AI Governance
AI development is global, but governance is national. This structural mismatch creates regulatory arbitrage opportunities, coordination failures, and the possibility that AI development races to the bottom on safety and ethics standards. This chapter surveys the landscape of international AI governance efforts — from the OECD AI Principles to the UN Secretary-General's AI Advisory Body, from bilateral technology agreements to the emerging "AI governance" agenda in multilateral institutions — and assesses their adequacy relative to the governance challenges AI creates. It also examines how different political economies and value systems produce different approaches to AI governance.
Chapter 33: AI Regulation — Comparative Approaches
The EU AI Act, the US approach of sector-specific guidance, the UK's pro-innovation framework, China's algorithmic regulation, and the various national AI strategies that have proliferated since 2017 represent genuinely different approaches to AI regulation. This chapter analyzes these comparative approaches — their underlying theories of regulation, their enforcement mechanisms, their treatment of high-risk AI applications, and their implications for organizations operating across jurisdictions. It also examines the limits of national regulation as a response to a global technology and the ongoing debate about what international coordination would require.
Chapter 34: AI Ethics in Emerging Markets and the Global South
The AI ethics discourse has been dominated by perspectives from North America, Europe, and East Asia. This chapter examines how AI's risks and opportunities appear from the perspective of emerging economies in the Global South, where AI deployment is accelerating, regulatory capacity is often limited, and both the harms and the potential benefits of AI may differ significantly from those in high-income contexts. It also examines the structural dynamics of AI development — the concentration of AI capabilities in a small number of countries and companies — and their implications for global economic equity and technological sovereignty.
Key Questions This Part Addresses
- What are AI's most significant effects on employment and economic distribution, and what obligations do organizations deploying AI have to affected workers?
- How is AI reshaping the informational and institutional foundations of democratic governance, and what governance responses are adequate?
- What does fairness require in the specific context of criminal justice AI, where the stakes are liberty and the power asymmetries are extreme?
- How should the environmental costs of AI be accounted for and allocated, and what does responsible environmental stewardship require of AI developers and deployers?
- What are the structural limitations of national AI regulation, and what would adequate global AI governance look like?
- Whose perspectives and whose values are reflected in the dominant AI ethics frameworks, and how does the view from emerging economies challenge or complicate those frameworks?
The Five Recurring Themes in Part 6
Power distribution is the master theme of this part, operating at the level of entire societies and global institutions. AI is not a neutral technology that distributes its effects evenly. It tends to concentrate power — economic, informational, and political — in the hands of those who already have it: large technology companies, high-income countries, and the governments and institutions that have the capacity to regulate and deploy AI at scale. Every chapter in this part is partly a map of how this concentration of power operates and what it implies.
Who bears harms and who captures benefits takes on its most consequential form at the societal scale. When AI automates manufacturing jobs, the displaced workers bear the transition costs while the productivity gains accrue primarily to capital owners. When AI is used in criminal justice, the efficiency benefits accrue to the system while the due process costs fall on defendants who are disproportionately poor and from marginalized communities. When AI consumes environmental resources, the benefits of AI applications may be distributed broadly while the environmental costs are concentrated locally. Part 6 traces these distributional structures throughout.
Governance under uncertainty is the defining challenge of Chapters 32 and 33. AI governance at the international level is being built largely from scratch, under conditions of significant uncertainty about the technology's future trajectory, significant disagreement about appropriate regulatory approaches, and significant power asymmetries among the countries and institutions involved. The chapters in this part do not pretend that governance is more settled than it is.
Innovation versus precaution runs through Chapter 29 (Democracy) and Chapter 30 (Criminal Justice) in particular, where the efficiency and accuracy claims made for AI applications must be weighed against risks — to democratic discourse, to due process — that are structural rather than individual and that may be difficult to reverse once the systems are embedded in social practice.
Technical systems and human values takes its most expansive form in Part 6. At the societal level, the values at stake — democratic self-governance, global equity, environmental sustainability, cultural diversity — are not the values of any single individual or organization but the values embedded in the institutions and traditions through which societies organize their collective life. AI's effects on these values require analysis that is both more ambitious and more uncertain than the organizational ethics that earlier parts developed.
Cross-References Within Part 6
Chapter 28 (Employment) and Chapter 31 (Environment) share a common analytical structure: both examine situations in which AI creates externalities — costs that fall on parties outside the AI development and deployment relationship — and ask what governance responses are appropriate. Reading them together illuminates the general problem of AI externalities that runs through the entire part.
Chapters 32 and 33 (Global Governance and Comparative Regulation) form a pair and should be read sequentially. Chapter 32 examines the structural challenges of governing a global technology through international institutions; Chapter 33 examines how different national approaches have responded to those challenges. Together they constitute the book's most complete treatment of AI regulatory design.
Chapter 30 (Criminal Justice) connects directly back to Chapter 9 (Measuring Fairness) and Chapter 26 (Biometrics) in earlier parts. The fairness measurement concepts developed in Chapter 9 are directly applicable to evaluating criminal justice AI systems, and the biometric technology examined in Chapter 26 is a key component of the law enforcement AI landscape examined in Chapter 30. Readers of Chapter 30 who have not yet read Chapters 9 and 26 will benefit from doing so first.
Chapter 34 (Emerging Markets) is a corrective to the entire preceding analysis in this part — and in the book as a whole. The governance frameworks examined in Chapters 32 and 33 were largely designed by and for high-income countries. Chapter 34 examines both their limitations from a global perspective and the alternative frameworks being developed in contexts where AI's opportunities and risks look different.
Chapters in This Part
- Chapter 28: AI and Employment — Disruption and Opportunity
- Chapter 29: AI and Democratic Processes
- Chapter 30: AI in Criminal Justice Systems
- Chapter 31: The Environmental Cost of AI
- Chapter 32: Global AI Governance Frameworks
- Chapter 33: Regulation and Compliance — GDPR, EU AI Act, and Beyond
- Chapter 34: AI Ethics in Emerging Markets