Chapter 4: Stakeholders in the AI Ecosystem

"Technology is neither good nor bad; nor is it neutral." — Melvin Kranzberg, Kranzberg's First Law of Technology (1986)


Opening: Who Is in the Room?

It is a Tuesday morning in a mid-sized American city. The police chief is presenting to the city council. On screen is a map of the city overlaid with heat zones — red blocks indicating areas where the department's new predictive policing algorithm expects crime to occur in the next 24 hours. The system is called Geolitica (formerly PredPol). Officers will be dispatched to red zones before crimes happen. The department calls it data-driven policing. The council votes to continue the program. The meeting lasts forty-five minutes.

Now make a list of every party affected by this decision.

Start with the obvious ones: the police department and its officers, who will now patrol different areas. The mayor's office, which approved the budget. The technology vendor, which earned a contract worth hundreds of thousands of dollars per year. The city council members, who will face voters if crime rates change.

Keep going. The residents of the neighborhoods flagged as high-risk — who will now be subject to intensified police presence they did not request and did not consent to. The young men in those neighborhoods, who will be stopped and questioned at higher rates because an algorithm assigned risk to their block, not to them individually. The families of those men. The community organizations that work in those neighborhoods. The local businesses whose foot traffic will shift. The journalists covering city hall who may or may not understand what the algorithm actually does.

Continue. The civil liberties lawyers who will eventually challenge the system's constitutionality. The academic researchers at UCLA who will spend three years documenting whether the algorithm is simply amplifying historical arrest patterns — patterns shaped by decades of racially discriminatory policing. The ACLU of California. The residents of Santa Cruz who will watch their city become the first in the United States to ban predictive policing, in 2020, after a sustained campaign by exactly the kinds of people who had no voice in the original Tuesday morning meeting.

Further still. The officers in other cities watching this deployment as a case study. The voters in the next election cycle who will choose between candidates with different views on algorithmic policing. Future residents of the city whose relationship with law enforcement will be shaped by decisions made this Tuesday. Children in the red-zone neighborhoods who are growing up under a surveillance infrastructure they will inherit.

The list is much longer than most technology deployment decisions acknowledge. And here is the critical observation: of all the parties just named, the vast majority had no seat at the table on that Tuesday morning. The people with the most at stake — residents of over-policed neighborhoods, data subjects whose historical arrest records fed the model's training data, community members who will bear the consequences for years — were not consulted, not represented, and in many cases not even known to the decision-makers in the room.

This chapter is about that gap. It is about who counts as a stakeholder in AI systems, why the full list is always longer and more diverse than it first appears, what happens when powerful stakeholders make decisions that impose costs on powerless ones, and what genuine stakeholder engagement looks like as opposed to the performative version that has become disturbingly common in corporate AI ethics programs.


Learning Objectives

By the end of this chapter, you should be able to:

  1. Define "stakeholder" in the context of AI systems and articulate why the standard business definition of stakeholder is inadequate for AI ethics purposes.
  2. Map the AI value chain from research labs through to affected communities, identifying the distinctive interests, power, and responsibilities of each tier.
  3. Identify internal organizational stakeholders in AI development and deployment, including the ethical tensions inherent in each role.
  4. Identify external stakeholders — including regulators, civil society organizations, academic researchers, and international bodies — and explain their roles in AI accountability.
  5. Explain why data subjects and affected communities are systematically underrepresented in AI governance, and articulate the ethical and practical consequences of that underrepresentation.
  6. Apply a structured stakeholder analysis methodology to a novel AI deployment scenario, including stakeholder mapping, interest assessment, and engagement strategy design.
  7. Analyze stakeholder conflicts — particularly conflicts between powerful stakeholders (shareholders, executive teams) and less powerful ones (affected communities, workers) — and evaluate governance structures designed to produce more equitable outcomes.
  8. Compare how different regulatory regimes (EU, US, China, Global South) construct the category of "stakeholder" differently, with concrete implications for who has rights and recourse.

Section 4.1: Why Stakeholder Analysis Matters

The Blind Spot of Technology-Centric AI Development

When engineers build a machine learning model, they are solving an optimization problem. They define a metric — accuracy, precision, recall, F1 score, click-through rate, time-on-platform — and they train the model to maximize it. This is technically rigorous and often produces systems that perform impressively on the chosen metric. The problem is that the choice of metric is itself a value judgment, and that value judgment typically reflects the interests of whoever commissioned the system — not the full range of parties who will be affected by it.

The field has a name for this: Goodhart's Law, often stated as "when a measure becomes a target, it ceases to be a good measure." When a predictive policing algorithm is optimized to predict crime using historical arrest data, it learns to predict policing — because arrests are a product of policing decisions as much as of actual criminal behavior. When a content recommendation algorithm is optimized for engagement, it learns that outrage and fear generate more clicks than accurate, boring news. When a hiring algorithm is optimized to predict who will succeed in a job using historical hiring data, it learns to reproduce whatever biases shaped those historical decisions.

In each case, the optimization is technically coherent. The ethical failure is not in the math. It is in the choice of what to optimize for, and for whose benefit. That choice is almost always made by a small group of powerful stakeholders, typically with significant financial interests in a particular outcome, and almost never with meaningful input from the parties who will bear the costs.

Freeman's Stakeholder Theory Applied to AI

Business ethics has had a framework for this problem since 1984, when philosopher R. Edward Freeman published "Strategic Management: A Stakeholder Approach." Freeman's core argument was that the purpose of a firm is not solely to maximize returns to shareholders, but to create value for all stakeholders — defined as any group or individual who can affect, or is affected by, the achievement of the organization's objectives.

Freeman's stakeholder theory was controversial when published because it challenged the dominant Friedman doctrine that the social responsibility of business is to increase its profits. Four decades later, Freeman's framework has become mainstream in business education even as genuine stakeholder engagement remains the exception rather than the rule.

Applied to AI, stakeholder theory produces a much larger and more complex stakeholder map than the typical technology project acknowledges. An AI system does not merely affect the company that built it and the customers who pay for it. It affects everyone who interacts with it, everyone whose data trained it, everyone who is subject to decisions it influences, and — through its systemic effects — communities, institutions, and social structures far beyond the system's immediate users.

Three Categories: Shaping, Deploying, Shaped

For working purposes, it is useful to organize AI stakeholders into three broad categories:

Those who shape AI: Research institutions that set the intellectual agenda. Technology companies that design and train foundation models. Governments and funders that direct resources toward particular research questions. Standards bodies that define what "safe" or "fair" AI means. These stakeholders exercise upstream influence — they determine what kinds of AI get built and on what terms.

Those who deploy AI: Organizations that purchase, configure, and implement AI systems. Product managers who decide which AI capabilities to incorporate into products. Systems integrators who connect AI components into operational workflows. These stakeholders exercise midstream influence — they determine who gets exposed to which AI systems and under what conditions.

Those who are shaped by AI: End users who interact with AI systems directly. Data subjects whose information trains AI systems, often without their knowledge. Affected communities whose lives are altered by AI-influenced decisions. Future generations who will inherit the AI infrastructure, norms, and institutions we create today. These stakeholders experience AI's effects downstream, often with little ability to influence what happens to them.

The crucial observation is that power flows in the opposite direction from impact. Those who shape AI — researchers, foundation model providers, enterprise technology companies — hold the most power in the AI ecosystem. Those who are shaped by AI — particularly affected communities and data subjects — experience the greatest impact while holding the least power. This is the defining power asymmetry of the AI era.
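This inversion between power and impact can be made concrete with a small power/impact grid, in the spirit of classic stakeholder-mapping matrices. The sketch below is purely illustrative: the stakeholder names, numeric scores, and the 0-to-10 scale are all inventions for this example, not an established measurement.

```python
# Hypothetical power/impact scores (0-10) for representative stakeholders.
# The scores are illustrative, not empirical measurements.
stakeholders = {
    "foundation_model_provider": {"power": 9, "impact": 3},
    "enterprise_buyer":          {"power": 6, "impact": 4},
    "end_user":                  {"power": 3, "impact": 7},
    "data_subject":              {"power": 1, "impact": 8},
    "affected_community":        {"power": 1, "impact": 9},
}

def quadrant(s: dict) -> str:
    """Classify a stakeholder into one of four power/impact quadrants."""
    power = "high" if s["power"] >= 5 else "low"
    impact = "high" if s["impact"] >= 5 else "low"
    return f"{power} power / {impact} impact"

for name, scores in stakeholders.items():
    print(f"{name}: {quadrant(scores)}")
```

Running the sketch makes the asymmetry visible at a glance: the parties who shape AI land in the high-power/low-impact quadrant, while data subjects and affected communities land in low-power/high-impact — exactly the quadrant that stakeholder analysis exists to surface.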

The Representation Problem

Who is represented in AI development? The answer is well-documented and deeply skewed. A 2019 report by the AI Now Institute, Discriminating Systems, found that women represent less than 20% of AI research faculty and about 15% of AI research staff at major technology companies. Black and Latino workers are severely underrepresented in the AI industry even relative to their share of the technology workforce more broadly. The geographic concentration is similarly stark: the overwhelming majority of foundation model development occurs in a small number of cities — primarily San Francisco, Seattle, and New York in the United States, plus London, Beijing, and a handful of others globally.

The people building AI systems are not a representative sample of humanity. They are disproportionately male, disproportionately white or Asian, disproportionately from elite universities, disproportionately from high-income backgrounds, and disproportionately concentrated in a few wealthy countries. This matters because the blind spots, assumptions, and preferences of the people who build a system are embedded in that system — in the choice of training data, the definition of success metrics, the identification of edge cases worth worrying about, and the prioritization of which users to serve.

Vocabulary Builder

Stakeholder: Any individual, group, or organization that can affect or is affected by the achievement of an organization's objectives (Freeman, 1984). In the AI context, stakeholders include not only those with a formal relationship to the AI system (employees, customers, investors) but also parties who are affected by the system's outputs without having chosen to engage with it.

Data subject: The identifiable individual to whom personal data relates. The term originates in EU data protection law (GDPR Article 4) but has broader ethical relevance as a category describing people whose information is collected, processed, or used to train AI systems, often without their meaningful awareness or consent.

Principal-agent problem: A conflict of interest that arises when one party (the agent) is expected to act on behalf of another (the principal) but has different interests. In AI ethics, this pattern appears at multiple levels: AI systems that are agents whose objectives may diverge from their users' interests; data scientists who are agents whose professional incentives may not align with affected communities' interests; and corporate ethics teams that are nominally agents for ethical values but whose organizational power is typically subordinated to commercial objectives.

Power asymmetry: The unequal distribution of power among stakeholders in a system, such that some parties have substantially greater ability to influence decisions, set agendas, and resist accountability than others. Power asymmetry is endemic to AI ecosystems and is a primary reason why stakeholder analysis is ethically necessary: without deliberate effort, AI governance processes will systematically overweight the interests of powerful stakeholders while ignoring those of less powerful ones.


Section 4.2: The AI Value Chain — Who Builds, Who Buys, Who Uses, Who Is Used

AI systems do not spring fully formed from the minds of individual engineers. They emerge from a complex, multi-layer value chain in which dozens of organizations and hundreds of decisions shape what ultimately reaches — and affects — end users and communities. Understanding this value chain is essential for understanding where ethical responsibility lies and how it can be diffused or concentrated.

Research Labs and Academia

At the foundation of the AI value chain are the institutions that generate the scientific knowledge on which AI systems are built: university research groups, government-funded research laboratories, and the research divisions of major technology companies.

Who sets the research agenda matters enormously. Academic AI research is shaped by the availability of funding, which in turn is shaped by the priorities of funding bodies — primarily the National Science Foundation and DARPA in the United States, Horizon Europe in the EU, and increasingly, the research grant programs of major technology companies. When companies like Google, Microsoft, and Meta fund university AI research programs, they shape — often unintentionally but sometimes deliberately — which questions get asked. Research on AI safety, fairness, and social impact has historically been underfunded relative to research on AI capabilities. The Distributed AI Research Institute (DAIR), founded by Timnit Gebru after her highly publicized departure from Google, was created explicitly to fill this gap.

The demographics of academic AI research compound the agenda-setting problem. If the people asking the research questions are a narrow demographic slice of humanity, the questions they ask will reflect their experiences and priorities. Research on facial recognition accuracy, for example, focused for years on overall accuracy metrics before Joy Buolamwini and Timnit Gebru's landmark 2018 "Gender Shades" study demonstrated that error rates for darker-skinned women were up to 34 percentage points higher than for lighter-skinned men — a disparity that researchers without that lived experience had little motivation to investigate.

Foundation Model Providers

The term "foundation model" was coined in 2021 by researchers at Stanford to describe large-scale AI models trained on broad datasets that can be adapted for a wide range of downstream tasks. GPT-4, Gemini, Claude, and LLaMA are examples. A small number of companies — OpenAI, Google DeepMind, Anthropic, Meta, Mistral, and a handful of others — develop and control the most capable foundation models.

This concentration of power at the foundation model layer has profound implications. Decisions made by a handful of companies about what data to train on, what content policies to enforce, what capabilities to enable or restrict, and what prices to charge propagate through the entire AI value chain to affect billions of people. OpenAI's decision to restrict GPT-4's ability to generate certain categories of content, for example, affects every application built on the GPT-4 API — which includes millions of downstream applications. The ethical choices embedded in foundation models are not merely choices by and for the companies that make them; they are choices on behalf of everyone downstream.

Foundation model providers carry responsibilities commensurate with this power. They make decisions about training data (including whether to respect copyright, consent, and privacy), about safety testing (red-teaming, alignment research), about access (open-source vs. closed, pricing tiers), and about the terms under which developers can build on their models. These decisions are largely made internally, with limited external accountability.

Platform Companies: Cloud Providers as AI Infrastructure

The major cloud providers — Amazon Web Services, Microsoft Azure, and Google Cloud Platform — are the infrastructure layer of the AI economy. They provide the computing power on which AI models are trained and inference is run; the data storage systems that hold training datasets; the development tools that developers use to build AI applications; and increasingly, pre-built AI services (facial recognition, natural language processing, speech recognition) that organizations can incorporate without building models from scratch.

Cloud providers' AI infrastructure decisions shape who can access AI capabilities and at what cost. High compute costs create barriers to entry that favor large organizations over small ones and wealthy-country researchers over those in the Global South. The geographic location of data centers affects privacy protections available to users in different jurisdictions. The pricing and terms of AI services shape what kinds of applications developers can build.

Independent Software Vendors and AI Application Builders

Between the foundation model providers and end users sits a large and diverse layer of companies that build AI-powered products and services. This includes dedicated AI companies (Palantir, C3.ai, Scale AI), traditional software companies that have incorporated AI capabilities (Salesforce, SAP, Workday), and the enormous ecosystem of startups building AI-native applications in every vertical.

These companies make critical ethical decisions: which foundation models to use, what customizations or fine-tuning to apply, what guardrails to implement, how to present AI outputs to users, and what disclosures to make about the AI's role in their products. They are often in the position of translating capabilities developed by foundation model providers into products deployed by enterprise buyers — and they bear significant ethical responsibility for that translation.

Enterprise Buyers: Organizations That Deploy AI

Enterprise buyers — large corporations, government agencies, hospitals, universities, financial institutions — purchase AI tools and deploy them within their organizations and for their customers. These organizations are often the most consequential decision-makers in the AI ecosystem for affected communities, because they determine the specific contexts in which AI will be applied: who will be screened, scored, or surveilled; under what conditions AI decisions will be reviewed by humans; what happens when the AI makes an error.

Enterprise buyers frequently underestimate their ethical responsibility. Because they are purchasing a product rather than building one, they sometimes behave as if ethical responsibility lies upstream with the vendor. This is legally and ethically wrong. An organization that uses an AI hiring tool to screen job applicants is responsible for the outcomes of that screening regardless of who built the tool. The Equal Employment Opportunity Commission has been unambiguous on this point: employers are liable for discriminatory outcomes in hiring regardless of whether those outcomes were produced by algorithmic or human decision-making.

End Users: The People Who Interact with AI

End users are the people who directly interact with AI-powered products: the customer who chats with an AI customer service agent, the employee who uses an AI writing assistant, the patient who receives a diagnosis informed by an AI diagnostic tool. End users are sometimes well-informed about the AI's role in what they experience, but more often they are not. The extent to which end users can meaningfully consent to AI interactions, and the extent to which they have recourse when AI systems fail them, varies enormously.

Data Subjects: The People Who Are Used

Data subjects are often the most overlooked stakeholder category in the AI value chain. They are the people whose data — browsing history, purchase behavior, social media activity, health records, location data, facial images, voice recordings — has been collected and used to train AI systems. They may never interact with the AI system that learned from their data. They may not know their data was used. They may not have meaningfully consented to its use, having clicked "I agree" on a terms of service document that few read and fewer understand.

The category of data subjects is massive in scale. The training datasets for large language models contain text from billions of web pages, representing the digital traces of hundreds of millions of people. Image recognition models have been trained on billions of photographs, including images scraped from social media without photographers' or subjects' consent. Facial recognition models have been trained on databases assembled from driver's license photos, mugshots, and social media images. In each case, the people whose data trained the system are stakeholders in that system — their privacy, their intellectual property, and their autonomy have been implicated — but they have no representation in governance of the system and often no knowledge that they are stakeholders at all.

Third-Party Affected Parties

Beyond data subjects, there is a final and often invisible stakeholder category: people who are affected by AI decisions without having any relationship — as users, customers, or data subjects — to the AI system. The neighbor whose insurance rates rise because a risk-scoring algorithm assigns their zip code a high-risk rating. The job applicant who is screened out by an AI resume parser that has learned to penalize names associated with particular racial or ethnic groups. The community whose public spaces are monitored by facial recognition systems deployed by a government or corporation they have no ability to hold accountable.

These third-party affected parties illustrate why stakeholder analysis in AI must go beyond the typical business-ethics framework. In most business contexts, the stakeholder map is bounded by contractual and market relationships: customers, suppliers, employees, regulators. AI systems break through those boundaries. Their effects radiate outward to people who have no contract, no account, no application on file — people who are affected by AI not through any choice of their own but simply because they live in a world saturated with AI-influenced decisions.

Visual Note: The AI value chain can be visualized as a directed graph showing the flow of data, money, power, and accountability between tiers: Research/Academia → Foundation Model Providers → Cloud Infrastructure → ISVs/Application Builders → Enterprise Buyers → End Users. Running parallel to this chain, and receiving effects from every tier, are Data Subjects and Affected Communities, who are connected to the chain by arrows of impact but have no arrows of influence flowing back upstream. Power concentrates at the upstream (left) end of the chain; harm accumulates at the downstream (right) end; accountability is diffuse and contested at every node.
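The value chain just described can be sketched as a pair of adjacency lists: one graph for influence (flowing downstream along the chain) and one for impact (radiating from every tier to data subjects and affected communities). This is a minimal illustrative sketch; the tier names and the helper function are chosen for this example, not drawn from any standard library.

```python
# Tiers of the AI value chain, ordered upstream to downstream.
CHAIN = [
    "research",           # research labs and academia
    "foundation_models",  # foundation model providers
    "cloud",              # cloud infrastructure
    "app_builders",       # ISVs / application builders
    "enterprise_buyers",  # deploying organizations
    "end_users",          # people who interact with the system
]

# Influence edges: each tier influences the next one downstream.
influence = {a: [b] for a, b in zip(CHAIN, CHAIN[1:])}
influence["end_users"] = []

# Impact edges: every tier's decisions land on data subjects and
# affected communities -- who have no influence edges back upstream.
impact = {tier: ["data_subjects", "affected_communities"] for tier in CHAIN}

def has_upstream_voice(party: str) -> bool:
    """A party has voice if at least one influence edge originates from it."""
    return bool(influence.get(party))

print(has_upstream_voice("research"))       # True
print(has_upstream_voice("data_subjects"))  # False
```

The asymmetry the Visual Note describes falls out of the data structure itself: data subjects appear as targets in every entry of `impact` but never as keys in `influence`.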


Section 4.3: Internal Organizational Stakeholders

When a company builds or deploys an AI system, a wide range of internal roles shape the decisions that determine the system's ethical character. Understanding the distinct perspective, incentive structure, and ethical responsibility of each role is essential for designing AI governance processes that actually work.

C-Suite and Board of Directors

The chief executive officer, chief technology officer, chief product officer, and the board of directors make the strategic decisions that set the context for all AI development and deployment within the organization. They approve AI investments, set organizational risk tolerance, define corporate values (or fail to), and — crucially — determine whether AI ethics receives genuine organizational resources or remains a compliance function with no teeth.

The liability exposure question concentrates executive attention on AI ethics in ways that abstract value statements do not. When the FTC or a state attorney general opens an investigation into an AI system's discriminatory outputs, it is the CEO who faces the congressional hearing. When a discriminatory AI hiring system results in a class-action lawsuit, it is the board that must explain to shareholders why the organization is paying a nine-figure settlement. This dynamic creates a genuine, if imperfect, incentive for executive engagement with AI ethics — though the tendency to address liability exposure through PR and legal defensibility rather than substantive remediation remains a significant problem.

Product Managers

Product managers occupy a pivotal position in AI ethics because they make the concrete trade-off decisions that determine what features are built, how AI capabilities are incorporated into products, and what constraints are placed on AI behavior. These are not abstract ethical decisions; they are product decisions with ethical consequences.

The product manager who decides to A/B test two versions of an AI content recommendation algorithm — one optimized for engagement, one optimized for a broader definition of user wellbeing — is making an ethical choice. The product manager who decides to launch an AI feature without waiting for bias testing results, because the launch timeline is a priority, is making an ethical choice. The product manager who decides to make an AI system's role in a product decision more transparent to users — even though transparency reduces conversion rates — is making an ethical choice.

Product managers are typically evaluated on metrics that measure short-term business performance: user growth, engagement, revenue. AI ethics considerations that conflict with those metrics are therefore structurally disadvantaged in product decision-making, regardless of how sincere individual product managers' commitments to ethical AI may be. This is a systemic problem, not an individual one.

Data Scientists and Machine Learning Engineers

The practitioners who build and train AI systems make hundreds of technical decisions with ethical implications: which data to use for training; how to handle missing or imbalanced data; which features to include in a model; how to define and measure fairness; how to document the model's capabilities and limitations; how to present uncertainty and confidence intervals to downstream users.

Many of these decisions are made under time pressure and with limited organizational support for ethical deliberation. A data scientist asked to build a hiring algorithm in six weeks does not have time to conduct a comprehensive audit of the training data for historical bias, to consult with I/O psychologists about the validity of proxy metrics, to convene a diverse stakeholder advisory group, and to write a thorough model card documenting limitations and failure modes. They make judgment calls, often good ones, but under those organizational and temporal constraints, ethical considerations routinely lose out to delivery pressure.

The professional norms of the field also shape what data scientists consider their ethical responsibilities. The machine learning community has increasingly developed norms around documentation (model cards, datasheets for datasets), fairness measurement, and algorithmic impact assessment — but adoption of these practices remains uneven, and the gap between best practice and common practice is substantial.

Legal and Compliance

The legal and compliance function's relationship to AI ethics is complex. On one hand, legal and compliance teams provide a concrete and enforceable floor of ethical behavior: they ensure that AI systems comply with privacy law, anti-discrimination law, consumer protection regulations, and sector-specific requirements (HIPAA in healthcare, FCRA in consumer credit, and so on). On the other hand, legal and compliance framing can actively harm AI ethics efforts by substituting legal compliance for ethical analysis.

"Is it legal?" and "Is it ethical?" are not the same question. Many AI applications that are legal are nonetheless ethically problematic — and the history of AI ethics is littered with cases where legal review cleared a system that later caused serious harm. A company that uses AI to dynamically price insurance premiums based on zip code may be complying with applicable law while engaging in what amounts to proxy discrimination against minority communities. Legal clearance of that practice does not make it ethically acceptable.

Human Resources

HR departments sit at the intersection of some of the most ethically sensitive AI applications in the enterprise: AI-assisted hiring and resume screening; AI-driven performance management and evaluation; AI-powered workforce planning and organizational restructuring decisions; monitoring of employee behavior, productivity, and even emotional state. The AI applications that most directly affect workers' livelihoods are HR applications.

HR professionals are typically neither trained in AI nor empowered to push back on AI implementations adopted at the executive level. When a CHRO is told that the organization is adopting an AI performance management platform, the CHRO is typically expected to implement it, not to evaluate its validity, bias profile, or effects on employee wellbeing. This creates significant ethical risk, because AI HR applications are both highly consequential for workers and frequently poorly validated.

Customer Success and Support

Frontline customer-facing teams — customer success managers, support agents, technical support staff — have a unique window into AI system failures. They are the people who hear from customers when an AI system produces an incorrect output, makes a biased decision, or behaves in ways that cause confusion or harm. They have ground-level knowledge of the AI system's real-world performance that is often not available to the data scientists and engineers who built it.

This knowledge is frequently not systematically captured or fed back into AI development processes. Customer complaints about AI behavior are handled as individual service issues rather than as signals of systemic problems. When a loan applicant calls to understand why they were denied credit by an AI scoring system, the support agent who takes that call may have no information about the system's decision-making and no channel through which to escalate the caller's experience to the model development team. This is a significant failure of organizational design with real ethical consequences.

Ethics and Responsible AI Teams

Many large technology companies and major AI-deploying enterprises have created dedicated AI ethics or "responsible AI" teams in recent years. The composition, authority, and effectiveness of these teams vary enormously.

At their best, responsible AI teams provide independent expert review of AI systems before deployment, develop organizational policies and standards for AI ethics, conduct bias audits and fairness assessments, educate colleagues across the organization, and maintain ongoing monitoring of deployed systems. Companies including Microsoft, Google, IBM, and Salesforce have invested substantially in these functions.

At their worst, responsible AI teams are a form of ethics washing: a public-relations function dressed up as governance, designed to project a commitment to ethics without meaningfully constraining business decisions. A responsible AI team that has no authority to delay or halt a product launch, that reports to the marketing division, that produces ethics guidance that is treated as advisory rather than binding, and that is systematically excluded from early-stage product decisions is not providing governance. It is providing cover.

The structural indicators that distinguish genuine from performative responsible AI functions include: reporting structure (to the CEO or board rather than to the marketing or communications function); authority (ability to delay or halt product decisions); resources (staffing and budget commensurate with the organization's AI footprint); independence (ability to surface concerns without career risk); and transparency (publication of audit findings and policy standards externally, not only internally).

Employees at Large

The broader workforce is a stakeholder in the organization's AI development and deployment decisions in at least two ways. First, employees are subject to AI systems used internally — performance monitoring tools, scheduling algorithms, workflow automation — and have a direct stake in whether those systems are fair and accurate. Second, employees who learn that an AI system is causing harm, and who are able and willing to surface that concern, are a crucial check on organizational behavior.

The role of worker voice in AI ethics — including the legal protections available to employees who raise concerns, the organizational cultures that encourage or suppress whistleblowing, and the labor organizing efforts by technology workers around AI ethics questions — is examined in depth in Chapter 22.


Stakeholder Perspective Boxes

From a Senior Data Scientist: "My manager said the model needed to be in production in Q2. I flagged that we hadn't done thorough bias testing for underrepresented groups in the dataset. He told me to document the concern and we'd address it post-launch. That documentation sat in a Confluence page for eight months. I raised it twice in sprint planning. After the third raise it got on the backlog — not the sprint backlog, the product backlog. I genuinely believe the team cares about doing the right thing. But there's no mechanism by which 'we haven't finished the bias audit' blocks a launch the way a security vulnerability does."

From a Product Manager: "I have an ethics review checklist I'm supposed to fill out before I write a launch brief. It's six questions. Two of them are yes/no. I fill it out in ten minutes. Nobody reads it unless something goes wrong, and then it protects me rather than helping anyone. I think real ethical review of a product would take weeks and involve people I've never met. That doesn't fit anywhere in my planning cycle."

From a General Counsel: "My job is to keep the company out of court and out of regulators' crosshairs. I'm very good at that job, and I take it seriously. But I want to be honest about something: when I tell a product team 'this AI system is legally compliant,' I am not telling them it is ethical. I'm telling them that we have a defensible legal position. If the question of whether it's the right thing to do is important to them — and it should be — they need to have a different conversation with different people."

From an AI Ethics Lead: "I have a small team and a very large organization. I get brought in at the end of product development cycles to review things that have already been decided. My team has produced detailed guidance documents that are referenced in ethics review checklists and read by almost no one. I have been in meetings where I said clearly and with evidence that a system would disproportionately harm a specific demographic group, and I watched the launch happen anyway. I am not naive about what this function is. What I hold onto is that in two or three cases this year, something I raised actually changed a decision. Two or three cases is not enough. But it's not zero."


Section 4.4: External Stakeholders

AI systems operate in an environment populated by external parties whose interests, authority, and actions shape how AI is developed, deployed, and governed. These external stakeholders are not passive observers; they are active forces that constrain, challenge, and sometimes redirect corporate AI behavior.

Customers and End Users

Customers who purchase AI-powered products and the end users who interact with them have interests that go beyond functionality. They have interests in understanding when AI is involved in decisions that affect them; in having recourse when AI systems produce errors; in having their personal data used only in ways they have meaningfully consented to; and in not being subjected to algorithmic manipulation of their behavior, beliefs, or emotions.

The gap between what customers are told about AI systems and what those systems actually do is often substantial. Consumer-facing AI applications frequently present AI outputs as simple recommendations or features without disclosing the algorithmic processes behind them. The Federal Trade Commission has increasingly focused on this gap, issuing guidance and enforcement actions addressing AI-related deception in consumer markets.

Competitors

Competitors are not typically thought of as AI ethics stakeholders, but they shape the ethical landscape of AI in important ways. When one company in an industry deploys an AI application that lowers costs or increases conversion rates, competitors face pressure to adopt similar capabilities — including capabilities with known ethical problems. This competitive dynamic can produce a race to the bottom in which no individual company is willing to absorb the competitive cost of ethical AI while its rivals refuse to pay it.

Conversely, AI ethics can function as a competitive differentiator. Companies that can credibly demonstrate that their AI systems are more accurate, fairer, more transparent, or less prone to harmful failures may attract customers who care about those properties — particularly in B2B markets where enterprise buyers face their own regulatory and reputational exposure from AI failures. The challenge is that "credibly demonstrate" requires substantive evaluation rather than marketing claims, and the mechanisms for credible third-party evaluation of AI systems are still immature.

Regulators and Government Agencies

The regulatory landscape for AI in the United States is fragmented across agencies with sector-specific jurisdictions. The Federal Trade Commission has authority over unfair and deceptive practices in consumer markets and has applied this authority to AI-related deception and algorithmic harm. The Equal Employment Opportunity Commission has jurisdiction over discrimination in employment, including discrimination by AI hiring tools. The Consumer Financial Protection Bureau regulates AI use in consumer lending. The Department of Health and Human Services oversees AI in healthcare through both HIPAA and the FDA, which regulates AI as a medical device. State attorneys general in California, New York, Illinois, and elsewhere have been active in AI enforcement.

In the UK, the Information Commissioner's Office (ICO) regulates data protection under UK GDPR, with specific guidance on automated decision-making. In France, the Commission Nationale de l'Informatique et des Libertés (CNIL) plays a similar role and has been active in enforcement against major technology platforms. The EU AI Act, which entered into force in 2024 and is being implemented through 2026-2027, creates the most comprehensive AI regulatory framework to date, establishing risk-based requirements and creating the EU AI Office as a new supervisory authority for the most capable AI models.

Civil Society Organizations

Civil society organizations — advocacy groups, legal aid organizations, community-based nonprofits, and human rights organizations — play an essential role in AI accountability that no other stakeholder can fully replicate. They represent communities that lack organizational resources to advocate for themselves; they conduct research and documentation on AI-related harms; they bring litigation that establishes legal precedent; and they advocate for policy changes that protect affected communities.

The ACLU has been particularly active on AI-related civil liberties issues, including facial recognition, predictive policing, and AI in criminal justice. The Electronic Frontier Foundation advocates on digital rights and AI surveillance. Color of Change has focused on algorithmic discrimination against Black communities. Mijente has worked on immigration enforcement technologies. These organizations exist in a competitive funding environment and often have fewer resources than the companies they are holding accountable, but their investigative and advocacy work has been central to AI accountability in cases where regulatory and market mechanisms have failed.

Media and Journalists

Investigative journalism has been one of the most effective external accountability mechanisms for AI systems. The landmark ProPublica series on the COMPAS risk assessment algorithm, published in 2016, sparked a national conversation about algorithmic fairness in criminal justice. The Markup has built a dedicated investigative team covering algorithmic discrimination and surveillance. MIT Technology Review covers AI ethics with depth and technical sophistication. Wired and The New York Times have produced important investigations on facial recognition, content moderation, and AI in employment.

The watchdog function of journalism depends on several conditions that are under pressure: investigative journalism requires time and resources that most news organizations increasingly lack; technically sophisticated AI coverage requires reporters with specialized skills; and AI companies have become skilled at shaping coverage through strategic disclosure and PR. Nonetheless, the threat of investigative journalism creates accountability that operates independently of regulatory processes and market incentives — it works even when regulators are underfunded and customers are uninformed.

Academic Researchers

Independent academic research provides a form of accountability that differs from journalism, litigation, and regulation. Researchers with access to AI systems can conduct systematic empirical evaluations of system performance, bias, and harm. Academic publication norms create accountability for claims and methods. Academic tenure (where it exists and is meaningful) provides some protection for researchers who document findings that are unwelcome to industry.

The field of algorithmic auditing has grown significantly, producing methodologies for systematic external evaluation of AI systems across dimensions including accuracy, fairness, privacy, and robustness. Organizations like the Algorithmic Justice League (founded by Joy Buolamwini), the AI Now Institute, the Center for Democracy and Technology, and the Partnership on AI conduct research and policy work that bridges academic rigor and practical advocacy.

International Bodies

AI governance is increasingly a concern of international institutions. The OECD AI Principles (2019), endorsed by 46 countries, established a framework for trustworthy AI focused on transparency, accountability, robustness, and human rights. UNESCO's Recommendation on the Ethics of AI (2021), adopted by 193 member states, represents the first global normative instrument on AI ethics. The G7 has established an AI governance work stream. The G20 has endorsed the OECD principles. The ITU hosts the AI for Good platform focused on AI's role in sustainable development.

These bodies do not have enforcement authority, but their normative frameworks shape national policy, influence corporate standards, and provide legitimacy for accountability claims by civil society organizations.


Section 4.5: The Invisible Stakeholders — Data Subjects and Affected Communities

Some stakeholders are invisible not because they do not exist but because the systems and institutions of AI governance have not been designed to see them. This section examines the parties who are most affected by AI systems and least represented in decisions about them.

Data Subjects: The Unacknowledged Participants

Every AI system that learns from data is learning from people. The training data for a language model contains text written by authors, journalists, academics, bloggers, and social media users who never agreed to have their words used to train a commercial AI system. The training data for a facial recognition model contains photographs of people who may never know their likeness has been used. The training data for a predictive policing algorithm contains arrest records of people who in some cases were innocent of the offenses for which they were arrested, and in others were arrested for offenses that reflect patterns of discriminatory policing rather than patterns of criminal behavior.

The legal concept of the data subject, established in European data protection law and increasingly recognized elsewhere, provides a starting point for thinking about the ethical standing of people whose data trains AI systems. But legal rights only partly capture the ethical issue. Even where data subjects have legal rights — the right to know what data is held about them, the right to correct errors, the right to object to certain processing — those rights are only meaningful if data subjects know to exercise them. Most people do not know which AI systems have used their data, do not know what those systems have inferred about them, and do not know what decisions have been influenced by those inferences.

The scale of this problem is staggering. In a world where language models are trained on the entire accessible internet, virtually every person who has a digital footprint is in some sense a data subject for some AI system. This is not a niche privacy problem affecting a small number of people with unusual privacy preferences. It is the baseline condition of life in digitally connected societies.

Affected Communities: Beyond the Individual

Affected communities are a distinct category from data subjects. They are the people who are subject to AI-influenced decisions without necessarily having any individual data relationship with the AI system making or informing those decisions. The predictive policing example from the chapter's opening is illustrative: residents of neighborhoods flagged as high-risk by a predictive policing algorithm are affected by that system's outputs regardless of whether their individual data is in the training set. The algorithm predicts activity in their neighborhood and dispatches police accordingly. They live with the consequences.

Affected communities are disproportionately communities of color, low-income communities, immigrant communities, and other groups that have historically been subject to surveillance, discrimination, and exclusion. This is not coincidental. AI systems that deploy predictive and risk-scoring technologies tend to concentrate their effects in communities that have historically been over-policed, under-served by financial institutions, excluded from employment, and targeted by government surveillance. The AI system learns from historical data that reflects those patterns, and its outputs tend to reproduce and intensify them.

The demographics of AI's builders and of the people it harms diverge sharply: the builders are predominantly wealthy, highly educated, and white or Asian-American, while those bearing the harms are disproportionately low-income and nonwhite. This divergence is one of the most troubling moral features of the current AI landscape. The people who have the least power in the AI ecosystem consistently bear the greatest burden of its costs.

Future Generations

Decisions about AI made today will shape the world that future generations inherit. The AI systems deployed now will establish precedents, create path dependencies, and build institutional structures that will be difficult to reverse. A world in which predictive policing algorithms, AI-driven credit scoring, and automated hiring filters become normalized infrastructure will be a world in which future generations have far less practical autonomy than the generation that made those deployment decisions.

Future generations cannot participate in current AI governance processes. They have no voice and no vote. They cannot bring lawsuits, lobby regulators, or organize community opposition. This is a genuine ethical problem that parallels the challenge of intergenerational justice in environmental policy: the costs of decisions made today are partially externalized to people who do not yet exist and cannot advocate for themselves.

Non-Human Actors

In the specific context of AI systems deployed in environmental and ecological applications, animals and ecosystems have standing as affected parties. AI systems used in wildlife management, conservation planning, fisheries management, and agricultural optimization make decisions that affect non-human lives and ecological systems. The ethical standing of non-human animals and ecosystems is contested philosophically, but it is well-established that AI systems can cause or prevent significant harm to non-human life — and that this consideration should be part of stakeholder analysis for relevant applications.

The Power Asymmetry in Focus

The core problem is this: the parties who are most affected by AI systems — data subjects, affected communities, future generations — have the least power to influence AI governance. The parties who have the most power to influence AI governance — foundation model providers, enterprise buyers, investors — are the parties that often benefit most from AI systems and bear the least of their costs.

This power asymmetry is not inevitable. It is a product of institutional design choices: choices about who has legal standing to bring claims, who has organizational resources to engage in regulatory processes, who has access to the technical expertise needed to evaluate AI systems, who has the financial resources to litigate, and who has the social capital to attract media and political attention. Each of these is a design choice that can be made differently.


Ethical Dilemma Box

The 500 and the 50,000

A regional insurance company is deploying an AI risk-scoring system that will affect its 500 commercial clients (the customers who pay premiums and have a direct contractual relationship with the company) and approximately 50,000 people who live and work in properties those clients insure. The 50,000 non-customers — tenants in insured apartment buildings, employees in insured commercial spaces, visitors to insured venues — will be affected by the system's risk assessments because those assessments will influence property owners' decisions about security, maintenance, and lease terms. They have no direct relationship with the insurance company, no contractual standing, and no awareness that an AI system is shaping decisions about their lives.

Questions for reflection: Who should have a say in the governance of this AI system? What mechanisms could be designed to give the 50,000 a meaningful voice without making the deployment process unworkable? Is there a threshold of impact below which it is reasonable to proceed without affected-party consultation, and what factors should determine that threshold? What would the insurance company's legal and ethical exposure look like if the system's outputs led to discriminatory outcomes for protected groups among the 50,000?


Section 4.6: Stakeholder Analysis in Practice

Stakeholder analysis is not merely a conceptual framework. It is a practical methodology that, if applied rigorously, can surface ethical issues before they become crises and identify opportunities for genuine engagement that produce better outcomes for all parties. The following five-step process provides a structure for conducting stakeholder analysis on AI systems.

Step 1: Identify — Who Has a Stake?

The first step is identification. Who has an interest in this AI system — and who is affected by it, regardless of whether they have expressed or even recognized that interest?

The identification step requires deliberate effort to look beyond the obvious: beyond paying customers, beyond the product team, beyond the immediate deployment context. Useful prompts for comprehensive identification include:

  • Whose data does this system use or learn from?
  • Who will interact with this system directly?
  • Who will be subject to decisions influenced by this system?
  • Who benefits if the system works well?
  • Who is harmed if the system fails or is biased?
  • Who has regulatory authority over this system's domain?
  • Which civil society organizations advocate for the communities this system affects?
  • Which academic communities research the domain this system operates in?
  • Which geographic and demographic communities are most affected by this system's deployment context?
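For teams that track stakeholder identification in code, the prompts above can be stored as a reusable checklist. This is a minimal sketch; the `IDENTIFICATION_PROMPTS` list and `identification_checklist` helper are illustrative names, not part of any established framework.

```python
# Identification prompts from Step 1, stored as a reusable checklist.
# The prompt wording mirrors the bullets above; everything else is illustrative.
IDENTIFICATION_PROMPTS = [
    "Whose data does this system use or learn from?",
    "Who will interact with this system directly?",
    "Who will be subject to decisions influenced by this system?",
    "Who benefits if the system works well?",
    "Who is harmed if the system fails or is biased?",
    "Who has regulatory authority over this system's domain?",
    "Which civil society organizations advocate for the affected communities?",
    "Which academic communities research this system's domain?",
    "Which geographic and demographic communities are most affected?",
]

def identification_checklist(answers: dict[str, list[str]]) -> list[str]:
    """Return the prompts that have not yet been answered with at least one stakeholder."""
    return [p for p in IDENTIFICATION_PROMPTS if not answers.get(p)]

# At the start of an analysis, every prompt is still open.
print(len(identification_checklist({})))  # → 9
```

The point of representing the prompts as data rather than prose is that an analysis can then be checked for completeness: a deployment review can refuse to proceed while any prompt remains unanswered.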

Step 2: Classify — Power and Interest

The classic stakeholder classification tool is the Power-Interest matrix, which places stakeholders in a 2×2 grid based on their level of power (ability to influence decisions about the AI system) and their level of interest (stake they have in the system's outcomes). The resulting quadrants suggest different engagement strategies:

  • High power, high interest: Primary governance participants. Engage actively, involve in decision-making.
  • High power, low interest: Manage carefully. Keep informed. Understand their potential veto power.
  • Low power, high interest: The ethical imperative. The parties most affected with the least voice. Require deliberate mechanisms for representation.
  • Low power, low interest: Monitor for changes in position.

For AI systems, the most important ethical action is in the "low power, high interest" quadrant. These are the affected communities, data subjects, and vulnerable populations who bear significant costs from AI systems while having little institutional power. Standard stakeholder management frameworks often overlook or underweight this quadrant because it offers no immediate strategic benefit to the deploying organization. Ethical stakeholder analysis must actively resist this tendency.
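As a minimal sketch, the quadrant-to-strategy mapping above can be written as a small lookup table. The `Level` enum and the strategy strings are illustrative paraphrases of the four bullets, not drawn from any standard library.

```python
from enum import Enum

class Level(Enum):
    LOW = "low"
    HIGH = "high"

# Engagement strategies for each (power, interest) quadrant,
# paraphrasing the four bullets above. Wording is illustrative.
ENGAGEMENT = {
    (Level.HIGH, Level.HIGH): "Primary governance participant: involve in decision-making",
    (Level.HIGH, Level.LOW):  "Manage carefully: keep informed, understand veto power",
    (Level.LOW,  Level.HIGH): "Ethical imperative: create deliberate representation mechanisms",
    (Level.LOW,  Level.LOW):  "Monitor for changes in position",
}

def classify(power: Level, interest: Level) -> str:
    """Return the engagement strategy for a stakeholder's quadrant."""
    return ENGAGEMENT[(power, interest)]

# An affected community: little institutional power, high stake in outcomes.
print(classify(Level.LOW, Level.HIGH))
```

The mapping makes the chapter's argument concrete: the "low power, high interest" quadrant receives its own explicit strategy rather than being folded into a generic "monitor" bucket, which is exactly the failure mode of standard stakeholder management frameworks.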

Step 3: Assess — Interests, Concerns, and Leverage

For each identified stakeholder (or stakeholder group), the analysis should assess:

  • What do they want? What outcomes would they consider positive? What capabilities, rights, or protections are they seeking?
  • What do they fear? What outcomes would they consider harmful? What risks are they trying to avoid?
  • What leverage do they have? Can they delay or block deployment? Can they generate regulatory attention? Can they mobilize community opposition? Can they bring litigation? Can they generate negative media coverage?
  • What do they know? Are they aware of the AI system and its implications? Do they have the technical capacity to evaluate it?

Step 4: Engage — Designing Participation

The engagement step is where stakeholder analysis becomes stakeholder governance. Identifying and classifying stakeholders is necessary but insufficient; the ethical obligation extends to designing meaningful mechanisms through which stakeholders — especially less powerful ones — can actually influence decisions.

Engagement mechanisms range from minimal (one-way notification) to maximal (participatory co-design). Different stakeholders and different deployment contexts call for different levels of engagement. Regulators require formal compliance processes. Major enterprise customers may require contractual commitments. Affected communities may require community benefit agreements, advisory panels, or participatory design processes. Data subjects require meaningful disclosure and opt-out mechanisms.

The key ethical distinction is between consultation (asking stakeholders for input that decision-makers can choose to use or ignore) and genuine participation (creating mechanisms through which stakeholders have substantive influence over decisions). Consultation without genuine participation is a form of ethics washing — it produces the appearance of engagement while protecting the decision-maker's ability to do what they intended regardless of stakeholder input.

Step 5: Monitor — Ongoing Stakeholder Impact Assessment

Stakeholder impacts are not static. An AI system that is neutral in its effects at deployment may become harmful as it encounters new populations, as the world it operates in changes, or as its model degrades. Governance must include ongoing monitoring of stakeholder impacts, with mechanisms to detect and respond to emerging harms.

Monitoring mechanisms include: ongoing collection of user feedback and complaint data; regular bias audits by internal or external evaluators; community liaison functions that maintain ongoing relationships with affected communities; regulatory monitoring for new requirements; and academic research partnerships that produce independent evaluation.

Case Application: Stakeholder Analysis for a Hospital AI Diagnostic Tool

A regional hospital network is deploying an AI tool that analyzes radiology images to detect early-stage lung cancer. The system will be used by radiologists to prioritize which scans to review first, flag findings for attention, and in some cases provide confidence scores that inform biopsy recommendations.

Identification reveals a long stakeholder list:

  • Patients who receive scans — their diagnoses, treatment, and survival outcomes depend on the tool's accuracy
  • Radiologists who use the tool — their clinical judgment, professional liability, and workload are affected
  • Hospital administrators — cost savings, liability exposure, accreditation
  • The AI vendor — revenue, reputation, regulatory approval
  • Primary care physicians whose referrals feed into the system
  • Insurance companies whose reimbursement decisions may be informed by AI outputs
  • The FDA, which has authority over AI as a medical device
  • Patient advocacy organizations for lung cancer
  • Underrepresented populations (particularly Black patients, rural patients) who may have lower representation in the training data
  • Researchers studying AI diagnostic accuracy across demographic groups
  • Future patients who will benefit or be harmed by the precedent this deployment sets

Classification: The hospital, radiologists, and FDA are high-power, high-interest. Insurance companies are high-power, medium-interest. Patients are high-interest, often low-power; rural and Black patients have particular low-power status within the patient group. Academic researchers are low-power but potentially high-interest.

Assessment: Patients fear misdiagnosis — false negatives that miss cancer, false positives that lead to unnecessary biopsies. Radiologists fear liability exposure for AI-influenced errors and replacement of their professional judgment. Underrepresented populations fear that the tool will perform worse for them, reflecting underrepresentation in training data.

Engagement design: Patients cannot meaningfully participate in AI design but must receive clear disclosure. Radiologists should be involved in the implementation design and given mechanisms to flag AI errors. Patient advocacy organizations should be consulted on disclosure protocols. Academic researchers should be given access to evaluate the system's demographic performance. A community advisory panel including representatives of underrepresented populations should review the deployment plan before launch.


Template Box: Stakeholder Analysis Worksheet

Complete one row per stakeholder or stakeholder group, with the following columns:

  • Stakeholder: [Name/Group]
  • Category: Internal / External / Data Subject / Affected Community
  • Power: H / M / L
  • Interest: H / M / L
  • What They Want
  • What They Fear
  • Engagement Mechanism
  • Monitoring Method

Suggested categories for the Stakeholder column: C-Suite; Product Team; Data Science; Legal; HR; Customers; End Users; Regulators; Civil Society; Affected Communities; Data Subjects; Media; Academic Researchers; International Bodies; Future Generations.
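Teams that maintain stakeholder analyses alongside their AI documentation can capture a worksheet row as a simple record. This is a sketch only; the field names mirror the worksheet columns, and the example values are drawn loosely from the hospital case in Section 4.6.

```python
from dataclasses import dataclass

@dataclass
class StakeholderRow:
    """One row of the stakeholder analysis worksheet."""
    name: str
    category: str    # Internal / External / Data Subject / Affected Community
    power: str       # H / M / L
    interest: str    # H / M / L
    wants: str
    fears: str
    engagement: str
    monitoring: str

# Example row: patients in the hospital AI diagnostic case.
row = StakeholderRow(
    name="Patients",
    category="Affected Community",
    power="L",
    interest="H",
    wants="Accurate diagnosis and clear disclosure of AI involvement",
    fears="False negatives that miss cancer; unnecessary biopsies",
    engagement="Disclosure protocols; advocacy-group consultation",
    monitoring="Complaint data; demographic bias audits",
)
print(row.power, row.interest)  # → L H
```

Keeping rows as structured data rather than free text makes the "low power, high interest" quadrant easy to query, so those stakeholders cannot quietly disappear from a deployment review.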


Section 4.7: Stakeholder Conflict and Trade-offs

Stakeholder interests conflict. They always do. One of the most important things a business professional can understand about AI ethics is that there is no version of AI deployment in which all stakeholders get everything they want. The question is not whether trade-offs will be made but how, by whom, and in whose favor.

Common Conflicts in AI Deployment

Shareholder returns vs. worker welfare. AI-powered automation creates value for shareholders through cost reduction while displacing workers or reducing their wages. Organizations deploying AI automation face a genuine conflict between returns to capital and wellbeing of labor that cannot be resolved by pretending both interests are compatible.

Efficiency vs. fairness. AI systems are typically optimized for aggregate performance — accuracy, cost-efficiency, throughput. Fairness often requires accepting lower aggregate performance in order to reduce disparate impact on disadvantaged groups. A hiring algorithm that is 85% accurate overall but has a 70% accuracy rate for Black candidates is more "efficient" than a fairer algorithm with 82% overall accuracy and equal accuracy across demographic groups. The choice between these is a trade-off between efficiency and fairness that cannot be resolved technically — it requires an ethical judgment about the relative importance of aggregate performance and equitable distribution.
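The arithmetic behind this trade-off is worth making explicit. The sketch below computes overall accuracy as a population-weighted average of per-group accuracies; the assumed candidate-pool shares (20% Black, 80% other) and the 88.75% accuracy for the larger group are hypothetical values chosen so the weighted averages match the figures in the paragraph above.

```python
def overall_accuracy(shares: dict[str, float], accuracies: dict[str, float]) -> float:
    """Population-weighted average of per-group accuracies."""
    return sum(shares[g] * accuracies[g] for g in shares)

# Hypothetical candidate-pool composition.
shares = {"black": 0.20, "other": 0.80}

# "Efficient" model: 70% accuracy for Black candidates, higher for everyone else.
efficient = overall_accuracy(shares, {"black": 0.70, "other": 0.8875})

# Fairer model: equal 82% accuracy for every group.
fair = overall_accuracy(shares, {"black": 0.82, "other": 0.82})

print(round(efficient, 3), round(fair, 3))  # → 0.85 0.82
```

The three-point gap in aggregate accuracy is real, but so is the twelve-point gap in how the first model treats Black candidates; no computation decides which gap matters more. That is the ethical judgment the paragraph describes.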

User convenience vs. privacy. AI systems that learn from user behavior to personalize experiences require collecting and processing behavioral data that has privacy implications. Users typically prefer more personalized, more convenient experiences — and they express lower privacy preferences when asked in abstract terms than when confronted with specific data collection practices. The convenient design is to collect data freely and optimize aggressively; the ethical design imposes friction through meaningful consent mechanisms and data minimization.

Short-term commercial interests vs. long-term community impact. A company that deploys a highly engaging social media algorithm that maximizes platform time and advertising revenue may be acting in the short-term interests of shareholders while generating long-term harms for users, communities, and democratic institutions. These interests are in genuine conflict.

Power Determines Outcomes

When stakeholder interests conflict, power typically determines whose interests prevail. Shareholders have the power to replace executives whose decisions damage financial returns. Regulators have the power to impose fines and enjoin practices. Enterprise customers have the power to take their contracts elsewhere. Communities affected by AI have, in most cases, very little formal power — their leverage comes from organized collective action, media attention, litigation, and political mobilization, all of which are costly, slow, and uncertain.

This asymmetry is why governance structures that artificially equalize power across stakeholders — AI ethics boards with real authority, community advisory panels with veto rights, mandatory impact assessments with public disclosure — are ethically important. They exist precisely to counteract the tendency of unstructured market and organizational processes to resolve conflicts in favor of the most powerful.

Governance Structures That Force Broader Representation

AI ethics boards with genuine authority — boards that include external members, have access to product development processes, and have authority to delay or require modification of AI deployments — are more effective than boards that are purely advisory. The distinction matters: a board that can say no creates accountability; a board that can only say "we recommend considering the following concerns" does not.

Community advisory panels bring representatives of affected communities into ongoing governance processes. They require genuine investment: identifying and compensating community representatives, creating information-sharing protocols that allow panels to actually evaluate AI systems, and establishing clear pathways through which panel input influences decisions.

External auditors provide independent technical evaluation of AI systems for bias, accuracy, robustness, and alignment with stated ethical standards. The AI auditing field is nascent — methodologies are still being developed, there is no established credentialing system, and the conflicts of interest inherent in paid auditing relationships are not yet resolved — but external auditing is a rapidly growing and important accountability mechanism.

The Dual Newspaper Test

A practical heuristic for identifying when stakeholder trade-offs have been handled poorly is the "dual newspaper test," articulated by various business ethics practitioners. It asks two questions:

  1. Would this decision be reported as harmful, discriminatory, or reckless by an investigative journalist at ProPublica or The Markup?
  2. Would this decision be reported as needlessly cautious, paternalistic, or innovation-killing by a business journalist at the Wall Street Journal or Forbes?

The ideal AI deployment decision passes both tests — it is neither causing harm that journalists would expose nor imposing unnecessary restrictions that business journalists would criticize. Decisions that fail the first test are ethically problematic; decisions that fail only the second test are likely more defensible than they feel in the moment. The test is not a replacement for rigorous ethical analysis, but it is a useful quick filter for identifying obvious failures in either direction.
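The two-question structure can be sketched as a small decision function. This is an illustrative sketch only; the function name and verdict labels are invented for this example, not part of any established framework.

```python
def dual_newspaper_test(fails_harm_test: bool, fails_caution_test: bool) -> str:
    """Classify a deployment decision using the dual newspaper heuristic.

    fails_harm_test: would an investigative journalist report the decision
        as harmful, discriminatory, or reckless?
    fails_caution_test: would a business journalist report the decision
        as needlessly cautious, paternalistic, or innovation-killing?
    """
    if fails_harm_test:
        # Failing the harm test is ethically problematic regardless of
        # what a business journalist would say about the same decision.
        return "ethically problematic"
    if fails_caution_test:
        # Failing only the caution test is usually more defensible
        # than it feels in the moment.
        return "likely defensible"
    return "passes both tests"
```

Note the asymmetry the heuristic builds in: the harm question is checked first and dominates, which mirrors the point above that the two failure modes are not ethically equivalent.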


Section 4.8: Global Variation in Stakeholder Relationships

Who counts as a stakeholder in AI — and what rights they have — varies significantly across regulatory and cultural contexts. A business professional working in AI governance needs to understand that the stakeholder frameworks they apply in one jurisdiction may not apply in another, and that the rights and recourse available to affected communities look very different depending on where those communities are located.

European Union: Rights-Based Regulation

The EU's approach to AI governance is built on a foundation of individual rights. The General Data Protection Regulation (GDPR), which has been in force since 2018, gives data subjects legally enforceable rights: the right to know what data is held about them; the right to access and correct that data; the right to have data deleted; the right to object to certain kinds of processing; and specific rights relating to automated decision-making, including the right not to be subject to solely automated decisions that have significant effects.

The EU AI Act, the world's first comprehensive AI regulation, creates additional rights and protections structured around risk classification. High-risk AI systems — defined to include AI used in employment, credit, education, law enforcement, and other consequential domains — are subject to requirements for transparency, human oversight, accuracy testing, and bias assessment. The AI Act also prohibits specific practices including AI-based social scoring by public authorities and real-time remote biometric identification in public spaces (with limited exceptions).

The EU framework treats individuals as rights-holders, not merely consumers. This philosophical commitment has practical implications: EU residents have greater legal standing to challenge AI-influenced decisions than users in most other jurisdictions. The framework also creates market pressure beyond the EU's borders, because companies serving EU markets must comply regardless of where they are headquartered.

United States: Market Orientation and Sectoral Regulation

The United States does not have a comprehensive federal AI regulation equivalent to the EU AI Act. AI governance in the US is primarily sectoral — different agencies regulate AI in their specific domains — and primarily market-oriented, treating users as consumers whose choices discipline market behavior rather than as rights-holders with legally enforceable claims.

The Biden administration's AI Executive Order (2023) initiated a whole-of-government approach to AI governance and directed agencies to develop sector-specific guidance, but it did not create new statutory rights. The FTC's existing authority over unfair and deceptive practices provides some federal accountability for AI applications in consumer markets. State legislation — particularly in California (CCPA/CPRA, AB 2930 on automated employment decisions), Illinois (BIPA on biometric data), and Colorado (SB 169 on AI in insurance) — has moved faster than federal law.

The US framework tends to produce more permissive AI deployment environments and less formal stakeholder representation for affected communities. Legal recourse for people harmed by AI decisions is available but requires identifying a specific legal cause of action, which is often difficult when AI decisions are presented as proprietary and when evidence of discriminatory intent or disparate impact is not readily accessible.

China: State as Primary Stakeholder

China's AI governance framework reflects a fundamentally different theory of the state's relationship to technology and society. Chinese AI regulation is designed primarily to serve state interests — economic development, social stability, national security — rather than to protect individual rights against either state or corporate actors. Regulations such as the Algorithmic Recommendation Regulation (2022), the Deep Synthesis (Deepfakes) Regulation (2022), and the Generative AI Regulation (2023) impose requirements on AI providers that are oriented toward content control, political stability, and national competitiveness.

In the Chinese framework, individuals have limited standing as stakeholders in AI governance. The state — specifically the Chinese Communist Party — is the primary stakeholder. This has implications for multinational companies operating in China and for the governance of AI systems that serve Chinese users: the relevant accountability structure is political rather than legal or market-based, and the constraints on AI behavior are oriented toward serving state interests rather than protecting individual rights.

Global South: Subject Without Voice

Perhaps the most ethically significant global variation is the position of communities in the Global South — Africa, Latin America, South and Southeast Asia — in the global AI ecosystem. These communities are increasingly subject to AI systems designed, trained, and deployed by companies and governments primarily in North America, Europe, and China. They are among the most affected communities in the global stakeholder map, and they are among the least represented in AI governance processes.

This represents a continuation and intensification of historical patterns of technological colonialism: powerful countries and companies deploy technologies in less powerful ones, extracting data and economic value while exporting the costs. The training data for major AI systems disproportionately represents English-language internet content and the cultural norms of wealthy Western countries. AI systems trained on this data may perform poorly for users in other linguistic and cultural contexts, may embed cultural assumptions that are foreign or harmful in those contexts, and may be deployed under regulatory frameworks that offer little protection.


Section 4.9: Building Meaningful Stakeholder Engagement

Understanding who the stakeholders are is necessary but not sufficient. The ethical obligation extends to engaging them in ways that are genuine rather than performative. This section examines what meaningful stakeholder engagement looks like in practice, what makes it difficult, and what examples of genuine engagement exist.

Consultation vs. Genuine Participation

The distinction between consultation and participation is the central practical challenge of stakeholder engagement. Consultation means asking stakeholders for their views and then making decisions that may or may not reflect those views. Participation means creating mechanisms through which stakeholders have genuine influence over decisions — mechanisms with teeth.

Most corporate AI ethics processes operate at the consultation end of the spectrum, and many operate below even that: providing information about AI systems to stakeholders without creating any mechanism for input, feedback, or objection. Genuine participation requires accepting that stakeholder input may change decisions in ways that are costly to the deploying organization. This is uncomfortable for organizations whose governance processes are oriented toward managing stakeholder perceptions rather than incorporating stakeholder views.

Community Benefit Agreements

Community benefit agreements (CBAs) are legally binding contracts between a developer or deployer of a major project and representatives of affected communities, specifying the benefits the community will receive and the conditions that must be met. Originally developed in the context of urban real estate development, CBAs are beginning to be applied in AI contexts.

A community benefit agreement for an AI deployment in a healthcare context might specify: that the AI system will be independently audited for bias across demographic groups before deployment; that the audit results will be publicly disclosed; that affected community representatives will serve on an ongoing advisory panel with defined authority; that if the system is found to perform differently across demographic groups after deployment, specific remediation steps will be taken within a defined timeframe; and that employment and training opportunities related to the AI system will be made available to community members.

Participatory Design

Participatory design — involving the people who will be affected by a system in the design of that system — has roots in Scandinavian labor organizing of the 1970s and has been applied in software development for decades. Applied to AI, participatory design means involving affected communities not just in reviewing systems that have already been designed but in the design process itself.

This is genuinely difficult. It requires that AI developers engage with communities that may lack technical expertise; that they create accessible ways for non-technical stakeholders to participate meaningfully; that they be willing to let community input change technical decisions; and that they maintain these relationships over time rather than treating community engagement as a one-time pre-launch checkbox. These are significant organizational and resource commitments that many AI development processes do not make.

Examples of Genuine Engagement

The city of Amsterdam has experimented with algorithm registries — public lists of all algorithms used by city government, with plain-language descriptions of what each algorithm does, what data it uses, and how its outputs are used. Residents can query the registry and submit concerns. This is a form of passive transparency rather than active participation, but it is substantively different from opacity.
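A registry of this kind can be modeled as a simple queryable record store. The sketch below is hypothetical: the field names and example entry are invented for illustration and do not reflect Amsterdam's actual registry schema.

```python
from dataclasses import dataclass, field

@dataclass
class RegistryEntry:
    """One algorithm in a hypothetical municipal registry (fields invented)."""
    name: str
    plain_language_purpose: str               # what the algorithm does
    data_used: list                           # what data it draws on
    output_use: str                           # how its outputs feed into decisions
    concerns: list = field(default_factory=list)  # resident-submitted concerns

def search(registry, keyword):
    """Return entries whose name or purpose mentions the keyword."""
    kw = keyword.lower()
    return [e for e in registry
            if kw in e.name.lower() or kw in e.plain_language_purpose.lower()]

registry = [
    RegistryEntry(
        name="Parking permit triage",
        plain_language_purpose="Prioritizes parking permit applications for review",
        data_used=["application date", "neighborhood"],
        output_use="Orders the queue seen by human caseworkers",
    ),
]
```

The design choice worth noticing is that the record holds plain-language descriptions rather than technical documentation, and that resident concerns attach directly to the entry — the registry is built for public scrutiny, not internal engineering use.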

Several community-based organizations in the United States have developed "algorithmic impact assessments" as tools for communities to evaluate AI systems that affect them before those systems are deployed. The Algorithmic Justice League's advocacy for mandatory pre-deployment bias audits in public sector AI is an example of civil society organizations working to institutionalize community protection.

The EU AI Act's requirements for fundamental rights impact assessments for high-risk AI systems represent a legislative mandate for structured consideration of affected parties — a significant expansion of formal stakeholder engagement requirements beyond what most jurisdictions have required.

Practical Constraints and Honest Trade-offs

Genuine stakeholder engagement takes time and resources that are genuinely scarce in AI development and deployment contexts. A startup developing an AI product on a six-month runway cannot conduct the same comprehensive stakeholder engagement process that a well-resourced government agency should conduct before deploying an AI system that will affect millions of people. Scale of impact, nature of potential harm, and reversibility of deployment decisions should all inform the depth of stakeholder engagement required.

What is not acceptable is using resource constraints as a universal justification for minimal engagement. An organization with sufficient resources to deploy an AI system at scale has sufficient resources to fund substantive engagement with the communities that system will affect. The question is whether the organization chooses to allocate those resources to engagement or to other priorities — and that choice is an ethical one.

The chapters ahead will explore specific dimensions of this challenge: Chapter 21 examines corporate governance structures for AI accountability; Chapter 22 examines the role of employee whistleblowing; Chapter 33 examines the regulatory compliance landscape in depth.


Discussion Questions

  1. The predictive policing case reveals a long list of stakeholders who were never consulted in the deployment decision. What practical mechanisms would you recommend for ensuring broader stakeholder representation before a city government deploys an AI system with significant civil liberties implications? Who should have the authority to convene that process?

  2. The principal-agent problem appears at multiple levels in AI development. Identify three specific examples from this chapter where an agent's interests diverge from their principal's interests, and explain what governance mechanisms might better align those interests.

  3. Freeman's stakeholder theory asks organizations to create value for all stakeholders, not only shareholders. In practical terms, how should a company weigh the interests of paying customers against the interests of affected community members who are not customers, when those interests conflict?

  4. The "low power, high interest" quadrant of the stakeholder matrix contains the parties with the most at stake and the least voice. What specific design features — organizational, legal, technical — can be built into AI governance processes to ensure these stakeholders are not systematically ignored?

  5. The dual newspaper test is a practical heuristic, not a rigorous ethical framework. What are its limitations? When might a decision that passes the dual newspaper test still be ethically wrong? When might a decision that fails one side of the test be ethically correct?

  6. China and the EU represent two fundamentally different theories of who counts as a stakeholder in AI governance. A multinational company with operations in both jurisdictions faces genuinely different regulatory requirements. Beyond compliance, what ethical framework should guide the company's approach to stakeholder engagement across these different contexts?

  7. Future generations cannot participate in current AI governance processes. What institutional mechanisms could give future generations a voice — even an indirect one — in decisions about AI systems being deployed today? Are there precedents in other policy domains (climate change, constitutional law, long-term fiscal planning) that offer useful models?


Chapter 4 continues with detailed case studies, exercises, quiz, and further reading in the accompanying files.