Chapter 4 Quiz: Stakeholders in the AI Ecosystem
Total questions: 20 | Suggested time: 45–55 minutes
Part I: Multiple Choice (8 questions, 2 points each)
1. R. Edward Freeman's stakeholder theory, introduced in 1984, defines a stakeholder as:
a) Any party that holds equity or debt in the organization
b) Any individual, group, or organization that can affect or is affected by the achievement of the organization's objectives
c) Any party that has a contractual relationship with the organization
d) Any individual who interacts directly with the organization's products or services
2. In the context of the AI value chain, "data subjects" are best defined as:
a) The customers who pay for and use AI-powered products
b) The data scientists and engineers who build AI training datasets
c) The identifiable individuals to whom the personal data processed by an AI system relates
d) The companies that supply data infrastructure and storage services
3. The predictive policing algorithm PredPol claimed demographic neutrality because it did not use demographic variables as explicit inputs. Academic researchers criticized this claim primarily because:
a) The algorithm was too complex for law enforcement officers to understand
b) The training data — historical arrest records — reflected prior racially discriminatory policing patterns that the algorithm then reproduced
c) Demographic neutrality is an impossible standard for any algorithm to meet
d) The algorithm was not validated on racially diverse populations before deployment
4. Which of the following best describes the concept of "ethics washing" in the context of corporate AI programs?
a) Conducting rigorous independent audits of AI systems before deployment
b) Performing the signifiers of ethical commitment without implementing governance structures that actually constrain AI behavior
c) Applying ethical frameworks from academic philosophy to practical AI development decisions
d) Removing demographic data from AI training sets to eliminate potential bias
5. In the Power-Interest stakeholder matrix, the quadrant requiring the greatest ethical attention — because it contains the parties most affected with the least formal voice — is:
a) High power, high interest
b) High power, low interest
c) Low power, high interest
d) Low power, low interest
6. The Facebook emotional contagion experiment (Kramer et al., 2014) exposed a critical gap in human subjects research ethics oversight because:
a) Facebook failed to submit a required FDA application for behavioral research
b) The research involved a sample size too large for standard IRB review processes
c) Facebook was not a federally funded research institution, so standard IRB oversight requirements did not apply
d) Cornell University researchers were prohibited from conducting industry-partnered research under academic ethics rules
7. Under the EU AI Act, which type of AI application is explicitly prohibited (not merely regulated)?
a) AI systems used to make employment decisions
b) AI systems used in consumer credit scoring
c) Real-time remote biometric identification in public spaces by public authorities (with limited exceptions)
d) AI systems used in healthcare triage
8. The "dual newspaper test," as described in the chapter, asks decision-makers to check whether a decision:
a) Would be supported by both domestic and international media
b) Would satisfy both legal and ethical standards simultaneously
c) Would survive scrutiny from both an investigative journalist covering AI harm and a business journalist covering over-cautious innovation
d) Would be praised by both civil society organizations and industry trade associations
Part II: True or False (5 questions, 2 points each)
For each statement, write TRUE or FALSE and provide one to two sentences of explanation.
9. An enterprise buyer (a company that purchases and deploys an AI hiring tool built by a vendor) bears no legal responsibility for discriminatory outcomes produced by that tool, because responsibility rests with the vendor that built and sold it.
10. Freeman's stakeholder theory argues that the ethical purpose of a firm requires creating value for all stakeholders, not solely maximizing returns to shareholders.
11. Under the EU's General Data Protection Regulation (GDPR), individuals have an absolute right to be exempt from any automated decision-making, regardless of circumstances.
12. The Belmont Report's principle of informed consent requires that research participants understand the nature and purpose of research, its risks and benefits, and their right to withdraw, before agreeing to participate.
13. A responsible AI team that reports to the Chief Marketing Officer, produces advisory guidance with no binding authority, and is excluded from early-stage product development processes is a strong indicator of a genuine governance function.
Part III: Short Answer (4 questions, 5 points each)
Each answer should be 100–150 words.
14. Explain the concept of the "principal-agent problem" and give two specific examples of this problem as it appears in AI ecosystems. Your examples should be distinct from each other and from any examples given in the question itself.
15. Section 4.5 of the chapter argues that "power flows upstream; harm flows downstream" in the AI value chain. Explain what this means using a concrete example drawn from either the predictive policing case or the Facebook emotional contagion case. Then explain why this directional asymmetry is ethically significant rather than merely descriptively interesting.
16. What is the difference between consultation and genuine participation in stakeholder engagement? Give one example of a stakeholder engagement process that qualifies as genuine participation and explain what makes it so. Give one example of a process that qualifies only as consultation and explain the difference.
17. The chapter discusses how the demographics of AI's builders differ substantially from the demographics of those most affected by AI's failures. Explain the ethical significance of this gap, using the concept of "representation" in AI development and the concrete example of the Joy Buolamwini facial recognition accuracy research.
Part IV: Applied Scenario (3 questions, 8 points each)
Each response should be 200–300 words.
18. The Algorithmic Benefits Screener
A state government is deploying an AI system to screen applications for Medicaid, food assistance (SNAP), and housing vouchers. The system will automatically approve, deny, or flag applications for human review based on applicant data and program eligibility rules. Approximately 200,000 applications per year will be processed.
Using the stakeholder analysis framework from Section 4.6:
- Identify six key stakeholders (including at least two from the "low power, high interest" quadrant)
- For each "low power, high interest" stakeholder, describe one specific engagement mechanism
- Identify the single most significant ethical risk in this deployment and explain how you would recommend addressing it
19. The Global South Deployment Decision
A European AI company has developed a credit-scoring algorithm trained primarily on data from Western European consumers. The company is now being approached by microfinance institutions in three West African countries who want to use the algorithm to make small-business lending decisions for entrepreneurs who lack traditional credit histories.
Applying the chapter's frameworks on global variation in stakeholder relationships (Section 4.8) and affected communities (Section 4.5):
- Identify three specific risks that arise from deploying a model trained on European data in West African contexts
- Identify which stakeholders are most likely to bear the costs if those risks materialize
- Recommend two minimum requirements that should be met before the deployment proceeds
20. The HR AI Deployment
A 15,000-employee retail company is deploying three AI systems simultaneously:
1. An AI resume screening tool that ranks applicants for open positions
2. An AI scheduling system that creates employee work schedules based on predicted foot traffic and sales data
3. An AI performance monitoring system that tracks productivity metrics for warehouse workers and flags underperforming employees for manager review
For each of the three systems:
- Identify the primary affected stakeholder group and the nature of the ethical risk to that group
- Identify the relevant regulatory body with jurisdiction over that risk in the United States
- Recommend one governance safeguard the company should implement before deploying that system
Answer Key
Part I: Multiple Choice
1. b — Freeman's definition explicitly extends beyond contractual and financial relationships to include any party that can affect or be affected by the organization's objectives.
2. c — Data subjects are identifiable individuals to whom personal data relates. They may or may not be users or customers of the AI system.
3. b — The core "dirty data" critique: even without explicit demographic inputs, an algorithm trained on arrest records that reflect prior discriminatory policing will reproduce those patterns in its outputs.
4. b — Ethics washing describes the gap between the performance of ethical commitment and the substance of ethical governance.
5. c — Low power, high interest stakeholders bear the greatest costs from AI systems while having the least formal ability to influence those systems. They require deliberate mechanisms for representation.
6. c — Facebook was not subject to federal research regulations because it was not a federally funded institution. The Cornell co-authors joined the project only after data collection was complete, so their academic affiliations did not retroactively trigger IRB review of the experiment.
7. c — The EU AI Act explicitly prohibits real-time remote biometric identification in public spaces by law enforcement (with narrow exceptions for specific serious crimes). Options a, b, and d describe high-risk applications that are regulated but not prohibited.
8. c — The dual newspaper test checks for failure in both directions: harmful enough to generate investigative journalism, or so cautious as to generate business-press criticism.
Part II: True or False
9. FALSE. Enterprise buyers bear significant legal responsibility for the outcomes of AI tools they deploy. The EEOC has been explicit that employers are liable for discriminatory hiring outcomes regardless of whether those outcomes are produced by their own employees, external recruiters, or algorithmic tools.
10. TRUE. Freeman's stakeholder theory directly challenges the Friedman doctrine that the firm's sole obligation is to shareholders, arguing instead that firms have obligations to all parties who affect or are affected by their activities.
11. FALSE. GDPR Article 22 grants individuals a right not to be subject to solely automated decisions that produce legal or similarly significant effects, but this right is subject to exceptions — including when the decision is necessary for a contract, authorized by law, or based on explicit consent. It is not absolute.
12. TRUE. The Belmont Report (1979) establishes that informed consent requires disclosure of research purpose and procedures, risks and anticipated benefits, and alternatives, along with a statement of the right to withdraw — and that participation must be voluntary and based on understanding.
13. FALSE. This description is a portrait of an ethics-washing function, not a genuine governance function. Reporting to the CMO, advisory-only authority, and exclusion from early product decisions are all indicators of a function designed to produce ethical credibility without genuine governance power.
Part III: Short Answer (Model Responses — Accept answers that capture the core concepts; grader discretion applies)
14. The principal-agent problem arises when an agent authorized to act on behalf of a principal has interests that diverge from the principal's. In AI contexts: (1) AI recommendation systems act as agents for users (their principal) but are optimized for platform engagement goals that may diverge from user wellbeing — recommending addictive or misleading content that keeps users on-platform even when this is harmful to them; (2) AI hiring systems act as agents for employers (their principal) but may optimize for historical hiring patterns rather than actual job performance, producing discriminatory outputs that the employer would not sanction if they understood them.
15. In the PredPol case: power to design and deploy the system resided upstream with LAPD leadership and the technology vendor; harm flowed downstream to residents of neighborhoods flagged as high-risk, who experienced intensified police surveillance without having had any voice in the deployment decision. The ethical significance: this is not merely a description but a structural condition that makes harm likely and accountability difficult. Downstream stakeholders cannot prevent harm they were not consulted about and cannot easily seek redress when institutional accountability mechanisms are designed by and for upstream stakeholders.
16. Consultation: gathering stakeholder input that decision-makers may use or disregard. Example: an AI company holds focus groups with community members before product launch; those groups identify concerns that the company notes but does not incorporate into the product design. Genuine participation: mechanisms through which stakeholder input has substantive influence over decisions. Example: a city government establishes a community advisory panel with a defined governance role — including authority to require a pause on AI deployment pending further review — for a predictive policing deployment decision, and that panel's concerns result in modifications to the deployment plan.
17. The demographics of AI builders — disproportionately male, white or Asian-American, from elite universities, concentrated in wealthy cities — create systematic blind spots. People who build systems do not encounter, as users or affected parties, the experiences of those who look and live differently from them. Buolamwini's 2018 facial recognition study demonstrated this concretely: she found error rates for dark-skinned women up to 34 percentage points higher than for light-skinned men. Researchers who were themselves light-skinned and male had not investigated this disparity because they had not encountered it in their own experience. The representation gap at the design stage became a harm gap at the deployment stage.
Part IV: Applied Scenario (Model Responses — grader judgment applies; full credit requires accurate stakeholder identification, plausible engagement mechanisms, and ethical risk analysis)
18. Key stakeholders: (High power, high interest) state agency administrators, federal CMS (which oversees Medicaid funding), the state legislature. (Low power, high interest) applicants denied benefits by automated screening (particularly those with disabilities, elderly applicants, non-English speakers, and individuals with complex household situations not captured by standard data); legal aid organizations representing benefit claimants; community organizations serving low-income populations.
Engagement mechanisms for low-power stakeholders: (1) Mandatory plain-language explanation of automated denial decisions with specific grounds and a clear appeal pathway; (2) An advisory panel of legal aid lawyers and community advocates with defined input into the algorithm's design and ongoing audit rights.
Primary ethical risk: automated denial of benefits to eligible applicants who cannot effectively challenge the decision, particularly those with disabilities or limited digital literacy. Mitigation: human review required for all denials, with a 48-hour review timeline.
19. Three risks: (1) Model underperformance for West African applicants because behavioral and economic patterns in the training data do not match West African contexts — producing systematically higher false-denial rates; (2) Proxy discrimination from variables (social network characteristics, mobile usage) that may correlate with protected characteristics differently in specific West African contexts than in European ones; (3) Absence of regulatory protection for affected borrowers in countries without GDPR-equivalent protections.
Cost-bearers if risks materialize: small business owners denied credit they would have repaid; marginalized entrepreneurs (women, ethnic minorities) who face intersecting discrimination.
Minimum requirements: (1) A validation study on data from each target country before deployment, with public disclosure of results; (2) Human review for all denials with a clear appeal mechanism.
20. System 1 (Resume screener): Primary affected group — job applicants, particularly those from demographic groups underrepresented in historical hiring data. Ethical risk: discriminatory screening outcomes that perpetuate historical exclusions. Relevant regulator: EEOC (employment discrimination). Safeguard: disparate impact audit before deployment, with results reviewed by HR leadership and an external evaluator.
System 2 (Scheduling): Primary affected group — hourly employees, particularly parents and people with caregiving responsibilities who need schedule predictability. Ethical risk: unpredictable scheduling that makes it impossible to plan family responsibilities or hold second jobs. Relevant regulator: Department of Labor (fair labor standards; also state predictive scheduling laws in some jurisdictions). Safeguard: minimum advance-notice requirements and an employee input mechanism built into the scheduling system.
System 3 (Performance monitoring): Primary affected group — warehouse workers, particularly those with disabilities or whose work style is effective but not easily captured by the metrics being tracked. Ethical risk: discriminatory termination outcomes based on metrics that are not valid proxies for job performance. Relevant regulator: EEOC (disability discrimination); NLRB (surveillance of protected concerted activity). Safeguard: human review required before any termination recommendation generated by the system; metric validity study before deployment.
Quiz developed for Chapter 4: Stakeholders in the AI Ecosystem. Recommended use: individual assessment after chapter reading. All four parts may be assigned together or separately depending on course structure.