Capstone Project 3: Stakeholder Impact Assessment

A Comprehensive Framework for Evaluating the Human Consequences of AI Deployment


1. Project Overview

Purpose

This project requires you to conduct a comprehensive stakeholder impact assessment (SIA) for a proposed AI deployment. A stakeholder impact assessment is a structured analytical process that identifies who is affected by a decision — directly and indirectly — assesses how they are affected and to what degree, designs processes for including affected parties in decision-making, and develops recommendations for reducing harm and enhancing benefit.

Stakeholder impact assessment is distinct from technical evaluation of AI systems. It does not primarily ask: "Does this system work?" It asks: "Who is affected by this system working this way, and what does that mean for how it should be designed, governed, and deployed — or whether it should be deployed at all?"

This form of assessment is increasingly embedded in regulatory requirements. The EU AI Act requires fundamental rights impact assessments for certain AI applications. Multiple U.S. cities and states require algorithmic impact assessments before deploying automated decision systems. The Canadian Directive on Automated Decision-Making requires impact assessments before federal departments deploy algorithmic systems. This project gives you hands-on experience conducting an assessment that meets or exceeds these emerging standards.

Connection to Real-World Practice

Stakeholder impact assessments are conducted in practice by a range of actors: government agencies evaluating proposed procurements, civil society organizations scrutinizing proposed deployments, organizations assessing their own AI systems, and consultants engaged to provide independent assessment. The methodology you learn in this project is directly applicable in all of these roles.

Learning Objectives

By completing this project, you will be able to:

  • Systematically identify all stakeholders affected by an AI deployment, including those typically overlooked
  • Apply a structured impact assessment methodology that distinguishes types, probabilities, and severities of harm
  • Identify which stakeholder groups are most vulnerable and least able to protect themselves
  • Design meaningful stakeholder engagement processes that go beyond consultation theater
  • Develop concrete mitigation recommendations grounded in evidence about likely harms
  • Design a monitoring framework that can track actual impacts post-deployment
  • Communicate assessment findings to multiple audiences including technical teams, executives, and affected communities
  • Synthesize concepts from across this course in a single integrated analytical exercise

2. Scenario Selection

Choose one of the following scenarios for your assessment. If none of these scenarios fits your interests or professional context, you may propose your own scenario with instructor approval. Proposals should be submitted by the end of Week 1.

Scenario A: Facial Recognition in Public Spaces

A city government of 750,000 residents is proposing to deploy facial recognition cameras in 12 designated "high-crime areas" for real-time identification of individuals with outstanding arrest warrants. The system would use a commercial facial recognition API integrated with local law enforcement's records management system. Officers would receive alerts on mobile devices when a flagged individual is detected. The department proposes a 12-month pilot before evaluating broader rollout.

Key tensions in this scenario: public safety benefits vs. civil liberties costs; differential accuracy across demographic groups; the chilling effect of surveillance on lawful behavior; scope creep risk; and the definition of "high-crime area" and its embedded assumptions.

Scenario B: AI for Organ Transplant Prioritization

A regional organ procurement organization serving 14 hospitals is evaluating an AI system that would supplement the existing United Network for Organ Sharing (UNOS) allocation protocol by predicting individual patient outcomes and survival probability. The system would not make final allocation decisions but would provide ranked recommendations to transplant committees.

Key tensions in this scenario: the potential to save more lives through optimized matching vs. the risk of perpetuating healthcare disparities; the role of human judgment vs. algorithmic efficiency in life-or-death decisions; privacy of highly sensitive health data; and accountability when an allocation decision contributes to a patient's death.

Scenario C: Continuous Employee Performance Monitoring

A financial services company with 4,500 employees is proposing to deploy an AI-powered workforce analytics platform that continuously monitors employee activity across digital work channels: email metadata, calendar patterns, application usage, document activity, and collaboration tool interactions. The platform generates individual productivity scores, flags behavioral anomalies, and provides managers with weekly dashboards. The company's framing is that the system will improve productivity and identify employees at risk of burnout.

Key tensions in this scenario: employer productivity interests vs. employee privacy and dignity; the labor law implications of continuous monitoring; differential impacts on employees with disabilities or caregiving responsibilities; the accuracy and validity of productivity proxies; and the power asymmetry between employer and employee.

Scenario D: AI Credit Underwriting at Scale

A large regional bank is replacing its existing rule-based credit underwriting model with a machine learning model trained on 10 years of loan performance data across 2 million historical loans. The new model uses 140 variables — including some non-traditional variables such as patterns of bill payment, account balance volatility, and geographic data — and is projected to expand credit access for marginally qualified applicants while improving default prediction by 15%.

Key tensions in this scenario: expanded credit access vs. disparate impact risk embedded in historical data; use of non-traditional variables as potential proxies for protected characteristics; consumer rights to explanation and appeal; regulatory compliance under ECOA and fair lending law; and the appropriate role of human underwriters.

Scenario E: AI News Curation on Social Media

A social media platform with 200 million monthly active users in the United States is proposing to replace its current algorithmic news feed — which optimizes for engagement — with a new system that incorporates "information quality" signals including source reputation scores, fact-check labels, and user credibility metrics. The stated goal is to reduce misinformation spread. The system will determine what news content approximately 60 million politically engaged users see daily.

Key tensions in this scenario: misinformation reduction vs. editorial power and content moderation bias; differential impacts on political speech from different perspectives; effects on local and independent journalism; the epistemological implications of a single system shaping political information for millions; and democratic accountability for a privately operated public information infrastructure.

Scenario F: Student Risk Prediction in Schools

A school district serving 48,000 students is considering deploying a predictive analytics system that would identify students at high risk of dropout, chronic absenteeism, or behavioral incidents. The system would use academic records, attendance data, discipline records, and socioeconomic indicators to generate risk scores for individual students. These scores would be used by counselors and administrators to target intervention resources.

Key tensions in this scenario: resource targeting to help at-risk students vs. the labeling and stigmatization of children; racial and socioeconomic disparities embedded in predictive variables; FERPA and student privacy; the self-fulfilling prophecy risk of behavioral prediction; parental consent and student voice; and the adequacy of intervention resources relative to algorithmic identification of need.


3. The Assessment Framework: A Seven-Step Methodology

Your assessment must follow this seven-step methodology in sequence. Each step builds on the preceding ones. Do not compress or skip steps.

Step 1: Stakeholder Identification

Objective: Produce a comprehensive list of all parties with a stake in this AI deployment — everyone who might be affected by it, take an interest in it, or hold influence over it.

Method: Use multiple identification lenses:

  • Direct users: Who will interact with the system as operators (deploying organization staff, law enforcement officers, clinicians, loan officers)?
  • Direct subjects: Who will have decisions made about them by or influenced by the system?
  • Indirect affected parties: Who is affected by the deployment's outcomes without directly interacting with it (family members of subjects, communities where the system operates, competitors, downstream service providers)?
  • Institutional stakeholders: What organizations have formal authority or responsibility connected to this deployment (regulators, oversight bodies, unions, professional associations)?
  • Civil society: What advocacy organizations, community groups, or civil society actors represent affected populations or have positions on this type of AI use?
  • Future parties: Who will be affected in the future — future subjects, next-generation community members — who are not yet identified?

Common omission to avoid: Assessments frequently identify the most visible stakeholders (direct users, organizational decision-makers) and miss less visible but equally affected groups (workers in lower-level roles, community members who will not directly interact with the system, people whose data is used but who are not the system's direct subjects).

Output: A comprehensive stakeholder inventory listing every identified stakeholder group, a brief description of their relationship to the AI deployment, and the basis for their inclusion.

Step 2: Stakeholder Mapping

Objective: Develop a structured understanding of stakeholder relationships, including their relative power, level of interest, and relationships to each other.

Method: Construct a power-interest grid placing each stakeholder group in one of four quadrants:

  • High power, high interest: These stakeholders can significantly influence the deployment and are highly motivated to do so. Engagement strategy: manage closely, collaborate on decisions.
  • High power, low interest: These stakeholders can influence the deployment but are not currently focused on it. Engagement strategy: keep informed; if this group becomes engaged, its unmet concerns can derail the deployment.
  • Low power, high interest: These stakeholders are deeply affected but have limited ability to influence the outcome. Engagement strategy: this is where the most significant ethical attention belongs. How are their interests protected when they lack power to protect themselves?
  • Low power, low interest: These stakeholders have limited stake and limited influence. Engagement strategy: monitor; their status can change.

In addition to the power-interest grid, map the relationships between stakeholder groups: which groups are in conflict, which are allied, which have formal authority over others.

Output: A visual stakeholder map (power-interest grid) with all stakeholder groups positioned, plus a written narrative analyzing the key stakeholder dynamics and relationships.
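For large stakeholder inventories, the quadrant placement can be sketched programmatically. The following is an illustrative helper, not part of the required methodology; the numeric power/interest ratings and the 0.5 threshold are assumptions for the sketch:

```python
from dataclasses import dataclass

@dataclass
class Stakeholder:
    name: str
    power: float     # 0.0-1.0: ability to influence the deployment (assumed scale)
    interest: float  # 0.0-1.0: current attention and motivation (assumed scale)

def quadrant(s: Stakeholder, threshold: float = 0.5) -> str:
    """Map a stakeholder to the engagement strategy for its power-interest quadrant."""
    high_power = s.power >= threshold
    high_interest = s.interest >= threshold
    if high_power and high_interest:
        return "manage closely"
    if high_power:
        return "keep informed"
    if high_interest:
        return "protect interests"  # low power, high interest: deepest ethical attention
    return "monitor"

residents = Stakeholder("residents of predicted high-risk areas", power=0.2, interest=0.9)
print(quadrant(residents))  # → protect interests
```

A sketch like this only positions groups; the written narrative analyzing their relationships remains the substantive deliverable.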

Step 3: Impact Assessment

Objective: For each stakeholder group, systematically identify and evaluate the likely impacts of the AI deployment — both positive and negative.

Method: For each stakeholder group, complete the following analysis:

Positive impacts: What benefits might this deployment create for this stakeholder group? Be specific and evidence-based, not merely speculative.

Negative impacts: What harms might this deployment cause or contribute to for this stakeholder group? Distinguish between:

  • Direct harms: harms caused directly by the system's decisions or operations
  • Indirect harms: harms caused by the system's systemic effects on institutions, norms, or power dynamics
  • Dignitary harms: harms to people's sense of self-worth, autonomy, or standing, even absent material injury

Probability: How likely is each negative impact to materialize? Base this assessment on evidence: published research on similar systems, domain expertise, historical patterns.

Severity: If the impact materializes, how serious is it? Consider: reversibility (can the harm be undone?), breadth (how many people are affected?), depth (how severely is each person affected?), duration (is it temporary or permanent?).

Distribution: Does this impact fall disproportionately on any group defined by race, gender, age, disability, economic status, or other characteristics?

Output: An impact matrix covering all stakeholder groups, with documented positive and negative impacts, probability and severity assessments, and distributional analysis.
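One way to keep probability and severity judgments comparable across stakeholder groups is to encode them as ordinal scales and compute a screening rank. A minimal sketch follows; the scales and example impacts are assumptions for illustration, and the rank is a triage aid, not a substitute for the written analysis:

```python
# Ordinal scales for the impact matrix (assumed encoding, not prescribed by the brief)
PROBABILITY = {"low": 1, "moderate": 2, "high": 3}
SEVERITY = {"low": 1, "moderate": 2, "high": 3, "very high": 4}

def risk_rank(probability: str, severity: str) -> int:
    """Ordinal product used to flag which impacts need the deepest analysis."""
    return PROBABILITY[probability] * SEVERITY[severity]

# Hypothetical entries from an impact matrix
impacts = [
    ("increased police encounters", "high", "very high"),
    ("dignitary harm of classification", "moderate", "moderate"),
]
# Review the highest-ranked impacts first
for name, p, s in sorted(impacts, key=lambda row: -risk_rank(row[1], row[2])):
    print(f"{name}: rank {risk_rank(p, s)}")
```

Because the scales are ordinal, the product should be read as a rough ordering, not a measurement; two impacts with the same rank can differ greatly in reversibility, breadth, and duration.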

Step 4: Vulnerability Analysis

Objective: Identify which stakeholder groups are least able to protect themselves from the negative impacts identified in Step 3, and why.

Vulnerability is not a fixed characteristic of people; it is a relationship between a person or group's characteristics and circumstances, and the specific risks created by a particular system or decision. A person who is highly capable of protecting their interests in one context may be deeply vulnerable in another.

Factors that create vulnerability in AI system contexts:

  • Limited information access: The stakeholder does not know the system exists, how it works, or that they have been subject to it.
  • Limited ability to contest: There is no meaningful process for disputing an AI-influenced decision, or access to that process requires resources the stakeholder lacks.
  • Power asymmetry: The deploying organization has substantially greater power than the stakeholder (employer vs. employee; government vs. resident; lender vs. applicant).
  • Lack of alternatives: The stakeholder cannot avoid the system by choosing a different service or provider.
  • Intersecting disadvantages: The stakeholder faces multiple forms of disadvantage that compound the risk of harm (e.g., low income + limited English + lack of legal representation).
  • Political marginalization: The stakeholder group lacks political voice to advocate for policy changes.
  • Representational gaps: The stakeholder group is absent from the organizations and institutions that make decisions about the AI deployment.

Output: A vulnerability analysis that identifies the most vulnerable stakeholder groups, explains the sources of their vulnerability in this specific context, and explicitly notes where vulnerability is distributed unequally across demographic groups.

Step 5: Engagement Plan

Objective: Design a specific, operational plan for engaging each stakeholder group in the AI deployment decision.

Meaningful engagement is not the same as providing information or conducting a survey. Meaningful engagement gives stakeholders a genuine opportunity to influence decisions, incorporates their perspectives into the assessment, and creates accountability for how their input was used.

For each stakeholder group, specify:

  • Engagement method: What process will be used? Options range across a spectrum from information provision (least participatory) through consultation, collaboration, and co-design (most participatory). The level of participation should correspond to the stakeholder's level of impact and vulnerability.
  • Timeline: When in the decision process will engagement occur? Engagement that happens after deployment decisions are made is not genuine engagement.
  • Who leads the engagement: Is the deploying organization engaging directly, or is a trusted intermediary facilitating engagement with groups that have reason not to trust the organization?
  • How findings are incorporated: What process ensures that engagement findings actually influence the design, governance, or deployment decision?
  • Accountability: How will the organization demonstrate to stakeholders that their input was taken seriously?

Common failure mode to avoid: Engagement plans often include substantive engagement with high-power stakeholders (regulators, partner organizations, prominent advocacy groups) while providing only information to low-power, high-impact stakeholders. Strong engagement plans invest the most in engaging those most affected.

Output: A complete engagement plan specifying engagement activities for each stakeholder group with sufficient operational detail to be implemented.

Step 6: Mitigation Design

Objective: Design specific changes to the AI system, its governance, or its deployment that would reduce the harms identified in Step 3 while preserving or enhancing benefits.

Types of mitigation to consider:

  • Design mitigations: Changes to how the system is built — different training data, different features, different output format, different decision thresholds
  • Deployment mitigations: Changes to how the system is used — restricted scope, mandatory human review, minimum confidence thresholds for action, prohibited uses
  • Governance mitigations: Changes to oversight structures — independent audit requirements, community oversight boards, mandatory impact reporting
  • Process mitigations: Changes to surrounding processes — grievance mechanisms, appeals processes, redress for verified harms
  • Conditional mitigation: Requirements that must be met before deployment proceeds — minimum accuracy standards, bias testing protocols, community consent processes

For each significant harm identified in Step 3, identify at least one feasible mitigation. For each mitigation, provide:

  • A description of the proposed change
  • Which harms it addresses
  • An estimate of implementation cost (order of magnitude)
  • Residual risk after mitigation: what risk remains even if this mitigation is implemented
  • Any tradeoffs: does this mitigation reduce one harm while creating or increasing others?

Output: A mitigation recommendations document organized by harm type, with cost estimates, residual risk assessments, and tradeoff analysis.

Step 7: Monitoring Plan

Objective: Design a monitoring framework that will track the system's actual impacts on stakeholders post-deployment, enabling early detection of emerging harms and accountability for commitments made.

What a monitoring plan must include:

  • Metrics: What will be measured? Metrics should be linked to specific potential harms identified in Step 3. Disaggregated data (broken down by demographic group) is essential for detecting disparate impact.
  • Data sources: Where will monitoring data come from? System performance logs, user complaints, affected-party surveys, community reporting, administrative records?
  • Frequency: How often will monitoring data be collected and reviewed?
  • Thresholds: At what level of measured impact will the organization take action? Specify trigger levels, not just general commitments to respond to concerns.
  • Governance: Who reviews monitoring data? Who has authority to require changes to the system based on monitoring findings?
  • Transparency: Will monitoring findings be published? If so, in what form and how often?
  • Community access: Will affected communities have access to monitoring data or monitoring reports?
  • Sunset clause: What conditions would lead to discontinuing the system?

Output: A comprehensive monitoring plan in the format described above.
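A threshold check from such a plan can be made concrete. The sketch below flags a disparity in a monitored outcome rate across demographic groups; the 0.8 trigger echoes the familiar "four-fifths" rule of thumb, but the group names and rates are hypothetical, and a real monitoring plan must justify its own metrics and trigger levels:

```python
def disparity_ratio(rates: dict[str, float]) -> float:
    """Ratio of the lowest group favorable-outcome rate to the highest."""
    return min(rates.values()) / max(rates.values())

def needs_escalation(rates: dict[str, float], trigger: float = 0.8) -> bool:
    """True when the measured disparity crosses the plan's pre-committed trigger level."""
    return disparity_ratio(rates) < trigger

# Hypothetical monthly monitoring data, disaggregated by group
monthly_approval_rates = {"group_a": 0.62, "group_b": 0.44}
print(needs_escalation(monthly_approval_rates))  # → True: 0.44/0.62 ≈ 0.71 < 0.8
```

The point of specifying the trigger in advance is governance, not statistics: the plan names who sees this flag and who has authority to act on it before any dispute arises.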


4. Primary Research Component

Your assessment must incorporate primary research: information gathered directly from stakeholders or domain experts, not solely from published sources.

Requirement: Conduct at least three interviews with stakeholders or domain experts and incorporate their perspectives substantively into your assessment.

Selecting interviewees: Interviewees should represent multiple perspectives, not just one viewpoint. Prioritize interviews with members of affected communities, particularly vulnerable stakeholder groups identified in Step 4. Also include at least one interview with a domain expert (researcher, practitioner, regulator) with specific knowledge of this type of AI system.

Interview methodology: Prepare a structured interview guide for each interview. Conduct interviews using IRB-appropriate protocols if your institution requires it. Take notes or, with permission, record interviews. Report interview findings with appropriate confidentiality protections: use first name only or role description unless the interviewee consents to identification.

Integration requirement: Interview findings must be integrated throughout the assessment, not siloed in a separate section. Where interview findings confirm or challenge your documentary research, note the intersection explicitly. Where interview findings reveal perspectives or concerns not present in published sources, give them appropriate weight.

Reflecting genuine perspectives: One of the most common failures in stakeholder impact assessments is conducting community engagement and then using the findings selectively — citing community support while discounting community concerns. Your assessment should reflect the full range of perspectives gathered, including perspectives that create inconvenient complications for the proposed deployment.


5. Deliverables

Required Outputs

Full Impact Assessment Report (25–35 pages). The complete assessment following the seven-step methodology. Each step should be clearly labeled and address all required elements. The report should include an executive summary (3–4 pages) as its opening section. Factual claims should be documented with citations.

Stakeholder Map (visual). The power-interest grid from Step 2, presented as a professional visual suitable for inclusion in the report and for standalone use in presentations. The map should be legible at standard print size and include a legend.

Engagement Plan (operational detail). The engagement plan from Step 5, formatted as a standalone document with sufficient operational detail that it could be implemented by organizational staff who did not write it. Include timelines, responsible parties, estimated costs, and success metrics.

Mitigation Recommendations with Cost Estimates. The mitigation recommendations from Step 6, formatted as a standalone recommendations document. Organize by priority: critical (should be required before deployment), important (should be addressed within the first deployment year), and desirable (should be addressed as resources allow).

Monitoring Dashboard Design. A visual design for the monitoring dashboard from Step 7. This need not be a functioning dashboard — it is a design specification showing what metrics would be displayed, in what format, and with what contextual information. Accompany with a brief written explanation of each metric's significance and its threshold for escalation.

Executive Briefing (12–15 slides). A professional slide deck presenting the assessment findings to organizational decision-makers — the people who will decide whether and how to proceed with deployment. The briefing should not merely summarize the report; it should tell a clear story: here is what we found about the impacts of this deployment, here is who is most at risk, here is what we recommend, and here is what it will cost to do this responsibly.

Community-Facing Summary (2 pages). A summary of the assessment written for members of the affected community, using accessible language (target: 8th-grade reading level). This document should describe the proposed AI deployment in plain terms, summarize the assessment's key findings about likely impacts on community members, explain how community input was gathered and used, and describe the mitigation recommendations and how community members can engage further.
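The reading-level target can be checked mechanically before the summary is finalized. The sketch below implements the standard Flesch-Kincaid grade-level formula with a crude syllable heuristic; a real workflow would use a dedicated readability library, and the sample sentences are invented:

```python
import re

def count_syllables(word: str) -> int:
    """Crude heuristic: count vowel groups, with a silent-e adjustment."""
    groups = re.findall(r"[aeiouy]+", word.lower())
    n = len(groups)
    if word.lower().endswith("e") and n > 1:
        n -= 1
    return max(n, 1)

def fk_grade(text: str) -> float:
    """Flesch-Kincaid grade: 0.39*(words/sentences) + 11.8*(syllables/words) - 15.59."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return 0.39 * (len(words) / len(sentences)) + 11.8 * (syllables / len(words)) - 15.59

draft = "The city wants to add cameras that can match faces. The cameras would alert police."
print(round(fk_grade(draft), 1))  # well under the 8th-grade target
```

A score is only a proxy: a summary can hit the grade-level target while still being evasive about risks, which is why the rubric assesses honesty and register separately.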


6. Evaluation Criteria

Each criterion is weighted as shown and assessed at one of three levels: Excellent, Adequate, or Inadequate.

Comprehensiveness of Stakeholder Identification (20%)
  • Excellent: All significantly affected stakeholder groups are identified, including indirect, future, and easily overlooked groups. Analysis explains the basis for inclusion of each group.
  • Adequate: Most significant stakeholder groups are identified. Some less visible groups are missed.
  • Inadequate: Assessment focuses primarily on the most visible stakeholders and misses significant affected groups.

Rigor of Impact Assessment (25%)
  • Excellent: For each stakeholder group, both positive and negative impacts are identified with specificity. Probability and severity are assessed with reference to evidence. Distributional questions are addressed.
  • Adequate: Impact assessment is complete for most stakeholders. Probability and severity are addressed but some assessments lack evidentiary support.
  • Inadequate: Impact assessment is superficial, focuses primarily on one type of impact (usually harms), or omits probability and severity analysis.

Vulnerability Analysis (15%)
  • Excellent: Vulnerability analysis is specific to this deployment context. Sources of vulnerability are explained and connected to specific risks. Intersecting disadvantages are identified.
  • Adequate: Vulnerability analysis is present and identifies most vulnerable groups but is less specific about sources of vulnerability or intersectionality.
  • Inadequate: Vulnerability analysis conflates general social disadvantage with context-specific vulnerability, or is absent.

Quality of Mitigation Design (20%)
  • Excellent: Mitigations are specific, feasible, and clearly linked to identified harms. Cost estimates are provided. Residual risks and tradeoffs are acknowledged.
  • Adequate: Mitigations are relevant and generally feasible but some are vague. Cost estimates or tradeoff analysis may be incomplete.
  • Inadequate: Mitigations are generic, aspirational, or disconnected from identified harms.

Primary Research Integration (10%)
  • Excellent: Three or more interviews are conducted with appropriately selected interviewees. Interview findings are integrated throughout the assessment and not merely summarized in a standalone section.
  • Adequate: Three interviews are conducted but integration is uneven — findings are present but not fully woven into the analysis.
  • Inadequate: Fewer than three interviews, or interview findings are present but not meaningfully integrated.

Accessible Communication (10%)
  • Excellent: Community-facing summary is genuinely accessible: clear, non-technical, honest about risks, and written for a non-specialist audience. All deliverables are appropriate for their intended audiences.
  • Adequate: Community-facing summary is mostly accessible but retains some technical language or is less direct about risks.
  • Inadequate: Community-facing summary is not meaningfully different in register or complexity from the main report.

7. Worked Example: Predictive Policing Assessment (Partial)

The following partial example illustrates how the methodology should be applied, using a predictive policing deployment as the scenario. This example covers Steps 1 through 3 for a subset of stakeholders to illustrate the expected depth of analysis.

Scenario (for example only)

A city of 600,000 residents proposes to deploy a predictive policing system that generates "risk scores" for street segments and generates daily patrol allocation recommendations. The system is trained on five years of historical arrest and incident data.

Step 1: Stakeholder Identification (Partial)

Direct subjects — residents of predicted high-risk areas: These are the individuals most directly affected by increased police presence in their neighborhoods. They experience both the potential safety benefits of police presence and the costs of intensified surveillance, police encounters, and the dignitary harm of living in a zone classified as high-risk.

Indirect subjects — people with prior system involvement: People with past arrests or convictions whose data is used in the training dataset and who may have elevated risk scores as a result. They did not consent to this use of their data and cannot contest their contribution to the model.

Organizational users — patrol officers: Officers who receive and act on the system's recommendations. Their discretion in using recommendations will substantially determine the system's impact. They also bear the occupational risk of a system that may produce unreliable recommendations.

Civil society — community advocacy organizations: Organizations representing affected communities, particularly communities with high concentrations of people targeted by the system. They have high interest and, depending on local political conditions, may have significant influence.

Oversight bodies — city council, civilian police oversight commission: Bodies with formal authority over police department operations. High power, variable interest depending on current political priorities.

Missing stakeholders frequently overlooked in practice: (a) Small business owners in predicted high-risk areas, who experience both the benefit of reduced crime and the cost of reduced foot traffic if the area develops a reputation. (b) People who will be stopped, questioned, or searched as a result of patrol recommendations but who will not be arrested — the invisible majority of police encounters that do not generate official records. (c) Children who live in high-risk zones and will grow up under intensified police surveillance during formative years.

Step 2: Stakeholder Mapping (Excerpt)

Residents of predicted high-risk areas: Low power, high interest. These residents are most affected by the deployment yet have the least formal influence over it. They are not represented on the police commission and are unlikely to have effective legal recourse against patrol decisions. Moreover, the communities predicted to be highest-risk tend to be disproportionately communities of color, whose political power has been reduced by historical disenfranchisement and lower rates of civic participation.

This low-power/high-impact position — where a group bears high costs but has limited ability to influence decisions — is the central ethical challenge of this assessment and warrants the deepest analytical attention.

Step 3: Impact Assessment (Excerpt — Residents of Predicted High-Risk Areas)

Positive impacts: Potential reduction in crime victimization if increased patrol presence deters crime. Some residents in high-crime areas report valuing increased police presence for safety. These benefits are real but should not be assumed to be uniformly valued by all residents in affected areas.

Negative impacts:

Increased police encounters and their associated harms: Evidence from studies of predictive policing systems, including RAND's evaluations of predictive policing pilots and academic analyses of PredPol, consistently finds that system deployment increases police presence in target areas. Increased presence correlates with increased stops, searches, and arrests. For residents in affected areas, this means elevated probability of police encounters with associated risks: use of force, arrest for low-level offenses, time spent in detention pending arraignment, and the cumulative effect of living under intensified surveillance. Probability: High, based on consistent evidence from comparable deployments. Severity: High to very high for individuals who experience serious use of force; moderate for individuals who experience stops without further action.

Feedback loop amplification of bias: Systems trained on historical arrest data reflect historical policing patterns, which research consistently shows are not geographically or demographically neutral — they reflect where police historically chose to patrol. Deploying a system that recommends patrols in areas where historical data shows high arrest rates, then using new arrests to update the model, creates a feedback loop that amplifies existing geographic and racial patterns. Probability: High, as this dynamic is documented in multiple published studies of similar systems. Severity: High and systemic — this is not a harm to individual residents but a structural bias that affects all residents of predicted high-risk areas over time.
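The feedback-loop dynamic described above can be illustrated with a toy simulation. All numbers are invented for illustration; this is not evidence about any real system, only a demonstration of the mechanism:

```python
# Toy model: patrols are concentrated in whichever area has the most recorded
# arrests, and new arrests scale with patrol presence. Although both areas have
# identical true crime rates by construction, an initial disparity in recorded
# arrests compounds year over year.
true_crime_rate = {"area_a": 0.10, "area_b": 0.10}  # identical by construction
arrests = {"area_a": 12, "area_b": 8}               # historical recording disparity

for year in range(5):
    hot = max(arrests, key=arrests.get)                       # "high-risk" designation
    patrols = {a: (80 if a == hot else 20) for a in arrests}  # patrols follow the label
    for a in arrests:
        arrests[a] += round(patrols[a] * true_crime_rate[a])  # arrests follow patrols

print(arrests)  # → {'area_a': 52, 'area_b': 18}: the initial 12:8 gap widens to ~3:1
```

The mechanism, not the specific numbers, is the point: when a model's outputs determine where its future training data is collected, recorded disparities grow even with no underlying difference in behavior.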

Dignitary harm of surveillance and classification: Living in a zone formally classified as high-risk by the government carries dignitary costs independent of direct police encounters. It affects how residents, businesses, and outside parties perceive the neighborhood, and how residents may come to perceive themselves and their communities. Probability: Moderate to high. Severity: Moderate — difficult to quantify but documented in community research on the psychological effects of policing on residents of heavily policed areas.

Distributional concern: Research on predictive policing systems consistently finds that predicted high-risk areas are disproportionately neighborhoods with high proportions of Black and Hispanic residents. This means all negative impacts identified above fall disproportionately on communities of color. This distributional pattern should be treated as a central finding, not a footnote.
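The probability/severity ratings used throughout Step 3 can be organized into a simple impact register to make prioritization explicit. The sketch below is purely illustrative, not part of the assessment methodology itself: the four-point `Level` scale, the multiplicative `priority` score, and the doubling applied when an impact falls disproportionately on a protected group are all assumptions chosen for demonstration.

```python
from dataclasses import dataclass
from enum import IntEnum

class Level(IntEnum):
    """Illustrative ordinal scale for probability and severity ratings."""
    LOW = 1
    MODERATE = 2
    HIGH = 3
    VERY_HIGH = 4

@dataclass
class Impact:
    stakeholder: str
    description: str
    probability: Level
    severity: Level
    disparate: bool  # does the impact fall disproportionately on a protected group?

    def priority(self) -> int:
        # Simple ordinal score: probability x severity, escalated when the
        # impact is distributed unequally (an assumed weighting, for illustration).
        score = int(self.probability) * int(self.severity)
        return score * 2 if self.disparate else score

# Entries drawn from the worked example above.
register = [
    Impact("Residents of predicted high-risk areas",
           "Increased stops, searches, and arrests",
           Level.HIGH, Level.HIGH, disparate=True),
    Impact("Residents of predicted high-risk areas",
           "Dignitary harm of high-risk classification",
           Level.MODERATE, Level.MODERATE, disparate=True),
    Impact("City residents generally",
           "Potential reduction in crime victimization (benefit)",
           Level.MODERATE, Level.MODERATE, disparate=False),
]

# Rank impacts for analytical attention, highest priority first.
for impact in sorted(register, key=Impact.priority, reverse=True):
    print(impact.priority(), impact.description)
```

A register like this does not replace qualitative analysis; it simply forces the assessor to state each rating explicitly and makes the distributional escalation visible rather than buried in prose.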


8. Connection to Course Material

This capstone project is explicitly designed to synthesize concepts from across the entire curriculum. The following connections illustrate how each part of the book informs a rigorous stakeholder impact assessment.

Part 1 — Foundations of AI Ethics: The ethical frameworks introduced in Part 1 — utilitarian, deontological, and justice-based approaches — provide the analytical vocabulary for Step 3's impact assessment. A utilitarian analysis asks whether aggregate benefits outweigh aggregate harms; a Kantian analysis asks whether affected parties are being treated as ends in themselves; a Rawlsian analysis asks what those in the worst-affected positions would endorse.

Part 2 — Bias and Fairness: The fairness concepts and metrics introduced in Part 2 are directly applicable to the distributional analysis in Step 3 and the vulnerability analysis in Step 4. The worked example illustrates how disparate impact analysis applies to predictive policing; the same analysis applies to each of the assessment scenarios.

Part 3 — Transparency and Explainability: The engagement plan in Step 5 must address whether affected parties can understand how the system works and how it has affected them. The monitoring plan in Step 7 must consider what information will be made transparent and to whom. The transparency concepts from Part 3 inform both.

Part 4 — Accountability and Governance: The governance structure for the deployment — who is accountable for it, what authority they have, how they will be held responsible — is a core concern of Step 7's monitoring plan and is relevant to Step 3's assessment of whether adequate oversight exists to protect stakeholders.

Part 5 — Privacy: For every scenario, the AI system uses personal data about people who may not be aware of or have consented to its use. The privacy analysis from Part 5 informs Step 3's assessment of privacy-related harms and Step 6's identification of privacy-protective mitigations.

Part 6 — Societal and Institutional Impact: Step 3 requires assessment of indirect and systemic impacts that go beyond direct harms to individual stakeholders. The frameworks for analyzing institutional and societal effects from Part 6 are directly applicable, particularly for scenarios (A), (E), and (F) where the systemic effects may be more significant than the direct individual-level impacts.

Part 7 — Law and Regulation: The engagement plan in Step 5 and the mitigation recommendations in Step 6 should reflect applicable legal requirements. For each scenario, relevant law constrains the options available to the deploying organization and creates minimum standards. The regulatory frameworks from Part 7 provide the baseline below which mitigation recommendations should never fall.


This capstone project is the integrative culmination of the course curriculum. Students are encouraged to review the connections to course material above and to draw explicitly on chapter concepts, frameworks, and vocabulary throughout the assessment. Submissions that ground the assessment in course concepts will be evaluated more favorably than those that treat it as a purely practical exercise divorced from analytical frameworks.

See also: Appendix E (Stakeholder Engagement Methods Reference), Appendix F (Impact Assessment Template), and Appendix G (Community-Facing Communication Guide).