Appendix F: Templates and Worksheets

Practical Tools for Organizational AI Ethics Practice


Introduction

The templates in this appendix are designed for immediate organizational use. They can be adopted as presented, adapted to specific organizational contexts, or used as starting points for developing bespoke tools. Each template includes guidance notes where the purpose or content may not be self-evident.

These templates are illustrative tools, not legal advice. Organizations should have their AI governance materials reviewed by qualified legal counsel, particularly for compliance with applicable law in their jurisdiction.


Template 1: AI Project Ethics Checklist

40-Item Pre-Deployment Ethics Checklist

Instructions: Complete this checklist before deploying any AI system that makes or materially influences decisions affecting individuals. A "No" or "Uncertain" answer requires documentation of the concern and a mitigation plan before deployment. Document the date completed, who completed it, and who reviewed it.


STAGE 1: PROBLEM DEFINITION (Complete at project initiation)

  1. [ ] Has the problem this AI system is intended to solve been clearly defined in writing?
  2. [ ] Has the team considered whether AI is the appropriate solution, or whether non-AI approaches would better serve the objective?
  3. [ ] Have the populations who will be affected by this AI system been identified?
  4. [ ] Have representatives of affected communities been consulted in the problem definition phase?
  5. [ ] Has the team documented what human process (if any) this AI system will replace or assist, and whether the AI approach has advantages over the human process?
  6. [ ] Has a preliminary assessment of potential harms been conducted?
  7. [ ] Has the team confirmed that the intended use of this AI system is consistent with applicable law?

STAGE 2: DATA AND TRAINING (Complete before model training)

  1. [ ] Have all training data sources been documented, including their origin, collection method, and any known limitations?
  2. [ ] Has the training data been examined for representation of demographic groups that will be affected by the system's outputs?
  3. [ ] Has the team identified protected characteristics (race, gender, age, disability, religion, national origin) that the system must not use as factors?
  4. [ ] Has the team identified proxy variables — variables not themselves protected characteristics but correlated with them (e.g., zip code, name, graduation year) — and assessed whether their use is justified?
  5. [ ] Has the training data been reviewed for historical bias that may be encoded in labels?
  6. [ ] Have data quality issues (missing data, measurement error, sampling gaps) been documented and their potential effects assessed?
  7. [ ] Has the data collection and use been reviewed for compliance with applicable privacy law (GDPR, CCPA, HIPAA, etc.)?
  8. [ ] Have data subjects been adequately informed about and consented to the use of their data for this purpose?
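The proxy-variable review in item 4 above can be supported with a simple statistical screen. The sketch below uses only the standard library and entirely hypothetical data and feature names; a high correlation signals a candidate proxy that warrants documented justification, not necessarily automatic exclusion.

```python
# Screen a candidate proxy variable by measuring its association with a
# protected attribute. Data and feature names are hypothetical.
import statistics

def correlation(xs, ys):
    """Pearson correlation between two equal-length numeric sequences."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical: does "graduation_year" track a binary protected attribute?
graduation_year = [1995, 1998, 2010, 2012, 2015, 2018]
protected = [0, 0, 1, 1, 1, 1]  # 0/1 encoding, for illustration only
r = correlation(graduation_year, protected)
print(f"r = {r:.2f}")  # ~0.95 here: strong association, flag for review
```

In practice this screen would run over every candidate feature against every protected attribute, with categorical features handled by an association measure such as Cramér's V rather than Pearson correlation.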

STAGE 3: MODEL DEVELOPMENT (Complete before model finalization)

  1. [ ] Has the team selected and documented the fairness metric(s) that will be used to evaluate this system, with justification for the choice?
  2. [ ] Has model performance been measured separately for all relevant demographic subgroups, including subgroups defined by intersections of characteristics?
  3. [ ] Does model performance meet the established fairness thresholds for all demographic groups?
  4. [ ] Has the team documented how the model makes decisions, at a level of detail sufficient for explanation to affected individuals?
  5. [ ] If the model is a "black box" that cannot be fully explained, has the team assessed whether a more interpretable alternative would perform adequately?
  6. [ ] Has the model been tested for vulnerability to adversarial manipulation?
  7. [ ] Has the model been validated on a test dataset that is genuinely separate from the training dataset?

STAGE 4: DEPLOYMENT DESIGN (Complete before deployment)

  1. [ ] Has a human review process been established for high-stakes decisions made or recommended by the AI system?
  2. [ ] Is there a mechanism for individuals affected by AI decisions to request human review?
  3. [ ] Is there a mechanism for individuals to receive an explanation of AI-assisted decisions that is appropriate to the context?
  4. [ ] Has the team established monitoring processes that will detect performance degradation over time, including in fairness metrics?
  5. [ ] Has an incident response plan been established specifying who is responsible for responding to reports of AI-caused harm?
  6. [ ] Have all personnel who will interact with AI system outputs been trained on appropriate use and limitations?
  7. [ ] Has the team documented the intended use conditions and explicitly identified uses that are out of scope?
  8. [ ] Have liability, insurance, and legal responsibility been assessed and allocated?
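Item 4 above (monitoring for degradation, including in fairness metrics) can be sketched as a periodic comparison against a deployment baseline. The metric choice and thresholds below are illustrative assumptions, not requirements of this checklist:

```python
# Sketch of post-deployment fairness monitoring: alert when the minimum
# positive-rate ratio across groups drops below a floor, or degrades
# materially from its value at deployment. Thresholds are illustrative.
def fairness_alert(current_ratio, baseline_ratio,
                   floor=0.80, max_drop=0.05):
    """current_ratio/baseline_ratio: min positive-rate ratio across groups."""
    if current_ratio < floor:
        return "ALERT: below four-fifths threshold"
    if baseline_ratio - current_ratio > max_drop:
        return "ALERT: material degradation from deployment baseline"
    return "OK"

print(fairness_alert(0.91, baseline_ratio=0.93))  # OK
print(fairness_alert(0.78, baseline_ratio=0.93))  # below threshold
```

A real monitoring job would compute the current ratio from production decision logs on a fixed schedule and route alerts into the incident process of Template 5.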

STAGE 5: GOVERNANCE AND ACCOUNTABILITY (Complete before deployment)

  1. [ ] Has an Algorithmic Impact Assessment been conducted (see Template 4)?
  2. [ ] Has the AI system been reviewed by a governance body (AI ethics committee, risk committee, legal, or equivalent)?
  3. [ ] Has leadership (at appropriate level given risk) reviewed and approved deployment?
  4. [ ] Has the organization documented who is responsible for the AI system's ongoing performance and ethics compliance?
  5. [ ] Has a schedule for periodic review of the AI system been established (recommended: annually, or upon significant changes)?
  6. [ ] If the AI system involves vendor-provided technology, has the vendor completed the vendor due diligence questionnaire (Template 3)?
  7. [ ] Have relevant regulatory requirements and guidance documents been reviewed and compliance confirmed?
  8. [ ] Has the AI system been registered in the organization's AI inventory?
  9. [ ] Have any required disclosures to regulators or individuals about the use of AI been made?
  10. [ ] Has the team established a process for documenting and learning from post-deployment incidents?

Checklist Completion Record

AI System Name: ___
Project Lead: ___
Checklist Completed By: ___
Date Completed: ___
Reviewed By: ___
Date Reviewed: ___
Number of "No" or "Uncertain" Responses: ___
Mitigation Plans Documented for All "No/Uncertain" Responses? Yes / No
Approved for Deployment? Yes / No / Conditional

Template 2: Stakeholder Analysis Worksheet

Instructions

Complete this worksheet early in an AI project to identify who is affected by and who has influence over the AI system being developed. Review and update at each major project milestone.


PART A: STAKEHOLDER IDENTIFICATION

For each category below, list specific stakeholders relevant to this AI project:

Stakeholder Category | Specific Stakeholders | Notes
Direct subjects (people whose data is used or who are directly affected by AI decisions) | ___ | ___
Frontline users (employees or staff who interact with or act on AI outputs) | ___ | ___
Organizational decision-makers (business unit leaders, executives) | ___ | ___
Technical team (developers, data scientists, product managers) | ___ | ___
Legal and compliance | ___ | ___
Procurement and vendor management | ___ | ___
Community organizations representing affected populations | ___ | ___
Regulators and government bodies with jurisdiction | ___ | ___
Civil society organizations (advocacy groups, watchdogs) | ___ | ___
Academic researchers in the relevant domain | ___ | ___
Media (journalists covering this sector) | ___ | ___
Other | ___ | ___

PART B: STAKEHOLDER MAPPING

For each key stakeholder (or stakeholder group), complete the following:

Stakeholder | Level of Impact (High/Med/Low) | Level of Influence (High/Med/Low) | Current Awareness | Engagement Status

Level of Impact: How significantly will this stakeholder be affected by the AI system's deployment?
Level of Influence: How much influence does this stakeholder have over the AI system's design, deployment, or continued use?
Current Awareness: Are they aware that this AI system is being developed?
Engagement Status: Have they been consulted? Informed? Not yet engaged?


PART C: ENGAGEMENT PLANNING

For each High-Impact or High-Influence stakeholder, complete an engagement plan:

Stakeholder: ___
Engagement objective: ___
Engagement method: ___
Timing: ___
Responsible team member: ___
How feedback will be incorporated: ___
Follow-up required: ___


PART D: EQUITY ASSESSMENT

Answer the following questions:

  1. Which stakeholders are most vulnerable to harm from this AI system? ___
  2. Which stakeholders have the least power to influence this AI system's design or challenge its outputs? ___
  3. What specific steps will be taken to include the perspectives of the most vulnerable and least powerful stakeholders in the design process? ___
  4. Are there stakeholders whose interests are systematically unrepresented in the current stakeholder list? ___

Template 3: AI Vendor Due Diligence Questionnaire

Instructions

Send this questionnaire to AI vendors prior to procurement. Request written responses. Unsatisfactory or missing responses should factor significantly into procurement decisions. Retain completed questionnaires as part of the vendor contract file.


SECTION A: TRAINING DATA

  1. What data was used to train the AI system? Describe sources, volume, and collection period.

  2. What demographic groups are represented in the training data? What are any known gaps in representation?

  3. Were data subjects informed of and did they consent to their data being used for AI training?

  4. How was the training data labeled? Who performed labeling, and what quality controls were applied?

  5. Is the training dataset available for independent inspection? If not, what is provided?


SECTION B: BIAS TESTING AND PERFORMANCE

  1. What fairness metrics were used to evaluate this AI system? Why were these metrics chosen?

  2. What are the documented accuracy rates for the AI system, broken down by relevant demographic groups (race, gender, age, disability status, national origin as applicable)?

  3. What are the error rates for the AI system, broken down by relevant demographic groups?

  4. Has the AI system been tested for proxy discrimination — the use of non-protected variables that are correlated with protected characteristics?

  5. Has the AI system been independently audited? By whom, using what methodology? Please provide the audit report.

  6. What performance degradation has been observed over time since deployment? How is performance monitored?


SECTION C: EXPLAINABILITY AND TRANSPARENCY

  1. How does the AI system make decisions? Describe the mechanism in terms accessible to non-technical users.

  2. What explanation can be provided to individuals affected by AI decisions? What format does it take?

  3. Is technical documentation of the system (model card, system card, data sheet) available? Please provide it.

  4. What aspects of the system are proprietary and cannot be disclosed? How does this limit our ability to independently evaluate the system?


SECTION D: GOVERNANCE AND ACCOUNTABILITY

  1. Who is responsible within your organization for the ethical performance of this AI system?

  2. What internal processes exist to identify and address bias or harm in this AI system?

  3. Do you maintain an ethics review process for AI systems? Please describe it.

  4. Have any regulatory actions, enforcement proceedings, or significant legal disputes involved this AI system or substantially similar systems? If so, provide details.

  5. Have you received complaints about this AI system from users or affected individuals? What were the outcomes?


SECTION E: INCIDENT RESPONSE

  1. What is your process for responding to reports that this AI system has caused harm?

  2. What is your notification timeline if you discover a defect or bias in the AI system after our deployment?

  3. What remediation options are available if we discover that your system has discriminated against our customers or employees?

  4. What contractual provisions govern liability for AI-caused harm?


SECTION F: ONGOING COMPLIANCE

  1. What changes to the AI system (retraining, updates, model changes) may affect its fairness or performance? How will we be notified?

  2. Will you provide ongoing monitoring data disaggregated by demographic group?

  3. Do you commit to providing access for independent auditing of the AI system as deployed in our context?

  4. Do you comply with the EU AI Act requirements relevant to this system (as applicable)?

  5. What certifications, standards, or frameworks does the AI system comply with (e.g., NIST AI RMF, ISO 42001)?

  6. What is your roadmap for addressing identified fairness concerns in future versions of the system?


Template 4: Algorithmic Impact Assessment Template

Overview

An Algorithmic Impact Assessment (AIA) is a structured process for evaluating the likely effects of an AI system before deployment. It should be conducted by a multi-disciplinary team that includes technical, legal, operational, and community perspectives.


SECTION 1: SYSTEM DESCRIPTION

AI System Name: ___
System Purpose (2–3 sentences): ___
Deployment Context: ___
Type of AI System: ___
Development Stage: ___
Assessment Date: ___
Assessment Team: ___

Describe the decision or recommendation this AI system will make or inform:

Describe the human process (if any) that this AI system will assist or replace:

List all data inputs to the AI system:


SECTION 2: AFFECTED POPULATION ANALYSIS

Who are the primary subjects of this AI system (people whose data is used or who receive its outputs)?

What are the relevant demographic characteristics of this population?

Are there subpopulations particularly vulnerable to harm from this system? Identify and describe:

What existing inequalities, power imbalances, or systemic disadvantages affect this population?

Have representatives of the affected population been consulted? Describe:


SECTION 3: IMPACT ANALYSIS

For each impact category, describe: (a) the potential positive impacts; (b) the potential negative impacts; and (c) the groups most likely to experience each type of impact.

Individual Rights and Dignity:

Access to Opportunities and Resources:

Privacy and Data Security:

Physical Safety:

Economic Effects:

Psychological and Emotional Effects:

Community and Social Effects:

Democratic and Civic Effects:


SECTION 4: FAIRNESS AND DISCRIMINATION ANALYSIS

Which fairness metric(s) will be used to evaluate this system? Justify the choice:

What are the documented performance disparities across demographic groups?

Are protected characteristics or proxies for them used in the model? If so, how is this justified?

Are there mechanisms for affected individuals to challenge AI-driven decisions?

What legal anti-discrimination requirements apply, and how does the system comply?


SECTION 5: RISK ASSESSMENT

Risk | Likelihood (1-5) | Severity (1-5) | Risk Score | Mitigation Plan
Discriminatory outcomes | ___ | ___ | ___ | ___
Privacy violation | ___ | ___ | ___ | ___
Security breach | ___ | ___ | ___ | ___
Unauthorized use or mission creep | ___ | ___ | ___ | ___
Performance degradation over time | ___ | ___ | ___ | ___
Manipulation by adversarial inputs | ___ | ___ | ___ | ___
Harm to vulnerable populations | ___ | ___ | ___ | ___
Reputational harm to organization | ___ | ___ | ___ | ___
Regulatory enforcement | ___ | ___ | ___ | ___
Other: | ___ | ___ | ___ | ___

Overall Risk Level: Low / Medium / High / Critical
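The Risk Score column above is conventionally the product of likelihood and severity on the 1-5 scales. The banding into overall levels below is an illustrative assumption that each organization should calibrate for itself:

```python
# Risk score = likelihood x severity (each 1-5), banded into the
# template's overall levels. Band thresholds are illustrative
# assumptions, not part of the template.
def risk_score(likelihood, severity):
    return likelihood * severity

def risk_level(score):
    if score >= 20:
        return "Critical"
    if score >= 12:
        return "High"
    if score >= 6:
        return "Medium"
    return "Low"

print(risk_level(risk_score(4, 5)))  # 20 -> Critical
print(risk_level(risk_score(3, 3)))  # 9  -> Medium
```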


SECTION 6: MITIGATION AND DECISION

List proposed mitigations for identified risks:

List residual risks remaining after mitigation:

Assessment Recommendation:
- [ ] Proceed with deployment as described
- [ ] Proceed with deployment subject to conditions (list below)
- [ ] Do not proceed; require redesign (explain below)
- [ ] Do not proceed; reject this use case

Conditions or required redesign:

Sign-offs Required:

Role | Name | Signature | Date
Project Lead | ___ | ___ | ___
Legal Counsel | ___ | ___ | ___
AI Ethics Reviewer | ___ | ___ | ___
Business Unit Leader | ___ | ___ | ___
Executive Sponsor | ___ | ___ | ___

Template 5: AI Incident Response Template

Immediate Response Protocol (First 24 Hours)

Step 1 — Incident Detection and Documentation

Document the following immediately upon receiving a report of AI-caused harm:
- Date and time of incident
- Date and time of report
- Name and contact information of reporter
- Description of alleged harm
- AI system and version involved
- Population affected (individual, group, or unknown)
- Apparent severity (Low / Medium / High / Critical)
- Source of report (internal, external, media, regulator)

Step 2 — Immediate Triage

Assign severity level and determine immediate response:
- Critical: Harm is ongoing and life-threatening or liberty-affecting; system must be immediately suspended pending investigation. Notify executive leadership immediately.
- High: Significant harm to one or more individuals; initiate investigation immediately. Notify legal counsel.
- Medium: Potential harm identified; begin investigation within 48 hours.
- Low: Concern raised; document and review within 30 days.
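The triage rules in Step 2 can be encoded so that intake tooling applies them consistently. The function below is a sketch that mirrors the protocol text; deadlines are expressed in hours (30 days as 720), and the structure is an illustrative assumption:

```python
# Map incident severity to the Step 2 response: whether to suspend the
# system, who to notify, and the investigation deadline in hours.
# Illustrative sketch mirroring the triage text above.
def triage(severity):
    rules = {
        "Critical": {"suspend": True,  "deadline_hours": 0,
                     "notify": ["executive leadership"]},
        "High":     {"suspend": False, "deadline_hours": 0,
                     "notify": ["legal counsel"]},
        "Medium":   {"suspend": False, "deadline_hours": 48, "notify": []},
        "Low":      {"suspend": False, "deadline_hours": 720,  # 30 days
                     "notify": []},
    }
    return rules[severity]

print(triage("Critical")["suspend"])  # True: suspend pending investigation
```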

Step 3 — Stakeholder Notification (Critical and High only)

Notify within 24 hours:
- [ ] General Counsel / Legal Team
- [ ] Chief Risk Officer or equivalent
- [ ] AI Ethics Lead
- [ ] Business Unit Leader responsible for AI system
- [ ] Communications / PR team
- [ ] Directly affected individuals (as legally required or appropriate)
- [ ] Regulators (as legally required — check GDPR 72-hour breach notification, state breach notification laws)


INVESTIGATION PHASE (Days 2–14)

Investigation Team: (Identify)

Questions to Address:
1. What exactly happened? (Technical description)
2. Who was harmed, and how severely?
3. Is harm ongoing?
4. What caused the incident? (Data issue, model failure, deployment error, misuse, adversarial manipulation, other)
5. Was this incident foreseeable given what was known at deployment?
6. Was this incident foreseeable given information available before deployment that was not reviewed?
7. What other populations are at risk of similar harm?
8. What are the legal implications?

Interim Measures to Consider:
- Suspend AI system pending investigation
- Implement additional human review for AI-assisted decisions
- Halt expansion of AI system to new contexts
- Preserve all relevant data and logs for legal review


REMEDIATION AND CLOSURE

Remediation Plan Template:
- Description of harm addressed: ___
- Steps taken to remediate the harm to affected individuals: ___
- Steps taken to prevent recurrence: ___
- Verification method that the fix works: ___
- Responsible party for implementation: ___
- Implementation timeline: ___
- Review date to confirm remediation: ___

Post-Incident Report Requirements: Document and retain: all communications about the incident; technical investigation findings; remediation steps taken; regulatory communications; legal analysis; and lessons learned.

Lessons Learned Review: Schedule within 30 days of incident closure. Include all relevant stakeholders. Document systemic changes to prevent similar incidents across all AI systems.


Template 6: Fairness Testing Protocol

Step-by-Step Process for Demographic Bias Testing

Phase 1: Scope Definition

Define the population the AI system will affect and identify the relevant demographic subgroups for testing. At minimum, test for race/ethnicity, gender, and age group; add disability status, national origin, and religion where applicable to the use case.

Define the relevant outcome variable(s): What is the AI system deciding? What constitutes a positive and negative outcome?

Select fairness metric(s) appropriate to the context (refer to Quick Reference Card 1). Document why selected metrics are appropriate for this use case and population.

Establish acceptable thresholds before testing begins. Do not select thresholds after seeing results.


Phase 2: Test Dataset Construction

Construct or obtain a test dataset that:
- Is genuinely separate from training data
- Is representative of the population the system will affect in production
- Includes demographic labels (race, gender, age, etc.) for each record
- Is large enough to support statistical inference for each subgroup (aim for n > 100 per subgroup)

Document all characteristics of the test dataset, including its limitations.


Phase 3: Outcome Rate Analysis

Calculate and document for each demographic subgroup:
- Overall positive outcome rate (e.g., loan approval rate, job offer rate, low-risk score rate)
- Apply the 4/5 rule: if any group's positive outcome rate is less than 80% of the group with the highest rate, an adverse impact concern exists
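The 4/5 rule is a direct calculation: divide each group's positive-outcome rate by the highest group's rate and flag ratios below 0.8. The group names and rates in this sketch are hypothetical:

```python
# Four-fifths (80%) rule check: flag any group whose positive-outcome
# rate is below 80% of the highest-rate group. Rates are hypothetical.
def adverse_impact_ratios(positive_rates):
    """positive_rates: dict mapping group name -> positive outcome rate."""
    top = max(positive_rates.values())
    return {group: rate / top for group, rate in positive_rates.items()}

rates = {"group_a": 0.60, "group_b": 0.45, "group_c": 0.58}
ratios = adverse_impact_ratios(rates)
flagged = [g for g, ratio in ratios.items() if ratio < 0.8]
print(flagged)  # group_b: 0.45 / 0.60 = 0.75 < 0.80
```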

Calculate and document for each subgroup:
- True positive rate (recall): of those who should get a positive outcome, what fraction do?
- False positive rate: of those who should get a negative outcome, what fraction get a positive outcome?
- False negative rate: of those who should get a positive outcome, what fraction get a negative outcome?
- Precision: of those who receive a positive outcome, what fraction should have received it?
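A minimal sketch of these per-subgroup rate calculations, computed from (actual, predicted, group) records. The records are synthetic, and the sketch assumes each subgroup contains both actual outcomes and both predictions (no zero denominators):

```python
# Per-subgroup confusion-matrix rates from (y_true, y_pred, group)
# records. Data below is synthetic, for illustration only.
from collections import defaultdict

def subgroup_rates(records):
    """records: iterable of (y_true, y_pred, group) with 0/1 labels."""
    counts = defaultdict(lambda: {"tp": 0, "fp": 0, "fn": 0, "tn": 0})
    for y_true, y_pred, group in records:
        key = ("tp" if y_true and y_pred else
               "fn" if y_true else
               "fp" if y_pred else "tn")
        counts[group][key] += 1
    return {
        g: {
            "tpr": c["tp"] / (c["tp"] + c["fn"]),  # recall
            "fpr": c["fp"] / (c["fp"] + c["tn"]),
            "fnr": c["fn"] / (c["fn"] + c["tp"]),
            "precision": c["tp"] / (c["tp"] + c["fp"]),
        }
        for g, c in counts.items()
    }

data = [(1, 1, "a"), (1, 0, "a"), (0, 0, "a"), (0, 1, "a"),
        (1, 1, "b"), (1, 1, "b"), (0, 1, "b"), (0, 0, "b")]
rates = subgroup_rates(data)
print(rates)
```

Comparing these dictionaries across groups, including intersectional groups where sample sizes permit, is the substance of this phase.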


Phase 4: Statistical Testing

Apply appropriate statistical tests to determine whether observed disparities are statistically significant:
- Chi-square test for differences in outcome rates
- Logistic regression with demographic group as a predictor to control for legitimate predictive factors
- Report both statistical significance (p-value) and effect size

A disparity that is statistically significant but very small may not require remediation; a disparity that is large but not statistically significant (due to small sample size) may still be a concern.
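The chi-square step might look like the following. The counts are hypothetical, the effect size is reported as the simple difference in positive-outcome rates, and the sketch is dependency-free (in practice `scipy.stats.chi2_contingency` does the same job); for a 2x2 table (1 degree of freedom), the p-value is erfc(sqrt(chi2 / 2)):

```python
# Chi-square test (2x2, no continuity correction) for a difference in
# positive-outcome rates between two groups, plus effect size.
# Counts are hypothetical.
import math

def chi_square_2x2(a, b, c, d):
    """Cells: [[a, b], [c, d]] = [[pos_A, neg_A], [pos_B, neg_B]]."""
    n = a + b + c + d
    chi2 = n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))
    p = math.erfc(math.sqrt(chi2 / 2))  # valid for df = 1
    return chi2, p

chi2, p = chi_square_2x2(60, 40, 45, 55)   # 60% vs 45% positive
effect = 60 / 100 - 45 / 100               # rate difference: 0.15
print(f"chi2 = {chi2:.2f}, p = {p:.4f}, rate difference = {effect:.2f}")
```

Here the disparity is both statistically significant (p < 0.05 at these sample sizes) and large (15 percentage points), so both numbers should be reported together, as the paragraph above requires.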


Phase 5: Analysis and Decision

If testing reveals disparities:
1. Document all findings transparently, including confidence intervals
2. Assess whether the disparity exceeds acceptable thresholds
3. Investigate the source of the disparity: training data? Feature selection? Label quality? Model architecture?
4. Evaluate whether mitigation is technically feasible and what trade-offs it would introduce
5. Consult legal counsel about whether the disparity creates legal risk
6. Make a documented decision: proceed, mitigate and retest, or reject deployment

Document this process and retain records.


Template 7: AI Ethics Policy Template

[ORGANIZATION NAME] — Artificial Intelligence Ethics Policy

Effective Date: ___
Approved By: ___
Owner: ___
Review Date: ___


1. Purpose and Scope

This policy establishes [Organization Name]'s principles and requirements for the ethical development, procurement, and deployment of artificial intelligence systems. It applies to:
- All AI systems developed internally
- All AI systems procured from third-party vendors
- All employees, contractors, and vendors involved in the development or use of AI systems
- All AI applications that [description of scope — e.g., make or materially influence decisions affecting customers, employees, or the public]


2. Core Principles

[Organization Name] is committed to the following principles in all AI development and deployment:

Fairness: AI systems must not discriminate on the basis of [list applicable protected characteristics] or on the basis of proxies for these characteristics. AI systems must be tested for disparate impact before deployment and monitored for disparate impact in deployment.

Transparency: [Organization Name] will be transparent with individuals about when AI systems are used to make or influence significant decisions about them. We will provide explanations of AI-assisted decisions appropriate to the context.

Accountability: [Organization Name] maintains human oversight of all AI systems that make or materially influence significant decisions. [Describe specific human review requirements.] The [designated role] is accountable for AI ethics compliance.

Privacy: AI systems must comply with all applicable privacy laws and [Organization Name]'s Privacy Policy. Collection and use of personal data for AI training and deployment must be limited to what is necessary and appropriate.

Safety: AI systems must not cause unreasonable harm to individuals, communities, or society. [Organization Name] will conduct impact assessments for high-risk AI applications and will not deploy AI systems whose risks exceed their benefits.


3. Governance Requirements

All AI systems must be registered in the organizational AI inventory before deployment.

AI systems assessed as high-risk [define what constitutes high risk] require:
- Completion of the AI Project Ethics Checklist (Form AIE-001)
- Completion of an Algorithmic Impact Assessment (Form AIE-004)
- Review and approval by the [AI Ethics Committee / Chief Risk Officer / other designated body]

All AI systems involving vendor-provided technology require completion of the AI Vendor Due Diligence Questionnaire (Form AIE-003) before contract execution.


4. Prohibited AI Practices

[Organization Name] will not develop, deploy, or procure AI systems that:
- Use race, color, religion, sex, national origin, disability, age, or other legally protected characteristics as factors in decisions where such use is prohibited by law
- Deploy surveillance of employees or customers without their knowledge in contexts where [describe]
- [Add organization-specific prohibitions]


5. Incident Reporting

Any employee who becomes aware of an AI system causing harm or producing discriminatory outputs must report this to [designated contact / AI ethics hotline] within [timeframe]. Reports may be made anonymously. Retaliation against good-faith reporters is prohibited.


6. Training

All employees who develop, deploy, or make decisions based on AI system outputs must complete [AI ethics training program] within [timeframe] of beginning work with AI systems and annually thereafter.


7. Compliance and Enforcement

Violations of this policy may result in disciplinary action, up to and including termination. Vendors who violate this policy are subject to contract termination.

This policy will be reviewed annually and updated as laws, technologies, and best practices develop.


Template 8: Employee AI Ethics Training Curriculum Outline

4-Hour AI Ethics Training Program

Program Objectives

Upon completion, participants will be able to:
1. Explain the major types of AI bias and how they arise
2. Apply a structured framework for identifying AI ethics concerns in their work
3. Know what questions to ask about AI systems before using their outputs
4. Know how and when to escalate AI ethics concerns
5. Describe the organization's AI ethics policy and their obligations under it


Module 1: What Is AI and Why Does Ethics Matter? (45 minutes)

  • Overview of AI systems used in the organization (15 min)
  • What can go wrong: case studies of AI failures (15 min)
  • Cases: COMPAS recidivism risk scores, facial-recognition false arrests, healthcare algorithm bias, the Amazon hiring tool
  • The ethical stakes: who is affected, what are the harms (15 min)

Materials: Video case studies, organizational AI overview handout


Module 2: Understanding AI Bias (60 minutes)

  • Sources of bias: training data, feedback loops, proxy variables, label bias (20 min)
  • Fairness metrics: introduction to key concepts (20 min)
  • What is disparate impact? How is it measured?
  • Why "our algorithm doesn't use race" doesn't solve the problem
  • Interactive exercise: bias identification in a case scenario (20 min)

Materials: Bias taxonomy reference card, fairness metrics cheat sheet


Module 3: The Ethics of AI in Your Role (45 minutes)

This module is customized for specific job functions:

For AI developers and data scientists:
- Ethics by design: building fairness considerations into the development process
- Documentation obligations: model cards, datasheets
- When to escalate and how

For managers and business leaders:
- Questions to ask before deploying AI
- Your accountability for AI outcomes in your area
- The procurement process and vendor oversight

For all employees:
- Appropriate reliance on AI outputs: when to defer, when to question
- Recognizing potential AI errors in daily work
- The organization's AI incident reporting process


Module 4: Laws, Policy, and Your Obligations (45 minutes)

  • Key legal requirements: GDPR, CCPA, ECOA, FHA, EEOC, relevant state laws (20 min)
  • The organization's AI ethics policy: requirements and obligations (15 min)
  • How to report concerns; anti-retaliation protections (10 min)

Materials: Quick reference card on key laws; AI ethics policy


Application and Assessment (45 minutes)

  • Case study analysis: small group exercise applying framework to a new scenario (25 min)
  • Q&A and debrief (10 min)
  • Assessment: 15-question quiz covering key concepts (10 min)

Follow-Up Resources

  • AI ethics policy document
  • Quick reference cards (see Appendix G)
  • Contact information for AI ethics questions and reporting
  • Links to continuing education resources

Completion requirements: Score of 80% or above on the quiz; participants scoring below 80% attend a remedial session.


Template 9: AI Procurement Contract Provisions

Model Contract Language for AI Ethics Obligations

Instructions: The following provisions should be reviewed and adapted by legal counsel. They may be incorporated into master service agreements, statements of work, or addenda to existing vendor contracts.


Article ___: AI Ethics Obligations

[VENDOR NAME] ("Vendor") agrees to the following AI ethics obligations with respect to the AI System(s) provided under this Agreement:

1. Bias Testing and Documentation

1.1 Vendor warrants that, prior to delivery, the AI System has been tested for performance disparities across demographic groups including, at minimum: race and ethnicity, gender, and age, as applicable to the intended use case.

1.2 Vendor shall provide [Purchaser] with documentation of bias testing methodology, test datasets used, and results of bias testing, including performance metrics disaggregated by demographic group.

1.3 Vendor shall maintain and provide to [Purchaser] a current model card or equivalent technical documentation describing the AI System's training data, intended use, performance characteristics, and known limitations.

2. Audit Rights

2.1 [Purchaser] shall have the right, upon reasonable notice, to conduct or commission an independent audit of the AI System's performance in [Purchaser]'s deployment context.

2.2 Vendor shall cooperate with audits by providing access to: technical documentation; training data descriptions; performance logs; and technical personnel for inquiry.

2.3 Vendor shall not condition audit access on confidentiality restrictions that would prevent [Purchaser] from reporting material findings to regulators or affected individuals.

3. Incident Notification

3.1 Vendor shall notify [Purchaser] within [72 hours / specify] of discovering any defect, bias, vulnerability, or error in the AI System that has caused or is reasonably likely to cause harm to individuals.

3.2 Vendor shall provide a written incident report within [10 business days] of initial notification, describing the nature of the defect, affected populations, and Vendor's remediation plan.

4. Compliance with Law

4.1 Vendor warrants that the AI System, as configured for [Purchaser]'s use, complies with applicable anti-discrimination laws in [Purchaser]'s jurisdiction, including [specify relevant laws].

4.2 Vendor shall promptly notify [Purchaser] of any regulatory investigation or enforcement action involving the AI System or substantially similar systems.

5. Updates and Changes

5.1 Vendor shall provide advance notice of [60 days] before implementing material changes to the AI System, including retraining on new data, changes to model architecture, or changes to performance thresholds.

5.2 Vendor shall provide updated bias testing documentation following material changes to the AI System.

6. Liability

6.1 In the event that use of the AI System results in a finding of unlawful discrimination against [Purchaser]'s customers or employees, Vendor shall [indemnify / share liability according to specified allocation / describe].

6.2 [Purchaser]'s right to audit and Vendor's notification obligations under this Article survive termination of this Agreement for [specify period].


Template 10: Board AI Risk Briefing Template

Quarterly AI Risk Report to [Board / Executive Committee]

Prepared by: [AI Ethics Lead / Chief Risk Officer / CISO]
Date: ___
Classification: Confidential


EXECUTIVE SUMMARY

[3–5 sentence overview of the quarter's AI ethics landscape: overall risk level, major developments, issues requiring board attention]

Overall AI Ethics Risk Level This Quarter: Low / Medium / High / Critical


SECTION 1: AI SYSTEM INVENTORY UPDATE

Metric | This Quarter | Prior Quarter | Change
Total AI systems in production | ___ | ___ | ___
New systems deployed | ___ | ___ | ___
Systems retired | ___ | ___ | ___
High-risk systems under enhanced monitoring | ___ | ___ | ___
Systems pending ethics review | ___ | ___ | ___
SECTION 2: INCIDENT REPORT

Incident Summary | Severity | Status | Resolution
___ | ___ | ___ | ___

Total incidents this quarter: ___
Total incidents year-to-date: ___


SECTION 3: REGULATORY AND LEGAL DEVELOPMENTS

[List significant regulatory developments, enforcement actions in the industry, relevant legislation, and assessment of their implications for the organization]

Actions Required: - [ ] Action item 1 - [ ] Action item 2


SECTION 4: FAIRNESS MONITORING RESULTS

[Summarize results of bias monitoring for high-risk AI systems deployed in production. Flag any systems showing performance disparities that warrant attention.]


SECTION 5: KEY METRICS

Metric | Target | This Quarter
% AI systems with completed ethics checklist | 100% | ___
% high-risk systems with completed AIA | 100% | ___
% vendors with completed due diligence questionnaire | 100% | ___
% AI staff with current ethics training completion | 95% | ___
Mean time to resolve AI ethics incidents | 30 days | ___

SECTION 6: ISSUES REQUIRING BOARD ATTENTION

[Identify specific issues that require board-level decision, guidance, or awareness. For each: describe the issue, present options, and make a recommendation.]


SECTION 7: FORWARD LOOK

[Anticipated AI deployments next quarter, known regulatory developments, planned audits or assessments, resource needs]


This report should be retained pursuant to the organization's document retention policy.