Chapter 22: Exercises — Whistleblowing and Ethical Dissent in AI Organizations

Difficulty ratings: ⭐ (foundational) through ⭐⭐⭐⭐ (advanced). Exercises marked with † involve external research or fieldwork.


Foundational Exercises (⭐)

Exercise 1 — Dissent Spectrum Mapping

For each of the following employee actions, identify where it falls on the dissent spectrum described in Section 22.1 (informal, formal internal, internal escalation, external disclosure, or resignation):

(a) An engineer sends an email to her team lead flagging a potential fairness problem in a model before it goes to review.
(b) A data scientist files an anonymous report through the company's ethics hotline about a training dataset he believes was collected without adequate informed consent.
(c) A responsible AI reviewer, after having her concerns about a bias problem dismissed by three levels of management, sends a detailed memo to the board's audit committee describing the problem and the dismissals.
(d) A product manager resigns from a company after being told that her concerns about a surveillance product will not delay the product's launch.
(e) A research engineer provides internal documents to a journalist documenting her company's suppression of internal research on its AI system's discriminatory outcomes.

For each action, identify: (1) the spectrum level; (2) the likely personal risk to the individual; (3) the likely organizational impact; and (4) what prior actions on the spectrum the individual likely attempted before reaching this level.


Exercise 2 — Moral Disengagement Identification

For each of the following statements by AI industry employees, identify which of Bandura's moral disengagement mechanisms is operating and explain how the mechanism works in this context:

(a) "The model is only slightly less accurate for darker-skinned people, and overall it's way more accurate than the manual process it replaces."
(b) "I just design the architecture — what use cases it gets applied to is a business decision."
(c) "We call it 'engagement optimization,' but if you step back, what we're really doing is maximizing outrage."
(d) "The legal team reviewed it. The ethics team reviewed it. Multiple VPs signed off on it. There's no way everyone missed something serious."
(e) "Every AI system has some bias — that's just the nature of training on real-world data. We can't be held to an impossible standard."

After identifying the mechanism in each case, propose a reframing or organizational intervention that would interrupt that specific mechanism.


Exercise 3 — Legal Protection Assessment

For each of the following employee situations, assess what federal whistleblower protections might apply, what the most significant legal gap is, and what the employee should do before making any disclosure:

(a) An engineer at a publicly traded AI company discovers that the company's product performs significantly worse for protected demographic groups and that executives knew about this before making positive statements to investors about the product's fairness.
(b) A data scientist at a federal AI contractor discovers that the AI system the company sold to a government agency does not meet the performance specifications the company represented in the contract.
(c) A responsible AI researcher raises concerns internally about a social media algorithm's amplification of health misinformation; is told her concerns will be addressed; and then observes that no changes are made over the following six months.
(d) An engineer at a California-based AI company believes she was passed over for promotion in retaliation for her vocal opposition to a product decision she believed was ethically problematic.


Intermediate Exercises (⭐⭐)

Exercise 4 — Psychological Safety Audit

Design a psychological safety audit for an AI product team. Your audit should include:

  • A list of 10 specific interview questions (suitable for individual interviews with team members) designed to assess whether team members feel safe raising ethics concerns
  • A list of 5 behavioral indicators that a manager or observer could look for in team meetings and working sessions
  • A list of 3 organizational documents (team norms, performance review criteria, incident reports, etc.) that would provide additional evidence
  • An assessment rubric that translates audit findings into a psychological safety rating with specific improvement recommendations

After designing the audit, describe how you would present findings to team leadership in a way that is likely to result in genuine change rather than defensive dismissal.


Exercise 5 — The Pre-Mortem

Working in a group of 4–6, conduct a structured pre-mortem for the following AI project:

A major bank is deploying an AI system for credit underwriting decisions — determining which applicants receive loans and at what interest rates. The system was trained on 10 years of the bank's historical lending data. It has been validated to outperform the bank's existing scorecard on standard accuracy metrics. It will be deployed in all markets where the bank operates, including several states with significant racial wealth disparities in homeownership.

For the pre-mortem exercise:

  1. Imagine it is three years after deployment. The system has caused a significant harm — a harm serious enough to generate regulatory action and public attention. What is that harm?
  2. Working backward from that imagined harm, identify the decisions, assumptions, and process failures that led to it.
  3. For each identified failure, propose a specific governance intervention (a review step, a test, a documentation requirement, a decision gate) that would have caught the problem.
  4. Produce a "pre-mortem report" summarizing your findings that could be presented to the project's ethics review board.


Exercise 6 — Responsible Disclosure Ethics

You are an engineer at a medium-sized AI company. You have discovered, through your own analysis, that the company's AI hiring screening tool — sold to many corporate clients — produces significantly worse outcomes for women and for people who attended Historically Black Colleges and Universities. You have raised this concern twice through internal channels; both times you were told that the finding reflects real-world patterns in the labor market, not a flaw in the system. You have been offered a promotion to a role that would give you less access to this system.
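The kind of disparity analysis described in this scenario can be made concrete with a simple selection-rate check. The sketch below is purely illustrative — the group labels and counts are invented, and the 80% threshold is the EEOC "four-fifths rule" heuristic, one common first screen for adverse impact:

```python
# Hypothetical sketch of the engineer's analysis. Group labels and
# counts are invented; the 0.8 threshold is the EEOC "four-fifths"
# screening heuristic, not a definitive fairness test.
from collections import Counter

def selection_rates(records):
    """records: iterable of (group, passed_screen) pairs."""
    totals, passed = Counter(), Counter()
    for group, ok in records:
        totals[group] += 1
        passed[group] += int(ok)
    return {g: passed[g] / totals[g] for g in totals}

def four_fifths_flags(rates, threshold=0.8):
    """Flag groups whose selection rate falls below `threshold` times
    the highest group's selection rate."""
    top = max(rates.values())
    return {g: r / top < threshold for g, r in rates.items()}

records = ([("men", True)] * 62 + [("men", False)] * 38
           + [("women", True)] * 41 + [("women", False)] * 59)
rates = selection_rates(records)
print(rates)                     # {'men': 0.62, 'women': 0.41}
print(four_fifths_flags(rates))  # women flagged: 0.41 / 0.62 ≈ 0.66 < 0.8
```

Note that a flag under this heuristic is where the ethical analysis begins, not where it ends — which is exactly the gap the questions below probe.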

Work through the following questions as a structured ethical analysis:

  1. How would you assess whether internal channels have been "genuinely exhausted"?
  2. Who are the populations harmed by this system, and what is the nature of the harm?
  3. What external disclosure options are available to you, and what are the legal and professional risks of each?
  4. What is the ethical calculus for choosing among disclosure to the affected clients, disclosure to a regulator, public disclosure, and resignation?
  5. What specific steps would you take, in what order, before making any external disclosure?

Conclude with a clear recommendation and defense of your chosen course.


Exercise 7† — Organizational Silence Analysis

Interview two to four employees at the same organization (any organization, not necessarily AI-specific) about their experience with raising concerns — whether about ethics, quality, safety, or other issues — through internal channels. Ask:

  • Have you observed situations where you or colleagues chose not to raise a concern you believed was serious? What factors led to that decision?
  • Have you seen colleagues raise concerns and face adverse consequences? What happened?
  • What would make you more likely to raise a concern in the future?
  • What would make you less likely?

After your interviews, write a 500-word analysis connecting what you heard to the organizational silence and moral disengagement concepts discussed in this chapter. Where does the theory explain your interviewees' experiences? Where does it fall short?


Advanced Exercises (⭐⭐⭐)

Exercise 8 — Whistleblower Protection Policy Design

You have been asked by a major technology company's board of directors to design a comprehensive employee ethics reporting and protection policy. The company employs 15,000 people and develops AI systems used in healthcare, financial services, and consumer applications. The board has asked specifically for a policy that genuinely protects employees who raise ethics concerns — not merely a policy that formally complies with legal requirements.

Design the policy, addressing:

  • The types of concerns covered (what qualifies as a reportable ethics concern)
  • Available reporting channels and their independence from management
  • Anonymity protection and its limits
  • Non-retaliation provisions with specific enforcement mechanisms
  • The process for investigating reported concerns
  • Reporting to the board on ethics concern volume, nature, and resolution
  • Escalation to regulators (when and how)
  • Protection for employees who escalate to regulators over the company's objection
  • The review and updating process

After drafting the policy, identify the three most significant ways your policy differs from typical corporate ethics hotline policies and explain why those differences matter.


Exercise 9 — Case Comparison Analysis

Using the cases examined in this chapter, including Case Studies 22.1 and 22.2, conduct a comparative analysis of the Timnit Gebru and Frances Haugen disclosures across the following dimensions:

  1. Triggering events: What specific events precipitated the decision to disclose externally?
  2. Preparation and legal strategy: What preparation preceded external disclosure, and what legal frameworks were invoked?
  3. Disclosure targets and sequence: Who was told first, second, and third? Why?
  4. Organizational response: How did the organization respond to disclosure, and what did that response reveal about the organization's culture?
  5. Legal outcome: What legal protections were available and how effective were they?
  6. Policy impact: Did the disclosure change laws, regulations, organizational practices, or public understanding?
  7. Personal costs: What costs did the disclosing individual bear?

Based on your comparison, draw five lessons applicable to employees who may face similar situations in AI organizations.


Exercise 10† — Legal Landscape Research

Using publicly available legal resources (Westlaw, legal databases, government websites, law review articles), research the current state of AI ethics whistleblower protection in:

  • US federal law (identify specific statutes and their applicability to AI ethics concerns)
  • California state law (Labor Code Section 1102.5 and any additional protections)
  • The EU (the Whistleblower Protection Directive and AI Act provisions)
  • One additional jurisdiction of your choice

Produce a comparative analysis (2,000–2,500 words) assessing which jurisdiction provides the most comprehensive protection for AI ethics whistleblowers, what the most significant remaining gaps are in each jurisdiction, and what legislative changes would provide adequate protection. Cite specific statutory provisions and, where available, relevant cases.


Expert Exercises (⭐⭐⭐⭐)

Exercise 11 — Organizational Culture Redesign

You are a newly appointed Chief Ethics Officer at a major AI company. Shortly before you were hired, the company's most senior AI ethics researcher was fired in circumstances widely attributed to retaliation for her research on the harms of the company's AI systems. Your company's culture is regarded poorly within the AI ethics field, and talented AI ethics researchers are reluctant to work there. Your board has given you a mandate to genuinely rebuild the organization's ethics culture — not merely its reputation.

Design a 12-month plan for your first year, addressing:

  • Immediate actions in the first 30 days to signal genuine change
  • Structural changes to governance (ethics body authority, reporting lines, enforcement)
  • Changes to the responsible AI team's authority and mandate
  • Changes to the ethics reporting and retaliation protection infrastructure
  • Changes to incentive structures and performance management
  • How you will measure progress over 12 months
  • How you will communicate progress to employees, regulators, and the public

Be specific about which structural changes are most important and why, and what you anticipate the organizational resistance to each will be.


Exercise 12 — The Suppression Pattern Analysis

The chapter identifies a consistent pattern across high-profile AI ethics cases: internal concerns were raised and not adequately addressed; the individuals who raised them experienced adverse consequences; organizations offered formal, policy-based explanations for adverse actions that critics attributed to retaliation; and the individuals who ultimately disclosed externally paid significant personal costs.

Write a 2,000-word analysis addressing:

  1. Why does this pattern repeat? What organizational dynamics produce it regardless of organization-specific factors?
  2. What would need to change — in organizational culture, legal framework, or governance structure — for the pattern to change?
  3. Is it possible for a large commercial AI organization to have a culture that genuinely welcomes ethics dissent, given the structural tensions between ethics and commercial performance? What evidence supports your answer?
  4. What role should external mechanisms — regulatory oversight, independent auditing, professional licensing — play in filling the gap that internal governance structures have consistently failed to fill?


Exercise 13† — Policy Design and Stakeholder Engagement

You have been asked to draft model legislation for a state AI ethics whistleblower protection law. The law should protect employees who raise AI ethics concerns that cause or risk significant harm to the public, even when those concerns do not clearly violate existing law.

Your legislative proposal should address:

  • Definitions: what qualifies as a protected AI ethics disclosure?
  • Covered employers: which organizations are subject to the law?
  • Protected activities: what types of disclosure are protected (to regulators, to employers, to the public)?
  • Anti-retaliation provisions: what actions are prohibited?
  • Remedies: what are the consequences for retaliation?
  • Enforcement: who enforces the law, and what are the enforcement mechanisms?
  • Relationship to federal law and NDAs: how does the state law interact with federal whistleblower statutes and existing confidentiality agreements?

After drafting the proposal, identify and engage (by letter or email) with at least two stakeholders — an employee advocacy organization, a technology company trade association, a civil society organization, or a relevant government agency — seeking their perspective on the proposal. Incorporate the feedback you receive into a revised draft with an explanatory memorandum.


Exercise 14 — Role-Play: The Ethics Hotline Investigation

Working in groups of three, role-play the following scenario:

Person A is an anonymous reporter to an ethics hotline. She is a data scientist who has discovered that a healthcare AI system her company deploys produces recommendations that are systematically less accurate for elderly patients of color, a population that is underrepresented in the training data. She has raised this informally with her manager, who told her the finding was an artifact of the evaluation methodology. She has filed an anonymous hotline complaint.

Person B is the ethics hotline investigator assigned to the case. She has no technical AI background but is an experienced HR professional.

Person C is the technical reviewer that Person B has asked to assist with the investigation.

Conduct the investigation role-play, addressing:

  • How does the investigator approach the case with limited technical background?
  • What evidence does she seek, and how does she evaluate it?
  • What is the relationship between the investigator's institutional role and her obligation to the reporting employee?
  • At what point, if any, does the investigation conclude that external escalation is required?
  • How is the outcome communicated to the anonymous reporter?
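One concrete question the role-play will surface is whether the manager's "evaluation artifact" explanation can be tested rather than merely asserted. A minimal sketch of the statistical check Person C might run — a two-proportion z-test on per-group accuracy, with all counts invented for illustration:

```python
# Hypothetical check of the "evaluation artifact" claim: is the subgroup
# accuracy gap larger than sampling noise alone would explain?
# All counts below are invented for illustration.
import math

def accuracy_gap_z(correct_a, n_a, correct_b, n_b):
    """Two-proportion z-statistic for the accuracy difference between
    group A (reference cohort) and group B (subgroup of concern)."""
    p_a, p_b = correct_a / n_a, correct_b / n_b
    pooled = (correct_a + correct_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# Reference cohort: 91% accurate on 10,000 cases.
# Elderly patients of color: 77.5% accurate on 400 cases.
z = accuracy_gap_z(9100, 10000, 310, 400)
print(round(z, 2))  # ≈ 9.02, far beyond 1.96: hard to dismiss as noise
```

A large z-statistic does not by itself settle the methodological dispute — the subgroup sample could still be mislabeled or unrepresentative — but it shifts the burden from the anonymous reporter to the team defending the system, which is precisely the dynamic the investigation questions above explore.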

After the role-play, each participant writes a 300-word individual reflection on what the exercise revealed about the limits of internal investigation processes for technically complex AI ethics concerns.


Exercise 15† — Designing an Independent AI Ethics Audit Function

One proposed solution to the organizational suppression of internal AI ethics dissent is the creation of an independent external AI ethics audit function — analogous to financial auditing but focused on AI ethics — that would give third-party auditors access to AI systems, training data, and internal documentation and produce published assessments of AI ethics compliance.

Research the current state of AI ethics auditing (several organizations offer this service; the academic literature on algorithmic auditing is relevant), then design a framework for such an audit function, addressing:

  • What would AI ethics auditors assess? (Criteria and methodology)
  • Who would conduct audits, and what qualifications would they require?
  • Who would have access to audit findings?
  • How would audit findings be enforced?
  • What protections would employees who cooperate with auditors have, including when they provide information contrary to management interests?
  • How would conflicts of interest (auditors hired by the companies they audit) be managed?
  • What role should regulators play?

Present your framework as a proposal to a relevant regulatory body (EU, FTC, SEC, or another of your choice), addressing both the case for independent auditing and the significant practical and political challenges to implementing it.