Appendix I: Frequently Asked Questions
AI Ethics — Questions and Answers for Business Professionals
Introduction
The questions in this appendix represent the most common concerns raised by business professionals, graduate students, and organizational leaders engaging with AI ethics for the first time. They are organized by topic, but most questions have implications that cross multiple areas. Cross-references are provided where related questions illuminate each other.
Part A: General AI Ethics
Q1. Is AI inherently biased?
No system — human or algorithmic — is bias-free, and "bias" in the technical sense is unavoidable: any prediction system that performs better than random must learn some generalizations from data, and those generalizations can be more or less accurate for different subgroups. But the question most people mean is whether AI systems produce unjust outcomes for identifiable groups, and on that question the empirical record is clear: many widely deployed AI systems do produce unjust outcomes. NIST's Face Recognition Vendor Test (FRVT) documented that most commercial facial recognition systems have significantly higher error rates for darker-skinned and female faces. Obermeyer et al. found that a healthcare algorithm applied to roughly 200 million patients systematically underestimated Black patients' health needs. The issue is not whether AI is inherently biased in some philosophical sense but whether AI systems as actually built and deployed produce discriminatory outcomes — and they frequently do.
Q2. Can you be ethical and use AI?
Yes. Ethical AI use is possible; it simply requires intentionality and rigor. Many AI applications involve straightforward tasks (manufacturing quality control, logistics optimization, fraud detection) where bias concerns are manageable and benefits are clear. Even in higher-stakes applications, AI systems can be built and deployed ethically when organizations invest in bias testing, human oversight, transparency, accountability mechanisms, and genuine engagement with affected communities. The alternative — that AI use is categorically unethical — is both implausible (given the range of applications) and unhelpful (it forecloses engagement with the real work of making AI more just). The harder, more honest answer is: being ethical while using AI requires more effort, investment, and governance than most organizations currently provide.
Q3. Don't AI algorithms just reflect reality? How can a mirror be biased?
This is the "mirror" objection — the claim that AI systems merely reflect patterns in historical data, which are themselves a neutral record of reality. The objection fails for several reasons. First, historical data encodes historical injustice: if Black job applicants were historically rejected due to discrimination, a model trained on hiring outcomes will learn to reject Black applicants. The mirror reflects a distorted room. Second, AI models do not passively reflect data — they learn generalizations and extend them, often amplifying patterns in the data beyond their original frequency. Third, the choice of what data to collect, how to measure outcomes, and what to predict is itself a value-laden decision that shapes what the model learns. There is no such thing as a perfectly neutral dataset.
Q4. Isn't the real problem human bias, not AI bias?
AI bias and human bias are related but distinct problems. Human bias in hiring, lending, and criminal justice is real and well-documented. But the fact that humans are biased does not excuse AI systems that are also biased — particularly because AI systems have properties that make their bias especially concerning. Unlike individual human decisions, algorithmic decisions scale to millions of people simultaneously; a biased algorithm is biased for everyone it touches. Algorithmic decisions are also harder to challenge: a person who suspects a human decision was discriminatory can appeal to a manager; a person who suspects an algorithm was discriminatory has minimal practical recourse. And algorithmic decisions can create feedback loops that entrench and amplify bias over time in ways human decisions typically do not.
Q5. What is the difference between AI safety and AI ethics?
These terms are used inconsistently across the field, which creates confusion. In rough terms: "AI safety" often refers to ensuring AI systems work reliably as intended — that they don't crash, hallucinate dangerously, or behave unexpectedly. In the context of long-termist AI research, "AI safety" also encompasses existential risk from advanced AI. "AI ethics" is the broader inquiry into the values and social implications of AI systems, including fairness, accountability, transparency, privacy, and the distribution of AI's benefits and harms. The relationship is one of subset and superset: a safe AI system (one that works reliably) might still be deeply unethical (if it reliably and accurately discriminates). Ethical AI encompasses safety but goes beyond it.
Part B: Technical Questions
Q6. What is the difference between bias and fairness?
"Bias" refers to systematic error — departures from some benchmark of correct or neutral behavior. In machine learning, bias has a technical meaning (the bias-variance trade-off), but in AI ethics it typically means systematic error that disadvantages certain groups. "Fairness" is the broader normative concept: what distribution of outcomes would be just? Bias is an empirical property (measurable, subject to dispute about what the correct benchmark is); fairness is a normative property (dependent on values and contested).
A key insight: there is no single, objective definition of fairness, and multiple fairness metrics conflict with each other (see the Chouldechova impossibility result). Choosing a fairness metric is a value judgment. When someone claims their system is "fair," ask: fair by which metric? For which groups? According to whose values?
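The conflict between metrics can be made concrete with a toy sketch (all numbers hypothetical): a screening tool that selects candidates at identical rates in two groups, satisfying demographic parity, while finding truly qualified candidates at different rates, violating equal opportunity.

```python
# Toy illustration with invented data: the same predictions satisfy one
# fairness metric (demographic parity) while violating another (equal
# opportunity). 1 = selected / qualified, 0 = not.

def selection_rate(preds):
    """Fraction of the group that the tool selects."""
    return sum(preds) / len(preds)

def true_positive_rate(preds, labels):
    """Among the truly qualified, the fraction the tool selects."""
    selected_among_qualified = [p for p, y in zip(preds, labels) if y == 1]
    return sum(selected_among_qualified) / len(selected_among_qualified)

# Group A: 10 applicants, 5 qualified; tool selects 5, finding 4 of 5 qualified
labels_a = [1, 1, 1, 1, 1, 0, 0, 0, 0, 0]
preds_a  = [1, 1, 1, 1, 0, 1, 0, 0, 0, 0]

# Group B: 10 applicants, 8 qualified; tool selects 5, finding 5 of 8 qualified
labels_b = [1, 1, 1, 1, 1, 1, 1, 1, 0, 0]
preds_b  = [1, 1, 1, 1, 1, 0, 0, 0, 0, 0]

# Demographic parity holds: both groups selected at rate 0.5
print(selection_rate(preds_a), selection_rate(preds_b))        # 0.5 0.5

# Equal opportunity is violated: qualified members of Group B
# are selected at a lower rate (0.625) than Group A (0.8)
print(true_positive_rate(preds_a, labels_a))                   # 0.8
print(true_positive_rate(preds_b, labels_b))                   # 0.625
```

The example is deliberately tiny, but the pattern is the general one behind the impossibility results: whenever base rates of qualification differ between groups, the metrics cannot all be equalized at once.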
Q7. What is a model card?
A model card is a short document accompanying a machine learning model that describes its intended use, performance characteristics, evaluation results, limitations, and ethical considerations. The concept was introduced by Margaret Mitchell and colleagues at Google in 2019. A model card typically includes: the model's intended application; datasets used for training and evaluation; performance metrics (including disaggregated metrics by demographic group); known limitations and biases; recommendations for appropriate and inappropriate use; and information about who developed the model. Model cards are now increasingly expected by professional norms and in some contexts by regulation. Some AI vendors provide model cards; practitioners should request them as part of vendor due diligence.
Q8. What is a datasheet for datasets?
A datasheet for datasets (proposed by Gebru et al., 2018) is the data equivalent of a model card — a structured document describing a dataset's composition, collection process, preprocessing steps, intended uses, distribution, and maintenance. Datasheets document: why the dataset was created; who created it; what it contains; how it was collected; how it was labeled; what exclusions were made; who might be harmed by its use; and whether it contains personal information. Datasheets are essential for understanding what biases a model might inherit from its training data, and their absence is a red flag in vendor due diligence.
Q9. What is explainability, and why does it matter?
Explainability (also called interpretability) refers to the degree to which a human can understand how an AI system reached a particular output. An explainable AI system is one that can tell you, in human-comprehensible terms, why it made a specific prediction or recommendation. Explainability matters for several reasons: it enables individuals to understand and challenge decisions that affect them (due process); it enables auditors and regulators to assess compliance; it enables developers to identify and correct errors and biases; and it builds organizational and public trust. The EU AI Act and GDPR both establish explainability-related requirements. Explainability is constrained by technical complexity — deep neural networks are genuinely difficult to explain — but tools like LIME and SHAP provide partial explanations even for black-box models.
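One model-agnostic technique in the same family as LIME and SHAP is permutation importance: shuffle a single input feature and measure how much the model's predictions move; features the model relies on produce large shifts. A self-contained sketch, using an invented "black box" scoring function in place of a real model:

```python
import random

# Invented stand-in for an opaque model: we pretend we cannot read its
# internals and can only query it for predictions.
def black_box(income, debt, age_noise):
    return 2.0 * income - 1.5 * debt + 0.0 * age_noise  # ignores age_noise

random.seed(0)
data = [(random.random(), random.random(), random.random()) for _ in range(200)]
baseline = [black_box(*row) for row in data]

def permutation_importance(feature_idx):
    # Shuffle one feature column, re-query the model, and measure the mean
    # change in predictions: a large change means the model uses that feature.
    column = [row[feature_idx] for row in data]
    random.shuffle(column)
    shuffled = [list(row) for row in data]
    for row, value in zip(shuffled, column):
        row[feature_idx] = value
    perturbed = [black_box(*row) for row in shuffled]
    return sum(abs(a - b) for a, b in zip(baseline, perturbed)) / len(data)

for name, idx in [("income", 0), ("debt", 1), ("age_noise", 2)]:
    print(name, round(permutation_importance(idx), 3))
```

Run on this toy model, the probe correctly reports that income matters most, debt less, and age_noise not at all, without ever inspecting the model's internals. This is what "partial explanation" means in practice: the technique reveals which inputs drive outputs, not why.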
Q10. What is the difference between AI, machine learning, and deep learning?
Artificial intelligence (AI) is the broad field concerned with creating systems that can perform tasks that typically require human intelligence. Machine learning (ML) is a subset of AI that uses statistical techniques to enable computers to learn from data, rather than being explicitly programmed with rules. Deep learning is a subset of ML that uses neural networks with many layers (hence "deep") to learn representations of data. Deep learning is responsible for most of the AI advances since 2012. Understanding the distinction matters for AI ethics because different techniques have different explainability properties, different data requirements, and different failure modes. A rule-based system is fully explainable; a deep learning model may be genuinely opaque.
Q11. What is a large language model (LLM), and how does it generate text?
A large language model is a neural network trained on enormous quantities of text to predict the next token (roughly, the next word or word-piece) in a sequence. Given a prompt, an LLM generates text by repeatedly sampling a likely next token from its predicted distribution, producing a sequence that is statistically typical of text in its training data. LLMs do not "understand" language in any philosophically robust sense — they are sophisticated pattern matchers. This is the basis for the "stochastic parrot" critique (Bender et al.): LLMs produce text that looks meaningful because it matches statistical patterns of meaningful text, not because the model has any underlying understanding or factual knowledge. This makes LLM outputs unreliable for high-stakes factual claims and means that confident-sounding LLM text can be completely wrong.
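The next-token mechanism can be illustrated with a deliberately tiny sketch: a bigram count table stands in for an LLM's learned distribution, and greedy selection stands in for sampling. The corpus and every name here are invented for illustration; a real LLM conditions on the entire preceding context, not just one token.

```python
from collections import Counter, defaultdict

# A miniature "training corpus" (invented for illustration)
corpus = ("the model predicts the next token and "
          "the model generates the next word").split()

# Count which token follows which: a crude stand-in for the learned
# next-token distribution inside an LLM
follows = defaultdict(Counter)
for cur, nxt in zip(corpus, corpus[1:]):
    follows[cur][nxt] += 1

def generate(token, n=5):
    """Repeatedly pick the most frequent continuation (greedy decoding)."""
    out = [token]
    for _ in range(n):
        if token not in follows:
            break  # dead end: nothing ever followed this token
        token = follows[token].most_common(1)[0][0]
        out.append(token)
    return " ".join(out)

print(generate("the"))
```

Even this toy version exhibits the key property: the output is statistically typical of the corpus, with no representation of meaning or fact anywhere in the process. Scaling the table up to billions of parameters changes the fluency, not the nature, of the mechanism.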
Q12. What is the alignment problem?
The alignment problem refers to the difficulty of ensuring that AI systems pursue the objectives their designers intend, rather than technically satisfying their specified objective in ways that violate the intent. A simple example: an AI system instructed to "maximize user engagement" might learn that outrage and polarization maximize engagement — satisfying the objective while violating the intent. At larger scale: an AI system given imprecise objectives might learn to pursue them in ways that are harmful to humans while remaining technically on-target. The alignment problem is particularly important in discussions of advanced AI systems that might develop the capacity to pursue objectives in sophisticated ways.
Part C: Legal Questions
Q13. Is my company liable if an AI vendor's tool discriminates against our customers or employees?
Almost certainly yes, to at least some extent. Anti-discrimination laws — Title VII, the Fair Housing Act, the Equal Credit Opportunity Act — impose liability on employers, lenders, and housing providers regardless of whether discriminatory decisions were made by a human or an algorithm. The "I relied on a vendor's system" defense has not been recognized by courts as relieving the deploying organization of liability for discriminatory outcomes. The EEOC has made clear that employer use of a vendor-provided AI tool is subject to the same anti-discrimination requirements as employer decisions made by humans. Organizations can seek contractual indemnification from vendors, and they should — but contractual allocation of liability does not change statutory liability. The practical implication: organizations cannot outsource their non-discrimination obligations to vendors.
Q14. What does GDPR require for AI?
The GDPR imposes several requirements particularly relevant to AI:
- Article 22 (right not to be subject to automated decision-making): Individuals have the right not to be subject to decisions based solely on automated processing that produce legal or similarly significant effects. The default is prohibition; organizations must obtain explicit consent, show the decision is necessary for a contract, or point to an authorizing law. Even when automated decisions are permitted, individuals must be given meaningful information about the logic involved and the right to human review.
- Lawful basis: Any processing of personal data (including AI training) requires one of six lawful bases: consent, contract, legal obligation, vital interests, public task, or legitimate interests.
- Purpose limitation: Personal data collected for one purpose cannot be freely repurposed for AI training without a legal basis.
- Data minimization: AI systems must not use more personal data than necessary for their purpose.
- Privacy by design: Organizations must build data protection into AI system design from the outset.
- Data protection impact assessments (DPIAs): Required for AI systems likely to result in a high risk to individuals' rights and freedoms.
Q15. What is the EU AI Act, and does it apply to my company?
The EU AI Act is the world's first comprehensive AI regulation, adopted in 2024 and progressively entering into force. It applies to providers (developers) and deployers (users) of AI systems placed on the EU market or affecting people in the EU — regardless of where the developer or deployer is located. If your company develops AI used in the EU, or uses AI to make decisions about people in the EU, the Act likely applies. It uses a risk-tiered framework: some AI applications are prohibited outright; high-risk applications face mandatory requirements including bias testing and human oversight; other applications face transparency requirements; and minimal-risk applications are unregulated. The penalties for non-compliance are significant — up to 35 million euros or 7% of global annual turnover for violations of prohibited practice provisions.
Q16. What is disparate impact, and how does it apply to AI?
Disparate impact is a legal doctrine — established for Title VII by Griggs v. Duke Power Co. (1971) and extended to housing and credit by later statutes, regulations, and case law — that makes employment, housing, and credit practices unlawful if they disproportionately disadvantage protected groups, even without any discriminatory intent. Under disparate impact doctrine, the EEOC's four-fifths (80%) rule creates a presumption of adverse impact if the selection rate for a protected group is less than 80% of the rate for the group with the highest selection rate. Courts and regulators have confirmed that disparate impact doctrine applies to algorithmic selection tools. An employer cannot defend a discriminatory hiring algorithm by arguing it was unintentional.
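The four-fifths check is simple arithmetic; a minimal sketch using hypothetical applicant and selection counts:

```python
# Hypothetical numbers: does a selection tool's output trigger the
# EEOC four-fifths presumption of adverse impact?

def impact_ratios(selected, applicants):
    """Each group's selection rate divided by the highest group's rate."""
    rates = {g: selected[g] / applicants[g] for g in applicants}
    top = max(rates.values())
    return {g: rate / top for g, rate in rates.items()}

applicants = {"group_a": 200, "group_b": 150}
selected   = {"group_a": 100, "group_b":  45}   # rates: 0.50 vs 0.30

ratios = impact_ratios(selected, applicants)
for group, ratio in ratios.items():
    flag = "ADVERSE IMPACT PRESUMED" if ratio < 0.8 else "ok"
    print(f"{group}: ratio {ratio:.2f} -> {flag}")
# group_a: ratio 1.00 -> ok
# group_b: ratio 0.60 -> ADVERSE IMPACT PRESUMED
```

Note that the four-fifths rule creates a presumption, not a conclusion: a flagged ratio shifts the burden to the employer to show the practice is job-related and consistent with business necessity.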
Q17. Do I need to disclose to employees or customers that AI is being used to make decisions about them?
It depends on jurisdiction and context. Under GDPR Article 22, individuals must be informed of significant automated decision-making and provided meaningful information about the logic. Under New York City Local Law 144, employers must notify job candidates and employees when Automated Employment Decision Tools are used and provide an opportunity to request an alternative process. Under CCPA/CPRA, consumers have the right to know what categories of personal information are collected and how they are used. The CFPB requires adverse action notices that specify reasons for adverse credit decisions — and has indicated these requirements apply when AI is used. Federal trade secrecy law does not exempt companies from disclosure requirements. The evolving consensus across jurisdictions is toward greater transparency requirements.
Part D: Practical Questions
Q18. How do I start building an AI ethics program?
Start with inventory, governance, and the highest-risk use cases. A practical sequence: (1) Conduct an AI system inventory — document every AI system in use or development, including vendor-provided systems. (2) Classify systems by risk level using a tiered framework (the EU AI Act's four tiers provide a reasonable starting point). (3) Establish governance — designate responsibility for AI ethics at appropriate organizational levels; this might be a chief AI ethics officer, a cross-functional committee, or initially an assigned responsibility within legal or risk functions. (4) Apply rigorous ethics processes to your highest-risk systems first: complete an Algorithmic Impact Assessment, conduct bias testing, implement human oversight, establish incident reporting. (5) Build organizational capacity: train AI teams and business units, update vendor management processes, review contracts. (6) Iterate and improve based on what you learn.
Q19. What questions should I ask an AI vendor?
The Vendor Due Diligence Questionnaire (Template 3 in this appendix) provides 30 detailed questions. Key areas to probe: What data was the system trained on, and what demographic groups are represented? What fairness testing was conducted, and what were the results by demographic subgroup? Has the system been independently audited? By whom? What is the incident notification process if problems are discovered? What contractual protections exist if the system is found to discriminate? What technical documentation is available? What are the documented limitations and known failure modes? A vendor who cannot or will not answer these questions should not receive your AI business.
Q20. How do I explain AI ethics to my board or executive team?
Frame it in terms of risk, responsibility, and value — the language boards understand. Risk: regulatory enforcement risk (GDPR penalties, CFPB enforcement, EEOC actions); litigation risk (discrimination lawsuits); reputational risk (the ProPublica-style investigation of your AI system is a concrete possibility). Responsibility: as AI systems make or influence more decisions, organizational accountability for those decisions increases — the board is ultimately responsible. Value: trustworthy AI creates competitive advantage through customer trust, employee confidence, and regulatory goodwill. Use concrete examples: Facebook's legal settlements over discriminatory ad targeting; the companies subject to CFPB fair lending enforcement; the organizations that avoided these outcomes through proactive AI governance. The Board AI Risk Briefing Template (Template 10) provides a structure for ongoing board reporting.
Q21. What does "human in the loop" actually mean in practice?
"Human in the loop" is frequently invoked but rarely defined. Meaningful human oversight means: (1) A human with adequate information, time, and authority reviews consequential AI recommendations before action is taken. (2) The human can and does override AI recommendations when appropriate. (3) The human is not rubber-stamping — they have the training, information, and institutional support to exercise genuine judgment. (4) The organization tracks override rates and treats very low override rates as a warning sign of automation bias. Automation bias — the tendency to defer to algorithmic outputs without sufficient scrutiny — is a well-documented human psychological phenomenon. "Human in the loop" processes that fail to address automation bias provide little actual protection.
Q22. How should I handle an AI system that is producing biased or harmful outputs?
Immediately: (1) Document what is happening — preserve evidence of the problematic outputs. (2) Assess severity — is harm ongoing? Is it affecting many people? Is it legally significant? (3) Notify appropriate internal stakeholders (legal, executive leadership, AI ethics function). (4) Consider interim measures — can the system's outputs be reviewed by a human before action is taken? Can deployment be paused in affected contexts? (5) Initiate investigation. Then: (6) Implement remediations — changes to the model, data, deployment, or process to reduce harm. (7) Verify that remediations work. (8) Remediate harm to affected individuals where possible. (9) Report as legally required. (10) Conduct a lessons-learned review and update organizational processes to prevent recurrence. The AI Incident Response Template (Template 5) provides detailed guidance.
Part E: Philosophical Questions
Q23. Can AI be conscious?
This is among the deepest questions in philosophy of mind, and there is no consensus answer. What we can say: current AI systems — including large language models — do not have consciousness by any standard scientific account. They process inputs and generate outputs without the subjective experience, self-awareness, or continuous existence that characterize consciousness. Whether future AI systems could become conscious depends on contested questions about what consciousness is and what physical systems can instantiate it. The practical implication for AI ethics: we should not make moral decisions about AI systems based on unfounded claims about AI consciousness. We should, however, remain attentive to the possibility that sufficiently advanced systems might eventually have morally relevant properties — and we should develop the conceptual tools to evaluate such claims carefully rather than dismissing them entirely or accepting them credulously.
Q24. What ethical framework is best for AI?
No single framework is best; each illuminates different dimensions of AI ethics. Consequentialist frameworks direct attention to outcomes and help with cost-benefit analysis of AI systems. Kantian frameworks generate strong prohibitions on treating people merely as means — foundational for privacy and dignity arguments. Rawlsian justice provides a powerful tool for evaluating whether AI systems are fair to the least advantaged. The capabilities approach attends concretely to what AI does to people's ability to live flourishing lives. Virtue ethics asks what it means for an organization to be trustworthy and responsible, not just compliant.
The practical answer for business professionals: use multiple frameworks as analytical tools. Ask: what are the consequences? Do people's rights and dignity require prohibitions? Would someone behind the veil of ignorance consent to this system? Does this expand or contract human capabilities? These questions together generate more complete ethical analysis than any single framework.
Q25. Is it possible for an AI system to be a moral agent?
Current AI systems are not moral agents in any philosophically robust sense: they have no intentions, no understanding, no capacity for genuine moral deliberation, and no interests of their own that could ground moral responsibility. They are complex tools — very sophisticated ones, but tools nonetheless. Moral responsibility for AI systems rests with the humans who design, develop, deploy, and regulate them. This matters practically: it means that "the algorithm decided" is not an acceptable defense for discriminatory outcomes, and that "AI made the decision" does not reduce organizational accountability.
Whether future AI systems — potentially far more capable than current ones — could be moral agents is a more open question. It depends on contested philosophical questions about what grounds moral agency (rationality? sentience? autonomy?) and whether any computational system can genuinely possess these properties.
Part F: Governance Questions
Q26. What should an AI ethics board include?
An effective AI ethics board or committee should include: technical AI expertise (to understand what the systems actually do); legal expertise (to understand regulatory requirements and liability); business leadership (to ensure governance is integrated with business decisions, not siloed); affected community perspectives (either through community representatives or a defined process for incorporating external input); and independent members (to provide accountability that internal-only governance cannot). Common failure modes: ethics committees that are purely internal without external accountability; ethics committees without technical expertise that cannot evaluate the claims made to them; ethics committees that advise but do not have decision-making authority or escalation paths; and ethics committees whose members face institutional pressure to approve decisions rather than challenge them.
Q27. What makes an AI ethics policy real versus performative?
Several markers distinguish substantive AI ethics policies from window dressing. Substantive policies: apply to specific, named AI systems; have specific, measurable requirements; designate accountable individuals for compliance; have enforcement mechanisms and consequences for non-compliance; have been operationalized into actual workflows and processes; are updated based on incidents and lessons learned; and have been accompanied by investment in training, tools, and staffing. Performative policies: are written at a level of generality that requires no change in practice; apply to no specific systems; have no enforcement; exist as documents that are not integrated into decision-making; were written in response to external pressure without internal commitment; and cannot point to specific decisions they have influenced.
Q28. How does an organization move from AI ethics principles to practice?
The gap between principles and practice is the central challenge of organizational AI ethics. Key mechanisms for closing the gap: (1) Operationalize principles into mandatory processes — checklists, impact assessments, review boards — that must be completed before deployment. (2) Designate accountable individuals with authority and resources. (3) Create incentives aligned with ethics goals (ethics is integrated into performance evaluation, not just a sidebar). (4) Build organizational capacity through training and tools. (5) Establish independent audit mechanisms — internal reviews, external auditors, regulatory compliance functions. (6) Create safe channels for employees to raise ethics concerns without retaliation. (7) Regularly measure and report on ethics outcomes — bias metrics, incident rates, training completion — with the same rigor applied to financial metrics. Research (Sloane et al.) has documented that ethics principles without these mechanisms produce no measurable change in AI practice.
Q29. What is the role of AI auditing, and how does it work?
AI auditing is the systematic evaluation of an AI system's compliance with ethical, legal, and technical standards. Audits can be conducted internally (by a risk or compliance function), externally (by independent auditors), or by regulators. An AI audit typically examines: the system's technical performance, including disaggregated performance by demographic group; the processes used to develop and deploy the system; the governance and oversight mechanisms in place; and compliance with applicable legal requirements. External AI auditing is still a nascent industry with limited standardization — there are currently no universally accepted auditing standards, no required auditor qualifications, and limited regulatory mandates for auditing (though NYC Local Law 144 requires annual bias audits for employment decision tools). The absence of standardization means that an audit's value depends heavily on the auditor's methodology and independence.
Q30. What should AI ethics look like for small organizations that lack dedicated AI ethics staff?
Smaller organizations cannot support the full apparatus of AI ethics governance, but they still face the same legal obligations and ethical responsibilities as large ones. Practical approaches for smaller organizations: (1) Use vendor due diligence rigorously — if you are buying AI from vendors, make the vendor responsible for bias testing and documentation as a contractual matter. (2) Start with the highest-risk systems — apply ethics processes first to AI systems making consequential decisions about employees or customers. (3) Use existing frameworks — the NIST AI RMF, the EU AI Act's requirements, and the templates in this appendix provide starting points that do not require building from scratch. (4) Consult external expertise — law firms with AI practices, AI ethics consultants, and sector-specific guidance from regulators can supplement internal capacity. (5) Participate in industry coalitions — sector-specific AI ethics working groups can produce shared resources and standards that individual organizations cannot develop alone.
Part G: Additional Questions
Q31. What is "ethics washing" or "AI ethics washing"?
Ethics washing describes the pattern in which organizations publish AI ethics principles, create ethics review bodies, or fund AI ethics research as a substitute for binding accountability — using the language and appearance of ethical commitment to reduce pressure for substantive change or binding regulation. The term gained currency through critical scholarship on technology governance. Characteristics of ethics washing include: principles documents with no enforcement mechanisms; ethics boards that are advisory only; ethics research that is published but does not change development practices; lobbying against regulation while claiming to support "responsible AI"; and investing in ethics communication while underfunding ethics implementation. The existence of ethics washing does not mean all voluntary ethics initiatives are performative — it means they should be evaluated by their specific content and mechanisms, not their stated commitments.
Q32. What is the difference between responsible AI and trustworthy AI?
These terms are used differently by different organizations. "Responsible AI" is used primarily by U.S. technology companies (Microsoft, Google, IBM) and emphasizes organizational responsibility for building AI that is fair, reliable, and safe. "Trustworthy AI" is the term favored by the European Union (EU Ethics Guidelines for Trustworthy AI) and emphasizes building AI systems that are worthy of users' trust — a slightly different emphasis on the system's properties rather than the organization's behavior. In practice the terms are often used interchangeably. Neither has a universally agreed definition, so when you encounter either term, ask: trustworthy by whose standard? Responsible to whom?
Q33. What is a foundation model or general-purpose AI, and why does it matter for ethics?
A foundation model (or general-purpose AI, GPAI, in EU AI Act terminology) is a large AI model trained on broad data at scale that can be adapted to a wide range of tasks — examples include GPT-4, Claude, and Gemini. Foundation models are ethically distinctive because: they are developed by a small number of well-resourced organizations; their training data encompasses essentially all publicly available human text, with all its biases and harms; they can be fine-tuned and deployed for countless specific applications; and the organization developing the foundation model may have limited visibility into how it is used. The EU AI Act has special provisions for GPAI models that pose systemic risk, requiring additional transparency and safety testing.
Q34. What is algorithmic accountability?
Algorithmic accountability refers to the mechanisms — legal, technical, organizational, and social — through which AI systems and their developers can be held responsible for the outcomes they produce. Strong accountability requires: transparency (you must be able to see what the system is doing before you can hold anyone accountable for it); clearly assigned responsibility (someone must be responsible for the system's behavior); meaningful enforcement (there must be consequences for harmful outcomes); and access to redress (those harmed must have practical means to challenge decisions and receive remedy). Current AI governance typically falls short on all four dimensions: many systems are opaque; responsibility is diffuse across developers, vendors, and deployers; enforcement is limited; and affected individuals have minimal practical recourse.
Q35. How should I think about AI ethics in the Global South and non-Western contexts?
Most AI ethics scholarship, regulation, and technology development is concentrated in the United States and Europe. This creates several concerns. First, AI systems developed in Western contexts often perform poorly in non-Western contexts — trained on Western faces, Western language patterns, Western social structures. Second, data from Global South populations is often used to train AI systems that primarily benefit Western organizations and consumers. Third, AI governance frameworks developed in the U.S. and EU may be ill-suited to the political, legal, and social contexts of other countries. Organizations operating globally must engage with AI ethics as a global, not just Western, question — including the specific harms AI systems cause in the communities from which data is extracted.
Q36. What is the "move fast and break things" problem in AI ethics?
The Silicon Valley philosophy of rapid iteration — accepting failures as the cost of speed — is poorly suited to AI systems that affect human welfare. When products "break," those harmed are often not the technology company's customers but third parties who have no contractual relationship with the company and no mechanism for seeking redress. The recidivism algorithm that wrongly flags defendants as high-risk, the hiring tool that screens out qualified candidates, the healthcare algorithm that underestimates patient needs — in each case, the cost of "breaking" is borne by people who are not party to the iteration process. AI ethics requires a different pace in high-stakes domains: one that invests in understanding harms before deployment rather than discovering them afterward.
Q37. How do I handle employees who raise AI ethics concerns?
Creating genuine channels for AI ethics concerns requires: a clear, accessible reporting mechanism (hotline, designated ombudsperson, or process for raising concerns); explicit anti-retaliation protection that goes beyond the legal minimum; documentation that concerns were received, considered, and responded to; and — most importantly — a demonstrated track record of actually acting on concerns. The Timnit Gebru and Margaret Mitchell cases at Google, the Project Maven employee petition, and multiple other incidents demonstrate that employees who raise AI ethics concerns can face significant professional consequences even at organizations with published ethics commitments. Building genuine psychological safety for ethics concerns is a governance challenge that requires demonstrated leadership commitment, not just policy.
Q38. What does it mean to center affected communities in AI ethics?
Centering affected communities means treating the perspectives of those most affected by AI systems — particularly those who bear the costs of AI failures — as authoritative rather than advisory. In practice this includes: involving community members in the design process from the beginning, not just at the end for validation; including community organizations as equity holders in governance structures; ensuring that community perspectives can influence decisions, not just inform them; providing compensation for community expertise and labor; and building ongoing relationships rather than one-time consultation. Organizations that "consult" communities by presenting them with a completed design and asking for feedback are not centering affected communities; they are seeking legitimacy for decisions already made.
Q39. Is there such a thing as "AI washing" in financial services?
Yes. Financial services firms sometimes describe products as "AI-powered" or "algorithmically optimized" when the underlying systems are actually simple rules-based calculations, statistical models that have been used for decades, or systems with minimal genuine AI components. This matters for AI ethics because: (1) "AI" carries an aura of objectivity and technical sophistication that may cause customers or regulators to defer to outputs they would otherwise question; (2) claiming AI capabilities that don't exist is a deceptive practice under FTC standards; and (3) when AI-washing occurs in regulated contexts (lending, insurance), it may implicate regulatory requirements that apply to AI systems. If a vendor claims their system is AI-powered, ask to understand the technical architecture in detail.
Q40. What single thing can a business professional do this week to improve their organization's AI ethics practice?
Conduct an AI inventory: find out what AI systems your organization is currently using, including vendor-provided systems. Most organizations discover they are using more AI systems than anyone realized — in HR, in customer service, in fraud detection, in marketing, in operations. Without knowing what systems exist, none of the governance tools in this textbook can be applied. The inventory should document: the system's name and vendor; its purpose and the decisions it influences; the populations it affects; whether a bias audit has been conducted; and who is responsible for its performance. This inventory is the foundation on which a genuine AI ethics program can be built.
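The fields listed above lend themselves to a simple structured record, which makes the inventory easy to maintain and query as it grows. The following Python sketch is purely illustrative — the class and field names are not from any standard and are one of many reasonable ways to organize such an entry:

```python
from dataclasses import dataclass, field

@dataclass
class AISystemRecord:
    """One entry in an organizational AI inventory (illustrative schema)."""
    name: str                                   # system name
    vendor: str                                 # vendor, or "internal" if built in-house
    purpose: str                                # what it does and the decisions it influences
    affected_populations: list = field(default_factory=list)  # who the system affects
    bias_audit_conducted: bool = False          # has a bias audit been performed?
    responsible_owner: str = ""                 # person accountable for performance

# Example entry for a hypothetical vendor-provided HR screening tool
record = AISystemRecord(
    name="Resume Screening Tool",
    vendor="ExampleVendor Inc.",
    purpose="Ranks job applicants for recruiter review",
    affected_populations=["job applicants"],
    bias_audit_conducted=False,
    responsible_owner="VP, Human Resources",
)

# Flag entries needing follow-up, e.g. systems that have never been audited
needs_audit = [record] if not record.bias_audit_conducted else []
```

Even a spreadsheet with these same columns serves the purpose; what matters is that every field, especially the responsible owner, is filled in for every system.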
These questions reflect the most common concerns raised by practitioners engaging with AI ethics. The field is moving quickly; answers that are accurate today may require updating as law, technology, and best practices develop. The Resource Directory (Appendix H) identifies sources for staying current.