Appendix H: Resource Directory

An Annotated Guide to Organizations, Tools, Journals, and Communities for Ongoing AI Ethics Engagement


Introduction

AI ethics is a rapidly evolving field. No textbook can remain current with the pace of research, regulation, and technological change. This directory is designed to help practitioners maintain their knowledge after completing this course — identifying the organizations, tools, journals, media outlets, and communities where the most important work in AI ethics is being done.

Each entry includes a brief annotation explaining what the resource offers and when to consult it.


Section 1: Research Organizations

AI Now Institute

Website: ainowinstitute.org
Location: New York University / independent

The AI Now Institute is the preeminent AI policy and social impact research organization. Founded by Kate Crawford and Meredith Whittaker in 2017, AI Now conducts research on the power structures underlying AI development, the labor implications of AI, and the gap between AI ethics principles and practice. Their annual AI Now Report is the field's most comprehensive accounting of the year's developments. For practitioners: consult AI Now for research on corporate AI ethics (including critiques of ethics washing), labor and AI, and AI in government contexts. Publications are freely available on their website.

Signature resources: Annual AI Now Reports; Atlas of AI (Kate Crawford's book); research on AI and labor


Algorithmic Justice League (AJL)

Website: ajl.org
Location: Cambridge, MA / distributed

Founded by Joy Buolamwini following her Gender Shades research, AJL combines research, art, and advocacy to challenge algorithmic bias, with particular focus on facial recognition and the communities most harmed by AI deployment. AJL maintains one of the most accessible databases of facial recognition actions and policy developments. For practitioners: the AJL website provides non-technical introductions to AI bias for organizational training programs and stakeholder communications.

Signature resources: Gender Shades research (see Key Studies Appendix); Unmasking AI (Buolamwini's book); facial recognition policy tracker


Data & Society Research Institute

Website: datasociety.net
Location: New York, NY

Data & Society conducts qualitative and social science research on the social, cultural, and political implications of data and AI. Their work is particularly strong on the labor implications of AI systems, the lived experience of algorithmic management, and the sociology of technology organizations. For practitioners: Data & Society's reports on AI in specific sectors (healthcare, education, journalism, gig economy) are essential context for sector-specific AI ethics programs.

Signature resources: Reports on AI and labor, AI in healthcare, algorithmic accountability


Partnership on AI

Website: partnershiponai.org
Location: San Francisco, CA

Partnership on AI is a multi-stakeholder organization that brings together technology companies (including Google, Microsoft, Amazon, Meta, and Apple), civil society organizations, and academic researchers to develop AI governance frameworks. PAI is notable for being one of the few spaces where technology companies and their critics work together on governance. For practitioners: PAI develops practical guidance documents on AI governance topics, including responsible sourcing of training data, AI incident response, and AI and media integrity.

Signature resources: Guidance documents on responsible AI practice; AI Incident Database (maintained with CSET)


Center for AI Safety (CAIS)

Website: safe.ai
Location: San Francisco, CA / distributed

CAIS focuses primarily on long-term AI safety, including alignment research and governance for advanced AI systems. It attracted significant attention when it organized the 2023 "Statement on AI Risk," signed by hundreds of AI researchers warning that AI poses a potential existential risk. For practitioners: consult CAIS for existential-risk perspectives and research on the governance of advanced AI systems.

Signature resources: AI Safety literature review; statements on AI risk; research on AI governance


Georgetown CSET (Center for Security and Emerging Technology)

Website: cset.georgetown.edu
Location: Georgetown University, Washington, DC

CSET focuses on the intersection of AI and national security, with particular attention to AI policy, semiconductor supply chains, and AI governance. Maintains the AI Incident Database (with Partnership on AI). For practitioners: CSET produces policy-relevant research on AI governance frameworks and AI regulatory developments.


Oxford Internet Institute

Website: oii.ox.ac.uk
Location: University of Oxford, UK

The OII is a multi-disciplinary research center studying the social implications of the internet and digital technologies, including AI. Their research covers algorithmic systems, platform governance, data ethics, and the political economy of digital technology. For practitioners: OII's working papers and policy briefs provide rigorous academic analysis of AI governance questions.


Alan Turing Institute

Website: turing.ac.uk
Location: London, UK

The UK's national institute for AI and data science, the Alan Turing Institute conducts research across the full spectrum of AI, including ethics, safety, and governance. The Institute produces the AI Ethics and Society conference and funds research on public interest AI. For practitioners: the Turing Institute's "Understanding Artificial Intelligence Ethics and Safety" report is a widely used practical introduction.


Distributed AI Research Institute (DAIR)

Website: dair-institute.org
Location: Distributed

Founded by Timnit Gebru after her departure from Google, DAIR is an independent AI research institute that conducts research on the social impacts of AI without corporate funding or control. DAIR's work focuses particularly on how AI systems affect marginalized communities globally. For practitioners: DAIR offers perspectives on AI ethics that are independent of tech industry funding.


Electronic Frontier Foundation (EFF)

Website: eff.org
Location: San Francisco, CA

EFF is the leading civil liberties organization focused on digital rights, with extensive work on AI surveillance, biometric privacy, automated decision-making, and legal frameworks. EFF's legal team litigates cases involving AI, and the organization engages in policy advocacy. For practitioners: EFF's explainers provide excellent non-technical introductions to AI law and civil liberties.


Section 2: Academic Journals

Big Data & Society

Publisher: SAGE
Focus: Social science and humanistic research on big data and AI
Access: Open access; all articles freely available
Impact: The leading interdisciplinary journal for empirical AI ethics research

Big Data & Society publishes empirical, theoretical, and policy-oriented research on the social implications of AI and big data. It has published foundational AI fairness papers including early work by Barocas and Selbst. Essential reading for understanding the social science scholarship on AI.


AI & Society

Publisher: Springer
Focus: Philosophical and social science perspectives on AI
Access: Subscription; some open access
Impact: Long-running, broad coverage

One of the oldest AI ethics journals, AI & Society publishes research on the philosophical, social, and cultural implications of AI. More theoretical and philosophical than empirical in orientation.


Proceedings of ACM FAccT (Fairness, Accountability, and Transparency)

Publisher: ACM Digital Library
Focus: Technical and sociotechnical research on algorithmic fairness
Access: ACM DL (subscription); many papers on arXiv (free)
Impact: The premier venue for algorithmic fairness research

ACM FAccT (formerly FAT*) is the most important conference in algorithmic fairness research. Its proceedings include both technical papers (new fairness algorithms, interpretability methods) and sociotechnical papers (audits, policy analysis, qualitative research). ProPublica researchers and academic researchers have both published here. Every AI ethics practitioner should bookmark the proceedings.


Ethics and Information Technology

Publisher: Springer
Focus: Philosophical ethics of digital technology
Access: Subscription
Impact: Strong coverage of privacy, autonomy, and AI ethics

Ethics and Information Technology publishes philosophical analysis of ethical issues raised by digital technologies. Papers tend to be more theoretically rigorous than policy-oriented. Useful for practitioners wanting philosophical depth on AI ethics questions.


Harvard Journal of Law & Technology

Publisher: Harvard Law School
Focus: Legal analysis of technology
Access: Open access online
Impact: Leading technology law journal

HJLT publishes legal scholarship on technology including AI liability, regulatory frameworks, and emerging legal challenges. Essential reading for practitioners who need to understand the legal landscape.


Journal of Artificial Intelligence Research (JAIR)

Publisher: AI Access Foundation (open access)
Focus: Computer science AI research
Access: Fully open access
Impact: Top-tier AI research journal

JAIR is one of the leading AI research journals. While primarily technical, it increasingly includes papers on AI safety, fairness, and interpretability. Free access is a notable advantage.


Section 3: Open-Source Tools

Fairlearn

Website: fairlearn.org / github.com/fairlearn/fairlearn
Developer: Microsoft (open source)
Language: Python

Fairlearn is a Python toolkit for assessing and improving the fairness of AI systems. It includes fairness metrics, visualization tools for fairness assessment, and mitigation algorithms. Integrates with scikit-learn. The documentation includes extensive conceptual explanations as well as technical guides. Best for: practitioners who want to implement fairness testing in Python pipelines.
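The kind of check Fairlearn automates can be sketched in plain Python. The function names and data below are invented for illustration; Fairlearn itself exposes this metric as `fairlearn.metrics.demographic_parity_difference`, computed against real predictions and sensitive features.

```python
# Plain-Python sketch of the selection-rate comparison that Fairlearn
# automates as demographic_parity_difference. All data is invented.
from collections import defaultdict

def selection_rates(predictions, groups):
    """Fraction of positive (1) predictions per sensitive group."""
    counts = defaultdict(lambda: [0, 0])  # group -> [positives, total]
    for pred, group in zip(predictions, groups):
        counts[group][0] += pred
        counts[group][1] += 1
    return {g: pos / total for g, (pos, total) in counts.items()}

def demographic_parity_difference(predictions, groups):
    """Largest gap between any two groups' selection rates."""
    rates = selection_rates(predictions, groups)
    return max(rates.values()) - min(rates.values())

# Hypothetical binary hiring predictions for two groups "a" and "b"
preds  = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
groups = ["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"]
print(selection_rates(preds, groups))               # {'a': 0.6, 'b': 0.4}
print(demographic_parity_difference(preds, groups))  # ≈ 0.2
```

A gap of zero means every group is selected at the same rate; Fairlearn's mitigation algorithms then trade accuracy against reducing such gaps.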


AI Fairness 360 (AIF360)

Website: aif360.mybluemix.net / github.com/Trusted-AI/AIF360
Developer: IBM (open source)
Language: Python and R

AIF360 is IBM's comprehensive fairness toolkit, offering over 70 fairness metrics and 10 bias mitigation algorithms. It includes a web-based demo and extensive educational resources. Supports a wider range of metrics and algorithms than Fairlearn. Best for: comprehensive fairness audits with multiple metrics; academic research.
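One of AIF360's core metrics, the disparate impact ratio (the basis of the "80% rule"), can be sketched without the library. The helper and numbers below are invented for illustration; AIF360 exposes the real computation as the `disparate_impact` metric on its dataset metric classes.

```python
# Plain-Python sketch of the disparate impact ratio that AIF360
# implements as its disparate_impact metric. Data is invented.
def disparate_impact(predictions, groups, unprivileged, privileged):
    """Ratio of unprivileged to privileged selection rates.
    Values below 0.8 are conventionally flagged under the 80% rule."""
    def rate(g):
        sel = [p for p, grp in zip(predictions, groups) if grp == g]
        return sum(sel) / len(sel)
    return rate(unprivileged) / rate(privileged)

# Hypothetical predictions: unprivileged group "u", privileged group "p"
preds  = [1, 0, 0, 0, 1, 1, 1, 0]
groups = ["u", "u", "u", "u", "p", "p", "p", "p"]
ratio = disparate_impact(preds, groups, "u", "p")
print(round(ratio, 3))  # 0.25 vs 0.75 selection rate -> 0.333
```

Note the design difference from the Fairlearn-style metric: a ratio near 1.0 indicates parity, whereas a difference metric targets 0.0.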


SHAP

Website: github.com/shap/shap
Developer: Scott Lundberg / open source community
Language: Python

SHAP (SHapley Additive exPlanations) is the leading tool for model explainability using Shapley value attribution. It provides consistent, theoretically grounded feature importance for any model. Best for: explaining individual predictions and understanding global feature importance in production models.
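The Shapley attribution that SHAP approximates can be computed exactly by brute force for a tiny model. This is a sketch only, with an invented model and baseline, since exact computation is exponential in the number of features; SHAP's contribution is doing this efficiently for real models.

```python
# Brute-force Shapley values for a tiny 3-feature model, illustrating
# the attribution SHAP approximates. Model and baseline are invented.
from itertools import permutations

def shapley_values(model, x, baseline):
    """Average each feature's marginal contribution over all orderings
    in which features are switched from baseline to actual values."""
    n = len(x)
    phi = [0.0] * n
    orderings = list(permutations(range(n)))
    for order in orderings:
        current = list(baseline)
        prev = model(current)
        for i in order:
            current[i] = x[i]       # reveal feature i
            new = model(current)
            phi[i] += new - prev    # marginal contribution of i
            prev = new
    return [p / len(orderings) for p in phi]

# A toy model: linear in f0, with an interaction between f1 and f2
model = lambda f: 2 * f[0] + f[1] * f[2]
x, base = [1.0, 2.0, 3.0], [0.0, 0.0, 0.0]
phi = shapley_values(model, x, base)
print(phi)  # [2.0, 3.0, 3.0] -- interaction credit split evenly
print(sum(phi), model(x) - model(base))  # attributions sum to the gap
```

The final line demonstrates the "efficiency" property that makes Shapley values attractive: the attributions always sum exactly to the difference between the prediction and the baseline output.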


LIME

Website: github.com/marcotcr/lime
Developer: Marco Ribeiro / open source
Language: Python

LIME provides local, model-agnostic explanations for individual predictions. Easier to use than SHAP for beginners; less theoretically grounded but often sufficient for practical use. Works with tabular data, text, and images. Best for: quick individual prediction explanations; text and image models.
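LIME's core idea, fitting a simple weighted surrogate to perturbed samples near one input, can be sketched for a single feature. Everything below (the black-box model, kernel width, sample count) is invented for illustration and is far simpler than the library's actual implementation.

```python
# Sketch of LIME's core idea: explain one prediction of a black-box
# model with a locally weighted linear surrogate. Single feature only.
import math
import random

def lime_slope(model, x0, width=0.5, n=500, seed=0):
    """Weighted least-squares slope of the model around x0, using
    Gaussian perturbations and proximity weights (nearer counts more)."""
    rng = random.Random(seed)
    xs = [x0 + rng.gauss(0, width) for _ in range(n)]
    ys = [model(x) for x in xs]
    ws = [math.exp(-((x - x0) ** 2) / width ** 2) for x in xs]
    wsum = sum(ws)
    mx = sum(w * x for w, x in zip(ws, xs)) / wsum
    my = sum(w * y for w, y in zip(ws, ys)) / wsum
    cov = sum(w * (x - mx) * (y - my) for w, x, y in zip(ws, xs, ys))
    var = sum(w * (x - mx) ** 2 for w, x in zip(ws, xs))
    return cov / var

black_box = lambda x: x ** 2          # nonlinear "model" to explain
slope = lime_slope(black_box, x0=3.0)
print(slope)  # close to the local derivative 2 * x0 = 6
```

The fitted slope is the "explanation": a locally faithful linear story about a globally nonlinear model, which is exactly the trade-off LIME makes.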


Aequitas

Website: aequitas.dssg.io / github.com/dssg/aequitas
Developer: Data Science for Social Good, University of Chicago
Language: Python

Aequitas is an open-source bias audit toolkit with a focus on decision-making systems used in public policy contexts. It includes a web-based audit tool (no coding required) and comprehensive documentation. Best for: non-technical audits of predictions from existing systems; public sector applications.


Google What-If Tool

Website: pair-code.github.io/what-if-tool
Developer: Google PAIR (People + AI Research)

A visual, interactive tool for investigating machine learning models. Allows exploration of model behavior across a dataset, testing of counterfactual examples, and visualization of fairness metrics. No coding required for basic use. Best for: visual exploration of model behavior and bias; stakeholder demonstrations.


Responsible AI Toolbox

Website: github.com/microsoft/responsible-ai-toolbox
Developer: Microsoft
Language: Python

Microsoft's comprehensive responsible AI toolkit combining interpretability (SHAP), fairness analysis, causal analysis, and counterfactual explanation. More comprehensive than individual tools. Best for: enterprise deployments requiring integrated responsible AI capabilities.


Section 4: Government and Regulatory Bodies

Federal Trade Commission (FTC) — AI Resources

Website: ftc.gov/ai

The FTC regulates unfair and deceptive practices and has increasing authority over AI systems in the U.S. The FTC has published "Aiming for Truth, Fairness, and Equity in Your Company's Use of AI" (2021) and has brought enforcement actions over deceptive uses of algorithms. Essential reading: the FTC's AI guidance documents and their enforcement actions against companies making false AI claims.

EEOC — AI and Employment Discrimination

Website: eeoc.gov/ai

The EEOC published Technical Assistance on AI (2023) clarifying that AI-powered employment decisions are subject to Title VII, ADA, and ADEA requirements. Essential for any organization using AI in hiring, promotion, or performance management.

CFPB — Algorithmic Fairness in Credit

Website: consumerfinance.gov

The CFPB has published guidance on fair lending requirements for algorithmic underwriting, including requirements for adverse action notices when AI is used in credit decisions. Their blog (consumerfinance.gov/about-us/blog/) provides accessible updates on regulatory developments.

NIST AI Resource Center

Website: airc.nist.gov

NIST's AI resource center hosts the AI Risk Management Framework, the AI RMF Playbook, and resources for AI developers and deployers. The AI RMF is the primary voluntary AI governance framework in the U.S. and is increasingly referenced in government procurement requirements.

EU AI Office

Website: digital-strategy.ec.europa.eu/en/policies/ai-office

The EU AI Office, established under the EU AI Act, is the EU's central authority for enforcing the AI Act. Its website hosts the official text of the Act, guidelines, and implementation materials.

ICO (Information Commissioner's Office) — UK

Website: ico.org.uk/ai

The UK's data protection regulator publishes AI guidance covering algorithmic transparency, data protection impact assessments for AI, and fairness in automated decision-making. Particularly useful for organizations operating in the UK.

CNIL (Commission Nationale de l'Informatique et des Libertés) — France

Website: cnil.fr

France's data protection authority has published extensive AI ethics guidance and enforcement actions, particularly regarding facial recognition and algorithmic management. Available in French and English.


Section 5: Professional Associations

ACM FAccT (Fairness, Accountability, and Transparency)

Website: facctconference.org

The annual FAccT conference is the primary academic venue for AI fairness research. Conference proceedings are freely available. The conference's community includes both academic researchers and practitioners. Attending FAccT is one of the best ways to stay current with cutting-edge fairness research.

ACM Code of Ethics

Website: acm.org/code-of-ethics

The Association for Computing Machinery's Code of Ethics (2018) is the primary professional ethics code for computing professionals. Section 1 establishes general ethical principles including avoiding harm, honesty, and fairness. Relevant for AI developers who may be governed by ACM's code.

IEEE Standards Association — AI Ethics

Website: ieee.org/ethically-aligned-design

The IEEE has developed Ethically Aligned Design, a comprehensive framework for AI ethics. IEEE also develops formal AI standards including IEEE 7000 (ethical design), IEEE 7001 (transparency), and IEEE 7010 (wellbeing). Relevant for engineers who work within IEEE professional standards.

SIOP (Society for Industrial-Organizational Psychology)

Website: siop.org/research-publications/items-of-interest/ai-in-selection

SIOP represents I-O psychologists who work on employment assessment. SIOP has developed guidance on the use of AI in hiring and selection, and their perspectives on the application of adverse impact doctrine to algorithmic hiring are highly relevant for HR AI practitioners.

IAPP (International Association of Privacy Professionals)

Website: iapp.org

IAPP is the leading professional association for privacy practitioners. While broader than AI ethics, IAPP has extensive resources on AI and data protection including GDPR compliance for AI, data protection impact assessments, and algorithmic accountability. The IAPP certification (CIPP) is a recognized credential for privacy professionals.


Section 6: News and Media

The Markup

Website: themarkup.org

The Markup is a nonprofit investigative journalism outlet focused on technology that has produced some of the most important AI bias investigations, including its investigation of algorithmic redlining in mortgage approvals, investigations of social media recommendation systems, and coverage of surveillance technology. They publish their methodology alongside their stories. Required reading for practitioners: one of the most technically rigorous journalism outlets covering AI.

MIT Technology Review

Website: technologyreview.com

MIT Technology Review provides substantive, accurate coverage of AI developments including ethics and policy. Their "AI" section and the podcast "In Machines We Trust" are excellent resources for practitioners.

Algorithm Watch

Website: algorithmwatch.org

A European nonprofit that monitors and documents algorithmic systems affecting public life. Particularly strong coverage of EU regulatory developments, facial recognition in Europe, and automated decision-making in public services. Publishes in English and German.

ProPublica's Machine Bias Series

Website: propublica.org/series/machine-bias

ProPublica's investigative series on algorithmic discrimination, which began with the COMPAS investigation in 2016, remains an essential archive of documented AI harms. The series covers criminal justice, lending, hiring, and advertising algorithms.

Rest of World

Website: restofworld.org

Covers technology's impact in Asia, Africa, Latin America, and the Middle East — providing perspectives on AI ethics that are often absent from Western-centric coverage. Essential for organizations operating internationally.


Section 7: Courses and Training

AI Ethics — Coursera (multiple providers)

Several universities offer AI ethics courses on Coursera. Notable offerings:

  • "AI Ethics" from the University of Michigan
  • "AI For Everyone" from Andrew Ng/DeepLearning.AI (foundational AI literacy)
  • "Human-Computer Interaction for AI" from UC San Diego

Elements of AI — University of Helsinki

Website: elementsofai.com

A free online course designed for non-technical learners. Covers AI basics including ethics chapters. Available in 30+ languages. Excellent for organizational AI literacy training.

FastAI Practical Ethics in AI

Website: ethics.fast.ai

A practical guide to AI ethics for practitioners and developers, developed by Rachel Thomas. Covers bias, disinformation, privacy, and safety with a hands-on orientation.

Executive Education Programs

  • MIT Sloan School: AI and Machine Learning for Business (executives)
  • Harvard Kennedy School: AI in Government executive program
  • Stanford HAI: AI governance and policy programs
  • Wharton: AI for Business program

Certificates

  • IAPP AI Governance Professional (AIGP): Certification in AI governance and compliance
  • CIPP/E (Certified Information Privacy Professional/Europe): Covers GDPR including AI provisions
  • AI Audit Certification (being developed by multiple bodies): Watch for emerging standards

Section 8: Communities

Online Communities

Twitter / X: Despite platform changes, Twitter/X remains active for AI ethics discourse. Key accounts to follow include:

  • @jovialjoy (Joy Buolamwini)
  • @timnitGebru
  • @mmitchell_ai (Margaret Mitchell)
  • @random_walker (Arvind Narayanan)
  • @sarahbookwriter (Sarah Wachter-Boettcher)
  • @TheMarkup
  • @AIethics (AI Ethics Institute)
  • @algorithmwatch

LinkedIn: Active AI ethics communities exist on LinkedIn. The "AI Ethics" and "Responsible AI" groups have tens of thousands of members and active discussions.

Mastodon / Bluesky: Following the fragmentation of Twitter, many AI ethics researchers have moved to Mastodon (particularly the sigmoid.social instance) and Bluesky.

Research Communities

ACM FAccT Community: The conference community has active Slack and Discord channels, open to practitioners and researchers. Join through the FAccT conference website.

Women in AI Ethics: A community organization focused on increasing diversity in the AI ethics field. Runs mentoring programs and networking events.

Black in AI: A community of Black researchers in AI. Runs workshops at major AI conferences and advocates for diversity in AI research and development.

Queer in AI: Community supporting LGBTQ+ researchers in AI and advocates for LGBTQ+ inclusive AI ethics.

Newsletters

  • Import AI (Jack Clark): Weekly newsletter on AI research developments, including safety and ethics
  • The Algorithm (MIT Technology Review): Weekly newsletter on AI news and analysis
  • Algorithm Watch Newsletter: Coverage of European AI policy and regulatory developments
  • AI Alignment Forum Digest: Summary of alignment research for practitioners interested in safety perspectives

This directory was current as of early 2026. The AI ethics landscape changes rapidly — organizations are founded and defunded, regulatory bodies expand their AI work, and new tools emerge. Use this directory as a starting point and follow the organizations listed here to track subsequent developments.