Further Reading — Chapter 29: Algorithmic Fairness and Bias in Compliance Systems
Foundational Research Papers
Hardt, M., Price, E., and Srebro, N. (2016). "Equality of Opportunity in Supervised Learning." Advances in Neural Information Processing Systems (NeurIPS) 29. The paper that formalised equalized odds and equality of opportunity as fairness criteria for supervised machine learning. Hardt et al. demonstrate that these criteria can be achieved through post-processing of model outputs — adjusting decision thresholds by group — without requiring retraining. This remains the foundational technical reference for equalized odds as a fairness metric. Essential reading for any practitioner working with algorithmic fairness in compliance systems.
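Hardt et al.'s post-processing idea (group-specific decision thresholds applied to a fixed model's scores) can be sketched in a few lines. The scores, labels, and thresholds below are invented for illustration and are not drawn from the paper.

```python
# Sketch of Hardt et al.-style post-processing: equalise true positive rates
# across groups by adjusting decision thresholds, without retraining.
# All scores, labels, and thresholds are invented for illustration.

def true_positive_rate(scores, labels, threshold):
    """Fraction of actual positives that the threshold classifies as positive."""
    positives = [s for s, y in zip(scores, labels) if y == 1]
    return sum(s >= threshold for s in positives) / len(positives)

# Hypothetical model scores and true outcomes for two groups.
group_a = {"scores": [0.9, 0.8, 0.7, 0.4, 0.3], "labels": [1, 1, 1, 0, 0]}
group_b = {"scores": [0.6, 0.5, 0.4, 0.3, 0.2], "labels": [1, 1, 1, 0, 0]}

# A single threshold of 0.5 treats the groups differently:
tpr_a = true_positive_rate(group_a["scores"], group_a["labels"], 0.5)  # 3/3 = 1.0
tpr_b = true_positive_rate(group_b["scores"], group_b["labels"], 0.5)  # 2/3

# A group-specific threshold for group B restores equality of opportunity:
tpr_b_adjusted = true_positive_rate(group_b["scores"], group_b["labels"], 0.4)  # 3/3 = 1.0
```

The point of the sketch is that the underlying model is untouched; only the decision rule applied to its outputs changes.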
Chouldechova, A. (2017). "Fair Prediction with Disparate Impact: A Study of Bias in Recidivism Prediction Instruments." Big Data, 5(2), 153–163. The paper that established the impossibility theorem showing that well-calibrated risk scores and equality of both false positive and false negative rates cannot simultaneously be achieved when base rates differ across groups, except in the degenerate case of a perfect predictor. Chouldechova's analysis was conducted in the context of criminal justice risk assessment tools (COMPAS), but the mathematical result is universal and directly applicable to credit scoring, KYC, and AML systems. The paper's clear exposition of the incompatibility of competing fairness criteria is indispensable for understanding why fairness is a choice among values, not a technical optimisation problem.
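The incompatibility is easy to verify numerically. The sketch below constructs two groups scored by the same perfectly calibrated model and shows that, because their base rates differ, a shared decision threshold produces different false positive rates; all counts are invented for illustration.

```python
# Numeric check of the incompatibility: one calibrated scoring model,
# two groups with different base rates, one shared threshold.
# All counts are invented for illustration.

def group_stats(bins, threshold=0.5):
    """bins: list of (n_people, score) pairs, where the score is perfectly
    calibrated, i.e. exactly score * n_people of that bin are true positives.
    Everyone with score >= threshold is classified positive."""
    positives = sum(n * s for n, s in bins)
    negatives = sum(n * (1 - s) for n, s in bins)
    false_pos = sum(n * (1 - s) for n, s in bins if s >= threshold)
    total = sum(n for n, _ in bins)
    return {"base_rate": positives / total, "fpr": false_pos / negatives}

# Same calibrated scores, different mix of people across the score bins.
group_a = group_stats([(50, 0.8), (50, 0.2)])  # base rate 0.50, FPR 0.20
group_b = group_stats([(20, 0.8), (80, 0.2)])  # base rate 0.32, FPR ~0.06
```

Both groups are scored by the same calibrated model, yet the false positive rates differ by more than a factor of three; equalising them would require abandoning the shared threshold or the calibration.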
Kleinberg, J., Mullainathan, S., and Raghavan, M. (2016). "Inherent Trade-Offs in the Fair Determination of Risk Scores." Proceedings of Innovations in Theoretical Computer Science (ITCS 2017). The companion impossibility result to Chouldechova, demonstrating the fundamental tension between calibration and group fairness criteria. Kleinberg et al. approach the problem from a statistical decision theory perspective and provide formal conditions under which the incompatibility is unavoidable. Together with Chouldechova (2017), this paper establishes the mathematical foundations that every compliance professional working with fairness metrics needs to understand.
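For reference, the three conditions at the centre of the Kleinberg, Mullainathan, and Raghavan result can be stated compactly (the notation here is a paraphrase, not the paper's): for a score $s$, outcome $y \in \{0,1\}$, and groups $A$ and $B$,

```latex
% Calibration within groups: a score of s carries the same meaning in each group
\Pr[\, y = 1 \mid s,\ g \,] = s \quad \text{for every group } g

% Balance for the positive class: true positives receive equal average scores
\mathbb{E}[\, s \mid y = 1,\ g = A \,] = \mathbb{E}[\, s \mid y = 1,\ g = B \,]

% Balance for the negative class: true negatives receive equal average scores
\mathbb{E}[\, s \mid y = 0,\ g = A \,] = \mathbb{E}[\, s \mid y = 0,\ g = B \,]
```

The theorem shows that all three conditions can hold simultaneously only in two degenerate cases: when base rates are equal across groups, or when prediction is perfect.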
Barocas, S., Hardt, M., and Narayanan, A. (2019). "Fairness and Machine Learning: Limitations and Opportunities." Available at: https://fairmlbook.org/ The most comprehensive freely available textbook on algorithmic fairness. Covers all major fairness criteria, their mathematical relationships, the sources of bias in machine learning systems, and the social and regulatory context. Written by leading academic researchers who also engage extensively with policy and industry practice. Essential for practitioners who want to develop a deep technical understanding of the field beyond what can be covered in a single chapter.
Empirical Studies
Grother, P., Ngan, M., and Hanaoka, K. (2019). "Face Recognition Vendor Test (FRVT) Part 3: Demographic Effects." NIST Interagency Report 8280, National Institute of Standards and Technology. Available at: https://doi.org/10.6028/NIST.IR.8280 The definitive empirical study of demographic disparities in commercial facial recognition algorithms. Grother et al. evaluated 189 algorithms from 99 developers against 18.27 million images across four government use cases. The study found false positive (false match) rates 10 to 100 times higher for African American and Asian faces in some algorithms, as well as systematic disparities by gender and age. Any compliance professional responsible for a KYC system incorporating facial matching should read this report.
Regulatory and Legal Sources
Financial Conduct Authority (2022). "PS22/9: A new Consumer Duty." FCA Policy Statement, July 2022. Available at: https://www.fca.org.uk/publications/policy-statements/ps22-9-new-consumer-duty The primary regulatory source for the Consumer Duty obligations discussed in this chapter. The Policy Statement sets out the final rules and guidance, including firms' obligations to monitor outcomes across customer segments and take action where outcomes are not consistently good. Section 4 of the Policy Statement addresses outcomes monitoring in detail. Firms should also consult the accompanying Consumer Duty Implementation Guide and the FCA's ongoing publications on Consumer Duty implementation.
UK Equality Act 2010. Available at: https://www.legislation.gov.uk/ukpga/2010/15/contents The foundational equality legislation. Part 2 (Prohibited Conduct) and Part 3 (Services and Public Functions) are most directly relevant to financial services contexts. Section 19 defines indirect discrimination. Sections 158 and 159 set out the positive action provisions that are relevant to fairness remediation approaches. The Equality and Human Rights Commission's statutory codes of practice provide interpretive guidance.
Equality and Human Rights Commission. "Employment Statutory Code of Practice." Available at: https://www.equalityhumanrights.com/en/publication-download/employment-statutory-code-practice While focused on employment, the EHRC's statutory codes provide authoritative interpretation of indirect discrimination, justification, and proportionality that applies across the Equality Act's scope, including financial services.
European Parliament and Council (2024). "Regulation (EU) 2024/1689 of the European Parliament and of the Council on Artificial Intelligence (the EU AI Act)." Official Journal of the European Union. Available at: https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:32024R1689 The EU AI Act establishes the regulatory framework for AI systems in the EU. Articles 9–15 set out the requirements for high-risk AI systems, including data governance requirements under Article 10 and human oversight requirements under Article 14. Annex III identifies credit scoring and systems for access to essential services as high-risk. Firms operating in or serving EU markets must ensure compliance with the Act's requirements for high-risk AI systems.
Consumer Financial Protection Bureau (2022). "CFPB Circular 2022-03: Adverse Action Notification Requirements and the Equal Credit Opportunity Act." Available at: https://www.consumerfinance.gov/compliance/supervisory-guidance/ CFPB guidance clarifying that the Equal Credit Opportunity Act's adverse action notification requirements apply to algorithmic credit decisioning. The circular addresses the requirement to provide specific reasons for credit denial and the challenges posed by complex algorithmic models. Essential for US-facing compliance professionals.
Tools and Technical Resources
Microsoft Fairlearn (2023). Open-source Python package for assessing and improving fairness in machine learning models. Available at: https://fairlearn.org/ Fairlearn provides tools for assessing demographic parity, equalized odds, and other fairness metrics, as well as mitigation algorithms including reductions (constrained optimisation during training), post-processing (threshold calibration), and preprocessing (data transformation). The package integrates with scikit-learn. The project website includes a comprehensive user guide and worked examples relevant to financial services applications.
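As a flavour of what the package computes, the following is a minimal plain-Python sketch of the demographic parity difference that fairlearn.metrics.demographic_parity_difference reports: the largest gap in selection rates between any two groups. The decisions and sensitive attribute below are invented for illustration.

```python
# Plain-Python sketch of the demographic parity difference reported by
# fairlearn.metrics.demographic_parity_difference: the largest gap in
# selection rates between any two groups.
# The decisions and sensitive attribute below are invented.

def selection_rate(decisions):
    """Fraction of cases receiving the positive decision."""
    return sum(decisions) / len(decisions)

def demographic_parity_difference(decisions, sensitive):
    groups = set(sensitive)
    rates = {
        g: selection_rate([d for d, s in zip(decisions, sensitive) if s == g])
        for g in groups
    }
    return max(rates.values()) - min(rates.values())

# Hypothetical onboarding decisions (1 = approved) and group membership.
decisions = [1, 1, 0, 1, 0, 0, 1, 0]
sensitive = ["a", "a", "a", "a", "b", "b", "b", "b"]

gap = demographic_parity_difference(decisions, sensitive)  # 3/4 - 1/4 = 0.5
```

In Fairlearn itself the same quantity is available directly, alongside MetricFrame for per-group breakdowns of arbitrary metrics; the sketch only makes the definition visible without the dependency.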
IBM Research (2018). "AI Fairness 360: An Extensible Toolkit for Detecting and Mitigating Algorithmic Bias." IBM Journal of Research and Development, 63(4/5), 4:1–4:15. Available at: https://aif360.mybluemix.net/ AI Fairness 360 provides over 70 fairness metrics and 11 bias mitigation algorithms, covering preprocessing, in-processing, and post-processing approaches. It includes a comprehensive tutorial library with worked examples across multiple domains including credit scoring and recidivism prediction. The associated paper provides technical documentation of the fairness metrics implemented.
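One of the preprocessing algorithms the toolkit implements is Kamiran and Calders' reweighing, which assigns each (group, label) cell the weight P(group) × P(label) / P(group, label), so that group membership and label become statistically independent under the weighted distribution. A minimal sketch of the rule, with invented data:

```python
# Sketch of the reweighing rule implemented by AIF360's preprocessing
# Reweighing class (Kamiran & Calders): each (group, label) cell receives
# the weight P(group) * P(label) / P(group, label), which makes group and
# label independent under the weighted distribution.
# Group and label data below are invented.
from collections import Counter

def reweighing_weights(groups, labels):
    n = len(labels)
    n_group = Counter(groups)               # marginal counts per group
    n_label = Counter(labels)               # marginal counts per label
    n_joint = Counter(zip(groups, labels))  # joint counts per (group, label)
    return {
        (g, y): (n_group[g] / n) * (n_label[y] / n) / (n_joint[(g, y)] / n)
        for (g, y) in n_joint
    }

groups = ["a"] * 6 + ["b"] * 4
labels = [1, 1, 1, 1, 0, 0, 1, 0, 0, 0]
weights = reweighing_weights(groups, labels)
# Over-represented cells such as ("a", 1) are down-weighted (0.75);
# under-represented cells such as ("b", 1) are up-weighted (2.0).
```

The weights are then passed as instance weights to any standard training procedure, which is why reweighing counts as a preprocessing rather than an in-processing approach.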
Regulatory Guidance on AI and Data Protection
Information Commissioner's Office (2022). "Guidance on AI and data protection." Available at: https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/ The ICO's guidance addresses the intersection of AI systems and UK GDPR obligations, including fairness requirements under Article 5(1)(a), rights in relation to automated decision-making under Article 22, and the data protection impact assessment requirements for AI systems that present high risks to individuals. Chapter 4 of the guidance addresses bias and discrimination in AI systems specifically.
Alan Turing Institute (2021). "Understanding Artificial Intelligence Ethics and Safety: A Guide for the Responsible Design and Implementation of AI Systems in the Public Sector." Available at: https://doi.org/10.5281/zenodo.3240529 While focused on the public sector, this accessible introduction to AI ethics and safety — including algorithmic fairness — is widely used as a framework reference in regulated industries. It provides practical guidance on fairness impact assessment, transparency, and human oversight that is applicable across sectors.
Practitioner Resources
FCA (2022). "Our approach to Consumer Duty: how we will monitor, investigate and enforce." FCA Guidance Consultation, December 2022. Sets out the FCA's supervisory approach to the Consumer Duty, including how the FCA will assess outcome monitoring and what constitutes evidence of good outcome delivery. Firms designing fairness monitoring programmes should read this alongside PS22/9 to understand what the regulator will expect to see in any supervisory review.
Financial Services and Markets Act 2000 (as amended). Available at: https://www.legislation.gov.uk/ukpga/2000/8/contents Section 1C (FCA's consumer protection objective) and the regulatory principles at section 3B provide the statutory foundation for the FCA's powers and responsibilities that underpin the Consumer Duty and supervisory approach to algorithmic fairness.