Further Reading: Privacy Impact Assessments and Ethical Reviews
The sources below provide deeper engagement with the themes introduced in Chapter 28, spanning legal frameworks, practical guidance, academic analysis, and critical perspectives on impact assessment practice.
Privacy Impact Assessments and DPIAs
UK Information Commissioner's Office. "Conducting Privacy Impact Assessments: Code of Practice." ICO, 2014 (updated 2018). The ICO's practical guide to PIAs and DPIAs remains one of the most accessible and comprehensive resources available. It includes step-by-step guidance, screening questions, risk assessment templates, and worked examples. Directly relevant to the PIA/DPIA template discussed in Section 28.6 and useful as a reference for practitioners conducting their first assessment.
Commission Nationale de l'Informatique et des Libertés (CNIL). "Privacy Impact Assessment: Methodology." CNIL, 2018. The French data protection authority's PIA methodology provides a complementary approach to the ICO's guidance, with particular strength in risk assessment methodology and stakeholder consultation guidance. The CNIL's approach is more prescriptive than the ICO's, making it useful for organizations seeking a structured framework.
Article 29 Data Protection Working Party. "Guidelines on Data Protection Impact Assessment (DPIA) and Determining Whether Processing Is 'Likely to Result in a High Risk' for the Purposes of Regulation 2016/679." WP 248 rev.01, 2017; endorsed by the European Data Protection Board in 2018. The authoritative interpretation of GDPR Article 35, carrying the endorsement of the body responsible for coordinating data protection enforcement across the EU. The guidelines define the nine criteria for determining high risk (cited in Section 28.2.1) and clarify when a DPIA is mandatory. Essential reading for anyone conducting DPIAs under GDPR.
Wright, David, and Paul De Hert (eds.). Privacy Impact Assessment. Dordrecht: Springer, 2012. The most comprehensive academic treatment of PIAs, with chapters covering PIA methodology, international practice, sectoral applications, and the relationship between PIAs and regulation. The book pre-dates GDPR but provides the conceptual foundations that inform current DPIA practice.
Algorithmic Impact Assessments
Government of Canada, Treasury Board. "Algorithmic Impact Assessment Tool." Government of Canada, 2019. The Canadian AIA tool described in Section 28.4.2, publicly available for practitioners to use. The tool provides a structured questionnaire that classifies algorithmic systems into four impact levels with corresponding governance requirements. It is the most developed government AIA framework and serves as a model for other jurisdictions.
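The core mechanism of such a tool -- questionnaire answers accumulating into a score that maps to one of four impact levels -- can be sketched as follows. The questions, weights, and thresholds below are illustrative placeholders invented for demonstration, not the official Canadian values.

```python
# Illustrative sketch of a questionnaire-based impact classifier in the
# spirit of Canada's AIA tool. Questions, weights, and thresholds are
# hypothetical; the official tool defines its own.

def impact_level(raw_score: int, max_score: int) -> int:
    """Map a raw questionnaire score to an impact level (1-4)."""
    pct = raw_score / max_score
    if pct < 0.25:
        return 1  # little to no impact
    elif pct < 0.50:
        return 2  # moderate impact
    elif pct < 0.75:
        return 3  # high impact
    return 4      # very high impact

# Hypothetical answers: each question contributes a weighted score.
answers = {
    "affects_legal_rights": 3,   # decisions with legal effect
    "uses_personal_data": 2,
    "fully_automated": 3,        # no human in the loop
    "vulnerable_population": 2,
}
max_possible = 12
level = impact_level(sum(answers.values()), max_possible)
print(f"Impact level: {level}")
```

In the real tool, each level then triggers escalating governance requirements (peer review, notice, human-in-the-loop intervention), which is what makes the classification consequential rather than merely descriptive.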
Reisman, Dillon, Jason Schultz, Kate Crawford, and Meredith Whittaker. "Algorithmic Impact Assessments: A Practical Framework for Public Agency Accountability." AI Now Institute, April 2018. The foundational paper proposing algorithmic impact assessments for government agencies. Reisman et al. argue that existing environmental and privacy impact assessment frameworks can be adapted for algorithmic systems, and they propose specific components including public notice, access to the system's technical details, and affected community input. Directly relevant to Section 28.4's discussion of AIA design.
Selbst, Andrew D. "An Institutional View of Algorithmic Impact Assessments." Harvard Journal of Law & Technology 35, no. 1 (2021): 117-191. A legal analysis of how AIAs should be designed as institutional governance mechanisms, not just technical evaluations. Selbst argues that AIAs must be embedded in organizational decision-making processes with adequate authority and accountability -- echoing the chapter's themes about the gap between assessment and action.
Ada Lovelace Institute. "Algorithmic Impact Assessment: A Case Study in Healthcare." Ada Lovelace Institute, 2022. A practical case study applying algorithmic impact assessment methodology to a healthcare AI system. The study demonstrates how AIA principles translate into practice in a high-stakes, regulated sector. Useful for understanding the specific challenges of assessing AI systems that affect vulnerable populations.
Institutional Review Boards and Research Ethics
National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research. "The Belmont Report: Ethical Principles and Guidelines for the Protection of Human Subjects of Research." U.S. Department of Health, Education, and Welfare, 1979. The foundational document for research ethics in the United States, establishing the principles of respect for persons, beneficence, and justice that underpin the IRB system. Essential reading for understanding the origins and logic of ethical review, and for evaluating whether corporate data practices should be subject to equivalent oversight.
Metcalf, Jacob, and Kate Crawford. "Where Are Human Subjects in Big Data Research? The Emerging Ethics Divide." Big Data & Society 3, no. 1 (2016): 1-14. An analysis of the gap between traditional research ethics (governed by IRBs and the Belmont Report) and the ethical challenges posed by big data research in corporate settings. Metcalf and Crawford argue that the IRB model does not translate to corporate data practice and propose alternative governance mechanisms. Directly relevant to Section 28.3.2's discussion of the IRB translation problem.
Vitak, Jessica, Katie Shilton, and Zahra Ashktorab. "Beyond the Belmont Principles: Ethical Challenges, Practices, and Beliefs in the Online Data Research Community." In Proceedings of the 19th ACM Conference on Computer-Supported Cooperative Work and Social Computing, 941-953. 2016. An empirical study of how online researchers navigate ethical challenges that fall outside traditional IRB frameworks. The authors find significant variation in ethical practice and identify areas where the research community lacks consensus. Useful for understanding why corporate ethical review mechanisms face similar challenges of consistency and clarity.
Facial Recognition and Surveillance Assessment
Fussey, Pete, and Daragh Murray. "Independent Report on the London Metropolitan Police Service's Trial of Live Facial Recognition Technology." Human Rights Centre, University of Essex, July 2019. The independent evaluation of the Metropolitan Police's live facial recognition (LFR) trials. The report found significant concerns about accuracy, proportionality, and the adequacy of the DPIA. It provides a model for how independent assessment of surveillance technology should be conducted.
Buolamwini, Joy, and Timnit Gebru. "Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification." In Proceedings of the 1st Conference on Fairness, Accountability, and Transparency (FAT*), 77-91. 2018. The landmark study demonstrating that commercial facial recognition systems perform significantly worse on women with darker skin tones. The study's methodology -- disaggregated evaluation across intersectional demographic categories -- has become a standard for fairness assessment. Central to understanding why the Met's facial recognition DPIA was found insufficient.
Critical Perspectives on Assessment Practice
Mantelero, Alessandro. "AI and Big Data: A Blueprint for a Human Rights, Social and Ethical Impact Assessment." Computer Law & Security Review 34, no. 4 (2018): 754-772. Mantelero proposes extending impact assessment beyond data protection to encompass human rights, social, and ethical impacts. His "HRESIA" (Human Rights, Ethical and Social Impact Assessment) framework addresses the limitations of privacy-focused assessments that may miss broader societal harms. Relevant to the chapter's argument that DPIAs, while valuable, do not capture the full range of ethical implications.
Kaminski, Margot E., and Gianclaudio Malgieri. "Multi-layered Explanations from Algorithmic Impact Assessments in the GDPR." In Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency (FAT*), 68-79. 2020. An analysis of how AIAs can be designed to produce multi-layered explanations -- different levels of detail for different audiences (affected individuals, regulators, technical reviewers). The paper bridges the gap between assessment as a governance tool and transparency as a right.
Moss, Emanuel, Elizabeth Anne Watkins, Ranjit Singh, Madeleine Clare Elish, and Jacob Metcalf. "Assembling Accountability: Algorithmic Impact Assessment for the Public Interest." Data & Society Research Institute, 2021. A comprehensive analysis of how algorithmic impact assessments can be designed to serve public accountability rather than just organizational compliance. The authors argue that AIAs should be public-facing, participatory, and connected to enforceable accountability mechanisms -- challenging the self-assessment model that currently dominates.
These readings range from practical implementation guides (ICO, CNIL) to critical scholarship that questions whether assessment frameworks as currently designed are sufficient. Engaging with both the practical and critical perspectives is essential for anyone involved in designing, conducting, or evaluating impact assessments.