Key Takeaways: Chapter 28 -- Privacy Impact Assessments and Ethical Reviews
Core Takeaways
- Assessments are diagnostic tools, not compliance exercises. A genuinely conducted PIA or DPIA forces an organization to confront the ethical implications of its data practices -- surfacing risks, consent gaps, proportionality failures, and fairness concerns that informal awareness misses. VitraMed's DPIA revealed issues that leadership had not previously confronted. The value lies in the organizational learning, not the document.
- PIAs are ethically required even when not legally mandated. GDPR Article 35 specifies when DPIAs are legally required. But the chapter argues that a PIA is ethically required for any significant data processing. "The law tells you when you must assess. Ethics tells you when you should assess." Organizations that limit assessments to legal mandates leave their most significant ethical risks unexamined.
- Necessity and proportionality are the foundation of every assessment. Before evaluating risks, an assessment must answer: Is this processing necessary for the stated purpose? Could the same goal be achieved with less data, less intrusion, or no personal data at all? Is the scope of processing proportionate to the benefit sought? The UK Court of Appeal's ruling on facial recognition established that these tests must be genuinely applied, not merely acknowledged.
- Risk acknowledgment without proportionate mitigation is performative. The Bridges ruling established that a DPIA that identifies risks and proposes inadequate mitigations is legally insufficient. Genuine assessment requires that mitigations be commensurate with the risks identified. "Operator training" is not a proportionate response to systemic algorithmic bias.
- Algorithmic Impact Assessments extend the assessment framework to cover AI-specific risks. Traditional PIAs and DPIAs focus on data processing. AIAs address the distinct risks of algorithmic decision-making: bias, opacity, autonomy constraints, accountability gaps, and drift. The Canadian AIA framework provides a model for classifying algorithmic systems by impact level and calibrating requirements accordingly.
- Threshold tests and proportionality prevent assessment fatigue and avoidance. Not every data practice requires a full-scale assessment. Threshold screening identifies which practices warrant detailed review. Proportionality calibrates the depth of review to the level of risk. Together, they prevent two failure modes: trivializing assessments (by requiring the same review for everything) and circumventing assessments (by making the process so burdensome that teams avoid triggering it).
- Academic IRBs and corporate ethical review serve similar functions but differ structurally. IRBs have legal mandates, structural independence, binding authority, and external oversight. Corporate ethical review is generally voluntary, internal, advisory, and self-regulated. The translation from academic to corporate context remains incomplete, and several high-profile failures demonstrate the consequences.
- Community consultation must be meaningful, not perfunctory. An assessment's stakeholder consultation section cannot be satisfied by general surveys or passive notice. For processing that disproportionately affects specific communities, meaningful consultation requires targeted engagement with those communities and genuine incorporation of their concerns.
- Assessments must be iterative, not one-time. Algorithmic systems change over time. Data is updated. Conditions shift. Populations evolve. An assessment conducted at launch becomes stale within months. Effective assessment requires ongoing monitoring and periodic re-assessment -- a requirement that most organizations have not yet operationalized.
- The DPIA process made VitraMed's power asymmetry visible. VitraMed collected data from patients who trusted their clinicians, processed it in ways patients did not know about, and shared results with parties patients did not choose. The DPIA did not create this asymmetry -- it made it visible. Visibility is the precondition for accountability.
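The threshold-and-proportionality logic described above can be sketched as a small screening function. This is an illustrative sketch only: the `ProcessingActivity` fields are loosely inspired by GDPR Article 35(3)'s high-risk indicators, but the specific names, scoring, and cutoffs are assumptions, not taken from the regulation or the chapter.

```python
from dataclasses import dataclass

@dataclass
class ProcessingActivity:
    """Risk indicators for a proposed processing activity.

    The indicators and their equal weighting are assumptions
    for this sketch; real screening checklists vary.
    """
    name: str
    uses_sensitive_data: bool = False
    large_scale: bool = False
    systematic_monitoring: bool = False
    automated_decisions: bool = False

def screening_level(activity: ProcessingActivity) -> str:
    """Threshold test plus proportionality: the depth of review
    scales with the number of risk indicators present."""
    score = sum([
        activity.uses_sensitive_data,
        activity.large_scale,
        activity.systematic_monitoring,
        activity.automated_decisions,
    ])
    if score >= 2:
        return "full DPIA"
    if score == 1:
        return "lightweight PIA"
    return "no formal assessment"
```

Note how the design addresses both failure modes from the takeaway above: low-risk activities are not forced through a full review (avoiding trivialization), and the screening step itself is cheap enough that teams have no incentive to avoid triggering it.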
Key Concepts
| Term | Definition |
|---|---|
| Privacy Impact Assessment (PIA) | A systematic evaluation of a proposed data processing activity to identify privacy risks and design mitigations. A general good-practice tool. |
| Data Protection Impact Assessment (DPIA) | A legally required assessment under GDPR Article 35 when processing is likely to result in high risk to individuals' rights and freedoms. |
| Algorithmic Impact Assessment (AIA) | An assessment framework that extends PIAs/DPIAs to cover the distinct risks of algorithmic decision-making: bias, opacity, autonomy, accountability, and drift. |
| Institutional Review Board (IRB) | An ethical review body in academic institutions that evaluates research involving human subjects, grounded in the Belmont Report's principles of respect for persons, beneficence, and justice. |
| Threshold test | A screening mechanism that determines whether a data processing activity requires a formal assessment, based on risk indicators. |
| Proportionality | The principle that the depth and rigor of an assessment should be proportional to the risk of the processing activity. |
| Necessity test | An evaluation of whether a proposed data processing activity is necessary for the stated purpose, or whether the same goal could be achieved with less data or less intrusive methods. |
| Prior consultation | GDPR Article 36 requirement to consult the supervisory authority before proceeding with processing when residual risk remains high after DPIA mitigations. |
| Residual risk | The risk that remains after mitigation measures have been applied. |
| Risk matrix | A structured tool for assessing risks by their likelihood and severity, producing a risk level (low, medium, high) for each identified risk. |
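The risk matrix and residual risk concepts in the table above can be sketched as a simple likelihood-by-severity lookup. The three-by-three scale and the banding cutoffs are assumptions for illustration; real matrices differ across organizations.

```python
# Ordinal scales for the two axes of the matrix (illustrative).
LIKELIHOOD = {"rare": 1, "possible": 2, "likely": 3}
SEVERITY = {"minor": 1, "significant": 2, "severe": 3}

def risk_level(likelihood: str, severity: str) -> str:
    """Combine likelihood and severity into a low/medium/high band."""
    score = LIKELIHOOD[likelihood] * SEVERITY[severity]
    if score >= 6:
        return "high"
    if score >= 3:
        return "medium"
    return "low"

# Residual risk is the same scoring applied again after mitigations.
# If the residual level is still "high", GDPR Article 36 prior
# consultation with the supervisory authority comes into play.
```

A deliberate design choice here: mitigations do not subtract from a score; they change the likelihood or severity inputs, and the matrix is simply re-applied. That keeps the before/after comparison honest.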
Key Debates
- Should DPIAs be publicly available? Transparency argues for publication: affected communities and regulators should be able to evaluate the organization's analysis. Commercial confidentiality argues against: DPIAs may contain competitive information or security-relevant details. Where should the balance fall?
- Are self-assessments inherently compromised? The DPIA is a self-assessment tool: the organization evaluates its own processing. Can organizations honestly evaluate their own data practices, or does self-assessment create an inherent conflict of interest that undermines the assessment's value?
- Should technology companies be legally required to maintain IRB-equivalent review boards? The structural advantages of IRBs (legal mandate, independence, binding authority, external oversight) are absent in corporate ethics review. Would mandating IRB equivalents for large technology companies improve ethical outcomes?
- Is algorithmic impact assessment sufficient without external audit? The Canadian AIA and similar frameworks rely primarily on organizational self-assessment. For the highest-impact systems, should independent external audit be mandatory?
- How should proportionality be applied when potential harm is severe but uncertain? The precautionary principle suggests erring on the side of caution. But precaution can paralyze innovation. How should assessments handle high-severity, low-probability risks?
Applied Framework: Five Questions of Assessment
When evaluating any data processing activity, apply these five questions:
| # | Dimension | Question |
|---|---|---|
| 1 | Necessity | Is this processing necessary for the stated purpose, or could the purpose be achieved with less data or less intrusive methods? |
| 2 | Proportionality | Is the scope of processing proportionate to the benefit sought? |
| 3 | Fairness | Does the processing affect different groups differently, and if so, is the differential treatment justified? |
| 4 | Transparency | Are affected individuals aware of the processing and able to understand its implications? |
| 5 | Accountability | If something goes wrong, who is responsible, and what recourse do affected individuals have? |
These five questions do not replace a formal assessment. They provide a quick diagnostic that any practitioner can apply to any data practice at any stage of development.
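As a sketch of how the five questions could travel with a practitioner as a reusable checklist: the class and method names below are assumptions for illustration, not an API from the chapter (the book's own `ModelCard` dataclass arrives in Chapter 29).

```python
from dataclasses import dataclass, fields

@dataclass
class FiveQuestions:
    """One flag per question of the Applied Framework.

    True means the question has been answered satisfactorily;
    False marks an unresolved concern.
    """
    necessity: bool
    proportionality: bool
    fairness: bool
    transparency: bool
    accountability: bool

    def open_concerns(self) -> list[str]:
        """Return the questions that still need attention,
        in the framework's order."""
        return [f.name for f in fields(self) if not getattr(self, f.name)]

# Example: a quick diagnostic pass over a hypothetical data practice.
review = FiveQuestions(necessity=True, proportionality=True,
                       fairness=False, transparency=True,
                       accountability=False)
```

Here `review.open_concerns()` would flag fairness and accountability as the dimensions needing follow-up -- consistent with the framework's role as a quick diagnostic, not a substitute for a formal assessment.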
Looking Ahead
Chapter 28 introduced the assessment processes that organizational infrastructure supports. Chapter 29, "Responsible AI Development," moves from assessing existing practices to building AI systems responsibly from the start. We will examine responsible AI frameworks, model cards (Mitchell et al., 2019), datasheets for datasets (Gebru et al., 2021), red-teaming methodologies, and deployment monitoring -- and introduce the ModelCard Python dataclass that makes AI documentation programmable. VitraMed will create its first model card for the clinical prediction model that the DPIA just assessed.
Use this summary as a study reference and a quick-access card for key vocabulary. The Five Questions of Assessment framework will recur in every remaining chapter.