> "An impact assessment done right changes how you think about the project. An impact assessment done wrong is just paperwork."
Learning Objectives
- Explain the purpose and legal basis for Privacy Impact Assessments and Data Protection Impact Assessments
- Conduct a DPIA following the process specified by GDPR Article 35, including threshold tests, risk identification, and mitigation planning
- Distinguish between academic IRBs and corporate ethical review processes and evaluate their relative strengths and limitations
- Design and apply an Algorithmic Impact Assessment (AIA) for AI/ML systems
- Apply a practical PIA/DPIA template to a real-world data processing scenario
- Identify the triggers that require an assessment and apply proportionality principles to determine assessment scope
- Analyze real-world examples of assessments done well and done poorly
In This Chapter
- Chapter Overview
- 28.1 Privacy Impact Assessments: Purpose and Process
- 28.2 GDPR Article 35: The DPIA Requirement
- 28.3 The Ethical Review Board: Academic and Corporate Models
- 28.4 Algorithmic Impact Assessments
- 28.5 When to Conduct Assessments: Threshold Tests and Proportionality
- 28.6 A Practical PIA/DPIA Template
- 28.7 VitraMed's First DPIA: The Predictive Analytics Platform
- 28.8 Case Studies
- 28.9 Chapter Summary
- What's Next
- Chapter 28 Exercises → exercises.md
- Chapter 28 Quiz → quiz.md
- Case Study: A Model DPIA: Assessing a Facial Recognition Deployment → case-study-01.md
- Case Study: Ethical Review in Tech: Google's AI Ethics Board Controversy → case-study-02.md
Chapter 28: Privacy Impact Assessments and Ethical Reviews
"An impact assessment done right changes how you think about the project. An impact assessment done wrong is just paperwork." — UK Information Commissioner's Office, Conducting Privacy Impact Assessments Code of Practice
Chapter Overview
Chapters 26 and 27 established the organizational infrastructure for data ethics — the committees, the CDO, the catalogs, the culture. This chapter introduces the processes that infrastructure supports: the formal assessments through which organizations evaluate the privacy and ethical implications of specific data practices.
If the ethics committee is the conscience and the CDO is the central nervous system, then Privacy Impact Assessments, Data Protection Impact Assessments, and Algorithmic Impact Assessments are the diagnostic tools — structured methods for identifying potential harm before it occurs.
This distinction between before and after is critical. Most data governance interventions are reactive: a breach happens, an audit finds a violation, a journalist publishes an exposé, a community complains. Assessments are proactive. They force organizations to confront the ethical implications of their data practices before those practices are deployed — when changes are possible, affordable, and effective.
In this chapter, you will learn to:
- Understand the legal requirements and practical benefits of impact assessments
- Conduct a DPIA following GDPR Article 35 requirements
- Design and apply an Algorithmic Impact Assessment for AI systems
- Apply threshold tests to determine when an assessment is required
- Navigate the relationship between academic and corporate ethical review
- Use a practical assessment template for your own projects
28.1 Privacy Impact Assessments: Purpose and Process
28.1.1 What Is a PIA?
A Privacy Impact Assessment (PIA) is a systematic evaluation of a proposed data processing activity to identify its potential impact on the privacy of individuals and to determine measures to mitigate those impacts. The concept emerged in the 1990s in countries including Canada, New Zealand, and Australia, and has since become a staple of privacy governance worldwide.
The core idea is simple: before you start processing personal data in a new way, think about what could go wrong.
But the simplicity of the idea conceals the complexity of the practice. A genuine PIA is not a checkbox exercise. It requires:
- Understanding the processing. What data is being collected? From whom? For what purpose? How will it be stored, used, and shared?
- Identifying privacy risks. What could go wrong? What are the consequences for individuals if risks materialize?
- Evaluating necessity and proportionality. Is this processing necessary for the stated purpose? Could the same goal be achieved with less data or less intrusive methods?
- Designing mitigations. What safeguards will reduce the risks to an acceptable level?
- Documenting decisions. Why were specific decisions made? What alternatives were considered and rejected?
28.1.2 The PIA vs. the DPIA
The terms PIA and DPIA are often used interchangeably, but there is a legal distinction:
- A PIA is a general good-practice tool, used voluntarily by organizations to evaluate privacy risks. No single statute mandates it, though some national laws require PIAs in specific contexts.
- A DPIA (Data Protection Impact Assessment) is specifically required by GDPR Article 35 when processing is "likely to result in a high risk to the rights and freedoms of natural persons." It has legally defined requirements, triggers, and consequences for non-compliance.
In practice, the processes are similar. The DPIA adds legal specificity — mandated triggers, consultation requirements, and documentation standards — to the broader PIA framework.
28.1.3 Why Assessments Matter Beyond Compliance
Dr. Adeyemi drew a distinction in class: "The DPIA is legally required. But a PIA is ethically required for any significant data processing, whether the law mandates it or not. The law tells you when you must assess. Ethics tells you when you should assess."
Benefits beyond compliance:
- Better design. The assessment process often reveals design alternatives that are both more privacy-protective and more elegant. Constraints stimulate creativity.
- Stakeholder trust. Published assessment summaries demonstrate to users, customers, and regulators that the organization has thought carefully about privacy implications.
- Institutional learning. Assessment documentation creates organizational memory — patterns of risk, effective mitigations, common pitfalls — that improves future decision-making.
- Cost avoidance. Privacy problems caught at the design stage cost a fraction of what they cost after deployment. Retrofitting privacy protections into a launched product is expensive, disruptive, and often incomplete.
The Consent Fiction and Assessments: A PIA/DPIA forces organizations to confront the consent fiction head-on. If the assessment reveals that users are unlikely to understand how their data will be processed — if "meaningful consent" is implausible in the proposed design — then the organization must either redesign for genuine comprehension or find a lawful basis other than consent. The assessment makes the fiction visible.
28.2 GDPR Article 35: The DPIA Requirement
28.2.1 When Is a DPIA Required?
Article 35 of the GDPR requires a DPIA when processing is "likely to result in a high risk to the rights and freedoms of natural persons." The regulation specifies three mandatory triggers:
- Systematic and extensive evaluation of personal aspects based on automated processing, including profiling, where decisions produce legal or similarly significant effects.
- Large-scale processing of special categories of data (Article 9: racial/ethnic origin, political opinions, religious beliefs, health data, biometric data, sexual orientation) or criminal conviction data (Article 10).
- Systematic monitoring of a publicly accessible area on a large scale (e.g., CCTV surveillance, WiFi tracking in public spaces).
Beyond these mandatory triggers, the European Data Protection Board (EDPB) guidelines identify additional criteria that, when present in combination, indicate high risk:
| Criterion | Example |
|---|---|
| Evaluation or scoring | Credit scoring, behavioral profiling |
| Automated decision-making with legal/significant effects | Algorithmic hiring, benefit eligibility |
| Systematic monitoring | Employee surveillance, location tracking |
| Sensitive data or data of a highly personal nature | Health records, financial data, communications |
| Large-scale processing | Data from thousands of individuals or more |
| Combining datasets | Merging data from multiple sources beyond the data subject's reasonable expectation |
| Vulnerable data subjects | Children, patients, employees, elderly |
| Innovative use of technology | New biometric applications, IoT analytics, AI |
| Data transfers outside the EU | Cross-border transfers to countries without adequacy decisions |
The rule of thumb: If two or more criteria are present, a DPIA is almost certainly required.
28.2.2 The DPIA Process Under GDPR
Article 35(7) specifies that a DPIA must contain, at minimum:
(a) A systematic description of the envisaged processing operations and the purposes of the processing, including, where applicable, the legitimate interest pursued by the controller.
(b) An assessment of the necessity and proportionality of the processing operations in relation to the purposes.
(c) An assessment of the risks to the rights and freedoms of data subjects.
(d) The measures envisaged to address the risks, including safeguards, security measures, and mechanisms to ensure the protection of personal data and to demonstrate compliance.
28.2.3 Prior Consultation: When the DPIA Is Not Enough
If, after completing the DPIA, the residual risk remains high despite mitigations — if the organization cannot reduce the risk to an acceptable level — Article 36 requires prior consultation with the supervisory authority before proceeding.
This is a significant provision. It means that some data processing activities should not proceed without regulatory approval, even if the organization believes the processing is justified. The regulator may impose additional conditions, prohibit the processing, or require modifications.
"Prior consultation is the emergency brake," Dr. Adeyemi explained. "The DPIA is the process of testing the brakes. If the brakes can't stop the car, you don't drive — you go to the authorities."
28.3 The Ethical Review Board: Academic and Corporate Models
28.3.1 Institutional Review Boards (IRBs)
The Institutional Review Board (IRB) is the academic world's mechanism for ethical review of research involving human subjects. IRBs originated in the United States after a series of research ethics scandals — most notoriously, the Tuskegee Syphilis Study (1932-1972), in which Black men with syphilis were deliberately left untreated to study the disease's progression.
The Belmont Report (1979) established three principles governing human subjects research:
- Respect for persons. Individuals should be treated as autonomous agents. Informed consent is required.
- Beneficence. Research should maximize benefits and minimize harm.
- Justice. The benefits and burdens of research should be distributed fairly.
IRBs operationalize these principles by reviewing research protocols before data collection begins. At federally funded institutions, research involving human subjects cannot proceed without IRB approval (or a formal determination that the research is exempt).
28.3.2 IRBs and Data Ethics: The Translation Problem
IRBs were designed for a world of clinical trials and laboratory experiments, not algorithmic systems and platform data. Translating IRB principles to the data age creates several tensions:
What counts as "human subjects research"? A company analyzing its own user data to optimize a product is not conducting "research" in the IRB sense — yet the ethical implications may be identical. Facebook's 2014 "emotional contagion" study, which manipulated users' News Feeds to study the spread of emotions, was reviewed by an IRB at Cornell (where a collaborating researcher was based) but not by Facebook itself (which had no IRB). The study was technically legal but widely condemned as unethical.
Consent and scale. IRB consent processes involve individual, informed agreement — typically through a consent form that participants read and sign. This model doesn't translate to platforms with billions of users whose data is analyzed continuously through A/B tests they don't know about.
Ongoing vs. episodic review. IRBs review discrete research projects with defined start and end dates. Data processing at technology companies is continuous and evolving. A one-time review cannot capture the ethical implications of a system that changes daily.
Independence. Academic IRBs, while imperfect, have institutional independence from the researchers they review. Corporate ethical review mechanisms often lack this independence — the reviewers and the people being reviewed work for the same company and share the same incentives.
28.3.3 Corporate Ethical Review: Emerging Models
Some technology companies have developed internal ethical review processes that draw on IRB principles while adapting to the corporate context:
Microsoft's Responsible AI Impact Assessment (RAIA). A questionnaire-based review process that product teams complete at key development milestones. High-risk systems receive additional review from the Office of Responsible AI.
Meta's Responsible Innovation team. An internal team that reviews products and features for potential harms, including privacy, fairness, and societal impact. The team can recommend modifications but generally lacks veto authority.
Salesforce's Ethical and Humane Use program. A review process specifically focused on how customers use Salesforce products, recognizing that the ethical implications extend beyond the platform itself to how it's deployed.
Key differences from academic IRBs:
| Feature | Academic IRB | Corporate Ethical Review |
|---|---|---|
| Legal mandate | Required for federally funded research | Generally voluntary |
| Independence | Structurally independent from researchers | Typically internal to the company |
| Authority | Can block research from proceeding | Usually advisory; veto rare |
| Scope | Defined research protocols | Ongoing product development |
| Transparency | Protocols publicly registered (for clinical trials) | Generally confidential |
| Accountability | Federal oversight (OHRP) | Self-regulated |
Reflection: Should technology companies be required to have IRB-equivalent review boards for data-intensive products that affect millions of users? What would that requirement look like in practice? Who would oversee the overseers?
28.4 Algorithmic Impact Assessments
28.4.1 Extending Assessments to Algorithmic Systems
Traditional PIAs and DPIAs focus on data processing — the collection, storage, use, and sharing of personal data. But many of the most consequential data-related harms arise not from data processing alone but from the algorithmic systems built on that data.
An Algorithmic Impact Assessment (AIA) extends the impact assessment concept to cover the distinct risks that algorithmic decision-making introduces:
- Bias and discrimination. Does the algorithm produce disparate outcomes for different demographic groups?
- Opacity. Can affected individuals understand how decisions about them were made?
- Autonomy. Does the algorithm constrain or manipulate individual choices?
- Accountability. When the algorithm produces a harmful outcome, who is responsible?
- Drift. Will the algorithm's behavior change over time as conditions evolve?
28.4.2 The Canadian AIA: A Model Framework
Canada's Algorithmic Impact Assessment tool, introduced in 2019 for federal government agencies, provides one of the most developed AIA frameworks. It classifies algorithmic systems into four impact levels based on factors including:
- The type of decision being automated (administrative, regulatory, or rights-affecting)
- The reversibility of the decision
- The duration of impact
- The number of people affected
- The availability of recourse
- Whether the system involves personal data
Each impact level triggers escalating requirements:
| Level | Description | Requirements |
|---|---|---|
| I | Little to no impact on rights, health, or economic interests | Peer review; documentation |
| II | Moderate impact | Peer review; fairness testing; notice to affected individuals |
| III | High impact; difficult-to-reverse decisions | Independent review; bias testing; human oversight; public reporting |
| IV | Very high impact; rights-affecting, irreversible | Independent external audit; human decision-maker required; public reporting; ongoing monitoring |
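The escalation in this table lends itself to simple tooling. The Python sketch below encodes one possible level-to-controls mapping so that project intake systems can surface obligations automatically; the enum, function, and control names are illustrative assumptions, not part of the Canadian tool, which is a questionnaire rather than a library.

```python
from enum import IntEnum

class ImpactLevel(IntEnum):
    """Impact tiers loosely modeled on Canada's AIA levels (illustrative only)."""
    LEVEL_I = 1    # little to no impact
    LEVEL_II = 2   # moderate impact
    LEVEL_III = 3  # high impact; difficult-to-reverse decisions
    LEVEL_IV = 4   # very high impact; rights-affecting, irreversible

# Hypothetical mapping from impact level to the controls listed in the table above.
REQUIRED_CONTROLS = {
    ImpactLevel.LEVEL_I: ["peer review", "documentation"],
    ImpactLevel.LEVEL_II: ["peer review", "fairness testing", "notice to affected individuals"],
    ImpactLevel.LEVEL_III: ["independent review", "bias testing", "human oversight", "public reporting"],
    ImpactLevel.LEVEL_IV: ["independent external audit", "human decision-maker required",
                           "public reporting", "ongoing monitoring"],
}

def controls_for(level: ImpactLevel) -> list[str]:
    """Look up the minimum controls required at a given impact level."""
    return list(REQUIRED_CONTROLS[level])

print(controls_for(ImpactLevel.LEVEL_III))
```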
28.4.3 Key AIA Components
A comprehensive AIA should include:
1. System Description
   - What decisions does the system make or support?
   - What data does it use?
   - What algorithmic methods does it employ?
   - Who are the affected populations?
2. Purpose and Necessity
   - What problem does this system address?
   - Could the same objective be achieved without algorithmic decision-making?
   - What is the human alternative, and what are its limitations?
3. Bias and Fairness Analysis
   - Has the system been tested for disparate impact across demographic groups?
   - Which fairness metrics have been applied (demographic parity, equalized odds, calibration — see Chapter 15)?
   - What trade-offs between fairness definitions have been accepted, and why?
4. Transparency and Explainability
   - Can affected individuals obtain a meaningful explanation of how decisions about them were made?
   - Is the system's logic documented in a way that enables external audit?
   - Are model cards (Chapter 29) available for the underlying models?
5. Human Oversight
   - Is there a human decision-maker who can override the system's output?
   - Under what circumstances is human review mandatory?
   - Are the humans who oversee the system trained to exercise independent judgment?
6. Recourse and Redress
   - Can affected individuals challenge algorithmic decisions?
   - Is the appeals process accessible, timely, and meaningful?
   - Does the organization have a process for correcting errors and compensating harm?
7. Ongoing Monitoring
   - How will the system be monitored for drift, degradation, and emergent bias?
   - What triggers a re-assessment?
   - How frequently is the assessment updated?
Common Pitfall: Many organizations treat AIAs as one-time exercises — assessed at launch, then filed away. But algorithmic systems change over time. Training data is updated. Real-world conditions shift. User populations evolve. An AIA conducted at launch becomes stale within months. Effective assessment requires ongoing monitoring and periodic re-assessment.
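The re-assessment trigger in item 7 can be partly automated. The following minimal sketch, in Python, compares current per-group accuracy against the figures recorded in the original assessment and flags the system for re-assessment when the gap exceeds a tolerance; the group labels, numbers, and 0.05 threshold are hypothetical illustrations rather than a standard.

```python
def needs_reassessment(baseline_by_group, current_by_group, tolerance=0.05):
    """
    Flag groups whose current accuracy has fallen more than `tolerance`
    below the figure recorded in the original assessment.
    Returns (group, baseline, current) triples that breach the tolerance.
    """
    breaches = []
    for group, baseline in baseline_by_group.items():
        current = current_by_group.get(group)
        if current is not None and baseline - current > tolerance:
            breaches.append((group, baseline, current))
    return breaches

# Illustrative figures: per-group accuracy recorded in the AIA vs. this quarter's monitoring run.
baseline = {"under_40": 0.86, "40_to_65": 0.84, "over_65": 0.81}
current = {"under_40": 0.85, "40_to_65": 0.83, "over_65": 0.72}

for group, base, cur in needs_reassessment(baseline, current):
    print(f"Re-assessment trigger: accuracy for {group} fell from {base:.2f} to {cur:.2f}")
```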
28.5 When to Conduct Assessments: Threshold Tests and Proportionality
28.5.1 The Threshold Question
Not every data practice requires a full-scale impact assessment. Assessing every minor data collection — logging which pages of a website users visit, recording meeting attendance — would consume resources without commensurate benefit and trivialize the assessment process.
Threshold tests determine when an assessment is required. A well-designed threshold test uses screening questions to identify processing activities that warrant detailed review:
Threshold Screening Questions

1. Does the processing involve personal data?
   - No → Assessment not required
   - Yes → Continue
2. Is the processing routine and low-risk? (Examples: employee payroll, standard customer service records)
   - Yes → Document rationale; no further assessment required
   - No → Continue
3. Check for high-risk indicators (score 1 point each):
   - Involves sensitive data (health, biometric, financial, children's, racial/ethnic)
   - Involves large-scale processing (>10,000 data subjects)
   - Involves automated decision-making with significant effects
   - Involves systematic monitoring of individuals
   - Involves combining datasets from multiple sources
   - Involves innovative technology or novel application
   - Involves vulnerable populations
   - Involves cross-border data transfers
   - Could result in physical, material, or non-material damage if data is misused
   - Involves data that could be used for purposes not anticipated by data subjects

Scoring:
- 0 indicators → Assessment not required (document rationale)
- 1 indicator → Abbreviated assessment recommended
- 2+ indicators → Full PIA/DPIA required
- 3+ indicators → Full PIA/DPIA + ethics committee review
- GDPR mandatory triggers → Full DPIA required regardless of score
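Because the screening logic above is mechanical, it can be embedded in project-intake tooling. The sketch below covers only the scoring step (question 3), assuming the first two questions have already been answered "yes, personal data" and "no, not routine"; the indicator keys mirror the checklist, but the names are hypothetical and the mapping of scores to outcomes is one organization's policy choice, not a legal standard.

```python
HIGH_RISK_INDICATORS = [
    "sensitive_data",          # health, biometric, financial, children's, racial/ethnic
    "large_scale",             # more than 10,000 data subjects
    "automated_decisions",     # automated decision-making with significant effects
    "systematic_monitoring",   # systematic monitoring of individuals
    "dataset_combination",     # combining datasets from multiple sources
    "innovative_technology",   # innovative technology or novel application
    "vulnerable_populations",
    "cross_border_transfers",
    "potential_for_damage",    # physical, material, or non-material damage if misused
    "unanticipated_purposes",  # uses not anticipated by data subjects
]

def screen(answers: dict[str, bool], gdpr_mandatory_trigger: bool = False) -> str:
    """Apply the scoring step of the threshold test and return the recommended outcome."""
    if gdpr_mandatory_trigger:
        return "Full DPIA required (GDPR mandatory trigger)"
    score = sum(1 for indicator in HIGH_RISK_INDICATORS if answers.get(indicator, False))
    if score >= 3:
        return "Full PIA/DPIA + ethics committee review"
    if score == 2:
        return "Full PIA/DPIA required"
    if score == 1:
        return "Abbreviated assessment recommended"
    return "Assessment not required (document rationale)"

# Example: health data, large-scale processing, and a novel ML technique, so three indicators.
print(screen({"sensitive_data": True, "large_scale": True, "innovative_technology": True}))
```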
28.5.2 Proportionality
The principle of proportionality holds that the depth and rigor of an assessment should be proportional to the risk. A low-risk processing activity warrants a brief screening and documentation. A high-risk activity — deploying a predictive model that affects access to healthcare, credit, or employment — warrants a comprehensive assessment with external review.
Proportionality prevents two failure modes:
Assessment fatigue. If every data practice requires the same level of assessment, teams learn to treat assessments as bureaucratic obstacles rather than genuine ethical exercises. The assessment becomes a checkbox, completed perfunctorily without meaningful reflection.
Assessment avoidance. If the assessment process is so burdensome that teams avoid triggering it — structuring projects to stay just below the threshold, or proceeding without assessment — the process fails to capture the highest-risk activities.
28.6 A Practical PIA/DPIA Template
28.6.1 Template Overview
The following template consolidates best practices from the UK Information Commissioner's Office, the French CNIL, and the EDPB guidelines into a practical document that organizations can adapt to their context.
Real-World Application: This template is designed to be usable, not merely illustrative. If you are building a data-intensive project — for coursework, a startup, or an established organization — you can use this template to conduct a genuine assessment.
PRIVACY IMPACT ASSESSMENT / DATA PROTECTION IMPACT ASSESSMENT
Section 1: Project Overview
| Field | Description |
|---|---|
| Project name | |
| Assessment date | |
| Assessor(s) | |
| Project owner | |
| Project description | (What is being built? What problem does it solve?) |
| Processing purpose | (Why is personal data being processed?) |
| Legal basis | (Consent, legitimate interest, legal obligation, vital interest, public interest, contract) |
| Data subjects | (Who are the people whose data is processed?) |
| Data categories | (What types of personal data are involved?) |
| Data volume | (Approximate number of data subjects and records) |
| Data sources | (Where does the data come from?) |
| Data recipients | (Who will receive or access the data?) |
| Retention period | (How long will data be kept, and why?) |
| Cross-border transfers | (Will data be transferred outside the originating jurisdiction? If so, what safeguards apply?) |
Section 2: Necessity and Proportionality
- Is the processing necessary to achieve the stated purpose?
- Could the same purpose be achieved with less data, less intrusive methods, or without personal data?
- Is the data collection proportionate to the benefit sought?
- Have data minimization principles been applied?
Section 3: Risk Identification
For each identified risk, assess:
| Risk | Likelihood (Low/Med/High) | Severity (Low/Med/High) | Risk Level | Affected Parties |
|---|---|---|---|---|
| (e.g., Unauthorized access to sensitive data) | | | | |
| (e.g., Re-identification of de-identified data) | | | | |
| (e.g., Discriminatory outcomes from algorithmic processing) | | | | |
| (e.g., Data used for purposes beyond original consent) | | | | |
Risk Level Matrix:
| Likelihood \ Severity | Low | Med | High |
|---|---|---|---|
| Low | Low | Low | Med |
| Med | Low | Med | High |
| High | Med | High | High |
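Encoding the matrix as a small lookup keeps risk registers consistent, so that two assessors combining the same likelihood and severity always arrive at the same level. A minimal sketch in Python, assuming the three-point scales used above:

```python
# Lookup encoding the risk level matrix above: (likelihood, severity) -> risk level.
RISK_MATRIX = {
    ("Low", "Low"): "Low",   ("Low", "Med"): "Low",   ("Low", "High"): "Med",
    ("Med", "Low"): "Low",   ("Med", "Med"): "Med",   ("Med", "High"): "High",
    ("High", "Low"): "Med",  ("High", "Med"): "High", ("High", "High"): "High",
}

def risk_level(likelihood: str, severity: str) -> str:
    """Combine likelihood and severity into a risk level using the matrix above."""
    return RISK_MATRIX[(likelihood, severity)]

# Example row from Section 3: medium likelihood, high severity.
print(risk_level("Med", "High"))
```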
Section 4: Mitigation Measures
For each Medium or High risk:
| Risk | Mitigation Measure | Responsible Party | Timeline | Residual Risk |
|---|---|---|---|---|
Section 5: Stakeholder Consultation
- Have data subjects or their representatives been consulted? (If not, why not?)
- Has the Data Protection Officer been consulted?
- Has the ethics committee reviewed this assessment?
- Have technical security experts reviewed the safeguards?
Section 6: Decision and Sign-Off
- [ ] Risks are acceptable after mitigation — proceed
- [ ] Risks remain high — escalate to ethics committee/DPO
- [ ] Risks cannot be mitigated — prior consultation with supervisory authority required
- [ ] Risks cannot be mitigated — do not proceed
| Role | Name | Signature | Date |
|---|---|---|---|
| Project Owner | | | |
| DPO | | | |
| Ethics Committee Chair (if applicable) | | | |
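Organizations that run many assessments often keep completed templates in a machine-readable form so that registers, review reminders, and sign-off tracking can be automated. Below is a minimal Python sketch of such a record; the field names mirror Sections 1 and 3 of the template, while the class names and the annual-review rule are assumptions added for illustration.

```python
from dataclasses import dataclass, field
from datetime import date, timedelta

@dataclass
class Risk:
    """One row of the Section 3 risk table."""
    description: str
    likelihood: str            # "Low" / "Med" / "High"
    severity: str              # "Low" / "Med" / "High"
    mitigation: str = ""
    residual_risk: str = ""

@dataclass
class AssessmentRecord:
    """A completed PIA/DPIA captured as data, mirroring Sections 1 and 3 of the template."""
    project_name: str
    assessment_date: date
    assessors: list[str]
    processing_purpose: str
    legal_basis: str
    data_categories: list[str]
    risks: list[Risk] = field(default_factory=list)
    review_interval_days: int = 365   # assumed policy: re-assess at least annually

    def overdue_for_review(self, today: date | None = None) -> bool:
        """True if the assessment is older than the review interval."""
        today = today or date.today()
        return today - self.assessment_date > timedelta(days=self.review_interval_days)
```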
28.6.2 Using the Template Effectively
The template is a structure, not a straitjacket. Effective use requires:
Honest answers. The template is only useful if the answers are candid. If the purpose field says "to improve user experience" but the actual purpose is "to increase advertising revenue through behavioral profiling," the assessment is worthless.
Genuine consultation. The stakeholder consultation section should reflect real engagement with affected communities, not a note saying "deemed unnecessary."
Iterative use. The assessment should evolve with the project. Re-assess when the processing changes, when new data sources are added, when the system is deployed in a new context.
"I can tell a real DPIA from a fake one in thirty seconds," Sofia Reyes said during a DataRights Alliance webinar. "The real one has uncomfortable answers. It identifies risks that the organization would rather not discuss. The fake one has nothing but green lights and reassuring language. If your DPIA didn't make you reconsider anything about your project, you didn't do it right."
28.7 VitraMed's First DPIA: The Predictive Analytics Platform
28.7.1 Background
VitraMed's predictive analytics platform — the product that has been growing alongside VitraMed since Part 2 — uses patient health records to predict which patients are at risk for developing chronic conditions (diabetes, hypertension, cardiovascular disease). The predictions are used to recommend preventive interventions to clinicians and, more controversially, to provide aggregate risk profiles to insurance partners.
With Mira's new ethics advisory group in place (Chapter 26) and the appointment of Dr. Amina Khoury as VitraMed's first Data Protection Officer, Vikram Chakravarti agreed that the predictive analytics platform should undergo a formal DPIA — VitraMed's first.
28.7.2 The Assessment
Section 1: Project Overview
- Project name: VitraMed Predictive Health Analytics (PHA) Platform v2.0
- Processing purpose: Predict patient risk for chronic conditions to enable preventive care interventions
- Legal basis: Legitimate interest (patient health improvement); consent (for data sharing with insurance partners)
- Data subjects: ~180,000 patients across 520 clinic clients
- Data categories: Demographics (age range, gender, zip code prefix), medical history, diagnoses, medications, lab results, vital signs, visit frequency
- Retention period: 7 years from last clinical visit (HIPAA); model training data retained indefinitely in de-identified form
Section 2: Necessity and Proportionality
Dr. Khoury's assessment identified a proportionality concern: the insurance partner sharing component collected and shared more data than necessary for the stated purpose of patient risk prediction. The clinical prediction model required detailed health data, but the insurance partner only needed aggregate risk scores. Yet VitraMed was sharing patient-level (though de-identified) data with insurance partners, not just aggregate scores.
"That's a proportionality failure," Dr. Khoury wrote. "The stated purpose is preventive care. Sharing patient-level data with insurers exceeds what's necessary for that purpose. We should share only aggregate, clinic-level risk profiles."
Section 3: Risk Identification
| Risk | Likelihood | Severity | Level |
|---|---|---|---|
| Re-identification of de-identified patient data by insurance partners | Medium | High | High |
| Algorithmic bias producing less accurate predictions for minority populations | Medium | High | High |
| Patients unaware their data is used for risk prediction | High | Medium | High |
| Insurance partners using risk profiles to deny coverage | Medium | High | High |
| Model drift degrading prediction accuracy over time | High | Medium | High |
| Data breach exposing sensitive health information | Low | High | Medium |
Section 4: Mitigation Measures
| Risk | Mitigation |
|---|---|
| Re-identification | Switch from patient-level to aggregate-only sharing with insurance partners; apply differential privacy to aggregate reports |
| Algorithmic bias | Conduct fairness audit across demographic groups; retrain model on balanced dataset; implement ongoing bias monitoring |
| Patient awareness | Implement clear patient notification at point of care; provide opt-out mechanism; update privacy notice |
| Insurance misuse | Contractual restrictions on insurance partner use; audit rights; termination clause for policy violations |
| Model drift | Implement automated performance monitoring; re-validate model quarterly; human review of flagged cases |
| Data breach | Encryption at rest and in transit; access controls; incident response plan (Chapter 30) |
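The re-identification mitigation mentions applying differential privacy to the aggregate reports shared with insurance partners. As a flavor of what that could involve, here is a minimal sketch of the Laplace mechanism applied to a single aggregate count; the epsilon value and report structure are hypothetical, and a real deployment would need a vetted library, sensitivity analysis, and privacy-budget accounting.

```python
import random

def dp_count(true_count: int, epsilon: float = 0.5) -> float:
    """
    Release a count with Laplace noise calibrated to sensitivity 1
    (adding or removing one patient changes the count by at most 1).
    """
    scale = 1.0 / epsilon
    # Laplace(0, scale) sampled as the difference of two exponentials with mean `scale`.
    noise = random.expovariate(1 / scale) - random.expovariate(1 / scale)
    return true_count + noise

# Example: noisy count of high-risk patients in one clinic's aggregate report (illustrative).
print(round(dp_count(true_count=142, epsilon=0.5)))
```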
28.7.3 What the DPIA Revealed
The DPIA process revealed three issues that VitraMed's leadership had not previously confronted:
First, patients were largely unaware that their health data was being used for predictive analytics. The consent forms signed at clinic intake mentioned "data processing for healthcare purposes" but did not specifically mention risk prediction or data sharing with insurance partners. This was a consent fiction — patients were technically consenting to something they did not understand.
Second, the predictive model had never been audited for fairness across demographic groups. Preliminary analysis during the DPIA revealed that the model performed significantly less accurately for patients over 65 and for patients from clinics in predominantly Hispanic neighborhoods — likely because these populations were underrepresented in the training data.
Third, the insurance partner sharing arrangement created a power asymmetry that VitraMed had not acknowledged. Patients had no knowledge that their risk profiles were being shared. Insurance partners could potentially use risk profiles to adjust premiums or coverage decisions, despite contractual prohibitions — and VitraMed had no mechanism to verify compliance.
"This is why we do DPIAs," Dr. Khoury told Vikram. "Not because the regulators require it — though they do. But because without the assessment, we would have continued operating a system that our patients would not recognize if they saw it described honestly."
Mira, reviewing the DPIA results, felt a mixture of validation and discomfort. Validation, because the assessment confirmed concerns she had been raising for months. Discomfort, because the company her father built — a company she loved and believed in — had been operating in ways that, once examined, fell short of its own values.
The Power Asymmetry Made Visible: The DPIA revealed what power asymmetry looks like in operational detail. VitraMed collected data from patients who trusted their clinicians. VitraMed processed that data in ways patients didn't know about. VitraMed shared results with parties patients didn't choose. At every stage, the power to decide rested with VitraMed; the vulnerability rested with patients. The DPIA didn't create this asymmetry — it made it visible.
28.8 Case Studies
28.8.1 A Model DPIA: Assessing a Facial Recognition Deployment
Background: The London Metropolitan Police (Met) proposed deploying live facial recognition (LFR) technology at major public events and in specific high-crime areas. The system would compare faces captured by cameras against a watchlist of individuals wanted by police.
The DPIA process:
The Met conducted a DPIA (made public in 2019) that addressed:
- Purpose and legal basis: Prevention and detection of crime; legitimate interest under GDPR; specific statutory authority under UK law.
- Data subjects: Anyone whose face is captured by the cameras, whether or not they are on the watchlist. This means the vast majority of processed data belongs to people who are not suspects.
- Necessity and proportionality: The Met argued that LFR enabled faster identification of wanted individuals compared to manual methods. Critics argued that the same objective could be achieved through less intrusive means (more officers, better CCTV analysis, community cooperation).
Risk identification (selected):
| Risk | Identified Severity |
|---|---|
| False positive identification leading to wrongful stop | High |
| Chilling effect on public assembly and protest | High |
| Disproportionate impact on Black and ethnic minority individuals (due to documented bias in facial recognition accuracy) | High |
| Mass surveillance normalization | High |
| Function creep (watchlist expansion beyond original scope) | Medium |
Critique of the DPIA:
Privacy advocates and the UK's independent Biometrics and Surveillance Camera Commissioner identified several weaknesses:
- The necessity test was weak. The DPIA assumed LFR was necessary without rigorously evaluating alternatives.
- The proportionality analysis was insufficient. Processing biometric data from thousands of non-suspects to identify a handful of wanted individuals was disproportionate, critics argued.
- Bias was acknowledged but not resolved. The DPIA noted that facial recognition systems performed less accurately on darker-skinned faces but proceeded with deployment, relying on "operator training" as a mitigation — a measure widely considered inadequate.
- Community consultation was limited. The DPIA noted public engagement but critics argued that meaningful consultation with affected communities (particularly Black communities disproportionately impacted by policing decisions) was absent.
Legal aftermath: In August 2020, the UK Court of Appeal ruled in R (Bridges) v. South Wales Police that the use of LFR by South Wales Police was unlawful, in part because the DPIA was insufficiently rigorous. The ruling established that DPIAs for LFR must demonstrate genuine necessity, address bias risk with concrete mitigations, and involve meaningful community consultation.
Key lesson: A DPIA that acknowledges risks without genuinely addressing them is performative, not protective. The Met's DPIA identified the right risks but proposed insufficient mitigations — and the courts noticed.
28.8.2 Ethical Review in Tech: Google's AI Ethics Board Controversy
Background: As discussed briefly in Chapter 26, Google's Advanced Technology External Advisory Council (ATEAC) was formed in March 2019 to provide ethical guidance on AI development. The board lasted one week before being dissolved amid controversy over its composition.
What ATEAC was supposed to do: Provide external ethical review of Google's AI projects, particularly those with significant societal implications — a function analogous to an institutional review board for AI development.
Why it failed:
- Composition reflected political balance rather than ethical expertise. Board members were chosen to represent a range of perspectives but included individuals whose views on LGBTQ+ rights and immigration were actively harmful to communities affected by Google's technology.
- No charter, no process, no authority. ATEAC was announced without a published charter, review process, or decision-making authority. It was a board without a job description.
- No internal buy-in. Over 2,500 Google employees signed a petition opposing the board's composition — suggesting that the board was formed by leadership without adequate consultation with the workforce that would be affected by its decisions.
- No integration with product development. ATEAC was an external advisory body with no connection to Google's actual product development pipeline. Even if it had survived, its advice would have been structurally disconnected from the engineering decisions that shape AI products.
What Google learned (and what it did afterward): Following ATEAC's dissolution, Google invested in internal responsible AI processes — the Responsible AI and Human-Centered Technology team, internal review processes for sensitive AI applications, and published AI Principles. Whether these internal mechanisms constitute adequate ethical review remains debated, particularly following the dismissal of prominent AI ethics researchers Timnit Gebru and Margaret Mitchell in 2020-2021.
Key lesson: An ethical review mechanism that lacks process, independence, authority, and integration with actual decision-making is worse than useless — it provides a false sense of ethical oversight while actual products proceed unreviewed. The ATEAC experiment demonstrates that ethical review requires institutional infrastructure, not just institutional ambition.
The Accountability Gap: Google's ATEAC failure exposed a fundamental gap: who reviews the reviewer? When an organization's ethical review mechanism fails — through poor design, political compromise, or institutional capture — there is typically no external body with the authority or knowledge to intervene. This gap is particularly concerning for the most powerful technology companies, whose AI products affect billions of people.
28.9 Chapter Summary
Key Concepts
- Privacy Impact Assessments (PIAs) are systematic evaluations of data processing activities to identify privacy risks and design mitigations. They are good practice for any significant data processing.
- Data Protection Impact Assessments (DPIAs) are legally required under GDPR Article 35 when processing is likely to result in high risk to individuals' rights and freedoms.
- Algorithmic Impact Assessments (AIAs) extend the assessment framework to cover the distinct risks of algorithmic decision-making: bias, opacity, autonomy, accountability, and drift.
- Threshold tests determine when an assessment is required, applying proportionality to avoid both assessment fatigue and assessment avoidance.
- Academic IRBs and corporate ethical review serve similar functions but differ in legal mandate, independence, authority, and accountability. The translation from academic to corporate context remains incomplete.
- VitraMed's first DPIA revealed consent fiction, fairness gaps, and power asymmetries that had been invisible without formal assessment — demonstrating the assessment's value as a diagnostic tool, not just a compliance requirement.
Key Debates
- Should DPIAs be publicly available, or does commercial confidentiality justify keeping them private?
- Are Algorithmic Impact Assessments sufficient without independent external audit?
- Should technology companies be legally required to maintain IRB-equivalent review boards for products affecting large populations?
- Is self-assessment inherently compromised — can organizations honestly evaluate their own data practices?
- How should proportionality be applied when the potential harm is severe but uncertain?
Applied Framework
When evaluating any data processing activity, apply the Five Questions of Assessment:
1. Necessity: Is this processing necessary for the stated purpose, or could the purpose be achieved with less data or less intrusive methods?
2. Proportionality: Is the scope of processing proportionate to the benefit sought?
3. Fairness: Does the processing affect different groups differently, and if so, is the differential treatment justified?
4. Transparency: Are affected individuals aware of the processing and able to understand its implications?
5. Accountability: If something goes wrong, who is responsible, and what recourse do affected individuals have?
What's Next
In Chapter 29: Responsible AI Development, we move from assessing data practices to building AI systems responsibly from the start. We'll examine responsible AI frameworks, model cards, datasheets for datasets, red-teaming, and deployment monitoring. Chapter 29 also introduces the ModelCard Python dataclass — a tool for documenting AI models in a format that supports transparency, accountability, and ethical review. VitraMed will create its first model card for the clinical prediction model that the DPIA just assessed.
Before moving on, complete the exercises and quiz to practice conducting impact assessments and evaluating real-world assessment processes.