Quiz: Privacy Impact Assessments and Ethical Reviews
Test your understanding before moving to the next chapter. Target: 70% or higher to proceed.
Section 1: Multiple Choice (1 point each)
1. Which of the following best describes the relationship between a PIA and a DPIA?
- A) A PIA is more rigorous than a DPIA because it is always legally required
- B) A DPIA is a GDPR-specific, legally mandated version of the broader PIA framework, with defined triggers and documentation requirements
- C) PIAs are for public sector organizations; DPIAs are for private companies
- D) PIAs assess privacy risks; DPIAs assess data protection technology
Answer
**B)** A DPIA is a GDPR-specific, legally mandated version of the broader PIA framework, with defined triggers and documentation requirements. *Explanation:* Section 28.1.2 distinguishes the two: a PIA is a general good-practice tool, used voluntarily. A DPIA is specifically required by GDPR Article 35 under defined conditions and has legally specified requirements. The processes are similar, but the DPIA adds legal specificity. Option A is incorrect (PIAs are typically voluntary). Option C is incorrect (both can apply to any sector). Option D mischaracterizes the DPIA scope.

2. Under GDPR Article 35, a DPIA is mandatory when:
- A) Any personal data is processed for any purpose
- B) Processing involves data from fewer than 100 individuals
- C) Processing is likely to result in a high risk to the rights and freedoms of natural persons
- D) An organization processes data outside the EU
Answer
**C)** Processing is likely to result in a high risk to the rights and freedoms of natural persons. *Explanation:* Section 28.2.1 states the Article 35 trigger directly. The regulation specifies three mandatory examples (systematic automated profiling with significant effects, large-scale processing of special categories, systematic monitoring of public areas) and additional criteria that, in combination, indicate high risk. Not all personal data processing requires a DPIA -- only processing that is likely to result in high risk.

3. The EDPB guidelines suggest that a DPIA is almost certainly required when how many of their high-risk criteria are present?
- A) Any one criterion
- B) Two or more criteria
- C) Five or more criteria
- D) All criteria must be present
Answer
**B)** Two or more criteria. *Explanation:* Section 28.2.1 states the rule of thumb: "If two or more criteria are present, a DPIA is almost certainly required." The nine EDPB criteria (evaluation/scoring, automated decisions with legal effects, systematic monitoring, sensitive data, large-scale processing, combining datasets, vulnerable subjects, innovative technology, cross-border transfers) are additive indicators of risk.
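For readers who want to operationalize the screening step, here is a minimal sketch of the two-or-more rule in Python. The criterion identifiers and the `dpia_screening` helper are illustrative assumptions, not an official EDPB tool.

```python
# Minimal sketch of the EDPB screening rule of thumb from Section 28.2.1.
# Criterion names and this helper are illustrative, not an official tool.

EDPB_CRITERIA = {
    "evaluation_or_scoring",
    "automated_decisions_with_legal_effects",
    "systematic_monitoring",
    "sensitive_data",
    "large_scale_processing",
    "combining_datasets",
    "vulnerable_subjects",
    "innovative_technology",
    "cross_border_transfers",
}

def dpia_screening(present: set[str]) -> str:
    """Apply the 'two or more criteria' rule to the criteria found present."""
    unknown = present - EDPB_CRITERIA
    if unknown:
        raise ValueError(f"unrecognized criteria: {sorted(unknown)}")
    if len(present) >= 2:
        return "DPIA almost certainly required"
    if len(present) == 1:
        return "possible high risk -- assess further"
    return "no EDPB high-risk criteria present"

# Example: systematic monitoring at large scale trips the threshold.
print(dpia_screening({"systematic_monitoring", "large_scale_processing"}))
```

A counter like this only captures the additive rule of thumb; real screening still involves the qualitative weighing the chapter describes.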
4. Prior consultation under GDPR Article 36 is required when:
- A) Any DPIA is conducted
- B) The DPIA reveals that processing involves children's data
- C) After completing the DPIA, residual risk remains high despite mitigations
- D) The organization is processing data for the first time
Answer
**C)** After completing the DPIA, residual risk remains high despite mitigations. *Explanation:* Section 28.2.3 explains that prior consultation with the supervisory authority is required when the organization cannot reduce risk to an acceptable level through mitigations. Dr. Adeyemi calls it "the emergency brake" -- if the DPIA reveals risks that cannot be adequately mitigated, the organization must consult the regulator before proceeding.

5. The Institutional Review Board (IRB) system originated in response to:
- A) The development of the internet and digital data collection
- B) Research ethics scandals, most notoriously the Tuskegee Syphilis Study
- C) The passage of the GDPR in 2016
- D) Corporate data breaches in the early 2000s
Answer
**B)** Research ethics scandals, most notoriously the Tuskegee Syphilis Study. *Explanation:* Section 28.3.1 traces IRBs to the Belmont Report (1979), which was itself a response to the Tuskegee Syphilis Study (1932-1972) and other research ethics violations. The IRB system was designed to protect human research subjects by requiring ethical review before data collection begins. It long predates the internet, the GDPR, and modern corporate data practices.

6. According to the chapter, which of the following is a key difference between academic IRBs and corporate ethical review processes?
- A) IRBs review technology products while corporate boards review research
- B) IRBs have legal mandates and structural independence; corporate review is generally voluntary and internal
- C) IRBs meet less frequently than corporate review boards
- D) IRBs focus on data quality while corporate boards focus on data privacy
Answer
**B)** IRBs have legal mandates and structural independence; corporate review is generally voluntary and internal. *Explanation:* Section 28.3.3 presents a detailed comparison table. Academic IRBs are legally required for federally funded research, structurally independent from researchers, and subject to federal oversight (OHRP). Corporate ethical review is generally voluntary, internal to the company, usually advisory rather than binding, and self-regulated. This structural difference has significant implications for effectiveness.

7. The Canadian Algorithmic Impact Assessment classifies systems into four impact levels. A Level IV system requires:
- A) Only peer review and documentation
- B) Peer review and fairness testing
- C) Independent external audit, human decision-maker, public reporting, and ongoing monitoring
- D) No specific requirements beyond normal development practices
Answer
**C)** Independent external audit, human decision-maker, public reporting, and ongoing monitoring. *Explanation:* Section 28.4.2 describes the four-level Canadian AIA framework. Level IV systems involve very high impact with rights-affecting, potentially irreversible decisions. They require the most stringent safeguards: independent external audit, a human decision-maker in the loop, public reporting of the assessment and findings, and ongoing monitoring.
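As a rough illustration of how the four-level structure maps to obligations, the sketch below encodes the Level IV safeguards named in Section 28.4.2; the entries for Levels I-III are placeholders inferred from the answer options, not the framework's exact requirements.

```python
# Sketch of the Canadian AIA's level-to-safeguards mapping. Level 4 (IV)
# follows Section 28.4.2; Levels 1-3 are placeholder assumptions.

AIA_SAFEGUARDS = {
    1: ["documentation"],
    2: ["documentation", "peer review"],
    3: ["peer review", "fairness testing", "human oversight"],
    4: ["independent external audit",
        "human decision-maker in the loop",
        "public reporting of the assessment and findings",
        "ongoing monitoring"],
}

def safeguards_for(impact_level: int) -> list[str]:
    """Return the safeguards an impact level triggers (1 = lowest, 4 = highest)."""
    try:
        return AIA_SAFEGUARDS[impact_level]
    except KeyError:
        raise ValueError("impact level must be between 1 and 4") from None

print(safeguards_for(4))
```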
8. Sofia Reyes states: "If your DPIA didn't make you reconsider anything about your project, you didn't do it right." This statement reflects the principle that:
- A) DPIAs should always recommend cancelling the project
- B) A genuine DPIA should identify risks and considerations that prompt meaningful project modifications
- C) DPIAs are designed to create obstacles for product development
- D) Only DPIAs that find major violations are useful
Answer
**B)** A genuine DPIA should identify risks and considerations that prompt meaningful project modifications. *Explanation:* Section 28.6.2 uses Sofia's quote to distinguish genuine assessment from performative assessment. A real DPIA identifies uncomfortable truths -- risks the organization would rather not discuss, consent gaps, proportionality failures. If the assessment produces only affirmative findings and no modifications, it was conducted as a checkbox exercise rather than a genuine ethical evaluation.

9. VitraMed's DPIA (Section 28.7) revealed three previously unconfronted issues. Which of the following was NOT one of them?
- A) Patients were largely unaware their health data was being used for predictive analytics
- B) The predictive model had never been audited for fairness across demographic groups
- C) VitraMed was in violation of HIPAA's data retention requirements
- D) The insurance partner sharing arrangement created a power asymmetry patients did not know about
Answer
**C)** VitraMed was in violation of HIPAA's data retention requirements. *Explanation:* Section 28.7.3 identifies three issues the DPIA revealed: (1) patients were unaware of the predictive analytics use, (2) the model had never been audited for fairness, and (3) the insurance sharing created an unacknowledged power asymmetry. HIPAA retention violations were not among the findings -- VitraMed was described as HIPAA compliant earlier in the chapter.

10. The chapter describes two failure modes of assessment processes. Assessment fatigue occurs when:
- A) Organizations conduct so many assessments that each is treated as a bureaucratic checkbox rather than a genuine ethical exercise
- B) Organizations refuse to conduct any assessments due to cost concerns
- C) Regulators become overwhelmed by the volume of prior consultation requests
- D) Assessment tools become technologically obsolete
Answer
**A)** Organizations conduct so many assessments that each is treated as a bureaucratic checkbox rather than a genuine ethical exercise. *Explanation:* Section 28.5.2 identifies assessment fatigue as one of two failure modes (the other is assessment avoidance). When every data practice requires the same level of assessment regardless of risk, teams learn to treat assessments as bureaucratic obstacles, completing them perfunctorily without meaningful reflection. Proportionality prevents this by calibrating assessment depth to risk level.

Section 2: True/False with Justification (1 point each)
11. "A PIA is only necessary when legally required by regulation."
Answer
**False.** *Explanation:* Section 28.1.3 makes the explicit distinction: "The DPIA is legally required. But a PIA is ethically required for any significant data processing, whether the law mandates it or not." Dr. Adeyemi states: "The law tells you when you must assess. Ethics tells you when you should assess." A PIA is good practice for any significant data processing activity, regardless of legal mandate.

12. "The PIA/DPIA template in Section 28.6 is designed to be used exactly as written, without modification."
Answer
**False.** *Explanation:* Section 28.6.2 states explicitly: "The template is a structure, not a straitjacket." The template consolidates best practices from the UK ICO, French CNIL, and EDPB guidelines into a practical starting point that organizations should adapt to their context. Effective use requires honest answers, genuine consultation, and iterative application as the project evolves.

13. "An Algorithmic Impact Assessment can be conducted once at deployment and need not be updated."
Answer
**False.** *Explanation:* Section 28.4.3 explicitly warns against this: "Many organizations treat AIAs as one-time exercises -- assessed at launch, then filed away. But algorithmic systems change over time. Training data is updated. Real-world conditions shift. User populations evolve. An AIA conducted at launch becomes stale within months. Effective assessment requires ongoing monitoring and periodic re-assessment."

14. "Google's ATEAC (Advanced Technology External Advisory Council) demonstrates how external advisory boards can provide effective ethical oversight."
Answer
**False.** *Explanation:* Section 28.8.2 presents ATEAC as a failure -- the board lasted one week before dissolution. It failed because it had no charter, no process, no authority, no integration with product development, and problematic composition. The chapter uses ATEAC as an example of what *not* to do: an ethical review mechanism lacking process, independence, authority, and integration is "worse than useless."

15. "VitraMed's DPIA revealed that its predictive analytics platform was fully transparent to patients about how their data was being used."
Answer
**False.** *Explanation:* Section 28.7.3 reveals the opposite: patients were largely unaware that their health data was being used for predictive analytics. The consent forms mentioned "data processing for healthcare purposes" but did not specifically mention risk prediction or data sharing with insurance partners. The DPIA revealed this as a consent fiction.

Section 3: Short Answer (2 points each)
16. Explain how the PIA/DPIA process serves as a diagnostic tool, not just a compliance requirement. Use VitraMed's DPIA as your primary example.
Sample Answer
The PIA/DPIA process forces organizations to systematically examine their data practices -- mapping data flows, evaluating necessity and proportionality, identifying risks, and designing mitigations. This structured examination surfaces issues that informal awareness misses. VitraMed's DPIA is the primary example: before the assessment, leadership was unaware that patients did not understand their data was being used for risk prediction, that the predictive model had never been tested for fairness across demographic groups, and that the insurance partner arrangement created a power asymmetry. These were not technical bugs -- they were structural governance gaps that would have remained invisible without formal assessment. The DPIA's value was not the document it produced but the organizational learning it forced: "we would have continued operating a system that our patients would not recognize if they saw it described honestly" (Dr. Khoury).

*Key points for full credit:*
- Explains the diagnostic function (surfacing hidden issues)
- References specific VitraMed findings
- Distinguishes diagnostic value from compliance value

17. The UK Court of Appeal ruled that South Wales Police's facial recognition DPIA was insufficiently rigorous. Identify two specific weaknesses the court found and explain why each undermined the assessment's credibility.
Sample Answer
First, the necessity test was weak: the DPIA assumed facial recognition was necessary without rigorously evaluating less intrusive alternatives (more officers, better conventional CCTV, community cooperation). This undermined credibility because the necessity test is a fundamental DPIA component -- if the assessment does not genuinely evaluate whether the processing is necessary, it cannot be considered a rigorous evaluation. Second, bias was acknowledged but not resolved: the DPIA noted that facial recognition performs less accurately on darker-skinned faces but proceeded with deployment, relying on "operator training" as a mitigation. This undermined credibility because acknowledging a risk and then proposing an inadequate mitigation is performative rather than protective. The court recognized that a DPIA that identifies the right risks but proposes insufficient mitigations does not fulfill its legal purpose.

*Key points for full credit:*
- Identifies two specific weaknesses from the case
- Explains why each undermines credibility
- Connects to broader DPIA principles

18. What is the "accountability gap" that the ATEAC failure exposed, and why is it particularly concerning for the most powerful technology companies?
Sample Answer
The accountability gap is the question of who reviews the reviewer. When an organization's ethical review mechanism fails -- through poor design, political compromise, or institutional capture -- there is typically no external body with the authority or knowledge to intervene. ATEAC collapsed due to composition failures, lack of process, and lack of authority, but no external entity could require Google to establish a better mechanism. This gap is particularly concerning for the most powerful technology companies because their AI products affect billions of people, yet their ethical review mechanisms are entirely self-designed and self-regulated. If Google's internal review fails, no external authority currently has the mandate to require improvement. The accountability gap means that the organizations with the greatest power and impact face the least external oversight of their ethical governance.

*Key points for full credit:*
- Defines the accountability gap accurately
- Connects to the ATEAC example
- Explains why it is especially concerning for powerful companies

19. Explain the concept of proportionality in the context of impact assessments. Give one example of over-assessment and one example of under-assessment, explaining the harm each causes.
Sample Answer
Proportionality holds that the depth and rigor of an assessment should match the level of risk: low-risk processing warrants brief screening, high-risk processing warrants comprehensive review. Over-assessment example: Requiring a full DPIA with external review for routine employee payroll processing (which involves standard personal data with low risk). The harm is assessment fatigue -- teams learn to treat DPIAs as meaningless paperwork, completing them perfunctorily. When a genuinely high-risk project comes along, the assessment process has been trivialized and cannot serve its diagnostic function. Under-assessment example: Applying only a brief screening questionnaire to a predictive policing algorithm deployed in communities with documented histories of racial profiling. The harm is that serious risks go unidentified -- bias, disparate impact, chilling effects on community activity -- because the assessment was not rigorous enough to surface them.

*Key points for full credit:*
- Defines proportionality accurately
- Provides one example of each failure mode
- Explains the specific harm of each
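Proportionality lends itself to a simple calibration rule: assessment depth scales with the number of high-risk indicators found at screening. The sketch below is one hedged way to express that, reusing the EDPB two-criteria threshold and the four-indicator escalation that appears in the CityWatch sample answer below; the tier names are assumptions, not the chapter's terms.

```python
# Illustrative proportionality sketch: calibrate assessment depth to the
# number of high-risk indicators found at screening. Thresholds echo the
# EDPB two-criteria rule and the CityWatch sample answer's escalation at
# four indicators; tier names are assumptions, not the chapter's terms.

def assessment_tier(indicator_count: int) -> str:
    """Map a screening indicator count to a proportionate assessment depth."""
    if indicator_count >= 4:
        return "full DPIA plus ethics committee review"
    if indicator_count >= 2:
        return "full DPIA"
    if indicator_count == 1:
        return "targeted review of the single flagged risk"
    return "brief screening record; no further assessment"

# Routine payroll processing vs. a CityWatch-style deployment:
print(assessment_tier(0))  # brief screening record; no further assessment
print(assessment_tier(5))  # full DPIA plus ethics committee review
```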
Section 4: Applied Scenario (5 points)

20. Read the following scenario and answer all parts.
Scenario: CityWatch
A mid-size city government proposes deploying an integrated "CityWatch" system in its downtown district. The system combines:
- 200 high-definition cameras with automated license plate recognition
- WiFi probe request monitoring to estimate pedestrian density
- Acoustic gunshot detection sensors
- Integration with the city's 911 dispatch system
The stated purpose is "public safety and emergency response optimization." The system will process data from an estimated 50,000 people who pass through the downtown area daily. Data will be retained for 90 days. The system is provided by a private vendor that retains a copy of all data for "system improvement and analytics."
(a) Using the DPIA threshold screening questions from Section 28.5.1, determine whether a DPIA is required. Score each applicable criterion. (1 point)
(b) Complete a risk identification table (Section 28.6.1, Section 3) for CityWatch. Identify at least five risks with likelihood and severity ratings. (1 point)
(c) For the two highest-rated risks, propose specific mitigation measures. Be concrete -- not "improve privacy" but specific technical, policy, or procedural mitigations. (1 point)
(d) The private vendor retains a copy of all data for "system improvement and analytics." Evaluate this arrangement using the necessity and proportionality tests from Section 28.6.1, Section 2. Is this data sharing necessary? Is it proportionate? (1 point)
(e) Dr. Adeyemi's "Five Questions of Assessment" (Section 28.9) asks about necessity, proportionality, fairness, transparency, and accountability. Apply all five to CityWatch. For each, state the question and provide your assessment. (1 point)
Sample Answer
**(a)** DPIA scoring:
- Involves personal data? Yes (images, location, license plates, device identifiers). Continue.
- Routine and low-risk? No. Continue.
- High-risk indicators:
  - Systematic monitoring of individuals: YES (cameras, WiFi tracking)
  - Large-scale processing: YES (50,000 people daily)
  - Innovative technology: YES (integrated multi-sensor system)
  - Data of a highly personal nature: YES (location, movement patterns)
  - Vulnerable populations: POSSIBLE (may capture children, homeless individuals, protestors)
  - Cross-border transfers: POSSIBLE (if vendor is based outside the jurisdiction)

Score: 4-6 indicators. Full DPIA required, plus ethics committee review.

**(b)** Risk identification:

| Risk | Likelihood | Severity | Level |
|------|-----------|----------|-------|
| Mass surveillance normalization -- 50,000 daily subjects monitored without individual consent | High | High | High |
| Chilling effect on public assembly, protest, and free movement in downtown area | Medium | High | High |
| Mission creep -- data collected for "public safety" used for immigration enforcement, protest monitoring, or commercial purposes | Medium | High | High |
| Disproportionate impact on communities of color (documented disparities in surveillance deployment and policing) | Medium | High | High |
| Vendor data retention creates a secondary surveillance database outside government control | High | Medium | High |

**(c)** Mitigations for top two risks:

Mass surveillance normalization: Implement strict data minimization -- cameras operate only in response to triggered events (gunshot detection, 911 dispatch), not continuously. WiFi monitoring aggregates counts without capturing individual device identifiers. License plate data is purged within 24 hours unless linked to an active investigation. Signage notifies all entrants of monitoring.

Chilling effect: Establish geographic exclusion zones around locations of public assembly (protest areas, houses of worship, political party offices). Implement a policy prohibiting use of CityWatch data for monitoring lawful protest or political activity. Create an independent civilian oversight board with access to system logs and the authority to audit use.

**(d)** The vendor's data retention for "system improvement and analytics" fails both tests. Necessity: The city's stated purpose is public safety; vendor analytics is not necessary for that purpose. The city can achieve its goals without the vendor retaining a separate copy. Proportionality: Retaining surveillance data from 50,000 daily subjects for vendor "improvement" is disproportionate to the benefit. The vendor could improve its system using synthetic data, aggregated statistics, or a limited, anonymized sample rather than retaining complete surveillance records. The DPIA should recommend that the vendor be contractually prohibited from retaining identifiable data.

**(e)** Five Questions of Assessment applied to CityWatch:

1. **Necessity:** Is continuous multi-sensor surveillance of 50,000 people daily necessary for public safety? Less intrusive alternatives exist (more visible police presence, community policing, targeted deployment in response to specific threats). The system captures vastly more data than is needed for emergency response.
2. **Proportionality:** The scope is disproportionate. A system designed to detect gunshots does not require WiFi tracking or license plate recognition. Each sensor type should be independently justified, and the combination creates surveillance capability far exceeding any single stated purpose.
3. **Fairness:** The system will likely be deployed in areas with higher concentrations of Black and Latino residents (downtown areas, commercial districts adjacent to lower-income neighborhoods), replicating documented patterns of disproportionate surveillance. Fairness requires that deployment and use be audited for demographic equity.
4. **Transparency:** The 50,000 daily subjects have no meaningful awareness of the system's capabilities. Transparency requires prominent public notice, published policies on data use and retention, and public reporting on system performance and use.
5. **Accountability:** If the system is misused -- data accessed for unauthorized purposes, false identifications leading to wrongful stops -- who is accountable? The current proposal does not specify accountability mechanisms. An independent oversight body, audit logs, and a complaint mechanism are essential.
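To make the likelihood and severity ratings in part (b) reproducible, here is a minimal risk-matrix sketch. The three-point scale and the level thresholds are illustrative assumptions, not the template's official scoring rules.

```python
# Minimal risk-matrix sketch for a DPIA risk table like part (b) above.
# The 3-point scale and level thresholds are illustrative assumptions.

RATING = {"Low": 1, "Medium": 2, "High": 3}

def risk_level(likelihood: str, severity: str) -> str:
    """Combine likelihood and severity into an overall risk level."""
    score = RATING[likelihood] * RATING[severity]
    if score >= 6:   # e.g., High x Medium
        return "High"
    if score >= 3:   # e.g., Low x High, Medium x Medium
        return "Medium"
    return "Low"

for name, likelihood, severity in [
    ("Mass surveillance normalization", "High", "High"),
    ("Chilling effect on public assembly", "Medium", "High"),
    ("Vendor-held secondary database", "High", "Medium"),
]:
    print(f"{name}: {risk_level(likelihood, severity)}")
```

All three sample risks come out High, matching the table in part (b).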
Scoring & Review Recommendations

| Score Range | Assessment | Next Steps |
|---|---|---|
| Below 50% (< 14 pts) | Needs review | Re-read Sections 28.1-28.5, redo Part A exercises |
| 50-69% (14-19 pts) | Partial understanding | Review specific weak areas, attempt a mini-PIA |
| 70-85% (20-23 pts) | Solid understanding | Ready to proceed to Chapter 29 |
| Above 85% (24-28 pts) | Strong mastery | Proceed to Chapter 29: Responsible AI Development |

| Section | Points Available |
|---|---|
| Section 1: Multiple Choice | 10 points (10 questions x 1 pt) |
| Section 2: True/False with Justification | 5 points (5 questions x 1 pt) |
| Section 3: Short Answer | 8 points (4 questions x 2 pts) |
| Section 4: Applied Scenario | 5 points (5 parts x 1 pt) |
| Total | 28 points |