Exercises: Privacy Impact Assessments and Ethical Reviews

These exercises progress from concept checks to challenging applications. Estimated completion time: 3-4 hours.

Difficulty Guide:

  • * Foundational (5-10 min each)
  • ** Intermediate (10-20 min each)
  • *** Challenging (20-40 min each)
  • **** Advanced/Research (40+ min each)


Part A: Conceptual Understanding *

Test your grasp of core concepts from Chapter 28.

A.1. Explain the difference between a PIA and a DPIA (Section 28.1.2). Under what circumstances is a DPIA legally required, and when is a PIA ethically required even without a legal mandate?

A.2. Section 28.2.1 identifies three mandatory DPIA triggers under GDPR Article 35 and nine additional high-risk criteria from the EDPB. List the three mandatory triggers and explain the "rule of thumb" for when a DPIA is almost certainly required.

A.3. What are the four minimum components of a DPIA as specified by Article 35(7)? For each, explain in one sentence what the component is designed to accomplish.

A.4. Section 28.3.1 describes the Belmont Report's three principles governing human subjects research. Name each principle and explain how it translates (or fails to translate) to corporate data ethics review.

A.5. What is "prior consultation" under GDPR Article 36 (Section 28.2.3)? Dr. Adeyemi calls it "the emergency brake." Under what specific circumstance is prior consultation required?

A.6. Explain the concept of an Algorithmic Impact Assessment (AIA) and describe how it extends traditional PIAs and DPIAs (Section 28.4.1). Identify the five distinct risks that AIAs address that data-focused assessments may miss.

A.7. Section 28.5 discusses threshold tests and proportionality. Define each concept and explain why both are necessary -- that is, why would an organization need both a threshold test (to determine whether to assess) and proportionality (to determine how deeply to assess)?


Part B: Applied Analysis **

Analyze scenarios, arguments, and real-world situations using concepts from Chapter 28.

B.1. A university proposes to install WiFi-based location tracking in its libraries to analyze study patterns and optimize resource allocation (seating, power outlets, quiet zones). Using the DPIA threshold screening questions from Section 28.5.1, determine whether a full DPIA is required. Score each applicable criterion and justify your conclusion.
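As a scaffold for scoring B.1, the tally can be sketched in code. This is a hypothetical helper, not part of the chapter's template; the criteria names below paraphrase the EDPB high-risk list referenced in A.2, and the two-criteria rule of thumb is an assumption about that guidance, not a quotation from it.

```python
# Hypothetical DPIA threshold tally for the library WiFi-tracking scenario.
# Criteria paraphrase the EDPB high-risk list; mapping the scenario onto
# each criterion is the student's judgment call, not an official answer.

EDPB_CRITERIA = [
    "evaluation or scoring",
    "automated decisions with legal or similar effect",
    "systematic monitoring",
    "sensitive or highly personal data",
    "data processed on a large scale",
    "matching or combining datasets",
    "data concerning vulnerable subjects",
    "innovative use of technology",
    "processing that prevents exercising a right or using a service",
]

def dpia_required(applicable: set[str]) -> bool:
    """Rule of thumb: two or more criteria -> a DPIA is almost certainly required."""
    unknown = applicable - set(EDPB_CRITERIA)
    if unknown:
        raise ValueError(f"unrecognized criteria: {unknown}")
    return len(applicable) >= 2

# One defensible mapping for the library scenario (illustrative, debatable):
hits = {"systematic monitoring", "data processed on a large scale"}
print(dpia_required(hits))  # True -> a full DPIA is likely required
```

The point of writing the tally down is that every scored criterion must be justified explicitly, which is exactly what the exercise asks for.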

B.2. Sofia Reyes says: "If your DPIA didn't make you reconsider anything about your project, you didn't do it right" (Section 28.6.2). Apply this test to VitraMed's DPIA in Section 28.7. What specifically did the DPIA make VitraMed reconsider? Identify at least three findings that changed the project.

B.3. Section 28.3.2 describes four tensions in translating IRB principles to the data age: what counts as research, consent and scale, ongoing vs. episodic review, and independence. For each tension, provide one example from the technology industry that illustrates the problem.

B.4. Using the PIA/DPIA template from Section 28.6.1, complete Sections 1 and 3 (Project Overview and Risk Identification) for the following scenario:

A ride-sharing company plans to use driver GPS data, passenger ratings, and trip completion rates to build a "driver reliability score." Drivers with low scores will receive fewer ride requests. The company serves 50,000 drivers across 20 cities.

Identify at least five risks and rate each for likelihood and severity.
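One way to keep the five ratings in B.4 comparable is a simple risk register. The sketch below assumes a 1-to-3 ordinal scale for likelihood and severity and ranks risks by their product; the scale and the risk descriptions are placeholders for the student's own findings, not the chapter's answer key.

```python
# Hypothetical risk register for the driver-reliability-score scenario.
# Scales are assumed (1 = low, 2 = medium, 3 = high); the chapter's
# template may use different labels.

risks = [
    # (description, likelihood, severity)
    ("Opaque score reduces drivers' income without explanation", 3, 3),
    ("Passenger-rating bias propagates into the score", 3, 2),
    ("GPS traces reveal drivers' home locations", 2, 3),
    ("Score repurposed for deactivation decisions (function creep)", 2, 3),
    ("Data breach exposes trip histories of 50,000 drivers", 1, 3),
]

# Rank by likelihood x severity so mitigation effort goes to the top rows.
ranked = sorted(risks, key=lambda r: r[1] * r[2], reverse=True)
for desc, likelihood, severity in ranked:
    print(f"{likelihood * severity:>2}  {desc}")
```

A register like this also feeds directly into Section 4 of the template: one mitigation per row, starting from the highest product.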

B.5. The chapter compares academic IRBs and corporate ethical review boards in a table (Section 28.3.3). Select the two characteristics you consider most important for effective ethical review and explain why. Then evaluate whether VitraMed's ethics advisory group (as described in Chapter 26) meets those criteria.

B.6. The Canadian AIA framework classifies algorithmic systems into four impact levels (Section 28.4.2). Classify each of the following systems into one of the four levels and explain your reasoning:

  • (a) A spam filter for a corporate email system
  • (b) A predictive model that determines which patients are offered enrollment in a clinical trial
  • (c) An AI-powered chatbot that provides customer service for a retail bank
  • (d) A facial recognition system used to verify identity for government benefit access

Part C: Real-World Application Challenges **-***

These exercises ask you to apply assessment frameworks to real or realistic scenarios.

C.1. ** Conduct a Mini-PIA. Select a data practice you participate in regularly (a fitness tracker, a social media platform, a university LMS, a workplace productivity tool). Using the template from Section 28.6.1, complete a simplified PIA:

  • Section 1: Project Overview (complete all fields)
  • Section 2: Necessity and Proportionality (answer all four questions)
  • Section 3: Risk Identification (identify at least three risks with ratings)
  • Section 4: Mitigation Measures (propose one mitigation per risk)

C.2. *** AIA for a Real System. Select an AI system you have interacted with (a recommendation engine, a voice assistant, an automated customer service system, a content moderation system). Using the seven AIA components from Section 28.4.3, conduct a simplified assessment. You will need to research or infer information about the system's design, data, and impact.

C.3. *** DPIA for VitraMed's Insurance Partner Sharing. The DPIA in Section 28.7 identified that VitraMed was sharing patient-level de-identified data with insurance partners, when aggregate scores would suffice. Write a complete Section 4 (Mitigation Measures) for this specific risk, including: the mitigation, the responsible party, the timeline, and the expected residual risk. Then write a Section 6 (Decision and Sign-Off) recommendation.

C.4. *** Assessment Process Design. Design an assessment workflow for a mid-size technology company (500 engineers, 100 data scientists, releasing approximately 50 new features per quarter). Your workflow should specify:

  • Who conducts the threshold screening?
  • What criteria trigger a full assessment?
  • Who reviews the assessment?
  • What authority does the reviewer have?
  • What happens when a risk cannot be mitigated?
  • How are assessments documented and maintained?

Part D: Synthesis & Critical Thinking ***

These questions require you to integrate multiple concepts and think beyond the material presented.

D.1. The chapter describes two failure modes of assessment processes: assessment fatigue (everything requires the same level of review, trivializing the process) and assessment avoidance (the process is so burdensome that teams structure projects to avoid triggering it). Design a calibrated assessment system that avoids both failure modes. Your system should specify at least three tiers of assessment with clear criteria for each and explain how the system prevents gaming.
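A tiered triage rule of the kind D.1 asks for can be made concrete in a few lines. The tier names, thresholds, and anti-gaming check below are illustrative assumptions for a student answer, not the chapter's design.

```python
# Illustrative three-tier triage for D.1. Thresholds and tier names are
# assumptions; the point is that the criteria are explicit and auditable,
# which counters both assessment fatigue and assessment avoidance.

def assessment_tier(criteria_met: int, uses_sensitive_data: bool,
                    automated_decisions: bool) -> str:
    """Map screening results to a review tier.

    Tier 1: self-certification checklist (minutes).
    Tier 2: lightweight PIA reviewed by the privacy team (days).
    Tier 3: full DPIA with independent sign-off (weeks).
    """
    if automated_decisions or criteria_met >= 3:
        return "tier-3"
    if uses_sensitive_data or criteria_met >= 1:
        return "tier-2"
    return "tier-1"

# Anti-gaming: tier-1 self-certifications are randomly sampled for audit,
# so under-reporting criteria carries a detection risk, not a free pass.
print(assessment_tier(0, False, False))  # tier-1
print(assessment_tier(1, False, False))  # tier-2
print(assessment_tier(0, False, True))   # tier-3
```

Calibration comes from the gap between tiers: trivial features clear tier 1 in minutes (avoiding fatigue), while the audit sampling makes structuring a project to dodge tier 3 a documented risk rather than a safe strategy (avoiding avoidance).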

D.2. The UK Court of Appeal ruled that the Met Police's facial recognition DPIA was insufficiently rigorous (Section 28.8.1). The court found that the DPIA acknowledged risks without genuinely addressing them. What does "genuine" risk mitigation look like, as opposed to "performative" risk mitigation? Develop a set of three to five criteria for distinguishing genuine from performative mitigations in a DPIA.

D.3. Section 28.3.2 raises the question of whether technology companies should be legally required to maintain IRB-equivalent review boards. Construct arguments on both sides. Then take a position and defend it, addressing the strongest counterargument to your position.

D.4. The chapter argues that assessments should be iterative -- conducted at design, revisited post-deployment, and updated as conditions change. But in practice, most assessments are conducted once and filed away. What organizational structures, incentives, or requirements would make iterative assessment the norm rather than the exception?


Part E: Research & Extension ****

These are open-ended projects for students seeking deeper engagement.

E.1. Published DPIA Analysis. Several organizations have published their DPIAs (the UK Metropolitan Police for facial recognition, the European Commission for various data systems, some government agencies). Locate a published DPIA and evaluate it against the template and criteria from Section 28.6. Write a 1,000-word critique: Is the assessment genuine or performative? Are the risks identified adequate? Are the mitigations proportionate? What is missing?

E.2. AIA Framework Comparison. Research the Canadian Algorithmic Impact Assessment tool and at least one other AIA framework (the EU AI Act's risk classification, New York City's Local Law 144 bias audit requirement, or Singapore's Model AI Governance Framework). Compare the frameworks: What do they measure? What do they require? Where do they differ? Write a 1,200-word comparative analysis.

E.3. Assessment Automation. Can elements of a PIA or DPIA be automated? Research tools and approaches for automated privacy assessment (including Privacy by Design tools, DPIA templates with embedded logic, and AI-assisted risk identification). Write a 1,000-word analysis evaluating: What can automation do well? What requires human judgment? Where does automation create a false sense of rigor?


Solutions

Selected solutions are available in appendices/answers-to-selected.md.