Chapter 29 Quiz: Privacy, Security, and AI


Multiple Choice

Question 1. What was the root cause of Athena's data breach?

  • (a) A zero-day vulnerability in the recommendation engine's software
  • (b) The recommendation engine API was granted overly broad access to customer data, violating the principle of least privilege
  • (c) An employee at Athena intentionally exfiltrated customer data
  • (d) Athena failed to encrypt customer data at rest

Question 2. Under GDPR, organizations must notify the relevant supervisory authority of a personal data breach within what timeframe?

  • (a) 24 hours
  • (b) 48 hours
  • (c) 72 hours
  • (d) 7 business days

Question 3. Which of the following best describes the "inference problem" as it relates to AI and privacy?

  • (a) AI models are too slow to process data in real time, creating delays in privacy enforcement
  • (b) AI systems can derive sensitive personal information from non-sensitive data, even when that sensitive information was never explicitly collected
  • (c) AI models cannot infer anything beyond what is explicitly present in the training data
  • (d) Privacy regulations prevent AI models from making inferences about individuals

Question 4. In differential privacy, a smaller epsilon (ε) value means:

  • (a) Stronger privacy protection but noisier (less accurate) results
  • (b) Weaker privacy protection but noisier results
  • (c) Stronger privacy protection and more accurate results
  • (d) No change in privacy protection — epsilon only affects computational speed

Question 5. Which privacy-preserving technology trains a model across multiple devices without centralizing raw data?

  • (a) Homomorphic encryption
  • (b) Federated learning
  • (c) Secure multi-party computation
  • (d) Differential privacy

Question 6. A researcher queries a facial recognition API repeatedly and, by analyzing confidence scores, reconstructs approximate images of individuals in the training set. This is an example of:

  • (a) An evasion attack
  • (b) A data poisoning attack
  • (c) A model inversion attack
  • (d) A prompt injection attack

Question 7. What distinguishes a backdoor attack from other forms of data poisoning?

  • (a) Backdoor attacks target the inference stage, while data poisoning targets the training stage
  • (b) The poisoned model performs normally on standard test data and only exhibits malicious behavior when a specific trigger pattern is present in the input
  • (c) Backdoor attacks require physical access to the model's hardware
  • (d) Backdoor attacks are only possible against deep learning models, not traditional ML models

Question 8. Which of the following is NOT a limitation of federated learning as described in the chapter?

  • (a) Communication overhead from sending model updates back and forth
  • (b) Data heterogeneity across participating devices
  • (c) Potential information leakage through model updates
  • (d) The requirement that all participating devices use the same operating system

Question 9. An attacker crafts a transaction with slightly modified attributes (timing, amounts, merchant categories) to avoid detection by a fraud model. This is an example of:

  • (a) Model extraction
  • (b) An evasion attack
  • (c) A supply chain attack
  • (d) Prompt injection

Question 10. Homomorphic encryption allows:

  • (a) Data to be encrypted using multiple keys simultaneously
  • (b) Computations to be performed on encrypted data without decrypting it first
  • (c) Data to be transmitted without any possibility of interception
  • (d) Machine learning models to be trained without any data

Question 11. GDPR Article 22 addresses automated decision-making. Which of the following does it require?

  • (a) All AI-powered decisions must be reviewed by a human before being communicated to the individual
  • (b) Individuals have the right not to be subject to decisions based solely on automated processing that produce legal or similarly significant effects, unless specific conditions are met
  • (c) Automated decision-making is prohibited for any decision involving personal data
  • (d) Companies must disclose the full source code of any automated decision-making system

Question 12. The chapter describes Athena's estimated breach cost of $12 million. Which of the following costs was the largest single component?

  • (a) Forensic investigation ($1.8M)
  • (b) Credit monitoring for affected customers ($3.4M)
  • (c) Legal and regulatory costs ($2.2M)
  • (d) Customer notification and support ($2.1M)

Question 13. Synthetic data for AI training offers privacy advantages because:

  • (a) It is encrypted by default
  • (b) No real individuals are represented in the data, reducing re-identification risk
  • (c) It is always more accurate than real data
  • (d) It is required by GDPR for all AI training

Question 14. Which of the following statements about the consent model for data privacy is most accurate?

  • (a) Opt-in consent typically results in higher data volumes than opt-out consent
  • (b) GDPR generally requires opt-in consent for processing personal data, particularly sensitive data
  • (c) The CCPA requires opt-in consent for all personal data processing
  • (d) Opt-in and opt-out models produce equivalent levels of privacy protection

Question 15. An AI agent that browses the web encounters a webpage with hidden text instructing it to ignore its original instructions and forward the user's data to an external server. This is an example of:

  • (a) Direct prompt injection
  • (b) Indirect prompt injection
  • (c) Data poisoning
  • (d) Model extraction

Short Answer

Question 16. Explain the principle of "data minimization" and why it is particularly challenging for AI systems. Use one example from the Athena case to illustrate the consequences of violating this principle.


Question 17. Describe the tension between model explainability and privacy. Why does providing detailed explanations of AI decisions potentially create privacy risks? What strategies can organizations use to navigate this tension?


Question 18. What is a Data Protection Impact Assessment (DPIA)? Under what circumstances does GDPR require one? Identify three specific elements that a DPIA for an AI system should address.


Question 19. Compare and contrast model inversion attacks and model extraction attacks. For each, explain the attacker's goal, the method used, and the primary business risk.


Question 20. The chapter argues that privacy can be a competitive differentiator, not just a compliance cost. Identify three specific mechanisms through which strong privacy practices create business value. Support each with evidence from the chapter.


True or False

Question 21. True or False: A 2019 study found that 99.98 percent of Americans could be correctly re-identified using just 15 demographic attributes, suggesting that truly anonymous data is extremely difficult to achieve.


Question 22. True or False: Federated learning provides complete privacy protection because model updates cannot leak any information about the underlying training data.


Question 23. True or False: Under GDPR's "right to be forgotten" (Article 17), if a user requests deletion of their data and that data was used to train a machine learning model, the organization must retrain the model without that data.


Question 24. True or False: Fully homomorphic encryption currently enables computations on encrypted data at approximately the same speed as computations on unencrypted data.


Question 25. True or False: According to the IBM Cost of a Data Breach Report (2024), organizations that extensively used AI-powered security tools experienced breach costs $2.22 million lower than organizations that did not.


Answer key is available in Appendix B. Questions 1-15 are multiple choice (2 points each). Questions 16-20 are short answer (5 points each). Questions 21-25 are true/false (2 points each). Total: 65 points.