Chapter 25: Exercises — Cybersecurity and AI Systems

Comprehension Exercises

Exercise 1: AI vs. Traditional Security Compare and contrast AI system security vulnerabilities with traditional software security vulnerabilities. For each of the following vulnerability categories, describe how it manifests in traditional software and how an analogous or distinct vulnerability manifests in AI systems: (a) input validation failures, (b) injection attacks, (c) authentication bypass, (d) data integrity violations, (e) information disclosure.

Exercise 2: Three-Layer Threat Identification You are conducting a security assessment of an AI-powered fraud detection system for a retail bank. For each of the three threat layers (data, model, deployment), identify two specific threats that apply to this system. For each threat, describe the attack mechanism, the potential consequences, and one mitigation.

Exercise 3: Adversarial Attack Intuition Explain, in terms accessible to a non-technical executive, why a neural network image classifier can be fooled by changes to an image that are invisible to human observers. What does this tell us about the differences between how humans and AI systems "see"? What are the practical implications for deploying image AI in security applications?

Exercise 4: Physical vs. Digital Adversarial Attacks Compare the challenges of conducting adversarial attacks in digital environments (manipulating digital image files) versus physical environments (manipulating physical objects that are then photographed). What additional constraints must physical-world adversarial attacks overcome? Which deployment contexts create meaningful physical-world adversarial attack risk, and which do not?

Exercise 5: Backdoor Attack Analysis A company uses an image classifier trained by a third-party vendor to screen job applicants' photograph submissions for compliance with submission requirements. How could a backdoor attack be introduced into this system? What would the trigger look like, and what effect might it produce? What testing would detect the backdoor, and what testing would not?

Application Exercises

Exercise 6: AI Security Threat Model Develop a security threat model for a proposed AI system: an LLM-powered customer service chatbot for a financial services firm. The chatbot can access customer account information, answer questions about products and accounts, and initiate certain account actions (changing contact information, scheduling calls, requesting account statements). Using the STRIDE framework (extended for AI), identify threats in each category and rate their likelihood and potential impact.

Exercise 7: Adversarial Robustness Testing Plan You are security lead for a company deploying an AI-based access control system that uses facial recognition to grant physical access to secure facilities. Design a pre-deployment adversarial robustness testing plan. What attacks will you test? What tools will you use? What performance thresholds must the system meet on adversarial inputs before you will recommend deployment? What monitoring will you implement post-deployment?

Exercise 8: Supply Chain Security Assessment Your organization is building an AI model using: training data collected from a third-party data provider; a pre-trained foundation model downloaded from a public repository; a data labeling service for annotation; and cloud compute from a major provider. Map the supply chain and assess the security risk at each stage. What verification mechanisms are available? What contractual protections should you require? What residual risks remain after mitigations?

Exercise 9: Prompt Injection Red Team Design a prompt injection red team exercise for an LLM application. The application is an AI assistant integrated into a corporate email system that can read emails, draft responses, and take calendar actions. What prompt injection scenarios would you test? What would a successful attack accomplish? How would you assess the application's robustness to the attacks you design?

Exercise 10: BEC Defense Protocol Design You are CISO for a manufacturing company with $500 million in annual revenue and significant international vendor payments. Design a Business Email Compromise defense protocol that protects against AI-enhanced BEC attacks. The protocol should include: verification requirements for payments at different thresholds, communication channel specifications, employee training requirements, and technical controls. Consider how your protocol handles urgent requests and international time zone complications.

Case Analysis Exercises

Exercise 11: Tesla Autopilot Attack Analysis Based on Case Study 25-1, analyze the Tesla stop sign adversarial attack: - a) What specific AI component was attacked, and what was the attack mechanism? - b) Tesla's response emphasized that Autopilot requires driver attention and is not the sole speed arbitration system. Evaluate this response as a security argument. - c) What defense-in-depth measures should autonomous vehicle AI systems implement to reduce vulnerability to physical-world adversarial attacks? - d) What regulatory requirements, if any, should apply to autonomous vehicle AI systems with respect to adversarial robustness?

Exercise 12: Deepfake Fraud Case Analysis Based on the $25 million Hong Kong deepfake fraud case described in Case Study 25-2: - a) What verification controls, if present, would have prevented the fraud? - b) Why are standard "know your customer" intuitions (recognizing a familiar voice or face) no longer reliable security controls? - c) Design a "deepfake-resistant" financial authorization protocol for a multinational company with remote management. - d) What employee training program would prepare finance teams for AI-enabled impersonation fraud?

Exercise 13: Criminal AI Tool Assessment Based on the FraudGPT and WormGPT discussion, answer the following: - a) What existing cybercrime capabilities do these tools enhance? What genuinely new capabilities do they enable? - b) What technical defenses are most effective against AI-generated phishing specifically (as opposed to traditional phishing)? - c) The democratization of sophisticated attack capability is identified as a key concern. What does this mean for organizations' risk models? How should risk assessments change? - d) What regulatory responses, if any, could reduce the harm from criminal AI tools without unduly restricting legitimate AI use?

Critical Thinking Exercises

Exercise 14: The Arms Race Dilemma AI cybersecurity is characterized by an arms race: offensive AI capabilities drive defensive AI investment, which drives further offensive development. Given this dynamic, is it possible for defenders to achieve durable advantage through AI investment? What would durable defensive advantage require? Are there mechanisms (regulatory, technical, organizational) that could break the arms race dynamic?

Exercise 15: Adversarial Robustness vs. Accuracy Trade-off Research consistently shows that training AI systems to be more robust against adversarial attacks reduces their accuracy on clean inputs. A medical imaging AI company claims its system is 95% accurate on clinical test sets. A security researcher demonstrates that it can be fooled with adversarial perturbations. The company argues that the adversarial examples are not realistic in clinical deployment. Evaluate this argument. Under what conditions would adversarial robustness testing be required before clinical deployment? Who should make this determination?

Exercise 16: Responsible Disclosure for AI The security research community has developed responsible disclosure norms for software vulnerabilities: researchers notify the vendor before publishing, vendors have a fixed period to develop and deploy a fix, then the vulnerability is disclosed. Apply this framework to AI vulnerabilities. What challenges arise when applying responsible disclosure to: (a) adversarial attacks on deployed models, (b) training data poisoning vulnerabilities, (c) backdoor attacks in pre-trained models available in public repositories?

Exercise 17: LLM Security Design Principles An organization wants to deploy an LLM agent that can browse the internet, read and send emails on behalf of users, and execute code in a sandboxed environment. Design the security architecture for this deployment. What constraints should be imposed on the agent's capabilities? How should prompt injection risk be mitigated? What actions should require explicit human confirmation before execution? What logging and monitoring should be in place?

Exercise 18: Critical Infrastructure AI Governance A power utility wants to deploy AI for predictive maintenance — using sensor data to predict equipment failures before they occur. The AI would recommend maintenance actions to human operators, who retain authority to approve or reject recommendations. Design a security governance framework for this deployment that addresses: (a) adversarial attack risk, (b) data poisoning risk, (c) model drift, (d) nation-state threat, (e) regulatory requirements, and (f) incident response.

Synthesis Exercises

Exercise 19: AI Security Framework for Your Organization Drawing on the chapter's content and the NIST AI RMF, develop a draft AI security framework for an organization of your choosing. The framework should include: (a) scope and applicability criteria, (b) risk assessment methodology, (c) security requirements by risk tier, (d) supply chain security requirements, (e) testing and validation requirements, (f) monitoring and incident response requirements, and (g) governance and accountability mechanisms.

Exercise 20: Regulatory Sufficiency Assessment Evaluate whether the current regulatory framework for AI cybersecurity — NIST CSF and AI RMF, EU AI Act security requirements, NIS2, SEC disclosure rules — is sufficient to ensure that AI systems meet appropriate security standards. Where are the most significant gaps? What additional regulatory requirements would you recommend? How would you balance security requirements against innovation?

Exercise 21: Defensive AI Investment Priorities You are CTO of a company with a $2 million annual cybersecurity budget. AI-enabled attacks are increasing in sophistication and frequency. You must decide how to allocate your budget across: AI-based email security, AI-based endpoint detection, adversarial robustness testing for your AI systems, employee awareness training on AI-enabled attacks, and out-of-band verification controls for financial transactions. Develop an allocation recommendation and defend your priorities.

Exercise 22: AI-Enabled Fraud in Your Sector Identify the three most significant AI-enabled fraud or social engineering threats specific to your professional sector. For each: describe the attack mechanism, identify who would be targeted within your organization, estimate the potential financial or operational impact, and design a defense-in-depth response.

Exercise 23: The Deepfake Authenticity Problem If AI can generate convincing audio and video of any person saying anything, what are the long-term implications for evidence in legal proceedings? For corporate governance (verifying that board decisions are authentic)? For news media (verifying the authenticity of recorded statements)? For international relations (verifying the authenticity of diplomatic communications)? What technical and social solutions could restore the reliability of audio and video evidence?

Exercise 24: Privacy and Security of AI Training Data The tension between AI development (which benefits from large, rich training datasets) and privacy protection (which requires data minimization and deletion rights) creates ongoing conflict. Analyze this tension for a specific AI use case of your choosing: what training data is required, what privacy risks does that data create, what privacy-preserving alternatives are available, and what tradeoffs do those alternatives impose? Recommend a data strategy that achieves an appropriate balance.

Exercise 25: Post-Incident Analysis An e-commerce company discovers that its recommendation engine — which personalizes product recommendations for each user — has been producing anomalous recommendations for a subset of users for the past three months. Investigation reveals that training data for the recommendation model was poisoned through a compromised data pipeline. Design a post-incident response plan that includes: (a) immediate containment, (b) scope determination, (c) remediation, (d) regulatory notification assessment, (e) customer notification, (f) root cause analysis, and (g) preventive measures.