Chapter 25: Quiz — Cybersecurity and AI Systems

20 questions. Each question is worth 5 points. Total: 100 points.


Question 1. The primary characteristic that distinguishes AI security vulnerabilities from traditional software security vulnerabilities is that:

A) AI systems are more complex and therefore have more potential vulnerabilities than traditional software B) AI vulnerabilities can exist in correctly implemented systems, arising from properties of machine learning models rather than implementation mistakes C) AI systems are more difficult to patch than traditional software because they require retraining D) AI systems process more data than traditional software, creating larger attack surfaces

Correct Answer: B Traditional software vulnerabilities arise from implementation mistakes — code that can be exploited. AI vulnerabilities, including adversarial attacks and data poisoning, can exist even in correctly implemented systems because they arise from the mathematical properties of how machine learning models learn and generalize.


Question 2. An adversarial example in the context of AI security is:

A) An example from an adversarial dataset used to test model performance B) A training data point that causes a model to learn incorrect classifications C) An input specifically engineered to cause an AI system to make a mistake while appearing normal to human observers D) A model trained specifically to generate challenging test cases for other models

Correct Answer: C Adversarial examples are carefully crafted inputs that cause AI systems to produce incorrect outputs, while being designed to appear normal or benign to human observers. The perturbations are calculated mathematically to exploit properties of the model's decision boundaries.


Question 3. "Transferability" of adversarial examples refers to:

A) The ability to move adversarial examples from digital to physical environments B) Adversarial examples crafted against one model often fooling other models trained on the same task C) The process of transferring ownership of an AI model while preserving its adversarial vulnerabilities D) The translation of adversarial attacks from one data modality to another

Correct Answer: B Transferability means that adversarial examples are not always model-specific — examples crafted against one model (a surrogate) often transfer to fool other models with different architectures trained on the same task. This enables "black-box" attacks against inaccessible models.


Question 4. A backdoor attack on an AI model differs from a standard data poisoning attack in that:

A) Backdoor attacks target deployed models while standard poisoning targets training data B) Backdoor attacks require physical access to the model's training environment C) The model behaves normally on all inputs except those containing a specific trigger pattern designed by the attacker D) Backdoor attacks degrade overall model performance while standard poisoning targets specific classes

Correct Answer: C A backdoor creates a hidden failure mode activated only by the attacker's trigger pattern. The model passes all standard accuracy tests on clean data; the backdoor only activates when the specific trigger is present. This stealthiness is what makes backdoor attacks particularly dangerous.


Question 5. Model extraction attacks are a concern primarily because:

A) They allow attackers to modify the model's weights and parameters remotely B) They allow competitors to replicate model functionality without making the underlying investment, and enable better downstream adversarial attacks C) They expose users' personal data stored in the model's training database D) They prevent the model from making correct predictions after the attack

Correct Answer: B Model extraction attacks steal model functionality by systematically querying the target model and training a surrogate. This enables IP theft (competitors getting model capabilities without investment) and enables better adversarial attacks (a locally accessible surrogate is easier to attack than an inaccessible target).


Question 6. A "membership inference attack" on an AI model attempts to determine:

A) Whether the organization that owns the model is a member of a regulated industry B) Whether a specific data record was included in the model's training dataset C) Which features of the training data the model has "remembered" most clearly D) Whether the model's performance is comparable to member models in an ensemble

Correct Answer: B Membership inference attacks determine whether a specific individual's data was used to train the model. This can reveal sensitive information — for example, whether a specific person was a patient if the model was trained on medical records.


Question 7. Which of the following best describes "prompt injection" as a security threat to LLM applications?

A) The injection of malicious code into an LLM's training data to compromise its behavior B) The embedding of malicious instructions in user input or external content that override the application's intended system prompt C) The injection of adversarial text perturbations designed to cause the LLM to generate harmful content D) The interception of API communications between an LLM application and the model provider

Correct Answer: B Prompt injection attacks embed instructions in user input or externally retrieved content that the LLM treats as having the authority to override or modify the system prompt established by the application developer. This can cause the LLM to take unauthorized actions, reveal confidential information, or ignore safety guidelines.


Question 8. The NIST AI Risk Management Framework (AI RMF) organizes AI risk management around which four core functions?

A) Identify, Protect, Detect, Respond B) Prevent, Monitor, Contain, Recover C) Map, Measure, Manage, Govern D) Assess, Design, Implement, Verify

Correct Answer: C The NIST AI RMF uses Map, Measure, Manage, and Govern as its four core functions. This differs from the traditional NIST Cybersecurity Framework's Identify, Protect, Detect, Respond, Recover (CSF 1.1) or the CSF 2.0's addition of Govern.


Question 9. FraudGPT and WormGPT represent:

A) Legitimate AI security research tools used by penetration testers B) AI systems developed by law enforcement for detecting AI-generated fraud C) AI tools specifically designed for criminal use, sold via subscription in underground forums with no safety restrictions D) Open-source AI frameworks that have been co-opted by criminal actors but were designed for legitimate purposes

Correct Answer: C FraudGPT and WormGPT were explicitly designed and marketed for criminal use — generating phishing content, malicious code, and fraud scripts — without the safety guardrails of mainstream LLMs. They were sold via subscription in criminal underground forums.


Question 10. The "$25 million Hong Kong deepfake fraud" case (2024) is significant because:

A) It was the largest individual cybercrime loss ever recorded B) It demonstrated that AI-generated multiparty video conference impersonation is technically feasible and is being used for financial fraud C) It was the first case in which voice cloning technology was used for fraud D) It resulted in the first criminal conviction for deepfake-enabled financial fraud

Correct Answer: B The significance of the Hong Kong case is that it demonstrated full AI-generated multiparty video conference fraud — not just a voice call or phishing email, but a video call with multiple fake participants whose faces and voices were all AI-synthesized. This represents a significant escalation in AI-enabled fraud capabilities.


Question 11. Which of the following is the most effective single defense against AI-enabled Business Email Compromise?

A) AI-powered email security that detects AI-generated content B) Biometric authentication of email senders through voice analysis C) Out-of-band verification of financial requests through pre-established, verified communication channels D) End-to-end encryption of all internal financial communications

Correct Answer: C Out-of-band verification — requiring a separate call to a pre-registered, verified phone number to confirm financial requests — is effective regardless of how convincing the fraudulent communication is. It does not rely on detecting AI-generated content, which is increasingly difficult. AI-powered email security and other technical defenses are valuable but not reliable enough as sole defenses against sophisticated AI-enabled attacks.


Question 12. The EU AI Act's cybersecurity requirements are significant because:

A) They impose the highest cybersecurity fines of any AI regulation globally B) They are the first regulatory framework to explicitly require adversarial robustness for high-risk AI systems C) They require all AI systems to undergo penetration testing before deployment D) They prohibit the deployment of AI systems in critical infrastructure without government approval

Correct Answer: B The EU AI Act is the first regulatory framework to explicitly address AI-specific security requirements — including robustness against adversarial attacks, data poisoning, and model manipulation — as regulatory obligations for high-risk AI systems, rather than merely recommending best practices.


Question 13. "Data poisoning" attacks are particularly concerning for AI systems trained on:

A) Structured, internally generated datasets with comprehensive provenance B) Proprietary databases with physical access controls C) Web-scraped, user-generated, or third-party sourced data from potentially untrusted sources D) Small, carefully curated datasets with limited diversity

Correct Answer: C Data poisoning requires the ability to influence the training dataset. This is most feasible for training data sourced from untrusted sources: web-scraped data (attackers control web pages), user-generated content (attackers can create accounts), and third-party data (supply chain compromise). Internally generated, controlled data is harder to poison.


Question 14. The McAfee / Tesla stop sign attack demonstrated that:

A) Tesla autopilot was programmed to ignore speed limit signs above 70 MPH B) Adding a small black rectangle of tape to a 35 MPH sign caused the autopilot system to read it as 85 MPH C) Adversarial stickers attached to stop signs could cause autopilot to classify them as yield signs D) A digital manipulation of camera feed could alter autopilot's sign recognition

Correct Answer: B The McAfee Advanced Threat Research demonstration showed that a small piece of tape added to the bottom of a 35 MPH speed limit sign caused the autopilot system to misread the "3" as an "8," classifying the sign as displaying 85 MPH. The attack exploited the character recognition component of the sign interpretation system.


Question 15. "Alert fatigue" in AI-based cybersecurity refers to:

A) The tendency of AI security systems to generate fewer alerts over time as they learn normal baseline behavior B) The degradation of human analyst effectiveness caused by large numbers of alerts — including false positives — from AI security tools C) The computational exhaustion that occurs when AI security systems process more alerts than they can handle D) The regulatory requirement to archive all security alerts for audit purposes

Correct Answer: B Alert fatigue occurs when human analysts are presented with more alerts than they can meaningfully investigate, particularly when many alerts are false positives. Analysts begin to discount alerts or fail to investigate carefully, reducing the effectiveness of both the AI detection system and the human review process.


Question 16. Voice cloning technology is now capable of:

A) Generating synthetic speech that matches a target's voice only when trained on at least one hour of audio samples B) Generating synthetic speech that is distinguishable from authentic speech only by forensic audio analysis C) Generating synthetic speech that sounds like a specific person from audio samples as short as a few seconds D) Generating synthetic speech at the quality of professional voice actors but not specific individuals

Correct Answer: C Current voice cloning technology can generate convincing synthetic speech matching a specific person's voice from audio samples as short as a few seconds, using commercially available tools. This makes voice calls an unreliable authenticator of identity.


Question 17. The EU's NIS2 Directive is relevant to AI cybersecurity because:

A) It requires all AI systems to be registered with EU cybersecurity authorities before deployment B) It prohibits the use of AI in cybersecurity applications without regulatory approval C) It extends cybersecurity requirements to essential and important entities, including supply chain security requirements that encompass AI components and providers D) It creates a specific AI cybersecurity certification scheme for security products

Correct Answer: C NIS2 applies comprehensive cybersecurity requirements to essential and important entities across multiple sectors. Its supply chain security provisions require organizations to assess the cybersecurity of their suppliers and service providers, which extends to AI model providers, training data sources, and AI component vendors.


Question 18. The "adversarial gap" in AI security refers to:

A) The difference in capability between AI-powered attackers and AI-powered defenders B) The performance difference between a model's accuracy on clean inputs and its accuracy on adversarial inputs C) The time gap between an adversarial attack being discovered and a defense being deployed D) The gap in regulatory coverage between existing cybersecurity law and AI-specific security requirements

Correct Answer: B The adversarial gap is the difference in model performance between clean (unperturbed) inputs and adversarial inputs. A model that achieves 95% accuracy on clean inputs may achieve only 10% accuracy on adversarially perturbed inputs. This gap represents a fundamental challenge for AI security in adversarial environments.


Question 19. Which component of a secure AI development lifecycle addresses the specific risk of training data poisoning?

A) The testing phase, through adversarial robustness evaluation of the deployed model B) The data phase, through provenance verification, integrity checking, and anomaly detection in training data C) The deployment phase, through API rate limiting and query monitoring D) The requirements phase, through specifying accuracy requirements for the model

Correct Answer: B Defending against data poisoning requires addressing training data at the data phase — before the model is trained. This includes verifying data provenance, checking data integrity, auditing for statistical anomalies that might indicate poisoning, and securing the data labeling supply chain.


Question 20. The SEC's 2023 cybersecurity disclosure rules require public companies to:

A) Disclose all cybersecurity incidents within 24 hours of detection B) Publicly disclose material cybersecurity incidents within four business days of determining materiality, and annually disclose cybersecurity risk management processes C) Obtain third-party certification of their cybersecurity programs before annual reporting D) Disclose the specific technical details of all security vulnerabilities discovered during the reporting period

Correct Answer: B The SEC's 2023 rules require: (1) disclosure of material cybersecurity incidents within four business days of determining materiality (not within 24 hours of detection — materiality determination may take longer); and (2) annual disclosure of cybersecurity risk management strategy, governance, and processes. The rules do not require disclosure of specific vulnerability details.