- **Adversarial Example:** An input intentionally designed to cause a model to make a mistake.
- **Perturbation:** The modification applied to a benign input to make it adversarial.
- **Evasion Attack:** Crafting adversarial examples at inference time, against a model that is already deployed.
- **Poisoning Attack:** Manipulating the training data (or training process) so that the resulting model misbehaves.
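
To make the first three definitions concrete, here is a minimal sketch of an evasion attack using the fast gradient sign method (FGSM). It assumes a trained PyTorch classifier `model` and a correctly labeled input; the function name and the epsilon value are illustrative, not from the source.

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(model, x, y, epsilon=0.03):
    """Craft an adversarial example from a benign input x (an evasion attack).

    The perturbation is epsilon * sign(dL/dx): a small step in each input
    dimension in whichever direction most increases the model's loss.
    """
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    perturbation = epsilon * x.grad.sign()  # the "perturbation"
    x_adv = (x + perturbation).detach()     # the "adversarial example"
    return x_adv.clamp(0.0, 1.0)            # keep pixel values valid
```

Note that a poisoning attack differs in *when* it acts: rather than perturbing an input at inference time as above, it corrupts the data the model learns from.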