1. **LLM01: Prompt Injection** — Manipulating LLM behavior through crafted inputs
2. **LLM02: Insecure Output Handling** — Failing to sanitize LLM-generated output
3. **LLM03: Training Data Poisoning** — Corrupting training data to influence behavior
4. **LLM04: Model Denial of Service** — Causing excessive resource consumption
5.