Filter and sanitize user inputs before they reach the LLM > 2. **Output Validation** — Never trust LLM output; sanitize before rendering or executing > 3. **Privilege Minimization** — Limit the tools and data the LLM can access > 4. **Prompt Armoring** — Use structured prompts with clear delimiters