LLMs cannot reliably distinguish between developer instructions and attacker instructions, making prompt injection a systemic challenge for all LLM applications.