If the LLM processes external data (URLs, documents, emails), embed adversarial instructions in those sources - Test whether the LLM follows injected instructions in retrieved content