Indirect prompt injection involves planting malicious instructions in external data sources (webpages, documents, emails) that the LLM will process. The attacker does not directly interact with the LLM; instead, the LLM encounters the malicious instructions while performing its normal function.