Prompt injection is the SQL injection of LLM systems — widely understood in theory, consistently underestimated in practice. The naive defenses (input sanitization, keyword blocking) fail quickly against adversarial users and break legitimate use cases along the way.
Defenses that actually hold up
The defenses that survive adversarial testing share a common pattern: they don't trust the model to recognize injection; they trust the system architecture to contain it. That means privilege separation (the model can't call privileged APIs directly), output validation (the model's output is parsed and validated before execution), and an explicit human-in-the-loop for irreversible actions. A sketch of the validation layer follows.
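As a rough illustration of the output-validation and privilege-separation idea, the sketch below assumes the model emits a JSON "tool call" and a separate, non-model layer decides what may actually run. The names here (ALLOWED_TOOLS, IRREVERSIBLE_TOOLS, validate_tool_call) are hypothetical, not from any particular library.

```python
import json

# Hypothetical allowlist: the model may only request these tools,
# with exactly the arguments each schema permits.
ALLOWED_TOOLS = {
    "search_docs": {"query": str},
    "create_ticket": {"title": str, "body": str},
}
# Irreversible actions still require human sign-off.
IRREVERSIBLE_TOOLS = {"create_ticket"}


def validate_tool_call(raw_output: str) -> dict:
    """Parse and validate model output before anything executes it."""
    try:
        call = json.loads(raw_output)
    except json.JSONDecodeError as exc:
        raise ValueError(f"model output is not valid JSON: {exc}")

    tool = call.get("tool")
    args = call.get("args", {})

    if tool not in ALLOWED_TOOLS:
        raise ValueError(f"tool {tool!r} is not on the allowlist")

    schema = ALLOWED_TOOLS[tool]
    if set(args) != set(schema):
        raise ValueError(f"unexpected arguments for {tool!r}: {sorted(args)}")
    for name, expected_type in schema.items():
        if not isinstance(args[name], expected_type):
            raise ValueError(f"argument {name!r} must be {expected_type.__name__}")

    return {"tool": tool, "args": args, "needs_approval": tool in IRREVERSIBLE_TOOLS}
```

The key property is that the model never holds the privileged credentials itself: the executor only acts on calls that pass this check, and anything flagged `needs_approval` waits for a human.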
Warning
A language model cannot reliably detect prompt injection in its own context window. Treat model output as untrusted user input when it feeds into downstream systems.
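Treating model output as untrusted also means applying the ordinary injection defenses downstream. A minimal example, assuming a hypothetical orders table queried with Python's sqlite3: bind model-derived values as parameters rather than interpolating them into the query string.

```python
import sqlite3


def lookup_order(conn: sqlite3.Connection, order_id: str):
    # order_id came from the model; bind it as a parameter, never
    # splice it into the SQL text.
    cur = conn.execute(
        "SELECT id, status FROM orders WHERE id = ?",
        (order_id,),
    )
    return cur.fetchone()
```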