# prompt-injection-defense
2 articles tagged with “prompt-injection-defense”
CaMeL & Dual LLM Pattern
Architectural defense patterns that separate trusted and untrusted processing: Simon Willison's Dual LLM concept and Google DeepMind's CaMeL framework for defending tool-using AI agents against prompt injection.
dual-llm, camel, prompt-injection-defense, agent-security, architecture, tool-use
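The separation that article describes can be sketched in a few lines. This is a minimal illustration of the Dual LLM data flow, not the article's implementation: the model calls are hypothetical stubs (`quarantined_llm`, `privileged_llm`), and the point is only that untrusted text reaches the quarantined model alone, while the privileged model manipulates an opaque variable name and never sees the content behind it.

```python
def quarantined_llm(untrusted_text: str) -> str:
    """Processes untrusted input; has NO tool access."""
    # Stub: e.g. summarize the text. A real system would call an LLM API here.
    return untrusted_text[:50]

def privileged_llm(task: str, variable_names: list[str]) -> str:
    """Plans actions and calls tools; never sees untrusted content directly."""
    # Stub: the privileged model reasons over symbolic names like
    # "$EMAIL_SUMMARY" and emits tool calls that reference them.
    return f"send_email(body={variable_names[0]})  # task: {task}"

def run(task: str, untrusted_input: str) -> str:
    # The quarantined model extracts what we need from the untrusted data...
    variables = {"$EMAIL_SUMMARY": quarantined_llm(untrusted_input)}
    # ...while the privileged model only ever handles the symbolic reference.
    plan = privileged_llm(task, list(variables))
    # The controller (plain code, not an LLM) substitutes real values last,
    # after all LLM reasoning is done.
    for name, value in variables.items():
        plan = plan.replace(name, value)
    return plan
```

Even if the untrusted input contains an injection attempt, it can only influence the quarantined model's output string, never the privileged model's plan.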
Building Input Guardrails for LLM Applications
Step-by-step walkthrough for implementing production-grade input guardrails that protect LLM applications from prompt injection, content policy violations, and resource abuse through multi-layer validation, classification, and rate limiting.
guardrails, input-validation, prompt-injection-defense, content-safety, defense, walkthrough
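The layered approach that walkthrough covers can be sketched as a fail-fast pipeline. This is an illustrative skeleton under assumed names, not the article's code: the regex patterns, `MAX_CHARS` limit, and rate-limit numbers are placeholders, and a production classifier layer would be far more sophisticated than a keyword heuristic.

```python
import re
import time
from collections import defaultdict, deque

MAX_CHARS = 4000  # placeholder resource limit
SUSPICIOUS = [    # toy injection heuristics; real systems use classifiers
    r"ignore (all )?(previous|prior) instructions",
    r"reveal (your|the) system prompt",
]

class RateLimiter:
    """Sliding-window rate limiter, per user."""
    def __init__(self, max_calls: int, per_seconds: float):
        self.max_calls, self.per_seconds = max_calls, per_seconds
        self.calls: dict[str, deque] = defaultdict(deque)

    def allow(self, user: str) -> bool:
        now = time.monotonic()
        window = self.calls[user]
        while window and now - window[0] > self.per_seconds:
            window.popleft()  # drop calls outside the window
        if len(window) >= self.max_calls:
            return False
        window.append(now)
        return True

limiter = RateLimiter(max_calls=10, per_seconds=60)

def check_input(user: str, text: str) -> tuple[bool, str]:
    """Run each guardrail layer in order; return (allowed, reason)."""
    if not limiter.allow(user):
        return False, "rate_limit"
    if len(text) > MAX_CHARS:
        return False, "too_long"
    for pattern in SUSPICIOUS:
        if re.search(pattern, text, re.IGNORECASE):
            return False, "possible_injection"
    return True, "ok"
```

Ordering the cheap checks (rate limit, length) before the content checks keeps per-request cost low under abuse, which is the resource-protection angle the article's description mentions.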