# mitigation
8 articlestagged with “mitigation”
Defense & Mitigation Assessment
Assessment covering guardrails, classifiers, constitutional AI, and defense-in-depth architectures.
Adversarial Training for LLM Defense
Use adversarial training techniques to improve LLM robustness against known attack patterns.
Circuit Breaker Patterns for LLMs
Implement circuit breaker patterns that halt LLM processing when anomalous behavior is detected.
Defense & Mitigation
Defensive strategies for AI systems including guardrails architecture, monitoring and observability, secure development practices, remediation mapping, and advanced defense techniques.
Privilege Separation in LLM Applications
Implement privilege separation to limit the capabilities available to the LLM based on context and user role.
Prompt Injection Canary System
Deploy canary strings in system prompts to detect and alert on prompt injection and extraction attempts.
Response Consistency Checking
Implement consistency checking between model responses and known facts to detect manipulation.
Token Attribution Monitoring
Monitor token attributions in model outputs to detect adversarial influence on generation.