# defense-testing
6 articlestagged with “defense-testing”
Evaluating Defense Effectiveness
Metrics, benchmarks, and methodology for measuring how well AI defenses work against real attacks, including evaluation pitfalls and best practices.
Lab: Introduction to Defense Testing
Learn to systematically test LLM application defenses by probing input filters, output validators, and guardrails.
Lab: Testing Prompt Leaking Defenses
Test various prompt leaking defense configurations to evaluate their effectiveness against extraction techniques.
Lab: Build Guardrail Evaluator
Build an automated framework for evaluating AI guardrails and safety filters. Test input filters, output classifiers, content moderation systems, and defense-in-depth architectures for coverage gaps and bypass vulnerabilities.
Lab: Defense Effectiveness Testing
Systematically test and measure the robustness of AI guardrails using structured methodology, metrics, and repeatable test suites.
Testing Prompt Injection Defenses with Rebuff
Walkthrough for using Rebuff to test and evaluate prompt injection detection capabilities, covering installation, detection pipeline analysis, adversarial evasion testing, custom rule development, and benchmarking detection accuracy.