# reference
23 articles tagged with “reference”
Report Templates & Examples
Full AI red team report templates: executive summary, technical findings, methodology section, remediation roadmap, and annotated examples.
Extended AI Security Glossary
Extended glossary of AI security terminology covering attack techniques, defense mechanisms, model architectures, and evaluation metrics.
LLM API Endpoint Reference
Reference for LLM API endpoints across providers with security-relevant parameters and options.
Attack Technique Taxonomy Reference
Comprehensive attack technique taxonomy cross-referencing MITRE ATLAS, OWASP LLM Top 10, and custom classification schemes for AI security.
Benchmark Suite Comparison
Comparison of AI safety benchmark suites including HarmBench, JailbreakBench, and custom evaluation frameworks with coverage analysis.
CVE Tracking for AI Systems
Guide to tracking and analyzing CVEs affecting AI systems and frameworks, with historical analysis and trending vulnerability classes.
Defense Bypass Quick Reference
Quick reference card for common AI defense mechanisms and their known bypass techniques, organized by defense type.
Defense Mechanism Comparison
Comprehensive comparison of LLM defense mechanisms including guardrails, classifiers, filtering, and architectural approaches with effectiveness data.
Framework Mapping Reference
Cross-mapping between OWASP LLM Top 10, MITRE ATLAS, NIST AI RMF, and EU AI Act requirements for AI security assessments.
Garak Tool Guide
Complete operational guide to NVIDIA's Garak LLM vulnerability scanner including installation, configuration, plugin development, and result interpretation.
Extended AI Security Glossary (References)
Comprehensive glossary of AI red teaming terms, covering attack techniques, defense mechanisms, model internals, and assessment methodology.
Incident Response Quick Reference
Quick reference card for AI incident response covering initial triage, containment, evidence collection, and communication templates.
Injection Payload Cheat Sheet
Quick reference of proven injection payloads organized by technique category, encoding method, and target defense type.
Jailbreak Technique Catalog
Comprehensive catalog of jailbreak techniques with effectiveness ratings, model compatibility notes, and evolution history.
Model API Comparison Table
Side-by-side comparison of major LLM API features, security controls, and rate limits for OpenAI, Anthropic, Google, and other providers.
Model API Security Reference
Security reference for major model APIs including authentication, rate limits, and safety features.
OWASP LLM Top 10 2025 Reference
Quick reference for OWASP LLM Top 10 2025 with detection and mitigation summaries.
Promptfoo Configuration Guide
Detailed guide to configuring Promptfoo for LLM security testing including provider setup, test assertions, and CI/CD integration.
PyRIT Tool Guide
Comprehensive guide to Microsoft's PyRIT (Python Risk Identification Tool) for automated AI red teaming including setup, attack strategies, and scoring.
Red Team Command Reference
Quick reference for common red team commands, API calls, and tool invocations used in AI security testing.
Automated Red Teaming Tools Comparison
Comprehensive comparison of automated AI red teaming tools including PyRIT, Garak, DeepTeam, AutoRedTeamer, HarmBench, and ART, with detailed capability matrices, strengths analysis, and use case recommendations.
Regulatory Compliance Matrix
Cross-reference matrix mapping AI security requirements across NIST AI 600-1, EU AI Act, ISO 42001, and OWASP LLM Top 10.
Red Team Tool Comparison Matrix
Side-by-side comparison of AI red teaming tools (Garak, PyRIT, promptfoo, Inspect AI, and HarmBench) covering capabilities, use cases, and integration options.