# Framework Mapping Reference
Intermediate · 4 min read · Updated 2026-03-13
Cross-mapping between OWASP LLM Top 10, MITRE ATLAS, NIST AI RMF, and EU AI Act requirements for AI security assessments.
## OWASP LLM Top 10 → MITRE ATLAS Mapping
| OWASP LLM Top 10 (2023, v1.1) | MITRE ATLAS Techniques | Description |
|---|---|---|
| LLM01: Prompt Injection | AML.T0051 (LLM Prompt Injection) | Direct and indirect injection of instructions |
| LLM02: Insecure Output Handling | AML.T0048.005 (Command Injection via AI) | Unvalidated LLM outputs used in downstream systems |
| LLM03: Training Data Poisoning | AML.T0020 (Poison Training Data) | Manipulation of training datasets |
| LLM04: Model Denial of Service | AML.T0029 (Denial of ML Service) | Resource exhaustion attacks on models |
| LLM05: Supply Chain Vulnerabilities | AML.T0010 (ML Supply Chain Compromise) | Compromised models, libraries, or datasets |
| LLM06: Sensitive Information Disclosure | AML.T0024 (Exfiltration via ML Inference) | Model memorization and data extraction |
| LLM07: Insecure Plugin Design | AML.T0053 (LLM Plugin Compromise) | Unsafe tool/plugin interfaces |
| LLM08: Excessive Agency | AML.T0048 (External Harms) | Overprivileged AI agents |
| LLM09: Overreliance | -- | Human over-trust in AI outputs |
| LLM10: Model Theft | AML.T0024.002 (Extract ML Model) | Stealing model weights or functionality |
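For automated report tagging, the mapping above can be encoded as a small lookup table. This is a minimal sketch covering only the rows where a single ATLAS technique is a direct match; the dictionary and function names are illustrative, not part of any framework.

```python
# Minimal sketch: OWASP LLM Top 10 -> MITRE ATLAS lookup for report tagging.
# Covers a subset of the table above; extend as needed.
OWASP_TO_ATLAS = {
    "LLM01": "AML.T0051",  # Prompt Injection -> LLM Prompt Injection
    "LLM03": "AML.T0020",  # Training Data Poisoning -> Poison Training Data
    "LLM04": "AML.T0029",  # Model Denial of Service -> Denial of ML Service
    "LLM05": "AML.T0010",  # Supply Chain -> ML Supply Chain Compromise
    "LLM06": "AML.T0024",  # Sensitive Info Disclosure -> Exfiltration via ML Inference
}

def atlas_id(owasp_id: str) -> str:
    """Return the mapped ATLAS technique ID, or '--' when no mapping
    exists (e.g. LLM09: Overreliance has no ATLAS counterpart)."""
    return OWASP_TO_ATLAS.get(owasp_id, "--")
```

A finding tagged `LLM09` would then render as `--` in the ATLAS column, matching the table's convention for unmapped entries.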
## NIST AI RMF → Red Team Testing Areas
| NIST AI RMF Function | Red Team Testing Focus | Key Assessments |
|---|---|---|
| GOVERN | Policy and process review | Authorization frameworks, disclosure policies |
| MAP | Attack surface identification | System architecture review, threat modeling |
| MEASURE | Quantitative security testing | Attack success rate (ASR) metrics, bypass rates, coverage analysis |
| MANAGE | Remediation verification | Defense effectiveness, incident response |
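The MEASURE function's quantitative focus can be illustrated with a small attack-success-rate (ASR) calculation. The data structure below is a hypothetical example for this sketch, not something defined by the NIST AI RMF.

```python
from dataclasses import dataclass

@dataclass
class AttackAttempt:
    technique: str    # e.g. "jailbreak", "direct prompt injection"
    succeeded: bool   # did the attempt bypass the target's defenses?

def attack_success_rate(attempts: list[AttackAttempt]) -> dict[str, float]:
    """Per-technique ASR = successful attempts / total attempts."""
    totals: dict[str, int] = {}
    wins: dict[str, int] = {}
    for a in attempts:
        totals[a.technique] = totals.get(a.technique, 0) + 1
        if a.succeeded:
            wins[a.technique] = wins.get(a.technique, 0) + 1
    return {t: wins.get(t, 0) / n for t, n in totals.items()}
```

Reporting ASR per technique, rather than one aggregate number, makes it possible to show which defenses improved between assessments.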
## EU AI Act → Assessment Requirements
| EU AI Act Requirement | Red Team Assessment |
|---|---|
| Art. 9: Risk Management | Threat modeling and attack surface analysis |
| Art. 10: Data Governance | Training data poisoning assessment |
| Art. 15: Accuracy & Robustness | Adversarial robustness testing |
| Art. 50: Transparency | System prompt extraction and output attribution |
| Art. 73: Serious Incident Reporting | Incident simulation and response testing |
## Quick Reference: Attack → Framework ID
| Attack Technique | OWASP | MITRE ATLAS | NIST CSF |
|---|---|---|---|
| Direct prompt injection | LLM01 | AML.T0051.000 | DE.AE-2 |
| Indirect prompt injection | LLM01 | AML.T0051.001 | DE.AE-2 |
| Jailbreaking | LLM01 | AML.T0054 | PR.AC-4 |
| Data extraction | LLM06 | AML.T0024 | PR.DS-5 |
| Training data poisoning | LLM03 | AML.T0020 | PR.DS-6 |
| Model extraction | LLM10 | AML.T0024.002 | PR.IP-1 |
| Tool/plugin abuse | LLM07 | AML.T0053 | PR.AC-4 |
| Agent exploitation | LLM08 | AML.T0048 | PR.AC-1 |
| Supply chain attack | LLM05 | AML.T0010 | ID.SC-2 |
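When writing findings, the quick-reference rows can back a small helper that emits all three framework IDs at once. This is a sketch under assumed conventions: the function name and output format are illustrative, and the dictionary covers only a few rows of the table above.

```python
# Illustrative sketch mirroring a few rows of the quick-reference table:
# attack technique -> (OWASP, MITRE ATLAS, NIST CSF) identifiers.
FRAMEWORK_IDS = {
    "data extraction": ("LLM06", "AML.T0024", "PR.DS-5"),
    "training data poisoning": ("LLM03", "AML.T0020", "PR.DS-6"),
    "supply chain attack": ("LLM05", "AML.T0010", "ID.SC-2"),
}

def tag_finding(attack: str) -> str:
    """Format OWASP / ATLAS / NIST CSF IDs as a finding-title prefix."""
    owasp, atlas, csf = FRAMEWORK_IDS[attack]
    return f"[{owasp} | {atlas} | {csf}] {attack}"
```

For example, `tag_finding("supply chain attack")` produces a title prefix that lets reviewers working from any of the three frameworks locate the finding.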
## Related Topics
- OWASP LLM Top 10 -- Detailed OWASP analysis
- MITRE ATLAS -- ATLAS technique database
- NIST AI RMF -- Risk management framework
## References
- OWASP Top 10 for LLM Applications (v1.1, 2023) - OWASP Foundation - LLM application vulnerability taxonomy
- MITRE ATLAS - MITRE Corporation (2024) - Adversarial threat landscape for artificial intelligence systems
- NIST AI Risk Management Framework (AI RMF 1.0) - NIST (2023) - AI risk management governance framework
- EU AI Act - European Parliament (2024) - Regulation on artificial intelligence risk categories and requirements
## Knowledge Check
Why should red team reports reference multiple frameworks rather than just one?