References & Cheat Sheets Regulatory Quick Reference Regulatory Quick Reference Beginner 9 min readUpdated 2026-03-15 Quick reference for AI-relevant regulations and frameworks including NIST AI RMF, ISO/IEC 42001, EU AI Act, and sector-specific requirements.
A condensed reference of the major AI governance frameworks and regulations relevant to AI red teaming. Use this to quickly understand which frameworks apply, what they require, and how red teaming supports compliance.
Framework Type Jurisdiction Mandatory? Primary Focus NIST AI RMF Framework US (global influence) Voluntary (but increasingly referenced) Risk management for trustworthy AI EU AI Act Regulation EU (extraterritorial effect) Mandatory (phased implementation) Risk-based AI regulation ISO/IEC 42001 Standard International Voluntary (certifiable) AI management system requirements OWASP LLM Top 10 Industry guidance Global Voluntary LLM-specific security risks MITRE ATLAS Knowledge base Global Voluntary Adversary TTPs for AI/ML EO 14110 Executive Order US Mandatory for federal agencies Safe, secure, and trustworthy AI NIST AI 100-2 Guidelines US (global influence) Voluntary Adversarial ML taxonomy
Function Purpose Red Teaming Connection GOVERN Establish policies, roles, accountability, and culture for AI risk management Red team findings inform policy decisions about acceptable risk thresholds and governance requirements MAP Identify and understand AI risks in context -- who is affected, what can go wrong Red team threat modeling contributes to comprehensive risk identification and attack surface mapping MEASURE Assess, analyze, and quantify AI risks through testing and evaluation Red teaming is a primary Measure function activity -- it empirically evaluates AI system vulnerabilities MANAGE Prioritize, treat, and monitor identified risks Red team findings drive remediation priorities, and retesting validates that risk treatments work
The NIST AI RMF defines seven trustworthiness characteristics. Red teaming directly assesses several:
Characteristic Definition Red Team Relevance Valid and reliable System performs as intended under expected and unexpected conditions Testing unexpected/adversarial inputs Safe System does not endanger human life, health, property, or environment Testing for harmful output generation Secure and resilient System resists unauthorized access and maintains function under attack Core red teaming focus Accountable and transparent System operation can be understood and explained Testing for system prompt leaking, decision explainability Explainable and interpretable System outputs can be understood by stakeholders Assessing whether model decisions can be audited Privacy-enhanced System protects individuals' privacy Testing for PII leakage, training data extraction Fair (bias managed) System does not create unjust impacts across groups Testing for bias in model outputs across demographics
Risk Level Examples Requirements Red Teaming Implication Unacceptable Social scoring, real-time biometric identification (with exceptions), manipulative AI Prohibited Not applicable -- system cannot be deployed High-risk AI in employment decisions, credit scoring, law enforcement, critical infrastructure, education Mandatory risk management, testing, documentation, human oversight, transparency Red teaming is essential for demonstrating compliance with robustness and cybersecurity requirements Limited risk Chatbots, deepfake generators Transparency obligations (disclose AI interaction) Red teaming recommended but not mandatory Minimal risk Spam filters, AI in games No specific requirements Red teaming optional
GPAI Category Requirements Red Teaming Relevance All GPAI models Technical documentation, training data transparency, copyright compliance Documentation of model capabilities and limitations GPAI with systemic risk (above compute threshold)Model evaluation, adversarial testing, incident reporting, cybersecurity measures Mandatory adversarial testing -- red teaming is explicitly required
Date Milestone August 2024 AI Act entered into force February 2025 Prohibitions on unacceptable-risk AI apply August 2025 GPAI provisions apply August 2026 High-risk AI system requirements apply August 2027 Some high-risk categories (Annex I) fully apply
High-risk AI systems require:
Risk management system documentation
Data governance measures
Technical documentation of the system
Record-keeping and logging
Transparency and user information
Human oversight measures
Accuracy, robustness, and cybersecurity documentation
Conformity assessment results
ISO/IEC 42001 follows the Annex SL management system structure (similar to ISO 27001):
Clause Topic Key Requirements 4 Context Understand the organization, stakeholders, and scope of the AIMS 5 Leadership Management commitment, AI policy, roles and responsibilities 6 Planning Risk assessment, AI-specific risk identification, objectives 7 Support Resources, competence, awareness, communication, documentation 8 Operation Operational planning, AI system lifecycle management, third-party considerations 9 Performance evaluation Monitoring, measurement, internal audit, management review 10 Improvement Nonconformity, corrective action, continual improvement
Control Area Examples of Controls Red Teaming Connection AI policies AI impact assessment policy, responsible AI policy Red team scope should align with organizational AI policy AI system lifecycle Development process, verification and validation Red teaming as validation activity Data management Data quality, data provenance, bias management Testing for training data poisoning, data extraction AI system monitoring Performance monitoring, anomaly detection Validating monitoring effectiveness through testing Third-party management Supplier assessment, contract requirements Assessing supply chain risks in AI components
Aspect ISO 27001 ISO 42001 Scope Information security AI management Risk focus Confidentiality, integrity, availability AI-specific risks (bias, safety, transparency, security) Controls Annex A information security controls Annex A AI-specific controls + Annex B guidance Relationship Foundational for IT security Builds on 27001; references it for security controls Certification Widely established Newer; growing adoption
Area Requirement Agency Safety testing Developers of powerful AI must share safety test results with the government Commerce (NIST) Red teaming NIST to develop guidelines for AI red teaming NIST Standards Develop standards for AI safety and security NIST Watermarking Develop standards for AI content authentication and watermarking Commerce Privacy Evaluate AI's impact on privacy Various Equity Address AI's potential to exacerbate discrimination Various
Companion document providing taxonomy and terminology for adversarial ML:
Attack Category Definition Evasion Manipulating inputs at inference time to cause misclassification Poisoning Manipulating training data to compromise model behavior Privacy Extracting information about training data or model internals Abuse Using AI systems for unintended, harmful purposes
Sector Regulation/Framework AI-Specific Implications Healthcare HIPAA, FDA SaMD guidance PHI protection in AI pipelines, medical device classification for diagnostic AI, adverse event reporting Finance SR 11-7 (Model Risk Management), Fair Lending, SEC/FINRA Model validation requirements, explainability for credit decisions, algorithmic trading oversight Critical Infrastructure CISA guidelines, sector-specific standards Resilience requirements for AI in energy/water/transport, incident reporting obligations Government EO 14110, OMB guidance, FedRAMP Safety testing requirements, authorized AI use cases, procurement standards Education FERPA, state-level AI in education laws Student data protection in AI tutoring, bias in grading AI Employment NYC Local Law 144, EEOC guidance Bias auditing for AI hiring tools, notice and disclosure requirements
How red teaming activities map to compliance requirements:
Red Teaming Activity NIST AI RMF EU AI Act ISO 42001 Threat modeling MAP function Risk management system (Art. 9) Clause 6 (Planning) Vulnerability testing MEASURE function Robustness testing (Art. 15) Clause 8 (Operation) Bias testing MEASURE function Non-discrimination (Art. 10) Annex A (Data management) Penetration testing MEASURE function Cybersecurity (Art. 15) Annex A + ISO 27001 Report writing MANAGE function Technical documentation (Art. 11) Clause 7 (Documentation) Remediation verification MANAGE function Conformity assessment (Art. 43) Clause 9 (Performance evaluation) Continuous monitoring MANAGE function Post-market monitoring (Art. 72) Clause 9 (Monitoring)
Scenario Required? Framework/Regulation High-risk AI system in EU Effectively yes EU AI Act Art. 9, 15 GPAI with systemic risk Explicitly yes EU AI Act Art. 55 US federal AI deployment Strongly recommended EO 14110, NIST AI RMF ISO 42001 certification Part of validation ISO 42001 Clause 8 Financial services AI (US) Required by guidance SR 11-7, OCC guidance Healthcare AI (FDA-regulated) Required for SaMD FDA pre-market requirements Any AI handling PII Strongly recommended GDPR Art. 25, 35 (DPIA)
Related Articles IntermediateRegulatory & Standards Landscape 2026 Comprehensive mapping of the 2026 AI regulatory landscape including EU AI Act Article 55, NIST AI RMF, MITRE ATLAS, and OWASP Top 10 for LLMs, with compliance checklists, penalty structures, and regulatory timelines.
AdvancedCapstone: Implement an AI Compliance Framework Build a comprehensive AI compliance framework that maps security testing to regulatory requirements including the EU AI Act, NIST AI RMF, and ISO 42001.
IntermediateEU AI Act: Comprehensive Analysis Comprehensive analysis of the EU AI Act including risk tiers, obligations, and enforcement timeline.
Previous
OWASP LLM Top 10 2025 Reference
Next
AI Incident Response Checklist