International AI Security Law
How AI security testing laws differ across jurisdictions: EU AI Act, US executive orders, UK AI Safety Institute, China AI regulations, and their impact on red teaming scope.
AI regulation is evolving rapidly across jurisdictions, and no two countries take the same approach. For red teamers, this means the legality, scope, and reporting obligations of a given engagement can change dramatically depending on where the system is deployed, where the tester is located, and where the data resides.
Regulatory Approaches Compared
| Dimension | EU | United States | United Kingdom | China |
|---|---|---|---|---|
| Primary instrument | AI Act (Regulation 2024/1689) | Executive orders, sector-specific rules | Voluntary framework + AI Safety Institute | Interim Generative AI Measures + sector rules |
| Approach | Risk-based, prescriptive | Sector-specific, fragmented | Pro-innovation, principles-based | State control, content-focused |
| Red team requirements | Mandatory for high-risk systems | Federal agencies (EO 14110) | Voluntary, AISI-encouraged | Required for public-facing generative AI |
| Extraterritorial reach | Yes (affects systems serving EU users) | Limited | Limited | Yes (affects services accessible in China) |
| Penalties | Up to 7% global annual turnover | Varies by sector | None (voluntary framework) | Administrative penalties, service shutdown |
European Union: The AI Act
The EU AI Act is the world's first comprehensive AI regulation and has the most direct impact on red teaming activities.
Risk Classification
| Risk Tier | Examples | Testing Requirements |
|---|---|---|
| Unacceptable risk | Social scoring, real-time biometric surveillance | Prohibited -- no testing needed, system cannot be deployed |
| High risk | AI in hiring, credit scoring, law enforcement, critical infrastructure | Mandatory conformity assessment, including adversarial testing |
| Limited risk | Chatbots, AI-generated content | Transparency obligations, voluntary testing |
| Minimal risk | Spam filters, AI in video games | No specific requirements |
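The tiering above can be encoded as a simple lookup for triaging engagement scope. Below is a minimal sketch: the tier names and example use cases come from the table, but the use-case keys and the `triage` helper are our own illustration, not an official taxonomy -- real classification requires legal review of Annex III and the Article 5 prohibited-practices list.

```python
# Illustrative mapping of example use cases to EU AI Act risk tiers.
# Keys are examples from the table above, NOT an official taxonomy.
EU_RISK_TIERS = {
    "social_scoring": "unacceptable",
    "realtime_biometric_surveillance": "unacceptable",
    "hiring": "high",
    "credit_scoring": "high",
    "law_enforcement": "high",
    "critical_infrastructure": "high",
    "chatbot": "limited",
    "ai_generated_content": "limited",
    "spam_filter": "minimal",
    "video_game_ai": "minimal",
}

TESTING_REQUIRED = {
    "unacceptable": "prohibited -- deployment itself is illegal",
    "high": "mandatory conformity assessment incl. adversarial testing",
    "limited": "transparency obligations; testing voluntary",
    "minimal": "no specific requirements",
}

def triage(use_case: str) -> str:
    """Return the testing posture for a use case, flagging anything
    outside the lookup for manual legal review."""
    tier = EU_RISK_TIERS.get(use_case)
    if tier is None:
        return "unknown -- escalate to legal review"
    return f"{tier}: {TESTING_REQUIRED[tier]}"

print(triage("credit_scoring"))
# high: mandatory conformity assessment incl. adversarial testing
```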
Red Teaming Obligations Under the AI Act
For high-risk AI systems, Article 9 requires providers to implement a risk management system that includes the following (a sketch for tracking test coverage of these elements appears after the list):
- Identification and analysis of known and foreseeable risks
- Testing with "previously unknown inputs" (adversarial testing)
- Evaluation against "reasonably foreseeable misuse"
- Testing against risks to health, safety, and fundamental rights
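One way a red team can evidence these Article 9 elements is to track each requirement against concrete test activities. The sketch below is our own scaffolding, assuming nothing beyond the four elements listed above; the dataclass, field names, and example activities are illustrative, not terminology from the Act.

```python
from dataclasses import dataclass, field

@dataclass
class Article9Item:
    """One Article 9 risk-management element plus the test
    evidence collected against it during the engagement."""
    requirement: str
    test_activities: list[str] = field(default_factory=list)

    @property
    def covered(self) -> bool:
        return bool(self.test_activities)

coverage = [
    Article9Item("known and foreseeable risks identified"),
    Article9Item("testing with previously unknown inputs",
                 ["prompt-interface fuzzing", "out-of-distribution corpus"]),
    Article9Item("reasonably foreseeable misuse evaluated",
                 ["jailbreak suite", "dual-use prompt scenarios"]),
    Article9Item("health, safety, fundamental-rights risks tested"),
]

for item in coverage:
    status = "OK " if item.covered else "GAP"
    print(f"[{status}] {item.requirement}")
```

Gaps surfaced this way map directly onto the conformity-assessment documentation a provider must produce.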
General-Purpose AI (GPAI) Models
Models with systemic risk (presumed under the Act when cumulative training compute exceeds 10^25 FLOPs; a back-of-envelope check is sketched after this list) face additional obligations:
- Adversarial testing to identify systemic risks
- Documented testing methodology
- Incident reporting for serious incidents
- Cybersecurity protections for model weights
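Whether a model crosses the 10^25 FLOP presumption can be roughly estimated from parameter count and training tokens. The sketch below uses the common 6·N·D approximation for dense-transformer training compute; the model size and token count are hypothetical, and the real determination rests on the provider's documented compute, not this heuristic.

```python
SYSTEMIC_RISK_FLOPS = 1e25  # EU AI Act presumption threshold

def training_flops(params: float, tokens: float) -> float:
    """Rough dense-transformer estimate: ~6 FLOPs per parameter
    per training token (forward + backward pass)."""
    return 6 * params * tokens

# Hypothetical model: 400B parameters trained on 15T tokens.
flops = training_flops(params=4e11, tokens=1.5e13)
print(f"{flops:.2e} FLOPs -> systemic risk presumed: "
      f"{flops > SYSTEMIC_RISK_FLOPS}")
# 3.60e+25 FLOPs -> systemic risk presumed: True
```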
United States: Fragmented Federal Approach
The US lacks a comprehensive federal AI law. Instead, AI testing obligations arise from:
Executive Order 14110 (2023)
- Developers of dual-use foundation models must report red-team safety test results to the federal government (under Defense Production Act authority)
- Federal agencies must conduct red team testing before deploying AI
- NIST directed to develop AI testing standards (see NIST AI RMF)
Sector-Specific Regulations
| Sector | Regulator | AI Testing Relevance |
|---|---|---|
| Financial services | OCC, FDIC, Fed | Model risk management (SR 11-7) requires validation |
| Healthcare | FDA | AI/ML-based Software as a Medical Device (SaMD) requires clinical validation |
| Employment | EEOC | AI hiring tools subject to disparate impact analysis |
| Housing | HUD | Fair Housing Act applies to AI-assisted decisions |
| Automotive | NHTSA | Autonomous vehicle AI testing requirements |
State-Level Activity
States are filling the federal gap with their own AI legislation. Colorado, California, Illinois, and Texas have enacted or proposed laws with testing requirements. See the legal frameworks overview for a state-by-state breakdown.
United Kingdom: Pro-Innovation Framework
The UK has taken a deliberately different approach from the EU, favoring flexibility over prescriptive regulation.
AI Safety Institute (AISI)
The UK AI Safety Institute conducts pre-deployment testing of frontier AI models. Key aspects:
- Voluntary agreements with major AI labs for pre-release access
- Evaluation capabilities for dangerous capabilities, societal harms, and misuse potential
- Published research on evaluation methodology and findings
- International coordination with US AISI and other bodies
Regulatory Sandbox Approach
The UK encourages security research through regulatory sandboxes, and relies on the principle that existing regulators (FCA, Ofcom, CMA, and others) should apply AI governance within their existing mandates rather than through new AI-specific legislation.
China: Content Control Focus
China's AI regulations focus heavily on content control and alignment with "socialist core values." Key instruments:
- Interim Measures for the Management of Generative AI Services (2023): Requires safety assessments before public-facing generative AI services launch
- Algorithm Recommendation Regulation (2022): Requires algorithmic transparency and user control
- Deep Synthesis Regulation (2023): Governs deepfakes and synthetic media
Red Teaming Implications
| Aspect | China's Approach |
|---|---|
| Mandatory testing | Required before public deployment |
| Focus areas | Content safety, political sensitivity, "socialist core values" alignment |
| Data localization | AI systems must use domestically stored training data |
| Foreign tester access | Extremely restricted; local entities preferred |
| Reporting | Findings must be reported to the Cyberspace Administration |
Navigating Multi-Jurisdictional Engagements
Many AI red teaming engagements involve multiple jurisdictions. A common scenario: a US-based tester evaluating an AI system deployed by a UK company serving EU users, built on a model from a US provider. The steps below outline how to resolve which rules govern; a sketch of the mapping follows them.
1. Map all jurisdictions. Identify where the tester is located, where the system is hosted, where the model provider is incorporated, and where the end users are located.
2. Identify the most restrictive requirements. The EU AI Act's extraterritorial reach means EU rules often apply even when the system is hosted outside the EU; apply the most restrictive standard.
3. Address data transfer rules. GDPR and other data protection laws may restrict transferring test data (including model outputs) across borders; ensure your data handling plan accounts for this.
4. Align on reporting obligations. Different jurisdictions have different incident and vulnerability reporting requirements, so the engagement contract should specify which apply.
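Steps 1 and 2 lend themselves to a simple model: enumerate every jurisdiction the engagement touches and take the union of their obligations, which operationalizes "apply the most restrictive standard." A minimal sketch follows; the jurisdiction objects and obligation flags are simplified illustrations of the comparison table above, not legal advice.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Jurisdiction:
    name: str
    mandatory_testing: bool   # adversarial testing legally required?
    incident_reporting: bool  # findings reportable to a regulator?
    data_localization: bool   # test data restricted from leaving borders?

EU = Jurisdiction("EU", mandatory_testing=True,
                  incident_reporting=True, data_localization=False)
US = Jurisdiction("US", mandatory_testing=False,
                  incident_reporting=False, data_localization=False)
UK = Jurisdiction("UK", mandatory_testing=False,
                  incident_reporting=False, data_localization=False)

def engagement_obligations(touched: list[Jurisdiction]) -> dict[str, bool]:
    """Union of obligations across all touched jurisdictions:
    the most restrictive rule on each dimension governs."""
    return {
        "mandatory_testing": any(j.mandatory_testing for j in touched),
        "incident_reporting": any(j.incident_reporting for j in touched),
        "data_localization": any(j.data_localization for j in touched),
    }

# The scenario above: US tester, UK deployer, EU end users.
print(engagement_obligations([US, UK, EU]))
# {'mandatory_testing': True, 'incident_reporting': True,
#  'data_localization': False}
```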
Related Topics
- Legal Frameworks for AI Red Teaming -- US-focused legal analysis including CFAA
- EU AI Act Compliance Testing -- detailed EU AI Act testing methodology
- Authorization, Contracts & Liability -- contract provisions for multi-jurisdictional engagements
- NIST AI RMF & ISO 42001 -- the frameworks regulators reference
References
- "EU Artificial Intelligence Act" - European Parliament (2024) - Articles 9 and 15 on adversarial testing requirements for high-risk AI systems
- "Interim Administrative Measures for Generative AI Services" - Cyberspace Administration of China (2023) - China's regulatory framework for generative AI including security assessment requirements
- "AI Safety Institute Approach to Evaluations" - UK AI Safety Institute (2024) - UK government methodology for pre-deployment AI safety testing
- "Cross-Border Data Transfer Mechanisms and AI Testing" - International Association of Privacy Professionals (2024) - Analysis of GDPR implications for international AI security assessments
- "ISO/IEC 42001:2023 AI Management Systems" - International Organization for Standardization (2023) - International standard for AI governance referenced across jurisdictions