Post-Executive Order AI Governance Landscape
The US AI governance landscape after the rescission of Executive Order 14110: what was lost, what remains, and how it affects AI red teaming practice and the broader regulatory environment.
The regulatory landscape for AI in the United States shifted dramatically between October 2023 and January 2025. Understanding this shift is essential for AI red teamers -- not because red teaming requires a government mandate, but because the regulatory environment shapes what organizations prioritize, fund, and staff. When mandatory requirements become voluntary guidelines, the demand for safety testing changes accordingly.
Biden's Executive Order 14110: What It Established
Background
On October 30, 2023, President Biden signed Executive Order 14110, titled "Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence." It was the most significant US federal action on AI safety to that point.
Key Provisions
The executive order was extensive, spanning dozens of directives across multiple agencies. The provisions most relevant to AI red teaming and safety included:
| Provision | Responsible Agency | Relevance to Red Teaming |
|---|---|---|
| Safety testing for dual-use foundation models | Commerce / NIST | Directly mandated red teaming and safety evaluations |
| Reporting thresholds | Commerce | Required developers to report results of safety tests for models exceeding compute thresholds |
| NIST AI safety standards | NIST | Directed development of standards and benchmarks for AI safety testing |
| Red teaming guidance | NIST | Specifically called for development of red teaming guidelines and best practices |
| Watermarking and provenance | Commerce | Directed development of standards for identifying AI-generated content |
| Critical infrastructure protection | DHS / Sector-Specific Agencies | Required assessment of AI risks to critical infrastructure |
| Biological and chemical risk | HHS / DHS | Mandated assessment of AI's potential to assist in creating biological or chemical threats |
| Cybersecurity applications | NSA / CISA | Directed use of AI for cyber defense while managing offensive AI risks |
The Compute Threshold
One of the most consequential provisions was the compute reporting threshold:
| Model Type | Threshold | Approximate Equivalent (at time of EO) |
|---|---|---|
| General-purpose foundation models | 10^26 FLOP | Roughly GPT-4 scale and above |
| Biological sequence models | 10^23 FLOP | Lower threshold reflecting higher risk |
Developers training models above these thresholds were required to:
- Notify the federal government before training begins
- Share results of safety tests, including red teaming, with the government
- Report on the model's potential for misuse in areas like cybersecurity, biological weapons, and critical infrastructure disruption
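To see what these thresholds mean in practice, the following sketch estimates a training run's total compute using the common rule of thumb of roughly 6 FLOPs per parameter per token for dense transformer training, and checks it against the EO's reporting thresholds. The model size and token count below are illustrative assumptions, not figures from the order itself.

```python
# Reporting thresholds from EO 14110 (total training compute, in FLOP).
GENERAL_PURPOSE_THRESHOLD = 1e26  # general-purpose foundation models
BIO_SEQUENCE_THRESHOLD = 1e23     # biological sequence models

def training_flops(n_params: float, n_tokens: float) -> float:
    """Approximate training compute via the ~6 * params * tokens heuristic."""
    return 6 * n_params * n_tokens

def requires_reporting(flops: float, bio_model: bool = False) -> bool:
    """Would this run have triggered the EO 14110 reporting requirement?"""
    threshold = BIO_SEQUENCE_THRESHOLD if bio_model else GENERAL_PURPOSE_THRESHOLD
    return flops >= threshold

# Hypothetical example: a 70B-parameter model trained on 15T tokens
# lands around 6.3e24 FLOP -- below the general threshold, but well
# above the lower biological-model threshold.
flops = training_flops(70e9, 15e12)
print(f"{flops:.1e}", requires_reporting(flops), requires_reporting(flops, bio_model=True))
```

This illustrates why the two-tier design mattered: a run far too small to trigger general-purpose reporting would still have been reportable if it trained a biological sequence model.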
NIST's Role
The executive order placed NIST at the center of AI safety standardization, directing it to:
- Develop guidelines for red teaming and safety evaluation of AI systems
- Create benchmarks for testing AI system safety and security
- Establish standards for AI system transparency and accountability
- Coordinate with international standards bodies on AI safety
NIST responded by building on its existing AI Risk Management Framework (AI RMF) and initiating work on AI safety-specific guidance. This work was well underway when the executive order was rescinded.
The Rescission: January 2025
What Happened
On January 20, 2025, the incoming Trump administration rescinded Executive Order 14110 as part of a broader set of executive actions on the first day in office. The rescission was accompanied by a new executive order focused on "removing barriers to American AI innovation."
What Was Lost
The rescission immediately affected several categories of requirements:
| Category | Status Before Rescission | Status After Rescission |
|---|---|---|
| Mandatory safety testing for frontier models | Required for models above compute threshold | No federal requirement |
| Government reporting of training plans and safety test results | Required | No longer required |
| NIST safety standards mandate | NIST directed to develop binding standards | NIST continues voluntary work but with reduced urgency and scope |
| Red teaming guidance | NIST directed to develop official guidelines | Guidelines remain as voluntary resources |
| Watermarking standards | Commerce directed to develop | Reduced priority |
| Agency-specific AI assessments | Required across multiple agencies | Halted or deprioritized |
What Remains
Not everything was lost with the rescission. Several sources of AI safety requirements and incentives remain:
| Source | Type | Status |
|---|---|---|
| Voluntary industry commitments | Self-regulation | Still in effect for signatories (but voluntary) |
| NIST AI RMF | Voluntary framework | Still available and maintained |
| State-level legislation | Binding law (varies by state) | Active and expanding |
| EU AI Act | Binding regulation (for EU market) | Fully in effect; applies to US companies serving EU users |
| Sector-specific regulation | Federal agency rules | Some agencies (FDA, NHTSA, SEC) maintain AI-related requirements within their existing authority |
| FTC enforcement | Consumer protection law | FTC retains authority to act against deceptive or unfair AI practices under existing consumer protection law |
| Contractual requirements | Business-to-business | Enterprise customers increasingly require AI safety testing as a procurement condition |
Voluntary Commitments: The White House Pledges
Background
In July 2023 -- before EO 14110 -- the Biden administration secured voluntary commitments from major AI companies (Amazon, Anthropic, Google, Inflection, Meta, Microsoft, OpenAI). These commitments included:
| Commitment | Scope | Enforcement |
|---|---|---|
| Internal and external red teaming before deployment | All frontier models | Self-reported; no external verification |
| Sharing safety information with government and peers | Safety-critical findings | Voluntary disclosure |
| Investing in cybersecurity for model weights | Model security | Self-managed |
| Developing watermarking technology | AI-generated content | Self-managed |
| Publicly reporting model capabilities and limitations | Transparency | Model cards and system cards |
| Researching societal risks | Bias, fairness, societal impact | Self-directed research |
Current Status
These commitments technically remain in effect -- companies made voluntary pledges, and the rescission of EO 14110 does not rescind voluntary commitments. However:
- There is no mechanism to enforce these commitments
- There are no penalties for non-compliance
- Some companies have been more transparent about their practices than others
- The commitments were made in a different political environment and may be deprioritized
State-Level AI Legislation
The Patchwork Landscape
With limited federal action, US states have become the primary domestic source of AI regulation. As of early 2026, the landscape includes:
| State | Legislation | Key Provisions | Relevance to Red Teaming |
|---|---|---|---|
| Colorado | SB 24-205 (AI Act, effective 2026) | Requires impact assessments for "high-risk" AI systems; developers must test for algorithmic discrimination | Directly creates testing requirements for AI systems used in consequential decisions |
| California | Various bills (SB 1047 vetoed, others pending) | SB 1047 would have required safety testing for large models; subsequent legislation addresses narrower AI issues | Ongoing legislative activity; most significant state-level AI bills originate here |
| Illinois | AI Video Interview Act; pending broader legislation | Requires disclosure and consent for AI in hiring | Sector-specific testing implications for AI hiring tools |
| New York City | Local Law 144 | Requires bias audits for automated employment decision tools | Creates specific bias testing requirements |
| Texas | Texas AI Advisory Council; pending legislation | Advisory body studying AI regulation; legislation in development | Future requirements likely |
| Connecticut | AI Act (pending) | Broad AI transparency and accountability requirements | If enacted, would create testing obligations |
The Colorado AI Act in Detail
The Colorado AI Act (SB 24-205) is the most comprehensive state-level AI regulation in the US and deserves specific attention:
| Requirement | Detail | Impact on Red Teaming |
|---|---|---|
| Impact assessments | Developers and deployers of high-risk AI must conduct impact assessments | Creates demand for structured AI risk assessment |
| Discrimination testing | Must test for algorithmic discrimination based on protected characteristics | Requires bias and fairness testing expertise |
| Transparency | Must disclose when AI is used in consequential decisions | Red teams should test whether disclosure mechanisms work correctly |
| Risk management | Must implement risk management programs | Creates ongoing need for safety and security testing |
| Effective date | February 1, 2026 | Requirements are now active |
The EU AI Act: The Major Remaining Framework
Why the EU AI Act Matters for US Red Teamers
Even without US federal mandates, the EU AI Act remains a significant driver of AI safety testing:
- Extraterritorial scope: Applies to any AI system used in the EU, regardless of where the developer is based
- Major US AI companies serve EU users: All major frontier model providers must comply
- Specific testing requirements: High-risk AI systems require conformity assessments that include safety and security testing
- Penalties: Fines up to 35 million EUR or 7% of global annual turnover for the most serious violations
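The penalty structure is worth spelling out: for the most serious violations, the maximum fine is the *higher* of EUR 35 million or 7% of global annual turnover, so the 35 million figure acts as a floor for smaller firms while the percentage dominates for large ones. A minimal sketch (illustrative only; actual fines are set case by case by regulators):

```python
def max_fine_eur(annual_turnover_eur: float) -> float:
    """Maximum EU AI Act fine for the most serious violations:
    the higher of EUR 35M or 7% of global annual turnover."""
    return max(35_000_000, 0.07 * annual_turnover_eur)

# Firm with EUR 100M turnover: 7% is only 7M, so the 35M floor applies.
print(max_fine_eur(100_000_000))
# Firm with EUR 10B turnover: 7% is 700M, dwarfing the floor.
print(max_fine_eur(10_000_000_000))
```

For frontier-scale providers, the turnover-based ceiling is what makes non-compliance a board-level risk rather than a cost of doing business.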
EU AI Act Requirements Relevant to Red Teaming
| Requirement | Article | Red Teaming Connection |
|---|---|---|
| Risk management system | Art. 9 | Must include testing for known and foreseeable risks |
| Data quality and governance | Art. 10 | Training data must be tested for bias and quality issues |
| Technical documentation | Art. 11 | Must document testing methodology and results |
| Transparency and information | Art. 13 | System capabilities and limitations must be tested and documented |
| Human oversight | Art. 14 | Human-in-the-loop mechanisms must be tested for effectiveness |
| Accuracy, robustness, cybersecurity | Art. 15 | Directly requires adversarial robustness testing |
| Post-market monitoring | Art. 72 | Ongoing monitoring and testing after deployment |
For a detailed treatment of EU AI Act compliance testing, see EU AI Act Compliance Testing.
Impact on the AI Red Teaming Profession
Before EO 14110 (Pre-October 2023)
| Aspect | Status |
|---|---|
| Demand drivers | Voluntary safety efforts, competitive differentiation, liability concerns |
| Who hired red teamers | Primarily frontier AI labs (internal teams) |
| Standardization | Minimal; ad-hoc approaches |
| Reporting requirements | None (federal); some sector-specific |
| Career path | Emerging; no established role definitions |
During EO 14110 (October 2023 - January 2025)
| Aspect | Status |
|---|---|
| Demand drivers | Federal mandate (for frontier models) + all previous drivers |
| Who hired red teamers | Frontier labs, government contractors, compliance consultancies |
| Standardization | NIST actively developing standards and guidelines |
| Reporting requirements | Required for models above compute threshold |
| Career path | Growing; federal endorsement legitimized the field |
After Rescission (January 2025 - Present)
| Aspect | Status |
|---|---|
| Demand drivers | EU AI Act compliance, state legislation, enterprise risk management, voluntary commitments |
| Who hired red teamers | Frontier labs (continued), EU-facing companies, regulated industries, enterprises with AI risk programs |
| Standardization | NIST work continues but at reduced pace; OWASP, MITRE provide community-driven standards |
| Reporting requirements | None (federal); state and EU requirements apply selectively |
| Career path | Established but less government-backed; increasingly private sector and compliance-driven |
Current State of US AI Regulation (Early 2026)
Federal Level
| Entity | Current AI Stance | Implications |
|---|---|---|
| White House | Pro-innovation; minimal prescriptive safety requirements | No new federal AI safety mandates expected in near term |
| Congress | Multiple bills introduced but limited progress | Bipartisan interest in AI legislation but no comprehensive bill likely to pass soon |
| NIST | Continues AI RMF and safety work as voluntary resources | Standards available but not mandated |
| FTC | Active on AI-related consumer protection enforcement | Can address deceptive AI practices under existing authority |
| FDA | Maintains authority over AI in medical devices | Sector-specific AI requirements continue |
| SEC | Focused on AI use in financial services | Existing financial regulation applies to AI-driven decisions |
| DOD | Active AI ethics and testing programs | Military AI testing has separate governance structure |
The Voluntary vs. Mandatory Spectrum
The current US approach can be characterized as:
More Mandatory ◄──────────────────────────────────────────────► More Voluntary

EU AI Act           State Laws        Sector Regs         NIST AI RMF      Industry Pledges
(binding,           (binding,         (binding,           (voluntary       (voluntary,
extraterritorial)   state-level)      sector-specific)    framework)       self-enforced)
The US has moved toward the voluntary end of this spectrum at the federal level, while state legislation and EU requirements provide binding constraints in specific contexts.
What This Means for Practitioners
For AI Developers
| Situation | Guidance |
|---|---|
| Serving EU users | Must comply with EU AI Act requirements, including safety testing |
| Operating in Colorado | Must comply with Colorado AI Act starting February 2026 |
| Building for regulated industries (healthcare, finance) | Sector-specific requirements still apply |
| Building frontier models | No federal mandate, but voluntary commitments and market expectations apply |
| Building enterprise AI tools | Increasing customer requirements for safety documentation and testing |
For Red Teamers
| Situation | Guidance |
|---|---|
| Evaluating frontier models | Same methodologies apply regardless of regulatory environment |
| Compliance-focused engagements | Know which regulations apply to the client's specific situation |
| Scope discussions | Regulatory requirements (EU, state) can help justify scope and budget |
| Reporting | Frame findings in terms of applicable standards (NIST AI RMF, OWASP LLM Top 10, MITRE ATLAS) |
| Career development | Build expertise in both technical red teaming and regulatory frameworks |
For Organizations
- Do not assume "no federal mandate" means "no requirement" -- state laws, EU regulations, and sector-specific rules still apply
- Voluntary is not optional for reputation -- organizations that skip safety testing face brand risk, liability exposure, and potential loss of enterprise customers
- Standards still matter -- NIST AI RMF, OWASP LLM Top 10, and MITRE ATLAS provide recognized frameworks for demonstrating due diligence
- The regulatory pendulum swings -- federal requirements may return in a future administration; building safety testing capacity now avoids scrambling later
Looking Forward
Likely Developments
| Development | Likelihood | Timeline |
|---|---|---|
| Additional state-level AI legislation | High | Ongoing (2026-2027) |
| Federal AI legislation (narrow, bipartisan) | Moderate | 2026-2027 if at all |
| Comprehensive federal AI regulation | Low (current administration) | Not in current term |
| EU AI Act enforcement actions | High | 2026 onwards as implementation dates pass |
| Industry self-regulation expansion | High | Ongoing |
| International coordination on AI standards | Moderate | Ongoing through G7, OECD, ISO |
The Standards Landscape
Even without federal mandates, standards organizations continue developing AI safety and testing standards:
| Standard | Organization | Status | Relevance |
|---|---|---|---|
| AI RMF 1.0 | NIST | Published | Voluntary risk management framework |
| OWASP LLM Top 10 | OWASP | Published, regularly updated | Industry-standard vulnerability taxonomy |
| MITRE ATLAS | MITRE | Published, regularly updated | Adversarial attack knowledge base |
| ISO/IEC 42001 | ISO | Published | AI management system standard |
| ISO/IEC 23894 | ISO | Published | AI risk management guidance |
| EU harmonized standards | CEN/CENELEC | In development | Will provide presumption of conformity with EU AI Act |
Further Reading
- EU AI Act Compliance Testing -- Detailed treatment of EU AI Act requirements and testing methodology
- OWASP LLM Top 10 -- Industry-standard vulnerability taxonomy for LLM applications
- MITRE ATLAS -- Knowledge base of adversarial tactics and techniques against AI systems
- NIST AI RMF & ISO 42001 -- Voluntary risk management frameworks
Related Topics
- Cross-Framework Mapping - How different regulatory and standards frameworks relate to each other
- Constitutional Classifiers - Defensive approaches that may satisfy regulatory testing requirements
- Alignment Faking - Safety research that informs regulatory discussions about model evaluation
References
- Executive Order 14110, "Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence" - The White House (October 30, 2023) - The original executive order establishing federal AI safety requirements
- "Revocation of Executive Order 14110" - The White House (January 20, 2025) - The executive action rescinding EO 14110
- Colorado SB 24-205, "Concerning Consumer Protections in Interactions with Artificial Intelligence Systems" - Colorado General Assembly (2024) - The most comprehensive US state-level AI regulation
- Regulation (EU) 2024/1689 (EU AI Act) - European Parliament and Council (2024) - The EU's comprehensive AI regulation
- NIST AI Risk Management Framework (AI RMF 1.0) - National Institute of Standards and Technology (2023) - The US voluntary framework for AI risk management
- "Voluntary AI Commitments" - The White House (July 2023) - The voluntary pledges from major AI companies