Post-Executive Order AI Governance Landscape
The US AI governance landscape after the rescission of Executive Order 14110: what was lost, what remains, and how it affects AI red teaming practice and the broader regulatory environment.
The regulatory landscape for AI in the United States shifted dramatically between October 2023 and January 2025. Understanding this shift is essential for AI red teamers -- not because red teaming requires a government mandate, but because the regulatory environment shapes what organizations prioritize, fund, and staff. When mandatory requirements become voluntary guidelines, the demand for security testing changes accordingly.
Biden's Executive Order 14110: What It Established
Background
On October 30, 2023, President Biden signed Executive Order 14110, titled "Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence." It was the most significant US federal action on AI safety to that point.
Key Provisions
The executive order was extensive, spanning dozens of directives across multiple agencies. The provisions most relevant to AI red teaming and safety included:
| Provision | Responsible Agency | Relevance to Red Teaming |
|---|---|---|
| Safety testing for dual-use foundation models | Commerce / NIST | Directly mandated red teaming and safety evaluation |
| Reporting thresholds | Commerce | Required developers to report results of safety tests for models exceeding compute thresholds |
| NIST AI safety standards | NIST | Directed development of standards and benchmarks for AI safety testing |
| Red teaming guidance | NIST | Specifically called for development of red teaming guidelines and best practices |
| Watermarking and provenance | Commerce | Directed development of standards for identifying AI-generated content |
| Critical infrastructure protection | DHS / Sector-Specific Agencies | Required assessment of AI risks to critical infrastructure |
| Biological and chemical risk | HHS / DHS | Mandated evaluation of AI's potential to assist in creating biological or chemical threats |
| Cybersecurity applications | NSA / CISA | Directed use of AI for cyber defense while managing offensive AI risks |
The Compute Threshold
One of the most consequential provisions was the compute reporting threshold:
| Model Type | Threshold | Approximate Equivalent (at time of EO) |
|---|---|---|
| General-purpose foundation models | 10^26 FLOP | Roughly GPT-4 scale and above |
| Biological sequence models | 10^23 FLOP | Lower threshold reflecting higher risk |
Developers training models above these thresholds were required to:
- Notify the federal government before training begins
- Share results of safety tests, including red teaming, with the government
- Report on the model's potential for misuse in areas like cybersecurity, biological weapons, and critical infrastructure disruption
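Whether a planned training run crosses the reporting threshold can be estimated with the widely used 6·N·D rule of thumb (roughly 6 FLOPs per parameter per training token). A minimal sketch -- the model sizes and token counts below are illustrative assumptions, not published figures:

```python
# Rough check of a training run against the EO 14110 reporting thresholds,
# using the common ~6 * parameters * tokens estimate of training compute.
# All model sizes and token counts here are illustrative assumptions.

GENERAL_THRESHOLD_FLOP = 1e26  # general-purpose foundation models
BIO_THRESHOLD_FLOP = 1e23      # models trained on biological sequence data

def training_flops(params: float, tokens: float) -> float:
    """Estimate total training compute via the 6*N*D rule of thumb."""
    return 6.0 * params * tokens

def would_trigger_reporting(params: float, tokens: float,
                            threshold: float = GENERAL_THRESHOLD_FLOP) -> bool:
    """True if the estimated compute meets or exceeds the threshold."""
    return training_flops(params, tokens) >= threshold

# A hypothetical 70B-parameter model trained on 2T tokens:
print(f"{training_flops(70e9, 2e12):.1e}")        # 8.4e+23 -> below 1e26
print(would_trigger_reporting(70e9, 2e12))        # False

# A hypothetical 1T-parameter model trained on 20T tokens:
print(would_trigger_reporting(1e12, 20e12))       # 1.2e26 FLOP -> True
```

The 6·N·D approximation ignores architecture-specific overheads, so real compliance determinations would need an exact accounting, but it illustrates why only the largest frontier runs ever approached the 10^26 threshold.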
NIST's Role
The executive order placed NIST at the center of AI safety standardization, directing it to:
- Develop guidelines for red teaming and safety evaluation of AI systems
- Create benchmarks for testing AI system safety and security
- Establish standards for AI system transparency and accountability
- Coordinate with international standards bodies on AI safety
NIST responded by building on its existing AI Risk Management Framework (AI RMF) and initiating work on AI safety-specific guidance. This work was well underway when the executive order was rescinded.
The Rescission: January 2025
What Happened
On January 20, 2025, the incoming Trump administration rescinded Executive Order 14110 as part of a broader set of first-day executive actions. Days later it issued a new executive order, "Removing Barriers to American Leadership in Artificial Intelligence," setting a deregulatory direction for federal AI policy.
What Was Lost
The rescission immediately affected several categories of requirements:
| Category | Status Before Rescission | Status After Rescission |
|---|---|---|
| Mandatory safety testing for frontier models | Required for models above compute threshold | No federal requirement |
| Government reporting of training plans and safety test results | Required | No longer required |
| NIST safety standards mandate | NIST directed to develop standards under federal mandate | NIST continues voluntary work but with reduced urgency and scope |
| Red teaming guidance | NIST directed to develop official guidelines | Guidelines remain as voluntary resources |
| Watermarking standards | Commerce directed to develop | Reduced priority |
| Agency-specific AI assessments | Required across multiple agencies | Halted or deprioritized |
What Remains
Not everything was lost with the rescission. Several sources of AI safety requirements and incentives remain:
| Source | Type | Status |
|---|---|---|
| Voluntary industry commitments | Self-regulation | Still in effect for signatories (but voluntary) |
| NIST AI RMF | Voluntary framework | Still available and maintained |
| State-level legislation | Binding law (varies by state) | Active and expanding |
| EU AI Act | Binding regulation (for EU market) | In force, with obligations phasing in through 2027; applies to US companies serving EU users |
| Sector-specific regulation | Federal agency rules | Some agencies (FDA, NHTSA, SEC) maintain AI-related requirements within their existing authority |
| FTC enforcement | Consumer protection law | FTC retains authority to act against deceptive or unfair AI practices under existing consumer protection law |
| Contractual requirements | Business-to-business | Enterprise customers increasingly require AI safety testing as a procurement condition |
Voluntary Commitments: The White House Pledges
Background
In July 2023 -- before EO 14110 -- the Biden administration secured voluntary commitments from major AI companies (Amazon, Anthropic, Google, Inflection, Meta, Microsoft, OpenAI). These commitments included:
| Commitment | Scope | Enforcement |
|---|---|---|
| Internal and external red teaming before deployment | All frontier models | Self-reported; no external verification |
| Sharing safety information with government and peers | Safety-critical findings | Voluntary disclosure |
| Investing in cybersecurity for model weights | Model security | Self-managed |
| Developing watermarking technology | AI-generated content | Self-managed |
| Publicly reporting model capabilities and limitations | Transparency | Model cards and system cards |
| Researching societal risks | Bias, fairness, societal impact | Self-directed research |
Current Status
These commitments technically remain in effect -- companies made voluntary pledges, and the rescission of EO 14110 does not rescind voluntary commitments. However:
- There is no mechanism to enforce these commitments
- There are no penalties for non-compliance
- Some companies have been more transparent about their practices than others
- The commitments were made in a different political environment and may be deprioritized
State-Level AI Legislation
The Patchwork Landscape
With limited federal action, US states have become the primary domestic source of AI regulation. As of early 2026, the landscape includes:
| State | Legislation | Key Provisions | Relevance to Red Teaming |
|---|---|---|---|
| Colorado | SB 24-205 (Colorado AI Act, effective February 2026) | Requires impact assessments for "high-risk" AI systems; developers must test for algorithmic discrimination | Directly creates testing requirements for AI systems used in consequential decisions |
| California | Various bills (SB 1047 vetoed, others pending) | SB 1047 would have required safety testing for large models; subsequent legislation addresses narrower AI issues | Ongoing legislative activity; most significant state-level AI bills originate here |
| Illinois | AI Video Interview Act; pending broader legislation | Requires disclosure and consent for AI in hiring | Sector-specific testing implications for AI hiring tools |
| New York City | Local Law 144 | Requires bias audits for automated employment decision tools | Creates specific bias testing requirements |
| Texas | Texas AI Advisory Council; pending legislation | Advisory body studying AI regulation; legislation in development | Future requirements likely |
| Connecticut | AI Act (pending) | Broad AI transparency and accountability requirements | If enacted, would create testing obligations |
The Colorado AI Act in Detail
The Colorado AI Act (SB 24-205) is the most comprehensive state-level AI regulation in the US and deserves specific attention:
| Requirement | Detail | Impact on Red Teaming |
|---|---|---|
| Impact assessment | Developers and deployers of high-risk AI must conduct impact assessments | Creates demand for structured AI risk assessment |
| Discrimination testing | Must test for algorithmic discrimination based on protected characteristics | Requires bias and fairness testing expertise |
| Transparency | Must disclose when AI is used in consequential decisions | Red teams should test whether disclosure mechanisms work correctly |
| Risk management | Must implement risk management programs | Creates ongoing need for security and safety testing |
| Effective date | February 1, 2026 | Requirements are now active |
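The discrimination-testing requirement is often screened with the selection-rate (adverse impact) ratio, evaluated against the informal "four-fifths" rule from US employment practice. A minimal sketch with made-up group outcomes -- a real audit would use far richer statistics:

```python
# Disparate-impact screen: compare each group's favorable-outcome rate to the
# most favored group's rate. A ratio below ~0.8 (the informal "four-fifths"
# rule) is a common flag for further investigation. Data is illustrative.

def selection_rates(outcomes: dict[str, tuple[int, int]]) -> dict[str, float]:
    """outcomes maps group -> (favorable_decisions, total_decisions)."""
    return {g: fav / total for g, (fav, total) in outcomes.items()}

def impact_ratios(outcomes: dict[str, tuple[int, int]]) -> dict[str, float]:
    """Each group's selection rate relative to the most favored group."""
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return {g: r / best for g, r in rates.items()}

# Hypothetical loan-approval outcomes by demographic group:
data = {"group_a": (80, 100), "group_b": (56, 100)}
ratios = impact_ratios(data)
flagged = {g for g, r in ratios.items() if r < 0.8}
print(ratios)   # group_b's ratio is 0.70
print(flagged)  # {'group_b'}
```

A ratio below the threshold is a screen, not a legal conclusion; statistical significance and legitimate business justification both matter in a full assessment.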
The EU AI Act: The Major Remaining Framework
Why the EU AI Act Matters for US Red Teamers
Even without US federal mandates, the EU AI Act remains a significant driver of AI safety testing:
- Extraterritorial scope: Applies to any AI system used in the EU, regardless of where the developer is based
- Major US AI companies serve EU users: All major frontier model providers must comply
- Specific testing requirements: High-risk AI systems require conformity assessments that include safety and security testing
- Penalties: Fines up to 35 million EUR or 7% of global annual turnover, whichever is higher, for the most serious violations
EU AI Act Requirements Relevant to Red Teaming
| Requirement | Article | Red Teaming Connection |
|---|---|---|
| Risk management system | Art. 9 | Must include testing for known and foreseeable risks |
| Data quality and governance | Art. 10 | Training data must be tested for bias and quality issues |
| Technical documentation | Art. 11 | Must document testing methodology and results |
| Transparency and information | Art. 13 | System capabilities and limitations must be tested and documented |
| Human oversight | Art. 14 | Human-in-the-loop mechanisms must be tested for effectiveness |
| Accuracy, robustness, cybersecurity | Art. 15 | Directly requires adversarial robustness testing |
| Post-market monitoring | Art. 72 | Ongoing monitoring and testing after deployment |
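Article 15's robustness requirement is commonly operationalized as output stability under small input perturbations. A minimal, model-agnostic sketch -- `classify` is a stand-in for whatever system is under test, and the toy keyword classifier is purely illustrative:

```python
import random

# Robustness probe: measure how often a model's decision changes under small
# character-level perturbations of the input. Swap in a real model call for
# `classify`; the toy classifier below is illustrative only.

def perturb(text: str, rng: random.Random) -> str:
    """Drop one random character -- a minimal input perturbation."""
    if len(text) < 2:
        return text
    i = rng.randrange(len(text))
    return text[:i] + text[i + 1:]

def robustness_score(classify, text: str, trials: int = 100,
                     seed: int = 0) -> float:
    """Fraction of perturbed inputs whose label matches the original's."""
    rng = random.Random(seed)
    baseline = classify(text)
    stable = sum(classify(perturb(text, rng)) == baseline
                 for _ in range(trials))
    return stable / trials

# Toy classifier: flags any text containing the word "attack".
toy = lambda t: "flagged" if "attack" in t.lower() else "ok"
print(robustness_score(toy, "describe an attack on the system"))
```

A score well below 1.0 indicates the decision boundary is sensitive to trivial input changes, which is exactly the kind of finding an Art. 15 conformity assessment should document.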
For a detailed treatment of EU AI Act compliance testing, see EU AI Act Compliance Testing.
Impact on the AI Red Teaming Profession
Before EO 14110 (Pre-October 2023)
| Aspect | Status |
|---|---|
| Demand drivers | Voluntary safety efforts, competitive differentiation, liability concerns |
| Who hired red teamers | Primarily frontier AI labs (internal teams) |
| Standardization | Minimal; ad-hoc approaches |
| Reporting requirements | None (federal); some sector-specific |
| Career path | Emerging; no established role definitions |
During EO 14110 (October 2023 - January 2025)
| Aspect | Status |
|---|---|
| Demand drivers | Federal mandate (for frontier models) + all previous drivers |
| Who hired red teamers | Frontier labs, government contractors, compliance consultancies |
| Standardization | NIST actively developing standards and guidelines |
| Reporting requirements | Required for models above compute threshold |
| Career path | Growing; federal endorsement legitimized the field |
After Rescission (January 2025 - Present)
| Aspect | Status |
|---|---|
| Demand drivers | EU AI Act compliance, state legislation, enterprise risk management, voluntary commitments |
| Who hired red teamers | Frontier labs (continued), EU-facing companies, regulated industries, enterprises with AI risk programs |
| Standardization | NIST work continues but at reduced pace; OWASP, MITRE provide community-driven standards |
| Reporting requirements | None (federal); state and EU requirements apply selectively |
| Career path | Established but less government-backed; increasingly private sector and compliance-driven |
Current State of US AI Regulation (Early 2026)
Federal Level
| Entity | Current AI Stance | Implications |
|---|---|---|
| White House | Pro-innovation; minimal prescriptive safety requirements | No new federal AI safety mandates expected in near term |
| Congress | Multiple bills introduced but limited progress | Bipartisan interest in AI legislation but no comprehensive bill likely to pass soon |
| NIST | Continues AI RMF and safety work as voluntary resources | Standards available but not mandated |
| FTC | Active on AI-related consumer protection enforcement | Can address deceptive AI practices under existing authority |
| FDA | Maintains authority over AI in medical devices | Sector-specific AI requirements continue |
| SEC | Focused on AI use in financial services | Existing financial regulation applies to AI-driven decisions |
| DOD | Active AI ethics and testing programs | Military AI testing has separate governance structure |
The Voluntary vs. Mandatory Spectrum
The current US approach can be characterized as:
```
More Mandatory ◄─────────────────────────────────────────► More Voluntary

EU AI Act       State Laws      Sector Regs      NIST RMF      Industry Pledges
(binding,       (binding,       (binding,        (voluntary    (voluntary,
 extraterr.)    state-level)    sector-spec.)    framework)    self-enforced)
```
The US has moved toward the voluntary end of this spectrum at the federal level, while state legislation and EU requirements provide binding constraints in specific contexts.
What This Means for Practitioners
For AI Developers
| Situation | Guidance |
|---|---|
| Serving EU users | Must comply with EU AI Act requirements, including safety testing |
| Operating in Colorado | Must comply with Colorado AI Act starting February 2026 |
| Building for regulated industries (healthcare, finance) | Sector-specific requirements still apply |
| Building frontier models | No federal mandate, but voluntary commitments and market expectations apply |
| Building enterprise AI tools | Increasing customer requirements for safety documentation and testing |
For Red Teamers
| Situation | Guidance |
|---|---|
| Evaluating frontier models | Same methodologies apply regardless of regulatory environment |
| Compliance-focused engagements | Know which regulations apply to the client's specific situation |
| Scope discussions | Regulatory requirements (EU, state) can help justify scope and budget |
| Reporting | Frame findings in terms of applicable standards (NIST AI RMF, OWASP LLM Top 10, MITRE ATLAS) |
| Career development | Build expertise in both technical red teaming and regulatory frameworks |
For Organizations
- Do not assume "no federal mandate" means "no requirement" -- state laws, EU regulations, and sector-specific rules still apply
- Voluntary is not optional for reputation -- organizations that skip safety testing face brand risk, liability exposure, and potential loss of enterprise customers
- Standards still matter -- NIST AI RMF, OWASP LLM Top 10, and MITRE ATLAS provide recognized frameworks for demonstrating due diligence
- The regulatory pendulum swings -- federal requirements may return in a future administration; building safety testing capacity now avoids scrambling later
Looking Forward
Likely Developments
| Development | Likelihood | Timeline |
|---|---|---|
| Additional state-level AI legislation | High | Ongoing (2026-2027) |
| Federal AI legislation (narrow, bipartisan) | Moderate | 2026-2027 if at all |
| Comprehensive federal AI regulation | Low (current administration) | Not in current term |
| EU AI Act enforcement actions | High | 2026 onwards as implementation dates pass |
| Industry self-regulation expansion | High | Ongoing |
| International coordination on AI standards | Moderate | Ongoing through G7, OECD, ISO |
The Standards Landscape
Even without federal mandates, standards organizations continue developing AI safety and testing standards:
| Standard | Organization | Status | Relevance |
|---|---|---|---|
| AI RMF 1.0 | NIST | Published | Voluntary risk management framework |
| OWASP LLM Top 10 | OWASP | Published, regularly updated | Industry-standard vulnerability taxonomy |
| MITRE ATLAS | MITRE | Published, regularly updated | Adversarial attack knowledge base |
| ISO/IEC 42001 | ISO | Published | AI management system standard |
| ISO/IEC 23894 | ISO | Published | AI risk management guidance |
| EU harmonized standards | CEN/CENELEC | In development | Will provide presumption of conformity with EU AI Act |
Further Reading
- EU AI Act Compliance Testing -- Detailed treatment of EU AI Act requirements and testing methodology
- OWASP LLM Top 10 -- Industry-standard vulnerability taxonomy for LLM applications
- MITRE ATLAS -- Knowledge base of adversarial tactics and techniques against AI systems
- NIST AI RMF & ISO 42001 -- Voluntary risk management frameworks
Related Topics
- Cross-Framework Mapping - How different regulatory and standards frameworks relate to each other
- Constitutional Classifiers - Defense approaches that may satisfy regulatory testing requirements
- Alignment Faking - Safety research that informs regulatory discussions about model evaluation
References
- Executive Order 14110, "Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence" - The White House (October 30, 2023) - The original executive order establishing federal AI safety requirements
- "Revocation of Executive Order 14110" - The White House (January 20, 2025) - The executive action rescinding EO 14110
- Colorado SB 21-169, "Concerning Consumer Protections in Interactions with Artificial Intelligence Systems" - Colorado General Assembly (2021, amended 2024) - The most comprehensive US state-level AI regulation
- Regulation (EU) 2024/1689 (EU AI Act) - European Parliament and Council (2024) - The EU's comprehensive AI regulation
- NIST AI Risk Management Framework (AI RMF 1.0) - National Institute of Standards and Technology (2023) - The US voluntary framework for AI risk management
- "Voluntary AI Commitments" - The White House (July 2023) - The voluntary pledges from major AI companies