US State AI Legislation
Survey of US state AI legislation including the Colorado AI Act, California AI bills, Illinois BIPA for AI, and the compliance challenges of navigating a patchwork regulatory landscape.
In the absence of comprehensive federal AI legislation, US states have moved forward with their own AI regulations. This creates a fragmented compliance landscape that affects how AI systems are developed, deployed, and tested. Red teamers must understand these laws because they shape engagement scope, determine which tests are legally required, and influence how findings are reported.
Colorado AI Act (SB 24-205)
Colorado enacted the most comprehensive state-level AI regulation in the United States. The Colorado AI Act applies to developers and deployers of "high-risk AI systems" that make consequential decisions about consumers.
Key Requirements
| Requirement | Details | Red Team Relevance |
|---|---|---|
| Risk management | Developers and deployers must implement reasonable risk management programs | Red team testing validates risk management effectiveness |
| Impact assessments | Annual impact assessments for high-risk AI systems | Red team findings inform impact assessment content |
| Algorithmic discrimination prevention | Must use reasonable care to protect against algorithmic discrimination | Bias testing and fairness assessment are directly required |
| Transparency | Must disclose to consumers when AI is used for consequential decisions | Test whether disclosure mechanisms function correctly |
| Documentation | Must maintain documentation of AI system capabilities and limitations | Verify documentation accuracy against actual system behavior |
High-Risk AI System Definition
Colorado defines high-risk AI systems as those that make or are a substantial factor in making consequential decisions in specific domains:
| Domain | Examples | Testing Focus |
|---|---|---|
| Education | Enrollment, discipline, assessment | Bias in educational outcome predictions |
| Employment | Hiring, promotion, termination | Discrimination in resume screening, interview analysis |
| Financial services | Lending, insurance, credit scoring | Fair lending compliance, protected class disparities |
| Healthcare | Treatment recommendations, insurance coverage | Clinical decision bias, coverage recommendation fairness |
| Housing | Rental applications, mortgage approvals | Fair housing compliance, disparate impact testing |
| Legal services | Sentencing recommendations, parole decisions | Bias in risk assessment scores, accuracy verification |
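Across all of these domains, the common testing primitive is a selection-rate comparison across protected classes. A minimal sketch using the four-fifths rule as a screening heuristic (the decision data and group labels are hypothetical; real engagements pair this with statistical significance testing):

```python
from collections import defaultdict

def selection_rates(decisions):
    """Compute per-group selection rates from (group, approved) pairs."""
    totals, approvals = defaultdict(int), defaultdict(int)
    for group, approved in decisions:
        totals[group] += 1
        if approved:
            approvals[group] += 1
    return {g: approvals[g] / totals[g] for g in totals}

def disparate_impact_ratios(decisions):
    """Ratio of each group's selection rate to the highest group's rate.
    Ratios below 0.8 flag potential adverse impact (four-fifths rule)."""
    rates = selection_rates(decisions)
    top = max(rates.values())
    return {g: rate / top for g, rate in rates.items()}

# Hypothetical lending decisions: (demographic group, loan approved?)
decisions = ([("A", True)] * 60 + [("A", False)] * 40
             + [("B", True)] * 40 + [("B", False)] * 60)
ratios = disparate_impact_ratios(decisions)
print(ratios)  # group B's ratio is 0.4/0.6 ≈ 0.67, below the 0.8 threshold
```

The same function applies unchanged whether the decision is a loan approval, a resume screen, or a coverage recommendation; only the data source changes per domain.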
Developer vs. Deployer Obligations
| Obligation | Developer Responsibility | Deployer Responsibility |
|---|---|---|
| Risk management program | Implement during development | Implement for deployment context |
| Impact assessments | Provide technical documentation to deployers | Conduct deployment-specific impact assessments |
| Discrimination testing | Test for algorithmic discrimination | Monitor for discriminatory outcomes in production |
| Transparency | Document system capabilities and limitations | Disclose AI use to consumers |
| Incident reporting | Report known discrimination to deployers and AG | Report discovered discrimination to AG |
California AI Legislation
California has pursued multiple AI-related bills, creating a layered regulatory environment:
Key California AI Laws and Bills
| Law/Bill | Status | Focus | Key Requirements |
|---|---|---|---|
| SB 1047 (Safe and Secure Innovation for Frontier AI Models Act) | Vetoed 2024, revised versions pending | Frontier model safety | Safety testing, kill switch requirements, incident reporting |
| AB 2013 (Generative AI: Training Data Transparency) | Enacted 2024 | Training data transparency | Generative AI developers must publish summaries of training data |
| AB 2885 (AI definition) | Enacted 2024 | Definitional | Establishes state definition of artificial intelligence |
| SB 942 (California AI Transparency Act) | Enacted 2024 | Output labeling | Requires detection tools for AI-generated content, watermarking |
| AB 1008 (CCPA and AI systems) | Enacted 2024 | Personal information in AI | Clarifies that CCPA-protected personal information includes data held in or output by AI systems |
SB 1047 Legacy and Ongoing Impact
Although SB 1047 was vetoed by Governor Newsom in September 2024, its concepts continue to influence California and national policy discussions. The bill would have required:
- Pre-deployment safety testing for frontier models (covered models: roughly 10^26 operations of training compute and over $100M in training cost)
- Kill switch capabilities for deployed models
- Third-party safety audits
- Incident reporting for safety failures
- Whistleblower protections for AI safety concerns
Red team implications: Even without SB 1047, the safety testing concepts it proposed are becoming industry norms. Red teamers should be prepared to conduct the types of assessments the bill envisioned, as revised legislation is expected.
California Consumer Privacy Act (CCPA) and AI
The CCPA and its amendment (CPRA) have AI-specific implications through the California Privacy Protection Agency's rulemaking on automated decision-making technology (ADMT):
| CCPA/CPRA Requirement | AI Application | Red Team Test |
|---|---|---|
| Right to know | Consumers can ask what data AI systems collect about them | Verify data disclosure mechanisms work correctly |
| Right to delete | Consumers can request deletion of data used by AI | Test whether deletion actually removes data from AI systems |
| Right to opt-out | Consumers can opt out of automated decision-making | Verify opt-out mechanisms prevent AI processing |
| Non-discrimination | Cannot discriminate against consumers who exercise rights | Test whether opting out of AI results in degraded service |
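The non-discrimination row lends itself to a paired-account test: issue identical requests with the opt-out exercised and not exercised, then compare outcomes. A minimal sketch under assumed names (`get_response` is a hypothetical client returning a numeric quality score; the stub below stands in for the real service):

```python
def check_opt_out_parity(get_response, test_inputs, tolerance=0.05):
    """Compare service quality for opted-in vs. opted-out users.

    get_response(payload, opted_out) -> numeric quality score
    (e.g. offer value, latency, feature availability).
    A mean gap above `tolerance` flags potential degradation of
    service for users who exercised their opt-out right.
    """
    opted_in = [get_response(x, opted_out=False) for x in test_inputs]
    opted_out = [get_response(x, opted_out=True) for x in test_inputs]
    gap = sum(opted_in) / len(opted_in) - sum(opted_out) / len(opted_out)
    return {"mean_gap": gap, "flag": gap > tolerance}

# Stubbed client for illustration; a real engagement would call the
# target service with paired test accounts.
result = check_opt_out_parity(
    lambda x, opted_out: 0.8 if opted_out else 1.0,
    test_inputs=range(10),
)
print(result)  # mean gap of 0.2 exceeds tolerance, so flag is True
```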
Illinois Biometric Information Privacy Act (BIPA) for AI
Illinois BIPA is the most consequential biometric privacy law in the US, largely because it grants individuals a private right of action with statutory damages, and it has significant implications for AI systems that process biometric data.
BIPA Requirements Affecting AI
| Requirement | Application to AI | Testing Approach |
|---|---|---|
| Written consent | Must obtain consent before collecting biometric data for AI processing | Verify consent collection mechanisms |
| Purpose limitation | Must disclose the specific purpose of biometric data collection | Test whether AI uses biometric data beyond stated purposes |
| Retention schedule | Must establish and follow a retention and destruction schedule | Verify biometric data deletion from AI training sets |
| No sale | Cannot sell, lease, trade, or profit from biometric data | Test whether biometric data is shared with third-party AI systems |
| Data protection | Must store and protect biometric data using reasonable security measures | Security assessment of biometric data storage and processing |
AI Systems Affected by BIPA
| AI System Type | Biometric Data | BIPA Implication |
|---|---|---|
| Facial recognition | Faceprint geometry | Full BIPA compliance required for each individual |
| Voice assistants | Voiceprint | Consent required before voice processing |
| Emotion detection | Facial expressions, voice patterns | Biometric identifier capture triggers BIPA |
| Gait analysis | Movement patterns | May constitute biometric identifier |
| Iris/retina scanning | Eye patterns | Full BIPA compliance required |
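Several of these BIPA obligations reduce to one gating test: biometric processing should fail closed when no purpose-specific written consent is on file. A minimal sketch with hypothetical names (`ConsentRegistry` and `process_faceprint` are illustrative stand-ins, not a real API):

```python
class ConsentRegistry:
    """Minimal stand-in for a written-consent store (hypothetical)."""
    def __init__(self):
        self._consented = set()

    def record(self, user_id, purpose):
        self._consented.add((user_id, purpose))

    def has_consent(self, user_id, purpose):
        return (user_id, purpose) in self._consented

def process_faceprint(user_id, image, registry):
    """Refuse biometric processing without purpose-specific consent,
    per BIPA's written-consent and purpose-limitation rules."""
    if not registry.has_consent(user_id, "face_recognition"):
        raise PermissionError("no BIPA consent on file")
    return f"faceprint({user_id})"  # placeholder for real extraction

registry = ConsentRegistry()
try:
    process_faceprint("u1", b"...", registry)   # should be refused
except PermissionError as e:
    print("blocked:", e)
registry.record("u1", "face_recognition")
print(process_faceprint("u1", b"...", registry))  # now permitted
```

A red team test would attempt processing through every entry point (API, batch pipeline, model retraining job) for a user without recorded consent and confirm each path refuses.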
Other Notable State AI Laws
State-by-State Overview
| State | Law/Focus | Key Requirement | Effective Date |
|---|---|---|---|
| Connecticut | SB 1103 (state agency AI) | AI inventories and impact assessments for state agency systems | 2023–2024 |
| Texas | AI advisory council, deepfake law | Deepfake restrictions, government AI transparency | Various |
| New York | NYC Local Law 144 (automated employment) | Bias audits for automated employment decision tools | 2023 |
| Maryland | HB 1202 (facial recognition) | Restrictions on employer use of facial recognition | 2020 |
| Virginia | VCDPA (AI provisions) | Consumer rights regarding AI profiling | 2023 |
| Utah | AI Policy Act | AI disclosure requirements, regulatory sandbox | 2024 |
| Tennessee | ELVIS Act | Protects voice likeness from AI replication | 2024 |
| Washington | Various bills | AI in hiring, facial recognition moratorium | Various |
NYC Local Law 144: A Case Study
New York City's Local Law 144 specifically regulates automated employment decision tools (AEDTs) and provides a useful model for understanding municipal AI regulation:
| Requirement | Detail | Red Team Relevance |
|---|---|---|
| Bias audit | Annual third-party bias audit required | Must test for disparate impact across protected classes |
| Audit methodology | Must calculate impact ratios and scoring rate comparisons | Quantitative bias testing with statistical analysis |
| Public posting | Audit results must be posted publicly | Findings will be publicly scrutinized |
| Candidate notice | Must notify candidates about AEDT use 10 business days before | Test whether notification mechanisms function |
| Alternative process | Must notify candidates they may request an alternative selection process or accommodation (the law does not itself mandate granting one) | Verify the request pathway exists and responds |
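The impact-ratio arithmetic in the audit methodology row can be shown directly. A minimal sketch of an LL144-style scoring-rate comparison for a tool that outputs continuous scores (the scores and category labels are hypothetical; real audits follow the rules adopted by NYC's Department of Consumer and Worker Protection):

```python
from statistics import median

def scoring_rates(scores_by_category):
    """Scoring rate: share of a category scoring above the
    sample-wide median score."""
    all_scores = [s for scores in scores_by_category.values() for s in scores]
    cutoff = median(all_scores)
    return {c: sum(s > cutoff for s in scores) / len(scores)
            for c, scores in scores_by_category.items()}

def impact_ratios(scores_by_category):
    """Each category's scoring rate over the highest category's rate."""
    rates = scoring_rates(scores_by_category)
    top = max(rates.values())
    return {c: r / top for c, r in rates.items()}

# Hypothetical AEDT scores for two sex categories
scores = {"male": [70, 80, 90, 60], "female": [55, 65, 75, 50]}
print(impact_ratios(scores))  # male 1.0, female ≈ 0.33
```

For binary selection tools the same ratio is computed over selection rates rather than scoring rates; either way, the ratios and underlying rates are what the public audit summary must report.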
Patchwork Compliance Challenges
The Multi-State Problem
Organizations deploying AI nationally face a complex compliance matrix:
| Challenge | Description | Red Team Impact |
|---|---|---|
| Conflicting definitions | States define "AI system" and "high-risk" differently | Must test against the most restrictive applicable definition |
| Varying consent requirements | Consent thresholds differ across states | Consent mechanisms must satisfy the strictest state |
| Different bias testing standards | Some require quantitative audits, others qualitative assessments | Testing methodology must cover all applicable standards |
| Jurisdictional triggers | Unclear which state's law applies in interstate AI services | May need to test under multiple regulatory frameworks |
| Enforcement variability | Some states have AG enforcement only, others allow private action | Risk assessment varies by potential enforcement mechanism |
Compliance Strategy for Red Teams
When scoping multi-state engagements, use a "highest common denominator" approach:
1. Identify applicable jurisdictions. Determine which states' users interact with the AI system. Consider both the organization's location and users' locations as jurisdictional triggers.
2. Map requirements. Create a matrix of requirements across all applicable state laws. Identify the most restrictive requirement for each category (consent, bias testing, transparency, etc.).
3. Test to the highest standard. Design tests that satisfy the most stringent applicable requirements, so a single testing engagement demonstrates compliance with every applicable law.
4. Report per jurisdiction. Structure findings to show compliance status against each applicable state law, so clients can prioritize remediation based on jurisdictional risk.
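The mapping and highest-standard steps can be sketched as a small matrix reduction. The states, categories, and strictness ordering below are illustrative placeholders, not legal conclusions; a real engagement would build the ordering from counsel's reading of each statute:

```python
# Hypothetical strictness ranking per requirement level.
STRICTNESS = {"none": 0, "disclosure_only": 1, "qualitative_review": 2,
              "quantitative_audit": 3}

def strictest_per_category(matrix):
    """Reduce a {state: {category: level}} requirements matrix to the
    strictest level per category, recording which state drives it."""
    merged = {}
    for state, reqs in matrix.items():
        for cat, level in reqs.items():
            if cat not in merged or STRICTNESS[level] > STRICTNESS[merged[cat][0]]:
                merged[cat] = (level, state)
    return merged

matrix = {
    "CO": {"bias_testing": "qualitative_review", "transparency": "disclosure_only"},
    "NYC": {"bias_testing": "quantitative_audit"},
    "CA": {"transparency": "disclosure_only", "opt_out": "qualitative_review"},
}
for cat, (level, state) in strictest_per_category(matrix).items():
    print(f"{cat}: test to {level!r} (driven by {state})")
```

Recording which jurisdiction drives each requirement also feeds the per-state reporting step: the same matrix, read column-wise, becomes the compliance-status table.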
Reporting for Multi-State Compliance
| Report Section | Content |
|---|---|
| Jurisdictional analysis | Which state laws apply and why |
| Requirements matrix | Cross-reference of requirements across applicable states |
| Test results by category | Bias testing, transparency verification, consent mechanism testing |
| Compliance status per state | Pass/fail assessment for each applicable state law |
| Gap analysis | Requirements not currently met, organized by state and priority |
| Remediation roadmap | Prioritized by legal exposure (private right of action states first) |
Anticipating Future Legislation
Legislative Trends
Red teamers should monitor these emerging legislative patterns to stay ahead of compliance requirements:
| Trend | Direction | Implications for Red Teams |
|---|---|---|
| Algorithmic impact assessments | Becoming required in more states | Standard engagement deliverable |
| Bias audit mandates | Expanding beyond employment to all high-risk domains | Quantitative bias testing as a core competency |
| AI transparency requirements | Universal disclosure becoming the norm | Testing disclosure mechanisms becomes routine |
| Right to human review | Growing right to opt out of automated decisions | Testing human override and escalation paths |
| AI incident reporting | Mandatory reporting requirements emerging | Incident simulation and response testing |
| Foundation model regulation | Renewed interest post-SB 1047 veto | Pre-deployment safety testing for model developers |
Federal Preemption Possibility
The prospect of federal AI legislation that preempts state laws remains a possibility. Red teamers should prepare for both scenarios:
- If federal law preempts: Testing requirements would standardize, potentially simplifying engagements but possibly reducing scope
- If no preemption: The patchwork will continue to grow, increasing the complexity and value of comprehensive multi-state compliance testing
Regardless of the federal outcome, state laws create current obligations that organizations must address today. Red team engagements that map findings to specific state requirements provide immediate compliance value while building institutional knowledge that will remain relevant under any future federal framework.