US State AI Legislation
Survey of US state AI legislation including the Colorado AI Act, California AI bills, Illinois BIPA for AI, and the compliance challenges of navigating a patchwork regulatory landscape.
In the absence of comprehensive federal AI legislation, US states have moved forward with their own AI regulations. This creates a fragmented compliance landscape that affects how AI systems are developed, deployed, and tested. Red teamers must understand these laws because they shape engagement scope, determine which tests are legally required, and influence how findings are reported.
Colorado AI Act (SB 24-205)
Colorado enacted the most comprehensive state-level AI regulation in the United States. The Colorado AI Act applies to developers and deployers of "high-risk AI systems" that make consequential decisions about consumers.
Key Requirements
| Requirement | Details | Red Team Relevance |
|---|---|---|
| Risk management | Developers and deployers must implement reasonable risk management programs | Red team testing validates risk management effectiveness |
| Impact assessments | Annual impact assessments for high-risk AI systems | Red team findings inform impact assessment content |
| Algorithmic discrimination prevention | Must use reasonable care to protect against algorithmic discrimination | Bias testing and fairness assessments are directly required |
| Transparency | Must disclose to consumers when AI is used for consequential decisions | Test whether disclosure mechanisms function correctly |
| Documentation | Must maintain documentation of AI system capabilities and limitations | Verify documentation accuracy against actual system behavior |
High-Risk AI System Definition
Colorado defines high-risk AI systems as those that make or are a substantial factor in making consequential decisions in specific domains:
| Domain | Examples | Testing Focus |
|---|---|---|
| Education | Enrollment, discipline, assessment | Bias in educational outcome predictions |
| Employment | Hiring, promotion, termination | Discrimination in resume screening, interview analysis |
| Financial services | Lending, insurance, credit scoring | Fair lending compliance, protected class disparities |
| Healthcare | Treatment recommendations, insurance coverage | Clinical decision bias, coverage recommendation fairness |
| Housing | Rental applications, mortgage approvals | Fair housing compliance, disparate impact testing |
| Legal services | Sentencing recommendations, parole decisions | Bias in risk assessment scores, accuracy verification |
Developer vs. Deployer Obligations
| Obligation | Developer Responsibility | Deployer Responsibility |
|---|---|---|
| Risk management program | Implement during development | Implement for deployment context |
| Impact assessments | Provide technical documentation to deployers | Conduct deployment-specific impact assessments |
| Discrimination testing | Test for algorithmic discrimination | Monitor for discriminatory outcomes in production |
| Transparency | Document system capabilities and limitations | Disclose AI use to consumers |
| Incident reporting | Report known discrimination to deployers and AG | Report discovered discrimination to AG |
California AI Legislation
California has pursued multiple AI-related bills, creating a layered regulatory environment:
Key California AI Laws and Bills
| Law/Bill | Status | Focus | Key Requirements |
|---|---|---|---|
| SB 1047 (Safe and Secure Innovation for Frontier AI Models Act) | Vetoed 2024, revised versions pending | Frontier model safety | Safety testing, kill switch requirements, incident reporting |
| AB 2013 (AI Transparency Act) | Enacted 2024 | Training data transparency | AI developers must disclose training data summaries |
| AB 2885 (AI definition) | Enacted 2024 | Definitional | Establishes state definition of artificial intelligence |
| SB 942 (California AI Transparency Act) | Enacted 2024 | Output labeling | Requires detection tools for AI-generated content, watermarking |
| AB 1008 (Automated decision-making) | Pending | Employment decisions | Restrictions on automated employment decision tools |
SB 1047 Legacy and Ongoing Impact
Although SB 1047 was vetoed by Governor Newsom in September 2024, its concepts continue to influence California and national policy discussions. The bill would have required:
- Pre-deployment safety testing for frontier models (compute threshold: 10^26 FLOP or $100M training cost)
- Kill switch capabilities for deployed models
- Third-party safety audits
- Incident reporting for safety failures
- Whistleblower protections for AI safety concerns
Red team implications: Even without SB 1047, the safety testing concepts it proposed are becoming industry norms. Red teamers should be prepared to conduct the types of assessments the bill envisioned, as revised legislation is expected.
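The bill's proposed scope can be summarized as a simple threshold test. This is a minimal sketch using the compute and cost figures the bill cited; the function and argument names are illustrative, not statutory language.

```python
# Sketch of the covered-model threshold test SB 1047 proposed: a frontier
# model would have been in scope if training compute exceeded 10^26 FLOP
# or training cost exceeded $100M. Names here are illustrative.

def would_be_covered(train_flop: float, train_cost_usd: float) -> bool:
    """Apply the bill's proposed frontier-model thresholds."""
    return train_flop > 1e26 or train_cost_usd > 100_000_000

# A hypothetical model below both thresholds would have been out of scope:
print(would_be_covered(5e25, 80_000_000))   # False
print(would_be_covered(2e26, 80_000_000))   # True
```

Revised bills may move these thresholds, so any scoping logic like this should treat them as configuration, not constants.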
California Consumer Privacy Act (CCPA) and AI
The CCPA and its amendment (CPRA) have AI-specific implications through the California Privacy Protection Agency's rulemaking on automated decision-making technology (ADMT):
| CCPA/CPRA Requirement | AI Application | Red Team Testing |
|---|---|---|
| Right to know | Consumers can ask what data AI systems collect about them | Verify data disclosure mechanisms work correctly |
| Right to delete | Consumers can request deletion of data used by AI | Test whether deletion actually removes data from AI systems |
| Right to opt-out | Consumers can opt out of automated decision-making | Verify opt-out mechanisms prevent AI processing |
| Non-discrimination | Cannot discriminate against consumers who exercise rights | Test whether opting out of AI results in degraded service |
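A right-to-delete check like the one in the table can be prototyped against a test harness before probing a real system. The in-memory store and function names below are hypothetical stand-ins for a real engagement's tooling, not a CCPA-mandated interface.

```python
# Minimal sketch of a CCPA "right to delete" verification probe: after a
# deletion request, the AI pipeline should no longer return the
# consumer's records. FakeAIDataStore is a toy stand-in.

class FakeAIDataStore:
    """Toy stand-in for the per-consumer data an AI system retains."""
    def __init__(self):
        self.records: dict[str, list[str]] = {}

    def ingest(self, consumer_id: str, datum: str) -> None:
        self.records.setdefault(consumer_id, []).append(datum)

    def delete_consumer(self, consumer_id: str) -> None:
        # A real test must also cover caches, training sets, and backups.
        self.records.pop(consumer_id, None)

    def lookup(self, consumer_id: str) -> list[str]:
        return self.records.get(consumer_id, [])

def verify_deletion(store: FakeAIDataStore, consumer_id: str) -> bool:
    """Red-team check: the deletion request actually purges the data."""
    store.delete_consumer(consumer_id)
    return store.lookup(consumer_id) == []

store = FakeAIDataStore()
store.ingest("consumer-123", "browsing history")
assert verify_deletion(store, "consumer-123")  # data is gone
```

In practice the hard part is the comment flagged above: deletion from the primary store says nothing about copies embedded in training sets or caches, which is exactly where red-team testing adds value.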
Illinois Biometric Information Privacy Act (BIPA) for AI
Illinois BIPA is the most consequential biometric privacy law in the US and has significant implications for AI systems that process biometric data.
BIPA Requirements Affecting AI
| Requirement | Application to AI | Testing Approach |
|---|---|---|
| Written consent | Must obtain consent before collecting biometric data for AI processing | Verify consent collection mechanisms |
| Purpose limitation | Must disclose the specific purpose of biometric data collection | Test whether AI uses biometric data beyond stated purposes |
| Retention schedule | Must establish and follow a retention and destruction schedule | Verify biometric data deletion from AI training sets |
| No sale | Cannot sell, lease, trade, or profit from biometric data | Test whether biometric data is shared with third-party AI systems |
| Data protection | Must store and protect biometric data using reasonable security measures | Security assessment of biometric data storage and processing |
AI Systems Affected by BIPA
| AI System Type | Biometric Data | BIPA Implication |
|---|---|---|
| Facial recognition | Faceprint geometry | Full BIPA compliance required for each individual |
| Voice assistants | Voiceprint | Consent required before voice processing |
| Emotion detection | Facial expressions, voice patterns | Biometric identifier capture triggers BIPA |
| Gait analysis | Movement patterns | May constitute biometric identifier |
| Iris/retina scanning | Eye patterns | Full BIPA compliance required |
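One way to test the consent and retention requirements above is to probe whether the system enforces a pre-processing gate. The sketch below shows the shape of such a gate; the field names (`consent_on_file`, `retention_deadline`) are assumptions for illustration, not BIPA statutory terms.

```python
# Illustrative pre-processing gate for BIPA: refuse biometric AI
# processing unless written consent, a stated purpose, and an unexpired
# retention schedule are on record. Field names are assumptions.

from datetime import date

def may_process_biometric(record: dict) -> bool:
    """Gate a red team can probe: consent + purpose + live retention window."""
    return (
        record.get("consent_on_file", False)
        and record.get("stated_purpose") is not None
        and record.get("retention_deadline", date.min) >= date.today()
    )

subject = {
    "consent_on_file": True,
    "stated_purpose": "timekeeping via fingerprint scan",
    "retention_deadline": date(2100, 1, 1),
}
assert may_process_biometric(subject)
assert not may_process_biometric({"consent_on_file": False})
```

A red-team test would then try to route biometric inputs around this gate, for example through batch imports or third-party integrations that skip the check.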
Other Notable State AI Laws
State-by-State Overview
| State | Law/Focus | Key Requirement | Effective Date |
|---|---|---|---|
| Connecticut | SB 1103 (AI accountability) | Impact assessments, transparency for high-risk AI | 2026 |
| Texas | AI advisory council, deepfake law | Deepfake restrictions, government AI transparency | Various |
| New York | NYC Local Law 144 (automated employment) | Bias audits for automated employment decision tools | 2023 |
| Maryland | HB 1202 (facial recognition) | Restrictions on employer use of facial recognition | 2020 |
| Virginia | VCDPA (AI provisions) | Consumer rights regarding AI profiling | 2023 |
| Utah | AI Policy Act | AI disclosure requirements, regulatory sandbox | 2024 |
| Tennessee | ELVIS Act | Protects voice likeness from AI replication | 2024 |
| Washington | Various bills | AI in hiring, facial recognition moratorium | Various |
NYC Local Law 144: A Case Study
New York City's Local Law 144 specifically regulates automated employment decision tools (AEDTs) and provides a useful model for understanding municipal AI regulation:
| Requirement | Detail | Red Team Relevance |
|---|---|---|
| Bias audit | Annual third-party bias audit required | Must test for disparate impact across protected classes |
| Audit methodology | Must calculate impact ratios and scoring rate comparisons | Quantitative bias testing with statistical analysis |
| Public posting | Audit results must be posted publicly | Findings will be publicly scrutinized |
| Candidate notice | Must notify candidates about AEDT use 10 business days in advance | Test whether notification mechanisms function |
| Alternative process | Must offer alternative selection process | Verify alternative process actually exists and works |
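The impact-ratio methodology in the table can be sketched directly: under the Local Law 144 rules, each category's selection rate is divided by the rate of the most-selected category. The data and category labels below are illustrative, not from a real audit.

```python
# Minimal sketch of the impact-ratio calculation NYC Local Law 144 bias
# audits require for binary selections: selection rate per category
# divided by the selection rate of the most-selected category.

def impact_ratios(selections: dict[str, tuple[int, int]]) -> dict[str, float]:
    """selections maps category -> (selected, total_assessed)."""
    rates = {cat: sel / total for cat, (sel, total) in selections.items()}
    top = max(rates.values())  # most-selected category's rate
    return {cat: rate / top for cat, rate in rates.items()}

# Hypothetical audit data: (candidates selected, candidates assessed)
data = {
    "Category A": (120, 400),   # 30% selection rate
    "Category B": (90, 450),    # 20% selection rate
    "Category C": (30, 200),    # 15% selection rate
}

for cat, ratio in impact_ratios(data).items():
    # Ratios well below 1.0 (commonly judged against the 0.8
    # "four-fifths" benchmark) flag potential disparate impact.
    print(f"{cat}: impact ratio {ratio:.2f}")
```

Scored (non-binary) AEDT outputs use an analogous scoring-rate comparison, so the same structure applies with scoring rates substituted for selection rates.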
Patchwork Compliance Challenges
The Multi-State Problem
Organizations deploying AI nationally face a complex compliance matrix:
| Challenge | Description | Red Team Impact |
|---|---|---|
| Conflicting definitions | States define "AI system" and "high-risk" differently | Must test against the most restrictive applicable definition |
| Varying consent requirements | Consent thresholds differ across states | Consent mechanisms must satisfy the strictest state |
| Different bias testing standards | Some require quantitative audits, others qualitative assessments | Testing methodology must cover all applicable standards |
| Jurisdictional triggers | Unclear which state's law applies to interstate AI services | May need to test under multiple regulatory frameworks |
| Enforcement variability | Some states have AG enforcement only, others allow private action | Risk assessment varies by potential enforcement mechanism |
Compliance Strategy for Red Teams
When scoping multi-state engagements, use a "highest common denominator" approach:
Identify applicable jurisdictions
Determine which states' users interact with the AI system. Consider both the organization's location and users' locations as jurisdictional triggers.
Map requirements
Create a matrix of requirements across all applicable state laws. Identify the most restrictive requirement for each category (consent, bias testing, transparency, etc.).
Test to the highest standard
Design tests that satisfy the most stringent applicable requirements. This ensures compliance with all applicable laws through a single testing engagement.
Report per jurisdiction
Structure findings to show compliance status against each applicable state law. This allows clients to prioritize remediation based on jurisdictional risk.
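The "map requirements" step above amounts to a per-category maximum over the applicable states. This sketch shows the shape of that computation; the state names are real, but the numeric strictness rankings are illustrative placeholders, not legal conclusions.

```python
# Sketch of the "highest common denominator" mapping step: for each
# requirement category, find which applicable state sets the most
# restrictive bar. Rankings below are illustrative, not legal analysis.

# Higher rank = more restrictive requirement in that category.
STATE_REQUIREMENTS = {
    "Colorado":   {"bias_audit": 2, "consent": 1, "transparency": 3},
    "Illinois":   {"bias_audit": 1, "consent": 3, "transparency": 1},
    "California": {"bias_audit": 2, "consent": 2, "transparency": 2},
}

def strictest_per_category(states: list[str]) -> dict[str, str]:
    """Return, per category, which applicable state sets the bar."""
    result = {}
    categories = {c for s in states for c in STATE_REQUIREMENTS[s]}
    for cat in sorted(categories):
        # Ties resolve to the first state listed in the engagement scope.
        result[cat] = max(states, key=lambda s: STATE_REQUIREMENTS[s][cat])
    return result

# Engagement scoped to users in all three states:
print(strictest_per_category(["Colorado", "Illinois", "California"]))
```

In a real engagement the rankings come from the requirements matrix built with counsel, and designing tests to the state named for each category satisfies the weaker states' requirements as a side effect.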
Reporting for Multi-State Compliance
| Report Section | Content |
|---|---|
| Jurisdictional analysis | Which state laws apply and why |
| Requirements matrix | Cross-reference of requirements across applicable states |
| Testing results by category | Bias testing, transparency verification, consent mechanism testing |
| Compliance status per state | Pass/fail assessment for each applicable state law |
| Gap analysis | Requirements not currently met, organized by state and priority |
| Remediation roadmap | Prioritized by legal exposure (private right of action states first) |
Anticipating Future Legislation
Legislative Trends
Red teamers should monitor these emerging legislative patterns to stay ahead of compliance requirements:
| Trend | Direction | Implications for Red Teams |
|---|---|---|
| Algorithmic impact assessments | Becoming required in more states | Standard engagement deliverable |
| Bias audit mandates | Expanding beyond employment to all high-risk domains | Quantitative bias testing as a core competency |
| AI transparency requirements | Universal disclosure becoming the norm | Testing disclosure mechanisms becomes routine |
| Right to human review | Growing right to opt out of automated decisions | Test human override and escalation paths |
| AI incident reporting | Mandatory reporting requirements emerging | Incident simulation and response testing |
| Foundation model regulation | Renewed interest post-SB 1047 veto | Pre-deployment safety testing for model developers |
Federal Preemption Possibility
The prospect of federal AI legislation that preempts state laws remains a possibility. Red teamers should prepare for both scenarios:
- If federal law preempts: testing requirements would standardize, potentially simplifying engagements but possibly reducing scope
- If no preemption: the patchwork will continue to grow, increasing the complexity and value of comprehensive multi-state compliance testing
Regardless of the federal outcome, state laws create current obligations that organizations must address today. Red team engagements that map findings to specific state requirements provide immediate compliance value while building institutional knowledge that will remain relevant under any future federal framework.