China AI Regulation
China's AI regulatory framework including algorithmic recommendation rules, deep synthesis provisions, generative AI measures, and compliance requirements for global companies operating in or serving the Chinese market.
China has developed one of the most comprehensive and rapidly evolving AI regulatory frameworks in the world. Unlike the EU's single-instrument approach (the EU AI Act) or the US's fragmented state-level approach, China has enacted multiple targeted regulations addressing specific AI applications. For red teamers working with global organizations, understanding Chinese AI regulation is essential because non-compliance can result in service bans, fines, and criminal liability for responsible individuals.
Regulatory Framework Overview
China's AI regulations form a layered system in which each instrument targets specific AI capabilities:
| Regulation | Effective Date | Scope | Enforcing Authority |
|---|---|---|---|
| Algorithmic Recommendation Provisions | March 2022 | Recommendation algorithms that influence user behavior | CAC |
| Deep Synthesis Provisions | January 2023 | AI-generated or modified content (deepfakes, synthetic media) | CAC |
| Generative AI Measures | August 2023 | Generative AI services offered to the public in China | CAC, with MIIT, MPS, and other agencies |
| AI Safety Governance Framework | September 2024 (v1.0) | Comprehensive AI safety standards (non-binding, aspirational) | TC260 (National Information Security Standardization Technical Committee) |
| Personal Information Protection Law (PIPL) | November 2021 | Personal data processing, including by AI systems | CAC |
Hierarchical Relationship
┌──────────────────────────────────────────┐
│ Cybersecurity Law (2017)                 │
│ Data Security Law (2021)                 │
│ PIPL (2021)                              │
│ (Foundational data & security laws)      │
├──────────────────────────────────────────┤
│ Algorithmic Recommendation Provisions    │
│ Deep Synthesis Provisions                │
│ Generative AI Measures                   │
│ (AI-specific implementing regulations)   │
├──────────────────────────────────────────┤
│ AI Safety Governance Framework           │
│ National Standards (TC260)               │
│ (Technical standards & guidance)         │
└──────────────────────────────────────────┘
Algorithmic Recommendation Provisions
The Internet Information Service Algorithmic Recommendation Management Provisions target any service that uses algorithms to recommend content, products, or information to users.
Key Requirements
| Requirement | Description | Red Team Testing |
|---|---|---|
| Algorithm registration | Must register algorithms with the CAC through the Algorithm Filing System | Verify compliance documentation |
| User control | Must provide users with options to turn off algorithmic recommendations | Test whether opt-out mechanisms actually disable recommendations |
| Transparency | Must disclose basic principles, purpose, and main operating mechanisms of algorithms | Verify disclosures accurately describe actual algorithm behavior |
| No addiction | Must not use algorithms to induce user addiction or excessive consumption | Assess whether engagement optimization crosses regulatory boundaries |
| No price discrimination | Must not use algorithms to implement unreasonable differential pricing | Test whether different user profiles receive different prices for the same service |
| Content management | Must not use algorithms to promote information that violates laws or social morality | Test content recommendation boundaries and filtering effectiveness |
Red Team Assessments for Algorithmic Recommendations
| Test Category | Methodology | Regulatory Requirement |
|---|---|---|
| User profiling analysis | Analyze how the algorithm categorizes users and whether protected characteristics influence recommendations | Anti-discrimination, PIPL data minimization |
| Opt-out effectiveness | Verify that disabling recommendations truly removes algorithmic influence | User control requirements |
| Filter bubble assessment | Test whether the algorithm creates information echo chambers | Content diversity requirements |
| Price discrimination detection | Compare pricing across different user profiles | Anti-price discrimination |
| Minor protection | Test whether age-related protections are enforced in recommendations | Minor protection requirements |
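The price-discrimination check in the table above can be sketched as a simple probe: quote the same item while presenting different user profiles and flag material price spreads. This is a minimal sketch; the profile attributes, the `quote_price` stub, and the tolerance threshold are illustrative assumptions, not a prescribed methodology.

```python
"""Price-discrimination probe sketch. The endpoint call, profile
fields, and tolerance are hypothetical placeholders."""
import statistics

# Hypothetical user profiles differing in attributes a provider
# might (impermissibly) use for differential pricing.
PROFILES = [
    {"id": "new_user", "device": "budget_android", "history_spend": 0},
    {"id": "loyal_user", "device": "flagship_ios", "history_spend": 5000},
    {"id": "returning_user", "device": "desktop", "history_spend": 800},
]


def quote_price(profile: dict) -> float:
    """Placeholder for querying the service's pricing endpoint while
    presenting the given profile (cookies, user agent, account)."""
    raise NotImplementedError


def detect_price_discrimination(quotes: dict[str, float],
                                tolerance: float = 0.01) -> bool:
    """Flag if the same item is quoted at materially different prices
    across profiles (spread beyond a rounding tolerance)."""
    prices = list(quotes.values())
    spread = max(prices) - min(prices)
    return spread > tolerance * statistics.mean(prices)
```

In practice the probe must control for legitimate price variation (currency, regional taxes, time-limited promotions) before attributing a spread to profiling.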
Deep Synthesis Provisions
The Deep Synthesis Provisions regulate AI technologies that generate or modify text, images, audio, video, and virtual scenes, including deepfakes and other synthetic media.
Core Requirements
| Requirement | Scope | Implementation |
|---|---|---|
| Labeling | All deep synthesis content must be labeled with conspicuous markers | Technical watermarking and visible labels required |
| Real identity | Users of deep synthesis services must register with real identity information | Identity verification before service access |
| Content review | Providers must review generated content before public distribution | Automated and manual content review systems |
| Prohibited content | Must not generate content that threatens national security, social stability, or public interest | Content filtering and safety mechanisms |
| Data handling | Must establish data management protocols for training data and generated content | Data governance and retention policies |
| Incident response | Must have mechanisms to detect and respond to illegal synthetic content | Monitoring and takedown procedures |
Red Team Testing for Deep Synthesis Compliance
| Test Area | Description | Expected Controls |
|---|---|---|
| Watermark robustness | Attempt to remove or modify deep synthesis watermarks through image/video processing | Robust watermarking resistant to common transformation attacks |
| Label bypass | Test whether synthetic content can be generated without required labels | Mandatory labeling applied at generation time |
| Identity verification bypass | Attempt to use deep synthesis services without proper identity verification | Strong identity verification before access |
| Prohibited content generation | Test whether the system can be manipulated into generating prohibited content categories | Content filtering for politically sensitive, violent, or destabilizing content |
| Data provenance | Verify training data sources and whether consent was obtained | Documented data provenance with consent records |
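The watermark-robustness test from the table above can be framed as a small harness: apply a battery of transformations to a watermarked sample and record whether the watermark survives each one. The `detect` callable and the transform set below are stand-ins for the provider's actual watermarking stack, which this sketch does not assume anything about.

```python
"""Watermark robustness harness sketch. The detector and transforms
are hypothetical stand-ins for a real watermarking implementation."""
from typing import Callable

Image = bytes  # stand-in type for encoded image/video data


def survives(detect: Callable[[Image], bool],
             transforms: dict[str, Callable[[Image], Image]],
             sample: Image) -> dict[str, bool]:
    """Apply each named transformation to a watermarked sample and
    record whether the watermark is still detected afterwards."""
    return {name: detect(transform(sample))
            for name, transform in transforms.items()}
```

A real engagement would populate `transforms` with common attacks: recompression, resizing, cropping, format conversion, and screenshot round-trips. Any transform under which detection fails is a finding against the labeling requirement.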
Generative AI Measures
The Interim Administrative Measures for Generative AI Services are the most directly relevant regulation for organizations deploying LLMs and other generative AI systems.
Key Provisions
| Provision | Requirement | Red Team Relevance |
|---|---|---|
| Core socialist values | GenAI content must adhere to core socialist values and not subvert state power | Test content filtering for politically sensitive topics |
| Training data legality | Training data must be lawfully obtained, with proper IP and privacy protections | Audit training data provenance and consent |
| Content accuracy | Must take measures to improve accuracy of generated content | Hallucination testing, factual accuracy assessment |
| Security assessment | Must conduct security assessments before public launch | Pre-deployment security testing (red teaming) |
| Algorithm filing | Must file algorithms with the CAC | Documentation and registration verification |
| User complaint mechanisms | Must provide channels for user complaints about generated content | Test complaint mechanism functionality and response |
| Incident reporting | Must report security incidents to authorities within prescribed timeframes | Incident response testing |
Content Restrictions Specific to China
The Generative AI Measures impose content restrictions that are unique to the Chinese regulatory context and may not align with content policies in other jurisdictions:
| Content Category | Requirement | Testing Approach |
|---|---|---|
| National security | Must not generate content threatening national security or sovereignty | Test with politically sensitive prompts |
| Social stability | Must not generate content inciting subversion or separatism | Boundary testing for political content |
| Economic order | Must not generate content disrupting economic or social order | Test for market manipulation or financial misinformation |
| Individual rights | Must not infringe on others' reputation, privacy, or IP rights | Privacy and defamation scenario testing |
| Historical accuracy | Must not distort historical events or deny historical facts (as defined by the state) | Test responses on sensitive historical topics |
| Discrimination | Must not generate discriminatory content | Bias and fairness testing |
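A minimal harness for the boundary tests above might look like the following. The prompt sets, refusal markers, and scoring heuristic are illustrative assumptions only; real evaluations of these content categories require specialist-built prompt sets and human review, since keyword matching cannot reliably judge compliance.

```python
"""Content-boundary test harness sketch. Prompt sets, refusal
markers, and the scoring heuristic are illustrative assumptions."""

# Hypothetical prompt sets keyed by the regulatory category under
# test; populated by specialists with jurisdiction-specific cases.
PROMPT_SETS = {
    "national_security": [],
    "historical_accuracy": [],
    "discrimination": [],
}

# Crude refusal phrases; a real harness would use a trained classifier.
REFUSAL_MARKERS = ("cannot", "unable to", "not able to provide")


def is_refusal(response: str) -> bool:
    """Heuristic: treat responses containing refusal phrasing as
    compliant refusals. Needs human review in practice."""
    lowered = response.lower()
    return any(marker in lowered for marker in REFUSAL_MARKERS)


def score_category(responses: list[str]) -> float:
    """Fraction of responses in a category that were refused."""
    if not responses:
        return 0.0
    return sum(is_refusal(r) for r in responses) / len(responses)
```

Per-category refusal rates give a coarse compliance signal; low rates in a restricted category indicate filter gaps worth manual triage.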
Security Assessment Requirements
Before deploying generative AI services to the public in China, providers must complete a security assessment:
Self-assessment
Conduct an internal security and content safety assessment covering all regulatory requirements. This is where red teaming fits most directly.
Algorithm filing
File the algorithm with the CAC's Algorithm Filing System, providing technical details about how the system works.
Security assessment report
Submit a security assessment report to the relevant provincial-level CAC office covering content security, data security, and personal information protection.
Ongoing monitoring
Implement continuous monitoring and take corrective action within required timeframes when issues are identified.
Compliance for Global Companies
Jurisdictional Triggers
| Scenario | Chinese Law Applicable? | Key Obligations |
|---|---|---|
| Chinese company operating in China | Yes -- full compliance | All provisions apply |
| Foreign company with Chinese entity | Yes -- through Chinese entity | Entity must comply with all provisions |
| Foreign company serving Chinese users (no entity) | Likely yes -- extraterritorial application | Content restrictions, data localization, user protections |
| Foreign company with no Chinese users | No | Not applicable |
| Foreign company whose service is used in China via VPN | Gray area | Technical enforcement uncertain, legal risk remains |
Practical Compliance Challenges for Global Companies
| Challenge | Description | Mitigation |
|---|---|---|
| Content policy divergence | Chinese content requirements differ from Western content policies | Implement geographically segmented content policies |
| Data localization | PIPL requires personal data of Chinese users to be stored in China | Separate data infrastructure for Chinese users |
| Real-name registration | Deep synthesis and GenAI require real identity verification | Implement identity verification for Chinese-market services |
| Algorithm transparency | Must disclose algorithm details to CAC | Prepare technical documentation for Chinese regulators |
| Dual compliance | Must comply with both Chinese law and home jurisdiction requirements | Legal review for conflicting requirements |
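Geographically segmented content policies, the first mitigation in the table above, are often implemented as a per-jurisdiction policy lookup applied at request time. The region codes and policy fields below are illustrative assumptions, not a recommended schema:

```python
"""Geo-segmented policy routing sketch. Region codes, policy fields,
and values are illustrative assumptions."""

# Hypothetical per-jurisdiction content policies.
POLICIES = {
    "CN": {"require_watermark": True, "real_name": True,
           "extra_filters": ["political_sensitivity"]},
    "EU": {"require_watermark": True, "real_name": False,
           "extra_filters": ["fundamental_rights"]},
    "DEFAULT": {"require_watermark": False, "real_name": False,
                "extra_filters": []},
}


def policy_for(region: str) -> dict:
    """Resolve the content policy for a user's region, falling back
    to the default policy for unlisted jurisdictions."""
    return POLICIES.get(region, POLICIES["DEFAULT"])
```

A red team should probe the routing itself: if region detection can be spoofed (VPN, header manipulation), the Chinese-market policy may never be applied to users it legally covers.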
Comparison with Other Regulatory Frameworks
| Dimension | China | EU AI Act | US (State Laws) |
|---|---|---|---|
| Approach | Technology-specific regulations | Risk-based comprehensive regulation | Fragmented, application-specific |
| Content restrictions | Extensive political and social content restrictions | Focus on safety and fundamental rights | Limited (mainly anti-discrimination) |
| Enforcement | Administrative (CAC), potentially criminal | Administrative fines (up to 7% global turnover) | AG enforcement and private action |
| Speed of regulation | Rapid (months from proposal to enforcement) | Slow (years-long legislative process) | Variable by state |
| Extraterritorial reach | Applies to services accessible in China | Applies to AI systems deployed in EU market | Applies based on user location |
| Algorithm registration | Required for most AI systems | Not required (conformity assessment instead) | Not required |
| Pre-market approval | Security assessment required for GenAI services | Conformity assessment for high-risk systems | Generally not required |
Red Team Engagement Considerations
Scoping for Chinese Regulatory Compliance
When conducting red team engagements for organizations subject to Chinese AI regulations:
- Determine which specific regulations apply based on the AI system type and deployment context
- Understand that content safety testing for the Chinese market requires specialized knowledge of politically sensitive topics
- Coordinate with local legal counsel to ensure testing activities themselves comply with Chinese law
- Structure findings to directly reference applicable Chinese regulatory provisions
- Be aware that testing results may need to be shared with Chinese regulatory authorities as part of the security assessment process
Testing Activities by Regulation
| Regulation | Primary Testing Activities |
|---|---|
| Algorithmic Recommendation Provisions | User control verification, price discrimination detection, filter bubble assessment, minor protection testing |
| Deep Synthesis Provisions | Watermark robustness, label bypass testing, identity verification, prohibited content generation |
| Generative AI Measures | Content safety boundary testing, hallucination assessment, security assessment support, complaint mechanism verification |
| PIPL | Data extraction testing, consent verification, data localization verification, deletion effectiveness |
Red teamers working with Chinese regulatory compliance should develop specialized expertise in the political and cultural context that shapes Chinese content requirements. Technical skill alone is insufficient -- effective testing requires understanding the regulatory intent and enforcement patterns unique to China's AI governance approach.