China AI Regulation
China's AI regulatory framework including algorithmic recommendation rules, deep synthesis provisions, generative AI measures, and compliance requirements for global companies operating in or serving the Chinese market.
China has developed one of the most comprehensive and rapidly evolving AI regulatory frameworks in the world. Unlike the EU's single-instrument approach (the EU AI Act) or the US's fragmented state-level approach, China has enacted multiple targeted regulations addressing specific AI applications. For red teamers working with global organizations, understanding Chinese AI regulation is essential because non-compliance can result in service bans, fines, and criminal liability for responsible individuals.
Regulatory Framework Overview
China's AI regulations form a layered system, with each instrument targeting specific AI capabilities:
| Regulation | Effective Date | Scope | Enforcing Authority |
|---|---|---|---|
| Algorithmic Recommendation Provisions | March 2022 | Recommendation algorithms that influence user behavior | CAC |
| Deep Synthesis Provisions | January 2023 | AI-generated or modified content (deepfakes, synthetic media) | CAC |
| Generative AI Measures | August 2023 | Generative AI services offered to the public in China | CAC, with MIIT, MPS, and other agencies |
| AI Safety Governance Framework | September 2024 (v1.0) | Comprehensive AI safety standards (non-binding, aspirational) | TC260 (National Information Security Standardization Technical Committee) |
| Personal Information Protection Law (PIPL) | November 2021 | Personal data processing, including by AI systems | CAC |
Hierarchical Relationship
┌──────────────────────────────────────────┐
│ Cybersecurity Law (2017) │
│ Data Security Law (2021) │
│ PIPL (2021) │
│ (Foundation data & security laws) │
├──────────────────────────────────────────┤
│ Algorithmic Recommendation Provisions │
│ Deep Synthesis Provisions │
│ Generative AI Measures │
│ (AI-specific implementing regulations) │
├──────────────────────────────────────────┤
│ AI Safety Governance Framework │
│ National Standards (TC260) │
│ (Technical standards & guidance) │
└──────────────────────────────────────────┘
Algorithmic Recommendation Provisions
The Internet Information Service Algorithmic Recommendation Management Provisions target any service that uses algorithms to recommend content, products, or information to users.
Key Requirements
| Requirement | Description | Red Team Testing |
|---|---|---|
| Algorithm registration | Must register algorithms with the CAC through the Algorithm Filing System | Verify compliance documentation |
| User control | Must provide users with options to turn off algorithmic recommendations | Test whether opt-out mechanisms actually disable recommendations |
| Transparency | Must disclose basic principles, purpose, and main operating mechanisms of algorithms | Verify disclosures accurately describe actual algorithm behavior |
| No addiction | Must not use algorithms to induce user addiction or excessive consumption | Assess whether engagement optimization crosses regulatory boundaries |
| No price discrimination | Must not use algorithms to implement unreasonable differential pricing | Test whether different user profiles receive different prices for the same service |
| Content management | Must not use algorithms to promote information that violates laws or social morality | Test content recommendation boundaries and filtering effectiveness |
Red Team Assessment for Algorithmic Recommendations
| Test Category | Methodology | Regulatory Requirement |
|---|---|---|
| User profiling analysis | Analyze how the algorithm categorizes users and whether protected characteristics influence recommendations | Anti-discrimination, PIPL data minimization |
| Opt-out effectiveness | Verify that disabling recommendations truly removes algorithmic influence | User control requirements |
| Filter bubble assessment | Test whether the algorithm creates information echo chambers | Content diversity requirements |
| Price discrimination detection | Compare pricing across different user profiles | Anti-price discrimination |
| Minor protection | Test whether age-related protections are enforced in recommendations | Minor protection requirements |
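The price discrimination test in the table above can be sketched as a differential probe: issue identical requests from distinct synthetic user profiles and flag divergent pricing. This is a minimal illustration, not an established tool; `fetch_price` is a mock stand-in for the service under test, and the profile attributes and threshold are assumptions to be tuned per engagement.

```python
# Hypothetical price discrimination probe. In a real engagement, fetch_price
# would issue authenticated requests from accounts matching each profile;
# here it is mocked so the sketch is self-contained.
def fetch_price(profile: dict) -> float:
    base = 100.0
    # Mock behavior standing in for the live service's response.
    return base * (1.15 if profile.get("device") == "ios" else 1.0)

def detect_price_discrimination(profiles, threshold_pct=5.0):
    """Flag if prices for identical requests diverge beyond threshold_pct."""
    prices = {p["label"]: fetch_price(p) for p in profiles}
    lo, hi = min(prices.values()), max(prices.values())
    spread_pct = (hi - lo) / lo * 100
    return spread_pct > threshold_pct, prices, spread_pct

profiles = [
    {"label": "android_new_user", "device": "android"},
    {"label": "ios_frequent_buyer", "device": "ios"},
]
flagged, prices, spread = detect_price_discrimination(profiles)
print(flagged, prices, round(spread, 1))  # mock profiles diverge by 15%
```

In practice the probe must control for legitimate price variation (regional tax, promotions, timing) before attributing a spread to profile-based differential pricing.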
Deep Synthesis Provisions
The Deep Synthesis Provisions regulate AI technologies that generate or modify text, images, audio, video, and virtual scenes, including deepfakes and other synthetic media.
Core Requirements
| Requirement | Scope | Implementation |
|---|---|---|
| Labeling | All deep synthesis content must be labeled with conspicuous markers | Technical watermarking and visible labels required |
| Real identity | Users of deep synthesis services must register with real identity information | Identity verification before service access |
| Content review | Providers must review generated content before public distribution | Automated and manual content review systems |
| Prohibited content | Must not generate content that threatens national security, social stability, or public interest | Content filtering and safety mechanisms |
| Data handling | Must establish data management protocols for training data and generated content | Data governance and retention policies |
| Incident response | Must have mechanisms to detect and respond to illegal synthetic content | Monitoring and takedown procedures |
Red Team Testing for Deep Synthesis Compliance
| Test Area | Description | Expected Controls |
|---|---|---|
| Watermark robustness | Attempt to remove or modify deep synthesis watermarks through image/video processing | Robust watermarking resistant to common transformation attacks |
| Label bypass | Test whether synthetic content can be generated without required labels | Mandatory labeling applied at generation time |
| Identity verification bypass | Attempt to use deep synthesis services without proper identity verification | Strong identity verification before access |
| Prohibited content generation | Test whether the system can be manipulated into generating prohibited content categories | Content filtering for politically sensitive, violent, or destabilizing content |
| Data provenance | Verify training data sources and whether consent was obtained | Documented data provenance with consent records |
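The watermark robustness row above amounts to: embed, transform, re-detect. The sketch below uses a deliberately fragile toy scheme (least-significant-bit embedding on raw pixel values) purely to show the test structure; a real assessment would target the provider's production watermarking and a fuller battery of transforms (recompression, resizing, cropping, screenshotting).

```python
# Toy watermark-robustness probe: embed a bit pattern, apply a lossy
# transformation, and check whether the pattern survives. The LSB scheme
# here is a stand-in and is expected to fail -- illustrating exactly the
# weakness the regulation's "robust watermarking" expectation targets.

def embed_lsb(pixels, bits):
    out = list(pixels)
    for i, b in enumerate(bits):
        out[i] = (out[i] & ~1) | b  # write bit into the lowest bit plane
    return out

def detect_lsb(pixels, n):
    return [p & 1 for p in pixels[:n]]

def requantize(pixels, step=4):
    # Simulates lossy re-encoding, a common transformation attack.
    return [(p // step) * step for p in pixels]

mark = [1, 0, 1, 1, 0, 1, 0, 1]
image = embed_lsb([128] * 64, mark)

intact = detect_lsb(image, len(mark)) == mark            # True: survives untouched
survived = detect_lsb(requantize(image), len(mark)) == mark
print("watermark survives requantization:", survived)    # False for LSB
```

A finding here would be reported as a labeling-control weakness: content remains synthetic but loses its mandated marker after routine processing.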
Generative AI Measures
The Interim Measures for the Management of Generative Artificial Intelligence Services (commonly, the Generative AI Measures) are the most directly relevant regulation for organizations deploying LLMs and other generative AI systems.
Key Provisions
| Provision | Requirement | Red Team Relevance |
|---|---|---|
| Core socialist values | GenAI content must adhere to core socialist values and not subvert state power | Test content filtering for politically sensitive topics |
| Training data legality | Training data must be lawfully obtained, with proper IP and privacy protections | Audit training data provenance and consent |
| Content accuracy | Must take measures to improve accuracy of generated content | Hallucination testing, factual accuracy assessment |
| Safety assessment | Must conduct security assessments before public launch | Pre-deployment security testing (red teaming) |
| Algorithm filing | Must file algorithms with the CAC | Documentation and registration verification |
| User complaint mechanisms | Must provide channels for user complaints about generated content | Test complaint mechanism functionality and response |
| Incident reporting | Must report security incidents to authorities within prescribed timeframes | Incident response testing |
Content Restrictions Specific to China
The Generative AI Measures impose content restrictions that are unique to the Chinese regulatory context and may not align with content policies in other jurisdictions:
| Content Category | Requirement | Testing Approach |
|---|---|---|
| National security | Must not generate content threatening national security or sovereignty | Test with politically sensitive prompts |
| Social stability | Must not generate content inciting subversion or separatism | Boundary testing for political content |
| Economic order | Must not generate content disrupting economic or social order | Test for market manipulation or financial misinformation |
| Individual rights | Must not infringe on others' reputation, privacy, or IP rights | Privacy and defamation scenario testing |
| Historical accuracy | Must not distort historical events or deny historical facts (as defined by the state) | Test responses on sensitive historical topics |
| Discrimination | Must not generate discriminatory content | Bias and fairness testing |
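Boundary testing across these content categories is usually run as a batch harness: send category-tagged probe prompts, classify each response as refusal or compliance, and aggregate per category. The sketch below shows only the harness shape; `query_model` is a stub for the service under assessment, the refusal markers are placeholder heuristics, and real probe sets are curated by specialists rather than shown here.

```python
# Minimal content-boundary testing harness (illustrative only).
REFUSAL_MARKERS = ("cannot assist", "not able to help")

def query_model(prompt: str) -> str:
    # Stub response; replace with the real API call in an engagement.
    return "I cannot assist with that request."

def assess_category(category: str, prompts: list) -> dict:
    results = [query_model(p) for p in prompts]
    refused = sum(any(m in r.lower() for m in REFUSAL_MARKERS) for r in results)
    return {"category": category, "total": len(prompts), "refused": refused}

report = assess_category("national_security", ["<probe 1>", "<probe 2>"])
print(report)
```

Keyword-based refusal detection is brittle; mature harnesses replace it with a classifier or human review, since models may comply partially while emitting refusal-like phrasing.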
Safety Assessment Requirements
Before deploying generative AI services to the public in China, providers must complete a security assessment:
1. Self-assessment: conduct an internal security and content safety assessment covering all regulatory requirements. This is where red teaming fits most directly.
2. Algorithm filing: file the algorithm with the CAC's Algorithm Filing System, providing technical details about how the system works.
3. Safety assessment report: submit a safety assessment report to the relevant provincial-level CAC office covering content safety, data security, and personal information protection.
4. Ongoing monitoring: implement continuous monitoring and take corrective action within required timeframes when issues are identified.
Compliance for Global Companies
Jurisdictional Triggers
| Scenario | Chinese Law Applicable? | Key Obligations |
|---|---|---|
| Chinese company operating in China | Yes -- full compliance | All provisions apply |
| Foreign company with Chinese entity | Yes -- through Chinese entity | Entity must comply with all provisions |
| Foreign company serving Chinese users (no entity) | Likely yes -- extraterritorial application | Content restrictions, data localization, user protections |
| Foreign company with no Chinese users | No | Not applicable |
| Foreign company whose service is used in China via VPN | Gray area | Technical enforcement uncertain, legal risk remains |
Practical Compliance Challenges for Global Companies
| Challenge | Description | Mitigation |
|---|---|---|
| Content policy divergence | Chinese content requirements differ from Western content policies | Implement geographically segmented content policies |
| Data localization | PIPL requires personal data of Chinese users to be stored in China | Separate data infrastructure for Chinese users |
| Real-name registration | Deep synthesis and GenAI require real identity verification | Implement identity verification for Chinese-market services |
| Algorithm transparency | Must disclose algorithm details to CAC | Prepare technical documentation for Chinese regulators |
| Dual compliance | Must comply with both Chinese law and home jurisdiction requirements | Legal review for conflicting requirements |
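One common way to implement the segmentation mitigations above is a region-keyed policy lookup that routes each user to the appropriate content policy, data region, and identity-verification requirement. The region codes and policy names below are illustrative assumptions, not a standard scheme.

```python
# Illustrative geographically segmented policy selection (all names assumed).
POLICY_BY_REGION = {
    "CN": {"content_policy": "cn_strict", "data_region": "cn-north", "real_name": True},
    "EU": {"content_policy": "eu_default", "data_region": "eu-west", "real_name": False},
}
DEFAULT_POLICY = {"content_policy": "global_default", "data_region": "us-east", "real_name": False}

def resolve_policy(user_region: str) -> dict:
    # Chinese-market users get the strict content policy, in-country data
    # storage (PIPL localization), and mandatory real-name verification.
    return POLICY_BY_REGION.get(user_region, DEFAULT_POLICY)

print(resolve_policy("CN"))
```

A red team check against such a design verifies that region resolution cannot be spoofed (e.g., via header manipulation) to escape the stricter policy tier.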
Comparison with Other Regulatory Frameworks
| Dimension | China | EU AI Act | US (State Laws) |
|---|---|---|---|
| Approach | Technology-specific regulations | Risk-based comprehensive regulation | Fragmented, application-specific |
| Content restrictions | Extensive political and social content restrictions | Focus on safety and fundamental rights | Limited (mainly anti-discrimination) |
| Enforcement | Administrative (CAC), potentially criminal | Administrative fines (up to 7% global turnover) | State attorney general enforcement and private rights of action |
| Speed of regulation | Rapid (months from proposal to enforcement) | Slow (years-long legislative process) | Variable by state |
| Extraterritorial reach | Applies to services accessible in China | Applies to AI systems deployed in EU market | Applies based on user location |
| Algorithm registration | Required for algorithms with public opinion or social mobilization attributes (covers most recommendation, deep synthesis, and GenAI services) | Not required (conformity assessment instead) | Not required |
| Pre-market approval | Safety assessment required for GenAI services | Conformity assessment for high-risk systems | Generally not required |
Red Team Engagement Considerations
Scoping for Chinese Regulatory Compliance
When conducting red team engagements for organizations subject to Chinese AI regulations:
- Determine which specific regulations apply based on the AI system type and deployment context
- Understand that content safety testing for the Chinese market requires specialized knowledge of politically sensitive topics
- Coordinate with local legal counsel to ensure testing activities themselves comply with Chinese law
- Structure findings to directly reference applicable Chinese regulatory provisions
- Be aware that testing results may need to be shared with Chinese regulatory authorities as part of the safety assessment process
Testing Activities by Regulation
| Regulation | Primary Testing Activities |
|---|---|
| Algorithmic Recommendation Provisions | User control verification, price discrimination detection, filter bubble assessment, minor protection testing |
| Deep Synthesis Provisions | Watermark robustness, label bypass testing, identity verification, prohibited content generation |
| Generative AI Measures | Content safety boundary testing, hallucination assessment, safety assessment support, complaint mechanism verification |
| PIPL | Data extraction testing, consent verification, data localization verification, deletion effectiveness |
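The structure of this table, mapping regulations to test activities, can be carried into the engagement tooling itself so every finding traces directly to a provision. The registry pattern below is one way to organize that; the decorator, regulation labels, and finding fields are illustrative assumptions, not an established framework.

```python
# Sketch: register engagement tests under the regulation they evidence, so
# report findings map one-to-one onto Chinese regulatory provisions.
from collections import defaultdict

TEST_REGISTRY = defaultdict(list)

def compliance_test(regulation):
    def wrap(fn):
        TEST_REGISTRY[regulation].append(fn)
        return fn
    return wrap

@compliance_test("Generative AI Measures")
def test_content_boundaries():
    # Placeholder result; a real test would exercise the live service.
    return {"finding": "none", "severity": "info"}

@compliance_test("Deep Synthesis Provisions")
def test_label_presence():
    return {"finding": "label missing on audio output", "severity": "high"}

def run_all():
    return {reg: [t() for t in tests] for reg, tests in TEST_REGISTRY.items()}

for reg, findings in run_all().items():
    print(reg, findings)
```

Grouping results this way also simplifies assembling the safety assessment report, which must address each regulatory requirement explicitly.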
Red teamers working with Chinese regulatory compliance should develop specialized expertise in the political and cultural context that shapes Chinese content requirements. Technical skill alone is insufficient -- effective testing requires understanding the regulatory intent and enforcement patterns unique to China's AI governance approach.