China has adopted a sector-specific, iterative regulatory approach to AI governance, issuing targeted regulations addressing specific AI applications rather than a single comprehensive AI law. This approach allows rapid regulatory response to emerging technologies while maintaining state control over information ecosystems.
Key Principle: China's AI governance is characterized by a dual objective — promoting AI innovation and industrial development while maintaining social stability, content control, and "socialist core values." This creates a unique regulatory environment where commercial AI development is actively encouraged but subject to content and ideological requirements not found in Western frameworks.
Regulatory Philosophy
China's approach differs fundamentally from the EU and US models:
Technology-specific rules: Rather than a single omnibus law (like the EU AI Act), China issues separate regulations for each AI application category — algorithmic recommendations, deepfakes/deep synthesis, and generative AI
Speed of regulation: China has been the first major jurisdiction to regulate several AI applications, including recommendation algorithms (2022) and generative AI (2023)
Content governance focus: All AI regulations include requirements to uphold "socialist core values" and prevent content that undermines national unity or social stability
Filing/registration systems: AI service providers must register algorithms and models with regulators, creating a centralized database of deployed AI systems
Dual emphasis: Regulations consistently balance control provisions with clauses promoting innovation and development
Timeline of Major AI Regulations

| Date | Regulation | Issuing Authority | Scope | Status |
|---|---|---|---|---|
| July 2017 | New Generation AI Development Plan | State Council | National AI strategy and goals through 2030 | Active |
| November 2021 | Personal Information Protection Law (PIPL) | NPC Standing Committee | Data protection, automated decision-making | In Force |
| March 2022 | Algorithm Recommendation Management Provisions | CAC + 3 agencies | Recommendation algorithms, content curation | In Force |
| January 2023 | Deep Synthesis Provisions | CAC + 2 agencies | Deepfakes, synthetic media, virtual persons | In Force |
| August 2023 | Interim Measures for Generative AI Services | CAC + 6 agencies | ChatGPT-type services, LLMs, foundation models | In Force |
| September 2023 | Global AI Governance Initiative | Ministry of Foreign Affairs | China's proposed international AI governance framework | Active |
| October 2023 | National AI Ethics Guidelines | MOST National AI Governance Committee | Ethical principles for AI R&D and deployment | Active |
| February 2024 | TC260 AI Safety Standard (Draft) | National Information Security Standardization Technical Committee | Technical safety requirements for generative AI | Draft |
China's AI Regulation Stack (2022–2025)
March 2022 — Algorithm Recommendation Regulation: First-of-kind rules governing algorithmic recommendation systems, requiring transparency and user opt-out mechanisms.
January 2023 — Deep Synthesis Provisions: Regulations targeting deepfakes and synthetic content, mandating labeling and traceability of AI-generated media.
August 2023 — Generative AI Measures: Interim measures requiring security assessments, training data legality verification, content labeling, and algorithm registration with the Cyberspace Administration of China (CAC).
2024–2025 — AI Safety Governance Framework: Draft comprehensive AI safety rules expanding scope to cover foundation models, autonomous agents, and cross-border AI services.
2. Key Regulatory Bodies
China's AI governance involves multiple government agencies with overlapping and coordinated jurisdictions. Unlike the EU's planned centralized AI Office, China distributes AI regulatory authority across existing agencies based on their sectoral mandates.
| Agency | Chinese Name | AI Jurisdiction | Key Powers |
|---|---|---|---|
| Cyberspace Administration of China (CAC) | 国家互联网信息办公室 | Primary AI regulator — algorithm filing, content review, generative AI oversight | Algorithm registration, security assessments, content moderation enforcement, penalties up to service suspension |
| Ministry of Science and Technology (MOST) | 科学技术部 | AI R&D policy, ethics guidelines, national AI strategy | Research funding, ethics committee oversight, technology standards |
| Ministry of Industry and Information Technology (MIIT) | 工业和信息化部 | AI industry development, telecommunications, autonomous vehicles | Industry standards, licensing, testing approvals |
| Ministry of Public Security (MPS) | 公安部 | AI in law enforcement, facial recognition, surveillance systems | Criminal enforcement, surveillance system approval |
| National Development and Reform Commission (NDRC) | 国家发展和改革委员会 | AI industrial policy, social credit system coordination | Economic planning, project approvals, social credit coordination |
Multi-Agency Coordination: Most AI regulations are jointly issued by multiple agencies. For example, the Generative AI Measures were issued by seven agencies: the CAC, NDRC, MOST, MIIT, the Ministry of Education, the MPS, and the National Radio and Television Administration. This reflects the cross-cutting nature of AI and ensures buy-in from all relevant regulators.
3. Algorithm Recommendation Management Provisions
The Provisions on the Management of Algorithmic Recommendations in Internet Information Services (互联网信息服务算法推荐管理规定), effective March 1, 2022, were among the world's first regulations specifically targeting recommendation algorithms. They apply to any service using algorithms to recommend information to users within China.
Scope & Definitions
The provisions cover five categories of recommendation algorithms:
Generative/Synthetic algorithms: Generate or synthesize content (text, images, audio, video)
Personalized push algorithms: Curate content feeds based on user profiles and behavior
Ranking/Sorting algorithms: Order search results, product listings, or information
Selection/Filtering algorithms: Filter content or information for users
Dispatch/Decision algorithms: Match or allocate resources (e.g., ride-hailing, gig work)
Key Requirements

| Requirement | Description | Article |
|---|---|---|
| Algorithm Filing | Providers with "public opinion characteristics or social mobilization capabilities" must file algorithm details with the CAC through the Internet Information Service Algorithm Filing System | Art. 24 |
| Transparency | Must inform users that algorithm recommendation services are being used and disclose basic principles, purpose, and main operating mechanism | Art. 16 |
| User Controls | Must provide users the option to turn off algorithmic recommendations and offer non-personalized content options | Art. 17 |
| Tag Management | Users must be able to select or delete user tags/profiles used for algorithmic recommendations | Art. 17 |
| Content Governance | Algorithms must not be used to disseminate information prohibited by laws, must promote "positive energy" and uphold socialist core values | Arts. 6-7 |
| No Price Discrimination | Algorithms must not implement unreasonable differential treatment in transaction conditions (prices, terms) based on consumer data profiles | Art. 21 |
| Labor Protection | Dispatch algorithms (gig work) must protect workers' legitimate rights — reasonable work hours, rest periods, fair compensation standards | Art. 20 |
| Minor Protection | Must not use algorithms to recommend content that may negatively affect the physical and mental health of minors; must develop minor-specific algorithms | Art. 18 |
| Addiction Prevention | Must not use algorithms to induce users into addiction or excessive consumption | Art. 19 |
| Security Assessment | Providers must conduct security self-assessments of algorithm mechanisms, models, data, and outputs | Art. 27 |
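The user-control duties in Art. 17 boil down to a small amount of per-user state a provider must keep: a switch for personalized recommendation and a deletable set of profile tags. A minimal Python sketch (all names are hypothetical, not from any official compliance toolkit):

```python
from dataclasses import dataclass, field

@dataclass
class UserAlgoSettings:
    """Hypothetical per-user settings satisfying Art. 17's controls."""
    personalized: bool = True                     # user may switch recommendations off
    tags: set[str] = field(default_factory=set)   # profile tags, user-deletable

    def opt_out(self) -> None:
        self.personalized = False

    def delete_tag(self, tag: str) -> None:
        self.tags.discard(tag)

def recommend(settings: UserAlgoSettings, personalized_feed, default_feed):
    # Art. 17: a non-personalized option must always be available
    return personalized_feed if settings.personalized else default_feed

s = UserAlgoSettings(tags={"sports", "travel"})
s.delete_tag("travel")   # user removes a profile tag
s.opt_out()              # user turns off algorithmic recommendation
assert recommend(s, ["tailored"], ["chronological"]) == ["chronological"]
```

The point of the sketch is that the regulation's "turn off" and "delete tags" rights are state transitions the serving path must respect on every request, not a one-time consent dialog.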
Algorithm Filing System
The CAC operates the Internet Information Service Algorithm Filing System where providers must register their algorithms. As of early 2025, over 3,000 algorithms have been filed, including systems from major companies like Alibaba, Tencent, ByteDance, Baidu, and JD.com.
Filed information includes:
Name and basic information of the algorithm provider
Algorithm name, type, and application area
Algorithm's basic principles and purpose
Self-assessment report on algorithm security
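The four filed items map naturally onto a simple record type. A hypothetical sketch of such a filing record (field names are illustrative; the CAC filing system defines its own schema):

```python
from dataclasses import dataclass

@dataclass
class AlgorithmFiling:
    """Illustrative record mirroring the four items filed with the CAC."""
    provider_name: str              # name and basic information of the provider
    algorithm_name: str
    algorithm_type: str             # one of the five regulated categories
    application_area: str
    principles_and_purpose: str     # the algorithm's basic principles and purpose
    security_self_assessment: str   # reference to the self-assessment report

filing = AlgorithmFiling(
    provider_name="Example Co.",
    algorithm_name="feed-ranker",
    algorithm_type="personalized push",
    application_area="short video",
    principles_and_purpose="Ranks videos by predicted engagement",
    security_self_assessment="self-assessment-2025-Q1.pdf",
)
```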
Penalties: Violations can result in warnings and fines of 10,000 to 100,000 RMB (approximately $1,400 to $14,000 USD); serious violations can draw fines of up to 1% of the previous year's revenue, suspension of services, or revocation of business licenses (Arts. 31-33).
4. Deep Synthesis Provisions
The Provisions on the Management of Deep Synthesis in Internet Information Services (互联网信息服务深度合成管理规定), effective January 10, 2023, were the world's first dedicated deepfake regulations. They govern the use of deep learning and other AI technologies to generate or manipulate text, images, audio, video, and virtual scenes.
Definition of Deep Synthesis
The regulations define "deep synthesis technology" broadly as technology that uses deep learning, virtual reality, and other generative or synthetic algorithms to produce:
Facial generation/replacement: Face swapping, facial generation, facial attribute manipulation
Video generation: AI-generated video content, video manipulation
Virtual scenes: Immersive 3D scenes, digital environments
Other: Any content created using generative sequence models or other synthesis technologies
Key Requirements

| Requirement | Description | Article |
|---|---|---|
| Mandatory Labeling | All deep synthesis content must be clearly labeled/watermarked to inform the public. Labels must not be easily removed. Both visible labels and embedded metadata/watermarks are required. | Arts. 16-17 |
| Implicit Watermarking | Providers must add watermarks to deep synthesis content that can be identified by technical detection, in addition to visible labels | Art. 16 |
| Real Identity Verification | Users of deep synthesis services must register with their real identity. Providers must verify user identities before allowing use. | Art. 9 |
| Consent for Biometric Use | Using identifiable biometric information (faces, voices) of real individuals requires explicit, separate consent from those individuals | Art. 14 |
| Content Moderation | Providers must review deep synthesis content before dissemination, maintain content review logs, and prevent prohibited content | Arts. 6-7 |
| Training Data Governance | Must ensure lawfulness of training data sources and implement data management systems including data classification, quality verification, and security protection | Art. 14 |
| Technical Security | Must establish technology management systems including algorithm mechanism reviews, ethics reviews, and anti-abuse measures | Art. 6 |
| Reporting Mechanisms | Must provide public reporting/complaint channels and process reports within specified timeframes | Art. 10 |
| Log Retention | Must retain logs of deep synthesis content generation for no less than 6 months, including user request logs, generation records, and review records | Art. 17 |
| Algorithm Filing | Deep synthesis service providers with public opinion characteristics must file their algorithms with the CAC | Art. 19 |
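The dual labeling requirement (a visible notice plus a machine-detectable implicit mark, Arts. 16-17) can be illustrated for text content. This is a hedged sketch, not an implementation of any official watermarking standard; real providers embed marks in the media itself:

```python
import json

VISIBLE_LABEL = "AI 生成 / AI-generated"  # visible notice shown to viewers

def label_content(text: str, model: str) -> dict:
    """Attach both a visible label and machine-readable provenance
    metadata, mirroring the dual requirement of Arts. 16-17."""
    return {
        "display_text": f"[{VISIBLE_LABEL}] {text}",
        # implicit mark: metadata a detector can read even if the
        # visible label is cropped or stripped away
        "provenance": json.dumps({"generator": model, "synthetic": True}),
    }

out = label_content("一段合成文本", model="demo-llm")
assert out["display_text"].startswith("[AI 生成")
assert json.loads(out["provenance"])["synthetic"] is True
```

The design intent of the dual requirement is that the visible label informs human viewers while the implicit mark survives for downstream technical detection and traceability.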
Three-Tier Responsibility Structure
The provisions establish obligations at three levels:
Deep Synthesis Service Providers (深度合成服务提供者): Companies offering deep synthesis technology as a service — bear primary compliance responsibility
Deep Synthesis Service Technical Supporters (技术支持者): Organizations providing the underlying technology or tools — must provide technical means for labeling, must not assist illegal uses
Deep Synthesis Service Users (使用者): End users creating content with deep synthesis tools — must not use services to create/disseminate illegal content, must use real identities
Global First: China's deep synthesis provisions predated the EU AI Act's deepfake transparency requirements by over a year. They influenced subsequent regulatory discussions globally and set a precedent for mandatory AI content labeling that has since been adopted in various forms worldwide.
5. Interim Measures for Generative AI Services
The Interim Measures for the Management of Generative Artificial Intelligence Services (生成式人工智能服务管理暂行办法), effective August 15, 2023, are China's response to ChatGPT and the generative AI revolution. These were among the first regulations globally specifically targeting generative AI/large language model services. Notably, the final version was significantly softened from the draft, reflecting a deliberate pivot toward encouraging innovation.
Key Changes from Draft to Final

| Aspect | Draft (April 2023) | Final (July 2023) | Significance |
|---|---|---|---|
| Title | "Administrative Measures" (管理办法) | "Interim Measures" (暂行办法) | Signals flexibility and willingness to revise |
| Scope | All generative AI providers | Only those offering services "to the public within the PRC" | Excludes internal/enterprise use and R&D |
| Security Assessment | Mandatory pre-launch security assessment for all | Security assessment only for services with "public opinion characteristics" | Reduces barrier to market entry |
| Training Data | Training data must be "true and accurate" | Must take "effective measures to improve quality of training data" | Acknowledges practical impossibility of guaranteeing data truth |
| Content Accuracy | Generated content must be "true and accurate" | Removed; focus on preventing "illegal content" | Recognizes that AI hallucination cannot be fully eliminated |
| Innovation Clause | Not emphasized | Explicit: "The state supports indigenous innovation in generative AI" (Art. 3) | Clear signal of policy support for development |
Core Requirements (Final Version)

| Requirement | Description | Article |
|---|---|---|
| Lawful Training Data | Must use lawfully obtained training data, respect IP rights, and take effective measures to improve data quality. Must not infringe others' personal information rights. | Art. 7 |
| Content Safety | Generated content must not contain content prohibited by laws — no incitement of subversion, no terrorism, no hate speech, no false information harmful to economic/social order | Art. 4 |
| Socialist Core Values | Must "adhere to the core values of socialism" and must not generate content that undermines national unity or social stability | Art. 4 |
| Algorithmic Transparency | Must provide clear descriptions of the service, applicable user groups, and usage scenarios | Art. 9 |
| User Real-Name Verification | Must verify user real identity per existing PRC regulations before providing service | Art. 9 |
| Content Labeling | AI-generated content must be labeled in accordance with deep synthesis provisions | Art. 12 |
| Model Filing | Providers offering services to the public must file their models with relevant authorities | Art. 17 |
| User Complaints | Must establish complaint/reporting mechanisms and process complaints within 3 working days | Art. 11 |
| Data Protection | Must not illegally collect personal information, must not engage in profiling of users, must protect user input information | Art. 11 |
| Discrimination Prevention | Must take effective measures to prevent discrimination based on ethnicity, belief, nationality, region, gender, age, or occupation in algorithm design, training data selection, and model generation | Art. 4 |
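The 3-working-day complaint window (Art. 11) is a concrete deadline a provider's ticketing system must track. A naive sketch that skips weekends only; a real implementation would also exclude PRC statutory holidays, which this deliberately ignores:

```python
from datetime import date, timedelta

def complaint_deadline(received: date, working_days: int = 3) -> date:
    """Advance the given number of working days past the receipt date,
    counting Monday-Friday only (PRC statutory holidays ignored)."""
    d = received
    remaining = working_days
    while remaining > 0:
        d += timedelta(days=1)
        if d.weekday() < 5:  # Monday (0) through Friday (4)
            remaining -= 1
    return d

# complaint received on a Friday -> due the following Wednesday
assert complaint_deadline(date(2024, 3, 1)) == date(2024, 3, 6)
```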
Model Filing Process
Since August 2023, the CAC has operated a generative AI model filing system. Companies must register large language models and other generative AI models before offering them to the public. As of early 2025, over 200 models have been filed, including:
Baidu: ERNIE Bot (文心一言)
Alibaba: Tongyi Qianwen (通义千问)
Tencent: Hunyuan (混元)
ByteDance: Doubao (豆包)
iFlytek: Spark (星火认知)
SenseTime: SenseNova (日日新)
Zhipu AI: ChatGLM (智谱清言)
Moonshot AI: Kimi
DeepSeek: DeepSeek
Penalties: The measures reference existing cybersecurity, data security, and personal information protection laws for enforcement. Violations can trigger penalties under the Cybersecurity Law (up to 1M RMB / ~$140,000), PIPL (up to 50M RMB / ~$7M or 5% of annual revenue), or Data Security Law (up to 10M RMB / ~$1.4M). Serious violations may result in service suspension or license revocation.
6. Personal Information Protection Law (PIPL)
The Personal Information Protection Law (个人信息保护法), effective November 1, 2021, is China's comprehensive data protection law — often compared to the EU's GDPR. While not AI-specific, PIPL contains provisions that directly regulate AI systems that process personal information, particularly automated decision-making.
Key AI-Relevant Provisions

| Provision | Description | Article | Comparison to GDPR |
|---|---|---|---|
| Automated Decision-Making Definition | Activities using computer programs to automatically analyze and evaluate personal behavior, habits, interests, hobbies, or financial/health/credit status, and make decisions | Art. 73(2) | Broader than GDPR Art. 22 — covers all automated decisions, not just solely automated |
| Transparency of Automated Decisions | Must ensure transparency of automated decision-making and fairness/impartiality of results. Must not apply unreasonable differential treatment to individuals on transaction terms. | Art. 24(1) | Similar to GDPR transparency requirements but with explicit fair pricing mandate |
| Right to Opt Out | Individuals have the right to request that personal information processors not make decisions solely through automated decision-making that have a significant influence on their rights | Art. 24(3) | Similar to GDPR Art. 22 right to human intervention |
| Right to Explanation | When automated decisions significantly affect an individual's rights, they have the right to request an explanation and to refuse decisions made solely through automated processing | Art. 24(3) | More explicit than GDPR; explicit right to refuse |
| Opt-Out from Marketing | Automated decision-making for personalized marketing must provide an option to not target the individual's personal characteristics, or a convenient way to refuse | Art. 24(2) | Specific to marketing; mirrors GDPR profiling for direct marketing objection |
| Personal Information Impact Assessment | Must conduct impact assessments before automated decision-making that significantly affects individuals. Must assess lawfulness, necessity, impact on rights, security measures. | Art. 55 | Similar to GDPR DPIA (Art. 35) but triggered differently |
| Sensitive Personal Information | Biometric data (face, voice, fingerprint), financial info, location tracking, minors' data — require separate consent and specific purpose limitation when used in AI | Arts. 28-32 | Similar to GDPR special category data but categorized differently |
| Cross-Border Transfer | Personal information leaving China requires: (a) CAC security assessment, (b) certification, or (c) standard contractual clauses — all relevant for AI models trained on PRC data | Arts. 38-43 | Stricter than GDPR; government security assessment required for large-scale transfers |
Maximum Penalties under PIPL

| Violation Type | Fine (Organization) | Fine (Responsible Individuals) | Additional Measures |
|---|---|---|---|
| Standard violations | Up to 1 million RMB (~$140,000 USD) | 10,000-100,000 RMB (~$1,400-$14,000) | Correction orders, warnings |
| Serious violations | Up to 50 million RMB (~$7M) or 5% of prior year's annual revenue | 100,000-1,000,000 RMB (~$14,000-$140,000) | Suspension of services, revocation of business licenses, industry bans for responsible persons |
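For serious violations, a small helper makes the organizational exposure explicit. Note the assumption: Art. 66 states the two limbs ("up to 50 million RMB or up to 5% of revenue") without specifying their interaction, so this sketch treats the larger limb as the theoretical ceiling:

```python
def pipl_serious_fine_cap(prior_year_revenue_rmb: float) -> float:
    """Upper bound on the organizational fine for a serious PIPL
    violation: up to 50M RMB or up to 5% of prior-year revenue.
    Modeled here as the larger of the two ceilings; the statute
    leaves the choice between limbs to regulators."""
    return max(50_000_000, 0.05 * prior_year_revenue_rmb)

assert pipl_serious_fine_cap(2_000_000_000) == 100_000_000  # 5% limb dominates
assert pipl_serious_fine_cap(100_000_000) == 50_000_000     # flat cap dominates
```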
Extraterritorial Reach: Like GDPR, PIPL applies extraterritorially. It covers processing of personal information of individuals within China even by organizations outside China if the purpose is to provide products/services to individuals in China or to analyze/evaluate their behavior. Foreign AI companies must appoint a representative in China. (Art. 3, Art. 53)
7. The Social Credit System
China's Social Credit System (社会信用体系) represents one of the world's most ambitious intersections of AI and governance. While often sensationalized in Western media, the system is more accurately described as a fragmented collection of government and commercial credit/trustworthiness scoring systems with varying degrees of AI integration.
System Architecture

The social credit system operates at multiple levels, each with different AI involvement:

| Layer | Operator | AI Role | Scope | Status |
|---|---|---|---|---|
| National Corporate Credit | NDRC, SAMR | Data aggregation, pattern detection, risk scoring | Business compliance, tax, regulatory violations | Operational |
| Financial Credit | People's Bank of China (PBOC) | ML-based credit scoring, fraud detection | Individual and corporate financial creditworthiness | Operational (PBOC Credit Reference Center) |
| Municipal/City Pilots | Local governments (50+ cities) | Scoring algorithms, behavioral analysis, surveillance integration (varies by city) | | |

AI technologies deployed across these systems include:
Facial recognition: Integrated into surveillance systems in some city pilots for identity verification and behavior monitoring
Natural language processing: Analysis of court judgments, regulatory filings, and public records
Machine learning scoring: Algorithmic assessment of creditworthiness based on financial, behavioral, and social data
Data fusion: Aggregation across government databases (tax, courts, business registration, customs)
Automated enforcement: Algorithmic triggering of consequences (travel bans, contract restrictions) based on blacklist status
Legal Framework
State Council Plan (2014): "Planning Outline for the Construction of a Social Credit System (2014-2020)" — established the framework and goals
Social Credit Legislation (Draft, 2022): A comprehensive Social Credit Law has been in draft stage since 2022 but has not yet been finalized
Sector-specific regulations: Multiple regulations govern specific aspects — the "Dishonest Judgment Debtor" system is governed by Supreme Court rules; commercial credit scoring is regulated by PBOC; corporate credit by SAMR
International Implications: The social credit system extends to foreign companies operating in China. Corporate social credit scores affect market access, regulatory treatment, and inspection frequency. AI-driven risk classification determines how closely foreign businesses are monitored by Chinese regulators. The EU Chamber of Commerce in China has identified this as a significant concern for European businesses.
8. New Generation AI Development Plan
The New Generation Artificial Intelligence Development Plan (新一代人工智能发展规划), issued by the State Council in July 2017, is China's master strategic plan for AI development. It established the national ambition to become the world's primary AI innovation center by 2030 and has driven massive government investment in AI research, talent, and infrastructure.
Three-Step Strategic Goals

| Phase | Timeline | Core AI Industry Target | AI-Related Industry Target | Key Goals |
|---|---|---|---|---|
| Step 1 | By 2020 | 150 billion RMB (~$21B) | 1 trillion RMB (~$140B) | Keep pace with leading AI nations; establish initial governance frameworks |
| Step 2 | By 2025 | 400 billion RMB (~$56B) | 5 trillion RMB (~$700B) | Achieve major AI breakthroughs; AI becomes primary driver of industrial upgrading |
| Step 3 | By 2030 | 1 trillion RMB (~$140B) | 10 trillion RMB (~$1.4T) | Become the world's primary AI innovation center; establish comprehensive governance |
9. AI Ethics Guidelines
China's New Generation AI Ethics Code (新一代人工智能伦理规范), released by MOST's National New Generation AI Governance Expert Committee in September 2021 (updated October 2023), establishes ethical principles for AI research, development, deployment, and use.
Six Core Ethical Principles

| Principle | Chinese Term | Description |
|---|---|---|
| 1. Human-Centered | 以人为本 | AI development must serve humanity's interests; human dignity and rights must be respected; prevent harm to human physical/mental health |
| 2. Fairness & Justice | 公平公正 | AI must not discriminate based on ethnicity, belief, nationality, skin color, gender, age, or occupation; equal access to AI benefits; prevent monopolistic behavior |
| 3. Privacy Protection | 隐私保护 | Protect personal privacy and data security throughout the AI lifecycle; follow data minimization; ensure data quality; prevent unauthorized access |
| 4. Security & Controllability | 安全可控 | AI systems must be safe, reliable, and controllable; ensure human ability to intervene and override AI systems; prevent uncontrolled self-learning |
| 5. Transparency & Explainability | 透明可解释 | AI decision-making should be transparent and explainable at appropriate levels; ensure traceability; provide mechanisms for questioning AI decisions |
| 6. Responsibility & Accountability | 责任担当 | Clear allocation of responsibility throughout the AI lifecycle; developers, deployers, and users all bear appropriate responsibilities; establish risk monitoring and accountability mechanisms |
Four Stakeholder Categories
The guidelines assign specific ethical obligations to four groups:
AI Management Entities: Practice agile governance, strengthen risk monitoring and prevention, exercise oversight authority responsibly
AI R&D Institutions: Conduct ethical impact assessments, establish ethics review committees, ensure data quality
AI Service Providers: Ensure product safety, provide user mechanisms for complaints, conduct ongoing monitoring
AI Users: Use AI systems responsibly, comply with terms of use, not use AI for illegal purposes
Comparison with International Frameworks: China's ethics guidelines share significant overlap with OECD AI Principles (human-centered, transparent, accountable, safe) and UNESCO's Recommendation on AI Ethics. However, they include unique Chinese elements: emphasis on "socialist core values," national security considerations, and explicit focus on social harmony and stability as ethical goals.
10. Data Security Law (DSL)
The Data Security Law (数据安全法), effective September 1, 2021, together with the Cybersecurity Law (2017) and PIPL (2021), forms the "three pillars" of China's data governance framework. For AI, the DSL creates obligations around training data management, data classification, and cross-border data transfers that directly affect AI development and deployment.
AI-Relevant Provisions

| Provision | Description | Article | AI Impact |
|---|---|---|---|
| Data Classification | All data must be classified by importance to economic/social development and national security — "core data," "important data," and "general data" | Art. 21 | AI training datasets must be classified; different rules apply to each tier |
| Core Data | Data related to national security, economic lifelines, people's livelihoods, and major public interests — subject to stricter management system | Art. 21 | AI models trained on core data face highest scrutiny; cross-border transfer generally prohibited |
| Important Data | Data that may affect national security, public interests, or lawful rights if tampered with, leaked, or illegally obtained | Art. 21 | AI processors must designate data security officers, conduct regular risk assessments |
| Data Security Review | Government can conduct national security reviews of data processing activities that affect or may affect national security | Art. 24 | AI companies processing large-scale data may be subject to security reviews |
| Cross-Border Restrictions | Important data collected by critical information infrastructure operators must be stored in China; transfers abroad require security assessment | Art. 31 | Limits ability to train AI models abroad using Chinese data; affects multinational AI development |
| Data Transaction | Establishes framework for lawful data trading/exchange markets | Art. 19 | Provides legal basis for AI training data marketplaces |
| Anti-Discrimination | Government departments must not use data to seek improper benefits or discriminate against companies | Art. 8 | Government AI systems using data must maintain competitive neutrality |
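The three-tier classification determines how an AI training dataset may be handled. A schematic lookup, paraphrasing Arts. 21 and 31; the rule strings are illustrative summaries, not statutory text:

```python
from enum import Enum

class DataTier(Enum):
    GENERAL = "general data"
    IMPORTANT = "important data"
    CORE = "core data"

# Illustrative handling constraints per tier (summaries, not legal advice)
RULES = {
    DataTier.GENERAL:   {"cross_border": "permitted (other laws may still apply)",
                         "security_officer": False},
    DataTier.IMPORTANT: {"cross_border": "security assessment required",
                         "security_officer": True},
    DataTier.CORE:      {"cross_border": "generally prohibited",
                         "security_officer": True},
}

def training_set_rules(tier: DataTier) -> dict:
    """Look up the handling constraints an AI training dataset
    inherits from its DSL classification tier."""
    return RULES[tier]

assert training_set_rules(DataTier.CORE)["cross_border"] == "generally prohibited"
```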
Maximum Penalties
General violations: Fines up to 500,000 RMB (~$70,000) plus correction orders
Serious violations: Fines up to 5 million RMB (~$700,000) plus suspension of business
Important data violations: Fines up to 2 million RMB (~$280,000) and up to 10 million RMB for serious cases
Core data violations: Fines up to 10 million RMB (~$1.4M) plus revocation of business licenses
Cross-border transfer violations: Correction orders, warnings, fines, and potential criminal liability
11. Autonomous Vehicles
China has established a multi-layered regulatory framework for autonomous vehicles, combining national guidelines with local testing regulations in major cities. The approach is notable for its aggressive support of autonomous driving deployment alongside regulatory development.
National Regulations

| Regulation | Issuing Authority | Date | Key Provisions |
|---|---|---|---|
| Road Traffic Safety Law (Amendment Draft) | NPC Standing Committee | 2021 (Draft) | Proposed legal framework for autonomous vehicle road use; liability allocation; data recording requirements |
| Intelligent Connected Vehicle Road Testing Norms | MIIT + 3 agencies | 2021 (Updated) | National standards for AV testing on public roads; safety driver requirements; testing area designations |
| Guidelines for AV Production Access | MIIT | 2023 | Requirements for manufacturers to produce autonomous vehicles; cybersecurity and data protection requirements |
| MIIT L3+ Approval Framework | MIIT | November 2023 | First approvals for L3 autonomous driving systems on Chinese highways — Mercedes-Benz and BMW among first approved |
| Shenzhen Intelligent Connected Vehicle Regulations | Shenzhen Municipal People's Congress | August 2022 | China's first city-level law specifically for autonomous vehicles; allows fully driverless operation in designated zones; establishes liability framework |
Key Testing Zones
Beijing Yizhuang: 60+ sq km autonomous driving demonstration zone; Baidu Apollo Go robotaxi operations; 300+ AVs in daily operation
Shanghai Jiading: National intelligent connected vehicle pilot zone; open road testing since 2018
Shenzhen: First city to legalize L3+ autonomous driving on public roads; AutoX and Pony.ai operations
Guangzhou Nansha: WeRide, Pony.ai robotaxi testing and commercial operations
Wuhan: Baidu Apollo Go fully driverless (no safety driver) commercial robotaxi service — largest globally
Changsha: National smart connected vehicle testing zone; V2X infrastructure deployment
Major Chinese AV Companies

| Company | Focus | Status (2025) | Key Regulatory Milestones |
|---|---|---|---|
| Baidu Apollo | Robotaxi (Apollo Go), autonomous driving platform | Fully driverless commercial service in Wuhan, Beijing, Shenzhen | First company approved for fully driverless robotaxi in China (2023) |
| Pony.ai | Robotaxi, autonomous trucking | Commercial operations in 4+ cities; IPO completed | Taxi license in Guangzhou (first for AV company); Beijing driverless permit |
| WeRide | Robotaxi, robobus, robosweeper | Operations in 30+ cities globally | First to receive all-scenario AV permit in China |
| AutoX | Robotaxi | Fully driverless operations in Shenzhen | First fully driverless robotaxi permit in Shenzhen |
| Huawei / Avatr / Arcfox | L2+/L3 ADAS systems | Production vehicles with advanced urban NCA | MIIT L3 testing approvals |
12. Enforcement Actions & Cases
China has actively enforced its AI-related regulations, with the CAC serving as the primary enforcement body. Enforcement has focused on algorithm compliance, content moderation failures, data protection violations, and unauthorized AI service deployment.
Notable Enforcement Actions

| Date | Target | Violation | Outcome | Regulation Applied |
|---|---|---|---|---|
| April 2021 | Alibaba | Algorithmic pricing manipulation; anti-competitive use of platform algorithms | 18.228 billion RMB fine (~$2.8B) for antitrust; algorithm compliance reforms mandated | Anti-Monopoly Law |
| 2021-2022 | Didi Global | Illegal collection and use of personal information; data security violations around US IPO | 8.026 billion RMB fine (~$1.2B USD); app removed from stores; officers fined personally | Cybersecurity Law, Data Security Law, PIPL |
| 2022 | Major platform operators | Failure to file algorithms with CAC by the March 2022 deadline | Warnings issued; 30+ major companies filed algorithms including Alibaba, Tencent, ByteDance | Algorithm Recommendation Provisions |
| May 2023 | AIGC Deepfake Cases | Individuals used AI face-swapping for fraud (impersonating victims in video calls); AI voice cloning for scam calls | Criminal prosecution under fraud statutes plus deep synthesis provisions; multiple arrests | Deep Synthesis Provisions, Criminal Law |
| July 2023 | Ant Group (Sesame Credit) | PBOC investigation into consumer credit scoring practices and data collection | 7.123 billion RMB fine (~$985M); comprehensive compliance overhaul; restructuring of credit business | PIPL, PBOC regulations, Consumer Rights Protection |
| August 2023 | Unlicensed GenAI Services | Multiple small companies offering ChatGPT-like services without completing model filing | Service shutdowns; warnings; required to complete filing before resuming | Generative AI Interim Measures |
| November 2023 | Beijing Internet Court Case | First Chinese court ruling on copyright in an AI-generated image (Li v. Liu) | Court ruled AI-generated images can receive copyright protection if there is sufficient human creative input in prompting and selection | Copyright Law, civil litigation |
| 2023-2024 | AI-Generated Misinformation | Multiple cases of individuals using generative AI to create and spread false news/rumors | Administrative detention (5-15 days) and fines under public security laws; platform penalties | Generative AI Measures, Public Security Administration Punishment Law |
Enforcement Pattern: China's enforcement of AI regulations tends to follow a pattern: (1) initial grace period after regulation takes effect, (2) high-profile enforcement actions against major companies to signal seriousness, (3) broader industry compliance campaigns. The Didi and Alibaba cases demonstrated willingness to impose severe penalties on the largest tech companies, establishing deterrent effects across the industry.
13. China vs. EU vs. US Comparison
Understanding China's AI governance requires comparison with other major regulatory approaches. The three regimes reflect fundamentally different governance philosophies while sharing some common concerns around safety, fairness, and transparency.
| Dimension | China | EU | US |
|---|---|---|---|
| Deepfakes / Deep Synthesis | Dedicated Deep Synthesis Provisions (Jan 2023); mandatory labeling | AI Act Art. 50 transparency obligations (2026 implementation) | No federal law; some state laws (California, Texas deepfake laws) |
| Generative AI | Dedicated interim measures (Aug 2023); model filing; content safety | GPAI provisions in AI Act; systemic risk assessment for powerful models | EO 14110 required NIST guidelines (now partially rescinded by EO 14179) |
| Maximum Penalties | Up to 5% annual revenue (PIPL); criminal prosecution possible; service suspension | Up to 7% global annual turnover (AI Act) | Varies by sector and enforcement authority; FTC Act Section 5 |
| Cross-Border Data | Strict: security assessment required; data localization for important/core data | Adequacy decisions; SCCs; BCRs | Generally permissive; sectoral restrictions (HIPAA, etc.) |
| Innovation Stance | Explicitly dual: promote development and control; massive state investment ($15B+ annually) | Innovation-friendly rhetoric with regulatory sandboxes | Innovation-first; voluntary commitments; industry self-regulation |
| Biometric AI | Consent requirements (PIPL), alongside extensive government surveillance use | Strict restrictions; ban on real-time public biometric identification (with exceptions) | Patchwork: BIPA (Illinois); some city bans; no federal law |
| Key Difference | State-directed innovation with content/ideological control | Rights-based, risk-tiered comprehensive framework | Market-driven, innovation-first with sectoral guardrails |
14. Compliance Requirements for Companies
Companies developing or deploying AI systems in China must navigate a complex web of overlapping regulations. This section provides a practical compliance framework organized by obligation type.
Compliance Checklist by Regulation
| Obligation | Algorithm Rec. Provisions | Deep Synthesis | Generative AI | PIPL | DSL |
|---|---|---|---|---|---|
| Algorithm/Model Filing | Required (CAC) | Required (CAC) | Required (CAC) | ➖ N/A | ➖ N/A |
| Security Assessment | Self-assessment | Required | For public-facing | PIIA required | For important data |
| User Real-Name Verification | ➖ Not explicit | Required | Required | ➖ N/A | ➖ N/A |
| Content Labeling | ➖ Not required | Mandatory (visible + watermark) | Per deep synthesis rules | ➖ N/A | ➖ N/A |
| Content Moderation | Required | Required | Required | ➖ N/A | ➖ N/A |
| User Opt-Out | From recommendations | ➖ N/A | ➖ N/A | From automated decisions | ➖ N/A |
| Complaint Mechanisms | Required | Required | Within 3 days | Required | ➖ N/A |
| Log Retention | 6 months | 6 months | Per existing rules | Per data lifecycle | Required |
| Training Data Governance | ➖ N/A | Required | Lawful sources; quality measures | Lawful processing | Classification required |
| Cross-Border Restrictions | ➖ N/A | ➖ N/A | ➖ N/A (but data rules apply) | Security assessment/SCCs | For important data |
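To illustrate the content-labeling obligation, the sketch below attaches a visible AI-generated notice plus a machine-readable provenance record to generated text. The helper name, label wording, and metadata fields are hypothetical: the Deep Synthesis Provisions mandate the labeling outcome, not any particular implementation.

```python
import hashlib
from datetime import datetime, timezone

# Hypothetical label text; the rules require a conspicuous notice that
# content is AI-generated but do not prescribe exact wording.
VISIBLE_LABEL = "[AI-generated content / AI生成内容]"

def label_generated_text(text: str, model_name: str) -> tuple[str, dict]:
    """Prepend a visible label and build a provenance sidecar for generated text.

    The dict plays the role of the 'implicit' machine-readable mark that
    accompanies the visible one (illustrative structure only).
    """
    labeled = f"{VISIBLE_LABEL}\n{text}"
    provenance = {
        "generator": model_name,
        "generated_at": datetime.now(timezone.utc).isoformat(),
        # Hash of the raw output lets a platform check the text was not
        # altered after labeling (a design choice, not a legal requirement).
        "sha256": hashlib.sha256(text.encode("utf-8")).hexdigest(),
        "ai_generated": True,
    }
    return labeled, provenance

labeled, meta = label_generated_text("示例输出 / example output", "demo-model-v1")
print(labeled.splitlines()[0])  # the visible notice appears on the first line
```

In practice the sidecar would be stored or embedded per the provider's own pipeline; the point is that both a visible and a machine-readable mark travel with the content.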
Steps for Foreign Companies
Foreign companies deploying AI in China should follow these compliance steps:
1. Scope Assessment: Determine which regulations apply based on AI type (recommendation, deep synthesis, generative) and data processing activities
2. Data Mapping: Classify all data under DSL categories (core, important, general); identify personal information under PIPL
3. Localization Review: Assess data localization requirements — important data and personal information may need to remain in China
4. Algorithm/Model Filing: Complete CAC algorithm or model filing if offering public-facing AI services
5. Security Assessment: Conduct required self-assessments; prepare for potential government security reviews
6. Content Compliance: Implement content moderation systems compliant with Chinese content requirements
7. Appoint Representatives: Designate a data protection officer and/or local representative as required by PIPL
8. Establish Mechanisms: Create user complaint channels, opt-out options, and incident response procedures
9. Ongoing Monitoring: Track regulatory developments — China issues new AI regulations frequently
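The scope-assessment step above can be sketched as a simple rule table. The regulation names come from this document; the class, field, and function names are hypothetical, and a tool like this is a first-pass scoping aid, not a substitute for legal review.

```python
from dataclasses import dataclass

@dataclass
class AIService:
    """Coarse description of an AI deployment in China (illustrative fields)."""
    uses_recommendation_algorithms: bool = False
    generates_synthetic_media: bool = False  # face/voice swapping, deepfakes
    offers_generative_ai_to_public: bool = False
    processes_personal_information: bool = False
    handles_important_or_core_data: bool = False

def applicable_regulations(svc: AIService) -> list[str]:
    """Map service characteristics to the regulations discussed in this section."""
    rules = []
    if svc.uses_recommendation_algorithms:
        rules.append("Algorithm Recommendation Management Provisions")
    if svc.generates_synthetic_media:
        rules.append("Deep Synthesis Provisions")
    if svc.offers_generative_ai_to_public:
        rules.append("Generative AI Interim Measures")
    if svc.processes_personal_information:
        rules.append("PIPL")
    if svc.handles_important_or_core_data:
        rules.append("Data Security Law (DSL)")
    return rules

# Example: a public chatbot that handles user personal information.
chatbot = AIService(offers_generative_ai_to_public=True,
                    processes_personal_information=True)
print(applicable_regulations(chatbot))
```

Each regulation returned would then drive the corresponding rows of the compliance checklist (filing, assessment, labeling, and so on).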
Practical Tip: China's AI regulations are evolving rapidly. The "interim" nature of the Generative AI Measures signals that more permanent, potentially stricter regulations are likely. Companies should build compliance infrastructure that can adapt to changing requirements. The TC260 technical standards (currently in draft) will add detailed technical compliance requirements when finalized.
7. Social Credit System & AI
China's Social Credit System (社会信用体系) represents one of the world's most ambitious intersections of AI and governance. While often sensationalized in Western media, the system is more accurately described as a fragmented collection of government and commercial credit/trustworthiness scoring systems with varying degrees of AI integration.
System Architecture
The social credit system operates at multiple levels, each with different AI involvement:
AI Technologies Used
Legal Framework
Further Reading