When ChatGPT launched in November 2022, two very different reactions emerged from two very different capitals. Washington scrambled to understand the technology. Beijing scrambled to control it. That contrast captures how China approaches generative AI: not as a product to be iterated on in the open market, but as a strategic national asset to be developed, directed, and governed by the state.
The result is the most comprehensive AI regulatory architecture in the world by 2026, blending innovation mandates with ideological guardrails, commercial ambition with political censorship, and technical standardisation with state surveillance. This article maps the legal framework as it operates in April 2026: the binding instruments, the registration regime, the labelling architecture, the enforcement campaigns, and the practical implications for any business operating in or building for the Chinese AI market.
The foundation: the Interim Measures for Generative AI (2023)
Everything in China's AI control framework traces back to a single landmark document: the Interim Measures for the Management of Generative Artificial Intelligence Services, in force from 15 August 2023. It was the first comprehensive generative AI law anywhere in the world, predating the EU AI Act, the US Executive Order on AI, and every other comparable instrument.
The law was jointly issued by seven agencies: the Cyberspace Administration of China (CAC), the National Development and Reform Commission, the Ministry of Education, the Ministry of Science and Technology, the Ministry of Industry and Information Technology (MIIT), the Ministry of Public Security (MPS), and the National Radio and Television Administration (NRTA). The breadth of that list signals the document's character: not a consumer protection rule, but a whole-of-government control directive.
Five foundational obligations
The Interim Measures impose five foundational obligations on every provider of generative AI services accessible to the Chinese public:
- Content must uphold "Socialist Core Values": a non-negotiable political condition embedded in Article 4.
- Training data must come from "lawful sources": data that has passed China's content-legality standards, which categorically exclude anything censored under Chinese law.
- Algorithm filing: providers must file their algorithm with the CAC before offering services to the public where the service has "public opinion properties or the capacity for social mobilization."
- Security assessments: mandatory for any model with "public opinion properties or the capacity for social mobilization."
- Labelling of AI-generated content: a requirement that grew into its own comprehensive regulation by 2025.
What distinguishes this law from Western counterparts is the pre-launch approval process. Article 17 requires providers to complete government security assessments and algorithm filing before releasing services to the public. The EU and US largely apply regulations after deployment. China reviews the model first.
The CAC: nerve centre of AI governance
The Cyberspace Administration of China sits at the centre of AI governance, combining functions that in the US are spread across the FTC, the FCC, and content regulators. Since the Interim Measures came into force, CAC has built out an elaborate enforcement architecture.
Algorithm filing registry
Every generative AI service with significant public reach must register its algorithm with the CAC. The registry has expanded rapidly. As of 31 March 2025, 346 generative AI services had been filed with CAC, including DeepSeek and Baidu's Ernie Bot. By the end of 2025, CAC had reported 446 newly filed and 330 newly registered services for the year. CAC publicly lists registered services and requires them to display model names and filing numbers prominently to users, a transparency requirement comparable to drug approval numbers.
Beyond the headline numbers, the broader registry of generative algorithmic tools (covering both filed services and the wider algorithm recommendation registry) contained more than 3,700 generative algorithmic tools from 2,353 unique companies as of April 2025, growing by 250-300 entries monthly. No other country provides comparable visibility into its AI ecosystem.
Pre-launch security reviews
Specialised CAC teams conduct compliance audits with focus on ensuring high rates of "appropriate responses" to politically sensitive queries. Providers must submit algorithm self-assessments, training data documentation, annotation rules, keyword lists, and evaluation question sets. They must also provide API access and virtual testing accounts for CAC officers to conduct functional and security testing during the review.
Two-track filing system
The current system distinguishes between full LLM filings (for providers building proprietary foundation models) and simpler registrations (for applications calling already-filed models via API). For the simpler registration pathway, providers typically register with their provincial CAC. Filings under the LLM track require substantially more documentation including security self-assessment, service agreement, annotation rules, keyword lists, and pre-tested evaluation questions.
The Qinglang enforcement campaigns
CAC runs annual "Qinglang" (清朗, "Clear and Bright") enforcement campaigns. The 2025 campaign titled "Rectification of AI Technology Misuse" launched on 30 April 2025 and ran for three months across two phases. By the time the first phase concluded in June 2025, authorities had taken down more than 3,500 AI-related products, scrubbed over 960,000 pieces of illegal or harmful content, and shut down or penalised more than 3,700 accounts. Local regulators including the Shanghai CAC penalised AI applications that had launched without completing the required filing process. The Zhejiang CAC ordered app stores to remove a face-swapping app that had not undergone security assessment.
The 2026 campaign, launched in early 2026, carries substantially more legal teeth than its predecessor. It builds on two new instruments. The Interim Measures for the Management of Anthropomorphic AI Interactive Services (published 10 April 2026, in force from 15 July 2026) govern chatbots, AI companions, and AI customer service agents that simulate human personality. The draft rules for Digital Virtual Human Services (published 3 April 2026, public comment closed 6 May 2026) cover biometric deepfakes, consent for likeness use, and bypass of biometric authentication systems.
Penalties and consequences
Under Article 31 of the Algorithm Recommendation Measures, providers that fail to complete required filings can receive warnings, public reprimands, orders to rectify, suspension of information updates, and fines ranging from RMB 10,000 to 100,000 (approximately USD 1,400 to 14,000). For more serious violations, the consequences include service suspension, app store delisting, and licensing revocation under Articles 10-12 of the Generative AI Interim Measures. In the Chinese market, where government suspension can effectively end a startup's commercial viability, these consequences are powerful deterrents.
The censorship architecture: what Chinese AI cannot say
The political guardrails on Chinese AI are not vague. They are specific, quantified, and actively enforced.
The keyword quota system
Under the standards framework for generative AI safety, companies must compile and maintain lists of sensitive keywords for flagging unsafe content. The standards define eight categories of political content that violates "core socialist values," each requiring 200 keywords chosen by the companies themselves. There are also nine categories of "discriminative" content covering religion, nationality, gender, and age. The arrangement makes companies complicit in their own censorship by tasking them with selecting which terms to suppress.
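To make the mechanism concrete, a provider-side keyword screen along these lines might look like the sketch below. The category name and keyword are hypothetical placeholders; under the standards framework, each provider compiles its own lists (200 terms per political category).

```python
# Minimal sketch of a provider-side keyword screen. The category name
# and keyword below are hypothetical placeholders; each provider
# compiles its own lists under the standards framework.
from dataclasses import dataclass, field

@dataclass
class KeywordList:
    category: str                           # one of the mandated content categories
    terms: set = field(default_factory=set)

def screen(text: str, keyword_lists: list) -> list:
    """Return the categories whose keywords appear in the text."""
    return [kl.category for kl in keyword_lists
            if any(term in text for term in kl.terms)]

# Hypothetical single-category list for demonstration.
lists = [KeywordList("category_A", {"example_term"})]
print(screen("a prompt containing example_term", lists))  # → ['category_A']
```

In practice such screening runs on both user prompts and model outputs; the point of the sketch is only that the lists, and therefore the suppression boundary, are the provider's own compilation.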
Politically aligned benchmarks
Two months before the Interim Measures came into force, Chinese researchers released the C-Eval benchmark, comprising approximately 14,000 multiple-choice questions spanning 52 disciplines, including "Mao Zedong's Thought," "Marxism," and "Ideological and Moral Cultivation." This is one of the standard benchmarks against which Chinese LLMs are evaluated for "lawfulness." The China Academy of Information and Communications Technology (CAICT) followed with its own AI Safety Benchmark containing approximately 400,000 Chinese-language prompts addressing political correctness alongside privacy and cultural bias.
Filtered training data
Because the Interim Measures require training data from "lawful sources," every model trained in China is, by design, trained on data that excludes politically sensitive material. China's large language models are built on filtered datasets, meaning the censorship is baked in during training rather than applied afterward as a guardrail. Researchers have described these datasets as systematically embedding state-aligned narratives while omitting dissenting voices.
Conversation termination rules
According to operational guidance issued to AI companies, if users ask too many politically sensitive questions in succession, the systems must automatically terminate the conversation. This is not a passive refusal but an active shutdown mechanism triggered by pattern detection.
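The mechanism can be sketched as a simple stateful rule. The threshold and the flagging predicate below are assumptions for exposition, not values from any published guidance.

```python
# Hypothetical sketch of pattern-triggered conversation termination:
# the session is closed after a run of consecutive flagged prompts.
# THRESHOLD and the is_sensitive predicate are illustrative assumptions.
class Session:
    THRESHOLD = 3  # assumed cut-off; no number is published in the guidance

    def __init__(self):
        self.consecutive_flags = 0
        self.terminated = False

    def handle(self, prompt: str, is_sensitive) -> str:
        if self.terminated:
            return "SESSION_CLOSED"
        if is_sensitive(prompt):
            self.consecutive_flags += 1
            if self.consecutive_flags >= self.THRESHOLD:
                self.terminated = True   # active shutdown, not a per-prompt refusal
                return "SESSION_CLOSED"
            return "REFUSED"
        self.consecutive_flags = 0       # run broken by a benign prompt
        return "ANSWERED"
```

The design point the sketch captures is the difference between a per-prompt refusal (state unchanged) and termination (state latched: every subsequent prompt is rejected).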
What chatbots actually say
When Reporters Without Borders tested Chinese chatbots by asking about China's press freedom ranking (178th out of 180 countries in 2025), the responses were illustrative. DeepSeek "apologised" for not being trained to answer the question. Qwen acknowledged the ranking while adding government talking points about press freedom guarantees. Ernie characterised RSF as a "Western political instrument." These are not accidental responses. They reflect deliberate training and refusal architecture.
The AI labelling regime
China's AI labelling rules are arguably the most technically sophisticated piece of its regulatory architecture. On 7 March 2025 (publicly released 14 March 2025), the CAC, MIIT, MPS, and NRTA jointly issued the Measures for Labeling of AI-Generated Synthetic Content (State Council Information Office document Tongzi [2025] No. 2), accompanied by the mandatory national standard GB 45438-2025 (Cybersecurity Technology: Labeling Method for Content Generated by Artificial Intelligence). Both took effect on 1 September 2025.
Two types of mandatory labels
- Explicit labels: visible, user-perceptible markers including watermarks, disclosures, overlays, or audio cues that must appear on AI-generated text, audio, images, videos, and virtual scenes likely to cause confusion or misidentification.
- Implicit labels: machine-readable metadata embedded invisibly within file headers, persisting across distribution and traceable by authorities even after visible labels are removed.
The implicit labelling obligation under Article 5 is broader than explicit labelling. Implicit labels apply to all AI-generated synthetic content, while explicit labels apply only to specific content categories. The metadata must include "AIGC" identifiers, the company's unified social credit code, content IDs, and (where applicable) the citizen identification number of the producer. Digital watermarks are recommended but not mandated under Article 5.
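The required metadata can be pictured as a small structured payload. The JSON shape and field names below are illustrative assumptions; the normative encoding is defined in GB 45438-2025 and is not reproduced here.

```python
# Illustrative implicit-label payload. Field names and the JSON shape
# are assumptions for exposition; GB 45438-2025 defines the normative
# format for embedding this metadata in file headers.
import json

def implicit_label(producer_credit_code: str, content_id: str) -> str:
    payload = {
        "Label": "AIGC",                          # AI-generated-content identifier
        "ContentProducer": producer_credit_code,  # unified social credit code
        "ContentID": content_id,                  # per-item identifier for traceability
    }
    return json.dumps(payload, ensure_ascii=False)

print(implicit_label("EXAMPLE-CREDIT-CODE", "item-0001"))
```

Because the payload travels in the file itself rather than in platform UI, it survives reposting and re-upload, which is what makes the traceability requirement enforceable downstream.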
Three-tier classification
Platforms must classify content as confirmed AI-generated, possibly AI-generated, or suspected AI-generated, based on metadata verification and detection of explicit labels or other AI traces. Each classification level carries distinct labelling and metadata obligations.
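One way to read that rule is as a priority cascade over the detection signals. The mapping below is an interpretive sketch, not the regulation's own decision table.

```python
# Interpretive sketch of the three-tier platform classification. The
# mapping of signals to tiers is an assumption drawn from the rule's
# description, not an official decision table.
def classify(metadata_verified: bool, explicit_label: bool, ai_traces: bool):
    if metadata_verified:
        return "confirmed AI-generated"     # implicit metadata checks out
    if explicit_label:
        return "possibly AI-generated"      # visible label, metadata unverifiable
    if ai_traces:
        return "suspected AI-generated"     # no labels, but AI traces detected
    return None                             # no signals: no AI classification
```

Under this reading, verified implicit metadata always dominates, so a stripped visible label cannot downgrade content whose file-level provenance is intact.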
Operational requirements
Service providers must integrate labelling capabilities by design, include disclosures in user agreements, and retain generation logs for at least six months. Distribution platforms (social media, content aggregators, file-sharing services) must verify file metadata, add their own implicit labels, scan uploads, apply classification labels, and display prominent visible notices around AI-generated content. Removing or tampering with labels is explicitly banned.
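The six-month retention floor for generation logs translates into a simple purge rule. The 183-day constant below is an assumed conservative reading of "at least six months", not a figure from the Measures.

```python
# Sketch of a retention check for generation logs. 183 days is an
# assumed conservative reading of "at least six months"; the Measures
# state only the six-month floor.
from datetime import datetime, timedelta, timezone

RETENTION_FLOOR = timedelta(days=183)

def purge_eligible(created_at: datetime, now: datetime) -> bool:
    """A log entry may be purged only once the retention floor has passed."""
    return now - created_at > RETENTION_FLOOR
```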
Supporting technical standards (effective November 2025)
The TC260 (National Cybersecurity Standardisation Technical Committee) issued three additional standards covering generative AI: GB/T 45654-2025 (service security requirements), GB/T 45674-2025 (data annotation security specifications), and GB/T 45652-2025 (pre-training and fine-tuning training data security guidelines). All took effect on 1 November 2025.
By mandating explicit and implicit labelling at the file level rather than relying solely on platform UI markers, China has built provenance tracking infrastructure at national scale. International observers have characterised the labelling architecture as more detailed than the comparable EU AI Act provisions, an unusual position for a Chinese regulation that Western analysis often dismisses as a blunt instrument.
Digital IDs and centralised data control
The labelling regime fits within a broader data centralisation strategy. According to Carnegie Endowment research, China is rolling out a digital ID system for identity verification across AI platforms. The design reduces individual companies' direct access to user information by routing identity verification through a centralised state system, redirecting data flows toward the government rather than tech companies.
Currently, large platforms including ByteDance and Baidu hold extensive user data troves. The digital ID architecture would shift the state into the position of primary custodian for identity information across the AI ecosystem. The shift positions the state as the ultimate gatekeeper for both information and identity in China's AI infrastructure.
The academic and research sector
China's AI rules extend into academic research. In December 2023, the Ministry of Science and Technology issued regulations prohibiting the direct generation of funding application materials using generative AI. Any content generated using generative AI in research must be clearly labelled, the generation process must be explained, AI-generated citations cannot be used as original literature, and generative AI cannot be listed as a co-contributor to research outputs. These rules mirror debates in Western academic institutions but with a critical difference: they are enforceable national regulations with ministerial authority, not institutional guidelines.
The four eras of Chinese AI policy
Carnegie Endowment researchers have mapped China's AI policy evolution into four distinct periods:
- The Go-Go Era (2017 to early 2020): minimal regulation, maximum investment. The 2017 New Generation AI Development Plan set the goal of becoming the world's leading AI power by 2030.
- The Crackdown Era (2020 to late 2022): AI companies were caught in the broader CCP reassertion of party control over the technology sector that also captured Alibaba, Didi, and others.
- The Catch-Up Era (2022 to early 2025): post-ChatGPT anxiety motivated regulatory pragmatism. The Interim Measures of 2023 were designed to regulate without strangling capability development.
- The Crossroads Era (2025 to present): DeepSeek-R1's breakthrough in early 2025 demonstrated that the gap with US frontier models had largely closed. With confidence restored, more comprehensive regulation, more centralisation, and more ideological enforcement are being layered on top of the existing framework.
The unresolved tension: does control slow capability?
China's AI control regime carries an inherent tension: political censorship can make competitive AI harder to build. The Financial Times has reported that CAC's elaborate model reviews, requiring high rates of "appropriate responses" to politically sensitive queries, create meaningful friction in development. Training becomes slower when developers must filter datasets to remove politically sensitive material. Models fine-tuned to avoid certain topics often become less capable across the board, because the same reasoning pathways that generate nuanced political analysis also generate nuanced scientific, legal, and creative outputs.
This creates a structural disadvantage that government investment alone cannot fully offset. Some analysts argue that the political demands of the system may prove as decisive as US chip export controls in shaping the global AI competition. Others argue that DeepSeek's breakthrough, achieved partly through efficiency innovations that worked around hardware constraints, suggests Chinese engineers continue to find creative paths forward despite the constraints.
What businesses operating in China must know
For any organisation operating in or entering the Chinese AI market, the regulatory framework imposes clear requirements:
Register before launch
Any generative AI service accessible to Chinese users with public opinion properties or social mobilisation capacity must complete algorithm filing with CAC before going public. Provincial CAC processes apply for services calling already-filed models via API. There is no grace period: Shanghai and Zhejiang regulators have penalised and removed apps that launched without completing the required filing.
Label everything
From 1 September 2025, all AI-generated content must carry implicit labels in file metadata, with explicit labels required for content likely to cause confusion or misidentification. This applies to text, audio, images, video, and virtual scenes. Generation logs must be retained for at least six months. Distribution platforms must verify metadata, add their own labels, and apply prominent visible notices for confirmed and suspected AI content.
Build human moderation capacity
The regulatory framework explicitly states that the size of a company's human moderation team should match the size of its service. Content moderation is treated as a staffing requirement, not just a technical problem.
Conduct security and privacy assessments
Security self-assessments and (for models with public opinion or social mobilisation properties) regulatory security assessments are mandatory before launch. Privacy audits should align with the Personal Information Protection Law (PIPL), the Data Security Law, and the Cybersecurity Law, plus the TC260 standards effective November 2025.
Plan for the 2026 instruments
The Interim Measures for Anthropomorphic AI Interactive Services (in force 15 July 2026) cover chatbots, AI companions, and AI customer service agents that simulate human personality. Digital Virtual Human Services rules are progressing toward enactment. The 2026 Qinglang campaign carries expanded legal authority compared to its 2025 predecessor.
Expect enforcement
Qinglang campaigns are annual, escalating, and increasingly well-resourced. Beyond fines, consequences include app suspension, app store delisting, licensing revocation, and account termination, all of which can effectively end a service's commercial viability in the Chinese market.
Compliance FAQ
Do the Interim Measures apply to my non-Chinese AI service?
The Interim Measures apply to generative AI services accessible to the Chinese public. If your service is not accessible from China (whether by geographic blocking or by not being listed in Chinese app stores), the Interim Measures do not directly apply. If your service is accessible to Chinese users, the labelling, registration, and security assessment requirements apply regardless of where your company is incorporated.
What is the difference between filing and registration?
Algorithm filing applies to providers of foundation generative AI models with public opinion or social mobilisation properties, with substantial documentation requirements (security self-assessment, training data documentation, keyword lists, evaluation questions). Registration applies to applications that call already-filed models via API and is a simpler provincial-level process. Both lead to a published filing or registration number that must be displayed prominently to users.
How does GB 45438-2025 differ from the EU AI Act labelling rules?
EU AI Act Article 50 requires deployers of deepfake-generating AI systems to disclose AI generation, with limited artistic and creative exceptions. China's GB 45438-2025 mandates machine-readable (implicit) labels on virtually all AI-generated synthetic content and adds visible (explicit) labels for content likely to cause confusion. China's implicit labelling obligation is broader than the EU's, and the technical standard provides more detailed implementation guidance than the EU's current Code of Practice on AI-Generated Content (whose first draft was published 17 December 2025).
What happens if I miss the September 2025 labelling deadline?
The labelling rules are part of CAC's enforcement priorities under the 2025 and 2026 Qinglang campaigns. Consequences include rectification orders, content takedowns, service suspension, and account closures. Major Chinese platforms including Tencent, Douyin, RedNote, Weibo, and DeepSeek implemented compliance programmes ahead of the effective date, signalling that the regulator expects rapid implementation.
Does the Personal Information Protection Law (PIPL) also apply?
Yes. PIPL applies independently to AI services processing personal information. The Cybersecurity Law and Data Security Law also apply. The CAC's AI-specific instruments operate as additional layers on top of these baseline data protection statutes rather than replacing them.
What is the practical significance of the keyword quota system?
Companies must compile and maintain keyword lists across the standards framework's content categories (eight political and nine "discriminative") and submit these as part of their algorithm filing. The system shifts the operational burden of identifying suppressible content onto the provider. The lists must be kept current as the political environment evolves.
The bottom line
China's approach to controlling generative AI is unique in scale, speed, and intent. No other government has moved so comprehensively to regulate what AI can say, how it must label what it creates, who can build it, and how those builders must prove their compliance before going public. The framework is neither purely about innovation nor purely about repression. It is a calculated attempt to capture the economic benefits of generative AI while filtering political risk to CCP authority.
Whether that calculation succeeds long-term remains an open question. The friction between political censorship and AI capability is real, and the DeepSeek breakthrough demonstrates that Chinese engineers continue to find creative paths despite constraints. What is beyond dispute is that China has built the world's most detailed regulatory architecture for generative AI by April 2026, and that architecture is becoming a reference point that other governments are either adapting or consciously rejecting. The next decade of global AI governance will be shaped in significant part by how the rest of the world responds to the Chinese model.
For businesses, the practical message is straightforward. China is not a market where compliance can be retrofitted. Filing, labelling, security assessment, content moderation, and ongoing engagement with CAC enforcement priorities must be built into the product before launch, maintained continuously, and updated as new instruments enter force. The Interim Measures of 2023 are the foundation. The Labelling Measures and GB 45438-2025 are now operational. The Anthropomorphic AI Interactive Services rules are coming. The framework is consolidating, not loosening.
Last updated: April 2026. This article is educational content and is not legal advice. China's AI regulatory framework is consolidating with new instruments expected through 2026 including the Anthropomorphic AI Interactive Services Measures (in force 15 July 2026) and Digital Virtual Human Services rules (post-public-comment). Consult qualified counsel before making compliance decisions.