
Expert Interpretation | Giving AI-Generated and Synthetic Content a Digital Label: How the New Labeling Regulations Reshape Digital Trust

2025-09-08   

In March 2025, the Cyberspace Administration of China, together with three other departments, jointly issued the Measures for Labeling Artificial Intelligence-Generated and Synthetic Content (hereinafter, the "Labeling Measures"), along with the supporting mandatory national standard, Cybersecurity Technology: Labeling Method for Artificial Intelligence-Generated and Synthetic Content (GB 45438-2025, hereinafter, the "Labeling Standard"). Both took effect on September 1, 2025. This coordinated package not only provides clear, operable technical specifications and a mandatory legal baseline for labeling generative AI content; it also builds a collaborative governance loop of "labeling at the source, review at distribution, verification in dissemination, and declaration by users" through a precise division of responsibilities among all parties across the chain of content generation, dissemination, and use, turning principled requirements into executable governance practice.

1. The introduction of the Labeling Measures and the Labeling Standard marks an important step in bringing artificial intelligence under the rule of law in China. It is both a timely response to the development of domestic industry and a Chinese contribution to the exploration of global AI governance. Viewed globally, regulating AI-generated content has become an international consensus and a shared challenge.
Whether it is the mandatory labeling requirements of the EU's Artificial Intelligence Act, the focus of California's series of laws on digital provenance and AI watermarking, or the explicit labeling obligations imposed on operators by South Korea's Basic Act on Artificial Intelligence, all indicate that countries recognize that the opportunities and risks this technological revolution brings to content production go hand in hand. Although countries differ in their specific technical paths and regulatory priorities, the core goal is highly consistent: to use labeling as a key tool to safeguard the authenticity and credibility of information and resist the risks of technological abuse, while still stimulating innovation. Against this backdrop, China's new regulations demonstrate a more systematic and refined governance mindset. They do not start from scratch; rather, they deepen, refine, and technically upgrade the principled labeling provisions of the Provisions on the Administration of Deep Synthesis of Internet Information Services issued in 2022. The core advance of the new regulations is that they no longer stop at principled requirements. Through a mandatory national standard, they provide clear, unified, and operable implementation rules (such as the duration and size of explicit labels, and the elements an implicit label must contain) and attach digital labels to AI-generated content, achieving a transition from governance by principle to governance by technology. Their deeper significance lies in the attempt to find the golden balance point between technological innovation and standardized governance.
Guided by the idea of technology-empowered governance, the rules seek effective oversight with minimal intervention in content production: by embedding labels at the points of generation and dissemination, interference with user experience and the creative process is kept to a minimum. This marks a refinement of China's AI governance model, shifting from passive response to active construction, and contributes a forward-looking, practical "Chinese solution" to global AI governance.

2. The Labeling Measures lay out a standardized path for labeling AI-generated and synthetic content. Their core insight is that they do not assign responsibility broadly to any one party; instead, they precisely define roles and divide responsibilities among the key participants along the content lifecycle, from production to consumption. Specifically, the source (the provider of generation and synthesis services) bears a mandatory labeling obligation and must apply both explicit and implicit labels so that content is marked from the start. The gateway (internet application distribution platforms) is responsible for front-end review: when an app is listed, the platform must check whether it has labeling capability and keep non-compliant AI applications out. The circulation layer (providers of online information dissemination services) bears the responsibility of verification and notification, and must check content labels or user declarations.
Where content is generated or synthetic, the public should be clearly alerted. The terminal (the user) bears the responsibility of active declaration: when publishing AI-generated or synthetic content, users should declare it proactively and must not deliberately pass off the false as true. This institutional design goes beyond simple technical compliance and is shaping a new digital social contract: whoever enjoys the convenience of AI creation must also assume the corresponding responsibilities and obligations. Digital labels on AI-generated and synthetic content give the public a convenient way to distinguish the authentic from the synthetic. This is not only a necessary measure for regulating the development of the AI industry but also a social cornerstone for building a trustworthy digital ecosystem, marking a shift from passively adapting to technological development to actively shaping its standardization. These measures also play a positive role in promoting AI innovation and its convenient application.

3. The standardized development of artificial intelligence requires the active participation of all parties; it cannot be achieved by a single policy or a single force. It is a profound transformation that requires the participation of society as a whole. The introduction of the Labeling Measures is a concentrated embodiment of the concept of multi-party collaborative governance. Standardized development must rest on the three pillars of individuals, enterprises, and the state, none of which is dispensable. From the individual perspective, each of us is transitioning from a passive recipient of information to an active subject of responsibility.
This means continuously improving our digital literacy: not only learning to recognize AI-generated and synthetic content, but also observing the legal and ethical obligations to declare and label it when using and disseminating it. The collective awareness and responsible behavior of hundreds of millions of netizens form the broadest and deepest social foundation for a clean and healthy cyberspace. From the enterprise perspective, technology providers and platform operators have evolved from mere pursuers of technology into key gatekeepers. Embedding compliance capabilities into product design and fulfilling the responsibility of reviewing platform content is no longer an added burden, but a core competitive advantage for building long-term reputation and winning market trust. Enterprise self-restraint and innovation leadership are among the core engines driving the healthy, sustainable development of the industry. From the national perspective, the government's responsibility is to be a good rule-maker and ecosystem cultivator. By issuing regulations and standards such as the Labeling Measures and the Labeling Standard, which combine principle with operability, it has drawn a clear track for the standardized development of AI; at the same time, by investing in infrastructure, promoting research and development, and participating in global dialogue, it cultivates fertile ground for a prosperous and well-regulated industry. The state's forward-looking planning and standardized governance are the fundamental guarantee for balancing innovative development with security and reliability. The future of artificial intelligence will be shaped by the choices we make together today.
Only when individual awareness, enterprise self-discipline, and national responsibility resonate and join forces can we ensure that this disruptive technology is truly trustworthy, controllable, and beneficial, ultimately serving the development and progress of human society. (Xinhua News Agency)

Author: Zou Xiaoxiang, Director of the Institute of Information Technology, China Academy of Cyberspace Research

Editor: Luo Yu | Responsible editor: Wang Xiaojing

