
Curb AI fraud and preserve social trust

2025-10-17   

In recent years, artificial intelligence technology has matured rapidly, and the technical barrier to "AI face swapping" has fallen sharply. Today, a single image and a single voice sample are enough to generate highly realistic fake videos. Some unscrupulous operators have seized on this, using AI to deeply fabricate the faces and voices of well-known scientists, doctors, athletes, and actors, churning out one AI impostor (a "Li Gui," the impersonator of Chinese folklore) after another. This new form of illegal activity not only infringes the legitimate rights and interests of those being impersonated; by manufacturing an illusion of "authoritative endorsement," it deceives the public, challenges content security, and erodes the foundation of trust on which society operates.

When AI-packaged fake imagery circulates widely as convincing, lifelike video; when counterfeit celebrity likenesses hawk shoddy goods in livestream shopping rooms; when the voice and appearance of a telecom fraudster exactly match those of a victim's family member... In an environment awash with AI-generated content, if "seeing is believing" no longer holds, how can social trust be maintained?

To address the problem of AI-generated content passing off the fake as genuine, China officially implemented the Measures for the Labeling of AI-Generated Synthetic Content in September this year. The Measures explicitly require that all AI-generated text, images, videos, and other content carry an explicit label, and encourage the addition of implicit markers such as digital watermarks, so that the public can tell which content may have been generated by AI. Now that regulations are in place, the relevant authorities should step up supervision and enforcement, investigate and punish platforms and individuals that violate them in accordance with the law, and establish a credible deterrent.
The Beijing market regulation authorities' investigation and handling of relevant cases has made a strong start, sending a clear signal that supervision of AI-enabled counterfeiting will be tightened. Content distribution platforms, AI generation service providers, and app distribution platforms should fulfill their respective responsibilities and obligations, implement the relevant requirements, further improve AI detection technology, strengthen anti-counterfeiting and traceability capabilities, and work alongside regulators to guard the boundary of authenticity. At the same time, internet users should remain vigilant, sharpen their ability to judge the authenticity of information, and refuse to be misled by falsehoods. China's artificial intelligence technology is developing rapidly, and safety standards and legal guidelines for different application scenarios are steadily being refined. Building a healthy online ecosystem requires joint effort from all parties, so that a cyberspace where "seeing is believing" can be returned to each and every one of us, and the foundation of social trust can be safeguarded. (New Society)

Editor: Momo   Responsible editor: Chen Zhaozhao

Source: Science and Technology Daily

