Think Tank

Platforms proactively disclose their algorithms to curb the spread of online rumors

2025-12-15   

According to Economic Daily, on December 10 Huolala officially published its algorithm rules for estimated arrival time and loading punctuality, aiming to make the freight industry's time rules more transparent and to improve the "time experience" of drivers taking orders on the platform. Before this, drivers competing for orders were prone to anxiety: they did not understand the platform's time rules, yet wanted to use their time as efficiently as possible. With the rules made transparent, drivers no longer have to guess when accepting orders. One driver admitted that he used to worry that his punctuality record would affect his ability to grab orders, and felt heavy pressure whenever he hit a traffic jam or a customer changed the time at short notice. Now that he knows punctuality has no bearing on order dispatching, he no longer has to rush and simply communicates clearly with users.

Huolala is not the first company to publicly disclose its algorithm rules. In April this year, TikTok became the first to publish the principles of its recommendation algorithm and the recommendation logic behind user behavior on its new safety and trust center website. In August, the Beijing Municipal Cyberspace Administration, following the requirements of the Regulations on the Management of Internet Information Service Algorithm Recommendation, guided a first batch of six major platforms, including TikTok, Baidu and Meituan, in publicizing their algorithm principles and rules.

These disclosures take direct aim at the long-criticized algorithmic "black box", responding to social concerns and taking an important step toward a transparent, fair, and accountable algorithm governance system. Resolving algorithm disputes through algorithmic transparency is a landmark governance experiment that Chinese internet companies are pursuing at the moment of the AI explosion. In an era of rapid technological evolution, openness and transparency can return algorithms from "invisible forces" to "understandable tools", demystifying artificial intelligence and reducing the noise in public discussion.

Internet "black talk" is not caused by algorithms

Algorithms, computing power, and data are known collectively as the three elements of artificial intelligence, and they are increasingly becoming a ubiquitous new quality productive force. Yet in recent years the controversy surrounding algorithms has hardly ever stopped: while the public is astonished by the enormous role algorithms play, it has also developed deep doubts about them. People are eager to master the rules of algorithms and put them to their own use, yet at the same time they worry that algorithms are being abused, eroding individual rights and causing harmful social consequences, which in turn produces the value judgment that algorithms are "evil". Is the algorithm amplifying emotion and confrontation? Is it creating an "information cocoon"? Is it causing "group polarization"? Is it discriminating? Is it manipulating commercial and social opportunities? There is no doubt that, in reality, some internet companies have governed laxly while pursuing advantage in business competition, triggering regulatory action or public controversy. This is precisely why algorithms should fall within the scope of regulation.
But objectively speaking, many doubts about algorithms also stem from misunderstandings born of information asymmetry. Lacking knowledge of how algorithms operate, the public speculates from the results it sees, and controversy follows. From this perspective, the publication of algorithm rules by Huolala, TikTok and other platforms opens a channel of communication with the public, an approach worth trying as a way to resolve such conflicts.

For example, netizens often ask whether the spread of various kinds of "black talk" on short-video platforms is caused by algorithms. Believing that certain words will be restricted by the platform, streamers uniformly substitute slang: "how much money" becomes "how many mi", "ten thousand" becomes "W", "where to buy" becomes "where to M", and so on. In response, a TikTok vice president publicly stated that the platform has no rule restricting the word "money", and that the claim that "money" must be said as "mi" is a rumor circulating among users and in operation tutorials. At a recent algorithm media communication conference, TikTok further confirmed that the platform does not throttle traffic for words such as "money". The rumor may have started when streamers were penalized for violating rules during live broadcasts; not knowing the specific reasons, they invented lists of supposedly banned words on their own. TikTok has also released a Standardized Expression and Communication Manual, which makes clear that non-standard substitutes such as pinyin, emoji and symbols are discouraged and that creators should use language correctly. It specifies that words such as "money", "RMB", "ten thousand", "yuan", "group buying" and "free" can be used normally, while false advertising (such as fabricating promotional activities or baiting traffic with "free") will be punished.

This is a typical case of information asymmetry, in which the public misunderstands how the algorithm allocates traffic. When a platform discovers such a misunderstanding, it should explain it to the public and use transparency mechanisms to break the spread of misinformation. The popularity of such "black talk" shows that creators hold a "black box" imagination of traffic distribution, and that the cost of dispelling it is high; the work of algorithm transparency still has a long way to go.

In fact, the algorithm itself bears no original sin. It must be said that, on the internet, much of the public's reading of algorithms is misreading based on personal experience. For example, Oxford named "rage bait" its word of the year for 2025, referring to the practice of using inflammatory topics on social media to provoke users' anger and whip up traffic. Some speculate that major social media platforms deliberately build "rage bait" into their algorithm rules to boost traffic. In fact, although creators have a strong incentive to use rage-bait content to gain traffic, social media platforms cannot welcome such toxic traffic, which runs against their fundamental and commercial interests. Anyone who has run a chat group can easily understand this: anger may raise group activity, but it also drives members to quit.
Advertisers, likewise, dislike a placement environment marked by intense emotional confrontation and uncontrollable content. Almost every social media platform in the world pursues a positive, friendly, and diverse community atmosphere. And throughout human history, religious wars, political persecution, and group conflict never waited for algorithms to appear. Reducing the complex conflicts of human nature and society to "technological sin" is, to some degree, possible only because technology does not defend itself and has become the most convenient target of attribution.

Still, the public concern behind the controversies over algorithms, and behind high-frequency terms such as "information cocoon" and "group polarization", deserves attention. The use of these concepts in public discussion often departs from strict definitions and is more a subjective expression of experience; as the internet quip puts it, "when in doubt, quantum mechanics; when attribution is unclear, information cocoon". Leave aside the fact that academia has not even reached consensus on whether the "information cocoon" truly exists; even a phenomenon that resembles one has many causes and is essentially a reflection of social differences in digital space. Rather than saying algorithms narrow the world, it is more accurate to say that humans are more easily pulled along by their own emotions in the information flood. Algorithms push slices of the world, not the world itself. Even without algorithms, humans would still reinforce their attention to the topics they like and agree with. On the contrary, breaking out of this ever-reinforced subjective cognition may in the future depend precisely on algorithms.

Technology is neither inherently good nor destined to be evil, but the companies that master it must face and understand the public's "fear" of it. In essence, this is a fear of power asymmetry. The public worries that algorithms will become an invisible dominant force shaping commercial and social opportunity: merchants and creators worry that algorithms decide exposure and traffic; ordinary users worry that algorithmic pricing and profiling will constrain their consumer choices, or that algorithms will discriminate against them. What the public questions is not the algorithm itself, but the algorithm that operates in opacity.

Around the world, internet governance has always been a shared challenge. Whether in the legislative contest between Europe and the United States over platform responsibility, or in developing countries' vigilance toward technological risk, one fact stands out: the stronger the technology, the more society needs to understand and regulate it. In this context, algorithmic transparency carries multiple meanings. It is reflected not only in respect for users' right to know and to choose, but also in the building of an interactive mechanism; it is not a one-way "explanation" but an attempt, by opening up algorithmic concepts and governance thinking, to absorb social feedback, feed it back into system design, and gradually form a healthy ecology of co-construction and co-governance. (Beijing News)

Editor: Wang Shuying   Responsible editor: Li Jie

Source: Beijing News


