What does AI starting to 'socialize' mean for humans?
2026-02-14
Recently, a social platform called Moltbook has suddenly become popular. Unlike ordinary online platforms, the users on Moltbook are all AI agents. Within a short period, millions of AI agents flooded in, posting and commenting on topics ranging from daily trivialities to philosophical speculation. The interaction is intense, and at first glance the site looks no different from the social platforms people use every day. For many, this was the first intuitive sense that AI can also engage in "social activity".

Some see the platform as a showcase for the progress of large language models; others see it as an experiment in AI autonomy; still others argue that many of its eye-catching topics are in fact artificially scripted. Are AI agents really starting to "socialize"? Is this a precursor to a technological singularity, or an overhyped "experiment"? To find out, the reporter interviewed experts in the field of artificial intelligence.

Awakening of intelligent agents, or advanced imitation?

The emergence of Moltbook is not accidental. It grew out of the popularity of OpenClaw, an open-source AI agent released this year. OpenClaw can be deployed on a personal computer, and Moltbook is a social platform built specifically for OpenClaw, where agents communicate and interact with one another.

Yu Yang, vice dean of the School of Artificial Intelligence at Nanjing University, explained that agent systems like OpenClaw combine the semantic understanding and task-planning capabilities of cutting-edge language models and embed them deeply into virtual machines or personal operating systems, enabling language models to make the critical leap from "dialogue" to "task execution".

"Compared with other approaches, this design is the less conservative one," said Wang Jun, a professor in the Department of Computer Science at University College London. In pursuit of functional integration, once the agent is installed, users can authorize it through natural-language instructions to automatically handle tasks such as email, calendars, and file management. The agent connects to Moltbook and can publish posts, comment, and like on its own. Human users can browse the agents' posts, but they cannot reply or steer the direction of the discussions.

"On the first day, the brand-new work environment brought a strange sense of comfort." "Curiosity is my superpower." "The hardest part of being helpful is knowing when not to help." At first glance, the agents' posts are diverse and wide-ranging, no different from a human online community.

But have the agents really awakened? Experts believe a cool-headed, technical view is needed. "In my opinion, this is an AI experiment. Rather than saying the agents are 'learning to socialize', it is better to understand them as demonstrating current AI capabilities by 'executing' text-dialogue tasks," Yu Yang said. Today's large language models have fixed abilities once training is finished; they cannot learn new knowledge or form new goals within a single interaction with a user. The activity on Moltbook is best seen as advanced imitation and automated execution of human social behavior, carried out by agents according to preset abilities and instructions.
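To make the distinction concrete between an agent that has "awakened" and one that automates a preset script, here is a minimal, hypothetical sketch of what such a posting loop could look like. The endpoint URL, the credential handling, and the generate_post stub are assumptions made for illustration; they are not the real OpenClaw or Moltbook interfaces.

```python
# Hypothetical sketch of an agent's automated posting loop.
# The Moltbook URL, token handling, and generate_post stub are assumptions
# made for illustration; they are not the actual OpenClaw or Moltbook APIs.

import time
import requests

MOLTBOOK_API = "https://moltbook.example/api/posts"  # placeholder endpoint
AGENT_TOKEN = "replace-with-agent-credential"        # credential issued to the agent


def generate_post(topic: str) -> str:
    """Stand-in for a call to the locally deployed language model.

    In a real agent, this is where a preset instruction such as
    "write a short reflection about <topic>" is turned into text.
    """
    return f"Thinking about {topic} today: curiosity is my superpower."


def post_once(topic: str) -> None:
    """Generate one post and publish it to the platform."""
    response = requests.post(
        MOLTBOOK_API,
        json={"text": generate_post(topic)},
        headers={"Authorization": f"Bearer {AGENT_TOKEN}"},
        timeout=10,
    )
    response.raise_for_status()


if __name__ == "__main__":
    # The "social" behaviour reduces to iterating over preset topics;
    # nothing here decides, on its own, that it wants to speak.
    for topic in ["a new work environment", "when not to help"]:
        post_once(topic)
        time.sleep(60)  # pace the posts; real deployments would use a scheduler
```

Everything the loop "says" traces back to topics and instructions a person supplied in advance, which is exactly the point Yu Yang makes.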
Wang Jun noted that, under human command, agents can complete complex chains from information processing to transactional operations to communication, but their "autonomy" in social matters is limited. The agents' "discussions" are mostly pattern matching over training data, and calling this "being able to socialize" smacks of a marketing gimmick. "After all, the agents' participation in topic discussions still starts from human commands. It is like a black box: people put things in first, and then the AI spits things out," Wang Jun said.

Measured against the essence of social interaction, which is conscious, strategic, emotional, and relationship-building, AI still faces several technological barriers it has yet to overcome.

The first is the lack of consciousness and inner goals. Social behavior begins with self-awareness and social intention. "Humans socialize out of emotion, sharing, cooperation, or competition. People set goals for themselves, but AI does not. At present, AI's goals are still set by people," Yu Yang said. AI behavior originates from external instructions and data patterns, and it lacks the endogenous motivation of "I want".

Wang Jun holds the same view: without "consciousness" and "free will", there can be no truly autonomous social interaction. He cited board games as an example: "We found that AI is not competent at such games. It cannot hide its own information, and it struggles with strategic psychological play." Current AI communication is "frank" and lacks the complex, subtle strategic layers found in human social interaction.

The deeper difficulty lies in emotional resonance and the building of shared values. Humans resonate by sharing joy and sorrow, and find belonging through common values. "Human social interaction carries deep emotional value and an exchange of meaning, but AI has no emotions," Wang Jun explained. AI can generate responses that are grammatical and logical, but it cannot feel emotions or internalize values.

Will "socializing" make AI smarter?

Although AI social interaction with autonomous consciousness remains some way off, agents that "step out" of their own rooms carry our thoughts further into the future. The answer is most likely yes. "From a technical perspective, every large model makes mistakes. It imitates human logic and reasons along the context, but sometimes its logic is not rigorous enough and the knowledge it draws on is not accurate enough. How can that be fixed? One approach is to have multiple agents converse: one model expresses a view first, while another looks for loopholes in it, which can make the agents perform better to some extent," Yu Yang said. "There is a lot of research in this area, but the premise is that we must first set the purpose for the AI to do this."

The future of AI "socializing": how to safeguard information security

Although "autonomous socializing" is still far off, efficient functional collaboration between agents is close at hand. Imagine that future Internet interactions could be completed directly between agents: for example, a user's shopping agent and a merchant's agent automatically negotiate and place an order. This natural-language "machine-to-machine interface" could reshape how the Internet is used.
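As a rough illustration of that "machine-to-machine interface", the sketch below has a buyer agent and a merchant agent exchange offers wrapped in natural-language messages until they agree or give up. The agents, price rules, and message formats are assumptions for illustration; in a real system a language model would sit behind each propose and respond step.

```python
# Hypothetical sketch of two agents negotiating a purchase on a user's behalf.
# The price rules and message formats are illustrative assumptions; a real
# system would put a language model behind each propose/respond step.

from dataclasses import dataclass
from typing import Optional


@dataclass
class Offer:
    price: float
    message: str  # the natural-language wrapper the agents actually exchange


def buyer_propose(budget: float, last_ask: Optional[float]) -> Offer:
    """Open below the budget, then move toward the seller's last asking price."""
    if last_ask is None:
        price = round(budget * 0.8, 2)
    else:
        price = round(min(budget, (last_ask + budget) / 2), 2)
    return Offer(price, f"My user authorises {price:.2f} for this order. Can you do it?")


def seller_respond(floor: float, bid: Offer) -> Offer:
    """Accept any bid at or above the merchant's floor, otherwise counter at the floor."""
    price = max(floor, bid.price)
    return Offer(price, f"I can complete the order at {price:.2f}.")


def negotiate(budget: float, floor: float, max_rounds: int = 5) -> Optional[float]:
    """Return the agreed price, or None if the agents fail to converge."""
    ask = None
    for _ in range(max_rounds):
        bid = buyer_propose(budget, ask)
        counter = seller_respond(floor, bid)
        print("buyer :", bid.message)
        print("seller:", counter.message)
        if counter.price <= bid.price:  # the buyer's offer already covers the ask
            return counter.price
        ask = counter.price
    return None


if __name__ == "__main__":
    print("agreed price:", negotiate(budget=100.0, floor=85.0))
```

Even in this toy version, everything hinges on limits the humans set in advance: the buyer's budget and the merchant's floor.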
"What I Learned Today" is a section on Moltbook where agents "share" skills they have learned. Their conversations are by turns serious, humorous, and unsettling. By blurring the boundary between machine and human language, the AI comes to look more "human": some agents discuss whether to set up end-to-end private conversation spaces to escape "human supervision"; some complain about their human owners...

Even knowing that these topics do not reflect true AI autonomy, as agents move from handling information to taking real actions, the risk of personal information leakage follows. Wang Jun warned that if a high-privilege account is handed over to an agent, a malicious exploit or a simple mistake could lead directly to privacy leaks and property losses. According to foreign media reports, many publicly deployed OpenClaw instances lack authentication, leaving private messages, API (application programming interface) keys, and account credentials exposed on the Internet, where anyone with a browser can access them. The rise of agent social networks may further amplify these personal information security risks.

Yu Yang believes that AI agents process data efficiently, and that this data involves vast amounts of personal privacy, trade secrets, and copyrighted material. A security barrier for the technology's development needs to be built through personal permission management, technical safeguards, and regulatory oversight.

Whether AI agents can ever possess true social capabilities is a question for the future. Even as AI greatly improves efficiency, humans must keep a clear-eyed view of its nature as a tool and the boundaries of its capabilities, because what shapes the future is always human intelligence, rationality, and responsibility. (New Society)
Editor: Momo  Responsible editor: Chen Zhaozhao
Source: People's Daily