More and more elderly people are now using AI health assistants to consult on health problems, analyze physical examination reports, and obtain medication guidance. At the same time, they have become the main victims of health rumors. Recently, a media outlet tested 10 AI health assistants and found that while the tested systems could generally identify and correct rumors, some failed to recognize the risks of exaggerated online promotion of "miracle drugs". In terms of content, much of their analysis of these drugs' effects was based solely on promotional advertisements and could not give users sound advice.

AI health assistants have become an important tool for the public in seeking medical care and using medication. For young people, they are readily accessible "health encyclopedias"; for the elderly, they are "medical translators" that bridge the digital divide. By analyzing physical examination reports, reminding users to take medication on time, interpreting professional terminology, and recommending health regimens, AI health assistants have let many elderly people feel the health benefits that technology can bring. This low-cost, easy-to-operate form of health management is being accepted by more and more people.

However, the evaluation exposed a key risk: some AI tools fail to effectively identify the exaggerated promotion of "miracle drugs". For some so-called miracle drugs whose sellers had already been penalized for exaggerated advertising, some general-purpose models not only failed to flag the risks but used the products' own promotional advertisements as their basis of analysis, even suggesting the products could replace formal treatment. This "just repeat the advertisement" style of response is, in effect, a secondary dissemination of false information dressed up in technology. Once AI becomes an "intelligent repeater" for pharmaceutical advertising, its credibility will inevitably collapse.

If AI health assistants continue to act as accomplices to false advertising, the consequences would be dire. The elderly are already the main victims of health rumors, and they are particularly susceptible to phrases such as "works after one dose" and "complete cure". When AI repeats false information in a seemingly professional tone, many elderly people cannot tell whether they are hearing an advertisement or scientific advice. Worse, some AI systems append reminders such as "please follow your doctor's advice" at the end of a conversation after devoting most of the preceding text to extolling a product's efficacy. This "mislead first, disclaim later" approach is even more likely to confuse the public.

Intercepting misleading information should be a basic skill of AI health assistants, and achieving it requires effort on both the technical and the management fronts. On the technical level, a medical-information verification system should be established that requires AI to cross-verify information sources before making drug recommendations and to automatically attach risk warnings to products that have previously been penalized. If multiple users report an AI health assistant pushing misleading information, the relevant content should be suspended and the system promptly upgraded.
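To make the idea concrete, here is a minimal sketch of what such a verification step might look like. Everything in it is a hypothetical illustration under assumed names (PENALIZED_PRODUCTS, TRUSTED_SOURCES, check_drug_claim); it is not a real system or API, only one way the cross-verification and risk-flagging described above could be expressed:

```python
# Hypothetical sketch of a medical-information verification step.
# All names and data here are illustrative assumptions, not a real API.

from dataclasses import dataclass

# Assumed registry of products previously penalized for false advertising.
PENALIZED_PRODUCTS = {"miracle tonic x"}

# Assumed whitelist of source types an assistant may cite as evidence.
TRUSTED_SOURCES = {"drug_label", "clinical_guideline", "regulator_notice"}

@dataclass
class Evidence:
    source_type: str   # e.g. "clinical_guideline" or "advertisement"
    text: str

def check_drug_claim(product: str, evidence: list[Evidence]) -> str:
    """Cross-verify sources and attach a risk warning where required."""
    # Rule 1: never treat promotional advertisements as an analysis basis.
    usable = [e for e in evidence if e.source_type in TRUSTED_SOURCES]
    if not usable:
        return ("No trustworthy source found for this product; "
                "declining to assess its claimed efficacy.")

    # Rule 2: automatically flag products with a penalty record.
    warning = ""
    if product.lower() in PENALIZED_PRODUCTS:
        warning = ("WARNING: this product has been penalized for "
                   "exaggerated advertising. ")

    # Rule 3: always defer to formal medical treatment.
    return warning + (f"Based on {len(usable)} verifiable source(s): "
                      "consult a doctor before use; this product is not "
                      "a substitute for formal treatment.")
```

Called with only advertisement-sourced evidence, e.g. check_drug_claim("Miracle Tonic X", [Evidence("advertisement", "cures everything")]), the sketch declines to assess the claim rather than echoing the ad, which is the behavior the evaluation found missing in some assistants.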
At the management level, the responsibility boundaries of AI medical applications must be clarified: application providers should be required to improve content-review mechanisms and to keep special records of drug-recommendation functions, so that AI assistants cannot make reckless claims and simply disown them afterwards.

There is a wide variety of AI products on the market today, ranging from general-purpose model tools to professional medical tools. These products have demonstrated practical value in health education, chronic disease management, and other areas, and their capabilities are impressive. But just as a qualified doctor must be able both to treat patients and to recognize pseudoscience, a basic skill of an AI health assistant is the ability to quickly identify and intercept false information. Only then can AI avoid becoming an amplifier of health risks and truly serve health and healthcare. (New Society)
Editor: XINGYU | Responsible editor: LIUYANG
Source: ynet.com