On November 17th, a British website published an article titled "Artificial Intelligence: The New Rules of War". Discussion of artificial intelligence (AI) in warfare has stirred a dystopian fear. Military commanders hope for a digital force that is faster, more accurate, and able to fight beyond the limits of human command. But there are also concerns that as AI takes an increasingly central role in warfare, commanders could lose control as conflicts escalate rapidly and without moral or legal oversight. Understanding and reducing these risks is a military priority of our time.

One consensus gradually forming in the West is that the decision to launch nuclear weapons should never be delegated to AI. UN Secretary-General Guterres has gone further, calling for a comprehensive ban on fully autonomous lethal weapon systems. It is crucial that regulation keep pace with technological development. Yet amid the thrilling scenarios inspired by science fiction, it is easy to lose sight of what is actually likely to happen.

As researchers at Harvard University's Belfer Center for Science and International Affairs have pointed out, AI optimists often underestimate the challenges of deploying fully autonomous weapon systems. AI's battlefield capabilities may be greatly exaggerated. A leading proponent of this view, Professor Anthony King, argues that AI will not replace humans but will be used to sharpen military insight. Although the character of war is changing and remote technology is improving weapon systems, he insists that "the complete automation of war is simply a fantasy."

Currently, none of AI's three main military applications can be considered fully autonomous. AI is being used for planning and logistics; for cyber warfare (including sabotage, espionage, hacking, and information operations); and, most controversially, for weapon targeting. The last of these has already seen use on the battlefield in Ukraine.
Kyiv's military uses AI software to guide drones, enabling them to evade Russian jamming as they approach sensitive targets. One interesting view holds that among those shocked and frightened by AI's use in warfare are people unfamiliar with the brutal but long-standing realities of military practice. Some voices opposing AI in warfare object less to autonomous systems than to war itself.

AI companies have shifted markedly in their attitude toward military applications of their products. In early 2024, OpenAI explicitly prohibited the use of its AI tools for warfare, but by the end of that year it had signed an agreement with Anduril to help that company shoot down drones on the battlefield. Although this falls short of a fully autonomous weapon, it is unquestionably a battlefield application of AI, and it marks a significant shift: technology companies can now publicly acknowledge their ties to military systems.

Opponents of AI warfare fall into several camps. One group simply rejects the claim that more precise targeting means fewer casualties, or argues that it will only lead to more wars. Consider the first phase of the drone war in Afghanistan: as the cost of carrying out drone strikes falls, can we really say that killing has been reduced, or does it merely mean that every dollar spent can do more damage? Another group of critics knows the reality of war well and has very specific complaints about the technology's fundamental limitations. For example, Missy Cummings, a former US Navy fighter pilot and professor of engineering and computer science at George Mason University, states bluntly that large language models are particularly prone to serious errors in military scenarios.
We should keep questioning the safety of AI warfare systems and ensure that political leaders are held accountable for them. We should also be skeptical of the "extremely ambitious promises" some technology companies make about AI's potential on the battlefield. The new field of defense technology offers the military both opportunities and risks. The danger is that, amid an AI arms race, these emerging capabilities may not receive the scrutiny and debate they urgently need. (New Society)
Editor: QuanYi | Responsible editor: Wang Xiaoxiao
Source: cankaoxinxi.com