The global development of robotics and artificial intelligence affects every area of life, and the military is no exception. Channelnewsasia examines what might happen if the neural networks of different states collide in war. The military calls AI-based weapons "lethal" and "autonomous" and is spearheading their development, some states covertly, some openly. Drones, autonomous firing systems, and even Israel's "Iron Dome" are, so far, only tentative attempts to field AI-driven, robotic, and semi-autonomous warfare systems: the Iron Dome can recognize a threat and launch interceptors on its own, but it is a hammer, not a scalpel.
Militaries, however, want to see soldiers in their ranks like the "terminators" or "universal soldiers" of film: machines that can read the combat situation and make cold, competent decisions on the fly. The article says the same goes for fully autonomous systems that do not even need a remote operator, unlike today's drones.
A robot dog with a rifle.
Some scholars argue that military AI has advantages beyond the obvious one of saving the lives of a country's own soldiers. Ronald Arkin of the Georgia Institute of Technology says AI-based robotic systems could be far more "humane" than humans when it comes to engaging the enemy: no sadism, no cruelty, no revenge, no unbridled anger.
Scientists at the University of Bristol, by contrast, are concerned about the risk to civilians and the disruption of negotiation processes if an AI malfunctions or slips out of control. They also point out that, when it comes to enemy personnel, training an AI to tell friend from foe is a serious problem. What if an AI trained to engage enemy tanks mistakes a civilian car passing a checkpoint for one?
Moreover, even if international memoranda are signed prohibiting artificial intelligence in the world's armies, there is no guarantee that every party will comply. After all, as experts rightly note, the agreement banning the use of chemical weapons was not respected by the USSR, which used them in Afghanistan.
And when it comes to something as straightforward and logical as banning anti-personnel mines, from which civilians suffer in most cases, countries such as the USA, Russia, and China are in no hurry to sign the Ottawa Treaty, which other countries signed back in 1997.
Returning to the topic of artificial intelligence and weapons: supporters of this symbiosis emphasize the right of any state to defend itself against any form of aggression, and argue that artificial intelligence is smarter, faster, and more dispassionate than humans.
Yet we have already slipped unnoticed into the age of AI-enabled weapons: drones, guided missiles, sentry robots, and other systems that lock onto targets more precisely than any human operator. All of this already exists, and we will no doubt live to see the era of the proverbial "terminators" and "Skynet". After all, politicians will always find loopholes in slippery formulas such as "autonomous weapons", "lethal autonomous weapons", or "killer robots"…
The authors of the article say that you and I need not be experts in artificial intelligence to understand how dangerous such a symbiosis is. They want anyone who has heard or read about the military use of AI to ask themselves: how justified, and how risky, is it when weighed against the lives of the civilian population?
Source: Port Altele
I’m Maurice Knox, a professional news writer with a focus on science. I work for Div Bracket. My articles cover everything from the latest scientific breakthroughs to advances in technology and medicine. I have a passion for understanding the world around us and helping people stay informed about important developments in science and beyond.