May 19, 2025
Science

https://www.xataka.com/magnet/todas-poderas-mundiales-solo-se-ha-negado-a-firmar-atrabajo-clave-que-ia-no-controle-boton-jojo-nuclear



If the future of warfare is shaped by the development of technology and artificial intelligence, the kind of ‘autonomous war’ we see in movies like ‘Terminator’ may not be as far away as we think. In the current scenario, with no clear regulation drawing red lines on the battlefield, that prospect is worrying. Handing the keys to an algorithm, without human control, is fertile ground for imagining Dantesque scenarios. This was the subject of a meeting between some 90 countries, and all but one of them reached the same conclusion.

Don’t let the AI push the button. The story took place at the Responsible Artificial Intelligence in the Military Domain (REAIM) summit, held recently in Seoul. The underlying issue was no small matter; in fact, the final signature is not binding, but it was the first significant rapprochement, however tacit, between countries on a problem we have barely begun to address.

Should machines make decisions about using nuclear weapons? That was the big question under debate, and everyone except China seemed to have a clear answer: a statement issued on Tuesday affirmed that humans, not AI, should make the fundamental decisions about the use of nuclear weapons.

Agreement. Nearly a hundred countries, including the United States, China, the United Kingdom, the Netherlands and Ukraine, took part in the two days of talks that produced the "Action Plan". As we said, the agreement, which is not legally binding and which China did not sign, stated that "maintaining human control and participation in all actions related to the use of nuclear weapons" is essential.

The agreement also added that AI capabilities in the military field “must be implemented in accordance with applicable national and international law, and AI applications must be ethical and human-centered.”

No from China. China’s response is a symptom of the long road ahead and of the geopolitical tensions of the moment. Russia, which was barred from attending the summit because of the war in Ukraine, gave no response at all. Even so, the rapid deployment of AI-based military systems in recent conflicts, and their growing weight in future plans, underscores the urgent need to assess the potential unintended consequences of misusing this technology.

One example of this is the unregulated development and use of drones on the battlefield, where the only real brake is the fear that algorithmic errors will end in friendly fire.

Incentives to participate. That is another thorn in the side. Countries directly involved in conflicts have little incentive to slow this development: as the war between Russia and Ukraine shows, both sides are increasingly using AI-powered drones for actions that require minimal human supervision.

As Kateryna Bondar, an expert on advanced technologies at the Center for Strategic and International Studies, explained at the summit: "These countries are reluctant to impose restrictions because they see military AI as a critical advantage. In matters of national survival, no declaration or agreement, however well-intentioned, can prevent a country from doing what it considers necessary to guarantee its own security."

Technology is always ahead. Another issue discussed at the summit was how to regulate AI through legislation. On this point, Manoj Harjani, coordinator of the Military Transformations Program at the School of International Studies, commented: "This is a constant game of catch-up, where technology evolves much faster than government regulation, which is often slow."

In the background: China again. The US has enjoyed a military technological edge since the end of the Cold War, but that advantage is no longer what it once was in the race to lead the artificial intelligence and machine learning technologies that could revolutionize warfare, and China is once again on the horizon.

In fact, there is data to back up its leading position. According to a report published in August by the Australian Strategic Policy Institute, China has extended its lead as the world’s top research nation, leading in nearly 90% of the 64 categories examined by the think tank, including advanced data analytics, artificial intelligence algorithms, machine learning and adversarial artificial intelligence. We should add that the top 10 companies doing AI research include Chinese firms such as Tencent Holdings, Alibaba Group Holding and Huawei Technologies.

Solution. The summit was never binding, nor was it meant to be. It was more like a board game, with each country moving its own piece and showing some of its cards. Although the prospects for a binding international agreement to regulate military AI in the near future look dim, experts argue that such an achievement, if it ever comes, would be extremely important.

As Bondar puts it, AI is ultimately a product of human design; it is "software that can be programmed to follow rules and regulations." The problem is that those rules and regulations can end up turning into mutual blame and/or retaliation. It is our responsibility to reach a consensus that anticipates and minimizes unnecessary disasters.

Image | wodidi

In Xataka | China is the country that has increased its nuclear arsenal the most. Now it calls for restraint from the US and its allies

In Xataka | China turns to nuclear energy to boost economy. Will invest $28 billion in 11 reactors

