Killing Machines: Why We Need to Talk About AI Use in the Military

As a researcher who has spent years studying the intersection of technology and conflict, I find myself deeply troubled by the escalating use of artificial intelligence (AI) in military operations. The tragic story of a Russian soldier in Ukraine, hunted down by an autonomous drone, is just one chilling example of how advanced AI technologies are being weaponized without adequate safeguards or regulations.


Following an exhausting day of combat, a lone, weary Russian soldier climbs out of a trench littered with the bodies of comrades who fell during the Russia-Ukraine war. As he stumbles over them, a Ukrainian drone hovers persistently overhead, emitting its familiar, chilling hum.

Like a hunter toying with its prey before the kill, the AI-controlled drone pursues the soldier, dropping explosives nearby to wound him. Eventually, the weakened soldier surrenders, raising his hands toward the drone in supplication. The drone pauses for a moment, as if in contemplation, before delivering the lethal strike.

This is the 21st-century battlefield: a blend of machine learning, artificial intelligence, and sophisticated hardware wreaking destruction from rural Ukraine to the Gaza Strip.

As AI technology spreads globally, it is no longer confined to our smartphones and air conditioners. Militaries have enlisted artificial intelligence as well, most notably in the Russia-Ukraine and Israel-Hamas conflicts. With casualties reaching into the thousands, many human rights advocates have voiced concerns over the growing use of AI in dubious surveillance, manipulated videos, and sophisticated military operations.

A two-day summit on Responsible AI in the Military Domain (REAIM) is taking place in Seoul, South Korea, from September 9 to 10. The gathering brings together representatives from roughly 90 nations, including the U.S. and China, along with influential figures from the arms industry, to discuss ways to govern the application of AI in military operations.

What Is the REAIM Summit in Seoul About?

At the heart of the two-day REAIM summit lies the mission to establish worldwide standards for the use of artificial intelligence in military systems, along with a blueprint for action offering recommendations on responsible AI use and governance.


The Seoul summit is hosted by South Korea and co-organized by the Netherlands, Singapore, Kenya, and the United Kingdom. More than ninety countries, including the U.S. and China, have sent official delegates, and over 2,000 individuals registered to participate, underscoring the level of interest in the event.

The Seoul gathering is the second edition of the event; the inaugural summit was held in The Hague, Netherlands, in February 2023. While the first conference primarily focused on discussion of AI's military applications, the Seoul summit centers on concrete solutions and measures.

The summit's primary objectives are to establish new guidelines for the use of AI in conflict, advance thinking on international governance of military AI, and deepen understanding of its effects on global peace and security, both positive and negative.

Benefits and Dangers of AI in the Military

The growing importance of artificial intelligence (AI) in military functions reflects its broad capabilities in areas such as inventory management, surveillance, reconnaissance, intelligence gathering, battlefield planning, and logistics.

Military forces worldwide increasingly recognize the potential of artificial intelligence and are employing it strategically in combat situations. This includes gathering and analyzing battlefield data, which becomes essential to their operations: it aids decision-making, improves situational awareness, informs strategy, and can help minimize civilian harm.
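To ground the abstract phrase "gathering and analyzing battlefield data," here is a minimal, civilian-grade sketch of the kind of perception component such systems build on: an off-the-shelf object detector flagging vehicles in a single image. Everything here is an assumption for illustration; the file name aerial_frame.jpg, the 0.6 confidence threshold, and the use of an open-source COCO-trained model are stand-ins for classified and far more sophisticated military pipelines.

```python
import torch
from PIL import Image
from torchvision.models.detection import (
    FasterRCNN_ResNet50_FPN_Weights,
    fasterrcnn_resnet50_fpn,
)

# Load a pretrained, general-purpose object detector (trained on COCO).
weights = FasterRCNN_ResNet50_FPN_Weights.DEFAULT
model = fasterrcnn_resnet50_fpn(weights=weights)
model.eval()

# "aerial_frame.jpg" is a hypothetical single frame from a surveillance feed.
preprocess = weights.transforms()
frame = Image.open("aerial_frame.jpg").convert("RGB")

with torch.no_grad():
    prediction = model([preprocess(frame)])[0]

# Report detected vehicles above an illustrative confidence threshold.
categories = weights.meta["categories"]
for label, score, box in zip(
    prediction["labels"], prediction["scores"], prediction["boxes"]
):
    name = categories[label]
    if score >= 0.6 and name in {"car", "truck", "bus"}:
        print(f"{name}: confidence={score:.2f}, box={box.tolist()}")
```

The point is not the specific model but the pattern: raw sensor data goes in, structured detections come out, and those detections feed human or automated decision-making. That handoff is precisely where the regulatory questions discussed at REAIM begin.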

From my perspective as an analyst, AI is like a coin with two faces: its benefits are undeniable, but so are the risks that accompany them. Particularly concerning is the misapplication of AI in military contexts, which can cost human lives.

The major concerns include lethal autonomous weapons, unlawful surveillance, intimidation, combat robots, and the use of AI in hacking and cyberattacks. Beyond these, there are global security implications: AI could fuel an arms race and heighten international tensions.


From Ukraine to Gaza: Real-Life Terminator Scenarios in Military AI

Recent events in Gaza and Ukraine are making Hollywood science fiction look more like reality. These regions have, unfortunately, become the first battlegrounds of AI-driven warfare.

According to media reports, Israel's military is employing artificial intelligence in its operations in the Gaza Strip. AI-driven drones are reportedly used primarily for surveillance and for identifying armed combatants in the densely populated territory.

Israel’s Autonomous Weapons and AI Targeting Systems

According to media reports, Israel uses AI-based targeting systems known as "Lavender" and "The Gospel" to generate targets for strikes in Gaza. These systems are alleged to operate with minimal human oversight and to have contributed to extensive damage and casualties during the past year's conflict.


Beyond this, Israel has integrated various AI applications into its military operations, including lethal autonomous weapons systems (LAWS). These self-operating weapons carry guns and missiles and incorporate technologies such as facial recognition, biometric monitoring, and autonomous target identification.

Human rights activists, however, argue that deploying such AI-driven weaponry infringes on human rights.

Ukraine’s Vampire Drones

In a similar vein, Reuters reported in May 2024 that Ukraine's military was testing an AI-enhanced bomber drone known as the "Vampire," armed with explosives and designed to operate with minimal human intervention. These drones have already been deployed against the Russian army, as evidenced by numerous videos that have surfaced online.

Conclusion

In my role as an analyst, I see a clear trend: AI is becoming ever more deeply integrated into military operations, and with that comes the risk of misuse. The cases in Gaza and Ukraine are stark reminders of that risk. It is therefore crucial to establish regulations and norms for the ethical use of AI in warfare, ensuring it aligns with global standards of conduct and serves peace rather than conflict.

