...

Israel Accused of Employing AI to Target Thousands in Gaza, as Lethal Algorithms Surpass Global Legal Framework

An article published last week by +972 Magazine, a nonprofit outlet run by Israeli and Palestinian journalists, reported that the Israeli army used a new artificial intelligence (AI) system called Lavender to generate lists of potential human targets for airstrikes in Gaza. According to six unnamed sources in Israeli intelligence, the system was used alongside other AI systems to target and assassinate suspected militants, resulting in a significant number of civilian casualties.

The Guardian reported on the same claims, quoting one intelligence officer who said the AI system “made it easier” to carry out numerous strikes because “the machine did it coldly.”

While the Israel Defense Forces (IDF) denies many of the allegations made in these reports, stating that Lavender is not an AI system but rather a database for cross-referencing intelligence sources, previous reporting has indicated that Israel has employed AI systems in its military operations. For example, a +972 report last year revealed the use of an AI system called Habsora to identify potential targets for bombing.

The recent +972 report also mentions a third system called Where’s Daddy?, which monitors targets identified by Lavender and alerts the military when they return home.

The use of AI in military operations is not unique to Israel. Countries like the US and China are also developing AI systems for data analysis, target selection, and decision-making in warfare. Proponents argue that military AI can lead to faster decision-making, improved accuracy, and reduced casualties. However, critics raise concerns about limited human oversight and the potential for errors and civilian harm.

International rules and regulations regarding military AI are still lacking, although there have been some efforts to address the issue. The United Nations has been discussing “lethal autonomous weapons systems” for over a decade, and last year, the UN General Assembly voted in favor of a draft resolution to ensure algorithms are not fully in control of decisions involving killing. The US also released a declaration on responsible military AI use, endorsed by 50 other states.

Reports on the use of AI systems like Lavender and Habsora in Gaza highlight the limitations and ethical concerns associated with military AI. The industrial-scale generation of targets by AI systems displaces human decision-making and increases the potential for harm. The way the international community responds to current uses of military AI will likely shape the future development and use of this technology.
