The ultimate decision-making takes place in the Target Room, in which legal advisers, engineers, and senior intelligence officers review suggested targets. Hence, the introduction of AI tools does not alter the foundational principle of operation in the IDF: that only a military commander can make the final decision to deploy military force.

B. Technical Challenges

In addition to the human-centered challenges reviewed above, several technical challenges exist. A major technical challenge is algorithmic bias, namely the tendency of AI outputs to favor a certain group.38 This usually derives from the overrepresentation of a certain group in the training data used to train decision-making systems.39 Another notable challenge is the phenomenon of automation bias, namely the tendency to place excessive trust in AI outputs.40 A third technical challenge is the “black-box” problem.41 This refers to a situation in which a user of the system is unable to ascertain its inner workings, which remain obscure or incomprehensible.42 In our context, if systems such as Fire Weaver are deployed by the IDF without measures ensuring that commanders sufficiently understand how to work with them, it may become difficult to evaluate how those systems identify risks. Such a gap may impair predictability, validity, and reliability.43

The use of military AI tools presents challenges in this regard.44 In particular, the principle of precautions requires that commanders “do everything feasible to verify that the objectives to be attacked are neither civilians nor civilian objects.”45 Hence, armies should provide proper training for commanders and enable them to seek the advice of technical experts in a way that closes this comprehension gap. In practice, a functional understanding of which data feed the system, how those data are weighted, and where their vulnerabilities lie is a good starting point. This can be achieved through simulations that teach commanders to evaluate the system’s behavior and recognize anomalies, complemented by SOPs that include procedural safeguards and promote a culture of critical use of the system, as is done in the IDF.

Another concern is the ability to meet the requirement to investigate potential violations of IHL and international human rights law (IHRL).46 After all, the inability of AI-based systems to provide a comprehensible explanation of their decision-making processes might hinder investigations of military incidents.