We cannot compromise on this issue, and States must verify that they can stand behind and explain every decision. In sum, we need to be open and transparent about the risks and challenges of the military uses of AI, and try to find ways to deal with them. Until an international benchmark is solidified, we should treat meaningful human control of AI tools as a discipline, operationalized through processes and cultural habits that keep humanitarian considerations firmly in place at all levels of the military. For example, as done in the IDF,47 when evaluating a possible target, the commander must consider the underlying sensor data, imagery timestamps, and communications intercepts as a starting point; he or she should then apply professional discretion concerning unique vulnerabilities and other relevant circumstances, and reflect on them in light of human experience and judgment.

C. Decision-making pace and operational implications

AI systems can process vast amounts of data and analyze complex scenarios almost instantly, which gives them tremendous operational value. Nevertheless, this accelerated decision-making capability necessitates a delicate balance between automation and human oversight to ensure responsible and ethical decision-making. In particular, the rapid pace at which an AI system generates target suggestions can be seen as falling short of the obligation to exhaust all “feasible” means to avert harm to civilians, and may not align with the duty of precautions, in the sense that these systems leave no room for meaningful human judgment.

The IDF is no stranger to this claim. As noted, the IDF’s use of AI systems is governed by standard operating procedures that regulate the processes for approving and planning attacks on targets, and that include a series of steps confirming that the target is indeed of a military nature and minimizing collateral damage. It should be emphasized that the use of these systems is confined to the intelligence-gathering phase, in the early stages of the “life cycle” of a target; later stages include corroboration of, and oversight over, the intelligence-gathering and evaluation stages, including review by legal advisers. These experts verify not only the factual assertions made, but also the appropriateness of an attack under international law.48 Indeed, the IDF has clarified that the selection of a target for attack by AI systems will undergo processes designed to ensure meaningful human involvement before targeting.49

IV. The Road Ahead

The challenges arising from the growing integration of AI into the battlefield underscore the urgent need to identify and develop mechanisms to mitigate their impact as this process continues to unfold. In this section, I address a main tool in this regard: the preliminary legal review of weapons, means, and methods of warfare.50