Israel's Use of AI in War

April 22, 2024

Recent reporting has focused on AI's use in war and the ethical concerns it raises. Specifically, there have been reports on Israel's use of two AI systems: "Lavender," which identifies people associated with Hamas, and "Gospel," which identifies buildings and locations associated with the Palestinian militant group.

While both systems are being used in warfare, it is important to note that they are information systems rather than explicit decision-makers. In effect, each system "suggests" a set of targets, human and physical, and human users make the actual decisions.

The argument is that there is a human in the loop validating the suggestions before any action is taken, so human decision-making is part of the process.

Unfortunately, that doesn't seem to be the case in practice. Both of these systems exist to make target identification faster and more efficient, and the result is a human in the loop who does little more than rubber-stamp the machine's recommendations.

Although there is a human in the loop, there is not an effective human in the loop. This is reflexive use of information, as opposed to mindful consideration of it.

The role of human operators in AI systems that support decision-making, in battle and in other aspects of our lives, is a crucial issue. Are we building systems that, by their nature, will be used without question, whatever their accuracy rate? Or are we building systems that, by their nature, make it impossible for us to use them without question?

In the latter case, the actual decisions are made by a human being, based upon information that the AI systems provide. In the former case, even though the systems provide information only, the decisions are still being made without thoughtfulness. This is troubling.

In general, we need to consider how to keep humans genuinely part of decision-making that affects human life. If we do so in a way that merely looks like control while we are really just rubber-stamping, we may be able to argue that there's a human in the loop, but the reality is that the machine is calling the shots.

Kristian Hammond
Bill and Cathy Osborn Professor of Computer Science
Director of the Center for Advancing Safety of Machine Intelligence (CASMI)
Director of the Master of Science in Artificial Intelligence (MSAI) Program
