3 June 2024

How to judge an AI-programmed genocide

The bodies of two Palestinian staff members of Kuwait Hospital, killed in an Israeli drone strike near the hospital, are recovered by Palestinians in Rafah, Gaza, on 27 May 2024. (Photo by Mahmoud Bassam/Anadolu via Getty Images)

On 11 January 2024, South Africa presented arguments accusing Israel of genocidal acts before the International Court of Justice. During the public hearing, South Africa emphasised the genocidal intent behind Israel’s acts of destruction and the mass killing of Palestinian people. 

According to the Rome Statute of 1998, “genocide” means acts committed with intent to destroy, in whole or in part, a national, ethnical, racial or religious group: killing members of the group, causing them serious bodily or mental harm, and deliberately inflicting on the group conditions of life calculated to bring about its physical destruction in whole or in part. 

The judges were asked to rule on Israel’s intent and concluded, in their order on provisional measures, that Israel must take measures to prevent acts of genocide in the Gaza Strip. 

Israel’s high-tech killing campaign relies on automated weaponry, making it difficult to establish intent and accountability. Yet the technology of genocide, programmed by the Israeli Defence Forces (IDF), rests on human agency: in it resides the decision to build machines as automated systems of algorithmic killing. 

The question of the intention to kill is central to assessing the responsibility of an agent committing a voluntary act. Two recent investigations published by the independent outlets +972 Magazine and Local Call detailed the systems the IDF has developed and used to program calculated bombing in Gaza, generating targets for destruction through computational means. 

The first system, called “The Gospel”, generates four distinct categories of targets, from residential towers to family homes. For each target, an attached file specifies the number of civilians likely to be killed in the attack. 

“Nothing happens by accident,” said a member of Israel’s intelligence community, interviewed for the investigation. “When a three-year-old girl is killed in a home in Gaza, it’s because someone in the army decided it wasn’t a big deal for her to be killed — that it was a price worth paying in order to hit [another] target. We are not Hamas. These are not random rockets. Everything is intentional. We know exactly how much collateral damage there is in every home.” 

In an attempt to kill one Hamas leader, the military command can approve the killing of hundreds of Palestinians. In carrying out the attacks, the army’s intelligence units know in advance how many people will die. Here, the intention to kill a Hamas leader is explicit, yet it is acted on in full knowledge of the civilians who will die as collateral damage. 

While in the past, power targets such as high-rise buildings were hit to shock civilian society, the latest war against Palestinians reveals that military protocols have been loosened, permitting far greater civilian casualties. 

The second system, Lavender, is an artificial intelligence-driven system that marked as many as 37 000 Palestinians as suspected militants for assassination. Lavender curates a kill list of suspected operatives in the military wings of Hamas and the Palestinian Islamic Jihad. 

These human targets were designated even though the army knew the system makes so-called errors in about 10% of cases; on those figures, several thousand of the 37 000 people marked could have been wrongly identified. A loose connection, or no connection at all, to militant groups was treated by the system as valid data for producing a human target ready for assassination. 

How can one verify the data used to train the AI systems developed by the IDF? The data can be collected through the user agreements Palestinians accept when using smartphones and social media, but also through surveillance. The Israeli firm NSO Group designed the infamous spyware Pegasus, which can evade detection and has been used by governments to spy on journalists, activists and citizens in several countries. 

The AI-generated kill list was not thoroughly reviewed by human beings. One source stated that they served as a “rubber stamp” for the machine’s decisions, personally spending about “20 seconds” on each target before authorising a bombing, just long enough to confirm that the Lavender-marked target was male and was in their home. 

The data used by The Gospel and Lavender could include synthetic data, that is, data generated by the machine itself and then fed back to train the program and validate the model. In “Op-Ed: AI’s Most Pressing Ethics Problem”, Anika Collier Navaroli, a senior fellow at Columbia University, sets out the ethical issues raised by the use of synthetic data. 

Synthetic data is information that is artificially generated rather than drawn from real-world events. It can be used to train machine-learning systems or to validate models. Because it can carry and amplify bias, it shapes the system in ways that make it harder to distinguish between real-world events and false information. In assessing the intention to kill, the deliberate use of biased data to justify the production of targets should certainly be taken into consideration. 

The third system is called “Where’s Daddy”. It is used to track a suspect and to trigger the bombing once they enter their family’s residence. The system is programmed to strike targets in their homes, where other people, including women and children, are present. Hundreds of private homes have been destroyed in this way, killing entire families. 

Together, these systems generate targets and authorise bombings with widened permission to kill civilians. This level of automation relies on minimising human involvement, and it reveals two concomitant dehumanising processes. 

First, Palestinians’ lives are reduced to raw data: a child killed in an airstrike becomes a number on a list of casualties. Second, AI shifts responsibility away from military units by replacing human decisions with statistically driven outputs. 

Who is responsible for the models, the data and the systems that are facilitating the mass killing of thousands of Palestinians? International law is notoriously slow to produce frameworks that demand accountability for the use of AI in consequential decisions. The IDF’s use of algorithmic killing reveals the urgent need to create and implement international rules that bring such systems under the rule of law. 

The topography of war has changed drastically with the use of AI-assisted weapons. As early as 1971, the Israeli government experimented with Firebee drones armed with Maverick missiles, entrenching an asymmetry of power and unbalanced military protocols. In the case of the mass killing of Palestinians, the technology used by the IDF reveals a genocidal impulse programmed, organised and perpetrated with the use of AI. 

The political and ethical dilemma resides in how the technology deployed masks the intention to kill. The genocide is the consequence of the human programming of automated weaponry. The systems programmed by the IDF can digest millions of pieces of data and generate targets with barely any human oversight. They have replaced fairness with efficiency, and moral rigour with the advancement of technology. 

A machine has no body, no emotions such as empathy and no duties such as responsibility. Yet in every machine resides a human reality that speaks to the intent of its inventors and the moral values of the team of engineers who imagined, programmed and developed it. 

In the case of the genocide of Palestinians, AI-programmed systems are killing in an automated fashion, revealing the human-generated intention behind the horror. That human intention exposes how far we have strayed from the ideal of the rule of international law, and how urgently its jurisdiction needs to be updated. 

Amrita Pande is a professor of sociology at the University of Cape Town. 

Anaïs Nony is an associate researcher at the Centre for the Study of Race, Gender & Class at the University of Johannesburg.