Automated surveillance, targeted killings, and AI warfare in Gaza: A conversation with legal scholar Khalil Dewan

What we are witnessing in Gaza is the future of warfare

Originally published on Global Voices

Image courtesy of Untoldmag.org.

This interview was first published by UntoldMag on June 22, 2025. An edited version is republished on Global Voices as part of a content-sharing agreement. 

Amid the ongoing genocide in Gaza, biometric surveillance and drones have become central tools in modern warfare. Khalil Dewan, a legal scholar, investigator, and Nomos Scholar at SOAS University of London, has spent over 15 years researching the global war on terror and its transformation through AI, drone technology, and legal manipulation. In this interview, he discusses how targeted killings have evolved, the implications for international law, and what Gaza reveals about the future of warfare.

Walid El Houri (WH): You’ve spent more than a decade researching drone warfare and surveillance. How did you begin working in this field?

Khalil Dewan, used with permission.

Khalil Dewan (KD): I’ve been researching the global war on terror and drone strikes for the better part of 15 years. I’ve covered the drone programs of the US, the UK, France, and other states, and one thing that has become clear is how these Western powers use drone warfare for several strategic purposes. Most notably, drones enable easier killing through what I call the individualisation of warfare: a new form of conflict in which states no longer target only non-state armed groups or enemy states, but individuals themselves, based on conduct or perceived threat.

WH: Can you explain what you mean by “individualisation of warfare”?

KD: It’s about going after people rather than traditional military targets. Drones can fly and hover over remote parts of the world and strike individuals with very little legal accountability. This kind of targeting has dominated the past two decades of the global war on terror. And now, with AI-enabled targeting systems, things are becoming even more problematic.

WH: How is AI being used in targeted killings, especially in places like Gaza?

KD: In the case of Israel, for example, we’ve seen the use of AI-enabled targeting systems through drones. These systems are designed to process metadata and help identify targets who can be struck and where they may be located, especially in complex urban environments like Gaza. But that’s highly problematic, especially when used by states like Israel or the United States, which already operate under an enabling posture, meaning they are already inclined to strike, with or without clear legal justification.

WH: What are the legal implications of this AI-driven kill chain?

KD: It complicates everything. We’re no longer just asking whether a killing was lawful. Now we’re dealing with algorithmic bias baked into AI systems, biases that are designed and implemented by states. In an armed conflict, relying on such technology raises serious legal and ethical concerns. Who designed the system? What biases are embedded? Who’s accountable when the wrong person is killed?

WH: We saw the use of biometric scans during evacuations in Gaza. What are the implications of such surveillance in a humanitarian context?

KD: It’s deeply troubling. When Israel opened what it called an evacuation corridor in Gaza, Palestinians were held between two large structures and forced to have their faces scanned before being allowed to move. This was done amid ongoing shelling and airstrikes, and it shows how biometric data extraction is being used as a precondition for survival. Palestinians are already among the most heavily surveilled communities in the world, and now biometric submission is being weaponized during a humanitarian crisis.

WH: Does this practice align with international law?

KD: It certainly raises major concerns. International law is being manipulated, particularly by Western states, to justify targeted killings both inside and outside of armed conflict. They rely on legal arguments about imminent threats, self-defense, and the use of force, but in reality, they are pushing the boundaries of lawfare. Forcing biometric scans during a crisis, for example, fits within a broader strategy of control and dehumanization, not humanitarian protection.

WH: What role do private actors play in this new landscape of warfare?

KD: Private actors are increasingly involved, whether through data processing, AI development, or logistical support. This privatisation of warfare makes it even harder to assign accountability. You have a blurred line between state and corporate responsibility, and international law isn’t adequately equipped to handle that yet.

WH: How do you see the future of AI and autonomy shaping the battlefield?

KD: What we are witnessing in Gaza is the future of warfare: a convergence of AI, autonomy, targeted killings, and legal manipulation. States are racing to stay competitive in this space. The Global South is developing its own drone and AI capabilities, looking at the last 20 years, particularly at Israel’s actions in Gaza, and asking, “If they can get away with it, what should our position be?”

WH: What advice do you have for states in the Global South navigating this environment?

KD: My message is clear: comply with international law as faithfully as possible. Uphold ethical standards, chivalry, if you will, but also recognize the geopolitical reality. States must remain competitive in the lawmaking process. They shouldn’t be colonized by international law that’s weaponized by powerful actors, but they also shouldn’t abandon it. It’s about balancing ethics with survival, because if Gaza has taught us anything, it’s that survival is now a legal, political, and existential question.