Artificial intelligence is accelerating how militaries in the Middle East identify and strike targets, generating an unprecedented volume of bombing coordinates while eroding the legal frameworks meant to protect civilians. Israel’s use of AI-driven targeting systems in Gaza has drawn sharp condemnation from United Nations officials, who say the technology contributed to the systematic destruction of civilian homes on a scale they describe as “domicide.” As commercial tech firms supply the underlying models and cloud infrastructure, the question of who bears responsibility for the resulting destruction has no clear answer, and international efforts to regulate autonomous weapons systems remain stalled.
AI-Driven Targeting and the Scale of Destruction
The Israel Defense Forces operate an AI system known as “Gospel,” or Habsora in Hebrew, that is designed to produce targets at a fast pace, a characterization drawn from the military’s own materials. That framing, detailed by the Guardian in late 2023, captures the core tension: speed and volume in target generation can outstrip the ability of human operators to verify each strike against the laws of armed conflict. The system processes large datasets, including surveillance feeds and signals intelligence, to flag buildings and locations as potential military objectives.
What distinguishes this from earlier targeting methods is throughput. Traditional intelligence analysis required teams of specialists working over days or weeks to develop a single target package. Gospel compresses that timeline dramatically, feeding a pipeline that can generate far more strike coordinates than any previous conflict. The more the system succeeds at its technical goal of rapid identification, the more pressure it places on the human review stages that are supposed to ensure legality and proportionality.
The result, according to UN Special Rapporteurs, has been the mass destruction of civilian infrastructure in Gaza. In an April 2024 statement, those experts deplored the use of purported artificial intelligence to commit “domicide,” a term for the deliberate, large-scale destruction of homes and residential areas. They argued that the pattern of strikes pointed to systematic targeting of the built environment rather than incidental damage around specific military objectives, and urged a reparative approach to reconstruction that acknowledges the role of emerging technologies in enabling such devastation.
Commercial AI Enters the Battlefield
The pipeline connecting Silicon Valley to the front lines runs through cloud computing contracts and AI model licensing. The Associated Press reported that Israel uses U.S.-made AI models in its military operations, with Microsoft and OpenAI technology playing a role in battlefield decision-making. According to that reporting, Israeli units have used large language models and related tools to assist with tasks such as image analysis, data sorting and operational planning, embedding commercial systems into military workflows.
A separate AP investigation found that Microsoft sold advanced AI and cloud services to the Israeli military during the Gaza war. The company acknowledged those sales but said it had found no evidence that its AI was used to harm people in Gaza, pointing to contracts focused on defensive and administrative applications. The company also emphasized that it does not control how customers configure and deploy its general-purpose tools once access is granted.
That denial highlights a structural problem in accountability. A cloud provider can sell compute power and model access without controlling how the end user applies the output. Microsoft’s position, essentially, is that it supplied tools but did not direct their use. Critics argue this framing allows vendors to profit from military contracts while distancing themselves from the consequences, even when their products are woven into the same digital infrastructure that supports offensive operations.
The AP’s reporting included examples of how AI can enter the target-selection process, illustrating the practical difficulty of drawing a clean line between a commercial product and a lethal military decision. Once a general-purpose model is fine-tuned on classified data and connected to surveillance feeds, it becomes part of the kill chain, even if its original marketing emphasized productivity or research assistance.
The involvement of major American tech companies also complicates U.S. foreign policy. Export controls and end-use agreements govern transfers of conventional weapons, but AI models and cloud subscriptions occupy a gray zone. No existing regulatory framework clearly defines when a general-purpose AI tool becomes a component of a weapons system, and vendors have little incentive to seek that clarity on their own. The result is a policy lag in which companies can expand their defense portfolios faster than governments can adapt oversight mechanisms.
Blurred Lines of Accountability
UN Secretary-General António Guterres has addressed this gap directly. In an April 2024 press encounter on Gaza, he raised concern about reports of AI being used for target identification and linked the speed and scale of AI-enabled operations to what he called a blurring of accountability. His remarks reflected a growing worry among international officials: when an algorithm generates a target, a commander approves the strike, and a tech company provides the underlying model and infrastructure, legal responsibility fragments across multiple actors and jurisdictions.
International humanitarian law traditionally assigns responsibility to the commander who orders an attack and to the state that fields the weapon. But the introduction of AI into the kill chain raises questions that existing doctrine was not built to answer. If an algorithm misidentifies a residential building as a military target, and a human operator approves the strike based on the system’s recommendation without independent verification, where does the fault lie? With the software developer who designed the model, the military procurement office that integrated it, the commanding officer who trusted the output, or the operator who executed the order?
Each actor can point to another link in the chain. Developers may argue that they supplied a tool with documented limitations; militaries may claim they followed internal procedures and relied on the best available technology; political leaders may insist they lacked detailed knowledge of specific targeting decisions. This diffusion of responsibility makes it harder to apply existing mechanisms of accountability, from domestic criminal law to international investigations.
Most mainstream policy debates still treat this as a future governance problem. That framing understates the urgency. The accountability gap is already playing out in Gaza, where AI-generated target lists have been acted upon and entire neighborhoods have been reduced to rubble. Families whose homes were destroyed cannot easily trace a clear line from the blast crater to a particular coder, contractor or commander. The law is not merely behind the technology; it is struggling to map responsibility onto a socio-technical system designed to be distributed and opaque.
Regulation Stalls as New Threats Emerge
Efforts to establish international rules for autonomous weapons systems have made little progress. Analysis from the Lieber Institute at West Point notes that major military powers oppose binding regulation of such systems, making the prospects for a comprehensive treaty slim. The United States, Russia and China, which lead in military AI research and deployment, have each favored nonbinding principles and technical confidence-building measures over hard legal constraints.
This regulatory vacuum coincides with a new and dangerous development: the physical infrastructure that powers military AI is itself becoming a target. A March 2026 Guardian analysis reported that data centers are emerging as strategic assets in their own right, with Iranian facilities cited as early examples of how server farms and cloud hubs can be drawn into regional conflict. Because those centers host both civilian and military workloads, strikes against them risk cascading effects across financial systems, health services and communications networks.
The convergence of these trends (AI-driven targeting, commercial infrastructure embedded in warfare and stalled regulation) creates a feedback loop of instability. As more militaries rely on cloud-based AI to plan and execute operations, adversaries gain incentives to disrupt or destroy the data centers, fiber links and satellite networks that make such capabilities possible. Civilian populations then face dual exposure: first to AI-enabled bombing campaigns, and second to attacks on the digital utilities that underpin modern life.
Breaking this cycle will require more than voluntary ethics statements from tech companies or incremental updates to military doctrine. It will demand new legal instruments that treat AI systems and the infrastructure that sustains them as integral parts of the battlespace, subject to specific obligations and constraints. It will also require greater transparency from states and corporations about how these tools are developed, tested and deployed, so that accountability does not vanish into the gaps between code, contracts and command chains.
For now, Gaza stands as a stark case study in how quickly AI can amplify the destructive capacity of conventional forces while outpacing the institutions meant to restrain them. The same technologies that promise efficiency and precision in theory have, in practice, enabled a scale of urban devastation that UN experts describe as domicide. Unless law and policy catch up, the pattern seen there may become a template for future conflicts, one in which responsibility is everywhere and nowhere, and the most advanced systems of intelligence are paired with some of the weakest systems of accountability.
*This article was researched with the help of AI, with human editors creating the final content.