Morning Overview

Iran war highlights Silicon Valley’s role in AI targeting and infrastructure

The U.S. military’s ongoing air campaign against Iran has placed Silicon Valley’s largest technology companies at the center of a sharp debate over who builds the systems that help decide where bombs fall. From Anthropic’s Claude language model to Palantir’s targeting platform to Microsoft’s Azure cloud, commercial products designed for civilian markets are now deeply embedded in the kill chain, raising questions about accountability that Washington has barely begun to answer.

Commercial AI Models Enter the Kill Chain

The clearest sign that the Iran conflict has changed the relationship between tech firms and the Pentagon is the integration of off-the-shelf AI into live targeting workflows. The U.S. military has paired Claude, the AI model built by Anthropic, with Palantir's Maven targeting platform for real-time target identification and assessment during strikes on Iran. Maven fuses satellite imagery, signals intelligence, and sensor data to surface potential targets, with Palantir supplying the data analytics layer that turns raw feeds into strike options.

That pipeline came under intense scrutiny after the U.S. military struck an elementary school in southern Iran on February 28, killing scores of civilians in one of the war’s deadliest incidents. AI initially received much of the public blame, though subsequent reporting suggested the failures ran deeper than any single algorithm. The speed at which AI compresses the targeting cycle, from sensor detection to strike authorization, makes accountability ever more elusive when something goes catastrophically wrong.

The school bombing crystallized a tension that defense analysts had warned about for years: commercial AI tools can accelerate military decision-making far beyond the pace at which humans can meaningfully review each step. When a Washington forum convened to debate AI warfare as U.S. strikes continued, one observation captured the mood: “What we’re seeing is algorithms are used at war.” With the Strait of Hormuz effectively closed and the stated war aims expanding as the campaign dragged on, the question was no longer whether AI would be used in combat, but how deeply it would be woven into the decision to pull the trigger.

For now, military officials insist that humans remain “in the loop,” reviewing AI-generated target lists before authorizing strikes. Yet the logic of these systems pushes in the opposite direction. Once commanders grow accustomed to automated prioritization, pattern recognition, and risk scoring, rejecting machine recommendations becomes harder, especially under time pressure. The Iran school strike, in which multiple safeguards appear to have failed in rapid succession, showed how a chain of “assists” from commercial AI tools can add up to a de facto delegation of lethal authority.

Cloud Infrastructure as a Military Backbone

The AI models grabbing headlines depend on a less visible but equally significant layer: the cloud infrastructure that stores, processes, and transmits the data those models consume. In recent years, U.S. allies have formalized this dependency through long-term procurement deals. Israel’s government, for example, created Project Nimbus, a sweeping cloud contract whose tender documents show that AWS and Google won the central competition to provide public cloud services to ministries and security agencies. Both companies committed to building dedicated cloud regions inside Israel as part of the deal.

Microsoft, though not part of Nimbus, has built its own deep ties to the Israeli military. During the Gaza war, leaked documents and usage analysis showed that Microsoft expanded its support relationship with the IDF, with Azure storage consumption surging early in the conflict as battlefield data and operational planning moved into the cloud. Azure also serves as the backbone for expansive surveillance of Palestinians, processing what reporting describes as a million calls an hour of intercepted communications and handling sensitive intelligence streams that would once have been confined to secure government facilities.

These are not peripheral contracts. When a military routes its intelligence, surveillance, and targeting data through commercial cloud platforms, those platforms become structural elements of the war effort. Israel’s own military acknowledged this logic when the IDF stated on its website that it uses an AI-based system to produce targets at a fast pace, a reference to the targeting platform known as Habsora, or “the Gospel,” which selects bombing targets from massive datasets. Even where companies stress that their cloud regions are “segmented” or “dual use,” the operational reality is that the same hyperscale infrastructure that runs ride-sharing apps and streaming services also underpins live combat operations.

For U.S. firms, this convergence poses a strategic dilemma. Government and defense are among their most important growth markets, where long-term contracts can be worth billions. Yet the same deals that please investors can entangle them in controversies over civilian casualties, occupation, and the ethics of autonomous warfare. Employees at several major tech companies have already staged protests and internal campaigns against defense work, arguing that they never signed up to build tools for war. The Iran conflict, with its visible reliance on commercial AI models and cloud platforms, has intensified those internal battles.

Data Centers as Targets

The deeper that militaries integrate commercial cloud services, the more those services become legitimate objects of attack under the laws of armed conflict. Scholars of international law have argued that when a military runs on the cloud, the cloud becomes a lawful target, subject to the same rules of distinction and proportionality that govern strikes on command bunkers or radar sites. That analysis carries immediate practical consequences: data centers operated by American tech companies in allied nations could be struck by adversaries who view them as military infrastructure rather than civilian property.

Iran’s responses during the conflict have already demonstrated that the digital dimension of warfare is not theoretical. Heightened fears about AI-driven escalation and cyber operations prompted the U.S. State Department to stand up a dedicated task force on Iranian threats that coordinates sanctions, cyber defense, and information-sharing with private firms. While much of its work remains classified, officials have signaled that protecting critical cloud and networking assets from both physical and digital attack is now a central priority.

For the companies that own those assets, the new reality is sobering. Data centers were once marketed as neutral infrastructure, interchangeable warehouses of computation that could be located wherever electricity and land were cheap. Now, facility locations, redundancy plans, and cross-border data flows are being scrutinized through a military lens. Executives must weigh not only commercial risks but also whether hosting certain workloads could turn their buildings (and the workers inside) into targets in a future conflict.

This shift has outpaced regulation. Existing export-control regimes and arms-trafficking laws were built around hardware: missiles, tanks, encryption devices. They offer little guidance on how to treat general-purpose cloud services that can be repurposed for battlefield surveillance or automated targeting. Nor do they clearly assign responsibility when an AI model fine-tuned on commercial data is plugged into a classified kill chain. As the Iran school strike showed, tracing causality through layers of contractors, subcontractors, and software vendors is extraordinarily difficult once a bomb has already fallen.

Accountability Lagging Behind Deployment

In Washington, lawmakers have begun to float proposals for stricter oversight of military AI, including mandatory testing, incident reporting, and clearer rules on human control. But the pace of deployment continues to outrun the pace of governance. Defense agencies, eager to maintain an edge over adversaries, are signing multi-year cloud and AI contracts even as they commission studies on the risks those same systems pose. The result is a patchwork of pilot guidelines and voluntary principles that do little to constrain real-world operations.

For now, the most immediate checks on this trend come from outside government. Investigative reporting on systems like Maven and Habsora, employee organizing inside tech firms, and legal challenges from civil-society groups have all forced companies to disclose more about how their tools are used in war. Yet these pressures remain reactive, surfacing only after a scandal or a particularly visible tragedy. They do not amount to a coherent framework for deciding which commercial technologies should be allowed into the kill chain in the first place.

The Iran conflict has made clear that such a framework can no longer be deferred. As commercial AI and cloud infrastructure become inseparable from modern warfare, the question is not whether Silicon Valley will be involved in future conflicts, but on what terms, and with what safeguards for the civilians who live, and sometimes die, under its code.

*This article was researched with the help of AI, with human editors creating the final content.