Pentagon wants AI to aim its drone-killing turrets — with a human still on the trigger

A federal contract signed on March 7, 2025, puts Anduril Industries at the center of the Pentagon’s push to let artificial intelligence aim its counter-drone weapons. The deal, logged in federal procurement records, ties the Palmer Luckey-founded defense startup to counter-unmanned aerial system work at a moment when cheap, swarming drones have reshaped battlefields from Ukraine’s Donbas to the Red Sea.

The contract arrives under a policy that sounds simple but is brutally hard to execute at machine speed: AI can detect, track, and aim, but a human must still authorize every shot.

What the procurement records actually show

The paper trail sits in two federal databases. The Federal Procurement Data System (FPDS) entry lists Anduril’s Unique Entity Identifier and CAGE code, confirms the March 2025 signing date, and categorizes the work under counter-UAS efforts. Cross-referencing those vendor identifiers in SAM.gov’s standard award reports confirms the match. What the records do not reveal is the contract’s dollar value, the number of units involved, or the specific hardware platform. That level of redaction is common in defense procurement but leaves the scale of the investment opaque.

Anduril already fields several counter-drone products, including its Sentry Tower autonomous surveillance system and the Anvil kinetic interceptor. The FPDS description does not name which platform this contract covers, so it is not possible to confirm from public records alone whether the Pentagon is buying an upgrade to existing Anduril hardware or funding something new.

The policy guardrail: DoDD 3000.09

Every autonomous or semi-autonomous weapon in the U.S. arsenal operates under DoD Directive 3000.09, which the Pentagon updated in January 2023. The directive requires that such systems be designed to allow “appropriate levels of human judgment over the use of force.” It does not ban machine autonomy outright. Instead, it demands that the system’s architecture make meaningful human control possible, not just theoretical.

A Congressional Research Service primer explains the enforcement machinery behind that language. Two mechanisms stand out: a senior-level review that evaluates autonomous weapon systems before they enter service, and the Autonomous Weapon Systems Working Group, which coordinates policy across the department. Together, these bodies are supposed to ensure the “human on the trigger” principle survives the journey from directive text to deployed hardware.

In practical terms, the workflow typically looks like this: sensors and algorithms handle the early links in the engagement chain, spotting objects, predicting flight paths, and ranking threats. A human operator then confirms that a given track is hostile and authorizes the weapon to fire. The Anduril contract fits squarely within this model: the government is buying a counter-drone capability that leans on AI for speed but is required to stop short of full autonomy over lethal decisions.
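In software terms, the directive amounts to a hard gate between the algorithm’s recommendation and the weapon’s actuation. The sketch below is a minimal illustration of that architecture, not Anduril’s or the Pentagon’s code; the class, the threat scores, and the console prompt are all invented for this example.

```python
from dataclasses import dataclass
import time

@dataclass
class Track:
    """One sensed object with an AI-assigned threat score (all values illustrative)."""
    track_id: int
    threat_score: float      # 0.0 benign to 1.0 hostile, from a hypothetical classifier
    time_to_impact_s: float

def ai_rank_threats(tracks: list[Track]) -> list[Track]:
    """Machine side of the chain: detect, track, and rank happen without a human."""
    return sorted(tracks, key=lambda t: (-t.threat_score, t.time_to_impact_s))

def human_authorizes(track: Track) -> bool:
    """Human side of the chain: only the operator can turn a track into a target.
    Stubbed as console input; a real station would show sensor video and context."""
    answer = input(f"Track {track.track_id}: score {track.threat_score:.2f}, "
                   f"impact in {track.time_to_impact_s:.1f}s. Engage? [y/N] ")
    return answer.strip().lower() == "y"

def engagement_loop(tracks: list[Track]) -> None:
    for track in ai_rank_threats(tracks):
        start = time.monotonic()
        approved = human_authorizes(track)              # the mandated human checkpoint
        decision_s = time.monotonic() - start
        if approved and decision_s < track.time_to_impact_s:
            print(f"Engaging track {track.track_id}")   # placeholder for actuation
        elif approved:
            print(f"Track {track.track_id}: approval arrived after the shot window closed")
        else:
            print(f"Track {track.track_id}: held fire")

if __name__ == "__main__":
    engagement_loop([Track(1, 0.92, 8.0), Track(2, 0.35, 30.0)])
```

The structural point is that nothing downstream of the classifier fires without an explicit operator decision, and a slow decision forfeits the shot rather than quietly delegating it to the machine.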

Why speed creates a policy stress test

The tension at the heart of this contract is time. A single commercial quadcopter rigged with explosives can close on a target in seconds. A coordinated swarm of 20 or 30 drones compresses that window further. AI can identify and track each threat in milliseconds, but if the system must wait for a human to press “engage,” the shot opportunity may expire before the operator acts.
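Some back-of-the-envelope arithmetic makes the squeeze concrete. Every figure below is an assumption chosen for illustration, not a number from the contract or from test data.

```python
# Hypothetical numbers for illustration only; none come from the Anduril award.
swarm_size = 30            # drones in one coordinated wave
detection_range_m = 500    # distance at which each drone is first tracked
closing_speed_ms = 40      # about 145 km/h, plausible for a fast FPV quadcopter
decision_s = 2.0           # seconds for one operator to evaluate one engage prompt

time_to_impact_s = detection_range_m / closing_speed_ms   # 12.5 s per drone
queue_time_s = swarm_size * decision_s                    # 60 s for one operator

print(f"Each drone arrives in {time_to_impact_s:.1f} s")
print(f"One operator needs {queue_time_s:.0f} s to clear the approval queue")
# A 60-second queue against a 12.5-second window: most of the swarm gets through
# unless approvals are parallelized across operators, batched, or pre-delegated.
```

Under those assumptions, a single operator can adjudicate roughly six prompts before the first drones arrive; the rest of the swarm outruns the approval queue.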

Battlefields in Ukraine have already demonstrated the problem. Ukrainian and Russian forces regularly launch waves of first-person-view drones that overwhelm manual defenses. In the Red Sea, Houthi forces have used drone-and-missile salvos to test U.S. Navy air-defense crews, who rely on semi-automated systems like the Phalanx CIWS that can operate in a fully automatic mode when the threat tempo demands it. The Pentagon’s counter-UAS push exists because commanders watched these engagements and concluded that human-only targeting cannot keep pace.

Some defense analysts argue this dynamic could gradually erode the practical distinction between semi-autonomous and fully autonomous systems. If an operator faces a rapid-fire series of “engage / do not engage” prompts with fractions of a second to evaluate each one, the approval step risks becoming a rubber stamp rather than a genuine decision. DoDD 3000.09 calls for “appropriate levels of human judgment” but does not define how much time, information, or situational awareness qualifies as appropriate. That ambiguity gives program managers room to maneuver, but it also means the guardrail’s strength depends heavily on implementation choices that remain classified.

The gaps outside observers cannot fill

Several critical unknowns sit behind the classification barrier. No publicly available document from the Autonomous Weapon Systems Working Group confirms whether this specific Anduril contract has undergone or been scheduled for the senior-level review that DoDD 3000.09 requires. The SAM.gov documentation confirms the contract exists but offers no technical detail about the AI involved: whether it uses rule-based logic, machine-learning models trained on large sensor datasets, or a hybrid approach.

There is also no published testing data on AI accuracy in counter-drone targeting tied to this award. Distinguishing a hostile drone from a bird, a friendly aircraft, or a piece of debris is one of the hardest classification problems in automated defense, especially under battlefield conditions involving electronic jamming, poor visibility, and mixed airspace. Even small error rates carry serious consequences when the output is lethal force.
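A rough base-rate calculation shows why “small error rates” are not reassuring on their own. All of the numbers below are invented for illustration; no accuracy figures for this system are public.

```python
# Invented base-rate figures; no test data for this award is public.
benign_tracks_per_day = 5000   # birds, friendly aircraft, debris in busy airspace
hostile_drones_per_day = 10
false_positive_rate = 0.01     # 1% of benign tracks misclassified as hostile
detection_rate = 0.95          # 95% of hostile drones correctly flagged

false_alarms = benign_tracks_per_day * false_positive_rate   # 50 per day
true_alarms = hostile_drones_per_day * detection_rate        # 9.5 per day
precision = true_alarms / (true_alarms + false_alarms)

print(f"{false_alarms:.0f} false alarms vs {true_alarms:.1f} real threats per day")
print(f"Share of hostile flags that are actually hostile: {precision:.0%}")  # ~16%
```

Because benign tracks vastly outnumber hostile ones, even a 1 percent false-positive rate means most hostile flags point at something harmless, which is exactly the kind of queue a fatigued operator learns to wave through.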

Equally unclear is whether the contract includes requirements for ongoing monitoring once the system is fielded. Performance in controlled tests can diverge from performance in real operations. Without mandated audits of engagement logs, operator-override tracking, or periodic accuracy reviews, the “human in the loop” role could weaken over time as operators learn to trust the machine’s recommendations by default. The public record does not show whether the Pentagon has built those safeguards into this particular deal.

How the Anduril contract fits a broader autonomous-weapons race

The United States is not working in isolation. China has publicly demonstrated autonomous drone swarms, and its defense industry markets AI-enabled targeting systems for export. At the United Nations, talks on lethal autonomous weapon systems under the Convention on Certain Conventional Weapons have stalled for years, with no binding international treaty in sight as of mid-2025. The Pentagon’s approach, embedding AI deeply into the targeting chain while insisting on human authorization, represents a middle path that satisfies neither advocates of a full ban on autonomous weapons nor those who argue that removing the human entirely is the only way to match the speed of modern threats.

For now, the strongest evidence points to a Defense Department that is moving fast to field AI-assisted counter-drone systems under a policy framework built around human judgment at the moment of fire. The Anduril contract is one concrete step in that direction. But the details that would let the public evaluate whether the promise of meaningful human control can survive contact with a drone swarm remain locked behind procurement redactions and classification markings. The next real test will come not in a policy document but on a firing range or a forward operating base, where milliseconds matter and the operator’s finger is the last checkpoint between an algorithm’s recommendation and a lethal shot.

*This article was researched with the help of AI, with human editors creating the final content.