AI Has Entered The Kill Chain
Artificial intelligence is now defining targets on the battlefield
Artificial intelligence is changing how wars are fought. It doesn’t fire weapons. It decides what human commanders see as targets—and the race between great powers means the system will only move faster.
By Nick Holt
On the morning of February 28th 2026, missiles struck a girls’ primary school in Minab, in Iran’s Hormozgan province, during school hours. The victims were mainly girls between seven and twelve years old.
United Nations human-rights experts, citing reports of at least 165 dead, condemned the strike as a potential war crime under the Rome Statute and called for an independent investigation. The United States said it was investigating.
No one has been charged with anything. What the satellite record shows is a school. What the targeting system apparently saw is a question no official has yet answered in public.
To understand how this happens, we need to understand how modern militaries choose their targets. Military planners describe the process through which targets are identified and destroyed as the “kill chain”.
A reconnaissance aircraft spots a vehicle column. A satellite image reveals a missile site. A radio transmission exposes the location of an enemy command post. Analysts review the information, debate its meaning, and pass their conclusions up the chain of command. Only then does a decision to strike occur.
The chain is simple: a target is found, fixed, tracked, targeted, engaged and assessed. Each stage depends on information gathered from sensors, satellites, aircraft and intelligence networks.
For decades the pace of this process was limited not by weapons, but by how quickly human analysts could interpret the information required to confirm a target.
That constraint has begun to disappear.
Over the past decade military systems have started using artificial intelligence to process the enormous volumes of data produced by modern surveillance networks.
Algorithms now scan satellite imagery, analyse drone video and identify patterns in electronic signals at speeds no human intelligence team could match.
One of the earliest examples was Project Maven, launched by the Pentagon in 2017 to use machine learning to identify objects in aerial imagery.
The purpose of these systems isn’t to fire weapons automatically. The final decision to strike remains a human one. What the systems change is the information environment in which that decision is made.
Algorithms flag objects that resemble missile launchers, vehicles or troop formations. Analysts review the results and decide which signals deserve attention.
The technology doesn’t produce decisions. It produces priorities.
That distinction may sound subtle. It isn’t.
When algorithms filter the battlefield before analysts review it, they shape the picture of reality from which commanders make decisions.
Targets that algorithms identify appear urgent. Signals the system doesn’t recognise may never reach analysts at all. Like a social media feed, the algorithm determines what rises to the top — and what disappears from view.
The machine doesn’t decide to strike. But it has already influenced the chain of events that leads to the strike.
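How much a single score can shape what analysts see is easy to illustrate. The sketch below is a deliberately simplified Python toy, not a description of any fielded system: the detection records, the 0.7 threshold and the queue size are all invented for illustration. The point is only that one number decides which objects reach a human screen at all, and in what order.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    object_id: str
    label: str          # what the model thinks it is looking at
    confidence: float   # model score between 0 and 1

def triage(detections, threshold=0.7, queue_size=20):
    """Keep only high-confidence detections and rank them for review.

    Anything below the threshold never reaches an analyst's screen;
    anything above it is ordered by score, so the model's confidence
    decides both what is seen and in what order.
    """
    surfaced = [d for d in detections if d.confidence >= threshold]
    surfaced.sort(key=lambda d: d.confidence, reverse=True)
    return surfaced[:queue_size]

feed = [
    Detection("obj-014", "vehicle column", 0.91),
    Detection("obj-152", "possible launcher", 0.74),
    Detection("obj-093", "unclassified structure", 0.41),  # silently dropped
]
for d in triage(feed):
    print(f"{d.object_id}: {d.label} ({d.confidence:.2f})")
```

Raise the threshold and the queue gets shorter; lower it and analysts drown. Either way, the choice has been made before anyone looks.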
Artificial intelligence compresses the time required to move from detection to engagement. What once required hours of analysis can occur in minutes. That acceleration gives militaries a powerful advantage in environments where speed determines survival: missile defence, drone warfare and high-tempo air operations.
It also creates a new kind of strategic pressure.
Military planners have long understood that the side that moves faster through the OODA loop (observe, orient, decide, act) tends to dominate the battlefield.
AI dramatically accelerates the first two stages: observing the environment and orienting decision-makers within it. When one side gains that advantage, the other side must match it.
The result is a competition not just in weapons but in decision speed.
That competition has already begun.
A New Kind of Campaign
The implications of this shift become visible when a military campaign unfolds at unusual speed.
Modern air operations traditionally require time. Intelligence must be assembled, targets verified, strike packages planned and commanders briefed before weapons are released.
Even with advanced surveillance systems, the process of confirming targets has historically imposed a natural brake on the pace of operations.
Recent conflicts suggest that this brake may be weakening.
When the United States and its allies began striking targets in Iran, the tempo of the opening phase was staggering. Within the first day, hundreds of targets had reportedly been identified and engaged across multiple locations.
The scale of the operation suggested an intelligence environment capable of processing large volumes of surveillance data extremely quickly.
That speed doesn’t necessarily mean that artificial intelligence selected the targets. It does suggest that AI systems may have played a role in organising the information from which those targets were chosen.
The modern battlefield produces an enormous quantity of signals. Satellites observe movement. Drones transmit continuous video. Electronic surveillance systems intercept communications and radar emissions.
AI systems perform the first stage of interpretation — scanning imagery, clustering signals and flagging anomalies. Human analysts then review the results and decide what those signals mean.
Artificial intelligence doesn’t determine the target. It structures the search, turning a battlefield too vast and data-rich for human analysts to examine directly into a set of manageable priorities.
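A toy example makes that structuring concrete. The following Python sketch is illustrative only; the grid size, the signal types and the ranking rule are assumptions, not anything a military system is known to use. It groups point detections into coarse map cells and ranks the cells, which is one simple way raw signals become a worklist.

```python
from collections import defaultdict

def cluster_by_cell(detections, cell_km=1.0):
    """Group point detections into coarse grid cells and rank the cells.

    Each detection is (x_km, y_km, kind). Cells where several distinct
    signal types co-occur float to the top of the worklist; isolated
    signals sink to the bottom, whatever they actually are.
    """
    cells = defaultdict(list)
    for x, y, kind in detections:
        cells[(int(x // cell_km), int(y // cell_km))].append(kind)
    return sorted(cells.items(),
                  key=lambda kv: (len(set(kv[1])), len(kv[1])),
                  reverse=True)

signals = [
    (12.3, 4.1, "radar emission"),
    (12.7, 4.4, "vehicle heat signature"),
    (12.5, 4.0, "radio intercept"),
    (31.9, 8.2, "vehicle heat signature"),
]
for cell, kinds in cluster_by_cell(signals):
    print(cell, kinds)
```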
The same process also shapes the battlefield as analysts see it.
The result is a form of warfare that still appears human-directed but increasingly depends on machine-assisted perception. Commanders make the final decision. Pilots or missile crews execute the strike.
Yet the informational picture guiding those decisions is no longer assembled entirely by human minds.
The question isn’t whether this is happening.
The question is what it means when it goes wrong.
The First Consequence
AI targeting systems learn from historical data. They identify targets by detecting statistical patterns that resemble examples in their training data. They do not know what a building is. They know what a building has looked like in previous images associated with a target.
A rectangular structure near vehicles may resemble a command facility. A cluster of moving figures may resemble troop movement. A parking area near a compound may resemble vehicle staging. The system does not understand context. It identifies correlations between shapes, movement, and proximity.
Proximity becomes a variable. Separation requires interpretation. A school beside a military facility may resemble the spatial pattern of a military compound itself. The model flags what the training data taught it to flag.
The output is not a conclusion. It is a probability.
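That last point, a probability rather than a conclusion, is worth making concrete. The sketch below is a toy linear scorer with invented feature names and weights; it stands in for whatever far larger model a real system might use. It shows how a building that merely sits near military infrastructure, has a rectangular footprint and has vehicles parked outside can come out of such a model with a high score, whatever the building actually is.

```python
import math

def compound_score(features, weights=None):
    """Return a pseudo-probability that a structure is a 'military compound'.

    A toy linear model: the feature names and weights are invented for
    illustration. It scores correlations between shape, vehicles and
    proximity; nothing in it represents what the building is for.
    """
    if weights is None:
        weights = {
            "rectangular_footprint": 1.2,
            "vehicles_within_100m": 0.9,
            "distance_to_known_military_km": -0.8,  # closer means a higher score
            "perimeter_fence": 0.6,
        }
    z = sum(weights[k] * v for k, v in features.items())
    return 1 / (1 + math.exp(-z))  # squash to a 0-1 score

# A school: rectangular, fenced, buses parked outside, 400 metres from a depot.
school = {
    "rectangular_footprint": 1.0,
    "vehicles_within_100m": 1.0,
    "distance_to_known_military_km": 0.4,
    "perimeter_fence": 1.0,
}
print(f"flagged with score {compound_score(school):.2f}")
```

The number that comes out looks precise. It encodes nothing about what the building is for.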
Human analysts still make the decision. But the signals they examine have already been filtered by the model’s classification system. The machine decides which patterns rise to the surface and which remain buried in the data.
No one intended to strike a school. The algorithm made an inference from a pattern. Humans acted on the signal the system surfaced.
That creates a different kind of accountability problem than negligence or intent.
Traditional investigations can reconstruct a chain of human decisions. But when the first stage of interpretation occurs inside a machine-learning model, the chain becomes harder to trace. The model’s reasoning may not be fully explainable, even to the engineers who built it.
Responsibility becomes diffuse. The analyst approved the strike. The commander authorised it. The engineers built the model. The training data shaped the pattern the model recognised.
The decision still appears human.
But the perception that led to it was constructed by software.
That is an accountability problem for which the law has no framework and the military no doctrine.
Where Responsibility Breaks Down
For most of modern warfare, responsibility inside the targeting process has been relatively clear. Intelligence officers gather information. Analysts interpret it. Commanders review the evidence and authorise strikes.
If a mistake occurs — if a target is misidentified or civilians are harmed — investigations can trace the chain of decisions that led to the outcome. The logic of accountability follows the structure of the kill chain itself.
Artificial intelligence complicates that structure in ways that go beyond the question of who approved which strike. When AI systems filter battlefield data before human analysts review it, the first stage of interpretation occurs inside a machine-learning model.
Human analysts still examine the results. Commanders still authorise the strike. But the informational environment that shaped those judgments was created by software.
Modern machine-learning systems often operate as black boxes. The model produces a result, but the internal reasoning that led to it may not be easily explained — sometimes not even by the engineers who built it.
There’s also a commercial dimension that receives little public attention. The systems running inside the Pentagon’s targeting infrastructure were built by private companies including Palantir Technologies, OpenAI and, until recently, Anthropic.
These companies have shareholders, valuations and government contracts that create structural incentives to expand capability rather than constrain it. The people who built the filter have a financial interest in the filter being used.
On February 27th 2026, Anthropic was designated a supply-chain risk to national security — the first publicly recorded instance of such a designation being applied to an American company. The stated reason was Anthropic’s refusal to remove ethical constraints from its systems: no autonomous weapons targeting, no mass-surveillance capability.
The designation didn’t interrupt the operation. The systems were already running under a separate contract through Palantir’s Maven Smart System. The paperwork caught up later.
What the designation established was a principle.
In a military context, ethical constraints on AI systems aren’t a feature. They’re friction. And friction, in a competition for decision speed, is a liability.
The Race
That principle didn’t emerge from a single decision. It emerged from a structural logic that no individual actor designed and no institution can easily reverse.
The United States and China are engaged in a competition for AI military dominance that neither side believes it can afford to lose.
The Chinese state has been explicit about this since its 2017 Next Generation AI Development Plan, which set a target for global AI leadership by 2030.
The American national-security apparatus understood the implications immediately.
In that competition, every constraint one side imposes on itself becomes an advantage for the other.
If the United States builds slower, more auditable targeting systems to reduce the risk of errors like Minab, China doesn’t. If the United States insists on explainable models that commanders can interrogate before authorising a strike, China doesn’t.
If the United States pauses to develop legal frameworks for algorithmic accountability, China doesn’t.
The race doesn’t reward caution.
It punishes it.
This is why the speed trap matters beyond its tactical implications. When artificial intelligence compresses the kill chain from hours to minutes, human oversight doesn’t disappear. It becomes subordinate to the machine-constructed picture of the battlefield.
A commander given thirty seconds to approve a strike package generated by an algorithm isn’t really authorising anything. They’re ratifying a machine output they have no time to interrogate.
The human is still in the loop. But the loop has been compressed to the point where it no longer functions as a meaningful check.
The Anthropic designation is the clearest illustration of where this logic leads.
A company that said certain things shouldn’t be done was told, in effect, that the competition didn’t allow for that position.
The ethical constraint wasn’t evaluated and found wanting.
It was classified as an obstacle and removed.
Nobody decided that machines should determine who dies in war.
The competition made it inevitable.
That isn’t how conspiracies work.
It’s how history works.