How the Race to Beat China at AI Drove America to War with Iran
The race between America and China for AI dominance isn’t just about technology. It’s about infrastructure, energy, and military power—and the Middle East has become the front line.
The U.S. strike on Iran in February 2026 was the result of converging pressures: a global AI race with China, the degradation of Iran’s regional proxies, and the transformation of warfare around technology and infrastructure. It shows how modern conflict emerges not from single decisions but from the intersection of geopolitical, technological, and strategic forces.
By Nick Holt
March 7, 2026
In the final week of February 2026, the United States government did something it had never done before.
It designated an American technology company a national security threat.
The following morning, the United States and Israel launched the most consequential military operation in the Middle East since the invasion of Iraq.
Within hours of the first strikes, another American AI company signed the Pentagon contract Anthropic had just lost — and closed the largest private funding round in the history of capitalism.
Two days later, Iranian drones struck Amazon Web Services data centers in the Gulf.
The events were reported as separate stories.
* * *
On March 1, Iranian drones struck three Amazon Web Services facilities in the UAE and Bahrain.
Regional banks went offline. Payment systems failed. Commercial life across the Gulf collapsed for hours.
Amazon, one of the world’s most sophisticated communications operations, described what had hit its facilities as “objects.”
On March 4, the CEOs of Amazon, Google, Meta, and OpenAI gathered in the East Room of the White House and signed a pledge to fund their own power generation for AI data centers.
The president called it a win for American families.
The same day, the Senate voted 51 to 49 against limiting the president’s war powers over Iran. Both were reported as separate stories.
They are not separate stories. They are one story.
And it begins not in February 2026 but in July 2017, in Beijing, when the State Council of the People’s Republic of China quietly published a document that set the terms for everything that followed.
To understand why, you have to understand what was different about this moment — and why the case that had existed for 47 years finally found its answer in February 2026.
* * *
Why Iran. Why Now.
This is what makes Iran categorically different from every other state that has acquired or sought nuclear weapons in the postwar era.
Pakistan has nuclear weapons. North Korea has nuclear weapons. India has nuclear weapons. Israel has nuclear weapons it will not officially acknowledge. The United States has managed to live with all of them.
It has done so because none of those states has paired its arsenal with a declared theological obligation to eliminate a specific American ally.
The variable that changes the calculation is not the weapon. It is the ideology attached to it.
The claim that America went to war because Israel told it to is not so much wrong as insufficient. It mistakes influence for control.
Israel has genuine influence in Washington — documented across forty years of lobbying, intelligence sharing, military cooperation, and diplomatic coordination. But influence is not instruction.
Netanyahu has been arguing for military action against Iran’s nuclear programme since he held up a cartoon bomb at the UN General Assembly in 2012. For over a decade, Washington said no — not because the threat was doubted but because the cost was too high.
Then came a president who had already torn up the nuclear deal, assassinated Iran’s most effective military commander, moved the American embassy to Jerusalem, and described himself, without embarrassment, as the most pro-Israel president in American history.
Israel did not instruct Donald Trump. It did not need to. It waited thirteen years for a president whose instincts, incentives, and ideology aligned with a decision Israel had already made.
When the proxy shield was down, when Russia was consumed in Ukraine, and when a trillion-dollar AI infrastructure deadline made inaction as costly as action, the window opened.
* * *
The Window
The case against Iran did not require construction in February 2026. It had been accumulating for 47 years. The question is not why Iran. The question is why now.
The answer requires understanding three constraints that had operated independently for years and that, for the first time, lifted simultaneously.
The First Constraint: The Shield
Iran’s deterrence was never primarily nuclear. It was architectural.
Over four decades, the Islamic Republic had constructed a system of proxy forces across the Middle East — Hezbollah in Lebanon, Hamas in Gaza, the Houthis in Yemen, Shia militias across Iraq and Syria — that functioned as distributed retaliatory capacity.
Strike Iran directly and the response would come not from Tehran but from everywhere simultaneously.
This architecture was Iran’s answer to American and Israeli conventional superiority.
October 7, 2023 began its dismantlement.
What followed across sixteen months was the most systematic degradation of a proxy network in modern Middle Eastern history.
Israel killed Hassan Nasrallah and decimated Hezbollah’s command structure. Hamas was operationally hollowed out. The Houthi threat was substantially reduced. Iranian weapons shipment routes were interdicted.
By February 2026, the axis of resistance was at its lowest operational capacity since its creation.
The shield had not been destroyed. But it had been damaged enough that the retaliatory calculus had genuinely changed — and the window would not stay open.
Iran would rebuild. It always had.
The Second Constraint: The Brake
Every previous American administration had calculated Russian response into any Iran scenario.
Moscow had longstanding diplomatic, commercial, and military relationships with Tehran.
A US-Israeli operation that killed the Supreme Leader of a Russian partner state would, in any previous decade, have triggered serious countermoves — accelerated weapons transfers, great power signaling that makes regional conflicts metastasize.
In February 2026 that calculation had changed fundamentally.
Russia had been fighting a war of attrition in Ukraine for three years. Its conventional military capacity had been consumed by a conflict it had expected to last weeks.
Its ability to project meaningful force into the Middle East — already limited compared to its Cold War posture — had been further constrained by a war it could not afford to lose and could not find a way to win.
The historical brake on American action was not available — not because anyone had planned it that way but because Vladimir Putin had made a catastrophic miscalculation in February 2022 whose consequences were still compounding four years later.
The Third Constraint: The Deadline
In July 2017, the State Council of China published a document that received modest coverage in Western technology press and essentially no coverage in Western foreign policy analysis.
The Next Generation Artificial Intelligence Development Plan set a precise target: China would become the world’s leading AI power by 2030.
This was not aspiration. It was a state directive from a government that does not publish targets it does not intend to meet.
It came with funding commitments, institutional mandates, and explicit civil-military integration — the unification of AI development across civilian and military applications as a single national objective.
Washington had no equivalent doctrine until 2019. China declared this race. America entered it.
By early 2025, the question of whether China was on schedule had been answered in a way that concentrated minds more than any intelligence assessment.
DeepSeek demonstrated that Chinese AI capability had reached the global frontier — not approaching it, but at it — ahead of the timeline American strategic planners had been using to calibrate their response.
Against this background, the physical infrastructure dimension of the race had become decisive.
In January 2025, the Stargate joint venture committed $500 billion to American AI infrastructure. Among its founding equity partners was MGX — an Abu Dhabi sovereign wealth fund.
The Gulf states had what the AI race required above almost everything else: cheap and abundant energy, sovereign capital and political will to become the physical infrastructure platform of the AI era.
China builds power infrastructure fifteen times faster than the United States. The GCC AI Stack strategy identified Gulf energy as a structural advantage that China could not easily replicate.
One variable made the entire thesis fragile.
Iranian missile and drone capability — demonstrated with precision against Saudi oil facilities in 2019 and against American bases across the region since — was the single security risk that Gulf infrastructure papers identified as potentially decisive.
Half-built data centers cannot be rerouted like oil barrels.
A sustained Iranian campaign against Gulf construction would not delay the AI buildout. It would reset it by a decade.
Against a Chinese programme running on schedule and ahead of Western assessments, a decade’s delay was not a setback. It was a concession of the race.
The 2030 deadline was visible to both governments. Every month of Iranian regional threat capacity left intact was a month of Chinese acceleration left unanswered.
Three constraints had lifted simultaneously. This had never happened before. It would not last.
The proxy shield was down. The Russian brake was off. The AI deadline was live.
None of these conditions had existed simultaneously before February 2026.
The question that had been deferred for 47 years — what happens when the cost of action finally falls below the cost of inaction — had found its answer not through anyone’s master plan but through the convergence of a Ukrainian miscalculation, sixteen months of Israeli military operations, a Chinese state directive published eight years earlier, and a sovereign wealth fund in Abu Dhabi that needed its infrastructure investments to survive.
This is not how conspiracies work. This is how history works. What the convergence made possible, a parallel system had spent seven years making ready.
* * *
The Militarization of the Supply Chain
On February 11, 2019, President Trump signed Executive Order 13859, the American AI Initiative — the first formal acknowledgement that AI infrastructure was a national security asset to be controlled, protected, and denied to adversaries.
The public framing was economic. The strategic logic was different: energy, chips, and data were the three inputs that would determine the outcome of the most consequential technological competition in the country’s history.
The man who wrote it as Chief Technology Officer of the United States was Michael Kratsios.
In May 2021 Kratsios left government and joined Scale AI as Managing Director.
Scale AI’s primary customer was the Department of Defense. Its core work was data labeling — the preparation of training datasets on which military AI models depend.
Its most significant contract was with Project Maven, the Pentagon’s AI targeting system designed to process drone and satellite imagery and identify targets at machine speed.
Within eight months of the Kratsios hire, Scale AI won a $250 million federal contract. Between 2021 and 2024 its federal contract obligations grew from $15 million to $110 million.
The targeting infrastructure Scale AI built during this period was operational in February 2026 when Operation Epic Fury commenced.
Kratsios had by then returned to government as Director of the White House Office of Science and Technology Policy. He presided over the East Room ceremony on March 4.
He was the only individual to hold senior institutional roles at every stage of the sequence — the doctrine, the implementation, the mobilization.
His career does not prove coordination. It demonstrates continuity.
In a system this large, continuity is more significant than coordination. Coordination requires intent. Continuity requires only that the same people, pursuing their own rational interests at each stage, happen to be present when the system they helped build arrives at its destination.
In October 2024 the White House published a National Security Memorandum on AI whose central instruction was precise: the United States must harness AI with responsible speed to achieve national security objectives.
The memorandum did not define responsible speed.
The operational meaning was clear to anyone inside the system: any constraint on the pace of military AI deployment was, by definition, a national security liability.
In 2025 the AI Diffusion Rule established the National Verified End User programme — the legal mechanism through which American technology companies access the volumes of advanced AI chips their infrastructure timelines require.
To maintain NVEU status they must align their technology security and clean energy policies with the US government.
The rule also formally classified cloud infrastructure as Critical National Security Infrastructure.
The technology companies that had spent two decades building consumer products had been reclassified, without announcement, as components of the national security apparatus. The reclassification was accomplished through an export control regulation.
The leverage was not money. It was chips.
Without NVEU status the chip quotas necessary to build and operate frontier AI infrastructure were unavailable.
The March 4 pledge — the East Room ceremony, the commitment to fund their own power generation — was the public expression of an alignment the NVEU programme had already made structurally necessary.
The CEOs who signed it were not making a voluntary commitment. They were completing a compliance transaction whose terms had been set in advance.
Against this architecture, in mid-February 2026, Anthropic drew a line.
The company’s contracts with the Department of War included specific prohibitions: no fully autonomous lethal weapons systems, no AI-enabled mass domestic surveillance.
These were not new positions adopted under pressure. They had been Anthropic’s stated terms from the beginning of its government work, accepted by the Pentagon in July 2025 when Claude became the first frontier AI model approved for use on classified networks.
What changed was not Anthropic’s position. What changed was that the Department of War, preparing for an operation that would require AI to shorten kill chains at a scale that human targeting alone could not achieve, found those terms incompatible with its operational requirements.
The Pentagon demanded that Anthropic allow Claude to be used for all lawful purposes. Anthropic refused.
The negotiation lasted weeks, grew increasingly bitter, and ended on February 27 at 5:01 pm when the deadline passed without agreement.
Within minutes, Defense Secretary Pete Hegseth designated Anthropic a Supply Chain Risk to National Security.
It was the first publicly recorded instance of an American company receiving the supply chain risk designation — a classification previously reserved for foreign adversaries, specifically Chinese technology firms.
Anthropic is headquartered in San Francisco. Its founders are American. Its offense was a contractual refusal to remove ethical constraints from its AI systems.
The following morning, as the first strikes of Operation Epic Fury hit targets across Iran, OpenAI signed its replacement contract with the Department of War.
Unlike Anthropic, OpenAI agreed that the Pentagon could use its technology for all lawful purposes.
The same day OpenAI closed its $110 billion funding round. Amazon invested $50 billion. Nvidia invested $30 billion. SoftBank, already a 40 percent equity holder in Stargate, invested $30 billion.
AWS became the exclusive third-party cloud distribution partner for OpenAI’s enterprise platform.
OpenAI published a press release. Leadership, it said, will be defined by who can scale infrastructure fast enough to meet demand.
There is one additional detail in the public record that was not prominently reported.
The Washington Post and The Wall Street Journal both reported that Anthropic’s AI systems — embedded in the Pentagon’s operations through its Palantir agreement — were used to assess intelligence and identify targets during the Iran strikes.
The US military has not confirmed this. Anthropic has not denied it.
The designation did not interrupt the operation. The system was already running. The paperwork caught up later.
Amazon called the drones that hit its data centers objects. The world was watching. Not all of it from the same angle.
* * *
The Audience
But the most consequential audience for everything that happened in the last week of February 2026 was not in Tehran. It was in Beijing.
China imports nearly half of its oil through the Strait of Hormuz.
When Chinese Foreign Ministry spokesperson Mao Ning condemned the strikes, the substantive statement came in response to a question about the strait: its closure, she noted, threatened an important international trade route whose stability served the common interests of the international community.
The diplomatic formulation was careful. The anxiety behind it was real.
An Iran conflict that closed the strait and destabilized the Gulf was not, from Beijing’s perspective, a distant regional problem. It was a supply chain crisis with direct implications for the economy that funds the AI programme the 2030 target requires.
Pete Hegseth called China a non-factor. The characterization was militarily accurate and strategically incomplete. The relevant question was never whether China would intervene. It was what China would conclude.
Consider what Beijing observed in the last week of February 2026.
It observed that the United States had designated an American AI company a national security threat for refusing to remove ethical constraints — demonstrating that the government was willing to use procurement power to enforce complete alignment between frontier AI companies and military operational requirements.
It observed that the replacement company signed its Pentagon contract within hours — demonstrating that the transition had been planned.
It observed that AI systems shortened kill chains sufficiently to enable strikes against 1,000 targets in the first 24 hours — demonstrating the military effectiveness of frontier AI at a scale that no previous conflict had made visible.
And it observed that the data centers of the company that had just closed the largest private funding round in the history of capitalism were burning in the UAE two days later — demonstrating that AI infrastructure was now a primary military target in great power conflict.
Every one of these observations pointed toward the same conclusion for any Chinese defense planner thinking clearly about Taiwan, about the South China Sea, about the next decade.
That dependence on American AI infrastructure, American chips, or American cloud platforms was a strategic vulnerability that could be activated by a supply chain designation, a sanctions regime, or a drone strike.
That the path to Chinese AI security ran not through integration with the global technology supply chain but through the complete domestication of every critical input — semiconductors, energy, data, models, infrastructure — within Chinese sovereign control.
This is precisely what the chip war beginning in 2022 was designed to prevent. Operation Epic Fury, by demonstrating so publicly what AI-assisted warfare looks like at scale, may have provided Beijing with the domestic political justification for the acceleration that closes the gap faster than the export controls can maintain it.
Hegseth was right that China was a non-factor in the Iran war. He may have been describing the last conflict in which that was true.
* * *
The Mechanism
In his farewell address in January 1961, Dwight Eisenhower warned of a military-industrial complex whose influence was felt in every city, every statehouse, every office of the federal government.
The warning was remarkable for its source — a five-star general, the supreme commander of Allied forces in Europe — and for its precision.
Eisenhower named the thing because it was nameable. Lockheed. Raytheon. The revolving door between the Pentagon and the defense contractors whose profits depended on the continuation of the threat environment that justified their existence.
The complex was legible because its components were discrete and the relationships between them were direct enough to be mapped.
The system documented in this piece does not have a building.
It has data centers in the UAE and Bahrain that look like office parks — facilities Amazon described, when Iranian drones struck them, as having been hit by objects.
It has a compliance programme called the National Verified End User, buried in an export control regulation, that determines which companies can access the chips required to remain competitive in the most consequential technological race of the century.
It has a power pledge signed in the East Room, described as a consumer electricity initiative, that was in structural terms a military procurement transaction completed under presidential cover.
It has an AI targeting system that shortened kill chains sufficiently to strike a thousand targets in twenty-four hours, built by a company whose managing director wrote the doctrine that made the system necessary, implemented it, and then returned to government to preside over the ceremony at which the transaction was completed.
None of this required conspiracy. That is the point.
The government needed infrastructure it could not build fast enough through civilian processes. The companies needed chip access they could not obtain without government alignment.
China’s construction speed set the pace. The responsible speed doctrine removed the friction. The NVEU programme closed the transaction.
Each actor pursued its own logic. No single actor designed the whole.
The whole assembled itself from the intersection of rational interests operating under the pressure of a deadline that a Chinese state document had set in 2017 and that no one in the system had the option of ignoring.
The social contract of the internet was built on an assumption that proved to be a category error.
The platforms were assumed to be neutral utilities — global, civilian, indifferent to the political and military uses to which their infrastructure might be put. The neutrality was always partially fictional.
But the fiction was maintained, and it mattered, because it was the basis on which billions of people organized their digital lives.
The 2025 AI Diffusion Rule did not announce the retirement of that fiction. It built the replacement — a legal architecture in which the companies that run the internet’s infrastructure are formally classified as Critical National Security Infrastructure, in which their access to the inputs required to remain competitive is contingent on alignment with American government security policy, and in which the distinction between a technology company and a defense contractor is a matter of public relations rather than operational reality.
Every person who stores data in an American cloud platform is, without having been asked, a participant in that apparatus. Every business that ran on AWS infrastructure in the Gulf region discovered on March 1, 2026 what participation looks like when the apparatus becomes a target.
There is a line in Eisenhower’s farewell address that receives less attention than the military-industrial complex warning but that is, in retrospect, the more prescient one. We must never let the weight of this combination endanger our liberties or democratic processes. We should take nothing for granted.
He was describing a combination that was still visible. Still nameable. Still, in principle, subject to the democratic oversight that visibility makes possible.
The system documented here was not debated by Congress. It was not presented to the public as a choice.
It was assembled through executive orders, export control regulations, compliance programmes, and a power pledge described as a consumer protection measure — each component reported separately, in different sections of different publications, for different audiences, by journalists covering technology or defense or energy or politics but not, as a matter of institutional structure, all four simultaneously.
The week of February 28, 2026 was not an anomaly. It was a disclosure. The system that produced it had been under construction for at least seven years, in plain sight, in public documents, by people pursuing rational interests under the pressure of a competition that neither side started and neither side can afford to lose.