Magical Thinking and High-Tech War Making
The mainstream press is agog with the use of AI in the current bombing campaign of the U.S. and Israel against Iran. Maybe this time the U.S. can win a war thanks to the latest technology! Yes, it didn’t work in Vietnam, or Afghanistan, or Iraq, but hope springs eternal.
And that is what it is…hope. Certainly Iran will not be conquered by the U.S. and Israel through air assaults and demands for unconditional surrender. Even the dream of regime change is not going to come true, or at least change to a regime that would support Israel’s goal of dominating the Middle East along with the U.S. Empire.
Still, the Pentagon and the tame press can’t get enough of the latest generation of military computing—Algorithmic Intelligence in the form of large language models (LLMs)—claiming it might well produce painless (at least for one side) techno-victories when correctly prompted. But this fantasy is no more real than Erica Jong’s “zipless fuck” where the clothes of the lovers magically melt away as two anonymous young bodies generate beautiful orgone energy on a rushing train. Zipless fucks are not real in love, nor in war.
So what is this technology? It is Palantir Technologies’ Maven Smart System (MSS), a data management system using older forms of machine learning, married to an LLM. It was first used in 2021 to help manage the U.S. withdrawal from Afghanistan that went so famously well. It is the exact same system that Israel has been using to target Gaza, which has failed to eradicate Hamas, let alone the Palestinians living there, but has alienated Israel from the rest of the world, including the citizens of its main patron, the United States. This is all detailed in my recent book, free online: AI, Sacred Violence, and War — The Case of Gaza.
It is also the same system ICE uses to target its prey, resulting in the detention of thousands of legal immigrants and even U.S. citizens and the deaths of dozens in ICE custody or killed in the streets. (See the excellent research material on the website of the admirable Purge Palantir campaign.)
If you have not discerned incredible accuracy and success in these two deployments, you will not be surprised that the U.S. apparently hit a girls’ school in Minab, killing over 175 students and teachers, at the very start of the current attacks on Iran.
Over the next ten days I will publish four related articles explaining why AI Cannot Win the Iran War. First they will be on the Syndicate for Initiative substack (free, just sign up!), and then on Daily Kos a few days later. They will look closely at how the current form of AI is just doubling down on the mistakes Israel and the U.S. have been making for decades in how they wage postmodern war. As of now, the articles will be:
1) The Limits of Correctness and Military Systems
2) AI-Mediated Assassinations
3) Human Shields and Hypocrisy
4) Palantir is Not an Indestructible Crystal Ball
This work is part of my ongoing research on AI and Power: Prefiguring Algorithmic Intelligence, a book-in-progress with Ángel Gordo, that will be published by Intellect next year. That includes “Ten illusions about AI that stand in the way of controlling it.” These essays are an elaboration on one of these illusions:
7) Military AI can win battles and wars. Most wars are not won by the side with the best technology; they are almost always won by the side with the appropriate technology, tactics, strategy, and logistics for the war it is waging. And the will to win…the willingness not just to kill, but to be killed. To shed blood and to bleed.
Military research and applications are central to the development of AI. The military is drawn to it not because it leads to victory in battle, but because it mimics the bureaucratic logic of postmodern armies. Nation-states, militaries, the super rich, and most forms of AI share a deep affinity for managerial control based on instrumentalist intelligence. They are natural allies, when not competing.
This first essay will end with a look at what the Maven/Claude system being deployed now to kill Iranians actually does, and then with a Quick and Dirty look at the story of war and technology, the crucial context for this madness.
Killing the “Man-in-the-Loop”
Since World War II the U.S. military has thought in terms of systems made up of humans and machines. One of the key systems in this worldview is the Kill Chain. This is how targets are acquired and destroyed. For years, any discussion of improving the Kill Chain with automation has included promises that there will always be a “man-in-the-loop” for lethal decision making. The current generation of AI has allowed politicians and soldiers to do away with this assurance, never sincere, and now just an outright lie.
The Maven/Claude Kill Chain is automated from aggregating and summarizing intelligence, to target acquisition, to legal justification, to the firing decision. The system generates so many targets that it is impossible for humans to play a role in the majority of missile launches. (Most bombs, and all smart bombs, are now actually missiles, by the way, or are delivered by drones.)
The press is reduced to quoting experts proclaiming the advent of an “era of AI-powered bombing quicker than ‘speed of thought’” (Booth and Milmo 2025). It takes about 50 milliseconds for a human to perceive a sensory stimulus. Pretty fast! And nonsense. The time from intelligence gathering to strike can be years. Certainly, even the gap from the AI system announcing a target to the deployment of explosives on that target is a matter of many seconds if the weapon is a loitering platform. If not, it is still a cycle measured in minutes or hours, at a minimum. But hey, it sounds good, even if an AI can’t kill you faster than you can think. It is horrifying anyway. Look at Gaza, morphing into the world.
Other reporters marveled at strikes hitting Iran at “a blistering 1,000 targets in the first 24 hours” (Copp, Dwoskin and Duncan 2025), although actually it was more like 900, and the U.S. and Israel planned together for months to build “as valuable and extensive a target bank as possible,” as they admitted in a statement soon after the assault began (Copp, Dwoskin and Duncan 2025). Besides, not only have there been significant misses and horrific collateral damage, the whole idea that Iran will collapse because smaller and smaller targets are being hit at faster and faster rates with bigger and bigger explosives is ridiculous.
Bombing people makes them angry. Only troops on the ground launching an invasion where a million people might die could possibly take Iran, and while Trump may be that evil and stupid (Saddam Hussein was), the U.S. military is not.
The technical term for the shortening of the Kill Chain with automation is “data compression.” How is this achieved? By eliminating living human judgement and turning the decisions over to algorithms (reified and simplified human judgements), which clearly degrades precision but increases speed. More enemies are killed; more enemies are captured. This is the goal in Gaza, in Iran, and in the cities and towns of the United States where ICE hunts. It is an AI Alibi.
Technology Does Not Win Wars, People Do
Dreams of “silver bullet” weapons (technologies) that magically produce victory have been common among warriors since war was invented, as I chronicle in my 1997 book, free online, Postmodern War. Of course, military technology innovations are a crucial part of the history of war: that is why the Stone Age was succeeded by Bronze and then Iron. But it is not a simple case of the best tech always winning. Usually, the side with the greatest will to win, and the appropriate technologies, is the victor. So in contemporary times Vietnam defeated the French, the Americans, and the Chinese to preserve its independence, and Afghanistan saw off the two greatest military empires in the history of the world.
But Western militaries are particularly enamored with the hope that the next Revolution in Military Affairs (RMA) will produce decisive yet easy victories. At first, such “revolutions” happened rarely (stirrups, gunpowder, rifling), but as with so many aspects of humanity they have been coming at an accelerating rate. Just as the number of humans, the amount of carbon in the air, and the extinction of species are rising exponentially, so are military technological changes. Today it makes more sense to talk about a Permanent Revolution in Military Affairs or a Perpetual Revolution in Military Affairs, depending on whether you prefer allusions to Trotskyism or to fringe physics.
The United States has been the main innovator in this process going back to World War II. Since then U.S. military thinking has prioritized “information” as the most important “force multiplier,” replacing “force of fire” (explosive power on the enemy).
The current enthusiasm for Large Language Models and other forms of Algorithmic Intelligence was completely predictable. The U.S. military has been funding computing generally, and artificial intelligence fantasies specifically, since the very beginning of the Data Age. Its role in shaping the technoscience we have now has been decisive. One can attribute transistors, chips, the Internet, and various forms of AI to Pentagon and CIA investment. It is telling that Palantir Technologies itself was originally funded by a CIA venture capital fund.
Early military commitments to AI proved massive failures, especially the search for natural language processing and the dream of Star Wars, Ronald Reagan’s Strategic Defense Initiative (SDI), which would have used AI to control an anti-ballistic missile shield, if it had worked. Even now, Trump is proposing a “Golden Dome” to protect the whole of the United States from ICBMs, as Israel’s AI-enabled Iron Dome protects Israel from most, but not all, Iranian missiles and drones.
In the next article, I will tell the inspiring story of how Computer Professionals for Social Responsibility, organizing against SDI and the Strategic Computing Initiative it needed, brought together a number of experts to articulate the Limits of Correctness for computing systems. These limits, and a few new ones which apply directly to LLMs, are why humanity should never give up control of powerful weapons, or life-determining decisions, to computers.
References
Booth, Robert and Dan Milmo (2025) “Iran war heralds era of AI-powered bombing quicker than ‘speed of thought’,” The Guardian, March 3.
Copp, Tara, Elizabeth Dwoskin and Ian Duncan (2025) “Anthropic’s AI tools Claude central to US campaign in Iran, amid a bitter feud,” The Washington Post, March 4.
Gray, Chris Hables (2025) AI, Sacred Violence, and War — The Case of Gaza, Palgrave/Pivot.
_____ (1997) Postmodern War: The New Politics of Conflict, Guilford/Routledge.
***
Note: This is the first in a series of linked essays about AI and the Iran War. They appear first on my free substack for the Syndicate for Initiative Press. I also post new material on my website, which also has dozens of my older essays and links to my free online books.