Israel’s military chief announced that the Israel Defense Forces had carried out 2,500 strikes using more than 6,000 weapons in the first eight days of the Iran war. Defense analysts say a pace and volume of operations on that scale would be impossible without the AI-powered targeting and intelligence systems the IDF has developed and deployed in recent years. The extraordinary tempo has reignited the debate over autonomous and AI-assisted targeting in warfare, particularly as civilian casualty figures continue to climb and incidents like the Minab school strike raise questions about the adequacy of human oversight in the targeting process. (Source: CNN; NCRI)
The Targeting Infrastructure
Israel has been at the forefront of incorporating artificial intelligence into military operations. Reporting from the 2023-2024 Gaza conflict revealed systems including what was described as a target-generation platform that could identify potential military targets from large datasets: surveillance footage, communications intercepts, and geospatial intelligence. The IDF has not disclosed which specific systems are being used in the Iran campaign, but the scale of operations, more than 300 strikes per day across a country roughly the size of Alaska, would overwhelm traditional human-only intelligence analysis and targeting-approval processes. (Source: CNN)
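As a rough sanity check on the tempo figures cited above, the reported totals (2,500 strikes and more than 6,000 weapons over eight days) can be reduced to daily averages. This is a back-of-envelope sketch using only the numbers in the article; actual daily counts certainly varied over the eight days.

```python
# Back-of-envelope check of the strike tempo cited above:
# 2,500 strikes using more than 6,000 weapons over eight days.
strikes, weapons, days = 2500, 6000, 8

strikes_per_day = strikes / days        # 312.5 strikes per day
weapons_per_day = weapons / days        # 750 weapons per day
weapons_per_strike = weapons / strikes  # 2.4 weapons per strike

print(f"~{strikes_per_day:.0f} strikes/day, "
      f"~{weapons_per_day:.0f} weapons/day, "
      f"~{weapons_per_strike:.1f} weapons/strike")
```

The averages bear out the "more than 300 strikes per day" figure: roughly 312 strikes per day, sustained for more than a week, each expending on average between two and three weapons.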
The U.S. military has also confirmed using a number of new capabilities in its parallel campaign against Iran, though Defense Secretary Hegseth has not specified whether AI-assisted targeting is among them. The combined pace of U.S. and Israeli strikes across Iran, Lebanon, and the broader region suggests a level of automation in the intelligence-to-strike pipeline that would have been impossible in previous Middle Eastern conflicts.
Civilian Protection Questions
The debate over AI in military targeting centers on two competing concerns. Proponents argue that AI systems can process intelligence faster and more accurately than humans, reducing errors and potentially decreasing civilian casualties by enabling more precise target identification. Critics counter that the speed and scale enabled by AI can overwhelm the human oversight that international humanitarian law requires, creating situations where targets are approved and struck faster than commanders can meaningfully assess the risk of civilian harm. (Source: MIT Technology Review)
The Iranian Red Crescent’s report that strikes have hit targets in at least 174 cities, including residential buildings, schools, hospitals, petrol stations, and infrastructure, raises questions about the target selection criteria being applied. UNICEF’s confirmation of at least 181 child fatalities and the Minab school strike that killed 171 girls suggest that whatever targeting systems are in use, they are not preventing catastrophic civilian casualties. The difference between a targeting system that could protect civilians and one that is actually being used to do so is, critics argue, the critical gap that AI-enabled warfare has failed to close.
Ethical and Legal Implications
The scale of the Iran campaign is creating a real-world test case for the legal and ethical frameworks governing AI in warfare. The laws of armed conflict require distinction between military targets and civilian objects, proportionality between military advantage and civilian harm, and precautionary measures to minimize civilian casualties. Whether AI-assisted targeting at this tempo and scale can satisfy those requirements, particularly when the conflict has produced more than 1,200 deaths in less than two weeks, including hundreds of children, remains an open and urgent question. (Source: Al Jazeera; UNICEF)
For the AI industry, the Iran war provides uncomfortable evidence that military applications of the technology carry consequences far beyond the commercial realm. Anthropic’s ongoing refusal to provide its models to the Pentagon stands in sharper relief as the human costs of AI-enabled military operations become visible in real time. The war is demonstrating not the theoretical risks of AI in warfare that ethicists have debated for years, but the actual consequences when systems designed to process information at superhuman speed are deployed in the chaos and moral complexity of armed conflict.
Incidents like the Minab school strike raise concrete questions about whether targeting processes, AI-assisted or human-directed, are adequate to prevent catastrophic civilian harm. The ICRC has called on all parties to respect international humanitarian law, and human rights organizations have demanded independent investigations. At this volume of strikes, civilian casualties become a statistical near-certainty regardless of targeting methodology. (Source: Al Jazeera; UNICEF)
Anthropic’s position gains credibility with each civilian-harm incident; OpenAI, by contrast, reversed its military policy in 2024 and signed defense contracts. The divergence illustrates the fundamental tension between commercial opportunity and ethical responsibility that the war has forced into the open. (Source: MIT Technology Review; Newcomer)
The legal frameworks governing AI in warfare are struggling to keep pace with the technology’s deployment. The laws of armed conflict require distinction, proportionality, and precaution, principles designed for human decision-makers operating at human speed. When AI systems enable targeting decisions at a pace that exceeds meaningful human review, the legal requirements may be formally satisfied through automated checklists while the substantive protections they were designed to provide are effectively hollowed out. International legal proceedings examining AI-assisted targeting decisions from the Iran conflict are likely to occupy courts and tribunals for years, potentially establishing precedents that shape the future of warfare.