In a significant shift in military strategy, the US military’s adoption of AI tools, particularly Anthropic’s AI model Claude, has raised concerns that human decision-making in warfare is being sidelined. AI is reportedly streamlining the kill chain, enabling rapid target identification and approval, as exemplified by the recent execution of nearly 900 strikes on Iranian targets, which included the assassination of Iranian supreme leader Ayatollah Ali Khamenei. That such a surge of strikes occurred within a mere 12 hours illustrates how dramatically AI is accelerating military operations.
Experts describe a phenomenon termed “decision compression,” in which the time required to plan complex strikes is drastically reduced, potentially relegating human military and legal experts to rubber-stamping automated strike plans. The deployment of Claude, integrated with systems from the war-tech company Palantir, is intended to enhance intelligence analysis and improve decision-making within the US Department of War and other national security agencies.
The new AI capabilities can swiftly process vast amounts of data, including drone footage, telecommunications, and human intelligence, to identify and target threats. Craig Jones, a political geography lecturer, notes that AI’s speed can drastically enhance military efficacy, allowing operations that would historically have taken days or weeks to be executed simultaneously.
The ethical implications of this reliance on AI are concerning, however, fostering a sense of detachment among decision-makers from the consequences of military action, a phenomenon David Leslie calls “cognitive off-loading.” That detachment became hauntingly evident when a missile strike that reportedly killed 165 people, including children, at a school in southern Iran sparked accusations of violations of international humanitarian law.
While it remains unclear how advanced Iran’s own AI capabilities are, the country has reportedly claimed to employ AI in missile-targeting systems. Its technology, however, appears significantly less developed than that of the US and China, the leaders in military AI development.
Despite prior plans to phase out Anthropic’s AI technology over the company’s refusal to support fully autonomous weapons, it remains in military use until that phase-out is complete; in the meantime, rival OpenAI has secured its own military contract. The overarching concern is that while AI systems deliver recommendations to human decision-makers promptly, they also compress the time available to evaluate those recommendations, raising critical questions about oversight and accountability in military operations.
As the deployment of AI in military contexts expands, experts such as Prerana Joshi emphasize its potential to improve decision-making efficiency and productivity across defense, while underscoring the need to balance speed against ethical considerations in the use of such technology in warfare.
