E108: AI-Assisted War Crimes?

On today's show, Alex and Calvin continue our discussion of the ongoing war in Iran, focusing on the literal use of artificial intelligence in the imperialist campaign being waged by the US and Israeli militaries. We analyze statements from major AI companies regarding their military contracts, unpacking the conflict between the “Department of War” and Anthropic and contrasting it with the increasingly cozy relationship between OpenAI and the military. Based on a close look at the language of both companies’ statements, we argue that, despite all the hype, there isn’t much of an ideological gap between the two. While each claims to draw moral red lines against mass domestic surveillance and fully autonomous weapons, both rely heavily on technical jargon to justify their ongoing military partnerships and to affirm numerous arbitrary assumptions about US nationalism and the non-universality of human rights. We explore how these corporate statements function to protect the companies’ progressive brand identities while still accommodating US imperial objectives.

Later in the episode, we shift our focus to Palantir and its Maven Smart System. We explore how the military’s Chief Digital and Artificial Intelligence Officer, Cameron Stanley, presents the Maven system’s “targeting workflow” as an AI-based platform for detecting targets and suggesting convenient, efficient options for killing them. We talk about how this kind of interface gamifies the battlefield, and we argue that it severely dehumanizes the victims of military violence. We go on to discuss these AI-based systems in relation to the recent US military strike on the Minab girls school in Iran, in which at least 175 people were killed, including dozens of children. While the media and the military might refer to this tragic event as an “error,” we suggest that this language is a framing device that recasts a moral failure as a simple technical glitch. We close by reporting the results of an experiment in which we tested the extent to which Anthropic’s Claude (which is integrated into the Maven Smart System) would acknowledge its own culpability in the Minab school strike when prompted. Spoiler alert: it will, but we are dubious about the ultimate significance of this, given all chatbots’ tendency toward sycophancy. In the end, AI tools are designed and guided by human intentions, so we must hold the people who build and use these systems accountable for their devastating consequences.

Works & Concepts Referenced in this Episode

Haskins, C. (March 13, 2026). Palantir Demos Show How the Military Could Use AI Chatbots to Generate War Plans. Wired.

Pynchon, T. (2012). Mason & Dixon. Penguin.

Ramkumar, A., Hagey, K., & Bergengruen, V. (February 15, 2026). Pentagon Used Anthropic's Claude in Maduro Venezuela Raid. The Wall Street Journal.

Read, M. (February 27, 2026). What Anthropic's fight with the Pentagon tells us about the politics of Silicon Valley. Read Max.

An accessible transcript of this episode can be found here (via Descript).

Alex Helberg