US Military Leverages Anthropic’s AI Technology in Iran Airstrike Despite Planned Phase-Out

This article was generated by AI and cites original sources.

The U.S. government employed AI tools developed by Anthropic during an airstrike on Iran, shortly after announcing plans to phase out the startup’s technology. According to a report by The Wall Street Journal, several military commands, including U.S. Central Command in the Middle East, used Anthropic’s Claude to support intelligence assessments, target identification, and battle-scenario simulation during the attack.

Prior to this incident, Anthropic’s AI had also been used in the Pentagon’s operation to capture Venezuelan President Nicolás Maduro, underscoring its growing role in military operations. The decision to phase out the technology was attributed to concerns over relying on Claude in critical missions, prompting the U.S. administration to begin a six-month transition away from Anthropic’s products.

The dispute between the U.S. government and Anthropic centers on AI safety, with the two sides disagreeing over which national-defense applications of the company’s models should be permitted. Anthropic has contested the U.S. classification of the company as a ‘supply chain risk’ and vowed to challenge the designation in court, pointing to its usage restrictions that bar mass domestic surveillance and fully autonomous weapons.

Source: mint – technology