The US Defense Department’s Chief Technology Officer, Emil Michael, has explained why the Pentagon designated Anthropic’s AI models a national-security supply-chain risk. In an interview on CNBC’s Squawk Box, Michael pointed to conflicting ‘policy preferences’ embedded in Anthropic’s Claude system that could jeopardize military AI applications. That concern led the department to classify Anthropic as a risk to defense supply chains, a decision the company is now challenging in court.
Michael said the government acted to shield military interests from potential disruptions caused by divergent policy frameworks built into AI models. He stressed that the classification was not intended as punishment but as a step to ensure the effectiveness and reliability of defense technologies.
Anthropic responded to the supply-chain risk designation with legal action, calling the move unprecedented and unlawful. The dispute raises broader questions about how AI systems align with national-security imperatives and the difficulty of integrating models with divergent built-in policies into defense applications.
Source: Tech-Economic Times