Anthropic has declined the Pentagon’s latest offer to modify their contract, saying the proposed changes do not address the company’s concerns about the use of artificial intelligence for mass surveillance or in fully autonomous weapons systems.
The two sides remain divided over restrictions on the use of Claude – the first AI system to be integrated into a classified military network.
On Tuesday, Defense Secretary Pete Hegseth told Anthropic CEO Dario Amodei that if Anthropic does not allow its AI model to be used “for all lawful purposes,” the Pentagon will cancel the company’s $200 million contract. Beyond cancellation, Anthropic could also be designated a “supply chain risk” – a label typically applied to companies tied to foreign adversaries, Hegseth said.
Anthropic said in a statement that the Pentagon’s latest formulation was presented as a compromise but “was undermined by legal wording that would allow these safeguards to be discarded at will.”
In a lengthy blog post on Thursday, Amodei wrote: “I deeply believe in the existential importance of using AI to defend the United States and other democracies, as well as to prevail against our autocratic adversaries.”
Amodei also acknowledged that “the Pentagon, not private companies, makes military decisions.” However, he wrote, “in a narrow set of cases we believe AI can undermine rather than defend democratic values.” Mass surveillance and autonomous weapons, he argued, demand capabilities “beyond what today’s technologies can safely and reliably achieve.”
“Threats do not change our position: we cannot, in good conscience, agree to their request,” Amodei added.
The Pentagon did not immediately respond to requests for comment.
Context and prospects for further negotiations
The standoff highlights the difficulty of establishing regulatory frameworks for AI use in the US defense sector. Anthropic insists on responsible-use principles and safeguards intended to prevent misuse or excessive control of the technology, while the Pentagon wants unambiguous terms of use for systems operating within critical infrastructure and defense networks. Negotiations appear to be ongoing, but so far there have been no public breakthroughs.