Anthropic says it ‘cannot in good conscience’ allow Pentagon to remove AI checks
Anthropic is refusing a demand from the Pentagon to remove safety precautions from its AI model, Claude, despite the threat of contract cancellation and being labeled a "supply chain risk." The Department of Defense (DoD) wants unfettered access to Claude, while Anthropic opposes its use in autonomous weapons systems and mass domestic surveillance, citing safety concerns. The disagreement follows a $200 million contract awarded to Anthropic last July.

Briefing Summary
CEO Dario Amodei stated the company's preference to continue serving the DoD with the safeguards in place. The standoff is a test of Anthropic's commitment to AI safety and of whether AI companies will resist government pressure for controversial uses of the technology.
Article analysis
Key claims
Anthropic was one of several big tech companies to receive contracts worth up to $200m with the DoD in July of last year.
Anthropic has pushed back against allowing Claude to be used for mass domestic surveillance or in autonomous weapons systems.
The Pentagon has demanded that Anthropic turn off safety guardrails and allow any lawful use of Claude.
The Department of Defense threatened to cancel the $200m contract if the company did not comply with the request by Friday.
Anthropic said it ‘cannot in good conscience’ comply with a demand from the Pentagon to remove safety precautions from its AI model.