NEWSAR
Multi-perspective news intelligence
SRC: The Guardian - World News
LANG: EN
LEAN: Center-Left
WORDS: 607
ENT: 9
THU · 2026-02-26 · 23:28 GMT · BRIEF NSR-2026-0227-19672
NSR-2026-0227-19672 · News Report · EN · National Security

Anthropic says it ‘cannot in good conscience’ allow Pentagon to remove AI checks


Nick Robins-Early · The Guardian - World News · Filed 2026-02-26 · 23:28 GMT · Lean: Center-Left · Read: 3 min
FIG 01 · The Guardian - World News
Reading time: 3 min
Word count: 607 words
Sources cited: 3
Entities identified: 9
Quality score: 100%
§ 01

Briefing Summary

AI-generated
NEWSAR · AI

Anthropic is refusing a demand from the Pentagon to remove safety precautions from its AI model, Claude, despite the threat of contract cancellation and of being labeled a "supply chain risk." The Department of Defense (DoD) wants unfettered access to Claude, while Anthropic opposes its use in autonomous weapons systems and mass domestic surveillance, citing safety concerns. The disagreement follows a $200 million contract awarded to Anthropic last July. CEO Dario Amodei stated the company's preference to continue serving the DoD with the safeguards in place. The standoff is a test of Anthropic's commitment to AI safety and of whether AI companies will resist government pressure over controversial uses of the technology.

Confidence 0.90 · Sources 3 · Claims 5 · Entities 9
§ 02

Article analysis

Model · rule-based
Framing: National Security · Technology
Tone: Measured · AI-assessed (scale: Calm / Neutral / Alarmist)
Factuality: 0.80 / 1.00 · Factual (scale: Low to High)
Sources cited: 3 · Well sourced (scale: Few to Many)
§ 03

Key claims

5 extracted
01

Anthropic was one of several big tech companies to receive contracts worth up to $200m from the DoD in July of last year.

factual · Confidence 1.00
02

Anthropic has pushed back against allowing Claude to be used for mass domestic surveillance or in autonomous weapons systems.

factual · Confidence 1.00
03

The Pentagon has demanded that Anthropic turn off safety guardrails and allow any lawful use of Claude.

factual · Confidence 1.00
04

The Department of Defense had threatened to cancel a $200m contract if the company did not comply with the request by Friday.

factual · Confidence 1.00
05

Anthropic said it ‘cannot in good conscience’ comply with a demand from the Pentagon to remove safety precautions from its AI model.

quote · Anthropic · Confidence 1.00
§ 04

Full report

3 min read · 607 words
Anthropic said Thursday it “cannot in good conscience” comply with a demand from the Pentagon to remove safety precautions from its artificial intelligence model and grant the US military unfettered access to its AI capabilities.

The Department of Defense had threatened to cancel a $200m contract and deem Anthropic a “supply chain risk”, a designation with serious financial implications, if the company did not comply with the request by Friday.

Chief executive Dario Amodei said in a statement that the threats from the defense secretary, Pete Hegseth, would not change the company’s position, and that he hoped Hegseth would “reconsider”.

“Our strong preference is to continue to serve the Department and our warfighters – with our two requested safeguards in place,” he said. “We remain ready to continue our work to support the national security of the United States.”

At the core of the standoff between the Department of Defense and Anthropic is a disagreement over how the AI company will permit its product, Claude, to be used. The Pentagon has demanded that Anthropic turn off safety guardrails and allow any lawful use of Claude, while Anthropic has pushed back against allowing Claude to be used for mass domestic surveillance or in autonomous weapons systems that can kill people without human input.

After months of dispute and pressure from the government, Hegseth reportedly gave Amodei until Friday evening to agree to the Pentagon’s demands or face punitive action.

Whether Anthropic would concede was seen as a high-profile test of its claim to be the most safety-conscious of the major AI firms, and of whether any part of the AI industry would push back against government desires to use the technology for controversial, potentially lethal purposes.

In his statement, Amodei said using AI for autonomous weapons and mass domestic surveillance is “simply outside the bounds of what today’s technology can safely and reliably do”.

The Department of Defense has handed a number of lucrative deals to tech firms in recent years to build or integrate AI technology into US military systems. In July of last year, Anthropic was one of several big tech companies, including Google and OpenAI, to receive contracts worth up to $200m from the DoD. What set Anthropic apart, and has intensified its conflict with the Pentagon, is that until this week Claude was the only AI model approved for use in the military’s classified systems. (Elon Musk’s xAI reached an agreement earlier this week for its model to be used in classified systems as well.)

Anthropic’s technology has reportedly already been used in military applications, including the US capture of Venezuelan leader Nicolás Maduro last month, highlighting the growing use of AI in conflict. The growth of autonomous weapons technology, such as drones that can carry out operations even after their connection to a human operator has been severed, has also intensified longstanding concerns about how AI will be used in life-and-death situations.

Anthropic and Amodei have long been among the industry’s most prominent advocates for regulation and safety precautions in AI development, even as the company has struck deals with the military and, this week, watered down a core policy of not releasing new AI models without first guaranteeing their safety.

Amodei’s calls for regulation, and his history of political opposition to Donald Trump, have run up against Hegseth’s vows to remove “wokeness” from the armed forces and to pursue aggressive military policies.

If Hegseth follows through on his threat to categorize Anthropic as a supply chain risk, it would be a huge blow to the AI company. The designation, more commonly applied to foreign adversaries, would prohibit other vendors that do business with the US military from using Anthropic’s products.
§ 05

Entities

9 identified
§ 06

Keywords & salience

10 terms
anthropic · 1.00
artificial intelligence · 1.00
department of defense · 0.90
ai safety · 0.90
autonomous weapons · 0.80
ai ethics · 0.70
mass domestic surveillance · 0.70
government regulation · 0.60
national security · 0.50
supply chain risk · 0.40
§ 07

Topic connections

Interactive graph · Network visualization of 51 related topics (nodes: Person, Organization, Location, Event; edge numbers = shared articles)