Anthropic says it ‘cannot in good conscience’ allow Pentagon to remove AI checks

The Guardian - World News · by Nick Robins-Early · February 27, 2026 at 12:28 AM · 3 min read
AI Summary


Anthropic is refusing a Pentagon demand to remove safety precautions from its AI model, Claude, despite the threat of contract cancellation and of being labeled a "supply chain risk." The Department of Defense (DoD) wants unfettered access to Claude, while Anthropic opposes the model's use in autonomous weapons systems and mass domestic surveillance, citing safety concerns. The disagreement follows a $200 million contract awarded to Anthropic last July. CEO Dario Amodei said the company would prefer to continue serving the DoD with the safeguards in place. The standoff is a test of Anthropic's commitment to AI safety and of whether AI companies will resist government pressure over controversial uses of the technology.

Keywords

anthropic (100%), artificial intelligence (100%), department of defense (90%), ai safety (90%), autonomous weapons (80%), ai ethics (70%), mass domestic surveillance (70%), government regulation (60%), national security (50%), supply chain risk (40%)

Sentiment Analysis

Negative (score: -0.20)

Source Transparency

Source: The Guardian - World News
Political Lean: Center-Left (-0.40)
Classification Confidence: 90%
Geographic Perspective: United States

This article was automatically classified using rule-based analysis. The political bias score ranges from -1 (far left) to +1 (far right).
