How AI firm Anthropic wound up in the Pentagon’s crosshairs

The Guardian - World News · Center-Left · EN · 9 min read · by Nick Robins-Early · March 9, 2026 at 12:00 PM

AI Summary


Anthropic, an AI firm previously known for its low profile, is now embroiled in a dispute with the U.S. Department of Defense (DoD). The conflict arose after Anthropic refused to allow its AI chatbot, Claude, to be used for domestic mass surveillance and autonomous weapons. This led to accusations from the DoD and a formal declaration that Anthropic is a supply-chain risk, a designation that could affect its business. The disagreement highlights ethical concerns surrounding AI's role in warfare and has sparked debate within the tech industry and the Trump administration. OpenAI's subsequent deal with the DoD further intensified the situation, and the episode has exposed tensions between Anthropic's stated mission of AI safety and its classified work with the Pentagon.

Keywords

anthropic (100%), department of defense (90%), artificial intelligence (90%), autonomous weapons systems (70%), mass surveillance (60%), ai ethics (60%), openai (50%), supply-chain risk (50%), dario amodei (40%), trump administration (40%)

Sentiment Analysis

Negative (score: -0.40)

Source Transparency

Source: The Guardian - World News
Political Lean: Center-Left (-0.40)
Classification Confidence: 90%
Geographic Perspective: United States

This article was automatically classified using rule-based analysis. The political bias score ranges from -1 (far left) to +1 (far right).