
Claude
Anthropic's Claude AI is at the center of debates over US military AI ethics, a government blacklisting, and reported use in the Iran war.
Total Mentions: 19
Last 7 Days: 1
Velocity: -66.7%
Trending: 100%
About
Anthropic is an AI company whose Claude platform has become a focal point in debates over the ethical use of AI in the military. Recent news highlights a conflict between Anthropic and the Trump administration: after Anthropic refused to remove guardrails on its technology, particularly for wartime applications and mass surveillance, the administration designated the company a 'supply chain risk' and imposed a government-wide ban. Despite the ban, reports indicate the US military used Claude in planning and executing strikes against Iran, raising questions about the ban's enforceability and the military's reliance on AI. The US Treasury has stopped using Anthropic products, and Anthropic is suing the Trump administration over the blacklisting. The situation underscores broader concerns about AI readiness in the military, the potential for AI to accelerate military operations, and the need to regulate military AI applications.
Last updated: March 21, 2026
Recent News

Deadly strike on Iranian primary school raises questions about AI, accountability
Iran strikes are a wake-up call to regulate military AI
Anthropic sues Trump administration seeking to undo ‘supply chain risk’ designation
Pentagon dispute bolsters Anthropic reputation but raises questions about AI readiness in military
OpenAI amends Pentagon deal as Sam Altman admits it looks ‘sloppy’
Iran war heralds era of AI-powered bombing quicker than ‘speed of thought’
US Treasury to stop using Anthropic AI tech, including Claude platform
US military reportedly used Claude in Iran strikes despite Trump’s ban