Anthropic refuses to bend to Pentagon on AI safeguards as dispute nears deadline

Dario Amodei, CEO and co-founder of Anthropic, attends the annual meeting of the World Economic Forum in Davos, Switzerland, Jan. 23, 2025. (AP Photo/Markus Schreiber, File)

Pages from the Anthropic website and the company’s logos are displayed on a computer screen in New York on Thursday, Feb. 26, 2026. (AP Photo/Patrick Sison)

Defense Secretary Pete Hegseth stands outside the Pentagon during a welcome ceremony for the Japanese defense minister at the Pentagon in Washington, Jan. 15, 2026. (AP Photo/Kevin Wolf, File)
A public showdown between the Trump administration and Anthropic is hitting an impasse as military officials demand the artificial intelligence company bend its ethical policies by Friday or risk damaging its business.
Anthropic CEO Dario Amodei drew a sharp red line 24 hours before the deadline, declaring his company “cannot in good conscience accede” to the Pentagon’s final demand to allow unrestricted use of its technology.

Anthropic, maker of the chatbot Claude, can afford to lose a defense contract. But the ultimatum this week from Defense Secretary Pete Hegseth posed broader risks at the peak of the company’s meteoric rise from a little-known computer science research lab in San Francisco to one of the world’s most valuable startups.

If Amodei doesn’t budge, military officials have warned they will not just pull Anthropic’s contract but also “deem them a supply chain risk,” a designation typically stamped on foreign adversaries that could derail the company’s critical partnerships with other businesses. And if Amodei were to cave, he could lose trust in the booming AI industry, particularly from top talent drawn to the company for its promises of responsibly building better-than-human AI that, without safeguards, could pose catastrophic risks.
Anthropic said it sought narrow assurances from the Pentagon that Claude won’t be used for mass surveillance of Americans or in fully autonomous weapons. But after months of private talks exploded into public debate, it said in a Thursday statement that new contract language “framed as compromise was paired with legalese that would allow those safeguards to be disregarded at will.”

That was after Sean Parnell, the Pentagon’s top spokesman, posted on social media that “we will not let ANY company dictate the terms regarding how we make operational decisions” and added the company has “until 5:01 p.m. ET on Friday to decide” if it would meet the demands or face consequences. Emil Michael, the defense undersecretary for research and engineering, later lashed out at Amodei, alleging on X that he “has a God-complex” and “wants nothing more than to try to personally control the US Military and is ok putting our nation’s safety at risk.”

That message hasn’t resonated in much of Silicon Valley, where a growing number of tech workers from Anthropic’s top rivals, OpenAI and Google, voiced support for Amodei’s stand late Thursday in an open letter. OpenAI and Google, along with Elon Musk’s xAI, also have contracts to supply their AI models to the military.

“The Pentagon is negotiating with Google and OpenAI to try to get them to agree to what Anthropic has refused,” the open letter says. “They’re trying to divide each company with fear that the other will give in.”

Also raising concerns about the Pentagon’s approach were Republican and Democratic lawmakers and a former leader of the Defense Department’s AI initiatives.

“Painting a bullseye on Anthropic garners spicy headlines, but everyone loses in the end,” wrote retired Air Force Gen. Jack Shanahan in a social media post. Shanahan faced a different wave of tech worker opposition during the first Trump administration when he led Maven, a project to use AI technology to analyze drone footage and target weapons. So many Google employees protested its participation in Project Maven at the time that the tech giant declined to renew the contract and then pledged not to use AI in weaponry.

“Since I was square in the middle of Project Maven & Google, it’s reasonable to assume I would take the Pentagon’s side here,” Shanahan wrote Thursday on social media. “Yet I’m sympathetic to Anthropic’s position. More so than I was to Google’s in 2018.”

He said Claude is already being widely used across the government, including in classified settings, and Anthropic’s red lines are “reasonable.” He said the AI large language models that power chatbots like Claude are also “not ready for prime time in national security settings,” particularly not for fully autonomous weapons. “They’re not trying to play cute here,” he wrote.

Parnell asserted Thursday that the Pentagon wants to “use Anthropic’s model for all lawful purposes” and said opening up use of the technology would prevent the company from “jeopardizing critical military operations,” though neither he nor other officials have detailed how they want to use the technology.

The military “has no interest in using AI to conduct mass surveillance of Americans (which is illegal) nor do we want to use AI to develop autonomous weapons that operate without human involvement,” Parnell wrote.

When Hegseth and Amodei met Tuesday, military officials warned that they could designate Anthropic as a supply chain risk, cancel its contract or invoke a Cold War-era law called the Defense Production Act to give the military more sweeping authority to use its products, even if the company doesn’t approve.

Amodei said Thursday that “those latter two threats are inherently contradictory: one labels us a security risk; the other labels Claude as essential to national security.” He said he hopes the Pentagon will reconsider given Claude’s value to the military, but, if not, Anthropic “will work to enable a smooth transition to another provider.”

AP reporter Konstantin Toropin contributed to this report. O’Brien covers the business of technology and artificial intelligence for The Associated Press.