Anthropic CEO says AI company ‘cannot in good conscience accede’ to Pentagon’s demands

1 of 3 | Pages from the Anthropic website and the company’s logos are displayed on a computer screen in New York on Thursday, Feb. 26, 2026. (AP Photo/Patrick Sison)

2 of 3 | Defense Secretary Pete Hegseth speaks during a cabinet meeting at the White House, Thursday, Jan. 29, 2026, in Washington. (AP Photo/Evan Vucci)

3 of 3 | Defense Secretary Pete Hegseth stands outside the Pentagon during a welcome ceremony for the Japanese defense minister at the Pentagon in Washington, Jan. 15, 2026. (AP Photo/Kevin Wolf, File)
WASHINGTON (AP) — Anthropic CEO Dario Amodei said Thursday the artificial intelligence company “cannot in good conscience accede” to the Pentagon’s demands to allow wider use of its technology.

The maker of the AI chatbot Claude said in a statement that it’s not walking away from negotiations, but that new contract language received from the Defense Department “made virtually no progress on preventing Claude’s use for mass surveillance of Americans or in fully autonomous weapons.”

The Pentagon’s top spokesman has reiterated that the military wants to use Anthropic’s artificial intelligence technology in legal ways and will not let the company dictate any limits ahead of a Friday deadline to agree to its demands.
Sean Parnell said Thursday on social media that the Pentagon “has no interest in using AI to conduct mass surveillance of Americans (which is illegal) nor do we want to use AI to develop autonomous weapons that operate without human involvement.”

Anthropic’s policies prevent its models, such as its chatbot Claude, from being used for those purposes. It’s the last of its peers — the Pentagon also has contracts with Google, OpenAI and Elon Musk’s xAI — to not supply its technology to a new U.S. military internal network.

Parnell said the Pentagon wants to “use Anthropic’s model for all lawful purposes” but didn’t offer details on what that entailed. He said opening up use of the technology would prevent the company from “jeopardizing critical military operations.”

“We will not let ANY company dictate the terms regarding how we make operational decisions,” he said.

During a meeting on Tuesday between Defense Secretary Pete Hegseth and Amodei, military officials warned that they could cancel Anthropic’s contract, designate the company as a supply chain risk, or invoke a Cold War-era law called the Defense Production Act to give the military more sweeping authority to use its products, even if the company doesn’t approve.

Amodei said Thursday that “those latter two threats are inherently contradictory: one labels us a security risk; the other labels Claude as essential to national security.”

Parnell left out the threatened use of the Defense Production Act in the Thursday post on X and said Anthropic has “until 5:01 PM ET on Friday to decide.”

“Otherwise, we will terminate our partnership with Anthropic and deem them a supply chain risk,” he wrote.

The talks that escalated this week began months ago. Amodei said that given “the substantial value that Anthropic’s technology provides to our armed forces, we hope they reconsider.” But if they don’t, he said Anthropic “will work to enable a smooth transition to another provider.”

Sen. Thom Tillis, a North Carolina Republican who is not seeking reelection, said Thursday that the Pentagon has been handling the matter unprofessionally while Anthropic is “trying to do their best to help us from ourselves.”

“Why in the hell are we having this discussion in public?” Tillis told reporters. “This is not the way you deal with a strategic vendor that has contracts.” He added, “When a company is resisting a market opportunity for fear of negative consequences, you should listen to them and then behind closed doors figure out what they’re really trying to solve.”

Sen. Mark Warner of Virginia, the ranking Democrat on the Senate Intelligence Committee, said he was “deeply disturbed” by reports that the Pentagon is “working to bully a leading U.S. company.”

“Unfortunately, this is further indication that the Department of Defense seeks to completely ignore AI governance,” Warner said in a statement. It “further underscores the need for Congress to enact strong, binding AI governance mechanisms for national security contexts.”

As Pentagon officials say they always will follow the law with their use of AI models, Hegseth told Fox News last February, weeks after becoming defense secretary, that “ultimately, we want lawyers who give sound constitutional advice and don’t exist to attempt to be roadblocks to anything.”

___

Associated Press writer Ben Finley contributed to this report.

O’Brien covers the business of technology and artificial intelligence for The Associated Press.