NEWSAR
Multi-perspective news intelligence
Source: The Guardian - World News
Language: EN
Lean: Center-Left
Words: 689
Entities: 8
SAT · 2026-02-28 · 17:06 GMT · Brief NSR-2026-0228-20162
NSR-2026-0228-20162 · News Report · EN · Political Strategy

OpenAI to work with Pentagon after Anthropic dropped by Trump over company’s ethics concerns

OpenAI has agreed to provide AI services to classified US military networks, a deal announced shortly after Donald Trump directed federal agencies to cease using Anthropic's AI technology. Anthropic's agreement with the Trump administration broke down due to concerns about mass surveillance and autonomous weapons.

Adam Gabbatt and Dara Kerr · The Guardian - World News · Filed 2026-02-28 · 17:06 GMT · Lean: Center-Left · Read: 3 min
FIG 01 · The Guardian - World News
Reading time: 3 min
Word count: 689 words
Sources cited: 3
Entities identified: 8
Quality score: 100%
§ 01

Briefing Summary

AI-generated
NEWSAR · AI

OpenAI has agreed to provide AI services to classified US military networks, a deal announced shortly after Donald Trump directed federal agencies to cease using Anthropic's AI technology. Anthropic's agreement with the Trump administration broke down due to concerns about mass surveillance and autonomous weapons. OpenAI CEO Sam Altman stated the Pentagon agreed to ethical principles prohibiting domestic mass surveillance and ensuring human responsibility in the use of force, including autonomous weapons. Trump criticized Anthropic for attempting to impose its terms of service on the Pentagon, while OpenAI employees had previously expressed solidarity with Anthropic. Altman reassured OpenAI employees that the agreement includes ethical safeguards.

Confidence 0.90 · Sources 3 · Claims 5 · Entities 8
§ 02

Article analysis

Model · rule-based
Framing: Political Strategy · Technology
Tone: Measured (AI-assessed) · scale: Calm – Neutral – Alarmist
Factuality: 0.80 / 1.00 · Factual · scale: Low – High
Sources cited: 3 · Well sourced · scale: Few – Many
§ 03

Key claims

5 extracted
01

The Pentagon had demanded Anthropic loosen ethical guidelines on its AI systems or face severe consequences.

factual · no attribution
Confidence
1.00
02

Two of our most important safety principles are prohibitions on domestic mass surveillance and human responsibility for the use of force.

quote · Sam Altman
Confidence
1.00
03

Anthropic sought assurances its technology would not be used for mass surveillance or autonomous weapons.

factual · no attribution
Confidence
1.00
04

Donald Trump ordered the government to stop using Anthropic's services.

factual · Donald Trump
Confidence
1.00
05

OpenAI struck a deal with the Pentagon to supply AI to classified US military networks.

factual · OpenAI
Confidence
1.00
§ 04

Full report

3 min read · 689 words
OpenAI said it had struck a deal with the Pentagon to supply AI to classified US military networks, hours after Donald Trump ordered the government to stop using the services of one of the company’s main competitors.

Sam Altman, OpenAI’s CEO, announced the move on Friday night. It came after an agreement between Anthropic, a rival AI company that runs the Claude system, and the Trump administration broke down after Anthropic sought assurances its technology would not be used for mass surveillance – nor for autonomous weapons systems that can kill people without human input.

Announcing the deal, Altman insisted that OpenAI’s agreement with the government included assurances that it would not be used to those ends.

“Two of our most important safety principles are prohibitions on domestic mass surveillance and human responsibility for the use of force, including for autonomous weapon systems,” Altman wrote on X. He added that the Pentagon “agrees with these principles, reflects them in law and policy, and we put them into our agreement”.

Altman also said he hoped the Pentagon would “offer these same terms to all AI companies” as a way to “de-escalate away from legal and governmental actions and toward reasonable agreements”.

If OpenAI’s deal does prohibit its systems from being used for unethical ends, it would appear the company has succeeded in receiving assurances where Anthropic could not. Altman announced the deal with the government shortly after the president said he would direct all federal agencies to “IMMEDIATELY CEASE” all use of Anthropic technology.

The Pentagon had demanded Anthropic loosen ethical guidelines on its AI systems or face severe consequences.

Trump said on his Truth Social platform: “The Leftwing nut jobs at Anthropic have made a DISASTROUS MISTAKE trying to STRONG-ARM the [Pentagon], and force them to obey their Terms of Service instead of our Constitution.”

It remains to be seen how OpenAI staff respond to the government deal. In its battle with the Trump administration, Anthropic has drawn support from its most fierce rivals. Nearly 500 OpenAI and Google employees signed on to an open letter saying “we will not be divided”.

“The Pentagon is negotiating with Google and OpenAI to try to get them to agree to what Anthropic has refused,” the letter reads. “They’re trying to divide each company with fear that the other will give in.”

Altman sought to reassure OpenAI employees in a memo sent on Friday night.

“Regardless of how we got here, this is no longer just an issue between Anthropic and the [Pentagon]; this is an issue for the whole industry and it is important to clarify our stance,” Altman wrote in the memo, which was obtained by Axios.

“We have long believed that AI should not be used for mass surveillance or autonomous lethal weapons, and that humans should remain in the loop for high-stakes automated decisions. These are our main red lines.”

Altman added: “We are going to see if there is a deal with the [Pentagon] that allows our models to be deployed in classified environments and that fits with our principles. We would ask for the contract to cover any use except those which are unlawful or unsuited to cloud deployments, such as domestic surveillance and autonomous offensive weapons.”

Anthropic, which presents itself as the most safety-forward of the leading AI companies, had been mired in months of disagreement with the Pentagon. US defense officials had pushed for unfettered access to Claude’s capabilities that they say can help protect the country. Meanwhile, Anthropic has resisted allowing its product to be used for surveilling en masse or weapons systems that can kill people autonomously.

“No amount of intimidation or punishment from the [Pentagon] will change our position on mass domestic surveillance or fully autonomous weapons,” Anthropic said in its statement on Friday night.

“We have tried in good faith to reach an agreement with the [Pentagon], making clear that we support all lawful uses of AI for national security aside from the two narrow exceptions above,” the company continued. “To the best of our knowledge, these exceptions have not affected a single government mission to date.”

OpenAI on Friday said it is raising $110bn in a blockbuster funding round which would value the company at $840bn.
§ 05

Entities

8 identified
§ 06

Keywords & salience

9 terms
openai · 0.90
artificial intelligence · 0.90
pentagon · 0.80
anthropic · 0.80
ethics · 0.70
autonomous weapons systems · 0.70
government agreement · 0.60
mass surveillance · 0.60
ethical guidelines · 0.50
§ 07

Topic connections

Interactive graph
Network visualization showing 51 related topics