NEWSAR
Multi-perspective news intelligence

Anthropic sues US government for calling it a risk

26 articles
5 sources
0% diversity
Updated Mar 9, 2026
Key Topics & People
Anthropic · Dario Amodei · Pentagon · Pete Hegseth · Department of Defense

Coverage Framing

National Security (15)
Legal & Judicial (5)
Political Strategy (5)
Conflict (1)
Avg Factuality: 75%
Avg Sensationalism: Moderate

Story Timeline

Mar 9, 2026

5 articles | 5 sources
artificial intelligence, anthropic, military use, lawsuit, supply chain risk
Legal & Judicial (4)
BBC News - World · Mar 9

Anthropic sues US government for calling it a risk

AI firm Anthropic is suing the US government, alleging it was unfairly labeled a "supply chain risk" after refusing to grant the military unfettered access to its AI tools. The lawsuit, filed in California federal court, names multiple government agencies and officials, including the Department of Defense and figures from the Trump administration. Anthropic claims the government's action is unlawful and retaliatory, violating its right to protected speech. The dispute stems from Anthropic's refusal to remove usage restrictions related to lethal autonomous warfare and mass surveillance from its defense contracts, despite having worked with the government since 2024. The government, in turn, accuses Anthropic of being a "woke" company attempting to dictate military policy.

Measured · Factual · 6 sources
Neutral
South China Morning Post · Mar 9

Anthropic sues Trump administration as row over AI use by military deepens

Anthropic, an artificial intelligence lab, filed a lawsuit against the Pentagon in California on Monday to block its placement on a national security blacklist. The lawsuit claims the designation is unlawful, violating free speech and due process rights. The Pentagon designated Anthropic as a supply-chain risk after the company refused to remove guardrails preventing its AI from being used for autonomous weapons or domestic surveillance. The designation limits the use of Anthropic's technology, which was reportedly used in military operations in Iran. Anthropic hopes to undo the designation and prevent federal agencies from enforcing it, but remains open to negotiations with the US government.

Measured · Factual · 2 sources
Neutral
Associated Press (AP) · Mar 9

Anthropic sues Trump administration seeking to undo ‘supply chain risk’ designation

AI company Anthropic is suing the Trump administration to reverse its "supply chain risk" designation and an order barring federal employees from using its Claude chatbot. The lawsuit, filed in federal court on Monday, challenges the Pentagon's decision, alleging it's retaliation for Anthropic's refusal to allow unrestricted military use of its AI technology. The company claims the actions are unlawful and part of a campaign against them. This legal battle highlights a public disagreement over the ethical use of AI in warfare and surveillance and involves Anthropic's competitors in the tech industry. The dispute began when Anthropic resisted allowing the military unfettered access to its AI models.

Measured · Factual · 1 source
Neutral
National Security (1)
The Guardian - World News · Mar 9

How AI firm Anthropic wound up in the Pentagon’s crosshairs

Anthropic, an AI firm previously known for its low profile, is now embroiled in a dispute with the U.S. Department of Defense (DoD). The conflict arose after Anthropic refused to allow its AI chatbot, Claude, to be used for domestic mass surveillance and autonomous weapons. This led to accusations from the DoD and a formal declaration that Anthropic is a supply-chain risk, potentially impacting its business. The disagreement highlights the ethical concerns surrounding AI's role in warfare and has sparked debate within the tech industry and the Trump administration. OpenAI's subsequent deal with the DoD further intensified the situation and exposed the tension between Anthropic's stated AI-safety mission and its own classified work for the Pentagon.

Mixed tone · Factual · 5 sources
Negative

Key Claims

factual

Anthropic has filed a lawsuit against the US government over claims that it is a 'supply chain risk'.

— Reuters

factual

The Pentagon retaliated by making Anthropic the first US company to be labelled a 'supply chain risk'.

— Reuters

quote

The Constitution does not allow the government to wield its enormous power to punish a company for its protected speech.

— Anthropic

factual

Anthropic has been used by the US government and military since 2024.

— Reuters

quote

Liz Huston, a spokeswoman for the White House, told the BBC that Anthropic is 'a radical left, woke company'.

— Liz Huston

Mar 6, 2026

2 articles | 2 sources
supply chain risk, anthropic, pentagon, artificial intelligence, mass surveillance
Legal & Judicial (1)
BBC News - World · Mar 6

Anthropic vows to sue Pentagon over risk designation

The AI firm Anthropic intends to sue the Pentagon after being designated a supply chain risk, a first for a U.S. company. This designation, effective immediately, means the government deems Anthropic's AI tools not secure enough for its use. Anthropic, which has resisted granting defense agencies unrestricted access due to concerns about surveillance and autonomous weapons, believes the designation is legally unsound. CEO Dario Amodei stated the company will challenge the decision in court. The dispute follows unsuccessful talks with the Department of Defense, exacerbated by public criticism from President Trump. The designation bars businesses working with the military from engaging in commercial activity with Anthropic.

Measured · Factual · 5 sources
Neutral
National Security (1)
Associated Press (AP) · Mar 6

Pentagon says it is labeling AI company Anthropic a supply chain risk ‘effective immediately’

The Pentagon has labeled AI company Anthropic a supply chain risk, effective immediately. This decision follows accusations from President Trump and Defense Secretary Hegseth that Anthropic endangers national security. The Trump administration's action could force government contractors to cease using Anthropic's Claude chatbot. The move comes after Anthropic CEO Dario Amodei refused to drop safeguards over concerns that the company's AI could be used for mass surveillance or in autonomous weapons. The announcement shuts down further negotiation with Anthropic.

Measured · Factual · 4 sources
Neutral

Key Claims

factual

The US has officially deemed AI firm Anthropic a supply chain risk.

— Reuters

factual

Anthropic vows to sue the Pentagon over the risk designation.

— Reuters

factual

Anthropic refused to give defence agencies unfettered access to its AI tools.

— Reuters

quote

We do not believe this action is legally sound, and we see no choice but to challenge it in court.

— Dario Amodei, chief executive

factual

Pentagon is labeling AI company Anthropic a supply chain risk ‘effective immediately’

— Pentagon

Mar 5, 2026

1 article | 1 source
anthropic, artificial intelligence, ai startup, supply chain risk, us military
Political Strategy (1)
The Guardian - World News · Mar 5

Trump says he fired Anthropic ‘like dogs’ as Pentagon formally blacklists AI startup

Donald Trump claimed he "fired" AI startup Anthropic, coinciding with the Pentagon formally designating the company a "supply chain risk," preventing government contractors from using its technology. This occurred despite reports that negotiations between the Pentagon and Anthropic had resumed regarding the military's use of Anthropic's AI, including its integration into Palantir's Maven system. The dispute stems from Anthropic's refusal to make a deal with the government over concerns its model could be used for domestic mass surveillance or autonomous weapons. Anthropic's CEO, Dario Amodei, has criticized the Trump administration and OpenAI's CEO, Sam Altman, after OpenAI announced its own deal with the military. The State and Treasury departments had already begun severing ties with Anthropic following Trump's order for the federal government to cease using the company's tech.

Mixed tone · Factual · 8 sources
Neutral

Key Claims

quote

Trump said he fired Anthropic 'like dogs'.

— Donald Trump

factual

The Pentagon officially designated Anthropic a 'supply chain risk'.

— Pentagon official

factual

Negotiations had restarted between the Pentagon and Anthropic.

— Financial Times and Bloomberg

factual

Anthropic refused a deal with the government over surveillance concerns.

— Unattributed

factual

Anthropic's most recent round of financing, about $60bn, is in jeopardy.

— Axios

Mar 4, 2026

1 article | 1 source
artificial intelligence, anthropic, pentagon, ai readiness, ethical safeguards
National Security (1)
Associated Press (AP) · Mar 4

Pentagon dispute bolsters Anthropic reputation but raises questions about AI readiness in military

A dispute between the Pentagon and AI company Anthropic is highlighting ethical concerns and questions about the readiness of AI for military applications. Anthropic's CEO, Dario Amodei, refused to remove ethical safeguards preventing the use of their chatbot, Claude, in autonomous weapons and mass surveillance, leading the Trump administration to order government agencies to stop using Claude. This decision has boosted Anthropic's reputation, with Claude surpassing ChatGPT in U.S. phone app downloads. While some experts applaud Anthropic's ethical stance, others criticize the company for previously promoting the capabilities of AI, leading to its adoption in high-stakes government tasks. Anthropic plans to challenge the Pentagon's penalties in court.

Measured · Factual · 2 sources
Neutral

Key Claims

statistic

Anthropic's chatbot Claude outpaced ChatGPT in phone app downloads in the US this week.

— market research firm Sensor Tower

factual

The Trump administration ordered government agencies to stop using Claude.

— Unattributed

factual

Anthropic CEO Dario Amodei refused to bend his company’s ethical safeguards.

— Unattributed

factual

Anthropic will challenge the Pentagon in court once it receives formal notice of the penalties.

— Anthropic

quote

Government agencies should prohibit the use of generative AI to control, direct, guide or govern anything.

— Missy Cummings

Mar 2, 2026

3 articles | 2 sources
pentagon, artificial intelligence, national security, anthropic, us treasury department
National Security (2)
Al Jazeera · Mar 2

How the US hit Iran with a banned AI

The United States reportedly used AI technology from Anthropic in a military operation in Iran one day after blacklisting the company. The Pentagon and Anthropic are in dispute over the ethical considerations and acceptable uses of AI in warfare, and the report highlights the conflict between the US government's public stance against Anthropic and its alleged use of the company's AI in a military operation. The specific use of the AI and the exact nature of the operation remain undisclosed.

Mixed tone · Factual · 1 source
Negative
South China Morning Post · Mar 2

US Treasury to stop using Anthropic AI tech, including Claude platform

The US Treasury Department will discontinue using all Anthropic products, including the Claude platform. This decision, announced by Secretary Scott Bessent on Monday, follows a government-wide ban initiated by the Trump administration. The ban stems from Anthropic's refusal to unconditionally allow the Pentagon to use its Claude models for military purposes. Anthropic has stated it will sue over perceived intimidation and maintains its technology should not be used for mass surveillance or autonomous weapons. The Treasury's action aligns with President Trump's stance that private companies should not dictate national security terms.

Measured · Factual · 2 sources
Neutral
Conflict (1)
South China Morning Post · Mar 2

US using AI, B-2 bombers and suicide drones in Iran strikes

The United States conducted strikes against Iranian targets on Saturday as part of Operation Epic Fury, employing a range of weaponry including Tomahawk cruise missiles, F-18 and F-35 fighter jets, and B-2 stealth bombers. For the first time in combat, the US also deployed low-cost, one-way attack drones modeled after Iranian designs. According to a source, the Pentagon utilized artificial intelligence services from Anthropic, including its Claude tools, during the operation. This occurred a day after the US government declared Anthropic a supply chain risk, and President Trump directed the government to cease working with the start-up. The B-2 bombers were used to target hardened, underground Iranian missile facilities.

Mixed tone · Factual · 2 sources
Neutral

Key Claims

quote

US Treasury is ending use of all Anthropic products, including Claude.

— Secretary Scott Bessent

quote

The decision comes at the direction of Trump.

— Secretary Scott Bessent

factual

US Central Command released photographs showing Tomahawk missiles, F-18 and F-35 fighter jets.

— Unattributed

factual

The US blacklisted the AI company Anthropic a day before attacking Iran.

— Unattributed

factual

Anthropic turned down the Pentagon’s demand for unconditional military use of Claude models.

— Unattributed

Mar 1, 2026

1 article | 1 source
artificial intelligence, us military, claude ai model, iran strikes, anthropic
National Security (1)
The Guardian - World News · Mar 1

US military reportedly used Claude in Iran strikes despite Trump’s ban

Despite Donald Trump's recent ban on Anthropic's AI model Claude, the US military reportedly used it during the joint US-Israel bombardment of Iran. The AI was used for intelligence, target selection, and battlefield simulations. Trump's ban stemmed from concerns about Anthropic's alleged left-leaning bias and restrictions on using Claude for military purposes, particularly after its use in a raid to capture Venezuela's president. The Defense Secretary criticized Anthropic's restrictions but acknowledged the difficulty of immediately removing the AI from military systems, allowing a six-month transition period. OpenAI has since stepped in, agreeing to provide its AI tools, including ChatGPT, for use on the Pentagon's classified network.

Mixed tone · Factual · 6 sources
Neutral

Key Claims

factual

Trump ordered all federal agencies to stop using Claude immediately.

— Unattributed

factual

Anthropic objected to the use of Claude for violent ends, weapons development, or surveillance.

— Anthropic

quote

Hegseth accused Anthropic of “arrogance and betrayal”.

— Pete Hegseth

factual

US military reportedly used Claude in Iran strikes despite Trump’s ban.

— Wall Street Journal and Axios

factual

OpenAI has reached agreement with the Pentagon for use of its tools.

— Sam Altman

Feb 28, 2026

3 articles | 2 sources
artificial intelligence, anthropic, openai, pentagon, department of defense
Political Strategy (2)
The Guardian - World News · Feb 28

OpenAI to work with Pentagon after Anthropic dropped by Trump over company’s ethics concerns

OpenAI has agreed to provide AI services to classified US military networks, a deal announced shortly after Donald Trump directed federal agencies to cease using Anthropic's AI technology. Anthropic's agreement with the Trump administration broke down due to concerns about mass surveillance and autonomous weapons. OpenAI CEO Sam Altman stated the Pentagon agreed to ethical principles prohibiting domestic mass surveillance and ensuring human responsibility in the use of force, including autonomous weapons. Trump criticized Anthropic for attempting to impose its terms of service on the Pentagon, while OpenAI employees had previously expressed solidarity with Anthropic. Altman reassured OpenAI employees that the agreement includes ethical safeguards.

Measured · Factual · 3 sources
Neutral
Al Jazeera · Feb 28

Trump orders federal agencies to stop using Anthropic as dispute escalates

In February 2026, US President Donald Trump ordered all federal agencies to immediately cease using AI technology from Anthropic, citing a dispute with the Pentagon. The directive follows weeks of conflict between the Defense Department and the San Francisco-based AI company over concerns about the military's potential use of AI in warfare. While most agencies must immediately halt use, the Pentagon has a six-month phaseout period for existing systems. Trump criticized Anthropic for resisting the Pentagon's demands for unrestricted military access to its AI, despite the company holding a $200 million contract with the Department of Defense. The disagreement centers on the ethical implications of AI's role in national security.

Mixed tone · Factual · 2 sources
Negative
National Security (1)
Al Jazeera · Feb 28

OpenAI strikes deal with Pentagon to use tech in ‘classified network’

OpenAI, led by CEO Sam Altman, has reached an agreement with the U.S. Department of Defense to utilize its AI technology on a classified network. This deal follows ethical concerns raised by Anthropic, a previous contractor, regarding the military's potential misuse of AI. Altman stated that the Pentagon demonstrated a commitment to safety and agreed that OpenAI's technology would not be used for domestic mass surveillance or autonomous weapons, ensuring human control over the use of force. The announcement came after President Trump ordered federal agencies to cease using Anthropic, whose CEO refused to remove safeguards preventing AI misuse. Human rights advocates have expressed concerns about the unregulated use of AI by militaries.

Measured · Factual · 4 sources
Neutral

Key Claims

factual

OpenAI struck a deal with the Pentagon to supply AI to classified US military networks.

— OpenAI

factual

Donald Trump ordered the government to stop using Anthropic's services.

— Donald Trump

factual

Anthropic sought assurances its technology would not be used for mass surveillance or autonomous weapons.

— Unattributed

quote

Two of our most important safety principles are prohibitions on domestic mass surveillance and human responsibility for the use of force.

— Sam Altman

factual

The Pentagon had demanded Anthropic loosen ethical guidelines on its AI systems or face severe consequences.

— Unattributed

Feb 27, 2026

5 articles | 4 sources
anthropic, artificial intelligence, department of defense, autonomous weapons, supply chain risk
National Security (3)
The Guardian - World News · Feb 27

Trump orders US agencies to stop use of Anthropic technology amid dispute over ethics of AI

Donald Trump has ordered all US federal agencies to immediately cease using Anthropic's AI technology due to a dispute between the Department of Defense and the AI company. The Pentagon demanded Anthropic loosen its ethical guidelines for AI use, but Anthropic refused, citing concerns about unrestricted access to its Claude AI system. The Department of Defense will classify Anthropic as a national security risk, prohibiting contractors and partners from working with them. The Pentagon, which had a $200 million agreement with Anthropic, will continue using their services for a six-month transition period. The disagreement stems from the Pentagon's desire for unfettered access to Claude's capabilities, conflicting with Anthropic's safety-focused approach.

Mixed tone · Factual · 4 sources
Negative
The Guardian - World News · Feb 27

Anthropic says it ‘cannot in good conscience’ allow Pentagon to remove AI checks

Anthropic is refusing a demand from the Pentagon to remove safety precautions from its AI model, Claude, despite the threat of contract cancellation and being labeled a "supply chain risk." The Department of Defense (DoD) wants unfettered access to Claude, while Anthropic opposes its use in autonomous weapons systems and mass domestic surveillance, citing safety concerns. The disagreement follows a $200 million contract awarded to Anthropic last July. CEO Dario Amodei stated the company's preference to continue serving the DoD with the safeguards in place. The standoff is a test of Anthropic's commitment to AI safety and whether AI companies will resist government pressure for controversial uses of the technology.

Measured · Factual · 3 sources
Neutral
Political Strategy (2)
BBC News - World · Feb 27

Trump orders government to stop using Anthropic in battle over AI use

President Trump has ordered all federal agencies to immediately cease using AI technology from Anthropic due to a dispute over military access. The conflict arose after Anthropic refused to grant the U.S. military unfettered access to its AI tools, raising concerns about potential misuse in mass surveillance and autonomous weapons. Secretary of Defense Pete Hegseth has labeled Anthropic a "supply chain risk," potentially barring companies working with the military from doing business with the AI developer. Anthropic plans to challenge this designation in court, arguing it sets a dangerous precedent for companies negotiating with the government. The government will phase out Anthropic's tools over the next six months, primarily impacting companies that also contract with the military.

Mixed tone · Factual · 5 sources
Neutral
South China Morning Post · Feb 27

Trump tells US government to ‘immediately’ stop using Anthropic AI tech

President Trump has ordered all US federal agencies to immediately stop using Anthropic's AI technology. This directive follows Anthropic's refusal to agree to the Pentagon's demand for unconditional military use of its Claude models, citing concerns about mass surveillance and autonomous weapons. Trump announced a six-month phase-out period for agencies like the Department of Defense currently using Anthropic's products. He warned Anthropic to cooperate during the transition or face potential legal repercussions. The order stems from a disagreement over the terms of use for Anthropic's AI, with the government asserting its right to utilize contracted products as it sees fit.

Mixed tone · Factual · 1 source
Negative

Key Claims

factual

Trump orders federal agencies to immediately stop using technology from AI developer Anthropic.

— Donald Trump

factual

Anthropic refused demands to give the US military unfettered access to its AI tools.

— Reuters

quote

Secretary of Defense Pete Hegseth deemed Anthropic a 'supply chain risk'.

— Pete Hegseth

quote

Anthropic said it 'will challenge any supply chain risk designation in court'.

— Anthropic

factual

Anthropic's tools will be phased out of all government work over the next six months.

— Donald Trump

Feb 26, 2026

2 articles | 1 source
anthropic, artificial intelligence, pentagon, defense production act, mass surveillance
National Security (2)
Associated Press (AP) · Feb 26

Anthropic CEO says AI company ‘cannot in good conscience accede’ to Pentagon’s demands

Anthropic CEO Dario Amodei stated the AI company cannot agree to the Pentagon's demands for wider use of its technology. Anthropic is concerned about the potential for its AI chatbot Claude to be used for mass surveillance and in fully autonomous weapons. The Pentagon maintains it intends to use Anthropic's AI legally and denies wanting to use it for mass surveillance or autonomous weapons without human involvement. Negotiations are ongoing, but Anthropic feels the Pentagon's proposed contract language does not adequately address its concerns. The Pentagon has set a Friday deadline for Anthropic to agree to its demands.

Measured · Factual · 2 sources
Neutral
Associated Press (AP) · Feb 26

What to know about the Defense Production Act and the Pentagon’s Anthropic ultimatum

Defense Secretary Pete Hegseth has given AI company Anthropic until Friday to allow unrestricted military use of its technology or risk losing its government contract. Trump administration defense officials are also considering designating Anthropic, maker of the Claude chatbot, as a supply chain risk. They may invoke the Defense Production Act, a Cold War-era law that grants the federal government extensive power to direct private companies to fulfill national defense needs, to gain broader authority over Anthropic's products even without the company's consent. Experts believe using the Act in this manner would be unprecedented and could face legal challenges. The ultimatum highlights the ongoing debate surrounding the role of AI in national security.

Measured · Factual · 2 sources
Neutral

Key Claims

quote

Anthropic CEO said the company “cannot in good conscience accede” to the Pentagon’s demands.

— Dario Amodei

quote

New contract language from the Defense Department “made virtually no progress on preventing Claude’s use for mass surveillance of Americans or in fully autonomous weapons.”

— Anthropic

factual

The Pentagon wants to use Anthropic’s artificial intelligence technology in legal ways.

— Pentagon's top spokesman

quote

The Pentagon “has no interest in using AI to conduct mass surveillance of Americans (which is illegal) nor do we want to use AI to develop autonomous weapons that operate without human involvement.”

— Sean Parnell

factual

Anthropic's policies prevent its models from being used for mass surveillance or autonomous weapons.

— Unattributed

Feb 25, 2026

1 article | 1 source
artificial intelligence, anthropic, ai ethics, us government, defense department
National Security (1)
Al Jazeera · Feb 25

Anthropic vs the Pentagon: Why AI firm is taking on Trump administration

Anthropic, an AI company known for its Claude software, is locked in a dispute with the U.S. government. The Pentagon is pressuring Anthropic to loosen its AI usage terms after reports that Claude was used in a U.S. military operation that resulted in the abduction of Venezuelan President Nicolás Maduro in January. The Defense Secretary has given Anthropic until Friday to comply or risk losing its government contract. Anthropic is resisting, citing safeguards that prevent its technology from being used for domestic surveillance or autonomous weapons development. Anthropic, founded in 2021 by former OpenAI executives, was the first AI developer used in classified operations by the U.S. Defense Department.

Measured · Factual · 4 sources
Negative

Key Claims

factual

Anthropic was the first AI developer to be used in classified operations by the US Defense Department.

— Unattributed

factual

Anthropic is refusing to back down over safeguards which prevent its technology from being used to conduct US domestic surveillance.

— Unattributed

factual

Anthropic positions itself as a “responsible” developer in the AI landscape.

— Unattributed

factual

US Defense Secretary Pete Hegseth has given Anthropic until Friday to loosen its rules or risk losing its government contract.

— The Associated Press and Reuters news agencies

factual

Anthropic alleged that a Chinese state-sponsored hacking group had manipulated the Claude code.

— Anthropic

Feb 24, 2026

2 articles | 2 sources
artificial intelligence, anthropic, military use of ai, ai safeguards, military use
National Security (2)
The Guardian - World News · Feb 24

US military leaders meet with Anthropic to argue against Claude safeguards

US military leaders, including Defense Secretary Pete Hegseth, met with Anthropic CEO Dario Amodei to resolve a dispute over the military's use of Anthropic's AI model, Claude. The Department of Defense (DoD) seeks unfettered access, while Anthropic resists allowing Claude to be used for mass surveillance or autonomous weapons. The DoD has threatened penalties, including contract cancellation and designation as a "supply chain risk," if Anthropic doesn't comply by Friday. The disagreement highlights the tension between the AI industry and government demands for military applications. The DoD recently approved Elon Musk's xAI chatbot and OpenAI's tools for classified systems after those companies agreed to the government's terms, while Anthropic has held out. The meeting follows reports that the US military used Claude in the capture of Venezuelan leader Nicolás Maduro.

Measured · Factual · 6 sources
Neutral
Associated Press (AP) · Feb 24

Hegseth and Anthropic CEO set to meet as debate intensifies over the military’s use of AI

Defense Secretary Pete Hegseth is scheduled to meet with Anthropic CEO Dario Amodei amidst growing debate over the military's use of artificial intelligence. The meeting, confirmed by a defense official, takes place as Anthropic is the only AI company among its peers that has not supplied its technology to a new U.S. military internal network. Amodei has expressed ethical concerns regarding unchecked government use of AI, particularly regarding autonomous weapons and mass surveillance. This meeting highlights the ongoing discussion about AI's role in national security and potential risks associated with its use in sensitive situations. The meeting comes as Hegseth has publicly stated his intention to address what he considers a "woke culture" within the armed forces.

Measured · Factual · 3 sources
Neutral

Key Claims

factual

Defense Secretary Pete Hegseth plans to meet with the CEO of Anthropic.

— AP

factual

Anthropic is the only AI company of its peers to not supply its technology to a new U.S. military internal network.

— AP

factual

Anthropic CEO Dario Amodei has ethical concerns about unchecked government use of AI.

— AP

factual

The Pentagon awarded defense contracts to Anthropic, Google, OpenAI and Elon Musk’s xAI.

— AP

factual

Each AI contract is worth up to $200 million.

— AP