NEWSAR
Multi-perspective news intelligence
SRC · Al Jazeera
LANG · EN
LEAN · Center
WORDS · 307
ENT · 11
TUE · 2026-05-05 · 16:53 GMT · BRIEF NSR-2026-0505-73944
News · Microsoft, Google, xAI give US access to AI models for security testing
NSR-2026-0505-73944 · News Report · EN · National Security

Microsoft, Google, xAI give US access to AI models for security testing

Microsoft, Google, and xAI have agreed to grant the U.S. federal government access to their advanced artificial intelligence models for national security testing.

By AP and Reuters · Al Jazeera · Filed 2026-05-05 · 16:53 GMT · Lean · Center · Read · 2 min
FIG 01 · Al Jazeera
Reading time · 2 min
Word count · 307 words
Sources cited · 2
Entities identified · 11
Quality score · 100%
§ 01

Briefing Summary

AI-generated
NEWSAR · AI

Microsoft, Google, and xAI have agreed to grant the U.S. federal government access to their advanced artificial intelligence models for national security testing. This initiative, announced by the Department of Commerce's Center for AI Standards and Innovation (CAISI), allows U.S. officials to evaluate these AI systems before deployment, assessing their capabilities and potential security risks. The agreement aims to identify threats such as cyberattacks and military misuse posed by powerful AI, especially in light of recent concerns about models like Anthropic's Mythos. This move fulfills a previous administration pledge to partner with tech companies on vetting AI for national security. Microsoft will collaborate with U.S. government scientists on testing and developing shared datasets for evaluating AI systems, mirroring a similar agreement with the UK's AI Security Institute.

Confidence 0.90 · Sources 2 · Claims 5 · Entities 11
§ 02

Article analysis

Model · rule-based
Framing · National Security, Technology
Tone · Measured (AI-assessed; scale: Calm – Neutral – Alarmist)
Factuality · 0.80 / 1.00 · Factual (scale: Low – High)
Sources cited · 2 · Limited (scale: Few – Many)
§ 03

Key claims

5 extracted
01

Microsoft will collaborate with US government scientists to test AI systems for unexpected behaviors.

factual · Microsoft
Confidence
1.00
02

The development of advanced AI systems, like Anthropic's Mythos, has raised global concerns about their potential to aid hackers.

factual
Confidence
1.00
03

Concerns are growing in Washington over national security risks posed by powerful AI systems.

factual
Confidence
1.00
04

The agreement allows the US government to evaluate AI models before deployment and assess security risks.

factual
Confidence
1.00
05

Microsoft, Google, and xAI will grant the US government access to their AI models for national security testing.

factual · Microsoft, Google, xAI
Confidence
1.00
§ 04

Full report

2 min read · 307 words
The deal comes days after the Pentagon announced an agreement with seven tech giants to use AI in classified systems.

Tech giants Microsoft, Google and xAI say they will allow the United States federal government access to their new artificial intelligence models for national security testing.

The Center for AI Standards and Innovation (CAISI) at the Department of Commerce announced the agreement on Tuesday amid increasing concerns about the capabilities that Anthropic’s newly unveiled Mythos model could give hackers.

Under the new agreement, the US government will be allowed to evaluate the models before deployment and conduct research to assess their capabilities and security risks.

The agreement fulfils a pledge the administration of US President Donald Trump made in July to partner with technology companies to vet their AI models for “national security risks”.

Microsoft will work with US government scientists to test AI systems “in ways that probe unexpected behaviors”, the company said in a statement. Together they will develop shared data sets and workflows for testing the company’s models, the company said.

Microsoft signed a similar agreement with the United Kingdom’s AI Security Institute, according to the statement.

Concern is growing in Washington over the national security risks posed by powerful AI systems. By securing early access to frontier models, US officials are aiming to identify threats ranging from cyberattacks to military misuse before the tools are widely deployed.

The development of advanced AI systems, including Anthropic’s Mythos, in recent weeks has created a stir globally, including among US officials and corporate America, over their ability to supercharge hackers.
§ 05

Entities

11 identified
§ 06

Keywords & salience

10 terms
national security
1.00
artificial intelligence
1.00
security testing
0.90
ai models
0.90
us government
0.80
xai
0.70
google
0.70
microsoft
0.70
cyberattacks
0.60
frontier models
0.50
§ 07

Topic connections

Interactive graph
Network visualization showing 51 related topics