NEWSAR
Multi-perspective news intelligence
SRC · The Guardian - World News
LANG · EN
LEAN · Center-Left
WORDS · 423
ENT · 12
TUE · 2026-05-05 · 18:44 GMT · BRIEF NSR-2026-0505-73981
NSR-2026-0505-73981 · News Report · EN · National Security

US and tech firms strike deal to review AI models for national security before public release


Sanya Mansoor · The Guardian - World News · Filed 2026-05-05 · 18:44 GMT · Lean: Center-Left · Read: 2 min
FIG 01 · The Guardian - World News
Reading time · 2 min
Word count · 423 words
Sources cited · 4
Entities identified · 12
Quality score · 100%
§ 01

Briefing Summary

AI-generated
NEWSAR · AI

The US government, through the Center for AI Standards and Innovation (CAISI), has finalized agreements with major tech firms including Google DeepMind, Microsoft, and xAI. These deals will allow CAISI to review early versions of powerful new AI models before their public release. The collaborations aim to understand the capabilities of these advanced AI systems and mitigate potential national security risks, particularly concerning cybersecurity, biosecurity, and chemical weapons. This initiative builds on similar agreements made with OpenAI and Anthropic two years ago, with CAISI having already conducted over 40 evaluations. The reviews involve examining models with reduced safety guardrails to thoroughly assess national security implications, addressing growing concerns about the potential misuse of advanced AI.

Confidence 0.90 · Sources 4 · Claims 5 · Entities 12
§ 02

Article analysis

Model · rule-based
Framing · National Security, Technology
Tone · Measured (AI-assessed; scale: Calm – Neutral – Alarmist)
Factuality · 0.80 / 1.00 · Factual (scale: Low – High)
Sources cited · 4 · Well sourced (scale: Few – Many)
§ 03

Key claims

5 extracted
01

Microsoft regularly undertakes AI testing, but testing for national security and large-scale public safety risks must be collaborative with governments.

Quote · Microsoft
Confidence · 1.00
02

The agreement focuses on identifying national security risks tied to cybersecurity, biosecurity, and chemical weapons.

Factual · Center for AI Standards and Innovation (CAISI)
Confidence · 1.00
03

The review process would be key to understanding AI model capabilities and protecting US national security.

Factual · Center for AI Standards and Innovation (CAISI)
Confidence · 1.00
04

The US government has struck deals with Google DeepMind, Microsoft and xAI to review early versions of their new AI models before public release.

Factual · Center for AI Standards and Innovation (CAISI)
Confidence · 1.00
05

Fears grow that powerful AI models could help hackers exploit cybersecurity vulnerabilities at an unprecedented scale.

Factual · AI safety experts, government officials and tech companies
Confidence · 0.90
§ 04

Full report

2 min read · 423 words
The US government has struck deals with Google DeepMind, Microsoft and xAI to review early versions of their new AI models before they are released to the public.

The Center for AI Standards and Innovation (CAISI), part of the US Department of Commerce, announced the agreements on Tuesday, saying the review process would be key to understanding the capabilities of new and powerful AI models as well as to protecting US national security. These collaborations will help the federal government “scale (its) work in the public interest at a critical moment”, the agency said in a press release.

“Independent, rigorous measurement science is essential to understanding frontier AI and its national security implications,” said Chris Fall, CAISI director.

CAISI is an agency meant to facilitate collaboration between the tech industry and the federal government in developing standards and assessing risks for commercial AI systems. The agreement between the agency and the AI firms is focused largely on identifying national security risks tied to cybersecurity, biosecurity and chemical weapons.

OpenAI and Anthropic inked similar deals with the Biden administration two years ago, and CAISI notes the agency has already completed more than 40 such evaluations, including on unreleased models. It is common for developers to share unreleased AI models with the government that have reduced or removed safety guardrails, CAISI said in its press release. This helps the government “thoroughly evaluate national security-related capabilities and risks”, the agency noted.

The new agreements come as fears grow that the newest and most powerful AI models – such as Anthropic’s Mythos – could be dangerous to release to the public; AI safety experts, government officials and tech companies fear the expansive capabilities of these models could help hackers exploit cybersecurity vulnerabilities at an unprecedented scale. Anthropic limited its rollout of Mythos to a few companies, and initiated the collaborative Project Glasswing to bring together tech companies “to secure the world’s most critical software”.

The New York Times and Wall Street Journal reported Monday the Trump administration was mulling over a potential executive order to create a government oversight process for these AI tools; the administration has characterized this reporting as “speculation”.

Google and xAI did not immediately respond to a request for comment.

Microsoft announced a similar agreement in the UK on Tuesday with the government-backed AI Security Institute, which also focuses on safe AI development.

“While Microsoft regularly undertakes many types of AI testing on its own, testing for national security and large-scale public safety risks necessarily must be a collaborative endeavor with governments,” Microsoft wrote in a blog post about the two deals.
§ 05

Entities

12 identified
§ 06

Keywords & salience

10 terms
national security · 1.00
ai models · 1.00
government review · 0.90
tech firms · 0.80
risk assessment · 0.70
cybersecurity · 0.60
ai safety · 0.50
biosecurity · 0.50
center for ai standards and innovation · 0.40
chemical weapons · 0.40
§ 07

Topic connections

Interactive graph
Network visualization showing 51 related topics
Node types: Person, Organization, Location, Event · edge numbers = shared articles