US and tech firms strike deal to review AI models for national security before public release
The US government, through the Center for AI Standards and Innovation (CAISI), has finalized agreements with major tech firms including Google DeepMind, Microsoft, and xAI. These deals will allow CAISI to review early versions of powerful new AI models before their public release.

Briefing Summary
The collaborations aim to understand the capabilities of these advanced AI systems and mitigate potential national security risks, particularly concerning cybersecurity, biosecurity, and chemical weapons. The initiative builds on similar agreements struck with OpenAI and Anthropic two years ago; CAISI has already conducted more than 40 evaluations under those deals. The reviews involve examining models with reduced safety guardrails so that national security implications can be assessed thoroughly, addressing growing concerns about the potential misuse of advanced AI.
Article analysis
Key claims
Microsoft regularly undertakes AI testing, but says testing for national security and large-scale public safety risks must be done in collaboration with governments.
The agreement focuses on identifying national security risks tied to cybersecurity, biosecurity, and chemical weapons.
The review process would be key to understanding AI model capabilities and protecting US national security.
The US government has struck deals with Google DeepMind, Microsoft, and xAI to review early versions of their new AI models before public release.
Fears are growing that powerful AI models could help hackers exploit cybersecurity vulnerabilities at unprecedented scale.