Microsoft, Google, xAI give US access to AI models for security testing
Microsoft, Google, and xAI have agreed to grant the U.S. federal government access to their advanced artificial intelligence models for national security testing.

Briefing Summary
Microsoft, Google, and xAI have agreed to grant the U.S. federal government access to their advanced artificial intelligence models for national security testing. The initiative, announced by the Department of Commerce's Center for AI Standards and Innovation (CAISI), allows U.S. officials to evaluate these AI systems before deployment, assessing their capabilities and potential security risks. The agreement aims to identify threats that powerful AI could pose, such as enabling cyberattacks or military misuse, especially in light of recent concerns about models like Anthropic's Mythos. The move fulfills a previous administration pledge to partner with tech companies on vetting AI for national security. Microsoft will collaborate with U.S. government scientists on testing and on developing shared datasets for evaluating AI systems, mirroring a similar agreement with the UK's AI Security Institute.
Article analysis
Key claims
Microsoft will collaborate with US government scientists to test AI systems for unexpected behaviors.
The development of advanced AI systems, like Anthropic's Mythos, has raised global concerns about their potential to aid hackers.
Concerns are growing in Washington over national security risks posed by powerful AI systems.
The agreement allows the US government to evaluate AI models before deployment and assess security risks.
Microsoft, Google, and xAI will grant the US government access to their AI models for national security testing.