NEWSAR
Multi-perspective news intelligence

Source: ProPublica · Language: EN · Lean: Center-Left · Words: 910 · Entities: 11
MON · 2026-04-06 · 09:00 GMT · BRIEF NSR-2026-0406-54433 · Analysis · Technology

The Federal Government Is Rushing Toward AI. Our Reporting Offers Three Cautionary Tales.

A ProPublica cybersecurity reporter highlights potential pitfalls as the U.S. federal government rapidly adopts artificial intelligence.

By Renee Dudley, ProPublica
Filed 2026-04-06 · 09:00 GMT · Lean: Center-Left · Read: 4 min
FIG 01 · ProPublica
Reading time: 4 min · Word count: 910 · Sources cited: 3 · Entities identified: 11 · Quality score: 100%
§ 01

Briefing Summary

AI-generated
NEWSAR · AI

A ProPublica cybersecurity reporter highlights potential pitfalls as the U.S. federal government rapidly adopts artificial intelligence. Drawing parallels to the earlier transition to cloud computing, the report cautions against the allure of "free" or low-cost AI tools offered by tech companies. The Trump administration announced agreements with tech companies that would allow federal agencies to purchase enterprise AI tools at government-friendly pricing. An investigation into Microsoft's "free" security upgrades revealed a strategy to lock in government customers, leading to higher subscription fees later. The report suggests policymakers should be wary of such offers and consider the long-term implications of relying on specific vendors as agencies integrate AI technologies.

Confidence: 0.90 · Sources: 3 · Claims: 5 · Entities: 11
§ 02

Article analysis

Model: rule-based
Framing: Technology · Economic Impact
Tone: Measured (AI-assessed; scale: Calm – Neutral – Alarmist)
Factuality: 0.70 / 1.00 · Factual (scale: Low – High)
Sources cited: 3 · Well sourced (scale: Few – Many)
§ 03

Key claims

5 extracted
01
Agencies could use OpenAI’s ChatGPT for $1, Google’s Gemini for 47 cents, and Grok by xAI for 42 cents.
Type: factual · Source: ProPublica Reporting · Confidence: 1.00

02
The Trump administration announced agreements with tech companies for federal agencies to purchase AI tools at government-friendly pricing.
Type: factual · Source: ProPublica Reporting · Confidence: 1.00

03
Microsoft pledged $150 million in technical services to help the U.S. bolster its digital security.
Type: factual · Source: ProPublica Reporting · Confidence: 1.00

04
After installing upgrades, federal customers would be effectively locked in, because shifting to a competitor would be cumbersome and costly.
Type: factual · Source: ProPublica Reporting · Confidence: 0.90
05
One former Microsoft salesperson said the plan “was successful beyond what any of us could have imagined.”
Type: quote · Source: Former Microsoft salesperson · Confidence: 0.80
§ 04

Full report

4 min read · 910 words
As a cybersecurity reporter at ProPublica, much of my work over the past two years has focused on how the federal government and its IT contractors, like Microsoft, have navigated major technological transitions. The one now in the news every day is artificial intelligence. This emerging technology has its grip on everyone: home users, corporations and the federal government are all rushing to use it. President Donald Trump and his Cabinet say AI will transform the nation, making us more prosperous, efficient and secure — if only we can adopt it fast enough.

But this messaging isn’t new. President Barack Obama’s administration used nearly identical language a decade and a half ago as the U.S. barreled into the technological revolution of cloud computing. I’ve studied how the federal government has handled — and mishandled — this transition over the past two decades, and my reporting offers some cautionary tales and valuable lessons as policymakers encourage the use of AI and federal agencies adopt the technology.

Lesson 1: Be wary of freebies

Then: In the early 2020s, a series of cyberattacks linked to Russia, China and Iran left the federal government reeling. The Biden administration called on major tech companies to help the U.S. bolster its defenses. In response, Microsoft CEO Satya Nadella pledged to give the government $150 million in technical services to help upgrade its digital security. The company also offered a “free” security upgrade for government customers.

Now: Last year, the Trump administration announced a raft of agreements with tech companies that were meant to help federal agencies “purchase enterprise AI tools at government-friendly pricing.” Agencies could use OpenAI’s ChatGPT for $1. Google’s Gemini for 47 cents. Grok by xAI for 42 cents. The administration hoped that the low-cost pricing would make it “easier for federal teams to acquire powerful AI capabilities … to enhance mission delivery and operational efficiency.”

The takeaway: Be wary of freebies. Our investigation into Microsoft’s seemingly straightforward commitment revealed a more complex, profit-driven agenda. After installing the upgrades, federal customers would be effectively locked in, because shifting to a competitor after the free trial would be cumbersome and costly. At that point, the customer would have little choice but to pay the higher subscription fees. The plan worked: One former Microsoft salesperson told me “it was successful beyond what any of us could have imagined.”

In response to questions about the commitment, Microsoft has said its “sole goal during this period was to support an urgent request by the Administration to enhance the security posture of federal agencies who were continuously being targeted by sophisticated nation-state threat actors.”

Agencies looking to buy AI tools at discounted rates today must consider how the costs might balloon down the road. The General Services Administration warns that AI “usage costs can grow quickly without proper monitoring and management controls” and advises agencies to “set usage limits and regularly review consumption reports.”

Lesson 2: Oversight programs are only as effective as their resources

Then: In the Obama era, the federal government shifted its sensitive information and computing needs to data centers owned and operated by private companies. Acknowledging the potential risks, the administration created the Federal Risk and Authorization Management Program, or FedRAMP, in 2011 to help ensure the security of the cloud computing services that it was encouraging U.S. agencies to use. But in my recent investigation of the program, I found it was no match for Microsoft, which effectively wore down the FedRAMP team over five years as the company sought the program’s seal of approval for a major cloud offering known as GCC High. Despite serious reservations about its cybersecurity, FedRAMP ultimately authorized the product, in part because it lacked the resources to keep going.

In response to questions, Microsoft told me: “We stand by our products and the comprehensive steps we’ve taken to ensure all FedRAMP-authorized products meet the security and compliance requirements necessary.”

Now: Today, this tiny outpost within the General Services Administration has even fewer resources to oversee the cloud technology on which the government relies — including AI. FedRAMP says it now operates “with an absolute minimum of support staff” and “limited customer service.” The program was an early target of the Trump administration’s Department of Government Efficiency.

The takeaway: FedRAMP, which a 2024 White House memo said “must be an expert program that can analyze and validate the security claims” of cloud providers, is now little more than a rubber stamp for the tech industry, former employees told me. As federal agencies adopt AI tools that draw upon reams of sensitive information, the implications of this downsizing for federal cybersecurity are far-reaching. A GSA spokesperson defended the program and said FedRAMP now “operates with strengthened oversight and accountability mechanisms.”

Lesson 3: “Independent” reviews are only so independent

Then: The government has long relied on so-called third-party assessors to verify the security claims made by cloud service providers like Microsoft and Google. In theory, these firms are supposed to be independent experts that offer a recommendation to FedRAMP on whether a product meets federal standards. But in practice, their independence has an asterisk: They are paid by the companies they are evaluating. My recent investigation found that this setup creates an inherent conflict of interest. In the case of Microsoft’s GCC High, two assessors recommended the product despite being unable to fully vet it, according to a former FedRAMP reviewer. One of those firms did not respond to my questions, and the other denied this account.
§ 05

Entities

11 identified
§ 06

Keywords & salience

10 terms
artificial intelligence: 1.00
federal government: 0.90
technology adoption: 0.70
cybersecurity: 0.70
IT contractors: 0.60
freebies: 0.50
digital security: 0.50
cloud computing: 0.50
cyberattacks: 0.40
profit-driven agenda: 0.40
§ 07

Topic connections

Network visualization showing 51 related topics