Anthropic accidentally released part of the internal source code for its AI-powered coding assistant,
Claude Code, due to “human error”, the company said on Tuesday.

An internal-use file mistakenly included in a software update pointed to an archive containing nearly 2,000 files and 500,000 lines of code, which were quickly copied to the developer platform
GitHub. A post on X sharing a link to the leaked code had more than 29m views early on Wednesday, and a rewritten version of the source code quickly became
GitHub’s fastest-ever downloaded repository.
Anthropic issued copyright takedown requests to try to contain the code’s spread. Within the code, users spotted blueprints for a Tamagotchi-esque coding assistant and an always-on AI agent, per the Verge.

“Earlier today, a
Claude Code release included some internal source code. No sensitive customer data or credentials were involved or exposed,” an
Anthropic spokesperson said. “This was a release packaging issue caused by human error, not a security breach.”

The exposed code was related to the tool’s internal architecture but did not contain confidential data from
Claude, the underlying AI model by
Anthropic.
Claude Code’s source code was already partially known, as the tool had been reverse-engineered by independent developers. An earlier version of the assistant had its source code exposed in February 2025.
Claude Code has emerged as a key product for
Anthropic, as the company’s paid subscriber base continues to grow. TechCrunch reported last week that paid subscriptions have more than doubled this year, per an
Anthropic spokesperson.
Anthropic’s
Claude chatbot also received a popularity boost amid CEO
Dario Amodei’s tussle with the Pentagon;
Claude climbed to the top spot of
Apple’s chart of top free apps in the US a little over a month ago. Amodei had refused to back down on red lines around the use of his company’s technology for mass surveillance and fully autonomous weapons.

This is the second time that
Anthropic has had a data leak in recent weeks. Fortune previously reported on a separate breach and noted that the company was storing thousands of internal files on publicly accessible systems. That included a draft of a blog post that referred to an upcoming model known as “Mythos” and “Capybara”.

Some experts worry the leaks suggest internal security vulnerabilities within
Anthropic. That could be particularly troubling for a company focused on AI safety.

The leaks could also help competitors, like
OpenAI and
Google, better understand how
Claude Code’s AI system works. The Wall Street Journal reported that the most recent leak included commercially sensitive information, such as tools and instructions for getting its AI models to work as coding agents.

The latest breach comes weeks after the US government designated
Anthropic as a supply chain risk;
Anthropic is fighting that designation in court. Last week, a US district judge granted a temporary injunction to block it.