Hey there! 👋
Welcome back to SavvyMonk, your one-stop for AI and tech news that actually matters.
Yesterday, Anthropic did something no company wants to do: it accidentally published the full source code of its most valuable product to a public registry. The internet found it in minutes.
Let's get into it.
The best marketing ideas come from marketers who live it.
That’s what this newsletter delivers.
The Marketing Millennials is a look inside what’s working right now for other marketers. No theory. No fluff. Just real insights and ideas you can actually use—from marketers who’ve been there, done that, and are sharing the playbook.
Every newsletter is written by Daniel Murray, a marketer obsessed with what goes into great marketing. Expect fresh takes, hot topics, and the kind of stuff you’ll want to steal for your next campaign.
Because marketing shouldn’t feel like guesswork. And you shouldn’t have to dig for the good stuff.
TODAY'S DEEP DIVE
Claude Code's Source Code Leaked. Here's What Was Inside.
On the morning of March 31, 2026, security researcher Chaofan Shou was poking around npm when he found something that wasn't supposed to be there.
Version 2.1.88 of the @anthropic-ai/claude-code package had shipped with a 59.8 MB JavaScript source map file bundled inside. Source maps are debugging tools. They translate compressed, minified code back into the readable original. They're meant to live on internal servers, not public package registries.
This one was downloadable from Anthropic's own cloud storage. Shou had posted about it on X with a direct download link.
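For the technically curious: a source map is just a JSON file, and under the standard Source Map v3 format its `sourcesContent` field carries the original, un-minified source verbatim. That's why shipping one is equivalent to shipping the code itself. A toy sketch (the map object below is invented for illustration, not Anthropic's actual file):

```typescript
// A Source Map v3 file embeds the ORIGINAL sources verbatim in sourcesContent.
interface SourceMapV3 {
  version: number;
  sources: string[];        // original file paths
  sourcesContent?: string[]; // original, un-minified source text
  names: string[];
  mappings: string;          // VLQ-encoded position data
}

// Toy example standing in for the real 59.8 MB file.
const map: SourceMapV3 = {
  version: 3,
  sources: ["src/query-engine.ts"],
  sourcesContent: ["export function run() { /* readable original code */ }"],
  names: [],
  mappings: "AAAA",
};

// Recover every original file embedded in the map.
function extractOriginals(m: SourceMapV3): Record<string, string> {
  const out: Record<string, string> = {};
  m.sources.forEach((path, i) => {
    out[path] = m.sourcesContent?.[i] ?? "";
  });
  return out;
}

console.log(Object.keys(extractOriginals(map)));
```

Anyone who downloaded the map could run something like this and get the readable codebase back, file by file.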
Anthropic confirmed the incident to VentureBeat: "Earlier today, a Claude Code release included some internal source code. No sensitive customer data or credentials were involved or exposed. This was a release packaging issue caused by human error, not a security breach." The company has since removed the file and pulled older package versions. The code had already been archived to GitHub, where it surpassed 1,100 stars and 1,900 forks within hours.
What Actually Leaked
The exposed codebase is massive. 1,900 files. 512,000+ lines of TypeScript. This is the complete client-side CLI that developers install and run locally. It is not Claude's model weights, training data, or server infrastructure. Your conversations and API keys were not exposed. But the engineering architecture behind one of the most commercially successful AI tools ever built? That's now public.
The main entry point alone is 785KB. The tool system covers 40+ discrete capabilities, each permission-gated. The Query Engine, which handles all LLM API calls, streaming, caching, and orchestration, is a 46,000-line module. The base tool definition runs to 29,000 lines. This is not a chat wrapper.
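The leak doesn't hand us Anthropic's exact interfaces, but a permission-gated tool in the style described above might look something like this sketch. Every name here (`ToolDef`, `PermissionScope`, `invoke`) is an illustrative assumption, not an identifier from the leaked code:

```typescript
// Hypothetical sketch of a permission-gated tool system.
type PermissionScope = "read" | "write" | "execute";

interface ToolDef<In, Out> {
  name: string;
  requiredScopes: PermissionScope[];
  run: (input: In) => Out;
}

// Every call is checked against the user's granted scopes before running.
function invoke<In, Out>(
  tool: ToolDef<In, Out>,
  granted: Set<PermissionScope>,
  input: In,
): Out {
  for (const scope of tool.requiredScopes) {
    if (!granted.has(scope)) {
      throw new Error(`Tool "${tool.name}" requires "${scope}" permission`);
    }
  }
  return tool.run(input);
}

const readFileTool: ToolDef<string, string> = {
  name: "ReadFile",
  requiredScopes: ["read"],
  run: (path) => `contents of ${path}`, // stand-in for a real file read
};

console.log(invoke(readFileTool, new Set<PermissionScope>(["read"]), "README.md"));
```

The point of the pattern: no tool runs without an explicit grant, which is what makes 40+ capabilities safe to expose to an autonomous agent.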
The Secrets Inside
Beyond the architecture, the leak exposed things Anthropic clearly wasn't ready to announce.
The code reveals a three-layer memory architecture that developers are pointing to as the reason Claude Code holds up reliably over long, complex work sessions. Most AI coding tools fall apart as sessions grow longer. Claude Code apparently had a specific engineering solution for this, and now everyone knows what it is.
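We don't know what the three layers are actually called. The sketch below assumes a session/project/long-term split purely for illustration, with lookups falling through from the most volatile layer to the most durable:

```typescript
// Hypothetical three-layer memory; layer names are assumptions, not from the leak.
interface MemoryLayer {
  name: string;
  entries: Map<string, string>;
}

class LayeredMemory {
  private layers: MemoryLayer[];

  constructor(layers: MemoryLayer[]) {
    this.layers = layers;
  }

  // Check layers in order; the first hit wins.
  recall(key: string): string | undefined {
    for (const layer of this.layers) {
      const hit = layer.entries.get(key);
      if (hit !== undefined) return hit;
    }
    return undefined;
  }

  remember(layerName: string, key: string, value: string): void {
    const layer = this.layers.find((l) => l.name === layerName);
    layer?.entries.set(key, value);
  }
}

const memory = new LayeredMemory([
  { name: "session", entries: new Map() },   // current conversation
  { name: "project", entries: new Map() },   // survives across sessions
  { name: "long-term", entries: new Map() }, // durable, consolidated facts
]);

memory.remember("project", "build-command", "npm run build");
console.log(memory.recall("build-command"));
```

A structure like this explains why long sessions don't degrade: when the volatile layer fills up or resets, durable facts are still one lookup away.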
A feature called KAIROS, referenced over 150 times in the source, describes an autonomous background mode. The name comes from the ancient Greek concept of kairos, roughly "the opportune moment."
The idea: Claude Code keeps working even when you're idle. It consolidates memory, resolves contradictions in its understanding of a project, and sharpens vague observations into reliable facts. When you return to a session, the agent has already done the housework. This was not a public roadmap item.
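Here's one way idle-time consolidation like that could work, as a hedged sketch. Nothing below is from the leaked source; the threshold-based promotion of observations into facts is an assumption about how "sharpening vague observations" might be implemented:

```typescript
// Hypothetical idle-time consolidation: repeatedly confirmed, uncontradicted
// observations get promoted to facts; contradicted ones are discarded.
interface Observation {
  claim: string;
  support: number;       // times this claim was re-confirmed
  contradicted: boolean; // true if later evidence conflicted with it
}

function consolidate(observations: Observation[], threshold = 3): string[] {
  return observations
    .filter((o) => !o.contradicted && o.support >= threshold)
    .map((o) => o.claim);
}

const facts = consolidate([
  { claim: "tests live in /spec", support: 5, contradicted: false },
  { claim: "project uses yarn", support: 1, contradicted: false },  // too weak
  { claim: "CI runs on push", support: 4, contradicted: true },     // conflicting
]);

console.log(facts); // only the well-supported, uncontradicted claim survives
```

Run during idle time, a pass like this is cheap, and it means the agent resumes with a cleaner model of the project than it paused with.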
Then there's what the code calls Undercover Mode: explicit instructions directing the agent to scrub references to its AI origins from public git commit messages when operating in open-source repositories.
Anthropic's internal model names were never supposed to surface in public logs, and the irony of a leak exposing a system built to prevent leaks was not lost on anyone.
The code also exposed internal model codenames. Capybara is a Claude 4.6 variant. Fennec is Opus 4.6. A model called Numbat is still in testing and hasn't been announced. Internal comments show that Capybara's eighth iteration carries a false claims rate of 29 to 30 percent, up significantly from 16.7 percent in its fourth version. Developers also found an assertiveness counterweight designed to keep the model from being too aggressive when rewriting code.
Oh, and buried somewhere in those 512,000 lines: a fully functional virtual pet system. 18 species. Rarity tiers. Shiny variants. A SOUL description written by Claude when your pet first hatches. It's gated behind a feature flag called BUDDY. It is completely real.
The Separate Security Problem
There's another issue that's more immediately dangerous than the leak itself.
In the same window as the leak, attackers slipped malicious code into a popular software library that Claude Code depends on. Think of it like a trusted supplier quietly adding something harmful to a product before it ships. If you updated Claude Code on March 31, 2026, between 12:21 a.m. and 3:29 a.m. UTC, there's a chance that harmful code ended up on your machine alongside it.
If you did update during that window, the safe move is to assume your machine may be compromised, change your passwords and API keys, and reach out to someone technical who can help you check. Anthropic now recommends downloading Claude Code directly from their website rather than through the usual developer channel, because that version doesn't rely on the same third-party libraries where the attack happened.
The Stakes
Claude Code is not a side project. Anthropic's annualized revenue run rate sits at roughly $19 billion as of March 2026.
Claude Code alone accounts for an estimated $2.5 billion of that, a figure that has more than doubled since the start of the year. Eighty percent comes from enterprise clients. What those clients are paying for, in part, is the belief that the technology is proprietary and protected. That belief took a real hit Tuesday morning.
The Bottom Line
This was human error, not an attack, and Anthropic confirmed it quickly.
Although no model weights or customer data were exposed, 512,000 lines of carefully engineered, proprietary architecture are now a permanent part of the public record, and competitors have a blueprint.
AI PROMPT OF THE DAY
Category: Security Audit
"Review the following list of npm packages in my project's package-lock.json and flag any that have known vulnerabilities, unexpected new versions, or unusual transitive dependencies. For each flagged package, explain the risk and suggest what to do: [paste your dependency list here]."
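If you want to generate that dependency list, here's a small helper sketch that flattens a package-lock.json (assuming npm's v2/v3 lockfile layout, where the `packages` object is keyed by `node_modules/...` paths) into `name@version` lines you can paste straight into the prompt:

```typescript
import { existsSync, readFileSync } from "node:fs";

// Flatten the lockfile's "packages" map into "name@version" lines.
function flattenPackages(lock: { packages?: Record<string, { version?: string }> }): string[] {
  const out: string[] = [];
  for (const [path, meta] of Object.entries(lock.packages ?? {})) {
    if (path === "") continue; // the root project entry has an empty key
    // Keys look like "node_modules/foo" or "node_modules/a/node_modules/b";
    // the dependency name is everything after the last "node_modules/".
    const name = path.slice(path.lastIndexOf("node_modules/") + "node_modules/".length);
    out.push(`${name}@${meta.version ?? "unknown"}`);
  }
  return out;
}

// Print the list for the lockfile in the current directory, if there is one.
const lockPath = process.argv[2] ?? "package-lock.json";
if (existsSync(lockPath)) {
  console.log(flattenPackages(JSON.parse(readFileSync(lockPath, "utf8"))).join("\n"));
}
```

Run it from your project root (or pass a lockfile path as the first argument) and paste the output into the audit prompt above.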
ONE LAST THING
Anthropic built an entire internal system to prevent its AI from leaking codenames in public git commits. Then it shipped the whole source in a debug file, one likely built by Claude itself. There's something almost poetic about that. The real story here isn't embarrassment. It's that the gap between the public face of these tools and what's running under the hood is enormous, and it took an accident to close it. Hit reply; I read every response.
See you in the next one.
— Vivek
P.S. Know a developer or tech-curious friend who'd appreciate a straight take on stories like this? They can subscribe at https://savvymonk.beehiiv.com/


