Hey there! 👋
Welcome back to SavvyMonk, your one-stop shop for AI and tech news that actually matters.
Today's story reads like a cybersecurity thriller, except every twist makes Anthropic look worse. The company built an AI model so powerful it refused to release it publicly. Then a small group of unauthorized users got in anyway, on day one, by guessing a URL.
Let's get into it.
Speak naturally. Send without fixing.
Wispr Flow turns your voice into clean, professional text you can send the moment you stop talking. Not rough transcription you have to clean up. Actual polished text — ready for email, Slack, or any app.
Speak the way you think. Go on tangents. Change your mind mid-sentence. Flow strips the filler, fixes the grammar, and gives you text that reads like you spent five minutes writing it.
89% of messages sent with zero edits. Millions of professionals use Flow daily, including teams at OpenAI, Vercel, and Clay. Works on Mac, Windows, and iPhone.
TODAY'S DEEP DIVE
Claude Mythos Got Breached, and Three Other Companies Are to Blame
First, some context. Claude Mythos is Anthropic's newest and most restricted AI model, and it's not a chatbot. It's a cybersecurity tool built specifically to find vulnerabilities in software, essentially an AI that scans code and systems looking for weaknesses that hackers could exploit.
Anthropic was so worried about Mythos falling into the wrong hands that it created a special program called Project Glasswing to control access. Only about 40 hand-picked organizations, including Apple, Amazon, Google, Microsoft, Nvidia, and Cisco, were approved to test the model. The idea was simple. Let trusted companies use Mythos to find and fix security holes in their own systems before bad actors get access to equally powerful tools.
Anthropic claimed Mythos had already discovered "thousands of additional high- and critical-severity vulnerabilities" across major operating systems and web browsers. The message to the world was clear. This AI is so powerful, we can't let just anyone use it.
That message lasted about one day.
How a Discord Group Got In
On the same day Anthropic began rolling out the Mythos preview to approved companies in late February, a small group of unauthorized users gained access to the model. They weren't elite hackers but members of a private Discord channel dedicated to unreleased AI models.
Here's how they did it, and this is where it gets interesting.
One person in the group was a contractor at a third-party vendor that does work on Anthropic's behalf. That person used their existing access to tap into the environment where Mythos lived. But the group also had another trick. They guessed the URL where Mythos was hosted, based on knowledge of how Anthropic names and organizes its internal tools.
How did they know Anthropic's naming conventions? That information came from a completely separate data breach at a company called Mercor.
The Chain of Breaches
This is where the story turns into a cybersecurity domino effect. To understand how Mythos was accessed, you need to follow the chain backward.
It starts with LiteLLM, an open-source tool that helps connect applications to AI services. A hacking group called TeamPCP managed to inject malicious code into LiteLLM, which gave them access to the credentials of any company using the tool.
One of those companies was Mercor, an AI staffing startup valued at around 10 billion dollars. Mercor supplies contractors to major AI labs, including Anthropic, Meta, and OpenAI. A separate hacking group called Lapsus exploited the LiteLLM compromise to break into Mercor specifically. The result was massive. An estimated 4 terabytes of data was stolen, including database records, source code, Slack messages, passport scans, Social Security numbers, and video interviews of contractors.
Buried in that mountain of stolen data were details about how Anthropic names its internal tools and organizes its file systems. That's the information the Discord group used to guess where Mythos was hosted.
So the chain goes like this. LiteLLM gets compromised. That compromise hits Mercor. Mercor's breach exposes Anthropic's internal naming patterns. And those patterns help unauthorized users find and access Mythos.
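That last step is worth pausing on. Here's a tiny Python sketch of why leaked naming conventions are dangerous. Everything in it is hypothetical — the tool names, environments, and domain are all made up, not Anthropic's real ones — but it shows how knowing a company's internal naming scheme collapses a huge guessing problem into a short list of candidate URLs:

```python
# Hypothetical sketch: why leaked naming conventions matter.
# None of these names or domains are real -- they illustrate how an
# attacker who learns a company's internal naming scheme can enumerate
# a small set of candidate URLs instead of guessing blindly.

from itertools import product

# Suppose a breach reveals that internal tools follow a pattern like:
#   https://<tool>-<stage>.internal.example.com
tools = ["mythos", "glasswing", "scanner"]   # guessed tool names
stages = ["preview", "staging", "prod"]      # guessed environments

# Build every combination of tool and stage into a candidate URL.
candidates = [
    f"https://{tool}-{stage}.internal.example.com"
    for tool, stage in product(tools, stages)
]

for url in candidates:
    print(url)
```

Three tool names times three environments is just nine URLs to try — a trivial amount of work once the pattern itself has leaked.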
That's three separate breaches, all connected, all leading to the same place.
What the Unauthorized Users Actually Did
Here's the surprising part. The Discord group reportedly hasn't used Mythos for anything malicious. According to Bloomberg's reporting, they've only asked it to perform simple tasks like creating websites. The group deliberately avoided running cybersecurity-related prompts through the model, apparently to stay under Anthropic's radar and keep their access alive.
Anthropic has confirmed it is investigating but says it has found "no evidence that the unauthorized access impacted any of Anthropic's systems" beyond the third-party vendor's environment.
Is Mythos Actually That Dangerous?
This is where the story gets even more complicated. While Anthropic was marketing Mythos as something too powerful for public consumption, security researchers were starting to push back on those claims.
VulnCheck researcher Patrick Garrity reviewed the evidence and put the real count of significant vulnerabilities at around 40, far fewer than the "thousands" Anthropic claimed.
Mozilla CTO Bobby Holley said Mythos found 271 vulnerabilities in Firefox, but added that none of them appeared beyond what skilled human security researchers could also discover. He described Mythos as more of an "automated security researcher" than a fully autonomous zero-day discovery machine.
Another independent engineer named Devansh went deeper, analyzing Anthropic's published exploit code, system card, and red-team writeups. He concluded that while the bugs Mythos found are real, the broader narrative is "one of misinformation and hype."
Snehal Antani, CEO of penetration testing firm Horizon3.ai, was even more blunt. He told The Register that attackers don't need Mythos to hack anyone, since public models and open-source systems are already speeding up vulnerability research.
The Other Leak
As if one security incident wasn't enough, around the same time, Fortune reported that Anthropic had accidentally made nearly 3,000 internal files publicly accessible. This included roughly 500,000 lines of code from what appears to have been a Claude Code release packaging error. Anthropic acknowledged the leak but called it "human error, not a security breach," and said no customer data or credentials were exposed.
The Bottom Line
Anthropic told the world that Mythos was too dangerous to release publicly, then lost control of it on day one through a chain of third-party breaches it apparently didn't anticipate. The irony is hard to miss. A cybersecurity AI model, built to find weaknesses in other people's systems, was accessed because of weaknesses in Anthropic's own supply chain.
Whether Mythos is truly as powerful as Anthropic claims is still an open question, but one thing is already clear. You're only as secure as the weakest link in your chain, and in AI, those chains are getting longer every day.
AI PROMPT OF THE DAY
Category: Cybersecurity Risk Assessment
"You are a cybersecurity analyst. I run a [type of company] that uses the following third-party tools and services: [list tools]. Map out our potential supply-chain attack surface. For each tool, identify what data it has access to, what would be exposed if it were breached, and recommend one mitigation step we can take today. Present this as a simple risk table with columns for Tool, Data Access, Breach Impact, and Recommended Action."
ONE LAST THING
The Mythos story isn't really about one AI model or one breach. It's about what happens when an entire industry relies on layers of third-party tools, contractors, and vendors, and assumes someone else is handling security. Every company in the AI supply chain thought it was protected. None of them were. Hit reply, I read every response.
See you in the next one.
— Vivek
P.S. Know a developer or tech professional who should be reading this? Forward this their way. They can subscribe at https://savvymonk.beehiiv.com/


