
Hey there! 👋

Welcome back to SavvyMonk, your daily dose of AI and tech news that actually matters.

Today’s story is about what happens when the cold war between AI labs stops being cold. A leaked internal memo from Anthropic CEO Dario Amodei reads less like corporate strategy and more like years of bottled-up frustration finally finding a keyboard.

Let’s get into it.

The Tech newsletter for Engineers who want to stay ahead

Tech moves fast, but you're still playing catch-up?

That's exactly why 200K+ engineers working at Google, Meta, and Apple read The Code twice a week.

Here's what you get:

  • Curated tech news that shapes your career - Filtered from thousands of sources so you know what's coming 6 months early.

  • Practical resources you can use immediately - Real tutorials and tools that solve actual engineering problems.

  • Research papers and insights decoded - We break down complex tech so you understand what matters.

All delivered twice a week in just 2 short emails.

TODAY'S DEEP DIVE

Dario Is Not Holding Back His Displeasure With OpenAI

For years, the rivalry between OpenAI and Anthropic has been framed as philosophical. OpenAI was the fast-moving, commercially aggressive lab willing to partner widely and make peace with political power. Anthropic was the careful one, focused on safety, principled about where advanced AI should and shouldn't go.

That framing no longer holds.

On Friday, February 28, The Information published a 1,600-word internal memo that Amodei had sent to Anthropic employees over Slack. In it, he called OpenAI's Pentagon deal "maybe 20% real and 80% safety theatre." He described Sam Altman's public messaging as straight-up lies. He accused Altman of gaslighting. He said OpenAI was trying to spin the situation as though Anthropic had been unreasonable and inflexible.

This was not polished PR language. It was personal.

And it was only the beginning.

The Pentagon Deal

Here's the timeline.

Anthropic had a $200 million contract with the Department of Defense. The company's AI model, Claude, was the first to be deployed on classified government networks. Renegotiation talks had been going on for weeks. According to Amodei, the Pentagon agreed to nearly all of Anthropic's terms except one. The military wanted Anthropic to delete a contractual prohibition on "analysis of bulk acquired data." That phrase was Anthropic's safeguard against mass domestic surveillance.

Anthropic refused.

On February 27, the Pentagon labeled Anthropic a "supply chain risk," a designation normally reserved for companies from adversarial nations like China and Russia. President Trump posted on Truth Social directing all federal agencies to "immediately cease" using Anthropic's technology. Hours later, OpenAI announced its own Pentagon deal allowing military use of its models for "all lawful purposes."

If you're Anthropic, that sequence doesn't look accidental. It looks coordinated.

The Memo Gets Personal

The most striking part of Amodei's memo isn't the Pentagon criticism. It's how he talks about Altman.

He pointed to Greg Brockman's $25 million donation to MAGA Inc., Trump's super PAC, while contrasting it with Anthropic's refusal to offer what he called "dictator-style praise." He said the real reason the Trump administration targeted Anthropic was political: the company hadn't donated, hadn't flattered, and had supported AI regulation.

He also called OpenAI employees "gullible" and referred to some of Altman's public supporters as "Twitter morons."

That is not something a CEO typically puts in a company-wide Slack message. But that's what happened.

Then Amodei Walked It Back

By Thursday, March 6, the tone shifted. Amodei published a blog post apologizing for the memo. He said it was written within hours of a chaotic day and didn't reflect his "careful or considered views."

He also confirmed that Anthropic had officially received the supply chain risk designation from the Pentagon. But he said its scope was narrower than Secretary of War Pete Hegseth had claimed. The statute, he argued, only restricts the use of Claude within specific Defense Department contracts, not across all commercial relationships.

In the same statement, he tried to cool things down. He wrote that Anthropic and the Department of Defense "have much more in common than we have differences." He pledged to keep supplying Claude to the military at nominal cost to avoid leaving warfighters without tools during active operations.

But the Pentagon didn't seem interested in reconciliation. Undersecretary of War Emil Michael posted on X: "I want to end all speculation: there is no active negotiation with Anthropic."

And Then Anthropic Sued

On Monday, March 9, Anthropic filed two federal lawsuits challenging the supply chain risk designation.

The first was filed in U.S. District Court in Northern California. It calls the Pentagon's actions unprecedented and unlawful and argues the designation was retaliation for Anthropic's public stance on AI safety. The complaint invokes the First Amendment, claiming the government is punishing the company for its protected speech about the limitations and risks of its own technology.

The second was filed in the D.C. Circuit Court of Appeals, challenging the statutory authority behind the designation.

Anthropic's complaint states that the government's actions could jeopardize hundreds of millions of dollars in revenue, and it accuses the government of seeking to destroy the economic value created by one of the world's fastest-growing private companies.

The company is asking courts to vacate the designation, block its enforcement, and require federal agencies to withdraw their directives to drop Anthropic.

Meanwhile, dozens of researchers from OpenAI and Google DeepMind filed an amicus brief in their personal capacities supporting Anthropic, arguing that the designation could harm U.S. competitiveness in AI and chill public discussion about safety.

This Is No Longer a Business Rivalry

It's worth stepping back and noticing what's been building.

In February, Anthropic ran a four-ad Super Bowl campaign called "A Time and a Place" that took direct aim at OpenAI's decision to introduce ads into ChatGPT. Each spot opened with a single word on screen: "betrayal," "deception," "treachery," "violation." Altman responded with a long post on X calling the ads "clearly dishonest."

Then came the India AI Impact Summit on February 19. During a group photo, Prime Minister Modi asked the tech leaders on stage to raise clasped hands. Altman and Amodei, standing side by side, refused to hold hands. They raised fists instead. The clip went viral.

The CEOs of OpenAI and Anthropic did not link hands when other leaders did at the India AI Summit

And then the Pentagon saga.

The pattern is clear. The rivalry between these two companies is no longer abstract or quietly competitive. It's playing out in leaked memos, courtrooms, Super Bowl ads, and viral summit footage. These are two men who have known each other since Amodei served as VP of Research at OpenAI before leaving to co-found Anthropic in early 2021. The trust between them appears to be long gone.

The Contradiction That Tells You Everything

Here's the part that makes this more complicated than a simple hero-villain story.

Anthropic is not anti-defense. It had a $200 million Pentagon contract. It deployed Claude on classified networks. Even after the blow-up, Amodei offered to keep supplying the military at cost. And Claude has reportedly been used in active U.S. military operations, including intelligence assessments in the conflict with Iran.

What Anthropic opposed was two specific things: mass domestic surveillance and fully autonomous weapons without human oversight. Everything else was on the table.

So this is not a values story about whether AI should be used in defense. It's a positioning story about who gets to set the terms. And it's a political story about what happens when a company refuses to play ball with the current administration.

Why This Matters Beyond Silicon Valley

These labs are no longer just building chatbots. They're building the tools that governments, militaries, intelligence agencies, and major corporations rely on. If the relationships between those companies and the state are being negotiated through personal feuds, leaked memos, political donations, and retaliatory designations, that should concern everyone.

The supply chain risk designation sets a precedent. If the government can blacklist a domestic AI company for holding safety red lines, other companies will think twice before drawing their own.

The Anthropic lawsuit will test whether that kind of retaliation is legal.

And the broader signal is this: the frontier AI market is becoming structurally similar to defense contracting and political lobbying. A small number of players. Enormous stakes. And leaders who see each other not as competitors, but as threats.

The Bottom Line

Two weeks ago, this was a contract negotiation. Now it's a federal lawsuit, a First Amendment case, and the most public rift the AI industry has ever seen.

Dario Amodei's memo confirmed what the industry had been hinting at for months. His apology didn't undo it. And Anthropic's lawsuit raises the stakes even further.

The question is no longer whether these labs are rivals. It's whether the government, the courts, or the market will decide how this plays out.

AI PROMPT OF THE DAY

Category: Video Generation

“Act as a trailer editor and AI video prompt writer. I want to create a 20-second cinematic teaser about a high-stakes Silicon Valley rivalry. Write: 1) a dramatic voiceover script, 2) a 6-shot storyboard with camera angles and lighting, 3) three text-to-video prompts optimized for Runway, Sora, and Gemini Veo, and 4) on-screen text that feels tense and modern, not cheesy.”

ONE LAST THING

At what point does a rivalry between two AI labs stop being normal competition and start becoming a national-security problem, especially when both are fighting to become the government’s default AI partner? Hit reply, I read every response.

See you tomorrow.

— Vivek

P.S. Know someone following AI, policy, or Silicon Valley power struggles? Forward this. They can subscribe at https://savvymonk.beehiiv.com/
