Hey there! 👋
Welcome back to SavvyMonk, your one-stop shop for AI and tech news that actually matters.
A paper dropped last month from researchers at UPenn and Boston University, and it's been making the rounds online for good reason. It uses formal economic modeling to show that AI-driven layoffs aren't just a workforce problem. They're a structural market failure, and rational companies can't stop it on their own even when they can see exactly where it leads.
Let's get into it.
100 Genius Side Hustle Ideas
Don't wait. Sign up for The Hustle to unlock our side hustle database. Unlike generic "start a blog" advice, we've curated 100 actual business ideas with real earning potential, startup costs, and time requirements. Join 1.5M professionals getting smarter about business daily and launch your next money-making venture.
TODAY'S DEEP DIVE
Companies Are Automating Their Own Customers Out of Existence
Published in March 2026, "The AI Layoff Trap" by Brett Hemenway Falk (University of Pennsylvania) and Gerry Tsoukalas (Boston University) is a formal economics paper, not a think piece or a warning shot.

It builds a competitive task-based model to examine what happens when AI displaces workers faster than the economy can reabsorb them. The finding is stark: even companies with perfect foresight, meaning they can clearly see the consequences of mass automation, cannot stop themselves from doing it anyway.
The paper landed on arXiv and quickly found its way outside academic circles. That's partly because it puts rigorous math behind something many people have sensed but couldn't fully articulate.
The Trap, Explained
Here's the core dynamic. Workers are also consumers. When a company replaces human workers with AI, it cuts costs and gains a short-term edge. But those fired workers now have less money to spend, and that spending reduction flows back to every business in the economy, including the one that did the firing.

At a small scale, this is manageable. The displaced demand gets absorbed elsewhere. But the paper shows that in a competitive market, every firm faces the same pressure at the same time. If your competitor automates and you don't, they undercut your prices and take your market share. So you automate too. And so does everyone else. By the time the whole industry has automated, no one has captured a lasting edge, and the collective demand destruction is enormous.
The researchers call this a demand externality. Each firm's automation decision imposes a cost on every other firm's revenue base, but no single firm bears that cost directly. So each one keeps automating, well beyond what's optimal for anyone, including the firm owners themselves.
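To make the trap concrete, here's a toy two-firm game in Python. The numbers are my own illustration, not from the paper's model, but they reproduce the prisoner's-dilemma structure the authors describe: automating is each firm's dominant strategy, yet both firms end up poorer than if neither had automated.

```python
# Toy two-firm automation game (illustrative numbers, not the paper's model).
# Each firm chooses to KEEP its workers or AUTOMATE.
# Automating saves labor cost, but every automating firm's layoffs shrink
# total consumer demand, which hurts BOTH firms' revenue.

BASE_REVENUE = 100.0  # revenue per firm when everyone is employed
LABOR_COST = 30.0     # cost a firm avoids by automating
DEMAND_HIT = 25.0     # revenue EACH firm loses per firm that automates

def profit(me_automates: bool, rival_automates: bool) -> float:
    n_auto = int(me_automates) + int(rival_automates)
    revenue = BASE_REVENUE - DEMAND_HIT * n_auto
    cost = 0.0 if me_automates else LABOR_COST
    return revenue - cost

both_keep = profit(False, False)      # 100 - 30      = 70
i_defect = profit(True, False)        # 100 - 25      = 75  (defecting pays)
i_hold_out = profit(False, True)      # 100 - 25 - 30 = 45  (holding out hurts)
both_automate = profit(True, True)    # 100 - 50      = 50

# Automating dominates (75 > 70 and 50 > 45), yet the equilibrium where
# both automate (50 each) is worse for everyone than mutual restraint (70).
```

The demand externality lives in `DEMAND_HIT`: each firm's automation decision subtracts revenue from the *other* firm's books, a cost neither firm accounts for when choosing.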
This Isn't Just Workers vs. Bosses
One of the paper's more striking conclusions is that this isn't a story where shareholders win and workers lose. Both sides end up worse off. Workers lose income. Companies lose customers. The paper describes it as a deadweight loss, meaning total economic value is destroyed, not just redistributed. In the model, the resulting harm accumulates across the whole economy rather than flowing to anyone as a gain.
The scale of what's already happening makes this more than theoretical. Over 100,000 tech workers were laid off in 2025, with AI cited as the primary driver in more than half of those cases, concentrated in customer support, operations, and middle management.
Salesforce replaced 4,000 customer-support agents with AI. Block cut nearly half its 10,000-person workforce in February 2026, with CEO Jack Dorsey saying AI had made those roles unnecessary. Research from Eloundou et al. estimates that roughly 80% of U.S. workers hold jobs with tasks susceptible to automation by large language models. None of this is hidden. Companies are doing it with eyes open.
The Red Queen Effect
The paper introduces what it calls a Red Queen effect, borrowing the phrase from evolutionary biology. Better AI doesn't relieve the pressure. It makes things worse. When AI productivity improves, every firm sees a bigger potential gain from automating faster than its rivals.
But at the equilibrium, those gains cancel out because everyone moves together. What's left is only more destroyed demand. Advancing AI technology, in this model, amplifies the distortion rather than resolving it.
This is a counterintuitive result. The standard assumption is that more productive technology should be net positive. The paper shows that in a competitive setting with demand externalities, that assumption breaks down.
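One way to see why better AI amplifies the distortion is a toy marginal calculus (again, my own illustrative numbers, not the paper's). As AI improves, more tasks become automatable, so the unilateral pull to automate keeps growing while the equilibrium outcome keeps deteriorating:

```python
# Toy Red Queen calculus (illustrative numbers only).
# Assume automating one task saves a firm 6 units of labor cost, but the
# laid-off worker's lost spending costs EVERY firm 5 units of revenue.

SAVING_PER_TASK = 6.0      # labor cost saved per automated task
DEMAND_HIT_PER_TASK = 5.0  # revenue each firm loses per task automated anywhere
N_FIRMS = 2

def unilateral_gain(tasks: int) -> float:
    """Profit gain from automating `tasks` while rivals stand still."""
    return (SAVING_PER_TASK - DEMAND_HIT_PER_TASK) * tasks          # +1 per task

def equilibrium_change(tasks: int) -> float:
    """Profit change when ALL firms automate the same `tasks`."""
    return (SAVING_PER_TASK - DEMAND_HIT_PER_TASK * N_FIRMS) * tasks  # -4 per task

# Better AI = more automatable tasks. The unilateral temptation grows
# (+1, +5, +10), so no firm can resist, while the shared equilibrium
# gets strictly worse (-4, -20, -40).
for t in (1, 5, 10):
    print(t, unilateral_gain(t), equilibrium_change(t))
```

Each marginal task is individually rational to automate (+1) and collectively destructive at equilibrium (-4), and scaling up the technology only scales up both numbers.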
What Can't Fix It
The researchers tested a range of proposed policy solutions against their model, and the results are sobering. Wage adjustments, the traditional self-correcting mechanism in labor markets, raise the threshold at which the problem kicks in but can't stop the arms race once it's underway.

Universal basic income preserves consumer spending to a degree but leaves the automation incentive intact, so firms still over-automate. Capital income taxes don't help. Worker equity participation doesn't help. Upskilling programs don't help. Coasian bargaining between firms doesn't help.
Each of these interventions addresses the aftermath of displacement without touching the competitive incentive that drives it.
The One Thing That Works
The paper argues that only a Pigouvian automation tax can correct the distortion. A Pigouvian tax is a standard economics tool for pricing negative externalities: a carbon tax on pollution is the classic example. In this case, it would be a per-task charge on automation, calibrated to the demand loss each firm's layoffs impose on the broader economy. By making firms pay for that externality, you realign their private incentive with the social cost. The arms race stops being rational.
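Here's a toy two-firm sketch (my own illustrative numbers, not the paper's calibration) of how such a tax flips the incentive. Setting the tax equal to the demand loss a firm's automation imposes on its rival makes keeping workers the dominant strategy:

```python
# Toy two-firm game with a Pigouvian automation tax (illustrative numbers).
# Each automating firm pays a charge equal to the demand loss its layoffs
# impose on OTHERS (here, the rival's 25-unit revenue hit).

BASE_REVENUE = 100.0
LABOR_COST = 30.0
DEMAND_HIT = 25.0       # revenue each firm loses per automating firm
PIGOU_TAX = DEMAND_HIT  # tax = externality imposed on the other firm

def profit(me_automates: bool, rival_automates: bool) -> float:
    n_auto = int(me_automates) + int(rival_automates)
    revenue = BASE_REVENUE - DEMAND_HIT * n_auto
    cost = PIGOU_TAX if me_automates else LABOR_COST
    return revenue - cost

# With the tax, keeping workers becomes the dominant strategy:
#   profit(False, False) = 70 > profit(True, False) = 50
#   profit(False, True)  = 45 > profit(True, True)  = 25
# Both firms keep their workers and land on the best joint outcome (70 each).
```

The design point: the tax doesn't ban automation, it just makes each firm internalize the revenue it drains from everyone else, so the privately optimal choice and the collectively optimal one coincide.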
The paper also notes that revenue from the tax could fund retraining programs, which would gradually raise the income replacement rate for displaced workers, shrinking the externality over time and making the required tax rate smaller. The two mechanisms reinforce each other.
No major economy has implemented anything close to this yet. The paper doesn't suggest the politics are simple. But it does argue that policy needs to target the incentive structure, not just the humanitarian fallout.
The Bottom Line
"The AI Layoff Trap" is not a prediction about what might happen if things go wrong. It's a model of what's already structurally underway. Companies aren't automating recklessly or without information.
They're doing it rationally, in a system where the rational choice for each firm produces a collectively irrational outcome. That's the definition of a market failure. The paper's argument is that you can't fix a market failure by appealing to individual restraint. You fix it by changing the incentives.
AI PROMPT OF THE DAY
Category: Economic Policy Research
"I'm trying to understand the economic concept of a Pigouvian tax as applied to AI automation and labor displacement. Explain what a Pigouvian tax is, how it's typically used for negative externalities like pollution, and how researchers are proposing to adapt it for AI-driven layoffs. Then walk me through the strongest arguments for and against implementing such a policy, and flag any open questions that economists haven't resolved yet."
ONE LAST THING
The most unsettling part of this paper isn't the conclusion. It's the premise. These are rational, well-informed companies. They can see the demand destruction coming. And the math shows they'll do it anyway, because the alternative is being outcompeted while everyone else does it first. We tend to assume that if people understand a problem clearly enough, they'll act differently. This paper is a formal proof that sometimes, understanding the problem perfectly isn't enough.
Hit reply, I read every response.
See you in the next one.
— Vivek
P.S. If you know someone who works in policy, economics, or runs a business thinking about AI adoption, this one's worth passing on. They can subscribe at https://savvymonk.beehiiv.com/


