The Anti-Innovator's Dilemma
Why stability looks like surrender at the frontier of AI
Clay Christensen wrote The Innovator’s Dilemma to explain why successful companies miss the next wave. Incumbents become too well adapted to the present. They serve their best customers, optimize the dominant product, protect their margins, and listen to the people already paying them, and over time those instincts compound until the firm has been rationalized into irrelevance.
Disruption, in his model, begins as something that looks worse on every dimension the current business cares about: worse performance, worse customers, worse margins, worse status. The incumbent ignores it because, viewed from inside the existing business, the new thing does not yet look like the future. By the time it does, the firm is too entangled with its own success to respond.
That framework still describes most of business.
In AI, we are watching its inversion. The leading firms show no reluctance to cannibalize themselves; in fact, they behave as if cannibalizing themselves is the only sane strategy.
They replace their own models on schedules that outpace adoption, cut prices before any competitor forces the cut (although the cabal often knows what's coming), and treat their previous wins as positions worth abandoning early, on the assumption that if they do not abandon them voluntarily, a rival will force the abandonment on worse terms.
The fear driving these firms has shifted from moving too early and damaging the core business to moving too slowly and letting someone else destroy it first.
This inversion is what I want to call the Anti-Innovator’s Dilemma.
“As a blazing fire takes whatever you throw on it, and makes it light and flame.” — X. 31
Look at the receipts
The pattern shows up in the ordinary product moves of the past year.
In November 2025, Anthropic cut the price of Claude Opus 4.5 by 67%, taking input from $15 to $5 per million tokens overnight, even though the model was still the company’s flagship and no competitive event forced the cut¹.
That same week, Google launched Gemini 3 Pro at $2 per million input tokens and $12 per million output tokens, pricing aggressive enough to undercut almost everyone at the frontier and pull the entire pricing layer downward.
Five months later, in April 2026, Anthropic replaced Opus 4.6 with Opus 4.7 at the same token rates, which amounts to giving away capability for free².
OpenAI shipped GPT-5.5 the following week, demoting the previous flagship a tier and leaving free users on GPT-5.3³.
And in early 2026, when a billion-dollar Disney deal for Sora collapsed, OpenAI did not try to nurse the Sora product along to preserve the appearance of a flagship video model. They shut it down and moved on⁴.
None of these moves are isolated. And each teaches the same lesson: products get released, repriced, and retired on rolling cycles, and the cycle itself has become the strategy.
Incumbents acting like the opposite of incumbents
By every conventional metric, these labs are already incumbents.
They have scale, capital, distribution, brand, technical infrastructure, enterprise access, developer ecosystems, and enormous public attention. Firms in this position, in any other era, would have started defending what they had.
They would have stabilized their releases, segmented their pricing, slowed their roadmaps, generated internal reasons why the next thing should wait, and insisted the existing stack be monetized before anything threatening to it shipped.
The labs are doing the opposite.
They keep releasing capabilities that erode the scarcity of what they shipped last quarter, push down the value of their own prior models, and turn what looked like a moat into table stakes before anyone else has time to do it for them.
This is not how a normal incumbent behaves. It is the behavior of firms that believe they are competing in a race to AGI⁵ rather than in a normal product market.
Why they behave this way
The structure of the competition explains it. The labs are competing in a capability gradient, where the position of every product is redefined by whatever was shipped last. In that competition, hesitation is the cardinal sin. If the model you shipped six months ago can be made to look dated by a rival’s next release, protecting the installed base around it becomes a liability. The more you optimize around preserving the current layer of your product, the more exposed you are to the next jump in capability that you did not lead.
This is why every stable product gets treated as something to be dismantled rather than defended. In most markets, incumbents become conservative because stability is profitable and there is no reason to disturb customers who are already paying you. In frontier AI, stability looks like surrender, because the moment your product feels stable is the moment a rival has an opening to make it irrelevant.
What makes this different from normal tech cycles
The obvious objection is that companies have always cannibalized themselves, and the objection is fair. Historically, though, they did it reluctantly, selectively, and with internal antibodies pushing back. What feels different here is the frequency and the intensity. The labs have made the destruction of yesterday’s advantage into a regular operating rhythm rather than an occasional move.
A harder version of the same objection deserves a more careful answer. If incumbents have always cannibalized themselves to some degree, and the only thing different in AI is the pace, then what we are watching is not an inversion of Christensen but an acceleration of him: the same dynamic of displacement, running on a faster clock.
Under that reading, every six months a new “incumbent” rises and falls, and the apparent strangeness of the labs’ behavior is just the strobe effect of a sped-up version of normal industry dynamics. This is the strongest critique of the thesis, and the reason it is wrong is structural rather than temporal.
In Christensen’s model, disruption happens between firms. The incumbent ships the dominant product, a new entrant ships something cheaper and worse, the incumbent ignores it, and the entrant climbs the capability curve until it displaces the incumbent. Disruption is a transfer of position from one firm to another, and acceleration of that model would just mean those transfers happen faster: DeepSeek displacing OpenAI in eighteen months instead of ten years. What is actually happening in frontier AI does not fit that shape.
The disruption is happening inside the firms.
Anthropic’s Opus 4.5 was disrupted by Anthropic’s Opus 4.6, which was disrupted by Anthropic’s Opus 4.7.
Each release made the previous one less valuable, and each was shipped by the same company that had spent the previous quarter convincing customers to care about the predecessor. Christensen specifically predicted that incumbents would fail to execute one move: the deliberate, successful cannibalization of their own profitable products before any external disruptor forces it. That move is precisely what these labs are organized around doing routinely. They are demonstrating the move his theory said could not be made.
This becomes easier to make sense of once you reframe what these firms are selling.
The business is the firm’s position on the capability gradient. The model itself is a temporary expression of where they sit on it⁶.
Different psychology, different shape of company
The classic incumbent reasons that it cannot damage its core business to chase a future that still looks inferior to what already works. The Anti-Innovator incumbent reasons in the opposite direction. It has to damage the core (i.e. its last flagship model), because waiting until the future stops looking inferior is fatal: by the time the opportunity is obvious, a rival can claim it.
These are different theories of survival, and they produce different shapes of companies.
The pressure these firms respond to is also unusual. Most firms are disciplined by their existing customers, who reward stability and punish change. The labs respond to rival labs, to the competition for technical talent, to where developers are spending their attention, to cultural prestige, to who is leading the public narrative, and to the fear of falling one visible step behind.
That set of pressures produces an organization that looks like a permanent insurgent with a giant balance sheet.
Why this matters beyond AI
If this pattern holds, much of the business theory built around stable incumbency starts to wobble. For decades, the central question was why successful companies fail to move early enough.
Now, in at least one crucial part of the economy, the question becomes what happens when dominant firms are structurally incentivized to move too early, too often, and against themselves.
That shift changes how products are priced, how customers form expectations, what defensibility means, and what openings remain for startups.
The implication for startups is worth dwelling on. If the leaders are aggressively collapsing their own product categories, the traditional insurgent fantasy gets weaker. The old story that incumbents are slow, compromised, and unwilling to attack the future because it would hurt today’s revenue still works sometimes, but in AI-adjacent markets the startup is increasingly facing incumbents who are openly willing to burn their own furniture for warmth.
The comfortable assumption that “they will not go there because it threatens their business” no longer holds, because they might go there precisely because it threatens their business.
A more useful thesis runs like this. The labs will commoditize the capability layer, and value will migrate to other parts of the stack. The frontier will remain unstable, and users will pay for trust and workflow stability rather than raw capability. General capabilities will keep falling in price, which opens space in specialized distribution, vertical integration, and accountability.
And the product is whatever survives model churn, not the model.
The next generation of important companies will come from understanding how the labs move and building where that motion creates value rather than destroys it.
This may be a temporary phase
None of this is necessarily permanent. The Anti-Innovator’s Dilemma might be a phase rather than a law. Right now, the frontier is still unstable enough that leading firms have to behave like self-disrupting predators to stay leading. But if the market thickens, if standards settle, enterprise relationships harden, regulation rises, switching costs grow, and distribution consolidates, then these same firms will resemble normal incumbents again, and the old Innovator’s Dilemma might return alongside them.
We are living through an incredible transitional period, not a new equilibrium. For now, the most dominant firms in tech are doing the opposite of what dominance usually teaches firms to do.
They are destroying their own advantages on purpose, training their users not to trust stability, and treating yesterday’s flagship as next week’s footnote. It will not last forever.
But for as long as it lasts, it rewrites the assumptions most of the industry has been operating on.
¹ PYMNTS, “Google and Anthropic Drop AI Prices and Release New Models,” November 26, 2025. “Anthropic cut the price of Claude Opus 4.5 by 67%, reducing the cost of the text the model processes from $15 to $5 per million tokens.” https://www.pymnts.com/artificial-intelligence-2/2025/google-and-anthropic-drop-ai-prices-and-release-new-models/
² Finout, “OpenAI vs Anthropic API Pricing Comparison (2026): Which LLM Is Actually Cheaper?” May 2026. “Claude Opus 4.7 is the current flagship, having replaced Opus 4.6 on April 16, 2026 at the same token rates.” https://www.finout.io/blog/openai-vs-anthropic-api-pricing-comparison
³ Fritz AI, “ChatGPT Pricing in 2026: Every Plan, Tier, and Hidden Cost Explained,” April 2026. “GPT-5.5 launched April 23, 2026 across Plus, Pro, Business, and Enterprise in both ChatGPT and Codex. Free stays on GPT-5.3 Instant.” https://fritz.ai/chatgpt-pricing/
⁴ Variety, “OpenAI Will Shut Down Sora Video App; Disney Drops Plans for $1 Billion Investment,” March 24, 2026. “Disney has now ended its partnership with OpenAI, which included plans for the media conglomerate to take a $1 billion stake in the artificial-intelligence company led by CEO Sam Altman.” https://variety.com/2026/digital/news/openai-shutting-down-sora-video-disney-1236698277/
⁵ For the most influential articulation of the race-to-AGI framing that animates frontier lab behavior, see Leopold Aschenbrenner, “Situational Awareness: The Decade Ahead,” June 2024. “The AGI race has begun. We are building machines that can think and reason. By 2025/26, these machines will outpace college graduates. By the end of the decade, they will be smarter than you or I; we will have superintelligence, in the true sense of the word.” The essay, written by a former OpenAI Superalignment researcher, became required reading inside the labs and remains the clearest statement of the worldview driving how these firms treat product cycles. https://situational-awareness.ai/
⁶ Sam Altman, OpenAI CEO, on the Big Technology Podcast with Alex Kantrowitz, December 18, 2025. Pushing back on the idea that frontier models will simply commoditize, Altman argued that “models will have different strengths and the most economic value will be created by models at the frontier.” The claim, that economic value clusters at the frontier rather than in any specific shipped model, is the explicit business logic behind the self-cannibalizing product behavior described here. https://pod.wave.co/podcast/big-technology-podcast/sam-altman-how-openai-wins-ai-buildout-logic-ipo-in-2026


