The only defensible moat in AI: your enterprise's memory
Sequoia's Pat Grady opened AI Ascent 2026 with a line that should make every enterprise founder uncomfortable:
"The things that you build might be irrelevant tomorrow."
Pat Grady, Sequoia partner
He wasn't being dramatic. He was stating the structural reality of building during a computing revolution, where foundation models move faster than any product built on top of them. In a world like that, the question every enterprise asks, "What's our moat?", needs a fundamentally different answer than it did five years ago.
The old moat playbook is underwater
For decades, enterprise moats followed a familiar formula: proprietary technology, switching costs, network effects, and scale. Build something hard to replicate. Lock customers in. Grow until competitors can't catch up.
That playbook assumed the ground stayed still.
It isn't staying still. The models improve every quarter. What took a team of 20 engineers last year takes a prompt today. Andrej Karpathy described building an entire app (menu photo recognition, image generation, rendering) only to realize a single prompt to Gemini made the whole thing "spurious." His word: "That app shouldn't exist." Meanwhile, Jaya Gupta's piece on context graphs and Foundation Capital's the great reorg, that paint a clear picture of why the old formula is breaking from the venture side, too.
When a co-founder of OpenAI says his own software was made obsolete by a prompt, the message is clear: product innovation alone is no longer a moat. Not when the model layer can replicate the feature in the next training run.
Pontus from Midday captured the same anxiety from a builder's perspective: watching a product you've spent months on become a feature of someone else's AI overnight. It's not hypothetical. It's Tuesday.
First principles: what can't the model replicate?
If product features erode and technical advantages get absorbed by the next foundation model release, what survives? Strip the question to first principles and three things emerge:
1. Customer context that compounds over time.
The context graph thesis is compelling: the next trillion-dollar platforms won't be built on better models. They'll be built on persistent records of decision traces, the why behind every action, stitched across entities and time. Systems of record store what happened. Context graphs store why it happened, who decided, what the precedent was, and what came of it.
This is a moat because it's not a feature. It's an asset that grows with every decision a customer's organization makes. The longer you run, the deeper the graph, the harder it is for anyone to replicate, because they'd need to replay years of organizational decisions to match what you've already captured.
2. Team intelligence that survives headcount changes.
The scale of transformation already underway is staggering: engineering teams going from 120 people to 25. Expert-to-generalist ratios shifting from 1:6 to 1:100. Three roles collapsing into two. And a haunting question nobody has answered yet: if agents handle all junior work, how does the class of 2035 build expertise?
The enterprises that survive this reorg aren't the ones with the best AI features. They're the ones whose organizational knowledge persists independent of headcount. When your senior engineer leaves, does her context walk out the door, or does it stay in the graph?
3. Being in the execution path, not the read path.
Investors and builders are converging on the same conclusion: moats come from wrapping yourself around the customer, not from technology alone. The companies that capture decision traces at commit time, in the execution loop where decisions actually happen, have a structural advantage over those that analyze decisions after the fact.
Data warehouses sit in the read path. CRMs store current state. Neither captures the reasoning. The moat belongs to whoever is in the loop when the decision is made.
The compound moat
These three principles converge into something we'd call a compound moat: defensibility that grows not from any single feature, but from the accumulation of organizational context over time.
Here's what it looks like in practice:
A support agent resolves a tricky ticket. The resolution, including the cross-system reasoning that informed it, becomes a searchable precedent. A sales rep structures an unusual deal. The exception logic, the approval chain, the precedent from Q2, all captured at decision time. An engineer fixes a subtle bug. The root cause links to the symptoms, so the next person who sees those symptoms finds the fix in one query.
Each of these moments is small. But each one adds a node to the graph. And after two years of this, across thousands of people, hundreds of systems, millions of decisions, the result is an intelligence layer that no competitor can fast-follow their way into. You can't buy it. You can't train a model on it. You can only build it by being in the execution path, day after day, decision after decision.
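The bug-fix scenario above can be sketched as a toy precedent index. The class and method names here are illustrative, not any real product's API; the point is the shape of the capture: fixes recorded at decision time become searchable by the symptoms they resolved.

```python
from collections import defaultdict


class PrecedentIndex:
    """Toy in-memory sketch: each resolved fix becomes a precedent,
    keyed by the symptoms that led to it."""

    def __init__(self):
        self._by_symptom = defaultdict(list)

    def record_fix(self, symptoms, root_cause, fix):
        # Captured in the execution path: link every observed symptom
        # to the root cause and the fix that resolved it.
        for symptom in symptoms:
            self._by_symptom[symptom].append((root_cause, fix))

    def lookup(self, symptom):
        # The next person who sees this symptom finds the fix in one query.
        return list(self._by_symptom.get(symptom, []))
```

A real system would persist this with permissions intact and stitch it across entities, but even the toy version shows why the asset compounds: every recorded fix makes every future lookup richer.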
The programming paradigm has shifted from code to prompts to skills. But the enduring value isn't in the skill itself. It's in the memory the skill acts on. The context window closes after every prompt. The knowledge graph compounds forever.
The ground will keep shifting. Build on what stays.
The things you build might be irrelevant tomorrow. Foundation models will keep getting better. Features will keep getting absorbed. The productivity gains from AI will keep compressing timelines and headcounts.
But here's what won't change: enterprises will keep making decisions. Those decisions will keep generating context. And the organizations that capture that context, systematically, in real time, with permissions intact, will have an asset that compounds while everything else depreciates.
The old moat was a wall you built around your product. The new moat is a graph you build around your customer's decisions.
Every day it gets deeper. And every day, the gap gets harder to close.
Sources: Sequoia Capital AI Ascent 2026 keynote (Pat Grady, Sonya Huang); Andrej Karpathy, "From vibe coding to agentic engineering" (Sequoia AI Ascent 2026); Jaya Gupta, "Context graphs: AI's trillion-dollar opportunity" and "The great reorg: A human's guide" (Foundation Capital, May 2026); Pontus (@pontusab).