Starving Machines and Fairy Dust
Decisions get made in conversation. Strategy emerges from dialogue.
What’s the hard thing about operating a tech business? What about one at scale?
Ask a founder and they’ll say fundraising. Ask an engineer and they’ll say technical debt. Ask a sales leader and they’ll say pipeline. Ask a PM and they’ll say prioritization. Ask an exec and they’ll say hiring. They’re all right. They’re all wrong. They’re all describing symptoms.
The hard thing is building the right thing and building it right. And the thing that determines whether you do? Context.
This has always been true. But two important things have changed: (1) agents make context dramatically more leveraged, because their outputs are directly shaped by it, and (2) we now have the technology to capture and synthesize context at scale.
Let me explain what I mean by context, why humans are structurally bad at managing it, and why the shadow cast by coding agents turns this chronic problem into an existential one.
A business, like any living thing, is constantly evolving within the environment it operates in. A small startup is trying to find Product-Market Fit. A larger business is trying to maintain it. But both are doing the same fundamental work: finding a pain point customers will pay to resolve and delivering a solution that actually resolves it.
Doing this well requires the continual collection of evidence: talking to customers, gauging the competitive landscape, experimenting with what works, honing intuition through observation.
All of this is context.
And if you’re a product manager, executive, or line-level employee, your job (whether your JD says so or not) is to supply, receive, and act on this context to make better decisions.
The problem is that context degrades every time it moves.
Think about how a feature actually gets built. A customer success manager hears a complaint on a call. Maybe they log it in a CRM, maybe they mention it in Slack, maybe they just remember it. A PM talks to five customers and pattern-matches something. They write a spec that captures maybe 60% of what they actually learned. Engineering interprets that spec through their own lens. By the time code ships, the context has been compressed, lossy-encoded, and re-interpreted three or four times. It’s a game of telephone where the prize is your roadmap.
This is a context fidelity problem. And it gets worse as you scale.
When you’re ten people, everyone’s in the same room. Context flows through osmosis. When you’re a thousand people, the context capture surface area explodes. Your customer base has grown and you can’t talk to all of them. The number of meetings has multiplied and no one can attend all of them. Information lives in hundreds or thousands of Slack channels, docs, and heads.
So what do companies actually do? They talk about the issues that are salient: the ones that happen to surface in the right meeting, get mentioned by the right person, or generate enough volume to become undeniable. They catch some percentage of the signal and use that to prioritize. It’s better than nothing. It’s also how you end up spending six months building a feature because a board member mentioned it in passing while 200 support tickets about a different problem rot in Zendesk because nobody synthesized them into a narrative compelling enough to influence the “decision-makers”.
We’ve tried to solve this before with tooling. Enterprise knowledge management systems have been around for decades. But most of them fail for the same reason: they require humans to do the work of extracting, organizing, and maintaining context. Confluence becomes a graveyard. Notion fragments across fifty workspaces. SharePoint is SharePoint.
The knowledge base becomes an artifact of what someone once knew, not a living representation of what the organization currently understands.
Humans are bad at this. We always have been. In the same way we’re bad at forecasting (see: the entire field of statistics), we’re bad at synthesizing large amounts of unstructured information into coherent, actionable understanding.
But now we’re no longer the only ones who need the context.
The sum total of hours that employees spend talking (to customers, to each other, to themselves in meetings) represents the vast majority of how work actually happens. Decisions get made in conversations. Strategy emerges from dialogue.
The raw material of “what to build” lives in conversations: in transcripts now, not documents.
And now we’re starting to talk to agents (a lot).
Software development agents are increasingly capable of doing real work because code’s correctness is verifiable. But their output quality is almost entirely determined by the context they’re given.
Ask an LLM to “write a PRD for a notifications feature” and you’ll get something generic and plausible-sounding.
Feed that same LLM transcripts from your last ten customer calls, your competitor’s recent changelog, and the Slack thread where engineering debated technical constraints, and you’ll get something that reflects your specific situation. The delta is the difference between a template and a strategy.
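Concretely, the mechanics are unglamorous: export the sources as text, concatenate them, and put them in the prompt. Here’s a minimal sketch in Python, assuming the OpenAI SDK; the file names, model choice, and prompt wording are all illustrative, not a prescribed setup:

```python
# Minimal context-assembly sketch. Assumes the OpenAI Python SDK
# (pip install openai) with OPENAI_API_KEY set, and that the source
# material has been exported as local text files. All paths and the
# prompt wording are illustrative.
from pathlib import Path
from openai import OpenAI

client = OpenAI()

def load_corpus(paths: list[str]) -> str:
    """Concatenate raw sources, labeling each by its filename."""
    return "\n\n".join(
        f"## Source: {p}\n{Path(p).read_text()}" for p in paths
    )

context = load_corpus([
    "transcripts/customer_call_01.txt",  # recent customer calls
    "transcripts/customer_call_02.txt",
    "competitor_changelog.md",           # competitor's recent changelog
    "slack/eng_constraints_thread.txt",  # engineering's constraints debate
])

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system",
         "content": "You are a product manager. Ground every claim in the sources provided."},
        {"role": "user",
         "content": f"{context}\n\nWrite a PRD for a notifications feature."},
    ],
)
print(response.choices[0].message.content)
```

Nothing clever is happening here. The entire difference between the generic PRD and the specific one is the contents of the `context` string.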
This is already playing out in code generation. The teams getting real leverage from tools like Cursor or Claude Code are the ones who’ve figured out how to feed their codebase, their architectural decisions, and their past PR reviews into the context window.
Everyone is using the same set of models and getting wildly different results.
LLMs generally don’t have the context problem humans have. They can process everything you give them until the context window is exhausted. The bottleneck has moved. It’s no longer “can we synthesize all this information?” It’s “are we capturing it correctly in the first place?”
The good news: we’ve already culturally adapted to the hardest part. Recording meetings is basically expected now. The ability to harvest the raw material exists.
But the vast majority of these recordings vanish into fairy dust. They sit in Gong or Zoom or Teams and are searchable in theory but useless in practice.
Few teams systematically extract the decisions, synthesize the patterns, or feed them forward into the systems that determine what gets built.
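For illustration, here’s what “systematically extracting” could look like at its simplest: run an extraction prompt over every exported transcript and accumulate the structured results in a file you can actually query. This is a hedged sketch assuming the OpenAI SDK and a local transcripts/ directory of plain-text exports; the JSON schema is an arbitrary illustrative choice, and a real pipeline would add deduplication and human review:

```python
# Minimal extraction-pipeline sketch. Assumes the OpenAI Python SDK and
# a transcripts/ directory of plain-text exports; the schema below is an
# illustrative choice, not a standard.
import json
from pathlib import Path
from openai import OpenAI

client = OpenAI()

EXTRACTION_PROMPT = """From this meeting transcript, return JSON with keys:
"decisions", "customer_pain_points", "open_questions" (each a list of
strings, quoting the speaker where possible). Return only JSON."""

def extract(transcript: str) -> dict:
    response = client.chat.completions.create(
        model="gpt-4o",
        response_format={"type": "json_object"},  # constrain output to JSON
        messages=[
            {"role": "system", "content": EXTRACTION_PROMPT},
            {"role": "user", "content": transcript},
        ],
    )
    return json.loads(response.choices[0].message.content)

# Accumulate one JSON record per recording into a queryable corpus.
records = [extract(p.read_text())
           for p in sorted(Path("transcripts").glob("*.txt"))]
Path("decisions.jsonl").write_text(
    "\n".join(json.dumps(r) for r in records)
)
```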
The companies that figure this out, that treat conversations with each other, with agents, and with their customers as a corpus to be mined rather than ephemera to be forgotten, will build the right thing and build it right more often than their competitors. They’ll feed their agents better context and get better outputs. They’ll spot patterns earlier. They’ll make fewer telephone-game errors.
In this new world where agents can increasingly do the building, the bottleneck shifts entirely to knowing what to build.
Context is the whole game and the answers are buried in your conversations.


