The Game of Playing the Game
Lessons from Competitive Magic: The Gathering for Software Teams
There’s a phrase Magic: The Gathering players use that sounds trivial until you’ve lived inside it for a while.
The meta.
What it refers to isn’t the game itself (the cards, the turns, the rules) but the game that forms around the game. The layer where decisions are made in anticipation of what you might face at any given time. Specifically, if you’re playing a constructed format, one way the meta shapes your deck list might play out like this:
You don’t put a card in your deck because it’s powerful on its own.
You include it because you expect to face a certain kind of threat.
You expect that threat because it has quietly become common.
It became common because it performed well last month in a high-ranking tournament.
That tournament unfolded the way it did because of a recent rule change.
That rule mattered because of a card printed months earlier that has since risen in power level.
Nothing about this is written down in one place. And yet everyone who’s paying attention feels it.
That’s the meta.
When I think about software development now, I realize we are always playing a meta game, though we rarely name it as such. We talk as if we’re writing code against requirements: clean(ish) specifications handed down from product managers, user stories with acceptance criteria, tickets with clear definitions of done.
But in reality, we’re writing code against expectations: what we think the product will become, what we think users will tolerate, what we think the team will remember six months from now, what we think future developers will understand when they stumble into that corner of the codebase.
Every architecture is a deck list built in anticipation of an unseen opponent, and that opponent is always time.
In Magic, new players often lose the same way. They build the strongest deck they can imagine in isolation: big creatures with impressive stats, elegant combos that chain together beautifully but depend on very specific conditions, synergies that feel clever and satisfying when they work. Then they sit down across from someone who built something uglier but tuned for the field, and they’re dead by turn three. Nothing was “wrong” with the deck in any technical sense. The cards were legal, the mana curve was reasonable, the strategy was coherent.
It just wasn’t positioned for the world it actually had to survive in. As Marcus Aurelius puts it:
“The world is nothing but change. Our life is only perception.” — Meditations, IV.3
Software teams do this constantly. They build systems optimized for an imagined world that no longer exists, or worse, never existed at all. A clean abstraction emerges for a problem nobody has anymore. A scalable architecture gets deployed for a scale that market conditions will never allow. A flexible system gets carefully constructed to flex in all the wrong directions, accommodating changes that never come while buckling under the changes that do. The system loses anyway.
The truth about the meta is that it’s collective. No single player controls it, no single meeting defines it, and no single document captures it. It emerges from shared belief, from the accumulated weight of thousands of individual decisions that begin to rhyme with each other.
In Magic, when people start running more removal spells, creatures get smaller and faster because the big expensive ones die before they matter. When people optimize for speed, resilience matters less than timing, and the whole texture of gameplay shifts toward aggression. When one assumption spreads through the competitive community, everything downstream adapts to accommodate it, often in ways that weren’t consciously chosen by anyone.
Software works the same way. Teams develop shared mental models about what matters and what doesn’t, what’s “safe” to change and what’s implicitly taboo, which parts of the system are load-bearing and which are vestigial.
These models aren’t documented anywhere; they’re absorbed through osmosis, transmitted in code review comments and sighs during planning meetings, encoded in which bugs get fixed quickly and which ones linger in the backlog for months. Over time, those models drift in ways that nobody tracks. People join the team late and inherit assumptions without their original context. People leave and take explanations with them that were never written down. Decisions outlive their justifications by years. What was once obvious becomes folklore, repeated without understanding. What was once intentional becomes accidental, preserved only by the fear of changing something that seems to work.
The meta shifts, sometimes gradually and sometimes all at once. The system doesn’t shift with it, because systems have no way of knowing that the world around them has changed.
In Magic, the best players don’t just ask whether a card is good in the abstract: they ask what that card says about the world they’re playing in, what its presence reveals about the threats they’ll face and the opportunities they’ll have.
Good software engineers eventually learn the same lesson, though it often takes years of confusion first. The hardest part of the work isn’t writing the code. The hardest part is reading the room across time and understanding which assumptions are shared and which are private, which are current and which are outdated, which are stable foundations and which are about to become liabilities the moment the market shifts or the team changes or a dependency gets deprecated.
Most technical debt isn’t technical at all. The code usually works. The tests usually pass. The problem is that the system was built for a meta that no longer exists, and nobody noticed when the transition happened. It’s meta game debt: the accumulated cost of playing last season’s strategy in this season’s tournament.
There’s something comforting about this framing, actually.
It explains why projects feel aligned and coherent right up until the moment they suddenly don’t.
It explains why the same team that shipped brilliantly last year now struggles to make progress on seemingly simple features.
It explains why rewrites happen without clarity improving, because the new system was built with the invisible (now incorrect) assumptions of the old one.
It explains why speed increases while understanding decays, why teams ship faster and faster while becoming less and less certain about what they’ve built.
It’s not that people are careless or that developers have gotten worse. It’s that the game changed quietly, in ways that were hard to perceive from inside the system, while the deck stayed exactly the same.
Magic has format rotations: formal moments when old cards cycle out and the meta is forced to reconstitute itself around new constraints. Software doesn’t have anything quite that formal.
We keep playing with old cards under new rules against opponents we never imagined when we first shuffled up. And then we’re surprised when the solution that worked perfectly before no longer does the job.
Maybe the work, then, isn’t just better tools or better processes or better documentation, though all of those help.
Maybe it’s learning to notice the meta itself, to develop the peripheral vision that competitive card players cultivate through thousands of hours of practice.
To ask not just “What are we building?” but “What game do we think we’re playing? What assumptions are we making about the world this system will inhabit? What threats are we preparing for, and which ones are we ignoring?” And to ask these questions often enough that the answers don’t drift without us noticing, leaving us optimized for a world that quietly ceased to exist.
Here is a great overview of the metagame from one of MTG’s all-time greats, Reid Duke: https://magic.wizards.com/en/news/feature/metagame-2015-06-01


