The Cost of a Bad Question
Your brain needed the hard part
I keyboard-smashed “why isn’t this working” into Claude Code this afternoon. Then pasted the body of a Sentry issue as markdown below it. No context other than that. No theory. No sign I’d spent thirty seconds thinking about it first.
Claude dispatched subagents and consumed 100K tokens in read operations. It then gave me a diagnosis that led to a fast solution.
As I reflect on it, that’s the worst thing that could have happened.
Ten years ago, that question would have cost me. Google handed you ten pages of garbage for being that vague. You had to go back, think harder, try again. “My site is slow” got you nothing. “Nginx high TTFB on static assets behind reverse proxy” got you the fix in three minutes. The search box trained you to think. Not because thinking was noble, but because Google was useless if you didn’t.
Stack Overflow was worse. Post a lazy question and strangers shut it down in minutes. Duplicate. Unclear. Show us you tried. The site forced you to earn answers by proving you’d already done some thinking and searching.
The question was the work.
LLMs skip all of that. They’re infinitely patient. Dump a mess of confusion at them and they’ll sort through it, guess what you meant, and hand you something useful (or at minimum glaze you into thinking it’s a good answer). They will rarely make you sharpen your thinking. They barely punish you for being lazy.
So you get lazier. And you don’t notice.
No random actions, none not based on underlying principles. — IV. 2
Every question has invisible parameters. Scope: the whole system or one component? Timeframe: happening now or used to work? Hypothesis: do you have one, or are you asking someone else to form one? What have you ruled out?
These aren’t decorative; they’re the substance. “I’m lost” and “I’m somewhere between 5th and 7th, facing east, near a red awning” are both requests for help. One respects the person you’re asking. The other makes them do all the work.
A good question is proof you already did some thinking. It shows what you know, frames what you don’t, and draws a line around the specific gap you need filled. A bad question dumps the entire cognitive load on whoever hears it.
Good questions don’t appear out of nowhere. There’s a process, and most people are tempted to skip it.
First you notice. Something looks off. The data is slightly wrong. The system acts differently on Tuesdays. Most people walk past these things all day.
Then you frame it. Systems problem or people problem? One-time event or pattern? This is where most people bail and open the chat window. Framing is uncomfortable because it forces you to commit to a perspective before you’re sure, to be wrong in a specific way instead of vague in a safe way.
Then you scope it. “Why is retention dropping?” is unanswerable. “Why are users who finish onboarding but never return within 48 hours churning at twice the rate?” — that has a fence around it. You can answer it. You can act on the answer.
Then you hypothesize. “Is retention dropping because onboarding doesn’t connect to a clear first-value moment?” Now you’re not requesting labor. You’re requesting judgment. This is a different thing entirely.
Notice, frame, scope, hypothesize. Every one of those steps builds your understanding whether or not anyone ever answers you. The thinking that produces the question is the point.
The cost of a bad question is approaching zero. We haven’t dealt with what that means.
The obvious part is that the muscle atrophies. Not overnight. Steadily, in the background, until you try to write a card by hand and your letters look like a kid’s.
The less obvious part: if you never struggle to form the question, you never build the mental model. The twenty minutes you spent figuring out how to phrase what was wrong wasn’t overhead. You were mapping the system, testing boundaries, sorting what you knew from what you didn’t. By the time you had the question, you were halfway to the answer. The struggle wasn’t a side effect of learning. It was the learning.
And this scales beyond individuals. For years, interviews were secretly question-asking contests. The candidate who asked “what are the consistency requirements?” told you they’d been burned by eventual consistency before. The question was the credential. If that skill decays across a generation, I don’t think we know what replaces it as a signal.
I use LLMs every day. They make me faster. That’s real.
But I’ve started pausing before I type. What do I know? What have I ruled out? What’s my theory? Not every time. Not for small stuff. But for problems I want to understand, not just solve.
Sometimes I write the full question and realize I don’t need to send it. Writing it answered it (or at least helped me restructure my lazy prompt). This used to happen on Stack Overflow when you’d draft a post and delete it because structuring the question revealed the answer. Rubber duck debugging. The duck doesn’t answer. You answer yourself because the question forced clarity.
The discipline now is to be your own duck.
When someone on your team asks a vague question, don’t answer it. Ask them what they’ve tried. Ask them what they think is happening. That’s mentorship now. Not transferring knowledge, but protecting the capacity to ask.
The friction of forming a good question was never a bug. It was how we learned to think clearly.
We removed the friction. Our tools no longer demand the work. So we have to demand it of ourselves. I believe that. I practice it most days. But the next generation of builders will grow up with a tool that answers every half-formed question instantly and kindly. They’ll never draft a Stack Overflow post and delete it. They’ll never be forced to sharpen a question just to get Google to cooperate. So where do they learn that the question was the point?