“Got the FAANG interview. Months of LeetCode grinding, thousands of problems. Ready to crush it. Dynamic programming, graphs, trees, you name it.
Then the interview? Blew it. Straight up tanked. It was literally a medium I’d solved last week. But under the spotlight? Brain locked up. By the time I got something going, the clock was basically dead. Interviewer X’ed me on the spot.”
Sound familiar?
If you’re a Software Engineer in Tech, you’ve been through it. This algorithmic Russian-roulette hell is the brutal rite of passage on your way to RSU-incentivized riches.
Most jobs in Tech don’t have binary evaluation metrics, either when interviewing or once in seat.
The best part about being an Executive is how you’re measured.
Performance objectives are clear. You have a revenue, retention or profitability target. You're accountable for it.
At review time: Did you meet or exceed your number? If not, how far off? There’s (usually) little tolerance for failure. It’s part of the agreement.
Outside of sales, almost no other function has this clarity. You’re part of a team pushing code to deliver a feature that’s projected to move a key result. If it fails, you try something different. Do you have your manager’s favor? Are you not a dick to work with?
Employment continues at will.
If the gods have made decisions about me and the things that happen to me, then they were good decisions. — VI. 44
Leetcode for CEOs
Interviewing for CxO roles is different. There is no set of arbitrary, complex problems with known solutions to solve on coderpad.io.
It’s all about relationships and a history of results. Bonus points if you came from McKinsey.
History isn’t a great predictor of future results if the environment, industry or market you’re moving to is very different from your past.
In September, HBR ran a sensationalist article entitled “AI Can (Mostly) Outperform Human CEOs.” Spoiler alert: it was a bit of a shill for Strategize.com, which is building simulated environments for management decision making.
The results of what they did were interesting:
A real-world experiment tested GPT-4o as a CEO against humans in a business simulation.
The experiment involved 344 participants making decisions in a simulation of the U.S. automotive industry.
GPT-4o outperformed humans in metrics like product design and market response but was fired faster due to struggles with unpredictable “black swan” events like market collapses.
AI and human executives both failed due to overconfidence and short-term thinking, while top-performing students excelled by planning for long-term adaptability.
The article ends with this takeaway: AI complements, not replaces, human CEOs by enhancing decision-making and focusing on data-heavy tasks.
My takeaway?
Simulated decision-making environments will become part of the hiring process for CxOs in a future near you.
There just hasn’t been an effective way to do it before.
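Mechanically, there’s nothing exotic about such an environment: it’s an agent-in-the-loop simulation with a scoreboard and a board that can pull the trigger. Here’s a toy Python sketch of the idea. It’s entirely hypothetical: the market model, the numbers, and the firing rule are all made up for illustration, and it bears no resemblance to Strategize’s actual system.

```python
# Toy sketch of a simulated decision environment for evaluating a "CEO"
# agent. Entirely hypothetical: the market model, numbers, and firing
# rule are invented for illustration.
import random
from dataclasses import dataclass

@dataclass
class Decision:
    price: float      # unit price the "CEO" sets this quarter
    rnd_spend: float  # R&D budget as a fraction of revenue

def market_step(d: Decision, shock: bool) -> float:
    """Toy demand model: lower prices and more R&D lift sales;
    a black-swan shock craters demand no matter what you decided."""
    demand = max(0.0, 100.0 - 2.0 * d.price) * (1.0 + d.rnd_spend)
    if shock:
        demand *= 0.3  # e.g. a sudden market collapse
    return demand * d.price  # quarterly revenue

def greedy_agent(history: list[float]) -> Decision:
    """Stand-in for GPT-4o or a human: maximizes near-term revenue and
    underinvests in resilience -- the overconfidence the article describes."""
    return Decision(price=25.0, rnd_spend=0.05)

def run_simulation(agent, quarters: int = 20, seed: int = 0) -> str:
    rng = random.Random(seed)
    history: list[float] = []
    for q in range(quarters):
        shock = rng.random() < 0.10  # ~10% chance of a black swan per quarter
        revenue = market_step(agent(history), shock)
        history.append(revenue)
        if revenue < 600.0:  # the board's tolerance threshold
            return f"fired in quarter {q + 1} (revenue {revenue:.0f})"
    return f"survived {quarters} quarters (avg revenue {sum(history) / len(history):.0f})"

if __name__ == "__main__":
    print(run_simulation(greedy_agent))
```

Swap greedy_agent for an LLM call or a human at a terminal and you have the crude bones of what the HBR experiment ran: the short-termist agent coasts through calm quarters and gets fired the moment a shock hits.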
Despite their performance in the experiment, students won’t stand a chance in the “real world.” Relationships, gravitas and presence will all still be the main qualifiers.
But boards with fiduciary duties of care and loyalty to shareholders won’t have a choice, despite the teeth-gritting friction.
Play for pay.
Get ready to start grinding.
There is a business model somewhere in this...