Subscribe to the right channels and you hear it almost daily. A future of digital “super intelligence” is just around the corner[1]. Terms like AGI (Artificial General Intelligence, matching human abilities across all domains) are used as if they describe a settled destination[2].
This vision clashes with a more grounded reality described by other researchers[3], who point out that today’s AI models are great at matching patterns but not at genuine reasoning, and that more computing power doesn’t automatically create better understanding.
Who is right?
To make sense of the debate, we first have to get clearer on what “intelligence” actually means[4]. The problem is, we talk about intelligence as if it’s a single thing. A simple score on a scale. But it’s not. Intelligence is a collection of many different abilities. When we compare human abilities to what current AI can do, the differences are stark.
Here’s a simple chart for how I personally like to think about it.
The two are fundamentally different. Think about the intelligence of a skilled mechanic who knows an engine is failing just by its sound and feel. That’s a physical, hands-on intelligence. Think of the emotional intelligence it takes to navigate a difficult family conversation or lead a team through a crisis. Humans operate with this rich mix of logic, intuition, social awareness, and physical understanding. AI, on the other hand, has no body, no emotions, and no real-world life experience.
The easiest way to grasp this divide is to think about the difference between knowing that something is true and knowing how to actually do it.
AI has incredible "book smarts." These models have been trained on almost everything ever written in the public domain and can write a flawless physics dissertation on how a bicycle works. They know that a bike stays upright through a complex combination of momentum, steering, and balance[5].
A person, on the other hand, knows how to ride a bicycle. This is a practical, "street smarts" kind of knowledge learned through the messy, real-world process of wobbling, scraping a knee, and finally finding that click of balance. You can't learn it by reading a book, and a non-embodied AI can't learn it by processing data.
Much of the excitement about a coming "super intelligence" is based on the idea of making the AI's "book smarts" infinitely bigger and faster. The assumption is that if you make an AI powerful enough on paper, it will eventually develop real-world common sense and understanding.
But knowing all the facts in the world doesn't mean you understand the world. An AI has never had to assemble furniture with confusing instructions; it has never had to figure out how to calm a crying baby. It lacks the flexible, common-sense wisdom that comes from simply living.
So, this isn’t a simple race where AI is catching up to humans on the same track.
Whatever this is that I am, it is flesh and a little spirit and an intelligence. —Marcus Aurelius, Meditations, II.2
We are building a different kind of intelligence. It is an extremely powerful tool that can process information and find patterns at a scale we never could. But it is not a replacement for the messy, embodied, and uniquely adaptable intelligence that makes us human.
Recognizing this difference helps us use this new technology for what it is, while also appreciating the enduring value of our own.
[2] Earlier, AGI had been defined as “when OpenAI reaches $100B in profits”. Cue head scratching… https://www.theverge.com/2024/12/26/24329618/openai-microsoft-and-the-100-billion-agi-question
[4] The field is huge and much broader than the treatment I give here. Major theories for further reading include Gardner's Theory of Multiple Intelligences, Sternberg's Triarchic Theory of Intelligence, the Cattell-Horn-Carroll (CHC) Theory, Spearman's g Factor, and Guilford's Structure of Intellect Model. This Wikipedia entry is a great jumping-off point: https://en.wikipedia.org/wiki/Theory_of_multiple_intelligences
[5] But they still struggle to draw SVGs of pelicans: https://simonwillison.net/2025/Jun/6/six-months-in-llms/