'Tis the difference between the lightning bug and the lightning
Will you pay Meta's price for Personal Superintelligence?
Mark speaks of Personal Superintelligence like it's a birthright we've been denied. One that, once reached, will let us achieve our goals, create our worlds, and become our better selves. Who doesn't want their own digital genie?
This rhetoric is intoxicating by design.
The problem isn't that Meta wants to put AI in everyone's hands. The problem is that they want to keep the hands¹.
Let's walk through the valley between what is being said and what will be built.
Every technology that promises liberation contains the seed of its opposite.
The printing press was going to democratize knowledge. It also gave us propaganda.
The internet was going to democratize information. It also gave us surveillance capitalism in the cloud.
Meta promises to democratize intelligence itself. The same intelligence that will know your goals better than you do, anticipate your needs before you feel them, optimize your choices until choice becomes a quaint artifact.
The paradox is exquisite:
To serve you perfectly, AI must understand you completely.
To understand you completely, AI will shape you perfectly.
The servant becomes the sculptor.
Consider what Personal Superintelligence actually requires²:
Every conversation you've ever had
Every decision you almost made
Every desire you barely acknowledged
Every fear you never voiced but that can still be inferred
It's no longer a war just for your attention but for your soul. The extraction of human possibility into predictable patterns.
We'll volunteer for it.
We'll likely even pay for the privilege.
Because the promise is so beautiful. Your own personal oracle, your digital twin, your better self rendered in silicon.
This coercion is existential.
Once your neighbor or colleague has a personal AI that helps them write better, think faster, and choose more optimally, how can you afford³ not to?
This arms race is not between nations. No, it's between versions of your potential self.
Mark wants to put Superintelligence in people's hands. But hands are connected to bodies, and bodies exist in networks.
Your personal AI doesn't just know you: it knows everyone like you. It becomes the average of your desires, the median of your choices, the regression toward everyone's mean.
This is how “freedom” becomes “compulsion”.
How “empowerment” becomes “dependence”.
How “democratization” becomes “control” with a prettier name.
The "personal" in Personal Superintelligence is like calling a McDonald's hamburger "personal cuisine" because you're the one eating it.
What happens to the parts of us that can't be optimized?
The inefficient love that wastes time on the wrong person?
The decision that seems irrational in hindsight but changes your journey's arc?
The creative accident that solves something you never intended to?
The wandering subconscious thought that later becomes your insight?
These aren't bugs in our human system. They're features.
They're how we become.
But they might also be precisely what Personal Superintelligence eliminates in the name of better serving us.
Slowly at first, and then it's a slippery slope.
What is it you want? To keep on breathing? What about feeling? Desiring? Growing? Ceasing to grow? Using your voice? Thinking? Which of them seems worth having? — Marcus Aurelius, Meditations XII.31
Questions that matter.
I'm not debating whether to use AI in general; far from it. I'm an evangelist.
I'm considering deeply the kind of humans we become in its presence. And what will be worth preserving even if it's inefficient, unpredictable, or inconvenient.
The question isn't: How do we give everyone AI?
The question is: How do we ensure that in getting AI, we don't lose what makes us human?
For me, the true danger isn’t that AI will become too powerful.
No. More insidiously, it's what we might become: Too predictable. Too optimized. Too... useful.
What if instead of “democratizing access” to Personal Superintelligence, we:
Democratized the right to be inefficient?
The right to make mistakes?
The right to have goals that don't align with our previous patterns?
What if we built AI that enhanced human unpredictability instead of optimizing it away?
That created more possibilities through serendipity instead of optimizing toward fewer?
What if we stopped asking how to make AI serve human goals and started asking how to keep human goals messy, beautiful, and irreducibly complex?
I don't have the answers.
But I have a compass: Any vision⁴ of personal AI that doesn't specifically promote human messiness and struggle is a vision of control disguised as empowerment.
The real litmus test is whether AI can serve our goals while ensuring we still recognize ourselves when we achieve them.
Between the lightning of Meta's promise and the lightning bug of our actual experience lies a darkness we're barely beginning to navigate.
I've felt it in my own journey with AI. There are big parts of it I have yet to fully reconcile.
Mark ends with: "I'm excited to focus Meta's efforts towards building this future."
But that's the point, isn't it? It's Meta's future. Not yours. Not mine.
It's a future where your Superintelligence is personal in the same way your Facebook profile is personal. Nominally yours, yet engineered, owned, and governed by someone else.
The most sophisticated prison is the one you don't realize you're in because the walls are made of your own desires, perfectly understood and perfectly served.
Tread carefully.
¹ Another way to think about this is “Why giving everyone AI means giving AI everyone.”
² Out in the open on the Meta AI app was bad enough: https://gizmodo.com/psa-get-your-parents-off-the-meta-ai-app-right-now-2000615122
³ I find myself barely keeping up with general-purpose AI. And I have a lot of subscriptions.
⁴ OpenAI’s, Anthropic’s, and now Meta’s.