A subtle chill is creeping into modern creativity: predictability.
You and I know by now that AI can instantly generate passable art, music, code and prose. Two years on from ChatGPT’s introduction, it rarely births anything that makes us say, “I’ve never seen anything like that before.”
The truth, of course, is that we've always recycled the past, but AI is intensifying this at an exponential rate, flooding culture with safe, remixable content. Indeed, remixing is a core concept when you use OpenAI's Sora.
Some — like Kirby Ferguson — have noted a “cultural stagnation” in which “everything looks the same, sounds the same”.
Algorithmic systems that control our feeds on TikTok, Instagram, and YouTube already push us to replicate proven hits. They stifle risky leaps and real innovation.
When everyone uses the same model (or handful of models), we produce a narrow aesthetic, each model leaving its own distinct signature.
You and I can already smell:
If that code was produced by Claude 3.5 Sonnet
If that essay was written by ChatGPT
If that synopsis was the product of Gemini’s Deep Research
Generative AI thrives on repeating patterns, creating a feedback loop of the familiar.
No matter what happens, keep this in mind: It’s the same old thing, from one end of the world to the other. It fills the history books, ancient and modern, and the cities, and the houses too. Nothing new at all. Familiar, transient. — Marcus Aurelius, Meditations, VII.1
But true originality is a bold, fragile act.
True breakthroughs in music, painting, or storytelling often confuse or unsettle at first. It is precisely this refusal to follow the crowd that stands in stark contrast to the ruthless efficiency afforded by a $20/month AI subscription.
And I’ve been pondering…
If AI solves our problems, do we still need to learn the deeper “why”?
If knowledge is cheap and commoditized, how much more value is placed on the wisdom acquired through hard-fought struggle?
If AI is used as an amplifier that accelerates exploration, how do we continue to ensure we have odd dreams and take blind leaps?
This leads to a choice that we must increasingly make intentionally. It will require:
Embracing the glitch in the pattern
Seeking those questions that AI will never be able to answer
Creating what is by definition difficult for algorithms to categorize
Building intensely from personal experience that can never be found in a training set
It’s the celebration of our inefficiencies. Our irrational leaps. Our ability, and intense new need, to be productively wrong.
It’s using our new foils to become more authentically, radically human.