The Dead Air
What Orson Welles, new mediums and old martians teach us about surviving the AI era
I went to Profs and Pints in D.C. on Monday with my wife and listened to Daniel H. Foster give a talk about the famed 1938 War of the Worlds broadcast. Most know the surface story: Orson Welles panicked roughly a million Americans listening to the broadcast into thinking Martians had actually landed.
But Daniel’s meta point through the talk was about adaptation.
He walked us through how Welles didn’t just read H.G. Wells’s science fiction novel on the radio.
He purpose-built the production for the radio medium. His cast mimicked the newscaster from the Hindenburg disaster to amp up emotional anxiety, he used real place names, and then he did something no one had tried before: he let the microphone go silent.
Dead air.
In a medium defined entirely by sound, silence meant death. The listener’s brain did the rest.
Foster called this “the theater of the mind.” The absence was more terrifying than any sound effect could be. Welles understood that the gap, the thing the audience had to fill in themselves, was where his power lived.
Now think about what’s happening with AI.
A tool that is always wrong is easy to dismiss. A tool that is mostly right becomes part of your nervous system. It helps write the code, draft the memo, diagnose the bug, summarize the meeting, compose the argument. You stop noticing where you end and it begins.
Then one day it says something plausible and false, and you’re left scratching your head: can I recover?
That’s the gap we might start losing. A year ago, in The Glitch in the Pattern, I argued that the predictable output of generative AI is creating a cultural flatline, leading to content that is safe, remixable, and familiar. But I was only describing the symptom, which we’ve now all seen come to pass. The issue is deeper: we are losing our tolerance for the gap itself. The dead air. The not-knowing. The space where adaptation actually happens.
Welles moved fluidly between theater, radio, and film because he was willing to be wrong inside each one long enough to discover what it could uniquely do.
Daniel talked, too, about Edgar Bergen, who somehow made a ventriloquist dummy work on radio, which is objectively absurd. But it worked precisely because Bergen adapted to the medium.
The Martians in Wells’ story had superior technology: ray guns, tripods, and overwhelming force. But in the end they succumbed to bacteria, because they never needed to adapt. There was no friction that led to productive failure, or, as we might experience it, a runny nose.
Superior capability without adaptive pressure is a death sentence. It was true for fictional Martians. It is becoming true for anyone who uses AI as a replacement for struggle rather than a surface to struggle against.
The central question you and I should be wrestling with is how rentable machine intelligence can be used productively without worshipping it. As Marcus Aurelius would say:
Not to be driven this way and that, but always to behave with justice and see things as they are. — IV. 22
So can you: act before certainty, then revise without shame. Believe provisionally. Stay in motion while knowing your perceptions are under construction.
The real division ahead will be between brittle and adaptive human intelligence.
Welles didn’t panic his audience because he had better technology than all the other radio producers at the time. He panicked them because he adapted the medium faster than they could adapt to him.
The Martians had better machines. We had bacteria. The glitch in their pattern was that they never needed to get sick. Ours is that we might stop letting ourselves.
That night, I found this YouTube recording of Daniel giving an abbreviated version of the same talk.