Apocaloptimism
The believers sound like fanatics. The fanatics sound like realists. Only one of them is right.
There are believers who sound like fanatics, and there are fanatics who sound like believers. Spend enough time in the AI discourse and you will meet both, often at the same dinner table, sometimes in the same breath. The dissonance is remarkable. Two people can look at the same technology, cite the same papers, reference the same benchmarks, and arrive at conclusions so different you would think they were describing parallel universes.
And yet neither of them is entirely wrong. That is what makes this moment so hard to think clearly about.
We are in the middle of something that has no clean analogy. Not the printing press, not electricity, not the internet. Those comparisons flatten the thing we are actually dealing with. What is arriving is closer to a new kind of cognition entering the world: not human, not quite alien, but undeniably something, getting more capable by the month in ways that continually surprise even the people building it.
The people paying attention have split, almost cleanly, into two congregations (complete with schisms, excommunications, and surprisingly good catering).
The first believes we are building God. They speak of artificial superintelligence the way certain evangelists speak of the Second Coming: with shining eyes, total certainty, and a timeline that keeps getting revised but never abandoned. In their telling, AI will cure cancer, solve climate change, abolish poverty, and eventually render death optional (Silicon Valley's longest-running product roadmap). Express doubt and you are not cautious. You are a heretic who does not understand exponential curves. They have the energy of people who have glimpsed a vision, and the dangerous conviction that comes from mistaking a trajectory for a destination.
The second believes we are building a bomb. Not metaphorically. An actual civilizational risk so severe that nothing else matters until we have figured out how to defuse it. In their telling, we are sleepwalking toward an alignment catastrophe, constructing systems whose internal reasoning we cannot trace and whose long-term objectives we cannot guarantee. Express optimism and you are not hopeful. You are dangerously naive, the kind of person who looks at a lit fuse and admires the sparkle. They have the energy of people raising a genuine alarm, and the equally dangerous conviction that catastrophe is the only possible outcome unless we hit the brakes yesterday.
Here is what makes this so disorienting: both congregations are populated by some of the smartest people alive. Both have evidence. Both have logic. Both have track records of being right about important things.
And yet they cannot both be fully right. Or can they?
The optimists point to real things. AlphaFold has mapped the structure of nearly every known protein: over 200 million predicted structures, where the experimental database held roughly 190,000 before it. AI-assisted drug discovery is compressing timelines from years to months. Language models are beginning to bridge gaps in low-resource languages that decades of manual research could not close. Autonomous agents are writing, debugging, and shipping software with an efficiency that would have seemed absurd three years ago. The gains are not hypothetical. They are here, measurable, and compounding.
The pessimists also point to real things. We have built systems whose reasoning we cannot fully interpret. We have deployed models that hallucinate with the confidence of tenured professors (the term "hallucination" is itself contested; some researchers prefer "confabulation," arguing it better captures the mechanism). We have already watched AI be used for mass surveillance, targeted manipulation, and the industrial-scale generation of synthetic content so convincing it threatens to make the very idea of shared truth feel outdated. These are not edge cases or theoretical risks. They are features of the current landscape, happening now, at scale.
Both sides are looking at the same technology and arriving at opposite conclusions. Which tells you something important: the disagreement is not really about the facts. It is about the lens.
Pure optimism is reckless because it treats good outcomes as automatic. It assumes that the same technology reshaping medicine will naturally reshape it toward equity; that models accelerating discovery will naturally accelerate it in directions that benefit everyone. History offers zero evidence for this. Every transformative technology in human history has been captured, weaponized, and hoarded before it was democratized, often all three simultaneously. (The printing press was banned in the Ottoman Empire for decades; radio was seized by authoritarian regimes within years of its invention.) There is no reason to believe AI will be the exception unless we deliberately make it one.
Pure pessimism is paralyzing because it treats catastrophe as inevitable. It assumes that the only responsible posture toward powerful technology is retreat. But retreat is not available to us. The models exist. The capabilities are scaling. The frontier labs are racing. The question was never whether powerful AI arrives, but who builds it, who governs it, and with what values it enters the world. Pessimism that produces inaction simply cedes those decisions to whoever is least cautious.
What I have arrived at (and I hold it loosely, because anyone claiming certainty about any of this is lying either to you or to themselves) is something I have started calling apocaloptimism.
Not optimism. Not pessimism. Not the diplomatic "cautious optimism" that usually means optimism without the conviction to own it.
Apocaloptimism is the recognition that we are building something with the genuine capacity to end the world and the genuine capacity to transform it beyond recognition for the better. Both of these possibilities are live, right now, in the same systems, being shaped by the same people, funded by the same capital. It is the refusal to collapse that duality into a comfortable story. And it is the commitment to build anyway. Not because the outcome is guaranteed, but because building responsibly is itself the point. The only available moral act in a moment like this is not to step back. It is to step forward with your eyes open.
There is a particular kind of discomfort that comes from working on AI while holding both of these truths simultaneously. The architecture you are studying could create unprecedented equity. The ecosystem deploying it is doing no such thing. The model you are improving can bridge gaps that matter to billions of people. It can also, with trivial adjustments, do the opposite. The tool and the weapon share the same weights. What separates them is not some fundamental technical safeguard. It is intention, oversight, and the willingness to sit with that tension instead of resolving it prematurely in either direction.
Most people want this resolved. They want to know: is AI good or bad? Should I be excited or afraid? And the honest answer, the one that satisfies no one, is: yes. To both. At the same time. And the discomfort of holding that is not a sign that you are confused. It is a sign that you are paying attention.
The discourse around AI is dominated by two varieties of certainty: the certainty that everything will be fine, and the certainty that nothing will be. Both are comforting in their own way. Certainty always is. It tells you what to feel, who to blame, and when you are allowed to stop thinking. It is the intellectual equivalent of closing your eyes during the difficult part of a film. You miss nothing if the scene is what you expected. You miss everything if it is not.
Apocaloptimism offers no such comfort. It says: the future is not written. The technology is not inherently redemptive or destructive. The people building it are neither saviors nor villains. And the lens we choose, apocalyptic or utopian, fatalistic or messianic, will shape what arrives as surely as the technology itself. Narratives are not neutral descriptions of reality. They are instructions for building it. (That is the most dangerous sentence in this essay, and I wrote it anyway.)
We are living through the opening pages of a chapter that will be studied for centuries. The believers think it is a gospel. The fanatics think it is a warning. Perhaps the honest answer is that it is a draft, and we are still writing it, and the quality of what we produce depends less on the capabilities we develop than on the character we bring to developing them.
The believers sound like fanatics. The fanatics sound like realists. And the only honest position is to take both seriously, hold the tension without flinching, and build.