The AI Doc arrives just as the abstract debates it stages have become painfully concrete, and it has nothing useful to say about any of them.
The doomer versus accelerationist debate is not the real story. The real story is happening in Pentagon contract negotiations and corporate boardrooms right now.
On March 27, 2026, Focus Features will release The AI Doc: Or How I Became an Apocaloptimist, a documentary directed by Oscar winner Daniel Roher (Navalny) and co-directed by Charlie Tyrell. The film follows Roher as a father-to-be on a quest to understand the world his child is about to inherit. It features interviews with OpenAI’s Sam Altman, Google DeepMind’s Demis Hassabis, Anthropic’s Dario Amodei, and researchers from the Center for Humane Technology and the Center for AI Safety. It is produced by the teams behind Everything Everywhere All at Once and Navalny.
It is also, by nearly every critical account, a film that arrives too late, asks the wrong questions, and resolves into precisely the kind of hand-wringing that the AI industry relies on to avoid accountability.
The AI Doc is not a bad film. It is an irrelevant one. The debates it stages were settled by events before it reached theatres.
The Film
The AI Doc is well-made. Reviews consistently praise the editing, the hand-drawn artwork interspersed throughout, and Roher’s vulnerability as a narrator. On Letterboxd, audience responses range from “one of the most beautiful and moving things I have ever seen” to genuine existential dread. The craft is not in question.
What is in question is the framing. The documentary structures itself around a binary: doomers who believe AI will destroy humanity versus accelerationists who believe it will save us. Roher positions himself between these camps, searching for a middle ground, and lands on “apocaloptimism,” a portmanteau that sounds clever until you realise it commits to nothing.
TheWrap called it “too little, too late.” Brian Tallerico, writing for RogerEbert.com, noted that the film’s vision of AI is given the spotlight “without enough interrogation regarding class and wealth.” At Variety’s Sundance panel, Roher himself admitted: “This movie did not want to be a movie.”
He was more right than he knew.
The Framing Problem
The doomer-versus-accelerationist binary is not a neutral lens. It is the AI industry’s preferred framing, and it serves a specific purpose: it keeps the conversation abstract enough that nobody has to answer concrete questions about power, money, and accountability.
If you are debating whether AI will destroy civilisation or usher in a golden age, you are not asking who profits from its deployment today. You are not asking which communities bear the cost of its training data. You are not asking why a Pentagon contract negotiation over safety guardrails collapsed into an unprecedented act of corporate punishment.
The documentary stages exactly this kind of debate. Altman and Hassabis and Amodei discuss the future of intelligence. Researchers warn about existential risk. Roher worries about his unborn child. Nobody discusses the $200 million defence contract that just blew up over whether an AI company should be allowed to refuse to build surveillance tools.
One Letterboxd reviewer put it precisely: the film offers “Joe Rogan-level nodding and agreeing with whatever talking point is dropped in its lap.” Another called the final call to action “noncommittal pap akin to posting an infographic to your Instagram story and calling it a day.”
The doomer-versus-accelerationist debate is not a disagreement. It is a shared assumption that AI is inevitable, imminent, and transformative. The only disagreement is whether to be excited or afraid. That framing benefits exactly one group of people: the ones building the technology.
What Is Actually Happening Right Now
While Roher was editing his film, the abstract questions it poses were being answered by events in real time.
On February 27, 2026, Defence Secretary Pete Hegseth formally designated Anthropic a “supply chain risk to national security,” a label typically reserved for foreign adversaries like Huawei. It was the first time the United States had ever applied this designation to an American company. The reason: Anthropic refused to remove safety guardrails from its Claude model that would have banned its use for mass surveillance of Americans and fully autonomous weapons. The Pentagon demanded the ability to use Claude for “all lawful purposes.” Anthropic said no. The government responded by requiring every defence contractor to certify they do not use Anthropic’s models, according to CNBC.
Hours later, OpenAI announced it had secured the Pentagon contract. CEO Sam Altman later admitted the deal “looked opportunistic and sloppy,” according to Fortune. OpenAI had initially sought similar protections against domestic surveillance and autonomous weapons, then amended its agreements to close the deal. Some OpenAI staff were reportedly furious, according to CNN.
Senator Kirsten Gillibrand called the designation “a dangerous misuse of a tool meant to address adversary-controlled technology.” Anthropic CEO Dario Amodei, one of The AI Doc’s featured interviewees, announced the company would challenge the designation in court, according to CNBC.
This is not a philosophical debate about whether AI might one day be dangerous. This is a live confrontation between a government that wants unrestricted access to AI capabilities and a company that drew a line at surveillance and autonomous weapons, and was punished for it. It is the single most consequential AI safety story of 2026, and it makes every talking-head debate about existential risk look like a distraction.
The AI Doc has nothing to say about it. It was already in the can.
The Bender-Hanna Critique
The sharpest intellectual framework for understanding The AI Doc’s failure was published months before the film premiered, and the documentary appears unaware of it.
In The AI Con: How to Fight Big Tech’s Hype and Create the Future We Want, computational linguist Emily M. Bender and AI researcher Alex Hanna argue that AI doomers and AI boosters are “two sides of the same coin.” Both camps, they write, share three core assumptions: that AI is inevitable, that it is imminent, and that it will be super powerful. The only disagreement is the final turn. Boosters say it will solve all problems. Doomers say it will kill us all. But both treat the technology as a force of nature rather than a set of corporate decisions made by identifiable people for identifiable reasons, according to an interview with KQED.
Bender and Hanna’s central insight is that doomerism is itself a form of hype. When you say AI might destroy civilisation, you are also saying it is extraordinarily powerful. That framing serves the companies building it, because it implies their product is world-changing, which is exactly what they need investors and governments to believe.
The AI Doc reproduces this framing wholesale. By staging a debate between people who think AI will save the world and people who think it will end it, the film reinforces the premise that both sides share: that what these companies are building is so powerful it will reshape everything. It never steps outside that frame to ask whether the “intelligence” being sold is actually what the marketing claims, or whether the real risks are mundane, systemic, and already here: job displacement, concentrated corporate power, and the erosion of democratic oversight over technologies deployed in public spaces.
Doomerism is not the opposite of hype. It is hype with a different emotional valence. The AI Doc falls for exactly this trap.
What the Film Could Have Been
The documentary Roher could have made was sitting right in front of him.
He had access to Dario Amodei, the CEO of the company that would refuse to build surveillance tools for the Pentagon and be designated a national security threat for it. He had access to Sam Altman, whose company would rush to fill the gap. He had access to the Center for Humane Technology, whose co-founder Tristan Harris has spent years arguing that the real danger of technology is not sentience but manipulation.
A documentary that followed the money, the contracts, and the policy fights would have been more urgent, more specific, and more useful than a meditation on whether AI will be good or bad for humanity. The question is not whether AI will be good or bad. The question is: who decides how it is used, who profits, and who bears the consequences when those decisions go wrong?
The Anthropic-Pentagon standoff is a perfect case study. A company built by people who left OpenAI over safety concerns drew a line at military surveillance. The government punished them. A competitor stepped in. The line was redrawn. That story has heroes and villains and genuine moral complexity. It is also a story about power, not philosophy.
But telling that story would require naming names, assigning responsibility, and treating the people featured in the documentary as actors with interests rather than visionaries with opinions. It would require the filmmaker to stop asking “should we be optimistic or pessimistic?” and start asking “who benefits from the way this technology is being deployed, and who is paying the price?”
What to Watch
The AI Doc will likely do well commercially. The subject is timely, the craft is strong, and the “apocaloptimist” coinage gives audiences a comfortable place to land. It will spark conversations. Those conversations will mostly reproduce the exact framing the AI industry prefers.
For anyone following the actual trajectory of AI policy, the real documentary is being written in real time:
- Anthropic’s legal challenge to the supply chain risk designation, which could set precedent for whether AI companies can refuse government contracts on ethical grounds
- OpenAI’s Pentagon deal, which tests whether voluntary safety commitments survive contact with commercial incentives
- The EU AI Act’s implementation, which begins classifying AI systems by risk level and may be the first regulatory framework to address autonomous agents in surveillance contexts
- Corporate accountability, as the question of who is responsible when AI systems cause harm moves from academic conferences to courtrooms
The AI Doc asks whether we should be optimistic or pessimistic about artificial intelligence. That is the wrong question. The right question is the one Anthropic just answered at a cost of $200 million in Pentagon contracts: where is the line, and what happens to the companies that refuse to cross it?