Episode 72

AI's Loudest Critic Argues with His Soul, Not His Brain

Ed Zitron is one of the internet's most vocal AI skeptics. But when you look past the confidence, his arguments rely on ad hominem attacks, conspiracy theories, and 'soul-based reasoning' instead of evidence.

Ed Zitron runs a PR firm, writes a tech newsletter with 85,000 subscribers, and recently appeared on The Last Invention podcast to declare that AI “doesn’t f***ing work.” He dismissed Geoffrey Hinton — a Nobel Prize winner in Physics — as someone who just “occasionally speaks to somebody and goes, I’m scared of a computer.” He called the entire AI industry a grift. And when a Guardian reporter asked how he could be so certain AI would collapse, his answer was: “I feel it in my soul.”

That last quote is the whole problem in a nutshell. Zitron’s brand is demanding evidence from everyone else while arriving at his own conclusions through gut feeling. His segment on The Last Invention is a masterclass in bad argumentation: dismiss the technology without engaging with it, attack the people instead of their ideas, frame everything as a money-motivated conspiracy, and when cornered, pivot to an unrelated social issue. He claimed Hinton’s AI safety warnings are secretly marketing for AI companies, a conspiracy theory about a 76-year-old academic who left Google specifically to speak freely about risks.

What makes this frustrating is that good AI skepticism exists on the very same episode. Gary Marcus, a cognitive scientist at NYU, makes specific technical arguments about architectural limitations in large language models. Arvind Narayanan from Princeton argues that even transformative technologies take 50-60 years to fully integrate into society. These are evidence-based critiques that make you think harder. Zitron’s approach makes you think less.

The irony runs deeper than rhetoric. In October 2025, WIRED published a profile with the headline: “Ed Zitron Gets Paid to Love AI. He Also Gets Paid to Hate AI.” His PR firm EZPR counts AI companies among its clients. The man telling everyone that AI is snake oil is literally doing marketing for the snake oil sellers. That doesn’t automatically make him wrong, but when your core argument is “everyone else is just motivated by money,” the conflict of interest deserves a mention.

The AI industry absolutely needs critics asking hard questions about hallucination rates, training data transparency, bias, and labor displacement. But those critics need to engage with the technology on its own terms. You can’t wave your hand and say “it doesn’t work” when hundreds of millions of people use it productively every day. And you definitely can’t say “I feel it in my soul” and expect to be taken seriously as an evidence-based critic. Bad skepticism doesn’t just fail on its own merits — it makes it easier for AI companies to dismiss all criticism as unserious.

Let’s dig deeper into each of Zitron’s arguments.

Ed Zitron runs a PR firm, writes a tech newsletter with about 85,000 subscribers, and hosts a podcast called Better Offline. He’s been described as “one of the most pugnacious critics of big tech.” And he just spent about 25 minutes on The Last Invention telling the host that AI is a complete grift, that none of the people building it are sincere, and that we are, quote, “at the limits of what these things can do.”

And I want to be clear about what we’re doing today. We are not anti-skeptic. Skepticism is essential. In fact, two other guests on this same episode, Gary Marcus and Arvind Narayanan, made genuinely thoughtful critiques that deserve real engagement. But Zitron’s segment is something different. It’s a masterclass in bad argumentation dressed up as contrarianism. And I think it’s worth picking apart because a lot of people hear confident, loud skepticism and assume it must be well-reasoned.

At about 8 minutes in, Zitron says this: “AI is a marketing term and it doesn’t mean anything. It just means something related to whatever we can raise investment dollars for.”

So this is a classic move where you take a grain of truth and stretch it until it snaps. Yes, “AI” gets slapped onto products that barely qualify. Your toaster doesn’t need a neural network. But artificial intelligence has been a formally defined field of research since 1956, when the term was coined at the Dartmouth workshop. It encompasses machine learning, computer vision, natural language processing, robotics, and more. Dismissing all of that as “just marketing” is like saying “medicine is just a marketing term” because supplement companies make bogus health claims.

This one really got me. Around 10 minutes in, the host brings up Geoffrey Hinton, and Zitron’s response is: “Oh, Jeff Hinton. Who? Sorry, I shouldn’t mock him.” And then he proceeds to mock him for the next several minutes.

Right. He questions what Hinton has “been up to” other than “occasionally speaking to somebody and going, I’m scared of a computer.” This is a Nobel Prize winner in physics. A man who spent 40 years developing the foundational techniques behind modern neural networks. And Zitron’s rebuttal is essentially: “He’s old and gives speeches.”

And then at around 13 minutes, he says: “For all his bloviating, Jeff seems to provide very little actual commentary.” And then the wild part: he claims Hinton’s safety warnings are actually marketing for AI companies. The logic being that if someone says your product is scary, that makes it seem powerful, which raises its valuation.

Which is a conspiracy theory. You’re taking a 76-year-old academic who left Google specifically so he could speak freely about AI risks, and claiming he’s secretly doing PR for the companies he’s criticizing. That’s not analysis. That’s a plot from a bad thriller.

This is where things get really bizarre. Around 16 minutes in, Zitron pivots to: “Why doesn’t Jeff Hinton seem to care about black people in disadvantaged neighborhoods who are contracting horrible diseases from these horrible data centers?”

So just to be clear about what happened here. The host asked Zitron to engage with Hinton’s technical arguments about AI risk. Zitron’s response was to accuse Hinton of not caring about environmental racism. That’s textbook whataboutism. It doesn’t address the argument. It changes the subject entirely and puts the other person on moral defense.

And Hinton does actually speak about near-term harms like job displacement and bias. The host points this out. Zitron just barrels through it.

At around 20 minutes, the host brings up internal OpenAI emails from 2017 where Altman, Musk, and Sutskever discuss their genuine fears about AGI. These are private emails, not public statements. And Zitron’s response is: “These are bloviating rich guys sitting around huffing their own farts.”

The host is presenting evidence of sincere belief, private communications where these people expressed concern about the power of what they were building. And Zitron’s entire counter-argument is: “They’re liars.” That’s it. No engagement with the content of the emails. No alternative explanation for why they’d express these concerns privately. Just: they’re bad people, therefore nothing they say counts.

At 26 minutes, the host asks Zitron point blank whether all this investment and talent could produce something truly transformational. And Zitron says: “No. We are at the limits of what these things can do. We are absolutely not at some early stage.”

This is the most falsifiable claim he makes. And it’s stated with absolute certainty. Zero hedging. No “probably” or “I think.” Just: no, this is the ceiling. Now, we’ve heard this kind of prediction before. In 2016, people said neural networks had peaked. In 2022, people said GPT-3 was the ceiling. Each time, the next generation blew past those predictions.

And he doesn’t offer any evidence for that claim. No technical argument about why scaling would stop working. No discussion of architectural limitations. No engagement with recent breakthroughs in reasoning, coding, or multimodal capability. His evidence is… he’s confident about it.

Which connects to something from his other media appearances. When a Guardian reporter asked how he could be so certain AI would collapse, his answer was: “I feel it in my soul.”

A tech critic whose entire brand is demanding evidence from everyone else… arrives at his most important conclusion through soul-based reasoning.

Look, I don’t want to be too mean about it. Everyone has intuitions. But if your job is to be the evidence guy, the “just look at the data” guy, you cannot then pivot to “I feel it in my soul” when asked for your evidence. That’s not skepticism. That’s faith.

It’s remarkably consistent. First, never engage with the technology itself. Don’t talk about benchmarks, architectures, or capabilities. Second, attack the people instead of the arguments. Hinton is old. Altman is a liar. Musk is a narcissist. Third, frame everything as a conspiracy motivated by money. Fourth, when cornered, pivot to an unrelated social issue. And fifth, make your position unfalsifiable. If AI works, it’s marketing. If AI fails, you told us so.

Frequently Asked Questions

What are the main arguments against artificial general intelligence?

Key arguments include: consciousness may be non-computable (Penrose), intelligence requires embodied experience (Dreyfus), current AI is pattern matching, not understanding (Searle’s Chinese Room), and recursive self-improvement may face diminishing returns. Skeptics argue we’re building powerful tools, not minds.

Can AI be truly intelligent without being conscious?

This is philosophy’s hard problem applied to AI. Many argue that functional intelligence doesn’t require consciousness — a system could solve any cognitive task without subjective experience. Others contend that genuine understanding, creativity, and general intelligence may require consciousness, which silicon may never achieve.
