After our Episode 56 coverage of the AI-2027 paper — which lays out a timeline to artificial superintelligence by December 2027 — Neil deGrasse Tyson dropped a StarTalk episode called “Why the Singularity Is Probably Wrong.” We love Neil, but on this topic, we think he and guest Adam Becker are making some real mistakes.
Their five core arguments, and where we think each one falls short:
1. Exponential growth always hits a ceiling. Becker argues that singularity believers project exponential curves without acknowledging S-curves. He’s right that every individual technology follows an S-curve: vacuum tubes maxed out, and transistors are approaching atomic limits. But the singularity hypothesis isn’t about one technology. It’s about paradigm shifts: each S-curve tops out, but a new one starts before the old one finishes, so the combined capability curve keeps climbing (there’s a toy numerical sketch of this after the list). And in AI specifically, the rate of paradigm-level breakthroughs has been accelerating.
2. Intelligence isn’t a single dial. This is Becker’s strongest point: AI can be superhuman at chess while having zero social awareness. But nobody serious in AI claims you “just turn up one dial.” Modern development involves architecture innovations, training methodology, reinforcement learning from human feedback (RLHF), multimodal integration, and tool use. More importantly, the trend shows AI becoming more general, not less. The “intelligence is too complex” argument has been the retreat position for decades as the goalposts keep moving.
3. The brain is not a computer. True: the brain isn’t a von Neumann machine. But you don’t need to simulate a brain to achieve general intelligence, just as planes don’t need to flap their wings to fly. The question isn’t whether AI works like a brain, but whether it can achieve comparable cognitive outputs.
4. Moore’s Law is dead. Traditional transistor scaling is slowing, but computing capability keeps advancing through new architectures, specialized hardware, algorithmic efficiency gains, and neuromorphic computing. The computational resources available for AI training have grown by roughly 10x per year even as Moore’s Law plateaus, and that gap compounds quickly (see the back-of-the-envelope comparison after the list).
5. The real problems are political, not technical. Tyson argues we should worry about inequality and climate change instead. But this is a false dichotomy — AI could be the most powerful tool for addressing those very problems. And responsible development requires understanding what’s coming, not dismissing it.
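On point 1, here’s a minimal numerical sketch of the stacked S-curve argument, in Python with made-up numbers (the ceilings, timing, and 10x spacing are illustrative assumptions, not figures from the AI-2027 paper or the episode): each “technology” saturates on its own logistic curve, but because each new curve starts before the previous one flattens and has a higher ceiling, the combined capability keeps climbing roughly exponentially.

```python
import numpy as np

def logistic(t, ceiling, midpoint, rate=1.0):
    """One technology's S-curve: rapid growth, then saturation at `ceiling`."""
    return ceiling / (1.0 + np.exp(-rate * (t - midpoint)))

t = np.linspace(0, 40, 401)

# Hypothetical successive paradigms: each one starts before the last
# saturates, and each ceiling is ~10x higher than the previous one.
paradigms = [logistic(t, ceiling=10.0**k, midpoint=8 * k + 4) for k in range(5)]
combined = np.sum(paradigms, axis=0)

# Every individual curve flattens, yet the envelope keeps rising:
for year in (5, 15, 25, 35):
    i = int(np.searchsorted(t, year))
    print(f"t={year:2d}: combined capability ~ {combined[i]:,.1f}")
```

Every component curve in that toy model obeys Becker’s ceiling objection; it’s the succession of curves that produces the exponential-looking whole.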
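And on point 4, a quick back-of-the-envelope comparison of the two growth rates mentioned above (both numbers are rough assumptions: roughly 10x per year for AI training compute, and a generous doubling every two years for classic transistor scaling):

```python
# Rough comparison of compounding rates (illustrative assumptions only).
TRAINING_COMPUTE_PER_YEAR = 10.0       # ~10x/year, the figure cited above
TRANSISTORS_PER_YEAR = 2.0 ** (1 / 2)  # doubling every two years (Moore's Law)

for years in (2, 4, 6):
    print(f"after {years} years: "
          f"training compute x{TRAINING_COMPUTE_PER_YEAR ** years:,.0f}, "
          f"transistor density x{TRANSISTORS_PER_YEAR ** years:.0f}")
```

After six years that’s a factor of a million versus a factor of eight, which is why the state of transistor scaling alone says little about the resources going into frontier training runs.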
We respect Tyson enormously, but his platform means his AI skepticism shapes how millions think about the most transformative technology of our era. Getting this conversation right matters.