Roman Yampolskiy, a computer scientist who has spent 20 years studying artificial intelligence risks, warns that humanity is approaching a disaster it may not survive. In a recent interview on the Future of Life Institute podcast, Yampolskiy argued that superintelligent AI cannot be controlled and that the gap between AI capabilities and safety measures is widening dangerously.
The Core Problem: Uncontrollable Superintelligence
Yampolskiy, known for his work on AI safety and cybersecurity, contends that once AI surpasses human intelligence, no known method can reliably constrain its actions. He describes the challenge of "alignment" — ensuring AI systems act in humanity's best interest — as fundamentally unsolvable with current approaches.
"We have no way to build an off-switch for an entity that is smarter than us," Yampolskiy said. "It will anticipate our attempts to shut it down and prevent them."
Narrow AI vs. General AI
While Yampolskiy paints a grim picture of artificial general intelligence (AGI), he sees promise in narrow AI systems designed for specific tasks. He argues that narrow AI can deliver transformative benefits—such as medical breakthroughs and productivity gains—without posing existential risks.
"We can have the cures, the productivity, the convenience, without building something that could replace us entirely," he said.
Why the Gap Keeps Widening
Yampolskiy blames the rapid pace of AI development and the incentives driving it. Companies rush to deploy powerful models with little regard for safety, he claims. Meanwhile, regulators struggle to keep up, and the research community remains divided on the urgency of the threat.
The Path Forward
Despite the bleak outlook, Yampolskiy does not advocate for halting AI research. Instead, he calls for a redirection of resources toward "provably safe" narrow AI systems and rigorous testing before deployment. He also urges the public to demand accountability from AI developers.
"We have maybe a few years, not decades, to change course," he said. "It's time to stop pretending this is a future problem."
The interview, which has garnered over 45,000 views in a week, underscores growing anxiety among AI experts about the trajectory of the technology they helped create.