AI-generated video has evolved from a curiosity into a legitimate creative tool almost overnight, and Runway has had a front-row seat to the transformation. The New York–based startup has raised nearly $860 million at a $5.3 billion valuation, positioning it to compete directly with deep-pocketed labs like Google and OpenAI.
But for Runway co-founder and CEO Cristóbal Valenzuela, video generation is just the beginning. On a recent episode of TechCrunch's Equity podcast, Valenzuela laid out his vision for the next frontier: general world models that extend well beyond Hollywood and into gaming, robotics, and potentially artificial general intelligence.
"The real constraint on filmmaking has never been technology," Valenzuela said. "What changes when it is?" He argues that removing technical barriers will unleash a wave of creativity, enabling studios to produce dozens of films for the cost of a single blockbuster.
Runway's approach to world models differs from that of Google and other labs. Rather than focusing solely on simulation accuracy, the company aims to build models that generate "nonlinear media" — interactive, real-time experiences that blur the line between consuming content and creating it. Real-time video generation, Valenzuela explains, opens up use cases far beyond traditional content production, from dynamic game environments to responsive AI assistants.
Valenzuela also pushed back on the notion that AI companions are inherently dystopian. "People find meaning in relationships with all kinds of entities," he said. "If an AI can provide comfort or companionship, that's not automatically a bad thing."
For the full conversation — including Runway's competitive strategy, the technical challenges of building world models, and Valenzuela's thoughts on the future of media — listen to the complete episode on YouTube, Apple Podcasts, Overcast, Spotify, or wherever you get your podcasts. Follow Equity on X and Threads at @EquityPod.