DailyGlimpse

Google's Gemma 4 VLA Runs on NVIDIA Jetson Orin Nano: A New Era for On-Device AI

AI
April 26, 2026 · 4:00 PM

Google has released a demonstration of its Gemma 4 Vision-Language-Action (VLA) model running on the NVIDIA Jetson Orin Nano Super developer kit. This marks a significant step toward deploying advanced multimodal AI models on edge devices, enabling robots and embedded systems to perceive, reason, and act without relying on cloud connectivity.

The demo showcases the model's ability to process visual input and generate corresponding actions in real time, all within the power envelope of a compact, low-cost module. The Jetson Orin Nano Super, with up to 67 TOPS of AI performance, provides sufficient compute for running such models locally.

"This is a game-changer for robotics and IoT applications where latency, privacy, and bandwidth constraints make cloud inference impractical," said a Google AI researcher involved in the project.

The Gemma 4 VLA model is built upon Google's Gemma architecture, which is optimized for efficient inference on resource-constrained hardware. By combining vision and language understanding with action generation, the model can interpret complex scenes and execute tasks like grasping objects or navigating environments.
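The perceive-reason-act pattern described above can be sketched as a simple control loop. This is a minimal illustration only: Google has not published an API for Gemma 4 VLA, so every name here (`vla_policy`, `Action`, the action vocabulary) is a hypothetical stand-in for how a VLA model typically maps a camera frame plus an instruction to the next action.

```python
# Illustrative sketch of a vision-language-action (VLA) control loop.
# All names are hypothetical; Gemma 4 VLA's actual interface is unpublished.

from dataclasses import dataclass


@dataclass
class Action:
    name: str
    params: dict


def vla_policy(image: bytes, instruction: str) -> Action:
    """Stub standing in for on-device VLA inference.

    A real model would encode the image and the text instruction,
    run a forward pass on the Jetson's GPU, and decode action tokens.
    Here we just branch on the instruction to keep the sketch runnable.
    """
    if "grasp" in instruction:
        return Action("grasp", {"target": "object_0"})
    return Action("move", {"dx": 0.1, "dy": 0.0})


def control_loop(frames: list[bytes], instruction: str) -> list[Action]:
    """Perceive -> reason -> act: one action per camera frame."""
    return [vla_policy(frame, instruction) for frame in frames]


if __name__ == "__main__":
    actions = control_loop([b"frame0", b"frame1"], "grasp the red block")
    print([a.name for a in actions])  # prints ['grasp', 'grasp']
```

Because the whole loop runs locally, latency is bounded by on-device inference time rather than a network round trip, which is the practical point of running the model on the Jetson itself.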

The demonstration is part of a broader trend of moving AI workloads to the edge, driven by advances in model compression and specialized hardware. Google has not yet announced a public release timeline for Gemma 4 VLA on Jetson, but the demo suggests a working prototype is already in hand.