In an unexpected move, Google has introduced two new Tensor Processing Units (TPUs). The announcement came alongside other AI developments, including a demonstration of Google's 1.5-billion-parameter Gemma 4 vision-language-action (VLA) model running on NVIDIA's compact Jetson Orin Nano Super edge device, and MIT Technology Review's summary of top tech trends.
The new TPUs, details of which remain sparse, appear to be designed for accelerating AI workloads, though Google has not yet disclosed their specifications or target applications. The chips add to Google's existing TPU lineup, which includes the v4, v5e, and v5p models used for training and inference in its cloud services.
Meanwhile, NVIDIA showcased the Gemma 4 VLA model performing real-time inference on its tiny Jetson Orin Nano Super, highlighting the growing trend of edge AI. The 1.5-billion-parameter model runs efficiently on the low-power device, opening possibilities for on-device AI applications without cloud reliance.
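A quick back-of-envelope calculation shows why a model of this size is plausible on such hardware (the Jetson Orin Nano Super developer kit ships with 8 GB of memory). The sketch below estimates weight storage alone for a 1.5-billion-parameter model at common precisions; actual deployments also need room for activations, KV caches, and runtime overhead, and the precision NVIDIA used in the demo is not stated in the source:

```python
# Back-of-envelope weight-memory estimate for a 1.5B-parameter model.
# Weights only: activations, KV cache, and runtime overhead add more.

PARAMS = 1.5e9  # parameter count from the announcement

def weights_gib(bits_per_param: float) -> float:
    """Storage needed for the weights alone, in GiB, at a given precision."""
    return PARAMS * bits_per_param / 8 / 2**30

for name, bits in [("FP32", 32), ("FP16", 16), ("INT8", 8), ("INT4", 4)]:
    print(f"{name}: {weights_gib(bits):.2f} GiB")
```

Even at FP16, the weights occupy under 3 GiB, comfortably within an 8 GB edge device's budget; 8-bit or 4-bit quantization leaves still more headroom for the rest of the inference stack.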
MIT Technology Review also released its annual list of breakthrough technologies, sifting through thousands of headlines to identify the most impactful innovations of the year.
These developments underscore the rapid pace of AI hardware and software evolution, with companies pushing the boundaries of what's possible both in the cloud and at the edge.