DailyGlimpse

Hugging Face and Graphcore Join Forces to Accelerate Transformer Models on IPU Hardware

AI
April 26, 2026 · 5:48 PM

Hugging Face has announced a new Hardware Partner Program at the 2021 AI Hardware Summit, with Graphcore as a founding member. The collaboration aims to make it easier for developers to run state-of-the-art Transformer models on Graphcore's Intelligence Processing Unit (IPU) at production scale with minimal code changes.

Graphcore's IPU is a processor designed specifically for AI workloads, featuring a massively parallel MIMD architecture and on-chip memory to deliver high performance and efficiency. The IPU is supported by the Poplar SDK, which integrates with standard frameworks like PyTorch and TensorFlow.

Through Hugging Face's new Optimum library, developers will gain access to hardware-optimized models certified by Hugging Face. The first IPU-optimized Transformer models are expected later this year, covering applications in vision, speech, translation, and text generation.

“Developers all want access to the latest and greatest hardware – like the Graphcore IPU, but there’s always that question of whether they’ll have to learn new code or processes,” said Hugging Face CEO Clément Delangue. “With Optimum and the Hugging Face Hardware Program, that’s just not an issue. It’s essentially plug-and-play.”

Early benchmarks indicate substantial performance gains for BERT on IPU systems compared with GPU-based setups, potentially cutting training time and enabling faster iteration when developing new models.