DailyGlimpse

Diffusers 0.3: Image-to-Image, Textual Inversion, and GPU Optimizations

AI
April 26, 2026 · 5:21 PM

A month and a half after releasing the diffusers library, the Hugging Face team has introduced version 0.3 with several highly requested features. The library provides a modular toolbox for diffusion models, and the latest update brings image-to-image generation, textual inversion, inpainting, and optimizations for smaller GPUs.

Image-to-Image Pipeline

The new image-to-image pipeline takes an initial image and a text prompt and generates a new image conditioned on both. This was one of the most requested features, and it can also be tried in a Space demo without writing any code.

Textual Inversion

Textual inversion personalizes Stable Diffusion using just 3-5 sample images of a concept. The community has already shared over 200 concepts. Resources include a concept library, a visual concept navigator Colab, a training Colab, and an inference Colab.

Experimental Inpainting

The experimental inpainting pipeline lets users provide an image and a mask covering the area to change, then uses Stable Diffusion to repaint that area from a prompt. A minimal Colab notebook is available, with a demo coming soon.

GPU Optimizations

With the new memory optimizations, Stable Diffusion requires only 3.2 GB of VRAM, at the cost of roughly 10% in inference speed. This makes it usable on much smaller GPUs.

Other Updates

  • macOS support
  • Experimental ONNX exporter and pipeline
  • New documentation
  • Community contributions: Stable Diffusion Videos, Diffusers Interpret, Japanese Stable Diffusion, Waifu Diffusion, Cross Attention Control, and Reusable Seeds

For more details, check out the GitHub repository and give it a star!