DailyGlimpse

ONNX Runtime and Olive Speed Up SD Turbo and SDXL Turbo Inference

AI
April 26, 2026 · 4:37 PM

Stable Diffusion Turbo and SDXL Turbo can now run faster using ONNX Runtime and Olive, Microsoft's model optimization toolkit. Olive applies hardware-aware graph optimizations to the exported ONNX models, and ONNX Runtime executes the optimized graphs, cutting inference latency and making image generation more efficient for developers and users. The result is better performance without a loss in output quality.