DailyGlimpse

Hugging Face and Bitsandbytes Launch 4-Bit Model Support for LLMs, Enabling Training on Consumer GPUs

AI
April 26, 2026 · 4:55 PM

Large language models (LLMs) are notoriously resource-intensive, making them difficult to run or train on consumer hardware. A new collaboration between Hugging Face and the bitsandbytes library aims to change that by bringing 4-bit quantization to a wide range of models, including text, vision, and multimodal architectures. The integration lets users run inference and finetune models with drastically reduced memory requirements, even on modest hardware such as a single Google Colab GPU.
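In practice, loading a model in 4-bit takes only a quantization config. A minimal sketch of the transformers + bitsandbytes integration (the model name is illustrative; running this requires a CUDA GPU with `bitsandbytes` and `accelerate` installed):

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# 4-bit settings from the integration: NF4 data type, double quantization,
# and bfloat16 compute for the matrix multiplications.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

# Any causal LM on the Hub works the same way; "facebook/opt-350m" is
# just a small example chosen here for illustration.
model = AutoModelForCausalLM.from_pretrained(
    "facebook/opt-350m",
    quantization_config=bnb_config,
    device_map="auto",
)
```

The weights are stored in 4-bit NF4, while computation is dequantized on the fly to the chosen compute dtype.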

The approach builds on earlier work such as the LLM.int8() method and now adds 4-bit precision through the bitsandbytes library. The central innovation is QLoRA, a finetuning technique that combines a 4-bit quantized pretrained language model with Low-Rank Adapters (LoRA). QLoRA reduces memory usage enough to finetune a 65-billion-parameter model on a single 48GB GPU while preserving full 16-bit finetuning performance. The resulting model family, Guanaco, reaches 99.3% of ChatGPT's performance on the Vicuna benchmark after just 24 hours of finetuning on a single GPU.
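The savings on the weights alone are easy to check with back-of-the-envelope arithmetic (a sketch; real training also needs memory for activations, the LoRA adapters, and optimizer state):

```python
# Approximate memory needed to store model weights at a given precision.
def weight_memory_gb(n_params, bits_per_param):
    """Gigabytes of storage for n_params weights at bits_per_param each."""
    return n_params * bits_per_param / 8 / 1e9

params = 65e9  # a 65-billion-parameter model, as in the QLoRA result

fp16_gb = weight_memory_gb(params, 16)  # 130.0 GB in 16-bit
nf4_gb = weight_memory_gb(params, 4)    # 32.5 GB in 4-bit
print(fp16_gb, nf4_gb)
```

The 4x reduction is what lets the quantized weights of a 65B model fit on a single 48GB GPU, with the remaining headroom used for the LoRA adapters and activations.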

Key QLoRA innovations include:

  • 4-bit NormalFloat (NF4): An information-theoretically optimal data type for normally distributed weights.
  • Double quantization: Quantizes the quantization constants to further reduce memory footprint.
  • Paged optimizers: Use paged memory to absorb memory spikes during training.
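To build intuition for why per-block quantization constants exist, and what double quantization saves, here is a toy blockwise absmax quantizer (an illustration only; it uses a uniform integer grid, not the non-uniform NF4 grid bitsandbytes actually implements):

```python
# Toy blockwise absmax quantization. Each block of weights is scaled by
# its own constant (the block's absolute maximum) and rounded to a small
# signed integer grid. NOT the real NF4 scheme, just the core idea.
def quantize_block(block, half_levels=7):
    """Map a block of floats to signed integers in [-half_levels, half_levels]."""
    absmax = max(abs(x) for x in block) or 1.0  # per-block quantization constant
    q = [round(x / absmax * half_levels) for x in block]
    return q, absmax

def dequantize_block(q, absmax, half_levels=7):
    """Recover approximate floats from the integers and the block constant."""
    return [v / half_levels * absmax for v in q]

weights = [0.4, -1.2, 0.05, 0.9]
q, c = quantize_block(weights)
restored = dequantize_block(q, c)
```

Every block must carry its float constant `c` alongside the 4-bit values; double quantization compresses those constants themselves with a second round of quantization, which the QLoRA paper reports saves on the order of 0.4 bits per parameter on average.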


The blog post also explains the FP8 and FP4 floating-point formats relevant to quantization. FP8 comes in two variants: E4M3 (4 exponent bits, 3 mantissa bits), with a range of -448 to 448, and E5M2 (5 exponent bits, 2 mantissa bits), with a range of -57344 to 57344. FP4 allows flexible splits between exponent and mantissa bits, with 3 exponent bits generally performing best.
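Those ranges follow directly from the bit layouts, under the usual conventions (E4M3 keeps its all-ones exponent usable and reserves only the all-ones mantissa pattern for NaN, while E5M2 is IEEE-754-style and reserves its all-ones exponent for infinities and NaN):

```python
# Sanity-check the FP8 dynamic ranges quoted above.
def max_e4m3():
    # Largest finite value: exponent field 1111 (15, bias 7), mantissa 110.
    # 1.75 * 2^(15 - 7) = 448
    return (1 + 6 / 8) * 2 ** (15 - 7)

def max_e5m2():
    # Largest finite value: exponent field 11110 (30, bias 15), mantissa 11.
    # 1.75 * 2^(30 - 15) = 57344
    return (1 + 3 / 4) * 2 ** (30 - 15)

print(max_e4m3(), max_e5m2())  # 448.0 57344.0
```

E4M3 trades dynamic range for precision, which is why it is usually preferred for storing weights, while E5M2's wider range suits gradients.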

This advancement makes LLMs far more accessible for researchers and developers, enabling experimentation and finetuning without expensive hardware.