Thanks to Apple engineers, you can now run Stable Diffusion on Apple Silicon using Core ML. Apple has released a repository with conversion scripts and inference code based on the 🧨 Diffusers library from Hugging Face. The team at Hugging Face has converted the official Stable Diffusion checkpoints and made them available on the Hugging Face Hub.
Update: A few weeks after this post, Hugging Face released a native Swift app, available on the Mac App Store, along with the source code for other projects.
Available Checkpoints
The converted models include:
- Stable Diffusion v1.4: converted
- Stable Diffusion v1.5: converted
- Stable Diffusion v2 base: converted
- Stable Diffusion v2.1 base: converted
Core ML supports CPU, GPU, and Apple's Neural Engine (NE), with options to split computation across devices for better performance. Each model has multiple variants to suit different hardware.
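As an illustration of how compute units are selected at the Core ML level, here is a minimal coremltools sketch. This is not part of Apple's Stable Diffusion pipeline, and the model path is hypothetical:

```python
import coremltools as ct

# Hypothetical path to one of the converted sub-models.
# compute_units mirrors the pipeline's --compute-unit options:
# ALL, CPU_AND_GPU, CPU_ONLY, CPU_AND_NE.
model = ct.models.MLModel(
    "path/to/TextEncoder.mlpackage",
    compute_units=ct.ComputeUnit.CPU_AND_NE,
)
```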
Performance Notes
Key variants:
- Attention type: `original` (CPU/GPU only) vs `split_einsum` (all compute units). `original` may be faster on some devices.
- Format: `packages` for Python, `compiled` for Swift. Compiled models split the UNet into multiple files for iOS/iPadOS compatibility.
Testing on a MacBook Pro with M1 Max (32 GPU cores, 64 GB RAM), the best results came from using `original` attention, all compute units, and macOS Ventura 13.1 Beta 4 (22C5059b). This generated one image in 18 seconds.
Note: macOS Ventura 13.1 is required for Apple's implementation. Older versions may produce black images and noticeably slower generation times.
Example repo structure:
```
coreml-stable-diffusion-v1-4
├── README.md
├── original
│   ├── compiled
│   └── packages
└── split_einsum
    ├── compiled
    └── packages
```
Core ML Inference in Python
Prerequisites
```bash
pip install huggingface_hub
pip install git+https://github.com/apple/ml-stable-diffusion
```
Download the Model
```python
from huggingface_hub import snapshot_download
from pathlib import Path

repo_id = "apple/coreml-stable-diffusion-v1-4"
variant = "original/packages"

model_path = Path("./models") / (repo_id.split("/")[-1] + "_" + variant.replace("/", "_"))
snapshot_download(repo_id, allow_patterns=f"{variant}/*", local_dir=model_path, local_dir_use_symlinks=False)
print(f"Model downloaded at {model_path}")
```
Inference
Use Apple's Python script:
```bash
python -m python_coreml_stable_diffusion.pipeline --prompt "a photo of an astronaut riding a horse on mars" -i models/coreml-stable-diffusion-v1-4_original_packages -o </path/to/output/image> --compute-unit ALL --seed 93
```
Options: `--compute-unit` can be `ALL`, `CPU_AND_GPU`, `CPU_ONLY`, or `CPU_AND_NE`. If using a different model, specify its Hub ID with `--model-version`.
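For instance, assuming you downloaded the v1-5 weights to a folder following the same naming scheme (a hypothetical path), the invocation would look like this:

```bash
python -m python_coreml_stable_diffusion.pipeline --prompt "a photo of an astronaut riding a horse on mars" -i models/coreml-stable-diffusion-v1-5_original_packages --model-version runwayml/stable-diffusion-v1-5 -o </path/to/output/image> --compute-unit ALL --seed 93
```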
Core ML Inference in Swift
Download
Download the `compiled` variant. Example for `original/compiled`:
```python
from huggingface_hub import snapshot_download
from pathlib import Path

repo_id = "apple/coreml-stable-diffusion-v1-4"
variant = "original/compiled"

model_path = Path("./models") / (repo_id.split("/")[-1] + "_" + variant.replace("/", "_"))
snapshot_download(repo_id, allow_patterns=f"{variant}/*", local_dir=model_path, local_dir_use_symlinks=False)
print(f"Model downloaded at {model_path}")
```
Inference
Run inference with Apple's Swift package from the ml-stable-diffusion repository.
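A sketch of the command-line interface, assuming the `StableDiffusionSample` target and flags described in Apple's repository README (check the repo for the current interface), with `--resource-path` pointing at the files downloaded above:

```bash
swift run StableDiffusionSample "a photo of an astronaut riding a horse on mars" \
    --resource-path models/coreml-stable-diffusion-v1-4_original_compiled/original/compiled \
    --seed 93 \
    --output-path </path/to/output/image>
```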
Bring Your Own Model
Convert custom models using Apple's conversion script. For example:
```bash
python -m python_coreml_stable_diffusion.torch2coreml --convert-unet --convert-text-encoder --convert-vae-decoder --convert-safety-checker --model-version runwayml/stable-diffusion-v1-5 -o ./output
```
Use `--chunk-unet` to split the UNet into multiple chunks for iOS/iPadOS compatibility.
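For example, the same conversion with the UNet chunked for iOS is simply the command above plus `--chunk-unet`:

```bash
python -m python_coreml_stable_diffusion.torch2coreml --convert-unet --convert-text-encoder --convert-vae-decoder --convert-safety-checker --chunk-unet --model-version runwayml/stable-diffusion-v1-5 -o ./output
```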
Next Steps
Explore the Hugging Face Swift app or the Apple repository for more details.