Image Generation with Diffusers#
In this notebook, we will explore how to use diffusion models with Hugging Face's Diffusers library. Since these models are computationally intensive, we will stick to a small example using a lightweight model.
Implementation#
We use the small-stable-diffusion model by OFA-Sys (OFA-Sys/small-stable-diffusion-v0).
from diffusers import DiffusionPipeline
diffuser = DiffusionPipeline.from_pretrained("OFA-Sys/small-stable-diffusion-v0")  # alternative: "stabilityai/sdxl-turbo"
vae/diffusion_pytorch_model.safetensors not found
Loading pipeline components...: 29%|██▊ | 2/7 [00:01<00:03, 1.48it/s]The config attributes {'predict_epsilon': True} were passed to DPMSolverMultistepScheduler, but are not expected and will be ignored. Please verify your scheduler_config.json configuration file.
Loading pipeline components...: 100%|██████████| 7/7 [00:01<00:00, 4.05it/s]
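The predict_epsilon warning is harmless: as the message itself says, the scheduler simply ignores the unexpected config attribute. As a quick sanity check, you can list the components the pipeline loaded; this is a minimal sketch using the pipeline's components dictionary:

for name, component in diffuser.components.items():
    print(name, type(component).__name__)  # e.g. unet, vae, scheduler, ...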
Generation may take a few minutes, depending on your computer.
outputAutre = diffuser("A car in the winter")
100%|██████████| 50/50 [03:01<00:00, 3.63s/it]
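If generation is too slow, you can move the pipeline to a GPU and lower the number of denoising steps, trading some image quality for speed. A minimal sketch, assuming a CUDA device may be available (outputFast is just an illustrative name):

import torch
if torch.cuda.is_available():
    diffuser = diffuser.to("cuda")  # run the denoising loop on the GPU
# Fewer denoising steps finish faster, at some cost in image quality.
outputFast = diffuser("A car in the winter", num_inference_steps=25)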
import matplotlib.pyplot as plt
print(outputAutre.images[0])  # the pipeline returns PIL.Image objects
plt.imshow(outputAutre.images[0])
plt.axis("off")  # hide the axes around the generated image
plt.show()
<PIL.Image.Image image mode=RGB size=512x512 at 0x733D54E7F210>
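Since the pipeline returns standard PIL images, you can save the result to disk with a single call (the filename here is just an example):

outputAutre.images[0].save("car_in_winter.png")  # write the generated image to disk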

You now know how to generate images using Hugging Face's Diffusers library.
To learn more: if you want to explore further uses of diffusion models, I recommend the free course "Prompt Engineering for Vision Models" on deeplearning.ai, where you will learn how to replace an object in an image using SAM and a diffusion model.