Image Generation with Diffusers


In this notebook, we will explore how to use diffusion models with Hugging Face's Diffusers library. Since these models are computationally intensive, we will stick to a small example using a lightweight model.

Implementation#

We use the small-stable-diffusion model by OFA-Sys (OFA-Sys/small-stable-diffusion-v0).

from diffusers import DiffusionPipeline

diffuser = DiffusionPipeline.from_pretrained("OFA-Sys/small-stable-diffusion-v0")  # alternative: "stabilityai/sdxl-turbo"
vae/diffusion_pytorch_model.safetensors not found
Loading pipeline components...:  29%|██▊       | 2/7 [00:01<00:03,  1.48it/s]The config attributes {'predict_epsilon': True} were passed to DPMSolverMultistepScheduler, but are not expected and will be ignored. Please verify your scheduler_config.json configuration file.
Loading pipeline components...: 100%|██████████| 7/7 [00:01<00:00,  4.05it/s]

Generation may take a few minutes, depending on your computer.
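If a CUDA-capable GPU is available, moving the pipeline to it speeds generation up considerably. A minimal sketch of device selection (assuming PyTorch is installed, which Diffusers requires; the `.to()` call is shown commented out so it can be applied to the pipeline loaded above):

```python
import torch

# Pick a device: CUDA GPU if available, otherwise fall back to the CPU.
device = "cuda" if torch.cuda.is_available() else "cpu"

# Move the pipeline to the selected device (uncomment with the pipeline above):
# diffuser = diffuser.to(device)
print(device)
```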

outputAutre = diffuser("A car in the winter")
100%|██████████| 50/50 [03:01<00:00,  3.63s/it]
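The 50 iterations in the progress bar above are the pipeline's default number of denoising steps. The call accepts arguments that trade speed against quality; a sketch with illustrative values (these are assumptions, not the settings used above):

```python
# Common call arguments for Stable Diffusion pipelines; fewer denoising
# steps means faster but typically lower-quality generation.
generation_kwargs = {
    "num_inference_steps": 25,  # the run above used the default of 50
    "guidance_scale": 7.5,      # how closely the image follows the prompt
}
# output = diffuser("A car in the winter", **generation_kwargs)
print(generation_kwargs)
```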
import matplotlib.pyplot as plt
print(outputAutre.images[0])
plt.imshow(outputAutre.images[0])
plt.axis("off")
plt.show()
<PIL.Image.Image image mode=RGB size=512x512 at 0x733D54E7F210>
(Generated image: a car in the winter)
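As the printed output shows, the result is a regular PIL `Image`, so it can be written to disk with `.save()`. A minimal sketch (the file name is an assumption, and a blank stand-in image is created so the snippet runs on its own without the pipeline):

```python
from PIL import Image

# Stand-in for outputAutre.images[0], which is a 512x512 RGB PIL image.
image = Image.new("RGB", (512, 512))
# image = outputAutre.images[0]  # use this line with the pipeline output above

image.save("car_in_winter.png")  # PIL infers the PNG format from the extension
```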

You now know how to generate images using Hugging Face's Diffusers library.

To learn more: If you want to explore more about the possible uses of diffusion models, I recommend the free course "Prompt Engineering for Vision Models" on deeplearning.ai. You will learn how to replace an object in an image using SAM and a diffusion model.