Stable Diffusion Pipeline Tuning with NVIDIA AITune

This example demonstrates how to use NVIDIA AITune to tune the Stable Diffusion text-to-image model from Hugging Face's diffusers library.

Environment Setup

You can use either of the following options to set up the environment:

Option 1 - virtual environment managed by you

Activate your virtual environment and install the dependencies:

pip install --extra-index-url https://pypi.nvidia.com .

Option 2 - virtual environment managed by uv

Install dependencies:

uv sync

Usage

Tuning the model

To tune the Stable Diffusion model, run:

tune --model-name stabilityai/stable-diffusion-3-medium-diffusers --prompt "A futuristic cityscape with neon lights"

You can customize the following parameters:

- --model-name: Hugging Face model name or path (default: "stabilityai/stable-diffusion-3-medium-diffusers")
- --prompt: Text prompt for image generation
- --negative-prompt: Negative text prompt (default: "low quality, blurry")
- --height: Height of the generated image (default: 512)
- --width: Width of the generated image (default: 512)
- --steps: Number of inference steps (default: 50)
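For reference, the documented flags and defaults can be mirrored in a small argparse sketch. This is illustrative only; the real tune CLI may define its arguments differently:

```python
import argparse

def build_parser():
    # Mirrors the documented tune parameters and their defaults
    # (illustrative sketch, not the actual AITune implementation).
    p = argparse.ArgumentParser(prog="tune")
    p.add_argument("--model-name",
                   default="stabilityai/stable-diffusion-3-medium-diffusers",
                   help="Hugging Face model name or path")
    p.add_argument("--prompt", required=True,
                   help="Text prompt for image generation")
    p.add_argument("--negative-prompt", default="low quality, blurry",
                   help="Negative text prompt")
    p.add_argument("--height", type=int, default=512,
                   help="Height of the generated image")
    p.add_argument("--width", type=int, default=512,
                   help="Width of the generated image")
    p.add_argument("--steps", type=int, default=50,
                   help="Number of inference steps")
    return p

if __name__ == "__main__":
    args = build_parser().parse_args(
        ["--prompt", "A futuristic cityscape with neon lights", "--steps", "30"])
    print(args.model_name, args.height, args.steps)
```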

Generating images with the tuned model

After tuning, generate images with:

inference --prompt "A beautiful landscape with mountains and a lake" --output-dir output

The generated image will be saved in the specified output directory.
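If you script around the inference command, a small helper can pick out the most recently written image from the output directory. This is a convenience sketch; the inference script itself chooses the output filename:

```python
import os
import tempfile
from pathlib import Path

def latest_image(output_dir):
    """Return the most recently written image file in output_dir, or None.

    Helper sketch for post-processing the inference output directory;
    the filename pattern used by the inference script is not assumed.
    """
    images = [p for p in Path(output_dir).iterdir()
              if p.suffix.lower() in {".png", ".jpg", ".jpeg"}]
    return max(images, key=lambda p: p.stat().st_mtime, default=None)

if __name__ == "__main__":
    # Demo with stand-in files and explicit mtimes.
    with tempfile.TemporaryDirectory() as d:
        for name, mtime in [("a.png", 100), ("b.png", 200), ("notes.txt", 300)]:
            f = Path(d) / name
            f.write_bytes(b"")
            os.utime(f, (mtime, mtime))
        print(latest_image(d).name)  # prints "b.png" (notes.txt is not an image)
```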

AI Dynamo Stable Diffusion Deployment

To run Stable Diffusion as an AI Dynamo service, we have prepared a few additional configs and scripts.

The entry point is stable_diffusion/dynamo/backend.py. Docker and Docker Compose are used to keep setup simple.

First, build and start all required services by running docker compose --profile all up --detach.

After tuning completes and the services are up, run the commands below to test the service.

python -m stable_diffusion.dynamo.client --help # to see the available options
python -m stable_diffusion.dynamo.client --num-requests 1
python -m stable_diffusion.dynamo.client --num-requests 2
python -m stable_diffusion.dynamo.client --num-requests 4
python -m stable_diffusion.dynamo.client --num-requests 8
python -m stable_diffusion.dynamo.client --num-requests 100

Finally, to shut it down use docker compose --profile all down.

Dynamic batching

The service uses dynamic batching — requests are grouped and processed together for efficiency. Currently, there is one frontend and one worker. To support multiple workers, move batching to a separate service that handles request grouping.
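The batching pattern described above can be sketched with asyncio. This is a minimal illustration of dynamic batching, not the actual stable_diffusion.dynamo implementation; all names here are hypothetical:

```python
import asyncio

class DynamicBatcher:
    """Groups incoming requests into batches, flushing when the batch is
    full or a short wait expires (illustrative sketch of the pattern)."""

    def __init__(self, handler, max_batch_size=8, max_wait=0.05):
        self.handler = handler              # processes a list of requests
        self.max_batch_size = max_batch_size
        self.max_wait = max_wait
        self.queue = asyncio.Queue()
        self.worker = None

    async def submit(self, request):
        future = asyncio.get_running_loop().create_future()
        await self.queue.put((request, future))
        if self.worker is None:
            self.worker = asyncio.create_task(self._run())
        return await future

    async def _run(self):
        while True:
            batch = [await self.queue.get()]
            deadline = asyncio.get_running_loop().time() + self.max_wait
            # Keep collecting until the batch is full or the wait expires.
            while len(batch) < self.max_batch_size:
                timeout = deadline - asyncio.get_running_loop().time()
                if timeout <= 0:
                    break
                try:
                    batch.append(await asyncio.wait_for(self.queue.get(), timeout))
                except asyncio.TimeoutError:
                    break
            requests = [req for req, _ in batch]
            results = self.handler(requests)  # one call for the whole batch
            for (_, fut), res in zip(batch, results):
                fut.set_result(res)

async def demo():
    batch_sizes = []

    def render(prompts):
        # Stand-in for a batched diffusion call.
        batch_sizes.append(len(prompts))
        return [f"image:{p}" for p in prompts]

    batcher = DynamicBatcher(render, max_batch_size=4, max_wait=0.05)
    results = await asyncio.gather(
        *(batcher.submit(f"prompt-{i}") for i in range(8)))
    return results, batch_sizes

if __name__ == "__main__":
    results, batch_sizes = asyncio.run(demo())
    print(batch_sizes)  # requests were grouped, none larger than 4
```

Moving this grouping logic into its own service in front of several workers is what the multi-worker extension above amounts to.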

Model Details

The Stable Diffusion model is a text-to-image diffusion model that generates high-quality images from text descriptions. The model is trained on a large dataset of images and text, and can generate realistic images across various domains.

For more information, visit the Stable Diffusion model page on HuggingFace.