docker.recipes

Stable Diffusion WebUI

intermediate

Web interface for Stable Diffusion image generation.

Overview

Stable Diffusion is an open-source deep learning model that generates high-quality images from text descriptions using latent diffusion. Released in 2022 by Stability AI, it made AI image generation practical on consumer hardware, unlike earlier models that required expensive cloud services. The AUTOMATIC1111 WebUI has become the de facto standard interface for Stable Diffusion, providing an intuitive web-based platform with advanced features like inpainting, outpainting, and ControlNet integration.

This Docker configuration deploys the AUTOMATIC1111 Stable Diffusion WebUI with full GPU acceleration, letting users generate artwork, concept designs, and creative imagery through a browser. The WebUI includes multiple sampling methods, prompt engineering tools, and extension support that turn basic text-to-image generation into a comprehensive creative suite. The containerized approach eliminates complex Python environment management while providing consistent behavior across systems.

This stack suits digital artists, game developers, content creators, and AI enthusiasts who want professional-grade image generation without cloud dependencies. The WebUI's extensive customization options, model management system, and active community ecosystem serve both hobbyists exploring AI art and professionals integrating generative AI into production workflows.

Key Features

  • Text-to-image generation with 50+ sampling algorithms including DPM++, Euler, and DDIM
  • Image-to-image transformation with strength controls and noise injection
  • Inpainting and outpainting for precise image editing and extension
  • LoRA (Low-Rank Adaptation) support for style fine-tuning and character consistency
  • Textual Inversion embeddings for custom concepts and artistic styles
  • ControlNet integration for pose, depth, and edge-guided image generation
  • Extensions marketplace with 200+ community plugins for advanced functionality
  • Batch processing and grid generation for exploring parameter variations

Common Use Cases

  • Digital art studios generating concept art and character designs for games and films
  • Marketing agencies creating unique visuals for campaigns and social media content
  • Independent artists exploring AI-assisted creative workflows and style experimentation
  • Game developers prototyping environments, textures, and asset concepts
  • E-commerce businesses generating product mockups and lifestyle imagery
  • Educational institutions teaching AI art generation and machine learning concepts
  • Content creators producing thumbnails, illustrations, and visual storytelling elements

Prerequisites

  • NVIDIA GPU with 8GB+ VRAM (RTX 3070 or better recommended for optimal performance)
  • Docker and Docker Compose installed with NVIDIA Container Toolkit configured
  • 50GB+ available disk space for models, outputs, and extensions
  • 16GB+ system RAM for stable operation during high-resolution generation
  • Understanding of Stable Diffusion concepts: prompts, samplers, and CFG scale
  • Basic familiarity with model formats (.safetensors, .ckpt) and installation procedures
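Before starting the stack, it is worth confirming that Docker can actually reach the GPU. A quick sanity check, assuming the NVIDIA Container Toolkit is installed (the CUDA image tag here is just an example; any recent `nvidia/cuda` base image works):

```shell
# This should print the same GPU table as running nvidia-smi on the host.
docker run --rm --gpus all nvidia/cuda:12.4.1-base-ubuntu22.04 nvidia-smi

# If that fails, check whether the NVIDIA runtime is registered with Docker.
docker info | grep -i nvidia
```

If `nvidia-smi` errors out inside the container, fix the toolkit installation first; the compose file's `deploy.resources.reservations.devices` section depends on it.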

For development & testing. Review security settings, change default credentials, and test thoroughly before production use.

docker-compose.yml

services:
  stable-diffusion:
    image: ghcr.io/geszti/sd-webui:latest
    container_name: stable-diffusion
    restart: unless-stopped
    volumes:
      - sd_models:/app/stable-diffusion-webui/models
      - sd_outputs:/app/stable-diffusion-webui/outputs
    ports:
      - "7860:7860"
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all
              capabilities: [gpu]

volumes:
  sd_models:
  sd_outputs:

.env Template

.env
# Requires NVIDIA GPU with 8GB+ VRAM

Usage Notes

  1. Docs: https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki
  2. Access at http://localhost:7860 - first start downloads the default model (~4GB)
  3. Place .safetensors models in the models/Stable-diffusion folder
  4. LoRAs go in models/Lora, embeddings in /embeddings
  5. Requires 8GB+ VRAM; use --medvram or --lowvram for cards with less
  6. Extensions marketplace available in the UI for extra features
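Because models live in the named volume sd_models, the easiest way to add files is `docker cp` into the running container rather than locating the volume's host path. A sketch, assuming you have already downloaded a checkpoint (the filenames are placeholders):

```shell
# Copy a downloaded checkpoint into the container's models folder.
# "stable-diffusion" is the container_name from the compose file.
docker cp ./my-model.safetensors \
  stable-diffusion:/app/stable-diffusion-webui/models/Stable-diffusion/

# LoRAs go in a sibling folder under models/.
docker cp ./my-lora.safetensors \
  stable-diffusion:/app/stable-diffusion-webui/models/Lora/

# The WebUI picks up new checkpoints after a refresh of the checkpoint
# dropdown in the UI, or after a container restart:
docker compose restart stable-diffusion
```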

Quick Start

terminal
# 1. Create the compose file
cat > docker-compose.yml << 'EOF'
services:
  stable-diffusion:
    image: ghcr.io/geszti/sd-webui:latest
    container_name: stable-diffusion
    restart: unless-stopped
    volumes:
      - sd_models:/app/stable-diffusion-webui/models
      - sd_outputs:/app/stable-diffusion-webui/outputs
    ports:
      - "7860:7860"
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all
              capabilities: [gpu]

volumes:
  sd_models:
  sd_outputs:
EOF

# 2. Create the .env file
cat > .env << 'EOF'
# Requires NVIDIA GPU with 8GB+ VRAM
EOF

# 3. Start the services
docker compose up -d

# 4. View logs
docker compose logs -f
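First startup can take a while (the ~4GB model download plus weight loading), so the port answering is the real signal that the UI is ready. A minimal readiness poll, assuming the default 7860 port mapping:

```shell
# Poll until the WebUI responds on port 7860; first boot may take minutes.
until curl -fsS -o /dev/null http://localhost:7860; do
  echo "Waiting for Stable Diffusion WebUI..."
  sleep 10
done
echo "WebUI is up at http://localhost:7860"
```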

One-Liner

Run this command to download and set up the recipe in one step:

terminal
curl -fsSL https://docker.recipes/api/recipes/stable-diffusion-webui/run | bash

Troubleshooting

  • CUDA out of memory errors: Add --medvram or --lowvram arguments to container startup command
  • Model download failures: Manually download models to the sd_models volume and restart container
  • WebUI not accessible on port 7860: Check firewall settings and ensure port mapping is correct
  • Extensions failing to install: Clear extension cache and ensure stable internet connection
  • Generated images appear corrupted: Update to latest WebUI version and verify model integrity
  • Slow generation times: Enable xformers optimization and adjust batch size settings
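For the out-of-memory and slow-generation cases above, the flags are usually passed through the compose file rather than by editing the container. A sketch of one possible override; note that the CLI_ARGS environment variable is an assumption borrowed from several community sd-webui images, so check this image's README for its actual mechanism:

```yaml
services:
  stable-diffusion:
    # Hypothetical: many sd-webui images forward CLI_ARGS to launch.py.
    # --medvram trades speed for lower VRAM use; --xformers enables
    # memory-efficient attention for faster generation.
    environment:
      - CLI_ARGS=--medvram --xformers
```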


