docker.recipes

ComfyUI

intermediate

Powerful node-based GUI for Stable Diffusion with advanced workflow capabilities

Overview

ComfyUI is a powerful node-based graphical user interface for Stable Diffusion that builds image generation around visual workflows. Instead of a fixed form of settings, users construct pipelines by connecting nodes that represent individual operations such as text encoding, sampling, VAE processing, and model loading. This visual approach makes the underlying process easier to follow and simplifies experimenting with advanced techniques like ControlNet integration, LoRA blending, and multi-stage refinement.

This Docker configuration uses the ai-dock optimized ComfyUI image to provide GPU-accelerated inference with CUDA support and automatic model management. The stack includes persistent storage for models, custom nodes, workflows, and generated outputs, so you can build up a complete AI image generation workspace over time. The container ships with its Python dependencies and CUDA libraries pre-configured, removing the complexity of manual environment setup while keeping the flexibility to install custom nodes and models.

This setup is ideal for AI researchers, digital artists, content creators, and developers who need more control and flexibility than simpler Stable Diffusion interfaces provide. ComfyUI's node-based approach is particularly valuable for users who want to understand the image generation process, create reusable workflows, or implement complex multi-model pipelines that would be difficult to express in a form-based UI.

Key Features

  • Node-based visual workflow editor for building complex Stable Diffusion pipelines without coding
  • GPU acceleration with NVIDIA CUDA support for fast image generation and model inference
  • Support for multiple Stable Diffusion model formats including CKPT, SafeTensors, and Diffusers
  • Extensive LoRA and embedding support with real-time weight adjustment and blending capabilities
  • ControlNet integration for precise pose, depth, and edge guidance in image generation
  • Custom node ecosystem allowing installation of community-developed extensions and features
  • Workflow serialization as JSON files for sharing and version control of generation pipelines
  • Advanced sampling methods including DPM++, Euler, and custom schedulers with fine-grained control

Common Use Cases

  • Digital artists creating concept art with precise control over composition, style, and details
  • AI researchers experimenting with multi-model workflows and advanced sampling techniques
  • Content creators building reusable templates for consistent character or product generation
  • Game developers generating textures, sprites, and concept art with ControlNet guidance
  • Marketing teams creating product variations and advertising visuals with batch processing workflows
  • Hobbyists exploring advanced AI art techniques like image-to-image transformation and inpainting
  • Educational institutions teaching AI image generation concepts through visual workflow representation

Prerequisites

  • NVIDIA GPU with at least 6GB VRAM for Stable Diffusion 1.5 models or 12GB+ for SDXL models
  • NVIDIA Container Toolkit installed and configured for Docker GPU access (a quick check is shown after this list)
  • At least 16GB system RAM for smooth operation with large models and complex workflows
  • 50GB+ free disk space for models, custom nodes, and generated image storage
  • Basic understanding of Stable Diffusion concepts like checkpoints, VAE, and sampling methods
  • Docker and Docker Compose installed with user added to docker group for container management
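
To confirm the GPU prerequisites before starting the stack, a minimal check (this assumes the NVIDIA Container Toolkit is installed; the CUDA image tag below is only an example, any recent tag that includes nvidia-smi works):

terminal
# Verify the NVIDIA driver works on the host
nvidia-smi

# Verify Docker can pass the GPU through to a container
docker run --rm --gpus all nvidia/cuda:12.4.1-base-ubuntu22.04 nvidia-smi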

This recipe is intended for development and testing. Review security settings, change default credentials, and test thoroughly before production use.

docker-compose.yml

docker-compose.yml
services:
  comfyui:
    image: ghcr.io/ai-dock/comfyui:latest
    container_name: comfyui
    restart: unless-stopped
    ports:
      - "${COMFYUI_PORT:-8188}:8188"
    volumes:
      - ./models:/opt/ComfyUI/models
      - ./output:/opt/ComfyUI/output
      - ./input:/opt/ComfyUI/input
      - ./custom_nodes:/opt/ComfyUI/custom_nodes
      - ./workflows:/opt/ComfyUI/user/default/workflows
    environment:
      - NVIDIA_VISIBLE_DEVICES=all
      - CLI_ARGS=${CLI_ARGS:---listen 0.0.0.0}
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all
              capabilities: [gpu]
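
The bind mounts above reference directories next to the compose file. Creating them up front keeps ownership with your user (a small sketch; if they are missing, Docker creates them as root-owned directories):

terminal
mkdir -p models output input custom_nodes workflows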

.env Template

.env
# ComfyUI Configuration
COMFYUI_PORT=8188

# Additional CLI arguments
# --listen 0.0.0.0: Listen on all interfaces
# --port 8188: Port number
# --enable-cors-header: Enable CORS
CLI_ARGS=--listen 0.0.0.0
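
The comments above suggest CLI_ARGS carries ComfyUI's own command-line flags. As one example, upstream ComfyUI has a --lowvram flag for GPUs with limited memory; whether the ai-dock image forwards every upstream flag through CLI_ARGS unchanged is an assumption worth verifying against its documentation:

.env
# Example for low-VRAM GPUs (assumes flags pass through to ComfyUI unchanged)
CLI_ARGS=--listen 0.0.0.0 --lowvram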

Usage Notes

  1. Requires NVIDIA GPU with CUDA support
  2. WebUI available at http://localhost:8188
  3. Download models to ./models/ subdirectories (see the example after this list)
  4. Install custom nodes to ./custom_nodes/
  5. Save/load workflows as JSON files
  6. More flexible than AUTOMATIC1111 for complex pipelines
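
A rough sketch of the model layout under ./models/; the subdirectory names follow ComfyUI's defaults, and <MODEL_URL> is a placeholder for whichever checkpoint you choose to download:

terminal
# Common ComfyUI model subdirectories (backed by the ./models bind mount)
mkdir -p models/checkpoints models/vae models/loras models/controlnet models/embeddings

# Download a checkpoint into place (replace <MODEL_URL> and the filename)
curl -L -o models/checkpoints/model.safetensors "<MODEL_URL>"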

Quick Start

terminal
# 1. Create the compose file
cat > docker-compose.yml << 'EOF'
services:
  comfyui:
    image: ghcr.io/ai-dock/comfyui:latest
    container_name: comfyui
    restart: unless-stopped
    ports:
      - "${COMFYUI_PORT:-8188}:8188"
    volumes:
      - ./models:/opt/ComfyUI/models
      - ./output:/opt/ComfyUI/output
      - ./input:/opt/ComfyUI/input
      - ./custom_nodes:/opt/ComfyUI/custom_nodes
      - ./workflows:/opt/ComfyUI/user/default/workflows
    environment:
      - NVIDIA_VISIBLE_DEVICES=all
      - CLI_ARGS=${CLI_ARGS:---listen 0.0.0.0}
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all
              capabilities: [gpu]
EOF

# 2. Create the .env file
cat > .env << 'EOF'
# ComfyUI Configuration
COMFYUI_PORT=8188

# Additional CLI arguments
# --listen 0.0.0.0: Listen on all interfaces
# --port 8188: Port number
# --enable-cors-header: Enable CORS
CLI_ARGS=--listen 0.0.0.0
EOF

# 3. Start the services
docker compose up -d

# 4. View logs
docker compose logs -f
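
Routine maintenance afterwards is the standard compose workflow; a short sketch:

terminal
# Update to a newer image and recreate the container
# (models, outputs, and workflows persist in the bind mounts)
docker compose pull && docker compose up -d

# Stop and remove the container without touching the downloaded models
docker compose down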

One-Liner

Run this command to download and set up the recipe in one step:

terminal
curl -fsSL https://docker.recipes/api/recipes/comfyui/run | bash

Troubleshooting

  • CUDA out of memory errors: Reduce batch size, use model offloading, or switch to lower precision models in workflow settings
  • Models not loading: Ensure model files are placed in correct subdirectories within ./models/ (checkpoints, vae, loras, etc.)
  • Custom nodes failing to install: Check node compatibility with ComfyUI version and manually install dependencies via container shell
  • Slow generation times: Verify GPU is being utilized with nvidia-smi and check the NVIDIA_VISIBLE_DEVICES environment variable (see the commands after this list)
  • Workflow JSON import errors: Update ComfyUI version as older versions may not support newer node types and connections
  • Web interface not accessible: Confirm port 8188 is not blocked by firewall and CLI_ARGS includes --listen 0.0.0.0 for external access
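
For the GPU-related items above, a quick diagnostic pass from the host (this assumes the ai-dock image includes nvidia-smi, as CUDA-based images typically do):

terminal
# Check GPU visibility and utilization from inside the running container
docker exec comfyui nvidia-smi

# Watch logs for CUDA or model-loading errors
docker compose logs -f comfyui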

Download Recipe Kit

Get all files in a ready-to-deploy package

Includes docker-compose.yml, .env template, README, and license
