ComfyUI
Powerful node-based GUI for Stable Diffusion with advanced workflow capabilities
Overview
ComfyUI is a powerful node-based graphical user interface for Stable Diffusion that lets users build AI image generation pipelines as visual workflows. Developed as an alternative to traditional form-based interfaces, ComfyUI allows users to construct complex generation pipelines by connecting nodes that represent different operations such as text encoding, sampling, VAE processing, and model loading. This visual approach makes the underlying process easier to understand and simplifies experimentation with advanced techniques like ControlNet integration, LoRA blending, and multi-stage refinement.
This Docker configuration leverages the ai-dock optimized ComfyUI image to provide GPU-accelerated inference with CUDA support and automatic model management. The stack includes persistent storage for models, custom nodes, workflows, and generated outputs, enabling users to build a comprehensive AI image generation workspace. The container comes pre-configured with Python dependencies and CUDA libraries, eliminating the complexity of manual environment setup while maintaining the flexibility to install custom nodes and models.
This setup is ideal for AI researchers, digital artists, content creators, and developers who need more control and flexibility than traditional Stable Diffusion interfaces provide. ComfyUI's node-based approach makes it particularly valuable for users who want to understand the image generation process, create reusable workflows, or implement complex multi-model pipelines that would be difficult to achieve with simpler interfaces.
Key Features
- Node-based visual workflow editor for building complex Stable Diffusion pipelines without coding
- GPU acceleration with NVIDIA CUDA support for fast image generation and model inference
- Support for multiple Stable Diffusion model formats including CKPT, SafeTensors, and Diffusers
- Extensive LoRA and embedding support with real-time weight adjustment and blending capabilities
- ControlNet integration for precise pose, depth, and edge guidance in image generation
- Custom node ecosystem allowing installation of community-developed extensions and features
- Workflow serialization as JSON files for sharing and version control of generation pipelines (see the sketch after this list)
- Advanced sampling methods including DPM++, Euler, and custom schedulers with fine-grained control
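Because workflows serialize to JSON, they can also be queued programmatically instead of through the browser. The commands below are a hedged sketch, not part of this recipe: they assume a workflow already exported in ComfyUI's API format (saved here as workflow_api.json, a hypothetical filename), that jq is installed on the host, and that the server's prompt-queue endpoint is POST /prompt expecting the graph under a "prompt" key, as current ComfyUI builds provide.
terminal
# Queue an API-format workflow against a local ComfyUI instance (sketch; adjust host/port as needed)
# workflow_api.json is a placeholder name for a workflow exported in API format
jq -n --slurpfile wf workflow_api.json '{prompt: $wf[0]}' \
  | curl -s -X POST http://localhost:8188/prompt \
      -H "Content-Type: application/json" \
      -d @-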
Common Use Cases
- Digital artists creating concept art with precise control over composition, style, and details
- AI researchers experimenting with multi-model workflows and advanced sampling techniques
- Content creators building reusable templates for consistent character or product generation
- Game developers generating textures, sprites, and concept art with ControlNet guidance
- Marketing teams creating product variations and advertising visuals with batch processing workflows
- Hobbyists exploring advanced AI art techniques like image-to-image transformation and inpainting
- Educational institutions teaching AI image generation concepts through visual workflow representation
Prerequisites
- NVIDIA GPU with at least 6GB VRAM for Stable Diffusion 1.5 models or 12GB+ for SDXL models
- NVIDIA Container Toolkit installed and configured for Docker GPU access (a quick check follows this list)
- At least 16GB system RAM for smooth operation with large models and complex workflows
- 50GB+ free disk space for models, custom nodes, and generated image storage
- Basic understanding of Stable Diffusion concepts like checkpoints, VAE, and sampling methods
- Docker and Docker Compose installed with user added to docker group for container management
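Before starting the stack, it is worth confirming that Docker can actually see the GPU. The command below is a minimal check, assuming the NVIDIA Container Toolkit and a CUDA-capable driver are installed; the CUDA image tag is only illustrative and can be swapped for any available one.
terminal
# Should print the same GPU table as running nvidia-smi directly on the host
docker run --rm --gpus all nvidia/cuda:12.4.1-base-ubuntu22.04 nvidia-smi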
This recipe is intended for development and testing. Review security settings, change any default credentials, and test thoroughly before production use.
docker-compose.yml
services:
  comfyui:
    image: ghcr.io/ai-dock/comfyui:latest
    container_name: comfyui
    restart: unless-stopped
    ports:
      - "${COMFYUI_PORT:-8188}:8188"
    volumes:
      - ./models:/opt/ComfyUI/models
      - ./output:/opt/ComfyUI/output
      - ./input:/opt/ComfyUI/input
      - ./custom_nodes:/opt/ComfyUI/custom_nodes
      - ./workflows:/opt/ComfyUI/user/default/workflows
    environment:
      - NVIDIA_VISIBLE_DEVICES=all
      - CLI_ARGS=${CLI_ARGS:---listen 0.0.0.0}
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all
              capabilities: [gpu]
.env Template
.env
# ComfyUI Configuration
COMFYUI_PORT=8188

# Additional CLI arguments
# --listen 0.0.0.0: Listen on all interfaces
# --port 8188: Port number
# --enable-cors-header: Enable CORS
CLI_ARGS=--listen 0.0.0.0
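CLI_ARGS is passed straight through to the ComfyUI launch command, so additional flags can be appended in the same variable. A hedged example for GPUs near the 6GB minimum, assuming the bundled ComfyUI build supports its standard memory-management flags:
.env
# Listen on all interfaces and reduce VRAM pressure
# --lowvram support depends on the ComfyUI version shipped in the image
CLI_ARGS=--listen 0.0.0.0 --lowvram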
Usage Notes
- Requires NVIDIA GPU with CUDA support
- WebUI available at http://localhost:8188
- Download models to ./models/ subdirectories (see the sketch after this list)
- Install custom nodes to ./custom_nodes/
- Save/load workflows as JSON files
- More flexible than AUTOMATIC1111 for complex pipelines
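ComfyUI only discovers models placed in the expected subdirectories under ./models/. The commands below are a sketch: the directory names follow ComfyUI's standard layout, but the download URL is a placeholder that must be replaced with a real checkpoint link (for example from Hugging Face).
terminal
# Create the standard model subdirectories on the host (bind-mounted into the container)
mkdir -p models/{checkpoints,vae,loras,controlnet,embeddings,upscale_models}

# Download a checkpoint into the folder ComfyUI scans for them
# <MODEL_URL> is a placeholder; substitute the actual .safetensors download link
wget -O models/checkpoints/model.safetensors "<MODEL_URL>"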
Quick Start
terminal
# 1. Create the compose file
cat > docker-compose.yml << 'EOF'
services:
  comfyui:
    image: ghcr.io/ai-dock/comfyui:latest
    container_name: comfyui
    restart: unless-stopped
    ports:
      - "${COMFYUI_PORT:-8188}:8188"
    volumes:
      - ./models:/opt/ComfyUI/models
      - ./output:/opt/ComfyUI/output
      - ./input:/opt/ComfyUI/input
      - ./custom_nodes:/opt/ComfyUI/custom_nodes
      - ./workflows:/opt/ComfyUI/user/default/workflows
    environment:
      - NVIDIA_VISIBLE_DEVICES=all
      - CLI_ARGS=${CLI_ARGS:---listen 0.0.0.0}
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all
              capabilities: [gpu]
EOF

# 2. Create the .env file
cat > .env << 'EOF'
# ComfyUI Configuration
COMFYUI_PORT=8188

# Additional CLI arguments
# --listen 0.0.0.0: Listen on all interfaces
# --port 8188: Port number
# --enable-cors-header: Enable CORS
CLI_ARGS=--listen 0.0.0.0
EOF

# 3. Start the services
docker compose up -d

# 4. View logs
docker compose logs -f
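Once the logs show the server listening, a quick command-line check can confirm the web backend is ready. This assumes ComfyUI's /system_stats endpoint, which current builds expose; if your build differs, simply open the WebUI in a browser instead.
terminal
# Returns a small JSON document with device and VRAM information when the server is ready
curl -s http://localhost:8188/system_stats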
One-Liner
Run this command to download and set up the recipe in one step:
terminal
curl -fsSL https://docker.recipes/api/recipes/comfyui/run | bash
Troubleshooting
- CUDA out of memory errors: Reduce batch size, use model offloading, or switch to lower precision models in workflow settings
- Models not loading: Ensure model files are placed in correct subdirectories within ./models/ (checkpoints, vae, loras, etc.)
- Custom nodes failing to install: Check node compatibility with ComfyUI version and manually install dependencies via container shell
- Slow generation times: Verify the GPU is being utilized with nvidia-smi and check the NVIDIA_VISIBLE_DEVICES environment variable (see the commands after this list)
- Workflow JSON import errors: Update ComfyUI version as older versions may not support newer node types and connections
- Web interface not accessible: Confirm port 8188 is not blocked by firewall and CLI_ARGS includes --listen 0.0.0.0 for external access
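For the GPU-utilization and startup issues above, the checks below are a reasonable starting point. They assume the container is named comfyui, as in this compose file.
terminal
# Confirm the GPU is visible inside the running container
docker exec comfyui nvidia-smi

# Watch startup logs for CUDA, model-loading, or custom-node errors
docker compose logs -f comfyui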
Download Recipe Kit
Get all files in a ready-to-deploy package
Includes docker-compose.yml, .env template, README, and license
Components
comfyui
Tags
#ai #image-generation #stable-diffusion #gpu #node-based #workflow
Category
AI & Machine Learning