
Flowise

beginner

Low-code tool for building LLM apps with drag-and-drop.

Overview

Flowise is an open-source low-code platform for building LLM applications through an intuitive drag-and-drop interface. Created as a visual workflow builder for LangChain and LlamaIndex, Flowise democratizes AI application development by allowing non-technical users to create sophisticated chatbots, document Q&A systems, and AI agents without writing code. The platform integrates with major AI providers including OpenAI, Azure OpenAI, HuggingFace, and Anthropic, as well as local models through Ollama.

This Docker deployment creates a self-contained Flowise instance with persistent storage for your chatflows, credentials, and configuration. The container exposes a web interface where you design AI workflows by connecting nodes representing components such as language models, vector stores, memory systems, and tools. Built-in authentication protects your instance, while volume mounting ensures your workflows survive container restarts.

This setup is ideal for AI enthusiasts, rapid-prototyping teams, and organizations that want to experiment with large language models without complex infrastructure. Flowise bridges the gap between powerful AI capabilities and accessible implementation, making it well suited to proof-of-concepts, internal tools, and customer-facing AI applications that can be exposed as REST APIs.

Key Features

  • Visual drag-and-drop interface for building LangChain and LlamaIndex workflows without coding
  • Built-in marketplace with pre-configured templates for common AI use cases like customer support and document analysis
  • Multi-model support including OpenAI GPT, Claude, local Ollama models, and HuggingFace transformers
  • Integrated vector database connections for Pinecone, Weaviate, Chroma, and FAISS document retrieval (a Chroma sidecar sketch follows this list)
  • Export chatflows as REST API endpoints for direct integration with external applications
  • Memory management systems including buffer memory, conversation summary, and entity memory
  • Real-time chat testing interface with conversation history and debug information
  • Credential management system for securely storing API keys and connection strings
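
Flowise's vector store nodes only need a reachable database URL, so a companion store can live alongside Flowise. Below is a minimal sketch for Chroma to merge into the docker-compose.yml shown later in this recipe; the chromadb/chroma image, its default port 8000, and the /chroma/chroma persist path are assumptions to verify against the Chroma docs for your version. Once both services share the compose network, Flowise's Chroma node can target http://chroma:8000.

docker-compose.yml
services:
  chroma:
    # Companion vector database on the same compose network;
    # point Flowise's Chroma node at http://chroma:8000
    image: chromadb/chroma:latest
    restart: unless-stopped
    volumes:
      - chroma_data:/chroma/chroma   # persist path assumed from the Chroma image docs

volumes:
  chroma_data: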

Common Use Cases

  • Building customer support chatbots with company document knowledge bases using RAG patterns
  • Creating internal AI assistants for employee onboarding and FAQ automation
  • Prototyping conversational AI applications before committing to custom development
  • Educational environments for teaching AI concepts and LangChain workflows visually
  • Small business automation for lead qualification and appointment scheduling chatbots
  • Content creation workflows combining multiple AI models for research and writing assistance
  • Document analysis systems for processing contracts, invoices, and compliance materials

Prerequisites

  • Docker and Docker Compose installed with at least 2GB available RAM for model inference
  • API keys for chosen AI providers (OpenAI, Anthropic, etc.) or a local Ollama installation (see the sketch after this list)
  • Port 3000 available on the host system for web interface access
  • Basic understanding of AI concepts like embeddings, vector databases, and prompt engineering
  • Administrative credentials defined in environment variables for secure access
  • Sufficient disk space for vector embeddings and conversation history storage
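
For a fully local setup with no cloud API keys, Ollama can run as a sibling container. This is a hedged sketch: the ollama/ollama image and port 11434 are the documented defaults, and llama3 is just an example model tag.

terminal
# Run Ollama next to Flowise and pull a model into its volume
docker run -d --name ollama \
  -p 11434:11434 \
  -v ollama_data:/root/.ollama \
  ollama/ollama
docker exec ollama ollama pull llama3

Note that from inside the Flowise container, localhost points at Flowise itself, so the ChatOllama node should target http://host.docker.internal:11434 (on Linux this typically requires mapping host.docker.internal via the host-gateway extra host).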

For development & testing. Review security settings, change default credentials, and test thoroughly before production use. See Terms

docker-compose.yml

docker-compose.yml
services:
  flowise:
    image: flowiseai/flowise:latest
    container_name: flowise
    restart: unless-stopped
    environment:
      PORT: 3000
      FLOWISE_USERNAME: ${ADMIN_USER}
      FLOWISE_PASSWORD: ${ADMIN_PASSWORD}
    volumes:
      - flowise_data:/root/.flowise
    ports:
      - "3000:3000"

volumes:
  flowise_data:
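
The named volume is what carries chatflows and credentials across container recreation. To confirm it exists and see where Docker keeps it on disk (compose prefixes the volume with the project name, usually the directory name, so <project> below is a placeholder):

terminal
docker volume inspect <project>_flowise_data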

.env Template

.env
ADMIN_USER=admin
ADMIN_PASSWORD=changeme
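
Swap out the changeme placeholder before the first start. One quick way to mint a strong value, assuming openssl is available:

terminal
# Generate a random password, then paste it into .env as ADMIN_PASSWORD
openssl rand -base64 24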

Usage Notes

  1. Docs: https://docs.flowiseai.com/
  2. Access at http://localhost:3000 and log in with the credentials from .env
  3. Drag-and-drop LangChain/LlamaIndex workflow builder
  4. Connect to OpenAI, Ollama, HuggingFace, Azure, and more
  5. Export chatflows as API endpoints for integration (see the curl sketch below)
  6. Marketplace has pre-built templates for common use cases
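
As a sketch of the API integration in item 5: the POST /api/v1/prediction/<chatflow-id> route follows the Flowise docs, but the chatflow ID here is a placeholder you copy from the flow's API endpoint dialog in the UI, and flows secured with a Flowise API key also need an Authorization: Bearer header.

terminal
curl -X POST http://localhost:3000/api/v1/prediction/<chatflow-id> \
  -H "Content-Type: application/json" \
  -d '{"question": "What are your support hours?"}'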

Quick Start

terminal
# 1. Create the compose file
cat > docker-compose.yml << 'EOF'
services:
  flowise:
    image: flowiseai/flowise:latest
    container_name: flowise
    restart: unless-stopped
    environment:
      PORT: 3000
      FLOWISE_USERNAME: ${ADMIN_USER}
      FLOWISE_PASSWORD: ${ADMIN_PASSWORD}
    volumes:
      - flowise_data:/root/.flowise
    ports:
      - "3000:3000"

volumes:
  flowise_data:
EOF

# 2. Create the .env file
cat > .env << 'EOF'
ADMIN_USER=admin
ADMIN_PASSWORD=changeme
EOF

# 3. Start the services
docker compose up -d

# 4. View logs
docker compose logs -f
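
Once the container is up, a quick reachability check (expect 200 after startup finishes, which can take a short while on first boot):

terminal
# 5. Confirm the web interface is answering
curl -s -o /dev/null -w "%{http_code}\n" http://localhost:3000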

One-Liner

Run this command to download and set up the recipe in one step:

terminal
curl -fsSL https://docker.recipes/api/recipes/flowise/run | bash

Troubleshooting

  • Authentication failed or blank login page: Verify FLOWISE_USERNAME and FLOWISE_PASSWORD environment variables are properly set
  • Chatflow fails with API rate limit errors: Check your AI provider API key quotas and implement retry mechanisms in your flows
  • Vector similarity search returns poor results: Ensure document chunking size matches your embedding model's context window
  • Container startup fails with permission errors: Check that the flowise_data volume has proper read/write permissions for the container user
  • Memory errors during large document processing: Increase the container's memory limit (sketched below) and reduce document batch sizes
  • Exported API endpoints return 404 errors: Verify the chatflow is properly saved and deployed through the Flowise interface before API access
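
For the memory fix above, one hedged option is a mem_limit entry on the service in docker-compose.yml; 4g is an illustrative figure to size against your host.

docker-compose.yml
services:
  flowise:
    # Raise the memory ceiling for large document ingestion
    mem_limit: 4g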

