$docker.recipes
12 min read · Updated February 2026

Self-Hosted Automation with n8n and Docker Compose: Replace Zapier for Free

Automate workflows between your self-hosted services with n8n. Build integrations visually, trigger on webhooks, and run AI workflows — all on your own infrastructure.

n8n · automation · self-hosting · docker-compose · workflows

01 Why Self-Host Your Automation?

I was paying $49/month for Zapier's Starter plan. I had about 30 automations: Slack notifications for GitHub events, RSS-to-email digests, form submissions to spreadsheets, and a few custom integrations between internal tools. The automations worked fine, but I kept hitting limits — task counts, polling intervals, and the growing unease of routing sensitive data through a third party.

n8n is the self-hosted alternative that finally made me cancel my Zapier subscription. It's a visual workflow automation tool with 400+ integrations, a node-based editor, and — crucially — it runs on your infrastructure. Your data never leaves your servers. The free self-hosted version has no task limits, no workflow limits, and no artificial restrictions on polling intervals.

The same 30 automations that cost me $49/month on Zapier run on my VPS at zero marginal cost, and I've since expanded to over 80 workflows because there's no cost penalty for adding more. The trade-off is maintenance: you're responsible for updates, backups, and uptime. But if you're already running a Docker Compose stack, adding n8n is straightforward.

02 Docker Compose Setup with PostgreSQL

n8n supports SQLite and PostgreSQL. I recommend PostgreSQL for anything beyond personal experimentation — it handles concurrent workflow executions better and makes backups easier to manage.
[docker-compose.yml]
services:
  n8n:
    image: n8nio/n8n:1.82
    environment:
      - N8N_HOST=n8n.example.com
      - N8N_PORT=5678
      - N8N_PROTOCOL=https
      - WEBHOOK_URL=https://n8n.example.com/
      - DB_TYPE=postgresdb
      - DB_POSTGRESDB_HOST=postgres
      - DB_POSTGRESDB_DATABASE=n8n
      - DB_POSTGRESDB_USER=n8n
      - DB_POSTGRESDB_PASSWORD=${DB_PASSWORD}
      - N8N_ENCRYPTION_KEY=${N8N_ENCRYPTION_KEY}
      - GENERIC_TIMEZONE=America/New_York
    volumes:
      - n8n-data:/home/node/.n8n
    ports:
      - "127.0.0.1:5678:5678" # bind locally; TLS terminates at your reverse proxy
    depends_on:
      postgres:
        condition: service_healthy
    restart: unless-stopped

  postgres:
    image: postgres:16-alpine
    environment:
      - POSTGRES_DB=n8n
      - POSTGRES_USER=n8n
      - POSTGRES_PASSWORD=${DB_PASSWORD}
    volumes:
      - postgres-data:/var/lib/postgresql/data
    healthcheck:
      test: ["CMD", "pg_isready", "-U", "n8n"]
      interval: 10s
      timeout: 5s
      retries: 5
    restart: unless-stopped

volumes:
  n8n-data:
  postgres-data:

The N8N_ENCRYPTION_KEY is used to encrypt credentials stored in the database. Generate it once with openssl rand -hex 32 and save it securely. If you lose this key, all stored credentials become unreadable and you'll need to re-enter them. Back it up alongside your database.
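To keep both secrets out of the compose file itself, you can generate them once into a `.env` file, which Docker Compose reads automatically from the project directory. A minimal sketch — the variable names match the compose file above:

```shell
# Generate the two secrets referenced in docker-compose.yml and write them to .env.
# Run this once, then back up .env alongside your database dumps.
N8N_ENCRYPTION_KEY=$(openssl rand -hex 32)   # 64 hex characters
DB_PASSWORD=$(openssl rand -base64 24)
printf 'N8N_ENCRYPTION_KEY=%s\nDB_PASSWORD=%s\n' \
  "$N8N_ENCRYPTION_KEY" "$DB_PASSWORD" > .env
chmod 600 .env   # readable only by your user
```

With `.env` in place, `docker compose up -d` brings the stack up with both values substituted.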

03 Building Your First Workflow

n8n's visual editor is intuitive — you drag nodes, connect them, and configure each step. But for reproducibility and version control, you can also export and import workflows as JSON via the CLI.

A simple example: monitor an RSS feed and send new posts to a Slack channel. In n8n, this is three nodes: an RSS Feed Read trigger, a filter (optional, to match keywords), and a Slack node. The trigger runs on a schedule you define — I check RSS feeds every 15 minutes.

For more complex workflows, n8n supports branching (IF nodes), loops (SplitInBatches), error handling, sub-workflows, and even custom JavaScript/Python code nodes for logic that doesn't fit a pre-built integration.

The webhook trigger is particularly powerful. You can create a workflow that responds to HTTP requests, which means any service that supports webhooks can trigger n8n automations. I use this for GitHub push events, form submissions, and monitoring alerts.
[terminal]
# Export all workflows for backup
docker compose exec n8n n8n export:workflow --all --output=/home/node/.n8n/backups/

# Import a workflow from JSON
docker compose exec n8n n8n import:workflow --input=/home/node/.n8n/workflows/my-workflow.json

# Export credentials (encrypted)
docker compose exec n8n n8n export:credentials --all --output=/home/node/.n8n/backups/

# List all workflows
docker compose exec n8n n8n list:workflow
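To exercise a webhook-triggered workflow from the command line, you can POST a payload to the workflow's URL. A sketch — the `github-deploy` path segment is hypothetical and comes from whatever path you configure in the Webhook node (n8n serves test executions under `/webhook-test/` and active workflows under `/webhook/`):

```shell
# Simulate a GitHub-style push event against a (hypothetical) n8n webhook.
curl -X POST "https://n8n.example.com/webhook/github-deploy" \
  -H "Content-Type: application/json" \
  -d '{"ref": "refs/heads/main", "repository": {"name": "my-app"}}'
```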

04 Connecting Your Self-Hosted Stack

The real advantage of self-hosted n8n is direct access to your other Docker services. Since n8n runs on the same Docker network, it can reach internal services by container name — no public URLs or API gateways needed.
[docker-compose.yml]
services:
  n8n:
    image: n8nio/n8n:1.82
    networks:
      - n8n-net
      - monitoring-net # Access Grafana, Prometheus
      - app-net        # Access your application stack
    # ... rest of config

networks:
  n8n-net:
  monitoring-net:
    external: true # Created by monitoring stack
  app-net:
    external: true # Created by app stack

When connecting n8n to internal services, use dedicated API keys with minimal permissions. n8n stores credentials encrypted in its database, but if your n8n instance is compromised, those credentials provide access to every connected service. Apply least-privilege: read-only API keys where possible, scoped tokens instead of admin credentials.
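As one concrete example of least-privilege, a reporting workflow that only reads from PostgreSQL can authenticate as a dedicated read-only role rather than the main `n8n` user. A sketch using standard PostgreSQL grants — the role name and password are placeholders:

```shell
# Create a read-only role for n8n reporting workflows (illustrative names).
docker compose exec -T postgres psql -U n8n -d n8n <<'SQL'
CREATE ROLE reporting_ro WITH LOGIN PASSWORD 'change-me';
GRANT CONNECT ON DATABASE n8n TO reporting_ro;
GRANT USAGE ON SCHEMA public TO reporting_ro;
GRANT SELECT ON ALL TABLES IN SCHEMA public TO reporting_ro;
SQL
```

If this role's credentials leak from n8n, an attacker can read those tables but not modify anything.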

05 Advanced Patterns

After 10 months with n8n, these are the patterns I use most:

Scheduled backups: A workflow runs at 2 AM daily, triggers restic backup commands via SSH on each server, checks the exit code, and sends a Slack summary. If any backup fails, I get an immediate alert. This replaced a fragile cron + email setup that silently failed for weeks.

AI workflows with Ollama: n8n has built-in AI nodes that connect to OpenAI, Anthropic, or self-hosted models via Ollama. I have a workflow that summarizes long GitHub issues and posts the summary to Slack. Running Ollama locally means no API costs and the data stays on my server.

Error handling: Every critical workflow has an error trigger node that catches failures, logs the error to Loki, and sends a Slack notification. n8n's retry mechanism handles transient failures (API rate limits, network blips) automatically.

GitHub automation: On every push to main, a webhook triggers an n8n workflow that runs tests, builds the Docker image, pushes to my private registry, and triggers a docker compose pull && docker compose up -d on the production server via SSH. It's a simple CI/CD pipeline built entirely in n8n's visual editor.

Database maintenance: A weekly workflow connects to PostgreSQL, runs VACUUM ANALYZE, checks for tables approaching size thresholds, and reports the results. Simple but surprisingly useful for catching storage issues early.
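The scheduled-backup pattern can be sketched as a script for n8n's Execute Command node. Everything here is a placeholder — hostnames, the backup path, and the Slack webhook URL — and the exact restic invocation depends on your repository setup:

```shell
#!/bin/sh
# Nightly backup check, runnable from n8n's Execute Command node.
# Hostnames, the backup path, and SLACK_WEBHOOK_URL are placeholders.
set -u

run_backups() {
  status="OK"
  summary=""
  for host in web1.example.com db1.example.com; do
    # Run the backup remotely; any non-zero exit marks the run as failed.
    if ssh "$host" 'restic backup /srv --quiet'; then
      summary="$summary $host: ok;"
    else
      summary="$summary $host: FAILED;"
      status="FAILED"
    fi
  done
  # Post the summary to Slack via an incoming-webhook URL.
  curl -s -X POST "$SLACK_WEBHOOK_URL" \
    -H 'Content-Type: application/json' \
    -d "{\"text\": \"Backups $status:$summary\"}"
}

# Only run when explicitly asked, so the functions can be sourced elsewhere.
if [ "${1:-}" = "--run" ]; then
  run_backups
fi
```

n8n captures the script's stdout and exit code, so a downstream IF node can branch on failure and escalate.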

06 n8n vs Zapier vs Huginn vs Node-RED

Here's my honest comparison after using each:

n8n: Best visual editor, most integrations (400+), excellent self-hosted experience. The community edition is powerful enough for most users. Downside: the interface can feel overwhelming for simple automations, and some advanced features (like LDAP and SSO) are enterprise-only.

Zapier: Best for non-technical users. Polished, reliable, and the marketplace of pre-built automations (Zaps) saves time. Downside: expensive at scale ($49-$99/month for serious use), data leaves your infrastructure, and rate limits on lower tiers are frustrating.

Huginn: The OG self-hosted automation tool. Written in Ruby, incredibly flexible with custom agents. Downside: the UI feels dated compared to n8n, and the learning curve is steep. I'd only recommend Huginn if you have specific needs that n8n can't handle, which is rare.

Node-RED: Built for IoT and hardware automation — that's where it excels. The flow-based programming model is different from n8n's node model and suits different use cases. Downside: fewer SaaS integrations, and building web automations requires more manual work. Best for smart home and IoT workflows.

My recommendation: n8n for most self-hosters. It has the best balance of power, usability, and self-hosted experience. Start with the Docker Compose setup above, build a few simple workflows, and expand from there.

About the Author

Frank Pegasus

DevOps engineer and self-hosting enthusiast with over a decade of experience running containerized workloads in production. Creator of docker.recipes.