$docker.recipes
16 min read · Updated July 2025

How to Set Up a Complete Home Lab with Docker Compose

A step-by-step guide to building a home lab that handles file storage, media streaming, password management, monitoring, and more — all with Docker Compose.

homelab · docker-compose · self-hosting · tutorial

01. Why I Built a Home Lab

My home lab started with a single Raspberry Pi running Pi-hole for ad blocking. Two years later, I'm running 23 containerized services on a mini PC that cost less than a year's worth of streaming subscriptions.

A home lab isn't just about saving money (though you will). It's a learning environment where you can experiment with networking, security, automation, and infrastructure without risking production systems. Every concept I've learned in my home lab has directly improved my professional work.

This guide walks you through building a complete home lab from scratch. We'll cover hardware selection, base setup, and deploying a full suite of services using Docker Compose.

02. Choosing Your Hardware

You don't need expensive hardware to start. Here are three tiers:

- Budget ($50-100): A Raspberry Pi 5 with 8GB RAM handles 5-10 lightweight services easily. Add an NVMe SSD via the M.2 HAT for better I/O performance. This is enough for Pi-hole, Vaultwarden, Uptime Kuma, and a few other services.
- Mid-range ($200-400): A used mini PC (Lenovo ThinkCentre Tiny, HP EliteDesk Mini, or Intel NUC) with 16-32GB RAM and a 500GB-1TB SSD. This is the sweet spot for most home labs. You can run 15-25 services comfortably, including heavier applications like Nextcloud and Jellyfin.
- Power user ($500+): A dedicated server with 64GB+ RAM, multiple drives in a RAID configuration, and a UPS for power protection. This is for people running media transcoding, VMs alongside containers, or serving content to others.

I recommend starting with the mid-range option. A refurbished ThinkCentre M720q with 32GB RAM and a 1TB NVMe costs about $250 and is virtually silent.

03. Base System Setup

Install Ubuntu Server 24.04 LTS (or Debian 12) as your base OS. Skip the desktop environment — you'll manage everything via SSH and web interfaces. After installation, install Docker and Docker Compose:
[terminal]
# Update system
sudo apt update && sudo apt upgrade -y

# Install Docker using the official script
curl -fsSL https://get.docker.com | sh

# Add your user to the docker group
sudo usermod -aG docker $USER
# Log out and back in (or run `newgrp docker`) for the group change to take effect

# Verify installation
docker --version
docker compose version

# Create a directory structure for your services
mkdir -p ~/docker/{traefik,nextcloud,vaultwarden,jellyfin,monitoring,homepage}

Give your home lab machine a static IP address by creating a DHCP reservation in your router's settings. This ensures your services are always reachable at the same address.
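If you'd rather pin the address on the machine itself instead of in the router, Ubuntu Server can do it with netplan. A minimal sketch, assuming the interface is named `eth0` and your LAN is `192.168.1.0/24` with the router at `192.168.1.1` (all assumptions; check yours with `ip addr` and `ip route`):

```yaml
# /etc/netplan/01-static.yaml (hypothetical filename)
network:
  version: 2
  ethernets:
    eth0:                        # assumption: your NIC name
      dhcp4: false
      addresses:
        - 192.168.1.50/24        # assumption: a free address on your LAN
      routes:
        - to: default
          via: 192.168.1.1       # assumption: your router's address
      nameservers:
        addresses: [192.168.1.1, 1.1.1.1]
```

Apply it with `sudo netplan apply`. Either approach works; the DHCP reservation has the advantage of keeping all address assignments visible in one place.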

04. Step 1: Reverse Proxy with Traefik

Before deploying any services, set up a reverse proxy. Traefik automatically discovers your Docker containers and routes traffic to them, with free SSL certificates from Let's Encrypt. This is the foundation of your home lab: every other service will register itself with Traefik via Docker labels, so you access each service at a clean URL like nextcloud.yourdomain.com instead of remembering port numbers.

If you have a domain name, point a wildcard DNS record (*.yourdomain.com) to your home IP, or use Cloudflare Tunnel for a zero-trust approach that doesn't require opening ports on your router.

Check out our Traefik recipes for production-ready configurations with automatic SSL, middleware for authentication, and dashboard access.
[docker-compose.yml]
# ~/docker/traefik/docker-compose.yml
#
# Before the first start, create the shared network and the certificate store:
#   docker network create proxy
#   touch acme.json && chmod 600 acme.json
services:
  traefik:
    image: traefik:v3.0
    container_name: traefik
    restart: unless-stopped
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - ./traefik.yml:/traefik.yml:ro
      - ./acme.json:/acme.json
    networks:
      - proxy

networks:
  proxy:
    external: true
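The compose file mounts a `traefik.yml` that isn't shown above. A minimal static configuration sketch, assuming Let's Encrypt with the HTTP challenge (replace the email; this is a starting point, not a production config):

```yaml
# ~/docker/traefik/traefik.yml (sketch)
entryPoints:
  web:
    address: ":80"
    http:
      redirections:              # send plain HTTP to HTTPS
        entryPoint:
          to: websecure
          scheme: https
  websecure:
    address: ":443"

providers:
  docker:
    exposedByDefault: false      # only route containers that opt in via labels

certificatesResolvers:
  letsencrypt:
    acme:
      email: you@example.com     # assumption: replace with your address
      storage: /acme.json
      httpChallenge:
        entryPoint: web
```

Setting `exposedByDefault: false` means a container is only published if it carries a `traefik.enable=true` label, which keeps internal-only containers off the proxy by default.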

05. Step 2: Core Services

With Traefik running, deploy your essential services. I recommend this order:

- Vaultwarden (Password Manager): This is service number one. A Bitwarden-compatible server that uses minimal resources. Migrate your passwords from whatever you're using now, and install the browser extension and mobile app. You'll wonder how you ever lived without self-hosted password management.
- Nextcloud (File Sync & Office): Your personal cloud. File sync across all devices, collaborative document editing, calendar, contacts, and hundreds of apps. Deploy it with PostgreSQL and Redis for the best performance.
- Homepage (Dashboard): A beautiful dashboard that shows all your services in one place. It integrates with Docker to automatically detect running containers and can show widget data from services like Sonarr, Radarr, and Pi-hole.
- Uptime Kuma (Monitoring): A lightweight monitoring tool that checks whether your services are running and alerts you via Telegram, Discord, or email when something goes down.

Each of these has a ready-to-use Docker Compose configuration in our homelab recipes. The configurations include all the environment variables, volumes, and networking settings you need.
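To illustrate how a service registers itself with Traefik via labels, here is a Vaultwarden sketch. The hostname `vault.yourdomain.com` and the `letsencrypt` resolver name are assumptions and must match your own Traefik setup:

```yaml
# ~/docker/vaultwarden/docker-compose.yml (sketch)
services:
  vaultwarden:
    image: vaultwarden/server:latest
    container_name: vaultwarden
    restart: unless-stopped
    volumes:
      - ./data:/data
    networks:
      - proxy
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.vaultwarden.rule=Host(`vault.yourdomain.com`)"  # assumption: your domain
      - "traefik.http.routers.vaultwarden.entrypoints=websecure"
      - "traefik.http.routers.vaultwarden.tls.certresolver=letsencrypt"       # must match your resolver name

networks:
  proxy:
    external: true
```

Note there is no `ports:` section: the container is only reachable through Traefik on the shared `proxy` network, which is exactly what you want for an internal service.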

06. Step 3: Media Stack

If you have a media collection (or plan to build one), a self-hosted media stack is one of the most satisfying parts of a home lab:

- Jellyfin: A completely free media server with no premium tiers. Supports movies, TV shows, music, and books. Clients are available for every platform, including mobile, smart TVs, and game consoles.
- The "arr" stack: If you want automated media management, add Sonarr for TV, Radarr for movies, Prowlarr for indexer management, and Bazarr for subtitles. These tools work together to automatically organize and manage your library.
- Immich: For photos, Immich is a Google Photos replacement that's genuinely impressive. It supports facial recognition, location-based browsing, and automatic mobile uploads. It's one of the fastest-growing self-hosted projects for good reason.

Browse our media category for complete configurations of each of these services, including multi-container stacks that combine related tools.
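A minimal Jellyfin sketch in the same label-driven pattern as the other services; the media path `/srv/media`, the hostname, and the resolver name are assumptions to adapt:

```yaml
# ~/docker/jellyfin/docker-compose.yml (sketch)
services:
  jellyfin:
    image: jellyfin/jellyfin:latest
    container_name: jellyfin
    restart: unless-stopped
    volumes:
      - ./config:/config
      - ./cache:/cache
      - /srv/media:/media:ro   # assumption: where your library lives; read-only is enough
    networks:
      - proxy
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.jellyfin.rule=Host(`jellyfin.yourdomain.com`)"  # assumption
      - "traefik.http.routers.jellyfin.entrypoints=websecure"
      - "traefik.http.routers.jellyfin.tls.certresolver=letsencrypt"

networks:
  proxy:
    external: true
```

Mounting the library read-only keeps Jellyfin from ever modifying your files; the "arr" tools, which do need write access, would mount the same path without `:ro`.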

07. Maintenance and Backups

A home lab is only as good as your backup strategy. Here's what I do:

- Daily automated backups of all Docker volumes to a separate drive, using a simple bash script with rsync. This protects against accidental deletion and corruption.
- Weekly off-site backups to Backblaze B2 using rclone. This protects against hardware failure, theft, or disaster. The cost is about $5/month for 100GB of backup data.
- Container updates with Watchtower, which automatically pulls new images and restarts containers. I run it in monitor-only mode and approve updates manually for critical services, but let it auto-update low-risk ones.
- Monitoring with Uptime Kuma, which checks every service every 60 seconds and sends me a Telegram message if anything goes down. In two years of running my home lab, I've had maybe a dozen alerts, and most were caused by my own configuration changes.

The beauty of Docker Compose is that your entire infrastructure is defined in text files. If your hardware dies, you can rebuild everything on new hardware by copying your compose files, .env files, and volume backups. I've tested this: a full rebuild from backup takes about 30 minutes.

Always test your backups. A backup you haven't tested is not a backup. At least once a quarter, try restoring a service from backup to verify the process works.

About the Author

Frank Pegasus

DevOps engineer and self-hosting enthusiast with over a decade of experience running containerized workloads in production. Creator of docker.recipes.