01 Introduction
Your Docker volumes contain irreplaceable data—databases, configurations, user uploads. A solid backup strategy is non-negotiable. This guide covers methods from simple tar archives to automated backup solutions.
02 Stop Containers or Hot Backup?
The safest backup stops the container first. For databases, a hot backup without stopping the container is possible, but it requires database-specific tools. Never simply copy database files while the database is running.
Copying PostgreSQL or MySQL data files from a running database can produce a corrupted, unrestorable backup: the files change while you copy them, so the archive captures an inconsistent state. Always use the database's own dump tools for hot backups.
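In practice, "database-specific tools" means dump utilities that produce a consistent snapshot: pg_dump runs inside a single transaction by design, and mysqldump can do the same for InnoDB with the --single-transaction flag. A minimal sketch, assuming a container named mysql and a MYSQL_ROOT_PASSWORD variable as in the examples later in this guide:

```bash
# Consistent hot backup of InnoDB tables without locking writers.
# Container name "mysql" and the password variable are assumptions.
docker exec mysql mysqldump -u root -p"${MYSQL_ROOT_PASSWORD}" \
  --single-transaction --all-databases > hot_backup.sql
```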
03 Basic Volume Backup
The simplest backup uses a temporary container to tar the volume contents. This works for any volume type.
```bash
# Stop the container first (safest)
docker compose stop

# Backup a named volume
docker run --rm \
  -v myapp_data:/source:ro \
  -v $(pwd)/backups:/backup \
  alpine tar -czf /backup/myapp_data_$(date +%Y%m%d).tar.gz -C /source .

# Restart
docker compose start

# Restore a backup
docker run --rm \
  -v myapp_data:/target \
  -v $(pwd)/backups:/backup \
  alpine sh -c "cd /target && tar -xzf /backup/myapp_data_20240115.tar.gz"
```

04 Database-Specific Backups
Databases need special handling. Use built-in dump tools that create consistent snapshots without stopping the database.
```bash
# PostgreSQL backup (hot backup, no downtime)
docker exec postgres pg_dumpall -U postgres > backup_$(date +%Y%m%d).sql

# Or compressed
docker exec postgres pg_dump -U postgres dbname | gzip > backup_$(date +%Y%m%d).sql.gz

# MySQL backup
docker exec mysql mysqldump -u root -p${MYSQL_ROOT_PASSWORD} --all-databases > backup.sql

# MongoDB backup
docker exec mongo mongodump --archive --gzip > backup_$(date +%Y%m%d).gz

# Restore PostgreSQL
cat backup.sql | docker exec -i postgres psql -U postgres

# Restore MySQL
cat backup.sql | docker exec -i mysql mysql -u root -p${MYSQL_ROOT_PASSWORD}
```

Database dumps are portable across different Docker hosts and even different database versions (within reason).
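The block above shows restores for plain .sql dumps; the compressed variants can be streamed back through gunzip, and the MongoDB archive through mongorestore. A sketch, with illustrative filenames:

```bash
# Restore the compressed PostgreSQL dump (filename is illustrative)
gunzip -c backup_20240115.sql.gz | docker exec -i postgres psql -U postgres

# Restore the gzipped MongoDB archive created above
docker exec -i mongo mongorestore --archive --gzip < backup_20240115.gz
```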
05 Automated Backup with Docker
Use a backup container that runs on a schedule. Several purpose-built images make this easy.
```yaml
services:
  # Your application
  postgres:
    image: postgres:15
    volumes:
      - postgres_data:/var/lib/postgresql/data

  # Automated backup container
  backup:
    image: prodrigestivill/postgres-backup-local
    environment:
      - POSTGRES_HOST=postgres
      - POSTGRES_DB=myapp
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=${DB_PASSWORD}
      - SCHEDULE=@daily
      - BACKUP_KEEP_DAYS=7
      - BACKUP_KEEP_WEEKS=4
      - BACKUP_KEEP_MONTHS=6
    volumes:
      - ./backups:/backups
    depends_on:
      - postgres

volumes:
  postgres_data:
```
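Once the stack is up, it's worth confirming the backup service is actually writing files; a quick check, assuming the service name and volume mapping from the compose file above:

```bash
# Watch the backup container's output for scheduled runs
docker compose logs -f backup

# Dumps should appear under ./backups on the host
ls -lR ./backups
```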
06 Offsite Backup Strategies

Local backups protect against container failures but not disk failures or disasters. Send backups offsite using rclone, restic, or cloud storage.
```bash
# Using rclone to sync backups to cloud storage
# First, configure rclone: rclone config

# Sync backup folder to remote
rclone sync ./backups remote:docker-backups

# Automated with cron (add to crontab -e)
0 3 * * * /usr/bin/rclone sync /home/user/backups remote:docker-backups

# Using restic for encrypted, deduplicated backups
restic init --repo s3:s3.amazonaws.com/bucket-name/backups
restic backup --repo s3:s3.amazonaws.com/bucket-name/backups ./backups
```

Restic deduplicates data, so daily backups of large volumes won't consume proportionally large storage.
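Restic repositories also need a retention policy, much like the BACKUP_KEEP_* settings in section 05. A sketch against the same repository (the bucket name is the placeholder from above):

```bash
# Keep 7 daily, 4 weekly, and 6 monthly snapshots; prune unreferenced data
restic forget --repo s3:s3.amazonaws.com/bucket-name/backups \
  --keep-daily 7 --keep-weekly 4 --keep-monthly 6 --prune

# List the snapshots currently in the repository
restic snapshots --repo s3:s3.amazonaws.com/bucket-name/backups
```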
07 Complete Backup Script
Here's a ready-to-use backup script that handles multiple volumes and includes basic error checking.
```bash
#!/bin/bash
# backup.sh - Backup all Docker volumes

BACKUP_DIR="/backups/docker"
DATE=$(date +%Y%m%d_%H%M%S)
RETENTION_DAYS=30

# Create backup directory
mkdir -p "$BACKUP_DIR"

# Function to backup a volume
backup_volume() {
    local volume=$1
    echo "Backing up $volume..."
    docker run --rm \
        -v "$volume":/source:ro \
        -v "$BACKUP_DIR":/backup \
        alpine tar -czf "/backup/${volume}_${DATE}.tar.gz" -C /source . \
        && echo "  Success: ${volume}_${DATE}.tar.gz" \
        || echo "  FAILED: $volume"
}

# Backup specific volumes
backup_volume "myapp_postgres_data"
backup_volume "myapp_redis_data"
backup_volume "myapp_uploads"

# Clean old backups
find "$BACKUP_DIR" -name "*.tar.gz" -mtime +$RETENTION_DAYS -delete

echo "Backup complete. Location: $BACKUP_DIR"
```
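To run this on a schedule, a crontab entry works the same way as the rclone example in section 06; the script path and log file here are assumptions:

```bash
# Run the backup script nightly at 02:00 (add via crontab -e)
0 2 * * * /opt/scripts/backup.sh >> /var/log/docker-backup.log 2>&1
```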
08 Test Your Backups

A backup you haven't tested is not a backup. Regularly restore to a test environment to verify integrity.
```bash
# Create a test volume
docker volume create test_restore

# Restore backup to test volume
docker run --rm \
  -v test_restore:/target \
  -v $(pwd)/backups:/backup \
  alpine sh -c "cd /target && tar -xzf /backup/myapp_data_20240115.tar.gz"

# Start a test container with the restored data
docker run --rm -v test_restore:/data alpine ls -la /data

# Clean up
docker volume rm test_restore
```

Schedule quarterly restore tests. Put it in your calendar. Untested backups have a habit of being corrupt when you need them most.
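Between full restore tests, a cheaper sanity check is to list each archive's contents, which forces a full read and fails on truncated or corrupt files. A sketch, with an illustrative filename:

```bash
# A corrupt or truncated .tar.gz fails this listing immediately
tar -tzf backups/myapp_data_20240115.tar.gz > /dev/null && echo "archive OK"
```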