
How to Back Up Docker Volumes Safely

Learn reliable strategies for backing up your Docker volumes without data corruption or downtime.

01 Introduction

Your Docker volumes contain irreplaceable data—databases, configurations, user uploads. A solid backup strategy is non-negotiable. This guide covers methods from simple tar archives to automated backup solutions.

02 Stop Containers or Hot Backup?

The safest backup stops the container first. For databases, a hot backup without stopping is possible but requires database-specific tools. Never just copy database files while the database is running.

Copying PostgreSQL or MySQL data files while the database is running will result in a corrupted backup. Always use proper database dump tools for hot backups.
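Before any file-level backup, it helps to confirm nothing is still writing to the volume. A quick check, assuming a volume named myapp_data:

# List containers currently using the volume (should be empty before a cold backup)
docker ps --filter "volume=myapp_data"

# Stop exactly those containers (GNU xargs; -r skips the stop if none are running)
docker ps -q --filter "volume=myapp_data" | xargs -r docker stop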

03 Basic Volume Backup

The simplest backup uses a temporary container to tar the volume contents. This works for any volume type.
# Stop the container first (safest)
docker compose stop

# Backup a named volume
docker run --rm \
  -v myapp_data:/source:ro \
  -v $(pwd)/backups:/backup \
  alpine tar -czf /backup/myapp_data_$(date +%Y%m%d).tar.gz -C /source .

# Restart
docker compose start

# Restore a backup
docker run --rm \
  -v myapp_data:/target \
  -v $(pwd)/backups:/backup \
  alpine sh -c "cd /target && tar -xzf /backup/myapp_data_20240115.tar.gz"

04 Database-Specific Backups

Databases need special handling. Use built-in dump tools that create consistent snapshots without stopping the database.
# PostgreSQL backup (hot backup, no downtime)
docker exec postgres pg_dumpall -U postgres > backup_$(date +%Y%m%d).sql

# Or compressed
docker exec postgres pg_dump -U postgres dbname | gzip > backup_$(date +%Y%m%d).sql.gz

# MySQL backup (expand the password inside the container, not on the host)
docker exec mysql sh -c 'exec mysqldump --all-databases -uroot -p"$MYSQL_ROOT_PASSWORD"' > backup.sql

# MongoDB backup
docker exec mongo mongodump --archive --gzip > backup_$(date +%Y%m%d).gz

# Restore PostgreSQL
cat backup.sql | docker exec -i postgres psql -U postgres

# Restore MySQL
docker exec -i mysql sh -c 'exec mysql -uroot -p"$MYSQL_ROOT_PASSWORD"' < backup.sql

Database dumps are portable across different Docker hosts and even different database versions (within reason).
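For example, a dump taken from an older PostgreSQL container can typically be loaded straight into a newer one. A minimal sketch, with illustrative container names and password:

# Dump from the old container
docker exec old_postgres pg_dumpall -U postgres > migrate.sql

# Load into a fresh container running a newer major version
# (give the server a few seconds to accept connections first)
docker run -d --name new_postgres -e POSTGRES_PASSWORD=secret postgres:15
cat migrate.sql | docker exec -i new_postgres psql -U postgres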

05 Automated Backup with Docker

Use a backup container that runs on a schedule. Several purpose-built images make this easy.
services:
  # Your application
  postgres:
    image: postgres:15
    volumes:
      - postgres_data:/var/lib/postgresql/data

  # Automated backup container
  backup:
    image: prodrigestivill/postgres-backup-local
    environment:
      - POSTGRES_HOST=postgres
      - POSTGRES_DB=myapp
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=${DB_PASSWORD}
      - SCHEDULE=@daily
      - BACKUP_KEEP_DAYS=7
      - BACKUP_KEEP_WEEKS=4
      - BACKUP_KEEP_MONTHS=6
    volumes:
      - ./backups:/backups
    depends_on:
      - postgres

volumes:
  postgres_data:
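Once the stack is up, confirm the sidecar is actually producing files. This image typically sorts dumps into daily/weekly/monthly subfolders, though the exact layout may vary by version:

# Watch the backup container's logs for scheduled runs
docker compose logs -f backup

# Inspect the host-mounted backup folder
ls -lR ./backups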

06 Offsite Backup Strategies

Local backups protect against container failures but not disk failures or disasters. Send backups offsite using rclone, restic, or cloud storage.
# Using rclone to sync backups to cloud storage
# First, configure rclone: rclone config

# Sync backup folder to remote
rclone sync ./backups remote:docker-backups

# Automated with cron (add to crontab -e)
0 3 * * * /usr/bin/rclone sync /home/user/backups remote:docker-backups

# Using restic for encrypted, deduplicated backups
restic init --repo s3:s3.amazonaws.com/bucket-name/backups
restic backup --repo s3:s3.amazonaws.com/bucket-name/backups ./backups

Restic deduplicates data, so daily backups of large volumes won't consume proportionally large storage.
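You can see the effect by comparing the logical size of your snapshots with the space the repository actually uses, via restic's stats modes (same S3 repository as above):

# Total size of the data as it would be restored
restic --repo s3:s3.amazonaws.com/bucket-name/backups stats --mode restore-size

# Unique data actually stored after deduplication
restic --repo s3:s3.amazonaws.com/bucket-name/backups stats --mode raw-data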

07 Complete Backup Script

Here's a ready-to-use backup script that handles multiple volumes and includes basic error checking.
#!/bin/bash
# backup.sh - Backup all Docker volumes

BACKUP_DIR="/backups/docker"
DATE=$(date +%Y%m%d_%H%M%S)
RETENTION_DAYS=30

# Create backup directory
mkdir -p "$BACKUP_DIR"

# Function to backup a volume
backup_volume() {
  local volume=$1
  echo "Backing up $volume..."
  docker run --rm \
    -v "$volume":/source:ro \
    -v "$BACKUP_DIR":/backup \
    alpine tar -czf "/backup/${volume}_${DATE}.tar.gz" -C /source . \
    && echo "  Success: ${volume}_${DATE}.tar.gz" \
    || echo "  FAILED: $volume"
}

# Backup specific volumes
backup_volume "myapp_postgres_data"
backup_volume "myapp_redis_data"
backup_volume "myapp_uploads"

# Clean old backups
find "$BACKUP_DIR" -name "*.tar.gz" -mtime +$RETENTION_DAYS -delete

echo "Backup complete. Location: $BACKUP_DIR"
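To run it unattended, make the script executable and schedule it with cron. Paths here are examples; point them at wherever you keep the script:

chmod +x /opt/scripts/backup.sh

# crontab -e: run every night at 02:30 and keep a log
30 2 * * * /opt/scripts/backup.sh >> /var/log/docker-backup.log 2>&1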

08 Test Your Backups

A backup you haven't tested is not a backup. Regularly restore to a test environment to verify integrity.
# Create a test volume
docker volume create test_restore

# Restore backup to test volume
docker run --rm \
  -v test_restore:/target \
  -v $(pwd)/backups:/backup \
  alpine sh -c "cd /target && tar -xzf /backup/myapp_data_20240115.tar.gz"

# Start a test container with the restored data
docker run --rm -v test_restore:/data alpine ls -la /data

# Clean up
docker volume rm test_restore

Schedule quarterly restore tests. Put it in your calendar. Untested backups have a habit of being corrupt when you need them most.
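Between full restore tests, a cheap automated check is verifying that every archive is at least a readable gzip stream. This catches truncated or corrupted files, though not logically bad data:

# gzip -t exits non-zero if an archive is corrupt or truncated
for f in ./backups/*.tar.gz; do
  gzip -t "$f" && echo "OK: $f" || echo "CORRUPT: $f"
done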