docker.recipes · Fundamentals · 7 min read

CPU and Memory Limits in Docker Compose

Prevent runaway containers from crashing your server with proper resource constraints.

01 Why Resource Limits Matter

Without limits, a single misbehaving container can consume all your server's RAM or CPU, taking down everything else. This is especially critical for homelabs where you're running multiple services on limited hardware. Resource limits protect your entire stack from one bad actor.

A container without memory limits can trigger the Linux OOM killer, which may terminate random processes—including other containers.

02 Memory Limits and Reservations

Docker offers two memory controls: **limits** (hard cap) and **reservations** (soft guarantee). Limits prevent a container from using more than specified. Reservations tell Docker to prefer giving that much memory when resources are constrained.
services:
  app:
    image: myapp:latest
    deploy:
      resources:
        limits:
          memory: 512M        # Hard cap - container killed if exceeded
        reservations:
          memory: 256M        # Soft guarantee - Docker tries to provide this

  database:
    image: postgres:16-alpine
    deploy:
      resources:
        limits:
          memory: 1G
        reservations:
          memory: 512M

  cache:
    image: redis:alpine
    deploy:
      resources:
        limits:
          memory: 128M
        reservations:
          memory: 64M

Set limits based on peak usage, reservations based on typical usage. Monitor your containers to find these values.
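
Recent Docker Compose releases apply deploy.resources directly, but some older docker-compose versions only honor the deploy block in Swarm mode (or behind the --compatibility flag). If that's your situation, the same constraints can be written as service-level keys. A minimal sketch, assuming your Compose version supports these keys:

services:
  app:
    image: myapp:latest
    mem_limit: 512M          # hard cap, equivalent to deploy.resources.limits.memory
    mem_reservation: 256M    # soft guarantee
    cpus: 0.5                # fractional CPU limit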

03 CPU Limits and Shares

CPU limits use 'cpus' (fractional cores) and 'cpu_shares' (relative weight). A limit of 0.5 means 50% of one core. CPU shares only matter when there's contention—a container with 1024 shares gets twice the CPU of one with 512 shares when both are competing.
services:
  # CPU-intensive service: limit to 2 cores
  video-encoder:
    image: encoder:latest
    deploy:
      resources:
        limits:
          cpus: '2.0'         # Max 2 full CPU cores
        reservations:
          cpus: '1.0'         # Guarantee at least 1 core

  # Web app: lightweight, share CPU fairly
  webapp:
    image: webapp:latest
    deploy:
      resources:
        limits:
          cpus: '0.5'         # Max half a core
          memory: 256M
        reservations:
          cpus: '0.25'
          memory: 128M

  # Background worker: low priority
  worker:
    image: worker:latest
    deploy:
      resources:
        limits:
          cpus: '1.0'
          memory: 512M
        reservations:
          cpus: '0.1'         # Minimal guarantee
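
The example above only uses cpus; cpu_shares goes at the service level rather than under deploy. A minimal sketch of the 1024-versus-512 weighting described earlier (the service names are just placeholders):

services:
  webapp:
    image: webapp:latest
    cpu_shares: 1024   # under contention, gets roughly 2x the CPU of worker
  worker:
    image: worker:latest
    cpu_shares: 512    # lower relative weight; has no effect while CPU is idle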

04 Understanding OOM Behavior

When a container exceeds its memory limit, the kernel's OOM killer terminates it. The restart policy determines what happens next. Understanding this helps you tune limits appropriately.
services:
  app:
    image: myapp:latest
    restart: unless-stopped   # Restart after OOM kill
    deploy:
      resources:
        limits:
          memory: 512M
    # Optional: Disable OOM killer (container pauses instead of dies)
    # Only use if you REALLY know what you're doing
    # oom_kill_disable: true

  # For Java apps: also limit JVM heap
  java-app:
    image: java-app:latest
    deploy:
      resources:
        limits:
          memory: 1G
    environment:
      # Set JVM heap to 75% of container limit
      JAVA_OPTS: "-Xmx768m -Xms512m"

Check OOM kills with: docker inspect container_name | grep -i oomkilled

Runtimes such as the JVM, Python, and Node.js don't necessarily size their heaps from the container limit. Configure their memory settings to stay under it.
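
For Node.js, the usual approach is to cap the V8 heap below the container limit. A sketch assuming a 512M limit; the image name and the 384 MB figure are illustrative, not part of the stack above:

services:
  node-app:
    image: node-app:latest
    deploy:
      resources:
        limits:
          memory: 512M
    environment:
      # Cap V8's old-space heap below the container limit,
      # leaving headroom for buffers and native allocations
      NODE_OPTIONS: "--max-old-space-size=384"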

05 Monitoring Resource Usage

You can't set good limits without knowing actual usage. Monitor your containers to establish baselines.
# Real-time stats for all containers
docker stats

# Stats for specific containers
docker stats webapp database redis

# One-time snapshot (for scripting)
docker stats --no-stream --format "table {{.Name}}\t{{.CPUPerc}}\t{{.MemUsage}}"

# Show a container's configured memory limit (in bytes)
docker inspect --format='{{.HostConfig.Memory}}' container_name

# Using ctop for better visualization
docker run --rm -it \
  --name ctop \
  -v /var/run/docker.sock:/var/run/docker.sock:ro \
  quay.io/vektorlab/ctop:latest

Run 'docker stats' during peak usage times to find your actual memory high-water marks.
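
If you'd rather not watch the terminal, a small loop can record snapshots for later review. A sketch; the 60-second interval and the stats.log path are arbitrary choices:

# Append a timestamped snapshot every 60 seconds; scan the log later for peaks
while true; do
  date >> stats.log
  docker stats --no-stream \
    --format "{{.Name}}\t{{.CPUPerc}}\t{{.MemUsage}}" >> stats.log
  sleep 60
done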

06 Practical Limit Guidelines

Starting points for common self-hosted services. Adjust based on your actual usage patterns.

**Databases:**
- PostgreSQL: 256M-1G (depends on data size)
- MySQL: 512M-2G
- Redis: 64M-256M (depends on dataset)
- MongoDB: 512M-2G

**Web Apps:**
- Static sites/Nginx: 32M-64M
- PHP apps: 128M-512M
- Node.js: 256M-512M
- Java apps: 512M-2G

**Media:**
- Plex: 2G-4G (transcoding)
- Jellyfin: 1G-4G
- Photoprism: 2G-4G

**Monitoring:**
- Prometheus: 512M-2G
- Grafana: 128M-256M
# Example homelab stack with limits
services:
  traefik:
    image: traefik:v2.10
    deploy:
      resources:
        limits: { memory: 128M, cpus: '0.5' }
        reservations: { memory: 64M, cpus: '0.1' }

  nextcloud:
    image: nextcloud:latest
    deploy:
      resources:
        limits: { memory: 512M, cpus: '1.0' }
        reservations: { memory: 256M, cpus: '0.25' }

  postgres:
    image: postgres:16-alpine
    deploy:
      resources:
        limits: { memory: 512M, cpus: '1.0' }
        reservations: { memory: 256M, cpus: '0.25' }

  redis:
    image: redis:alpine
    deploy:
      resources:
        limits: { memory: 64M, cpus: '0.25' }
        reservations: { memory: 32M, cpus: '0.1' }
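
Nextcloud and Postgres above share the same resource profile, and YAML anchors with extension fields let you define it once. A sketch; the x-limits-medium name is an arbitrary label, not part of the stack above:

x-limits-medium: &limits-medium
  resources:
    limits: { memory: 512M, cpus: '1.0' }
    reservations: { memory: 256M, cpus: '0.25' }

services:
  nextcloud:
    image: nextcloud:latest
    deploy: *limits-medium    # reuses the shared profile

  postgres:
    image: postgres:16-alpine
    deploy: *limits-medium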