$docker.recipes
9 min read · Updated February 2026

Setting Resource Limits in Docker Compose: CPU, Memory, and Storage

How to prevent one runaway container from taking down your entire server with proper resource limits, reservations, and ulimits.

docker-compose · resources · production · performance

01. The 3 AM Wake-Up Call

My monitoring dashboard went red at 3 AM because a Node.js service had a memory leak. Without resource limits, it consumed all 32GB of RAM on the host, triggering the Linux OOM killer, which decided to kill my PostgreSQL container. That took down every service that depended on the database. A 5-line addition to my compose file would have contained the damage: the Node.js service would have crashed and restarted (thanks to its restart policy), and everything else would have kept running. Resource limits are the seatbelts of Docker deployments. You don't notice them until the one time they save you from disaster.
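That five-line guard looks roughly like the sketch below (the service and image names are hypothetical; the restart policy is shown for context, since it is what brings the container back after the kill):

```yaml
services:
  node-app:                   # hypothetical name for the leaking service
    image: mynode:latest      # placeholder image
    restart: unless-stopped   # restarts the container after an OOM kill
    deploy:
      resources:
        limits:
          memory: 1G          # the leak hits this cap; only this container dies
```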

02. Memory Limits and Reservations

Memory limits are the most important resource constraint. A memory limit caps how much RAM a container can use. If it exceeds the limit, the kernel's OOM killer terminates the container (which triggers a restart if you have a restart policy). Memory reservations are a soft limit — Docker tries to ensure the container has at least this much memory available but allows bursting above it when the host has spare capacity.
[docker-compose.yml]
services:
  app:
    image: myapp:latest
    deploy:
      resources:
        limits:
          memory: 1G        # Hard cap: container is killed above this
        reservations:
          memory: 256M      # Guaranteed minimum

  db:
    image: postgres:16-alpine
    deploy:
      resources:
        limits:
          memory: 2G
        reservations:
          memory: 512M
    # PostgreSQL-specific memory tuning
    command: >
      postgres
      -c shared_buffers=512MB
      -c effective_cache_size=1536MB
      -c work_mem=16MB

Set database memory limits generously — databases use RAM for caching, and starving them of memory causes dramatic performance degradation. A good rule: set the limit to 2x your expected usage.
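If you suspect a container is getting killed at its cap, the container state records the OOM kill. A quick check (assumes the Docker CLI; replace db-1 with your actual container name):

```shell
# Was this container ever OOM-killed? Prints true or false.
docker inspect --format '{{.State.OOMKilled}}' db-1

# Exit code 137 (SIGKILL) from the last run is another common sign of an OOM kill.
docker inspect --format '{{.State.ExitCode}}' db-1
```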

03. CPU Limits

CPU limits prevent a container from monopolizing CPU cores. They're less critical than memory limits (CPU contention slows things down but doesn't cause crashes), but still important for multi-service hosts:
[docker-compose.yml]
services:
  # CPU-intensive service: allow 2 full cores
  ml-worker:
    image: myapp-ml:latest
    deploy:
      resources:
        limits:
          cpus: "2.0"
        reservations:
          cpus: "0.5"

  # Light service: limit to half a core
  dashboard:
    image: homepage:latest
    deploy:
      resources:
        limits:
          cpus: "0.5"
          memory: 256M

04. Practical Guidelines

Here are the resource limits I use for common services (on a 32GB / 8-core host):

- Databases (PostgreSQL, MySQL): 2-4GB memory, 2 CPUs
- Application servers (Node, Python, Go): 512M-1G memory, 1-2 CPUs
- Reverse proxy (Traefik, Caddy): 256M memory, 0.5 CPU
- Monitoring (Prometheus): 1G memory, 1 CPU
- Dashboards (Grafana, Homepage): 256M memory, 0.5 CPU
- Cache (Redis): 256-512M memory, 0.5 CPU
- Media server (Jellyfin): 2-4G memory, 4 CPUs (for transcoding)
- AI/ML (Ollama): 8-16G memory, all available CPUs

Start conservative and increase limits if you see containers hitting their caps. docker stats shows real-time resource usage for all containers — run it periodically to understand your actual resource consumption. The key principle: limits should prevent catastrophic failures, not constrain normal operation. Set them above peak usage but below "something is clearly wrong" usage. Check our recipes for service-specific resource recommendations.
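The docker stats check above can be taken as a one-shot snapshot rather than the live-updating view, which is handier for periodic review (assumes the Docker CLI; the format fields are standard Go-template names from the stats command):

```shell
# Print a single snapshot of per-container CPU and memory usage, then exit.
docker stats --no-stream \
  --format "table {{.Name}}\t{{.CPUPerc}}\t{{.MemUsage}}\t{{.MemPerc}}"
```

Comparing the MemUsage column against your configured limits tells you how much headroom each service actually has.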

About the Author

Frank Pegasus

DevOps engineer and self-hosting enthusiast with over a decade of experience running containerized workloads in production. Creator of docker.recipes.