01 The 3 AM Wake-Up Call
My monitoring dashboard went red at 3 AM because a Node.js service had a memory leak. Without resource limits, it consumed all 32GB of RAM on the host, triggering the Linux OOM killer, which decided to kill my PostgreSQL container. That took down every service that depended on the database.
A 5-line addition to my compose file would have contained the damage: the Node.js service would have crashed and restarted (thanks to its restart policy), and everything else would have kept running.
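As a sketch of what that addition looks like (service and image names here are hypothetical, not from my actual stack), it amounts to a memory limit on the leaky service plus a restart policy:

```yaml
services:
  node-app:                    # hypothetical name for the leaky Node.js service
    image: my-node-app:latest  # placeholder image
    restart: unless-stopped    # bring the service back after it is killed
    deploy:
      resources:
        limits:
          memory: 1G           # the leak is contained: only this container dies
```

With this in place, the leak hits the 1G cap, the kernel kills only this container, and the restart policy brings it back, while PostgreSQL and everything else keep running.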
Resource limits are the seatbelts of Docker deployments. You don't notice them until the one time they save you from disaster.
02 Memory Limits and Reservations
Memory limits are the most important resource constraint. A memory limit caps how much RAM a container can use. If it exceeds the limit, Docker kills the container (which triggers a restart if you have a restart policy).
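When a container dies, you can check whether it was killed for exceeding its memory limit rather than crashing on its own by inspecting its state (the container name `app` is a placeholder):

```shell
# OOMKilled is true when the kernel killed the container for exceeding
# its memory limit; exit code 137 (128 + SIGKILL) is the usual symptom.
docker inspect --format '{{.State.OOMKilled}} {{.State.ExitCode}}' app
```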
Memory reservations are a soft limit — Docker tries to ensure the container has at least this much memory available but allows bursting above it when the host has spare capacity.
[docker-compose.yml]
```yaml
services:
  app:
    image: myapp:latest
    deploy:
      resources:
        limits:
          memory: 1G      # Hard cap: container is killed above this
        reservations:
          memory: 256M    # Guaranteed minimum

  db:
    image: postgres:16-alpine
    deploy:
      resources:
        limits:
          memory: 2G
        reservations:
          memory: 512M
    # PostgreSQL-specific memory tuning
    command: >
      postgres
      -c shared_buffers=512MB
      -c effective_cache_size=1536MB
      -c work_mem=16MB
```
Set database memory limits generously — databases use RAM for caching, and starving them of memory causes dramatic performance degradation. A good rule: set the limit to 2x your expected usage.
03 CPU Limits
CPU limits prevent a container from monopolizing CPU cores. They're less critical than memory limits (CPU contention slows things down but doesn't cause crashes), but still important for multi-service hosts:
[docker-compose.yml]
```yaml
services:
  # CPU-intensive service: allow 2 full cores
  ml-worker:
    image: myapp-ml:latest
    deploy:
      resources:
        limits:
          cpus: "2.0"
        reservations:
          cpus: "0.5"

  # Light service: limit to half a core
  dashboard:
    image: homepage:latest
    deploy:
      resources:
        limits:
          cpus: "0.5"
          memory: 256M
```
04 Practical Guidelines
Here are the resource limits I use for common services (on a 32GB / 8-core host):
Databases (PostgreSQL, MySQL): 2-4G memory, 2 CPUs
Application servers (Node, Python, Go): 512M-1G memory, 1-2 CPUs
Reverse proxy (Traefik, Caddy): 256M memory, 0.5 CPU
Monitoring (Prometheus): 1G memory, 1 CPU
Dashboards (Grafana, Homepage): 256M memory, 0.5 CPU
Cache (Redis): 256-512M memory, 0.5 CPU
Media server (Jellyfin): 2-4G memory, 4 CPUs (for transcoding)
AI/ML (Ollama): 8-16G memory, all available CPUs
Start conservative and raise limits if you see containers hitting their caps. The docker stats command shows real-time resource usage for all containers; run it periodically to understand your actual consumption.
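For a quick one-shot snapshot instead of the live view, the format flags below are handy (this is a generic invocation, not tied to any specific stack):

```shell
# --no-stream prints one snapshot and exits; the table shows each
# container's memory usage against its limit and its CPU percentage.
docker stats --no-stream \
  --format "table {{.Name}}\t{{.MemUsage}}\t{{.MemPerc}}\t{{.CPUPerc}}"
```

A container whose MemPerc sits near 100% most of the day needs a higher limit; one idling at 5% can safely be tightened.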
The key principle: limits should prevent catastrophic failures, not constrain normal operation. Set them above peak usage but below "something is clearly wrong" usage. Check our recipes for service-specific resource recommendations.