01 Small Changes, Big Impact
Most Docker Compose performance problems aren't architectural — they're small configuration issues that compound into sluggish deployments. A bloated image that takes 5 minutes to pull. A build that re-downloads dependencies on every code change. A container that swaps because it doesn't have memory limits.
After optimizing Docker setups across dozens of projects, I've found that 80% of performance wins come from five straightforward techniques. None of them require changing your application code or switching tools.
02 Multi-Stage Builds
If you build your own images, multi-stage builds are the single biggest optimization. A typical Node.js image with node_modules and build tools can be 1GB+. A multi-stage build produces a 100-200MB production image:
[Dockerfile]
```dockerfile
# Dockerfile with multi-stage build
# Stage 1: Build (includes devDependencies, build tools)
FROM node:22-alpine AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build

# Stage 2: Production (only runtime dependencies)
FROM node:22-alpine
WORKDIR /app
COPY package*.json ./
RUN npm ci --omit=dev
COPY --from=builder /app/dist ./dist
USER node
CMD ["node", "dist/index.js"]
```

Order your COPY instructions from least to most frequently changed. Docker caches each layer, so copying package.json before the source code means dependencies are only reinstalled when package.json changes.
03 Leveraging Build Cache
Docker's build cache is your best friend for fast iterations. Each instruction in a Dockerfile creates a layer, and Docker reuses cached layers when the inputs haven't changed.
The key insight: once a layer is invalidated, every subsequent layer is rebuilt. So put stable, slow operations (installing system packages, downloading dependencies) early in the Dockerfile, and fast-changing operations (copying source code) last.
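A minimal sketch of this ordering, assuming a generic Node.js service (the anti-pattern is shown commented out for contrast):

```dockerfile
FROM node:22-alpine
WORKDIR /app

# Anti-pattern (commented out): copying all source before installing
# dependencies invalidates the npm ci layer on every code edit.
#   COPY . .
#   RUN npm ci

# Better: stable inputs first. The npm ci layer is reused until
# package*.json itself changes.
COPY package*.json ./
RUN npm ci

# Fast-changing inputs last: a source edit only rebuilds from here down.
COPY . .
```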
For Docker Compose builds, use the cache_from option to pull a previously built image and use its layers as cache, even in CI environments that start with a clean Docker daemon.
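A sketch of cache_from in a compose file (the image name registry.example.com/myapp is a placeholder):

```yaml
services:
  app:
    build:
      context: .
      cache_from:
        # Pull this previously pushed image first; its layers seed the
        # build cache even on a clean CI daemon.
        - registry.example.com/myapp:latest
    image: registry.example.com/myapp:latest
```

Note that with BuildKit, the previously pushed image must contain cache metadata (e.g. built with the BUILDKIT_INLINE_CACHE=1 build argument) for its layers to be reusable as cache.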
04 Use Alpine and Slim Images
Base image choice dramatically affects pull times and disk usage:
Standard Debian-based images: 200-400MB
Slim variants: 80-150MB
Alpine-based images: 30-80MB
Distroless images: 15-50MB
For most services, Alpine variants work perfectly and are published as part of the official images on Docker Hub. Use postgres:16-alpine instead of postgres:16, node:22-alpine instead of node:22, and redis:7-alpine instead of redis:7.
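In a compose file, each swap is a one-word change per service (these tags are the ones named above; relative size savings vary by image):

```yaml
services:
  db:
    image: postgres:16-alpine   # alpine variant of the official postgres image
  cache:
    image: redis:7-alpine       # alpine variant of the official redis image
  app:
    image: node:22-alpine       # alpine variant of the official node image
```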
The size difference matters more than you'd think: on a VPS with 1Gbps bandwidth, pulling a 400MB image takes about 3 seconds. On a home connection with 50Mbps download speed, it takes 64 seconds. Alpine images pull roughly 5x faster.
05 Runtime Performance
Beyond image optimization, these compose-level settings improve runtime performance:
Storage driver: Use overlay2 (the default on modern Docker). If you're on an older system, check with docker info | grep "Storage Driver".
Volume performance: Named volumes are faster than bind mounts for database and application data on macOS and Windows (bind mounts use a translation layer). On Linux, performance is identical.
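A sketch contrasting the two mount types (service, path, and volume names are illustrative):

```yaml
services:
  db:
    image: postgres:16-alpine
    volumes:
      # Named volume: managed by Docker, fast on all platforms
      - pgdata:/var/lib/postgresql/data
  app:
    image: node:22-alpine
    volumes:
      # Bind mount: convenient for live-editing source in development,
      # but goes through a translation layer on macOS and Windows
      - ./src:/app/src

volumes:
  pgdata:
```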
Logging: Default JSON logging can consume significant disk I/O for high-volume services. Set a max-size and max-file to prevent log files from filling your disk, or switch to the journald or local logging drivers for better performance.
Networking: If you don't need network isolation between specific services, putting them on a single shared network lets containers resolve each other through Docker's embedded DNS and communicate directly, instead of routing traffic through ports published on the host.
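A sketch of two services sharing one user-defined network, so app can reach the database simply as db via Docker's embedded DNS (myapp:latest and the network name are placeholders):

```yaml
services:
  app:
    image: myapp:latest
    networks: [backend]
  db:
    image: postgres:16-alpine
    networks: [backend]

networks:
  backend:   # user-defined bridge network; provides service-name DNS
```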
These optimizations are especially impactful on resource-constrained hosts like Raspberry Pi or small VPS instances. Check our recipes for optimized configurations tailored to lower-resource environments.
[docker-compose.yml]
```yaml
services:
  app:
    image: myapp:latest
    logging:
      driver: "json-file"
      options:
        max-size: "10m"
        max-file: "3"
    # Faster DNS resolution for containers on the same network
    dns:
      - 127.0.0.11  # Docker's embedded DNS
```
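The intro mentioned containers swapping for lack of memory limits; a minimal sketch of capping a service's memory in a compose file (the 512m figure is an illustrative value, not a recommendation):

```yaml
services:
  app:
    image: myapp:latest
    deploy:
      resources:
        limits:
          # Hard cap on container memory; recent Compose versions honor
          # deploy.resources.limits with plain `docker compose up`.
          memory: 512m
```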