$docker.recipes
12 min read · Updated February 2026

Centralized Logging with Loki, Grafana, and Promtail in Docker Compose

Stop grepping through individual container logs. Set up a centralized logging stack with Loki, Promtail, and Grafana using Docker Compose — lightweight and production-ready.

logging · loki · grafana · monitoring · docker-compose

01 Why Centralized Logging Matters

When you're running 3-5 Docker containers, docker compose logs -f works fine. You can grep through output, scroll back, and find what you need. But once you cross 10-15 services, this approach falls apart.

Last year, I spent 45 minutes tracking down an intermittent 502 error in my stack. The cause was a database connection timeout that only showed up in the app logs — not in the reverse proxy logs I was staring at. I had to run docker compose logs service-name against six different services before finding it.

That afternoon I set up Loki. Since then, I can search all my logs from a single Grafana dashboard; filter by service, time range, and log level; and set up alerts for specific error patterns. The entire stack uses about 300MB of RAM — a fraction of what the ELK stack (Elasticsearch, Logstash, Kibana) would consume. If you're running more than a handful of services, centralized logging isn't a luxury — it's a necessity. Here's how to set it up.

02 How the Loki Stack Works

The Loki stack has three components, each with a single responsibility:

Promtail is the agent that collects logs. It tails log files, Docker container logs, or systemd journal entries and ships them to Loki. Think of it as the "shipper" — it reads logs from your containers and forwards them with labels attached.

Loki is the log database. Unlike Elasticsearch, Loki doesn't index the full text of your logs. Instead, it indexes only the metadata (labels like service name, container ID, and log level) and stores the log lines as compressed chunks. This is why it uses so much less memory and storage than ELK.

Grafana is the query interface. It connects to Loki as a data source and lets you search, filter, and visualize logs using LogQL — Loki's query language, which is intentionally similar to PromQL if you're already using Prometheus.

The data flow: containers write to stdout/stderr → Docker stores logs as JSON files → Promtail reads those files and adds labels → Promtail ships to Loki → you query through Grafana.
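To make the wire format concrete, here is a minimal sketch in Python of the JSON payload that Promtail (or any client) POSTs to Loki's push endpoint, /loki/api/v1/push. The payload shape is Loki's push-API format; the label names and the log line itself are made-up examples.

```python
import json
import time

def build_push_payload(labels: dict, lines: list) -> dict:
    """Build a Loki push-API payload: one stream per label set,
    each log line paired with a nanosecond Unix timestamp."""
    ts = str(time.time_ns())
    return {
        "streams": [
            {
                "stream": labels,  # indexed metadata (the labels)
                "values": [[ts, line] for line in lines],  # stored as chunks
            }
        ]
    }

payload = build_push_payload(
    {"service": "app", "container": "myproject-app-1"},  # example labels
    ['level=error msg="db connection timeout"'],          # example log line
)
print(json.dumps(payload, indent=2))
```

This is exactly the split described above: only the small "stream" object is indexed, while the "values" entries are compressed and stored as chunks.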

Grafana Alloy (formerly Grafana Agent) is replacing Promtail as the recommended collector. It supports logs, metrics, and traces in a single agent. For new setups, consider using Alloy instead of Promtail. For simplicity, this guide uses Promtail since it's still widely documented and works perfectly for log-only collection.

03 Docker Compose Configuration

Here's the complete Docker Compose setup. Loki stores data in a local volume, Promtail reads Docker's log files, and Grafana connects to Loki as a data source (added manually in the UI, or provisioned automatically at startup).
[docker-compose.yml]
services:
  loki:
    image: grafana/loki:3.4
    command: -config.file=/etc/loki/local-config.yaml
    volumes:
      - loki-data:/loki
    ports:
      - "3100:3100"
    restart: unless-stopped
    healthcheck:
      test: ["CMD", "wget", "--quiet", "--tries=1", "--output-document=-", "http://localhost:3100/ready"]
      interval: 15s
      timeout: 5s
      retries: 5

  promtail:
    image: grafana/promtail:3.4
    volumes:
      - ./promtail-config.yml:/etc/promtail/config.yml:ro
      - /var/lib/docker/containers:/var/lib/docker/containers:ro
      - /var/run/docker.sock:/var/run/docker.sock:ro
    command: -config.file=/etc/promtail/config.yml
    depends_on:
      loki:
        condition: service_healthy
    restart: unless-stopped

  grafana:
    image: grafana/grafana:11.5.2
    environment:
      - GF_SECURITY_ADMIN_PASSWORD=${GRAFANA_PASSWORD:-admin}
      - GF_AUTH_ANONYMOUS_ENABLED=false
    volumes:
      - grafana-data:/var/lib/grafana
    ports:
      - "3000:3000"
    depends_on:
      loki:
        condition: service_healthy
    restart: unless-stopped

volumes:
  loki-data:
  grafana-data:
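If you want the Loki data source wired up automatically rather than clicking through the Grafana UI, Grafana can provision it at startup from a YAML file. A minimal sketch using Grafana's provisioning format — the file name and host path here are my choice; mount it into the grafana service as ./grafana-datasources.yml:/etc/grafana/provisioning/datasources/loki.yml:ro.

```yaml
# grafana-datasources.yml — Grafana data source provisioning
apiVersion: 1
datasources:
  - name: Loki
    type: loki
    access: proxy
    url: http://loki:3100   # service name on the compose network
    isDefault: true
```

With this mounted, the Explore view works against Loki immediately after docker compose up.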

04 Configuring Promtail for Docker

Promtail needs a configuration file that tells it where to find logs and how to label them. This configuration auto-discovers Docker containers and extracts useful labels like container name and compose service.
[promtail-config.yml]
server:
  http_listen_port: 9080
  grpc_listen_port: 0

positions:
  filename: /tmp/positions.yaml

clients:
  - url: http://loki:3100/loki/api/v1/push

scrape_configs:
  - job_name: docker
    docker_sd_configs:
      - host: unix:///var/run/docker.sock
        refresh_interval: 5s
    relabel_configs:
      # Extract container name (Docker prefixes it with a slash)
      - source_labels: ['__meta_docker_container_name']
        regex: '/(.*)'
        target_label: 'container'
      # Extract compose service name
      - source_labels: ['__meta_docker_container_label_com_docker_compose_service']
        target_label: 'service'
      # Extract compose project name
      - source_labels: ['__meta_docker_container_label_com_docker_compose_project']
        target_label: 'project'
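Promtail can also filter at the source. As a sketch, a pipeline_stages block added to the docker scrape config (at the same level as relabel_configs) uses Promtail's drop stage to discard matching lines before they are ever shipped to Loki — the expression here is illustrative:

```yaml
    pipeline_stages:
      # Drop healthcheck noise before it reaches Loki
      - drop:
          expression: ".*healthcheck.*"
```

Dropping at ingest keeps both network traffic and Loki's storage down, whereas a LogQL != filter only hides the lines at query time.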

Watch your log volume. A single verbose service can generate gigabytes of logs per day. Limit Docker's log file sizes: per container, pass --log-opt max-size=50m --log-opt max-file=3 to docker run; to apply the limits daemon-wide, set "log-opts": {"max-size": "50m", "max-file": "3"} in /etc/docker/daemon.json. In Loki, configure retention to auto-delete old logs — 7-14 days is usually sufficient for troubleshooting.
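In Compose, the same limits can be set per service with the logging key — the service name here is a placeholder:

```yaml
services:
  app:
    logging:
      driver: json-file   # keep json-file so Promtail can read the logs
      options:
        max-size: "50m"
        max-file: "3"
```

Note that Promtail reads Docker's json-file logs from disk, so switching a service to a different log driver would take it out of this pipeline.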

05 Querying Logs with LogQL

Once Grafana is running (http://localhost:3000) and Loki is added as a data source, you can start querying. LogQL syntax is straightforward — start with a stream selector in curly braces and optionally add filters.
[logql-examples]
# Basic: show all logs from a specific service
{service="nginx"}

# Filter by keyword
{service="app"} |= "error"

# Exclude noisy lines
{service="app"} != "healthcheck"

# Regex matching (backticks avoid double-escaping the backslash)
{service="app"} |~ `status=(4|5)\d{2}`

# Parse JSON logs and filter by field
{service="api"} | json | level="error"

# Count errors per service over time (for dashboards)
sum(rate({service=~".+"} |= "error" [5m])) by (service)

# Top 10 noisiest services
topk(10, sum(rate({service=~".+"} [1h])) by (service))
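You can also run these queries outside Grafana, against Loki's HTTP API. A minimal sketch in Python that builds a request URL for the /loki/api/v1/query_range endpoint — the endpoint and parameter names are Loki's; the host and query values are examples:

```python
from urllib.parse import urlencode

def loki_query_url(base: str, query: str, start_ns: int, end_ns: int,
                   limit: int = 100) -> str:
    """Build a URL for Loki's /loki/api/v1/query_range endpoint.
    start_ns/end_ns are Unix timestamps in nanoseconds."""
    params = urlencode({
        "query": query,   # a LogQL expression, URL-encoded
        "start": start_ns,
        "end": end_ns,
        "limit": limit,
    })
    return f"{base}/loki/api/v1/query_range?{params}"

url = loki_query_url("http://localhost:3100", '{service="app"} |= "error"',
                     1700000000000000000, 1700003600000000000)
print(url)
```

Fetching that URL (e.g. with curl or requests) returns matching log lines as JSON, which is handy for scripting checks that don't need a dashboard.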

06 Production Considerations

I've been running this stack for 14 months across two servers. Here's what I've learned:

Storage sizing: Loki is remarkably efficient. My stack generates about 2GB of raw logs per day across 18 services, and Loki compresses that to about 200MB on disk. A 50GB volume gives me roughly 8 months of retention, but I keep only 30 days since I rarely need logs older than a week.

Retention: Configure retention in Loki's config to auto-delete old chunks. Without retention, your disk will eventually fill up. I use 30 days for general logs and 90 days for security-related services (reverse proxy access logs, auth service logs).

Alerting: Set up Grafana alerts for patterns that matter. I alert on more than 10 error-level logs per minute from any service, any OOM-killed containers, and SSL certificate expiration warnings. These three rules have caught issues before they became outages.

Resource usage: The entire stack (Loki + Promtail + Grafana) uses about 300-400MB of RAM on my server. Compare that to an ELK stack, which easily needs 4-8GB. For a self-hosted environment, the Loki stack is the clear winner on resource efficiency.

Multi-server: If you run services across multiple machines, install Promtail on each server and point them all at a single Loki instance. This gives you cross-server log search from one Grafana dashboard. Use Tailscale or WireGuard to secure the connection between servers.
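Retention itself is configured in Loki, not Grafana. A sketch of the relevant fragment of Loki's config file, matching a 30-day policy — exact fields and defaults vary between Loki versions, so treat this as a starting point and check it against the docs for the version you run:

```yaml
limits_config:
  retention_period: 720h   # 30 days

compactor:
  working_directory: /loki/compactor
  retention_enabled: true
  delete_request_store: filesystem   # required when retention is enabled in Loki 3.x
```

With retention enabled, the compactor periodically deletes chunks older than retention_period, so the disk usage numbers above stay bounded.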

About the Author

Frank Pegasus

DevOps engineer and self-hosting enthusiast with over a decade of experience running containerized workloads in production. Creator of docker.recipes.