
Sentry Error Tracking

advanced

Self-hosted error tracking and performance monitoring platform.

Overview

Sentry is an open-source application monitoring platform that provides real-time error tracking, performance monitoring, and release health insights for applications across many programming languages. Created by David Cramer in 2008 and later spun out of Disqus as its own company, Sentry has become the de facto standard for error tracking in modern software development, offering both cloud-hosted and self-hosted options that help developers identify, diagnose, and resolve issues before they impact users.

This Docker stack combines Sentry's web interface, worker processes, and cron scheduler with the backend infrastructure the platform depends on: PostgreSQL for metadata storage, Redis for caching and session management, Apache Kafka for event streaming, ClickHouse for high-performance analytics, and Snuba as the query layer that bridges Sentry's data model with ClickHouse's analytical capabilities. The architecture is designed for high-throughput error ingestion and fast query performance during debugging, making it suitable for organizations processing millions of events per day.

This self-hosted deployment suits teams that need data sovereignty, custom integrations, or cost control compared with Sentry's cloud offering, while retaining the full feature set: error grouping, performance monitoring, release tracking, and team collaboration tools.
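
With this many moving parts, a quick post-start sanity check helps confirm the pipeline is wired together. The commands below are a rough sketch that assumes the container and service names used in the compose file further down this page:

terminal
# Minimal sanity checks for the backing services (names assumed from this recipe)
docker compose ps                                                    # all services should show "running"
docker exec sentry-postgres pg_isready -U sentry                     # PostgreSQL accepting connections
docker exec sentry-redis redis-cli ping                              # expect "PONG"
docker exec sentry-clickhouse clickhouse-client --query "SELECT 1"   # ClickHouse answering queries
docker exec sentry-kafka kafka-topics --bootstrap-server kafka:9092 --list   # Kafka broker reachable
docker compose logs --tail=20 snuba                                  # Snuba should show no connection errors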

Key Features

  • Real-time error tracking with intelligent grouping and deduplication across multiple programming languages
  • Performance monitoring with transaction traces, database query analysis, and web vitals tracking
  • Release health monitoring with crash-free session rates and adoption tracking
  • Source map support for JavaScript applications enabling readable stack traces in production
  • Contextual breadcrumbs that capture user actions and system events leading up to errors
  • Flexible alerting system with integrations for Slack, PagerDuty, Jira, and custom webhooks
  • Session replay functionality for reproducing user sessions that encountered errors
  • High-throughput event processing using Kafka for reliable message queuing and ClickHouse for analytical storage

Common Use Cases

  • Enterprise organizations requiring self-hosted error tracking for compliance and data sovereignty
  • Development teams processing high volumes of errors and performance data (>1M events/day)
  • Companies wanting to avoid per-event pricing of cloud-hosted solutions
  • Organizations with custom integration requirements for existing monitoring infrastructure
  • Startups and scale-ups needing comprehensive debugging capabilities without vendor lock-in
  • DevOps teams implementing observability strategies across microservices architectures
  • Software companies providing error tracking as part of their platform offerings

Prerequisites

  • Minimum 8GB RAM for the full stack (Sentry requires 4GB+, ClickHouse needs 2GB+, Kafka 1GB+); a quick pre-flight check is sketched after this list
  • 50GB+ available disk space for event storage and database growth
  • Port 9000 available for Sentry web interface access
  • Understanding of Sentry project configuration and SDK integration
  • Familiarity with PostgreSQL database administration for backup and maintenance
  • Basic knowledge of Kafka and ClickHouse for troubleshooting data pipeline issues
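
The following is a rough pre-flight check matching the numbers above; treat it as a sketch and adjust the thresholds to your environment:

terminal
# Rough pre-flight check before starting the stack
free -g | awk '/^Mem:/ {print "Total RAM (GB):", $2}'     # want 8+ GB
df -h . | awk 'NR==2 {print "Free disk on this filesystem:", $4}'   # want 50+ GB
ss -ltn | grep -q ':9000 ' \
  && echo "Port 9000 already in use" \
  || echo "Port 9000 is free"
docker --version && docker compose version                 # confirm Docker and Compose v2 are installed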

For development & testing only: review security settings, change default credentials, and test thoroughly before production use.

docker-compose.yml

docker-compose.yml
services:
  sentry-web:
    image: getsentry/sentry:latest
    container_name: sentry-web
    environment:
      - SENTRY_SECRET_KEY=${SECRET_KEY}
      - SENTRY_POSTGRES_HOST=postgres
      - SENTRY_POSTGRES_PORT=5432
      - SENTRY_DB_NAME=sentry
      - SENTRY_DB_USER=sentry
      - SENTRY_DB_PASSWORD=${DB_PASSWORD}
      - SENTRY_REDIS_HOST=redis
      - SNUBA=http://snuba:1218
    volumes:
      - sentry-data:/data
    ports:
      - "9000:9000"
    depends_on:
      - postgres
      - redis
      - kafka
      - snuba
    networks:
      - sentry-network
    restart: unless-stopped

  sentry-worker:
    image: getsentry/sentry:latest
    container_name: sentry-worker
    command: run worker
    environment:
      - SENTRY_SECRET_KEY=${SECRET_KEY}
      - SENTRY_POSTGRES_HOST=postgres
      - SENTRY_DB_NAME=sentry
      - SENTRY_DB_USER=sentry
      - SENTRY_DB_PASSWORD=${DB_PASSWORD}
      - SENTRY_REDIS_HOST=redis
    volumes:
      - sentry-data:/data
    depends_on:
      - sentry-web
    networks:
      - sentry-network
    restart: unless-stopped

  sentry-cron:
    image: getsentry/sentry:latest
    container_name: sentry-cron
    command: run cron
    environment:
      - SENTRY_SECRET_KEY=${SECRET_KEY}
      - SENTRY_POSTGRES_HOST=postgres
      - SENTRY_DB_NAME=sentry
      - SENTRY_DB_USER=sentry
      - SENTRY_DB_PASSWORD=${DB_PASSWORD}
      - SENTRY_REDIS_HOST=redis
    volumes:
      - sentry-data:/data
    depends_on:
      - sentry-web
    networks:
      - sentry-network
    restart: unless-stopped

  postgres:
    image: postgres:15-alpine
    container_name: sentry-postgres
    environment:
      - POSTGRES_USER=sentry
      - POSTGRES_PASSWORD=${DB_PASSWORD}
      - POSTGRES_DB=sentry
    volumes:
      - postgres-data:/var/lib/postgresql/data
    networks:
      - sentry-network
    restart: unless-stopped

  redis:
    image: redis:7-alpine
    container_name: sentry-redis
    volumes:
      - redis-data:/data
    networks:
      - sentry-network
    restart: unless-stopped

  kafka:
    image: confluentinc/cp-kafka:latest
    container_name: sentry-kafka
    environment:
      - KAFKA_ZOOKEEPER_CONNECT=zookeeper:2181
      - KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://kafka:9092
      - KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR=1
    volumes:
      - kafka-data:/var/lib/kafka/data
    depends_on:
      - zookeeper
    networks:
      - sentry-network
    restart: unless-stopped

  zookeeper:
    image: confluentinc/cp-zookeeper:latest
    container_name: sentry-zookeeper
    environment:
      - ZOOKEEPER_CLIENT_PORT=2181
    volumes:
      - zookeeper-data:/var/lib/zookeeper/data
    networks:
      - sentry-network
    restart: unless-stopped

  clickhouse:
    image: clickhouse/clickhouse-server:latest
    container_name: sentry-clickhouse
    volumes:
      - clickhouse-data:/var/lib/clickhouse
    networks:
      - sentry-network
    restart: unless-stopped

  snuba:
    image: getsentry/snuba:latest
    container_name: sentry-snuba
    environment:
      - SNUBA_SETTINGS=docker
      - CLICKHOUSE_HOST=clickhouse
      - KAFKA_BOOTSTRAP_SERVERS=kafka:9092
    depends_on:
      - clickhouse
      - kafka
    networks:
      - sentry-network
    restart: unless-stopped

volumes:
  sentry-data:
  postgres-data:
  redis-data:
  kafka-data:
  zookeeper-data:
  clickhouse-data:

networks:
  sentry-network:
    driver: bridge
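
Every image in this recipe uses the latest tag, which keeps the example short but makes upgrades unpredictable, since the Sentry, Snuba, Kafka, and ClickHouse versions must stay compatible with one another. One option is to pin tags in a docker-compose.override.yml; the version numbers below are placeholders, not a tested combination, so check Docker Hub for tags that match each other:

terminal
# Example override pinning image tags (placeholder versions - verify compatibility first)
cat > docker-compose.override.yml << 'EOF'
services:
  sentry-web:
    image: getsentry/sentry:24.1.0
  sentry-worker:
    image: getsentry/sentry:24.1.0
  sentry-cron:
    image: getsentry/sentry:24.1.0
  snuba:
    image: getsentry/snuba:24.1.0
  clickhouse:
    image: clickhouse/clickhouse-server:23.8
EOF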

.env Template

.env
# Sentry
DB_PASSWORD=secure_sentry_password

# Generate with: python -c "from django.core.management.utils import get_random_secret_key; print(get_random_secret_key())"
SECRET_KEY=your_sentry_secret_key
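
The placeholder values above should never reach a real deployment. The python command shown in the template works if Django is installed locally; random values from openssl are another option. A minimal sketch that generates both secrets and writes the .env file:

terminal
# Generate credentials and overwrite the placeholder .env
SECRET_KEY=$(openssl rand -base64 48 | tr -d '\n')
DB_PASSWORD=$(openssl rand -base64 24 | tr -d '\n')
cat > .env << EOF
# Sentry
DB_PASSWORD=${DB_PASSWORD}

SECRET_KEY=${SECRET_KEY}
EOF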

Usage Notes

  1. Sentry UI is available at http://localhost:9000
  2. Run database migrations first: sentry upgrade (see the commands after this list)
  3. Create an admin account: sentry createuser
  4. SDKs are available for all major languages
  5. Consider the official self-hosted install script for production-grade deployments
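
Concretely, notes 2-4 translate to something like the following once the containers are up. The admin e-mail, password, and DSN are placeholders; the real DSN comes from the project settings in the UI after you create a project:

terminal
# Run database migrations (first start only, and again after image upgrades)
docker compose exec sentry-web sentry upgrade --noinput
# If sentry-web is restart-looping before its first migration, use a one-off container instead:
# docker compose run --rm sentry-web upgrade --noinput

# Create the initial admin account (placeholder credentials)
docker compose exec sentry-web sentry createuser \
  --email admin@example.com --password 'change-me' --superuser

# Optional: send a test event with sentry-cli once a project and DSN exist
export SENTRY_DSN="http://<public_key>@localhost:9000/<project_id>"
sentry-cli send-event -m "Hello from self-hosted Sentry"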

Individual Services (9 services)

Copy individual services to mix and match with your existing compose files.

sentry-web
sentry-web:
  image: getsentry/sentry:latest
  container_name: sentry-web
  environment:
    - SENTRY_SECRET_KEY=${SECRET_KEY}
    - SENTRY_POSTGRES_HOST=postgres
    - SENTRY_POSTGRES_PORT=5432
    - SENTRY_DB_NAME=sentry
    - SENTRY_DB_USER=sentry
    - SENTRY_DB_PASSWORD=${DB_PASSWORD}
    - SENTRY_REDIS_HOST=redis
    - SNUBA=http://snuba:1218
  volumes:
    - sentry-data:/data
  ports:
    - "9000:9000"
  depends_on:
    - postgres
    - redis
    - kafka
    - snuba
  networks:
    - sentry-network
  restart: unless-stopped
sentry-worker
sentry-worker:
  image: getsentry/sentry:latest
  container_name: sentry-worker
  command: run worker
  environment:
    - SENTRY_SECRET_KEY=${SECRET_KEY}
    - SENTRY_POSTGRES_HOST=postgres
    - SENTRY_DB_NAME=sentry
    - SENTRY_DB_USER=sentry
    - SENTRY_DB_PASSWORD=${DB_PASSWORD}
    - SENTRY_REDIS_HOST=redis
  volumes:
    - sentry-data:/data
  depends_on:
    - sentry-web
  networks:
    - sentry-network
  restart: unless-stopped
sentry-cron
sentry-cron:
  image: getsentry/sentry:latest
  container_name: sentry-cron
  command: run cron
  environment:
    - SENTRY_SECRET_KEY=${SECRET_KEY}
    - SENTRY_POSTGRES_HOST=postgres
    - SENTRY_DB_NAME=sentry
    - SENTRY_DB_USER=sentry
    - SENTRY_DB_PASSWORD=${DB_PASSWORD}
    - SENTRY_REDIS_HOST=redis
  volumes:
    - sentry-data:/data
  depends_on:
    - sentry-web
  networks:
    - sentry-network
  restart: unless-stopped
postgres
postgres:
  image: postgres:15-alpine
  container_name: sentry-postgres
  environment:
    - POSTGRES_USER=sentry
    - POSTGRES_PASSWORD=${DB_PASSWORD}
    - POSTGRES_DB=sentry
  volumes:
    - postgres-data:/var/lib/postgresql/data
  networks:
    - sentry-network
  restart: unless-stopped
redis
redis:
  image: redis:7-alpine
  container_name: sentry-redis
  volumes:
    - redis-data:/data
  networks:
    - sentry-network
  restart: unless-stopped
kafka
kafka:
  image: confluentinc/cp-kafka:latest
  container_name: sentry-kafka
  environment:
    - KAFKA_ZOOKEEPER_CONNECT=zookeeper:2181
    - KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://kafka:9092
    - KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR=1
  volumes:
    - kafka-data:/var/lib/kafka/data
  depends_on:
    - zookeeper
  networks:
    - sentry-network
  restart: unless-stopped
zookeeper
zookeeper:
  image: confluentinc/cp-zookeeper:latest
  container_name: sentry-zookeeper
  environment:
    - ZOOKEEPER_CLIENT_PORT=2181
  volumes:
    - zookeeper-data:/var/lib/zookeeper/data
  networks:
    - sentry-network
  restart: unless-stopped
clickhouse
clickhouse:
  image: clickhouse/clickhouse-server:latest
  container_name: sentry-clickhouse
  volumes:
    - clickhouse-data:/var/lib/clickhouse
  networks:
    - sentry-network
  restart: unless-stopped
snuba
snuba:
  image: getsentry/snuba:latest
  container_name: sentry-snuba
  environment:
    - SNUBA_SETTINGS=docker
    - CLICKHOUSE_HOST=clickhouse
    - KAFKA_BOOTSTRAP_SERVERS=kafka:9092
  depends_on:
    - clickhouse
    - kafka
  networks:
    - sentry-network
  restart: unless-stopped

Quick Start

terminal
# 1. Create the compose file
cat > docker-compose.yml << 'EOF'
services:
  sentry-web:
    image: getsentry/sentry:latest
    container_name: sentry-web
    environment:
      - SENTRY_SECRET_KEY=${SECRET_KEY}
      - SENTRY_POSTGRES_HOST=postgres
      - SENTRY_POSTGRES_PORT=5432
      - SENTRY_DB_NAME=sentry
      - SENTRY_DB_USER=sentry
      - SENTRY_DB_PASSWORD=${DB_PASSWORD}
      - SENTRY_REDIS_HOST=redis
      - SNUBA=http://snuba:1218
    volumes:
      - sentry-data:/data
    ports:
      - "9000:9000"
    depends_on:
      - postgres
      - redis
      - kafka
      - snuba
    networks:
      - sentry-network
    restart: unless-stopped

  sentry-worker:
    image: getsentry/sentry:latest
    container_name: sentry-worker
    command: run worker
    environment:
      - SENTRY_SECRET_KEY=${SECRET_KEY}
      - SENTRY_POSTGRES_HOST=postgres
      - SENTRY_DB_NAME=sentry
      - SENTRY_DB_USER=sentry
      - SENTRY_DB_PASSWORD=${DB_PASSWORD}
      - SENTRY_REDIS_HOST=redis
    volumes:
      - sentry-data:/data
    depends_on:
      - sentry-web
    networks:
      - sentry-network
    restart: unless-stopped

  sentry-cron:
    image: getsentry/sentry:latest
    container_name: sentry-cron
    command: run cron
    environment:
      - SENTRY_SECRET_KEY=${SECRET_KEY}
      - SENTRY_POSTGRES_HOST=postgres
      - SENTRY_DB_NAME=sentry
      - SENTRY_DB_USER=sentry
      - SENTRY_DB_PASSWORD=${DB_PASSWORD}
      - SENTRY_REDIS_HOST=redis
    volumes:
      - sentry-data:/data
    depends_on:
      - sentry-web
    networks:
      - sentry-network
    restart: unless-stopped

  postgres:
    image: postgres:15-alpine
    container_name: sentry-postgres
    environment:
      - POSTGRES_USER=sentry
      - POSTGRES_PASSWORD=${DB_PASSWORD}
      - POSTGRES_DB=sentry
    volumes:
      - postgres-data:/var/lib/postgresql/data
    networks:
      - sentry-network
    restart: unless-stopped

  redis:
    image: redis:7-alpine
    container_name: sentry-redis
    volumes:
      - redis-data:/data
    networks:
      - sentry-network
    restart: unless-stopped

  kafka:
    image: confluentinc/cp-kafka:latest
    container_name: sentry-kafka
    environment:
      - KAFKA_ZOOKEEPER_CONNECT=zookeeper:2181
      - KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://kafka:9092
      - KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR=1
    volumes:
      - kafka-data:/var/lib/kafka/data
    depends_on:
      - zookeeper
    networks:
      - sentry-network
    restart: unless-stopped

  zookeeper:
    image: confluentinc/cp-zookeeper:latest
    container_name: sentry-zookeeper
    environment:
      - ZOOKEEPER_CLIENT_PORT=2181
    volumes:
      - zookeeper-data:/var/lib/zookeeper/data
    networks:
      - sentry-network
    restart: unless-stopped

  clickhouse:
    image: clickhouse/clickhouse-server:latest
    container_name: sentry-clickhouse
    volumes:
      - clickhouse-data:/var/lib/clickhouse
    networks:
      - sentry-network
    restart: unless-stopped

  snuba:
    image: getsentry/snuba:latest
    container_name: sentry-snuba
    environment:
      - SNUBA_SETTINGS=docker
      - CLICKHOUSE_HOST=clickhouse
      - KAFKA_BOOTSTRAP_SERVERS=kafka:9092
    depends_on:
      - clickhouse
      - kafka
    networks:
      - sentry-network
    restart: unless-stopped

volumes:
  sentry-data:
  postgres-data:
  redis-data:
  kafka-data:
  zookeeper-data:
  clickhouse-data:

networks:
  sentry-network:
    driver: bridge
EOF

# 2. Create the .env file
cat > .env << 'EOF'
# Sentry
DB_PASSWORD=secure_sentry_password

# Generate with: python -c "from django.core.management.utils import get_random_secret_key; print(get_random_secret_key())"
SECRET_KEY=your_sentry_secret_key
EOF

# 3. Start the services
docker compose up -d

# 4. View logs
docker compose logs -f

One-Liner

Run this command to download and set up the recipe in one step:

terminal
curl -fsSL https://docker.recipes/api/recipes/sentry-complete/run | bash

Troubleshooting

  • Sentry web interface shows 'Internal Server Error': Check that database migrations have been run with 'docker exec sentry-web sentry upgrade' command
  • Events not appearing in Sentry dashboard: Verify Snuba service is running and ClickHouse connection is established by checking snuba container logs
  • High memory usage on ClickHouse container: Increase ClickHouse max_memory_usage setting or implement data retention policies to manage storage growth
  • Kafka consumer lag warnings: Scale up sentry-worker instances or tune Kafka partition configuration for better throughput
  • Redis connection errors in Sentry logs: Ensure Redis container has sufficient memory and check for Redis maxmemory-policy configuration
  • Slow query performance in Sentry: Optimize ClickHouse table schemas and consider adjusting SNUBA_SETTINGS for your data volume (a few diagnostic commands are sketched below)
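
Several of the checks above, spelled out as commands. Table and consumer-group names vary by Sentry version, so treat these as starting points rather than a fixed playbook:

terminal
# Did migrations run? (first item above)
docker exec sentry-web sentry upgrade --noinput

# Snuba / ClickHouse pipeline: look for connection errors
docker compose logs --tail=100 snuba clickhouse

# ClickHouse storage footprint (one input into memory/retention decisions)
docker exec sentry-clickhouse clickhouse-client --query \
  "SELECT formatReadableSize(sum(bytes_on_disk)) FROM system.parts"

# Kafka consumer lag across all groups
docker exec sentry-kafka kafka-consumer-groups --bootstrap-server kafka:9092 --describe --all-groups

# Redis memory usage and eviction policy
docker exec sentry-redis redis-cli info memory | grep -E 'used_memory_human|maxmemory_policy'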


Components

sentry-web, sentry-worker, sentry-cron, postgresql, redis, kafka, clickhouse, snuba

Tags

#error-tracking #sentry #monitoring #apm #debugging

Category

Monitoring & Observability