Sentry Error Tracking
Self-hosted error tracking and performance monitoring platform.
Overview
Sentry is an open-source application monitoring platform that provides real-time error tracking, performance monitoring, and release health insights for applications across many programming languages. Created in 2008 and incubated at Disqus before spinning off as its own company, Sentry has become the de facto standard for error tracking in modern software development, offering both cloud-hosted and self-hosted options that help developers identify, diagnose, and resolve issues before they impact users.
This Docker stack combines Sentry's web interface, worker processes, and cron scheduler with a backend of PostgreSQL for metadata storage, Redis for caching and session management, Apache Kafka for event streaming, ClickHouse for high-performance analytics, and Snuba as the query layer that bridges Sentry's data needs with ClickHouse's analytical capabilities. The architecture is designed for high-throughput error ingestion and fast query performance in debugging workflows, making it suitable for organizations processing millions of events per day.
This self-hosted deployment is ideal for teams that need data sovereignty, custom integrations, or cost optimization compared with Sentry's cloud offering, while retaining the full feature set: error grouping, performance monitoring, release tracking, and team collaboration tools.
Key Features
- Real-time error tracking with intelligent grouping and deduplication across multiple programming languages
- Performance monitoring with transaction traces, database query analysis, and web vitals tracking
- Release health monitoring with crash-free session rates and adoption tracking
- Source map support for JavaScript applications enabling readable stack traces in production
- Contextual breadcrumbs that capture user actions and system events leading up to errors
- Flexible alerting system with integrations for Slack, PagerDuty, Jira, and custom webhooks
- Session replay functionality for reproducing user sessions that encountered errors
- High-throughput event processing using Kafka for reliable message queuing and ClickHouse for analytical storage
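Breadcrumbs are a bounded trail of recent events (clicks, queries, log lines) that the SDK attaches to an error report. A minimal pure-Python sketch of the idea, not the SDK's actual internals — the `BreadcrumbTrail` class and its field names are illustrative only:

```python
from collections import deque
from datetime import datetime, timezone

class BreadcrumbTrail:
    """Bounded ring buffer of recent events, attached to outgoing error reports."""

    def __init__(self, max_breadcrumbs: int = 100):
        # deque with maxlen silently drops the oldest entry when full
        self._crumbs = deque(maxlen=max_breadcrumbs)

    def record(self, category: str, message: str, level: str = "info") -> None:
        self._crumbs.append({
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "category": category,
            "message": message,
            "level": level,
        })

    def snapshot(self) -> list:
        """Return the trail in order, ready to attach to an error event."""
        return list(self._crumbs)

trail = BreadcrumbTrail(max_breadcrumbs=3)
for i in range(5):
    trail.record("ui.click", f"clicked button {i}")
print(len(trail.snapshot()))  # prints 3: only the most recent entries survive
```

The bounded buffer is the key design point: the trail has constant memory cost per scope, so recording breadcrumbs is cheap even when no error ever occurs.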
Common Use Cases
- Enterprise organizations requiring self-hosted error tracking for compliance and data sovereignty
- Development teams processing high volumes of errors and performance data (>1M events/day)
- Companies wanting to avoid per-event pricing of cloud-hosted solutions
- Organizations with custom integration requirements for existing monitoring infrastructure
- Startups and scale-ups needing comprehensive debugging capabilities without vendor lock-in
- DevOps teams implementing observability strategies across microservices architectures
- Software companies providing error tracking as part of their platform offerings
Prerequisites
- Minimum 8GB RAM for full stack deployment (Sentry requires 4GB+, ClickHouse needs 2GB+, Kafka 1GB+)
- 50GB+ available disk space for event storage and database growth
- Port 9000 available for Sentry web interface access
- Understanding of Sentry project configuration and SDK integration
- Familiarity with PostgreSQL database administration for backup and maintenance
- Basic knowledge of Kafka and ClickHouse for troubleshooting data pipeline issues
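SDK integration starts with a DSN, which packs the ingest endpoint, public key, and project ID into one URL. A stdlib sketch of how those pieces split apart — the DSN value below is hypothetical, using the port this stack publishes:

```python
from urllib.parse import urlparse

def parse_dsn(dsn: str) -> dict:
    """Split a Sentry-style DSN of the form
    {scheme}://{public_key}@{host}:{port}/{project_id}
    into the pieces an SDK needs to route events."""
    parts = urlparse(dsn)
    return {
        "public_key": parts.username,
        "host": parts.hostname,
        # fall back to the scheme's default port when none is given
        "port": parts.port or (443 if parts.scheme == "https" else 80),
        "project_id": parts.path.lstrip("/"),
    }

# Hypothetical DSN for project 1 on this self-hosted instance:
info = parse_dsn("http://examplepublickey@localhost:9000/1")
print(info["project_id"])  # prints 1
```

In practice you copy the DSN from the project settings page and pass it to the SDK's init call; this sketch only shows what the string encodes.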
For development and testing only. Review security settings, change default credentials, and test thoroughly before production use.
docker-compose.yml
docker-compose.yml
services:
  sentry-web:
    image: getsentry/sentry:latest
    container_name: sentry-web
    environment:
      - SENTRY_SECRET_KEY=${SECRET_KEY}
      - SENTRY_POSTGRES_HOST=postgres
      - SENTRY_POSTGRES_PORT=5432
      - SENTRY_DB_NAME=sentry
      - SENTRY_DB_USER=sentry
      - SENTRY_DB_PASSWORD=${DB_PASSWORD}
      - SENTRY_REDIS_HOST=redis
      - SNUBA=http://snuba:1218
    volumes:
      - sentry-data:/data
    ports:
      - "9000:9000"
    depends_on:
      - postgres
      - redis
      - kafka
      - snuba
    networks:
      - sentry-network
    restart: unless-stopped

  sentry-worker:
    image: getsentry/sentry:latest
    container_name: sentry-worker
    command: run worker
    environment:
      - SENTRY_SECRET_KEY=${SECRET_KEY}
      - SENTRY_POSTGRES_HOST=postgres
      - SENTRY_DB_NAME=sentry
      - SENTRY_DB_USER=sentry
      - SENTRY_DB_PASSWORD=${DB_PASSWORD}
      - SENTRY_REDIS_HOST=redis
    volumes:
      - sentry-data:/data
    depends_on:
      - sentry-web
    networks:
      - sentry-network
    restart: unless-stopped

  sentry-cron:
    image: getsentry/sentry:latest
    container_name: sentry-cron
    command: run cron
    environment:
      - SENTRY_SECRET_KEY=${SECRET_KEY}
      - SENTRY_POSTGRES_HOST=postgres
      - SENTRY_DB_NAME=sentry
      - SENTRY_DB_USER=sentry
      - SENTRY_DB_PASSWORD=${DB_PASSWORD}
      - SENTRY_REDIS_HOST=redis
    volumes:
      - sentry-data:/data
    depends_on:
      - sentry-web
    networks:
      - sentry-network
    restart: unless-stopped

  postgres:
    image: postgres:15-alpine
    container_name: sentry-postgres
    environment:
      - POSTGRES_USER=sentry
      - POSTGRES_PASSWORD=${DB_PASSWORD}
      - POSTGRES_DB=sentry
    volumes:
      - postgres-data:/var/lib/postgresql/data
    networks:
      - sentry-network
    restart: unless-stopped

  redis:
    image: redis:7-alpine
    container_name: sentry-redis
    volumes:
      - redis-data:/data
    networks:
      - sentry-network
    restart: unless-stopped

  kafka:
    image: confluentinc/cp-kafka:latest
    container_name: sentry-kafka
    environment:
      - KAFKA_ZOOKEEPER_CONNECT=zookeeper:2181
      - KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://kafka:9092
      - KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR=1
    volumes:
      - kafka-data:/var/lib/kafka/data
    depends_on:
      - zookeeper
    networks:
      - sentry-network
    restart: unless-stopped

  zookeeper:
    image: confluentinc/cp-zookeeper:latest
    container_name: sentry-zookeeper
    environment:
      - ZOOKEEPER_CLIENT_PORT=2181
    volumes:
      - zookeeper-data:/var/lib/zookeeper/data
    networks:
      - sentry-network
    restart: unless-stopped

  clickhouse:
    image: clickhouse/clickhouse-server:latest
    container_name: sentry-clickhouse
    volumes:
      - clickhouse-data:/var/lib/clickhouse
    networks:
      - sentry-network
    restart: unless-stopped

  snuba:
    image: getsentry/snuba:latest
    container_name: sentry-snuba
    environment:
      - SNUBA_SETTINGS=docker
      - CLICKHOUSE_HOST=clickhouse
      - KAFKA_BOOTSTRAP_SERVERS=kafka:9092
    depends_on:
      - clickhouse
      - kafka
    networks:
      - sentry-network
    restart: unless-stopped

volumes:
  sentry-data:
  postgres-data:
  redis-data:
  kafka-data:
  zookeeper-data:
  clickhouse-data:

networks:
  sentry-network:
    driver: bridge
.env Template
.env
# Sentry
DB_PASSWORD=secure_sentry_password

# Generate with: python -c "from django.core.management.utils import get_random_secret_key; print(get_random_secret_key())"
SECRET_KEY=your_sentry_secret_key
Usage Notes
- Sentry UI at http://localhost:9000
- Run migrations first: sentry upgrade
- Create an admin user: sentry createuser
- SDKs are available for all major languages
- Consider using Sentry's official install script for production deployments
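The .env template above suggests Django's helper for generating SECRET_KEY. If Django isn't installed locally, Python's standard library works just as well — this produces a URL-safe random string suitable for the variable:

```python
import secrets

# 50 bytes of randomness, base64url-encoded (~67 characters, no padding)
secret_key = secrets.token_urlsafe(50)
print(secret_key)
```

Paste the printed value into `.env` as `SECRET_KEY=...`; the key must stay stable across restarts, since Sentry uses it to sign sessions and tokens.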
Individual Services (9 services)
Copy individual services to mix and match with your existing compose files.
sentry-web
sentry-web:
  image: getsentry/sentry:latest
  container_name: sentry-web
  environment:
    - SENTRY_SECRET_KEY=${SECRET_KEY}
    - SENTRY_POSTGRES_HOST=postgres
    - SENTRY_POSTGRES_PORT=5432
    - SENTRY_DB_NAME=sentry
    - SENTRY_DB_USER=sentry
    - SENTRY_DB_PASSWORD=${DB_PASSWORD}
    - SENTRY_REDIS_HOST=redis
    - SNUBA=http://snuba:1218
  volumes:
    - sentry-data:/data
  ports:
    - "9000:9000"
  depends_on:
    - postgres
    - redis
    - kafka
    - snuba
  networks:
    - sentry-network
  restart: unless-stopped
sentry-worker
sentry-worker:
  image: getsentry/sentry:latest
  container_name: sentry-worker
  command: run worker
  environment:
    - SENTRY_SECRET_KEY=${SECRET_KEY}
    - SENTRY_POSTGRES_HOST=postgres
    - SENTRY_DB_NAME=sentry
    - SENTRY_DB_USER=sentry
    - SENTRY_DB_PASSWORD=${DB_PASSWORD}
    - SENTRY_REDIS_HOST=redis
  volumes:
    - sentry-data:/data
  depends_on:
    - sentry-web
  networks:
    - sentry-network
  restart: unless-stopped
sentry-cron
sentry-cron:
  image: getsentry/sentry:latest
  container_name: sentry-cron
  command: run cron
  environment:
    - SENTRY_SECRET_KEY=${SECRET_KEY}
    - SENTRY_POSTGRES_HOST=postgres
    - SENTRY_DB_NAME=sentry
    - SENTRY_DB_USER=sentry
    - SENTRY_DB_PASSWORD=${DB_PASSWORD}
    - SENTRY_REDIS_HOST=redis
  volumes:
    - sentry-data:/data
  depends_on:
    - sentry-web
  networks:
    - sentry-network
  restart: unless-stopped
postgres
postgres:
  image: postgres:15-alpine
  container_name: sentry-postgres
  environment:
    - POSTGRES_USER=sentry
    - POSTGRES_PASSWORD=${DB_PASSWORD}
    - POSTGRES_DB=sentry
  volumes:
    - postgres-data:/var/lib/postgresql/data
  networks:
    - sentry-network
  restart: unless-stopped
redis
redis:
  image: redis:7-alpine
  container_name: sentry-redis
  volumes:
    - redis-data:/data
  networks:
    - sentry-network
  restart: unless-stopped
kafka
kafka:
  image: confluentinc/cp-kafka:latest
  container_name: sentry-kafka
  environment:
    - KAFKA_ZOOKEEPER_CONNECT=zookeeper:2181
    - KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://kafka:9092
    - KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR=1
  volumes:
    - kafka-data:/var/lib/kafka/data
  depends_on:
    - zookeeper
  networks:
    - sentry-network
  restart: unless-stopped
zookeeper
zookeeper:
  image: confluentinc/cp-zookeeper:latest
  container_name: sentry-zookeeper
  environment:
    - ZOOKEEPER_CLIENT_PORT=2181
  volumes:
    - zookeeper-data:/var/lib/zookeeper/data
  networks:
    - sentry-network
  restart: unless-stopped
clickhouse
clickhouse:
  image: clickhouse/clickhouse-server:latest
  container_name: sentry-clickhouse
  volumes:
    - clickhouse-data:/var/lib/clickhouse
  networks:
    - sentry-network
  restart: unless-stopped
snuba
snuba:
  image: getsentry/snuba:latest
  container_name: sentry-snuba
  environment:
    - SNUBA_SETTINGS=docker
    - CLICKHOUSE_HOST=clickhouse
    - KAFKA_BOOTSTRAP_SERVERS=kafka:9092
  depends_on:
    - clickhouse
    - kafka
  networks:
    - sentry-network
  restart: unless-stopped
Quick Start
terminal
# 1. Create docker-compose.yml and .env using the file contents shown above

# 2. Start the services
docker compose up -d

# 3. Run initial migrations (first start only)
docker exec sentry-web sentry upgrade

# 4. View logs
docker compose logs -f
One-Liner
Run this command to download and set up the recipe in one step:
terminal
curl -fsSL https://docker.recipes/api/recipes/sentry-complete/run | bash
Troubleshooting
- Sentry web interface shows 'Internal Server Error': Check that database migrations have been run with 'docker exec sentry-web sentry upgrade' command
- Events not appearing in Sentry dashboard: Verify Snuba service is running and ClickHouse connection is established by checking snuba container logs
- High memory usage on ClickHouse container: Increase ClickHouse max_memory_usage setting or implement data retention policies to manage storage growth
- Kafka consumer lag warnings: Scale up sentry-worker instances or tune Kafka partition configuration for better throughput
- Redis connection errors in Sentry logs: Ensure Redis container has sufficient memory and check for Redis maxmemory-policy configuration
- Slow query performance in Sentry: Optimize ClickHouse table schemas and consider adjusting SNUBA_SETTINGS for your data volume
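When triaging the issues above, a quick liveness probe against each HTTP-facing service narrows things down before digging into logs. A stdlib sketch — the port comes from this compose file, and the `/_health/` path is an assumption based on the endpoint Sentry's self-hosted health checks use:

```python
from urllib.request import urlopen
from urllib.error import URLError

def service_ok(url: str, timeout: float = 3.0) -> bool:
    """Return True if the endpoint answers with an HTTP 2xx status."""
    try:
        with urlopen(url, timeout=timeout) as resp:
            return 200 <= resp.status < 300
    except (URLError, OSError):
        # connection refused, DNS failure, or timeout all count as "down"
        return False

# After startup, the web UI should answer on the published port:
print(service_ok("http://localhost:9000/_health/"))
```

If the probe fails while the container is running, check `docker compose logs sentry-web` for migration or Redis connection errors before suspecting the network.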
Components
sentry-web, sentry-worker, sentry-cron, postgresql, redis, kafka, clickhouse, snuba
Tags
#error-tracking #sentry #monitoring #apm #debugging
Category
Monitoring & Observability