TiDB Distributed Database Cluster
MySQL-compatible distributed database with TiKV storage and PD cluster management.
Overview
TiDB is a distributed, MySQL-compatible database developed by PingCAP that implements the NewSQL paradigm, combining ACID transactions with horizontal scalability. This open-source database separates compute from storage, using TiKV (a distributed key-value store) for data persistence and Placement Driver (PD) for cluster coordination and metadata management. TiDB was designed to handle datasets that exceed single-machine capacity while maintaining strong consistency, and its architecture lets clusters scale from gigabytes to petabytes while supporting real-time analytics through the optional TiFlash columnar storage engine.
In this stack, the TiDB server handles SQL processing and client connections, three TiKV nodes provide distributed transactional storage replicated via Raft consensus, and PD manages cluster topology, load balancing, and timestamp allocation for distributed transactions. Prometheus collects metrics from all TiDB components, while Grafana provides dashboards for monitoring cluster health, query performance, and storage utilization.
Organizations running high-traffic applications that need both transactional consistency and analytical capability benefit from this stack, particularly those hitting MySQL scaling limits or running real-time HTAP workloads. The combination delivers distributed database capabilities with comprehensive observability, making it a fit for financial services, e-commerce platforms, and SaaS applications that require consistent performance at scale.
Key Features
- MySQL protocol compatibility enabling existing applications to connect without code changes
- Horizontal scaling with automatic data sharding across TiKV nodes using Region-based distribution
- Strong consistency with distributed ACID transactions using Two-Phase Commit protocol
- Real-time analytics through TiFlash columnar storage engine for HTAP workloads
- Automatic failover and self-healing with Raft consensus algorithm in TiKV clusters
- Multi-version concurrency control (MVCC) for non-blocking reads and snapshot isolation
- Online DDL operations without downtime using TiDB's distributed schema change algorithm
- Comprehensive monitoring with 200+ metrics exported to Prometheus and pre-built Grafana dashboards
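The MySQL-compatibility claim is easy to verify once the stack below is running: any stock MySQL client can connect on port 4000. A sketch (in this setup the default root user has an empty password, which is one of the credentials to change before production use):

```shell
# Connect interactively with a standard MySQL client
mysql -h 127.0.0.1 -P 4000 -u root

# Or run a one-off statement; tidb_version() is TiDB-specific,
# while VERSION() returns a MySQL-compatible version string
mysql -h 127.0.0.1 -P 4000 -u root -e "SELECT tidb_version();"
```

Existing MySQL drivers (JDBC, Go's database/sql, etc.) connect the same way, pointed at port 4000 instead of 3306.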
Common Use Cases
- E-commerce platforms requiring real-time inventory management with analytics on transaction patterns
- Financial services needing ACID compliance for payments with real-time fraud detection analytics
- Gaming companies tracking player events and running live leaderboards with high write throughput
- SaaS applications experiencing MySQL scaling bottlenecks requiring transparent horizontal scaling
- IoT platforms ingesting sensor data while running real-time analytics on device performance
- Multi-tenant applications needing predictable performance isolation across customer workloads
- Data warehouse modernization projects requiring both transactional and analytical workloads
Prerequisites
- Minimum 8GB RAM across all nodes (2GB for TiDB, 4GB for TiKV nodes, 1GB each for PD/monitoring)
- Understanding of distributed systems concepts including Raft consensus and data sharding
- Familiarity with MySQL query syntax and database administration concepts
- Network configuration knowledge for multi-node cluster communication and port management
- Basic Prometheus and Grafana experience for monitoring distributed database metrics
- Docker environment with at least 20GB available storage for persistent data volumes
For development & testing only. Review security settings, change default credentials, and test thoroughly before production use.
docker-compose.yml
services:
  pd0:
    image: pingcap/pd:latest
    ports:
      - "2379:2379"
    command:
      - --name=pd0
      - --client-urls=http://0.0.0.0:2379
      - --peer-urls=http://0.0.0.0:2380
      - --advertise-client-urls=http://pd0:2379
      - --advertise-peer-urls=http://pd0:2380
      - --initial-cluster=pd0=http://pd0:2380
      - --data-dir=/data
    volumes:
      - pd0_data:/data
    networks:
      - tidb_net

  tikv0:
    image: pingcap/tikv:latest
    command:
      - --addr=0.0.0.0:20160
      - --advertise-addr=tikv0:20160
      - --data-dir=/data
      - --pd=pd0:2379
    volumes:
      - tikv0_data:/data
    depends_on:
      - pd0
    networks:
      - tidb_net

  tikv1:
    image: pingcap/tikv:latest
    command:
      - --addr=0.0.0.0:20160
      - --advertise-addr=tikv1:20160
      - --data-dir=/data
      - --pd=pd0:2379
    volumes:
      - tikv1_data:/data
    depends_on:
      - pd0
    networks:
      - tidb_net

  tikv2:
    image: pingcap/tikv:latest
    command:
      - --addr=0.0.0.0:20160
      - --advertise-addr=tikv2:20160
      - --data-dir=/data
      - --pd=pd0:2379
    volumes:
      - tikv2_data:/data
    depends_on:
      - pd0
    networks:
      - tidb_net

  tidb:
    image: pingcap/tidb:latest
    ports:
      - "4000:4000"
      - "10080:10080"
    command:
      - --store=tikv
      - --path=pd0:2379
    depends_on:
      - tikv0
      - tikv1
      - tikv2
    networks:
      - tidb_net

  prometheus:
    image: prom/prometheus:latest
    ports:
      - "9090:9090"
    volumes:
      - ./prometheus.yml:/etc/prometheus/prometheus.yml
      - prometheus_data:/prometheus
    networks:
      - tidb_net

  grafana:
    image: grafana/grafana:latest
    ports:
      - "3000:3000"
    environment:
      - GF_SECURITY_ADMIN_PASSWORD=${GRAFANA_PASSWORD}
    volumes:
      - grafana_data:/var/lib/grafana
    networks:
      - tidb_net

volumes:
  pd0_data:
  tikv0_data:
  tikv1_data:
  tikv2_data:
  prometheus_data:
  grafana_data:

networks:
  tidb_net:
.env Template
.env
# TiDB Cluster
GRAFANA_PASSWORD=secure_grafana_password

# MySQL-compatible at localhost:4000
# Status at http://localhost:10080
Usage Notes
- MySQL-compatible connection at port 4000
- Use standard MySQL clients
- TiKV provides distributed storage
- PD manages cluster metadata
- Horizontal scaling supported
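The notes above can be verified from the host after `docker compose up -d` completes, using the HTTP status endpoints this compose file publishes (a sketch; both endpoints are the component defaults):

```shell
# TiDB status endpoint: reports connection count, version, and git hash
curl -s http://localhost:10080/status

# PD's HTTP API lists registered TiKV stores; with this compose file
# you should eventually see three stores in the "Up" state
curl -s http://localhost:2379/pd/api/v1/stores
```

If the stores list is empty, TiKV is still registering with PD; give the cluster a minute before connecting SQL clients.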
Individual Services (7 services)
Copy individual services to mix and match with your existing compose files.
pd0
pd0:
image: pingcap/pd:latest
ports:
- "2379:2379"
command:
- "--name=pd0"
- "--client-urls=http://0.0.0.0:2379"
- "--peer-urls=http://0.0.0.0:2380"
- "--advertise-client-urls=http://pd0:2379"
- "--advertise-peer-urls=http://pd0:2380"
- "--initial-cluster=pd0=http://pd0:2380"
- "--data-dir=/data"
volumes:
- pd0_data:/data
networks:
- tidb_net
tikv0
tikv0:
image: pingcap/tikv:latest
command:
- "--addr=0.0.0.0:20160"
- "--advertise-addr=tikv0:20160"
- "--data-dir=/data"
- "--pd=pd0:2379"
volumes:
- tikv0_data:/data
depends_on:
- pd0
networks:
- tidb_net
tikv1
tikv1:
image: pingcap/tikv:latest
command:
- "--addr=0.0.0.0:20160"
- "--advertise-addr=tikv1:20160"
- "--data-dir=/data"
- "--pd=pd0:2379"
volumes:
- tikv1_data:/data
depends_on:
- pd0
networks:
- tidb_net
tikv2
tikv2:
image: pingcap/tikv:latest
command:
- "--addr=0.0.0.0:20160"
- "--advertise-addr=tikv2:20160"
- "--data-dir=/data"
- "--pd=pd0:2379"
volumes:
- tikv2_data:/data
depends_on:
- pd0
networks:
- tidb_net
tidb
tidb:
image: pingcap/tidb:latest
ports:
- "4000:4000"
- "10080:10080"
command:
- "--store=tikv"
- "--path=pd0:2379"
depends_on:
- tikv0
- tikv1
- tikv2
networks:
- tidb_net
prometheus
prometheus:
image: prom/prometheus:latest
ports:
- "9090:9090"
volumes:
- ./prometheus.yml:/etc/prometheus/prometheus.yml
- prometheus_data:/prometheus
networks:
- tidb_net
grafana
grafana:
image: grafana/grafana:latest
ports:
- "3000:3000"
environment:
- GF_SECURITY_ADMIN_PASSWORD=${GRAFANA_PASSWORD}
volumes:
- grafana_data:/var/lib/grafana
networks:
- tidb_net
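Storage scales out by adding further TiKV services that follow the same pattern; a new store registers with PD, which then rebalances Regions onto it automatically. A sketch (the tikv3 service and volume names are ours):

```yaml
tikv3:
  image: pingcap/tikv:latest
  command:
    - --addr=0.0.0.0:20160
    - --advertise-addr=tikv3:20160
    - --data-dir=/data
    - --pd=pd0:2379
  volumes:
    - tikv3_data:/data
  depends_on:
    - pd0
  networks:
    - tidb_net
```

Remember to also declare tikv3_data under the top-level volumes key before running `docker compose up -d tikv3`.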
Quick Start
terminal
# 1. Create the compose file
cat > docker-compose.yml << 'EOF'
services:
  pd0:
    image: pingcap/pd:latest
    ports:
      - "2379:2379"
    command:
      - --name=pd0
      - --client-urls=http://0.0.0.0:2379
      - --peer-urls=http://0.0.0.0:2380
      - --advertise-client-urls=http://pd0:2379
      - --advertise-peer-urls=http://pd0:2380
      - --initial-cluster=pd0=http://pd0:2380
      - --data-dir=/data
    volumes:
      - pd0_data:/data
    networks:
      - tidb_net

  tikv0:
    image: pingcap/tikv:latest
    command:
      - --addr=0.0.0.0:20160
      - --advertise-addr=tikv0:20160
      - --data-dir=/data
      - --pd=pd0:2379
    volumes:
      - tikv0_data:/data
    depends_on:
      - pd0
    networks:
      - tidb_net

  tikv1:
    image: pingcap/tikv:latest
    command:
      - --addr=0.0.0.0:20160
      - --advertise-addr=tikv1:20160
      - --data-dir=/data
      - --pd=pd0:2379
    volumes:
      - tikv1_data:/data
    depends_on:
      - pd0
    networks:
      - tidb_net

  tikv2:
    image: pingcap/tikv:latest
    command:
      - --addr=0.0.0.0:20160
      - --advertise-addr=tikv2:20160
      - --data-dir=/data
      - --pd=pd0:2379
    volumes:
      - tikv2_data:/data
    depends_on:
      - pd0
    networks:
      - tidb_net

  tidb:
    image: pingcap/tidb:latest
    ports:
      - "4000:4000"
      - "10080:10080"
    command:
      - --store=tikv
      - --path=pd0:2379
    depends_on:
      - tikv0
      - tikv1
      - tikv2
    networks:
      - tidb_net

  prometheus:
    image: prom/prometheus:latest
    ports:
      - "9090:9090"
    volumes:
      - ./prometheus.yml:/etc/prometheus/prometheus.yml
      - prometheus_data:/prometheus
    networks:
      - tidb_net

  grafana:
    image: grafana/grafana:latest
    ports:
      - "3000:3000"
    environment:
      - GF_SECURITY_ADMIN_PASSWORD=${GRAFANA_PASSWORD}
    volumes:
      - grafana_data:/var/lib/grafana
    networks:
      - tidb_net

volumes:
  pd0_data:
  tikv0_data:
  tikv1_data:
  tikv2_data:
  prometheus_data:
  grafana_data:

networks:
  tidb_net:
EOF

# 2. Create the .env file
cat > .env << 'EOF'
# TiDB Cluster
GRAFANA_PASSWORD=secure_grafana_password

# MySQL-compatible at localhost:4000
# Status at http://localhost:10080
EOF

# 3. Start the services
docker compose up -d

# 4. View logs
docker compose logs -f
One-Liner
Run this command to download and set up the recipe in one step:
terminal
curl -fsSL https://docker.recipes/api/recipes/tidb-cluster/run | bash
Troubleshooting
- TiKV store not available error: Verify PD service is running and accessible on port 2379, check network connectivity between containers
- Region unavailable during queries: Ensure at least 3 TiKV nodes are running for proper Raft majority, check TiKV logs for disk space issues
- Connection refused on port 4000: Wait for TiDB to complete initialization after TiKV cluster is ready, check TiDB logs for PD connection status
- High query latency in Grafana dashboards: Monitor TiKV disk I/O metrics, consider adding more TiKV nodes or optimizing query patterns
- PD leader election failures: Verify system clock synchronization across containers, ensure PD data directory has proper write permissions
- Prometheus metrics missing for TiDB components: Check that components are exposing metrics on their status ports (10080 for TiDB, 20180 for TiKV)
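On the last point, note that the compose file bind-mounts ./prometheus.yml but this recipe does not include one, so Prometheus has nothing to scrape (and may fail to start) until you create it. A minimal sketch, assuming the default metrics ports (10080 for TiDB, 20180 for TiKV's status server, 2379 for PD) and our own choice of job names and scrape interval:

```yaml
global:
  scrape_interval: 15s

scrape_configs:
  - job_name: tidb
    static_configs:
      - targets: ["tidb:10080"]
  - job_name: tikv
    static_configs:
      - targets: ["tikv0:20180", "tikv1:20180", "tikv2:20180"]
  - job_name: pd
    static_configs:
      - targets: ["pd0:2379"]
```

Targets use the compose service names because Prometheus runs on the same tidb_net network as the database containers.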