docker.recipes

TiDB Distributed Database Cluster

advanced

MySQL-compatible distributed database with TiKV storage and PD cluster management.

Overview

TiDB is a distributed, MySQL-compatible database developed by PingCAP that implements the NewSQL paradigm, combining ACID transactions with horizontal scalability. It separates compute from storage, using TiKV (a distributed key-value store) for data persistence and Placement Driver (PD) for cluster coordination and metadata management. TiDB was designed to handle datasets that exceed single-machine capacity while maintaining strong consistency, and clusters can scale from gigabytes to petabytes while supporting real-time analytics through the columnar storage engine TiFlash.

The cluster operates with TiDB servers handling SQL processing and client connections, multiple TiKV nodes providing distributed transactional storage with Raft consensus, and PD managing cluster topology, load balancing, and distributed transactions. Prometheus collects metrics from all TiDB components, while Grafana provides specialized dashboards for monitoring cluster health, query performance, and storage utilization.

Organizations running high-traffic applications that require both transactional consistency and analytical capability benefit from this stack, particularly those hitting MySQL scaling limits or needing real-time HTAP workloads. The combination delivers enterprise-grade distributed database capabilities with comprehensive observability, making it well suited to financial services, e-commerce platforms, and SaaS applications that need consistent performance at scale.

Key Features

  • MySQL protocol compatibility enabling existing applications to connect without code changes
  • Horizontal scaling with automatic data sharding across TiKV nodes using Region-based distribution
  • Strong consistency with distributed ACID transactions using Two-Phase Commit protocol
  • Real-time analytics through TiFlash columnar storage engine for HTAP workloads
  • Automatic failover and self-healing with Raft consensus algorithm in TiKV clusters
  • Multi-version concurrency control (MVCC) for non-blocking reads and snapshot isolation
  • Online DDL operations without downtime using TiDB's distributed schema change algorithm
  • Comprehensive monitoring with 200+ metrics exported to Prometheus and pre-built Grafana dashboards
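A quick way to see the MySQL compatibility listed above in practice once the stack is running (a sketch; it assumes the stack's default configuration, in which TiDB's root user has an empty password — set one before exposing the port):

```shell
# Any stock MySQL client can talk to TiDB on port 4000.
mysql -h 127.0.0.1 -P 4000 -u root -e "SELECT tidb_version();"

# Ordinary MySQL DDL/DML works unchanged (demo database/table are illustrative):
mysql -h 127.0.0.1 -P 4000 -u root -e "
CREATE DATABASE IF NOT EXISTS demo;
CREATE TABLE IF NOT EXISTS demo.orders (
  id BIGINT PRIMARY KEY AUTO_INCREMENT,
  total DECIMAL(10,2)
);
SHOW CREATE TABLE demo.orders;"
```

Both commands require the cluster from this recipe to be up and reachable on localhost.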

Common Use Cases

  • E-commerce platforms requiring real-time inventory management with analytics on transaction patterns
  • Financial services needing ACID compliance for payments with real-time fraud detection analytics
  • Gaming companies tracking player events and running live leaderboards with high write throughput
  • SaaS applications experiencing MySQL scaling bottlenecks requiring transparent horizontal scaling
  • IoT platforms ingesting sensor data while running real-time analytics on device performance
  • Multi-tenant applications needing predictable performance isolation across customer workloads
  • Data warehouse modernization projects requiring both transactional and analytical workloads

Prerequisites

  • Minimum 8GB RAM across all nodes (2GB for TiDB, 4GB for TiKV nodes, 1GB each for PD/monitoring)
  • Understanding of distributed systems concepts including Raft consensus and data sharding
  • Familiarity with MySQL query syntax and database administration concepts
  • Network configuration knowledge for multi-node cluster communication and port management
  • Basic Prometheus and Grafana experience for monitoring distributed database metrics
  • Docker environment with at least 20GB available storage for persistent data volumes

For development & testing. Review security settings, change default credentials, and test thoroughly before production use.

docker-compose.yml

docker-compose.yml
services:
  pd0:
    image: pingcap/pd:latest
    ports:
      - "2379:2379"
    command:
      - --name=pd0
      - --client-urls=http://0.0.0.0:2379
      - --peer-urls=http://0.0.0.0:2380
      - --advertise-client-urls=http://pd0:2379
      - --advertise-peer-urls=http://pd0:2380
      - --initial-cluster=pd0=http://pd0:2380
      - --data-dir=/data
    volumes:
      - pd0_data:/data
    networks:
      - tidb_net

  tikv0:
    image: pingcap/tikv:latest
    command:
      - --addr=0.0.0.0:20160
      - --advertise-addr=tikv0:20160
      - --data-dir=/data
      - --pd=pd0:2379
    volumes:
      - tikv0_data:/data
    depends_on:
      - pd0
    networks:
      - tidb_net

  tikv1:
    image: pingcap/tikv:latest
    command:
      - --addr=0.0.0.0:20160
      - --advertise-addr=tikv1:20160
      - --data-dir=/data
      - --pd=pd0:2379
    volumes:
      - tikv1_data:/data
    depends_on:
      - pd0
    networks:
      - tidb_net

  tikv2:
    image: pingcap/tikv:latest
    command:
      - --addr=0.0.0.0:20160
      - --advertise-addr=tikv2:20160
      - --data-dir=/data
      - --pd=pd0:2379
    volumes:
      - tikv2_data:/data
    depends_on:
      - pd0
    networks:
      - tidb_net

  tidb:
    image: pingcap/tidb:latest
    ports:
      - "4000:4000"
      - "10080:10080"
    command:
      - --store=tikv
      - --path=pd0:2379
    depends_on:
      - tikv0
      - tikv1
      - tikv2
    networks:
      - tidb_net

  prometheus:
    image: prom/prometheus:latest
    ports:
      - "9090:9090"
    volumes:
      - ./prometheus.yml:/etc/prometheus/prometheus.yml
      - prometheus_data:/prometheus
    networks:
      - tidb_net

  grafana:
    image: grafana/grafana:latest
    ports:
      - "3000:3000"
    environment:
      - GF_SECURITY_ADMIN_PASSWORD=${GRAFANA_PASSWORD}
    volumes:
      - grafana_data:/var/lib/grafana
    networks:
      - tidb_net

volumes:
  pd0_data:
  tikv0_data:
  tikv1_data:
  tikv2_data:
  prometheus_data:
  grafana_data:

networks:
  tidb_net:
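Note that the prometheus service bind-mounts ./prometheus.yml, which this recipe does not ship; Prometheus will not start correctly without it. A minimal sketch of a scrape configuration (the targets use each component's default status port; adjust if you change ports):

```yaml
# prometheus.yml -- minimal scrape config for this stack (a sketch)
global:
  scrape_interval: 15s

scrape_configs:
  - job_name: tidb
    static_configs:
      - targets: ["tidb:10080"]     # TiDB status port
  - job_name: tikv
    static_configs:
      - targets: ["tikv0:20180", "tikv1:20180", "tikv2:20180"]  # TiKV status port
  - job_name: pd
    static_configs:
      - targets: ["pd0:2379"]       # PD serves /metrics on its client port
```

Save this next to docker-compose.yml before bringing the stack up.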

.env Template

.env
# TiDB Cluster
GRAFANA_PASSWORD=secure_grafana_password

# MySQL-compatible at localhost:4000
# Status at http://localhost:10080

Usage Notes

  1. MySQL-compatible connection at port 4000
  2. Use standard MySQL clients
  3. TiKV provides distributed storage
  4. PD manages cluster metadata
  5. Horizontal scaling supported
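The horizontal-scaling note above boils down to adding another TiKV service that points at the same PD endpoint. A sketch of a hypothetical fourth node (the names tikv3 and tikv3_data are illustrative, not part of the recipe; remember to declare the volume under volumes: as well):

```yaml
# Hypothetical extra TiKV node -- mirrors the existing tikv services
tikv3:
  image: pingcap/tikv:latest
  command:
    - --addr=0.0.0.0:20160
    - --advertise-addr=tikv3:20160
    - --data-dir=/data
    - --pd=pd0:2379
  volumes:
    - tikv3_data:/data
  depends_on:
    - pd0
  networks:
    - tidb_net
```

After `docker compose up -d tikv3`, PD registers the new store and rebalances Regions onto it automatically; clients need no changes.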

Individual Services (7 services)

Copy individual services to mix and match with your existing compose files.

pd0
pd0:
  image: pingcap/pd:latest
  ports:
    - "2379:2379"
  command:
    - "--name=pd0"
    - "--client-urls=http://0.0.0.0:2379"
    - "--peer-urls=http://0.0.0.0:2380"
    - "--advertise-client-urls=http://pd0:2379"
    - "--advertise-peer-urls=http://pd0:2380"
    - "--initial-cluster=pd0=http://pd0:2380"
    - "--data-dir=/data"
  volumes:
    - pd0_data:/data
  networks:
    - tidb_net
tikv0
tikv0:
  image: pingcap/tikv:latest
  command:
    - "--addr=0.0.0.0:20160"
    - "--advertise-addr=tikv0:20160"
    - "--data-dir=/data"
    - "--pd=pd0:2379"
  volumes:
    - tikv0_data:/data
  depends_on:
    - pd0
  networks:
    - tidb_net
tikv1
tikv1:
  image: pingcap/tikv:latest
  command:
    - "--addr=0.0.0.0:20160"
    - "--advertise-addr=tikv1:20160"
    - "--data-dir=/data"
    - "--pd=pd0:2379"
  volumes:
    - tikv1_data:/data
  depends_on:
    - pd0
  networks:
    - tidb_net
tikv2
tikv2:
  image: pingcap/tikv:latest
  command:
    - "--addr=0.0.0.0:20160"
    - "--advertise-addr=tikv2:20160"
    - "--data-dir=/data"
    - "--pd=pd0:2379"
  volumes:
    - tikv2_data:/data
  depends_on:
    - pd0
  networks:
    - tidb_net
tidb
tidb:
  image: pingcap/tidb:latest
  ports:
    - "4000:4000"
    - "10080:10080"
  command:
    - "--store=tikv"
    - "--path=pd0:2379"
  depends_on:
    - tikv0
    - tikv1
    - tikv2
  networks:
    - tidb_net
prometheus
prometheus:
  image: prom/prometheus:latest
  ports:
    - "9090:9090"
  volumes:
    - ./prometheus.yml:/etc/prometheus/prometheus.yml
    - prometheus_data:/prometheus
  networks:
    - tidb_net
grafana
grafana:
  image: grafana/grafana:latest
  ports:
    - "3000:3000"
  environment:
    - GF_SECURITY_ADMIN_PASSWORD=${GRAFANA_PASSWORD}
  volumes:
    - grafana_data:/var/lib/grafana
  networks:
    - tidb_net

Quick Start

terminal
# 1. Create the compose file
cat > docker-compose.yml << 'EOF'
services:
  pd0:
    image: pingcap/pd:latest
    ports:
      - "2379:2379"
    command:
      - --name=pd0
      - --client-urls=http://0.0.0.0:2379
      - --peer-urls=http://0.0.0.0:2380
      - --advertise-client-urls=http://pd0:2379
      - --advertise-peer-urls=http://pd0:2380
      - --initial-cluster=pd0=http://pd0:2380
      - --data-dir=/data
    volumes:
      - pd0_data:/data
    networks:
      - tidb_net

  tikv0:
    image: pingcap/tikv:latest
    command:
      - --addr=0.0.0.0:20160
      - --advertise-addr=tikv0:20160
      - --data-dir=/data
      - --pd=pd0:2379
    volumes:
      - tikv0_data:/data
    depends_on:
      - pd0
    networks:
      - tidb_net

  tikv1:
    image: pingcap/tikv:latest
    command:
      - --addr=0.0.0.0:20160
      - --advertise-addr=tikv1:20160
      - --data-dir=/data
      - --pd=pd0:2379
    volumes:
      - tikv1_data:/data
    depends_on:
      - pd0
    networks:
      - tidb_net

  tikv2:
    image: pingcap/tikv:latest
    command:
      - --addr=0.0.0.0:20160
      - --advertise-addr=tikv2:20160
      - --data-dir=/data
      - --pd=pd0:2379
    volumes:
      - tikv2_data:/data
    depends_on:
      - pd0
    networks:
      - tidb_net

  tidb:
    image: pingcap/tidb:latest
    ports:
      - "4000:4000"
      - "10080:10080"
    command:
      - --store=tikv
      - --path=pd0:2379
    depends_on:
      - tikv0
      - tikv1
      - tikv2
    networks:
      - tidb_net

  prometheus:
    image: prom/prometheus:latest
    ports:
      - "9090:9090"
    volumes:
      - ./prometheus.yml:/etc/prometheus/prometheus.yml
      - prometheus_data:/prometheus
    networks:
      - tidb_net

  grafana:
    image: grafana/grafana:latest
    ports:
      - "3000:3000"
    environment:
      - GF_SECURITY_ADMIN_PASSWORD=${GRAFANA_PASSWORD}
    volumes:
      - grafana_data:/var/lib/grafana
    networks:
      - tidb_net

volumes:
  pd0_data:
  tikv0_data:
  tikv1_data:
  tikv2_data:
  prometheus_data:
  grafana_data:

networks:
  tidb_net:
EOF

# 2. Create the .env file
cat > .env << 'EOF'
# TiDB Cluster
GRAFANA_PASSWORD=secure_grafana_password

# MySQL-compatible at localhost:4000
# Status at http://localhost:10080
EOF

# 3. Start the services
docker compose up -d

# 4. View logs
docker compose logs -f

One-Liner

Run this command to download and set up the recipe in one step:

terminal
curl -fsSL https://docker.recipes/api/recipes/tidb-cluster/run | bash

Troubleshooting

  • TiKV store not available error: Verify PD service is running and accessible on port 2379, check network connectivity between containers
  • Region unavailable during queries: Ensure at least 3 TiKV nodes are running for proper Raft majority, check TiKV logs for disk space issues
  • Connection refused on port 4000: Wait for TiDB to complete initialization after TiKV cluster is ready, check TiDB logs for PD connection status
  • High query latency in Grafana dashboards: Monitor TiKV disk I/O metrics, consider adding more TiKV nodes or optimizing query patterns
  • PD leader election failures: Verify system clock synchronization across containers, ensure PD data directory has proper write permissions
  • Prometheus metrics missing for TiDB components: Check that components are exposing metrics on their status ports (10080 for TiDB, 20180 for TiKV)
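For several of the issues above, PD's HTTP API is the fastest way to see what the cluster thinks is happening; a sketch using this recipe's default port mappings:

```shell
# Health of the PD member(s)
curl -s http://localhost:2379/pd/api/v1/health

# TiKV stores with their state (Up/Offline/Tombstone) and capacity
curl -s http://localhost:2379/pd/api/v1/stores

# TiDB server status (also confirms the status port Prometheus scrapes)
curl -s http://localhost:10080/status
```

These commands require the cluster to be running with ports 2379 and 10080 published as in the compose file.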

