TiDB Cluster
A distributed NewSQL database compatible with the MySQL protocol.
Overview
TiDB is a distributed NewSQL database designed by PingCAP that combines the scalability of NoSQL systems with the ACID guarantees and SQL compatibility of traditional relational databases. Originally developed to address the limitations of MySQL in handling massive datasets and high concurrency, TiDB implements the Raft consensus algorithm for data consistency and supports horizontal scaling across commodity hardware. The database architecture separates compute and storage layers, enabling independent scaling of processing power and storage capacity while maintaining MySQL wire protocol compatibility.
This stack deploys a complete TiDB cluster consisting of three essential components that work in concert to deliver distributed database capabilities. PD (Placement Driver) serves as the cluster metadata manager and timestamp oracle, coordinating data placement decisions and maintaining cluster topology information. TiKV acts as the distributed key-value storage engine, storing actual data across multiple nodes with automatic sharding and replication. TiDB functions as the SQL layer, parsing queries, generating execution plans, and coordinating with TiKV nodes to retrieve and manipulate data.
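The division of labor is easy to observe once the stack is running. As a quick sketch (assuming the compose file below is up and PD's client port 2379 is published on the host), PD's HTTP API reports the cluster members and the TiKV stores it places data on; the paths below are PD's v1 API endpoints, so confirm them against your PD version:
# Inspect cluster topology through PD's HTTP API on the client port.
curl -s http://127.0.0.1:2379/pd/api/v1/members   # PD members and current leader
curl -s http://127.0.0.1:2379/pd/api/v1/stores    # TiKV stores registered with PD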
Developers building applications that outgrow single-instance MySQL deployments will find this configuration particularly valuable, as it provides MySQL compatibility while enabling petabyte-scale storage and handling millions of concurrent connections. The HTAP (Hybrid Transactional/Analytical Processing) capabilities make it ideal for organizations needing real-time analytics on operational data without maintaining separate OLTP and OLAP systems. Enterprises dealing with rapid data growth, gaming companies managing player statistics, financial services requiring high availability, and SaaS platforms needing multi-tenant database solutions benefit from TiDB's horizontal scaling and strong consistency guarantees.
Key Features
- MySQL 5.7 protocol compatibility enabling drop-in replacement for existing MySQL applications
- HTAP architecture supporting both OLTP workloads and real-time analytical queries on the same dataset
- Automatic horizontal sharding with configurable replica count for fault tolerance
- Multi-version concurrency control (MVCC) with snapshot isolation for consistent reads
- Raft consensus algorithm ensuring strong consistency across distributed storage nodes
- Online DDL operations allowing schema changes without downtime or table locks
- Cross-data center replication with configurable placement rules for compliance requirements
- Built-in distributed transaction support with a two-phase commit protocol (see the sketch after this list)
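A minimal sketch of the MySQL compatibility and transaction support listed above, assuming the stack from this page is running and a mysql client is installed; the demo database and accounts table are invented for illustration:
# Ordinary MySQL 5.7 syntax runs unchanged; the default root user has no password.
mysql -h 127.0.0.1 -P 4000 -u root <<'SQL'
CREATE DATABASE IF NOT EXISTS demo;
USE demo;
CREATE TABLE IF NOT EXISTS accounts (id INT PRIMARY KEY, balance DECIMAL(10,2));
INSERT INTO accounts VALUES (1, 100.00), (2, 50.00)
  ON DUPLICATE KEY UPDATE balance = VALUES(balance);
BEGIN;  -- executed as a distributed transaction across TiKV regions
UPDATE accounts SET balance = balance - 10 WHERE id = 1;
UPDATE accounts SET balance = balance + 10 WHERE id = 2;
COMMIT;
SELECT * FROM accounts;
SQL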
Common Use Cases
- Scaling existing MySQL applications beyond single-server limitations while maintaining compatibility
- Building multi-tenant SaaS platforms requiring isolated data with consistent performance
- Real-time analytics dashboards that need fresh data without impacting transactional workloads
- Gaming backend systems managing player profiles, leaderboards, and transaction histories
- Financial applications requiring ACID compliance with high availability and disaster recovery
- E-commerce platforms handling inventory management and real-time recommendation engines
- IoT data ingestion systems processing time-series data with complex analytical queries
Prerequisites
- Minimum 8GB RAM available (roughly 2GB for TiDB, 4GB for TiKV, and 1GB for PD, plus headroom)
- Docker Engine 20.10+ with BuildKit support for multi-stage builds
- Available ports 2379 (PD client), 4000 (TiDB SQL), and 10080 (TiDB status API); a preflight check follows this list
- Basic understanding of distributed systems concepts and MySQL query syntax
- SSD storage recommended for TiKV data directory to ensure optimal write performance
- Network connectivity allowing inter-container communication on custom bridge networks
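A hedged preflight sketch for the port requirement above; it uses ss from iproute2, so substitute netstat -ltn on systems without it:
# A listener shows up in ss output as ...:PORT followed by whitespace.
for p in 2379 4000 10080; do
  if ss -ltn | grep -q ":$p "; then echo "port $p already in use"; else echo "port $p free"; fi
done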
For development & testing only. Review security settings, change default credentials, and test thoroughly before production use.
docker-compose.yml
services:
  pd:
    image: pingcap/pd:latest
    container_name: pd
    command: --name=pd --client-urls=http://0.0.0.0:2379 --peer-urls=http://0.0.0.0:2380 --data-dir=/data
    volumes:
      - pd_data:/data
    ports:
      - "2379:2379"
    networks:
      - tidb-network

  tikv:
    image: pingcap/tikv:latest
    container_name: tikv
    command: --addr=0.0.0.0:20160 --advertise-addr=tikv:20160 --pd=pd:2379 --data-dir=/data
    volumes:
      - tikv_data:/data
    depends_on:
      - pd
    networks:
      - tidb-network

  tidb:
    image: pingcap/tidb:latest
    container_name: tidb
    command: --store=tikv --path=pd:2379
    ports:
      - "4000:4000"
      - "10080:10080"
    depends_on:
      - tikv
    networks:
      - tidb-network

volumes:
  pd_data:
  tikv_data:

networks:
  tidb-network:
    driver: bridge
.env Template
.env
# TiDB default configuration
# Connect using MySQL client on port 4000
Usage Notes
- Docs: https://docs.pingcap.com/tidb/stable
- MySQL-compatible on port 4000 - use any mysql client
- Status API at http://localhost:10080/status
- Connect: mysql -h 127.0.0.1 -P 4000 -u root (smoke test below)
- HTAP: combine transactional and analytical workloads
- Horizontal scaling - add TiKV nodes for storage capacity
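A short smoke test tying the notes above together; tidb_version() is a TiDB built-in function, and the default root user has no password in this stack:
mysql -h 127.0.0.1 -P 4000 -u root -e "SELECT tidb_version()\G"   # SQL layer is up
curl -s http://localhost:10080/status                             # status API returns version/connection JSON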
Individual Services (3 services)
Copy individual services to mix and match with your existing compose files; a scaling sketch follows the three services.
pd
pd:
image: pingcap/pd:latest
container_name: pd
command: "--name=pd --client-urls=http://0.0.0.0:2379 --peer-urls=http://0.0.0.0:2380 --data-dir=/data"
volumes:
- pd_data:/data
ports:
- "2379:2379"
networks:
- tidb-network
tikv
tikv:
image: pingcap/tikv:latest
container_name: tikv
command: "--addr=0.0.0.0:20160 --advertise-addr=tikv:20160 --pd=pd:2379 --data-dir=/data"
volumes:
- tikv_data:/data
depends_on:
- pd
networks:
- tidb-network
tidb
tidb:
image: pingcap/tidb:latest
container_name: tidb
command: "--store=tikv --path=pd:2379"
ports:
- "4000:4000"
- "10080:10080"
depends_on:
- tikv
networks:
- tidb-network
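To scale storage horizontally (usage note above), additional TiKV services can be added alongside the existing one. A sketch, with the tikv2 name and tikv2_data volume invented for illustration; each node needs its own volume and a unique advertise address:
tikv2:
  image: pingcap/tikv:latest
  container_name: tikv2
  command: --addr=0.0.0.0:20160 --advertise-addr=tikv2:20160 --pd=pd:2379 --data-dir=/data
  volumes:
    - tikv2_data:/data
  depends_on:
    - pd
  networks:
    - tidb-network
Remember to declare tikv2_data under the top-level volumes key; PD rebalances regions onto the new store automatically once it registers.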
Quick Start
terminal
# 1. Create the compose file
cat > docker-compose.yml << 'EOF'
services:
  pd:
    image: pingcap/pd:latest
    container_name: pd
    command: --name=pd --client-urls=http://0.0.0.0:2379 --peer-urls=http://0.0.0.0:2380 --data-dir=/data
    volumes:
      - pd_data:/data
    ports:
      - "2379:2379"
    networks:
      - tidb-network

  tikv:
    image: pingcap/tikv:latest
    container_name: tikv
    command: --addr=0.0.0.0:20160 --advertise-addr=tikv:20160 --pd=pd:2379 --data-dir=/data
    volumes:
      - tikv_data:/data
    depends_on:
      - pd
    networks:
      - tidb-network

  tidb:
    image: pingcap/tidb:latest
    container_name: tidb
    command: --store=tikv --path=pd:2379
    ports:
      - "4000:4000"
      - "10080:10080"
    depends_on:
      - tikv
    networks:
      - tidb-network

volumes:
  pd_data:
  tikv_data:

networks:
  tidb-network:
    driver: bridge
EOF

# 2. Create the .env file
cat > .env << 'EOF'
# TiDB default configuration
# Connect using MySQL client on port 4000
EOF

# 3. Start the services
docker compose up -d

# 4. View logs
docker compose logs -f
One-Liner
Run this command to download and set up the recipe in one step:
terminal
curl -fsSL https://docker.recipes/api/recipes/tidb/run | bash
Troubleshooting
- TiKV fails to start with 'failed to get cluster id': Ensure PD container is fully started before TiKV initialization
- Connection refused on port 4000: Verify TiKV has successfully registered with PD before TiDB starts accepting connections
- PD panic with 'etcd cluster ID mismatch': Remove pd_data volume and restart cluster to reinitialize metadata
- TiDB reports 'PD server timeout': Check PD container logs for clock synchronization issues between containers
- Slow query performance: Monitor TiKV memory usage and consider increasing container memory limits above 4GB
- Schema changes hanging indefinitely: Check for long-running transactions that may block DDL operations
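Several of the failures above come down to startup ordering: plain depends_on only waits for a container to start, not for PD to be ready. A hedged sketch that gates TiKV on a PD healthcheck; it assumes curl is available inside the pd image, so swap in a wget or TCP-based check if it is not:
services:
  pd:
    # ...existing pd configuration...
    healthcheck:
      test: ["CMD", "curl", "-f", "http://127.0.0.1:2379/pd/api/v1/health"]
      interval: 5s
      timeout: 3s
      retries: 12
  tikv:
    # ...existing tikv configuration...
    depends_on:
      pd:
        condition: service_healthy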