Kafka + Zookeeper + Kafka UI
Distributed streaming platform with web UI.
Overview
Apache Kafka is a distributed event streaming platform originally developed by LinkedIn and later open-sourced, designed to handle real-time data feeds with high throughput and fault tolerance. It serves as the backbone for event-driven architectures, enabling applications to produce, consume, and process streams of records in real-time across distributed systems. Kafka's unique approach to messaging through persistent, partitioned logs makes it ideal for scenarios requiring message replay, event sourcing, and horizontal scalability.
This stack combines Kafka with Apache ZooKeeper for distributed coordination, Confluent Schema Registry for managing Avro schemas and data evolution, and Kafka UI for visual management and monitoring. ZooKeeper handles cluster metadata, leader election, and configuration management for Kafka brokers, while Schema Registry ensures data compatibility across producers and consumers by managing schema versions. Kafka UI provides a web-based interface for monitoring topics, consumer groups, partitions, and message flows without requiring command-line tools.
This configuration is ideal for developers building event-driven applications, data engineers implementing streaming pipelines, and DevOps teams needing a complete Kafka development environment. The inclusion of Schema Registry makes it particularly valuable for organizations requiring strict data governance and schema evolution, while Kafka UI reduces the operational complexity of managing topics and monitoring cluster health. This stack provides the foundation for implementing microservices communication, real-time analytics, and data integration patterns.
Key Features
- High-throughput message processing capable of handling millions of events per second
- Durable message storage with configurable retention policies and log compaction
- Consumer groups enabling parallel processing and automatic partition rebalancing
- Exactly-once processing semantics (via idempotent producers and transactions) for critical data workflows
- Avro schema management with backward and forward compatibility checking
- Web-based cluster monitoring with topic, partition, and consumer group visibility
- Event sourcing capabilities through persistent, immutable message logs
- Multi-partition topic support for horizontal scaling and load distribution
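With the stack below running, the partitioning and consumer-group features can be exercised using the CLI tools bundled in the Confluent Kafka image. This is a sketch: the service name `kafka` comes from this recipe's compose file, but the topic `orders` and group `demo-group` are illustrative names, and it assumes you run the commands from the directory containing the compose file.

```shell
# Create a topic with 3 partitions so multiple consumers can share the load
docker compose exec kafka kafka-topics --bootstrap-server kafka:9092 \
  --create --topic orders --partitions 3 --replication-factor 1

# Produce a couple of test messages from inside the broker container
docker compose exec kafka bash -c \
  'printf "order-1\norder-2\n" | kafka-console-producer --bootstrap-server kafka:9092 --topic orders'

# Consume as part of a consumer group; starting this command a second time
# in another terminal triggers an automatic partition rebalance between members
docker compose exec kafka kafka-console-consumer --bootstrap-server kafka:9092 \
  --topic orders --group demo-group --from-beginning
```

Messages produced with the same key always land on the same partition, which is what makes ordered per-key processing possible across a scaled-out consumer group.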
Common Use Cases
- Real-time data pipeline development for streaming analytics and ETL processes
- Event-driven microservices architecture with reliable inter-service communication
- Change data capture (CDC) for synchronizing databases and maintaining data consistency
- Log aggregation from distributed applications and infrastructure components
- IoT data ingestion and processing for sensor networks and telemetry systems
- Financial transaction processing requiring audit trails and event replay capabilities
- E-commerce activity tracking for recommendation engines and user behavior analysis
Prerequisites
- Minimum 6GB RAM available (1GB ZooKeeper + 4GB Kafka + 1GB other components)
- Docker Engine 20.10+ and Docker Compose V2 for proper service orchestration
- Ports 8080, 8081, and 29092 available on the host system
- Basic understanding of event streaming concepts and message queue patterns
- Familiarity with Avro schema design for Schema Registry integration
- SSD storage recommended for optimal Kafka log segment performance
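Before starting the stack it is worth confirming the three host ports are actually free. A minimal pre-flight sketch, assuming bash (it relies on bash's `/dev/tcp` pseudo-device, which plain `sh` lacks); `check_port` is a hypothetical helper, not part of the recipe:

```shell
# Reports whether a TCP port on localhost already has a listener.
# bash-only: /dev/tcp/<host>/<port> is a bash feature, not a real file.
check_port() {
  if (exec 3<>"/dev/tcp/127.0.0.1/$1") 2>/dev/null; then
    echo "in use"
  else
    echo "free"
  fi
}

# The three ports this stack publishes on the host
for port in 8080 8081 29092; do
  echo "port $port: $(check_port "$port")"
done
```

If any port reports "in use", either stop the conflicting process or remap the host side of the corresponding `ports:` entry in the compose file.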
For development & testing only. Review security settings, change default credentials, and test thoroughly before production use.
docker-compose.yml
services:
  zookeeper:
    image: confluentinc/cp-zookeeper:latest
    environment:
      - ZOOKEEPER_CLIENT_PORT=2181
      - ZOOKEEPER_TICK_TIME=2000
    volumes:
      - zookeeper-data:/var/lib/zookeeper/data
      - zookeeper-log:/var/lib/zookeeper/log
    networks:
      - kafka-network
    restart: unless-stopped

  kafka:
    image: confluentinc/cp-kafka:latest
    environment:
      - KAFKA_BROKER_ID=1
      - KAFKA_ZOOKEEPER_CONNECT=zookeeper:2181
      - KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092
      - KAFKA_LISTENER_SECURITY_PROTOCOL_MAP=PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
      - KAFKA_INTER_BROKER_LISTENER_NAME=PLAINTEXT
      - KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR=1
    volumes:
      - kafka-data:/var/lib/kafka/data
    ports:
      - "29092:29092"
    depends_on:
      - zookeeper
    networks:
      - kafka-network
    restart: unless-stopped

  schema-registry:
    image: confluentinc/cp-schema-registry:latest
    environment:
      - SCHEMA_REGISTRY_HOST_NAME=schema-registry
      - SCHEMA_REGISTRY_KAFKASTORE_BOOTSTRAP_SERVERS=kafka:9092
    ports:
      - "8081:8081"
    depends_on:
      - kafka
    networks:
      - kafka-network
    restart: unless-stopped

  kafka-ui:
    image: provectuslabs/kafka-ui:latest
    environment:
      - KAFKA_CLUSTERS_0_NAME=local
      - KAFKA_CLUSTERS_0_BOOTSTRAPSERVERS=kafka:9092
      - KAFKA_CLUSTERS_0_SCHEMAREGISTRY=http://schema-registry:8081
    ports:
      - "8080:8080"
    depends_on:
      - kafka
      - schema-registry
    networks:
      - kafka-network
    restart: unless-stopped

volumes:
  zookeeper-data:
  zookeeper-log:
  kafka-data:

networks:
  kafka-network:
    driver: bridge

.env Template
.env
# Kafka
# Kafka at localhost:29092 (external)
# Kafka UI at http://localhost:8080
# Schema Registry at http://localhost:8081

Usage Notes
- Kafka broker at localhost:29092
- Kafka UI at http://localhost:8080
- Schema Registry at http://localhost:8081
- Topic management via UI
- Avro schema support
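Once the services are up, the endpoints above can be checked, and an Avro schema registered, from the command line. A sketch assuming the stack is running: the subject name `orders-value` (value schema for a hypothetical `orders` topic) and the `Order` record are illustrative, not part of the recipe.

```shell
# All four services should report a running state
docker compose ps

# List topics via the broker's bundled CLI
docker compose exec kafka kafka-topics --bootstrap-server kafka:9092 --list

# Register an Avro value schema for the (hypothetical) "orders" topic.
# Schema Registry convention: value schemas live under "<topic>-value".
curl -s -X POST http://localhost:8081/subjects/orders-value/versions \
  -H "Content-Type: application/vnd.schemaregistry.v1+json" \
  -d '{"schema": "{\"type\":\"record\",\"name\":\"Order\",\"fields\":[{\"name\":\"id\",\"type\":\"string\"}]}"}'

# Confirm the subject exists and inspect the global compatibility level
curl -s http://localhost:8081/subjects
curl -s http://localhost:8081/config
```

Re-registering an incompatible version of the same subject is rejected according to the compatibility level shown by the last call, which is how Schema Registry enforces safe schema evolution.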
Individual Services (4 services)
Copy individual services to mix and match with your existing compose files.
zookeeper
zookeeper:
  image: confluentinc/cp-zookeeper:latest
  environment:
    - ZOOKEEPER_CLIENT_PORT=2181
    - ZOOKEEPER_TICK_TIME=2000
  volumes:
    - zookeeper-data:/var/lib/zookeeper/data
    - zookeeper-log:/var/lib/zookeeper/log
  networks:
    - kafka-network
  restart: unless-stopped
kafka
kafka:
  image: confluentinc/cp-kafka:latest
  environment:
    - KAFKA_BROKER_ID=1
    - KAFKA_ZOOKEEPER_CONNECT=zookeeper:2181
    - KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092
    - KAFKA_LISTENER_SECURITY_PROTOCOL_MAP=PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
    - KAFKA_INTER_BROKER_LISTENER_NAME=PLAINTEXT
    - KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR=1
  volumes:
    - kafka-data:/var/lib/kafka/data
  ports:
    - "29092:29092"
  depends_on:
    - zookeeper
  networks:
    - kafka-network
  restart: unless-stopped
schema-registry
schema-registry:
  image: confluentinc/cp-schema-registry:latest
  environment:
    - SCHEMA_REGISTRY_HOST_NAME=schema-registry
    - SCHEMA_REGISTRY_KAFKASTORE_BOOTSTRAP_SERVERS=kafka:9092
  ports:
    - "8081:8081"
  depends_on:
    - kafka
  networks:
    - kafka-network
  restart: unless-stopped
kafka-ui
kafka-ui:
  image: provectuslabs/kafka-ui:latest
  environment:
    - KAFKA_CLUSTERS_0_NAME=local
    - KAFKA_CLUSTERS_0_BOOTSTRAPSERVERS=kafka:9092
    - KAFKA_CLUSTERS_0_SCHEMAREGISTRY=http://schema-registry:8081
  ports:
    - "8080:8080"
  depends_on:
    - kafka
    - schema-registry
  networks:
    - kafka-network
  restart: unless-stopped
Quick Start
terminal
# 1. Create the compose file
cat > docker-compose.yml << 'EOF'
services:
  zookeeper:
    image: confluentinc/cp-zookeeper:latest
    environment:
      - ZOOKEEPER_CLIENT_PORT=2181
      - ZOOKEEPER_TICK_TIME=2000
    volumes:
      - zookeeper-data:/var/lib/zookeeper/data
      - zookeeper-log:/var/lib/zookeeper/log
    networks:
      - kafka-network
    restart: unless-stopped

  kafka:
    image: confluentinc/cp-kafka:latest
    environment:
      - KAFKA_BROKER_ID=1
      - KAFKA_ZOOKEEPER_CONNECT=zookeeper:2181
      - KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092
      - KAFKA_LISTENER_SECURITY_PROTOCOL_MAP=PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
      - KAFKA_INTER_BROKER_LISTENER_NAME=PLAINTEXT
      - KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR=1
    volumes:
      - kafka-data:/var/lib/kafka/data
    ports:
      - "29092:29092"
    depends_on:
      - zookeeper
    networks:
      - kafka-network
    restart: unless-stopped

  schema-registry:
    image: confluentinc/cp-schema-registry:latest
    environment:
      - SCHEMA_REGISTRY_HOST_NAME=schema-registry
      - SCHEMA_REGISTRY_KAFKASTORE_BOOTSTRAP_SERVERS=kafka:9092
    ports:
      - "8081:8081"
    depends_on:
      - kafka
    networks:
      - kafka-network
    restart: unless-stopped

  kafka-ui:
    image: provectuslabs/kafka-ui:latest
    environment:
      - KAFKA_CLUSTERS_0_NAME=local
      - KAFKA_CLUSTERS_0_BOOTSTRAPSERVERS=kafka:9092
      - KAFKA_CLUSTERS_0_SCHEMAREGISTRY=http://schema-registry:8081
    ports:
      - "8080:8080"
    depends_on:
      - kafka
      - schema-registry
    networks:
      - kafka-network
    restart: unless-stopped

volumes:
  zookeeper-data:
  zookeeper-log:
  kafka-data:

networks:
  kafka-network:
    driver: bridge
EOF

# 2. Create the .env file
cat > .env << 'EOF'
# Kafka
# Kafka at localhost:29092 (external)
# Kafka UI at http://localhost:8080
# Schema Registry at http://localhost:8081
EOF

# 3. Start the services
docker compose up -d

# 4. View logs
docker compose logs -f

One-Liner
Run this command to download and set up the recipe in one step:
terminal
curl -fsSL https://docker.recipes/api/recipes/kafka-complete/run | bash

Troubleshooting
- Error 'Connection to node -1 could not be established': Verify KAFKA_ADVERTISED_LISTENERS includes correct hostname and port mapping
- ZooKeeper connection timeouts: Increase ZOOKEEPER_TICK_TIME to 4000 for slower systems or networks
- Schema Registry 'Subject not found' errors: Ensure topics exist before registering schemas and verify KAFKASTORE_BOOTSTRAP_SERVERS connectivity
- Kafka UI showing 'Cluster unavailable': Check that kafka service is fully started before kafka-ui by adding health checks or delays
- High memory usage warnings: Adjust JVM heap settings using KAFKA_HEAP_OPTS environment variable
- Topic creation failures: Verify KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR matches available broker count in cluster
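Two of the items above (the kafka-ui startup race and JVM heap tuning) can be addressed directly in the compose file. A sketch of the relevant additions, not a drop-in replacement: the heap values are illustrative and should be tuned for your host, and the healthcheck command relies on the `kafka-broker-api-versions` CLI shipped in the Confluent image.

```yaml
kafka:
  # ...existing kafka configuration from the recipe...
  environment:
    # Illustrative heap bounds; defaults are often too large for laptops
    - KAFKA_HEAP_OPTS=-Xms512m -Xmx1g
  healthcheck:
    # Succeeds only once the broker answers API requests
    test: ["CMD-SHELL", "kafka-broker-api-versions --bootstrap-server kafka:9092"]
    interval: 10s
    timeout: 10s
    retries: 10

kafka-ui:
  # Wait for a healthy broker instead of merely a started container
  depends_on:
    kafka:
      condition: service_healthy
```

With the `service_healthy` condition, Compose delays kafka-ui until the broker passes its healthcheck, which removes the need for ad-hoc startup delays.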
Components
kafka, zookeeper, kafka-ui, schema-registry
Tags
#kafka #streaming #event-streaming #zookeeper #message-queue
Category
Message Queues & Brokers