docker.recipes

Kafka + Zookeeper + Kafka UI

advanced

Distributed streaming platform with web UI.

Overview

Apache Kafka is a distributed event streaming platform, originally developed at LinkedIn and later open-sourced, designed to handle real-time data feeds with high throughput and fault tolerance. It serves as the backbone for event-driven architectures, enabling applications to produce, consume, and process streams of records in real time across distributed systems. Kafka's approach to messaging through persistent, partitioned logs makes it ideal for scenarios requiring message replay, event sourcing, and horizontal scalability.

This stack combines Kafka with Apache ZooKeeper for distributed coordination, Confluent Schema Registry for managing Avro schemas and data evolution, and Kafka UI for visual management and monitoring. ZooKeeper handles cluster metadata, leader election, and configuration management for the Kafka broker; Schema Registry ensures data compatibility across producers and consumers by managing schema versions; and Kafka UI provides a web-based interface for monitoring topics, consumer groups, partitions, and message flows without requiring command-line tools.

This configuration suits developers building event-driven applications, data engineers implementing streaming pipelines, and DevOps teams needing a complete Kafka development environment. Schema Registry makes the stack particularly valuable for organizations requiring strict data governance and schema evolution, while Kafka UI reduces the operational complexity of managing topics and monitoring cluster health. Together, these services provide a foundation for microservices communication, real-time analytics, and data integration patterns.

Key Features

  • High-throughput message processing capable of handling millions of events per second
  • Durable message storage with configurable retention policies and log compaction
  • Consumer groups enabling parallel processing and automatic partition rebalancing
  • Exactly-once delivery semantics for critical data processing workflows
  • Avro schema management with backward and forward compatibility checking
  • Web-based cluster monitoring with topic, partition, and consumer group visibility
  • Event sourcing capabilities through persistent, immutable message logs
  • Multi-partition topic support for horizontal scaling and load distribution
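
Several of these features can be exercised directly with the CLI tools bundled in the Confluent images. A minimal sketch, assuming the stack from this recipe is running and the broker service is named `kafka` (the topic name `orders` and group `demo-group` are illustrative):

```shell
# Create a topic with 3 partitions so consumers in a group can share load
docker compose exec kafka kafka-topics --bootstrap-server kafka:9092 \
  --create --topic orders --partitions 3 --replication-factor 1

# Produce three test messages
printf 'order-1\norder-2\norder-3\n' | docker compose exec -T kafka \
  kafka-console-producer --bootstrap-server kafka:9092 --topic orders

# Consume them as part of a consumer group; starting a second consumer
# with the same --group would trigger an automatic partition rebalance
docker compose exec kafka kafka-console-consumer \
  --bootstrap-server kafka:9092 --topic orders \
  --group demo-group --from-beginning --max-messages 3
```

The replication factor must stay at 1 here because this stack runs a single broker.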

Common Use Cases

  • Real-time data pipeline development for streaming analytics and ETL processes
  • Event-driven microservices architecture with reliable inter-service communication
  • Change data capture (CDC) for synchronizing databases and maintaining data consistency
  • Log aggregation from distributed applications and infrastructure components
  • IoT data ingestion and processing for sensor networks and telemetry systems
  • Financial transaction processing requiring audit trails and event replay capabilities
  • E-commerce activity tracking for recommendation engines and user behavior analysis

Prerequisites

  • Minimum 6GB RAM available (1GB ZooKeeper + 4GB Kafka + 1GB other components)
  • Docker Engine 20.10+ and Docker Compose V2 for proper service orchestration
  • Ports 8080, 8081, and 29092 available on the host system
  • Basic understanding of event streaming concepts and message queue patterns
  • Familiarity with Avro schema design for Schema Registry integration
  • SSD storage recommended for optimal Kafka log segment performance

For development & testing. Review security settings, change default credentials, and test thoroughly before production use. See Terms.

docker-compose.yml

docker-compose.yml
services:
  zookeeper:
    image: confluentinc/cp-zookeeper:latest
    environment:
      - ZOOKEEPER_CLIENT_PORT=2181
      - ZOOKEEPER_TICK_TIME=2000
    volumes:
      - zookeeper-data:/var/lib/zookeeper/data
      - zookeeper-log:/var/lib/zookeeper/log
    networks:
      - kafka-network
    restart: unless-stopped

  kafka:
    image: confluentinc/cp-kafka:latest
    environment:
      - KAFKA_BROKER_ID=1
      - KAFKA_ZOOKEEPER_CONNECT=zookeeper:2181
      - KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092
      - KAFKA_LISTENER_SECURITY_PROTOCOL_MAP=PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
      - KAFKA_INTER_BROKER_LISTENER_NAME=PLAINTEXT
      - KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR=1
    volumes:
      - kafka-data:/var/lib/kafka/data
    ports:
      - "29092:29092"
    depends_on:
      - zookeeper
    networks:
      - kafka-network
    restart: unless-stopped

  schema-registry:
    image: confluentinc/cp-schema-registry:latest
    environment:
      - SCHEMA_REGISTRY_HOST_NAME=schema-registry
      - SCHEMA_REGISTRY_KAFKASTORE_BOOTSTRAP_SERVERS=kafka:9092
    ports:
      - "8081:8081"
    depends_on:
      - kafka
    networks:
      - kafka-network
    restart: unless-stopped

  kafka-ui:
    image: provectuslabs/kafka-ui:latest
    environment:
      - KAFKA_CLUSTERS_0_NAME=local
      - KAFKA_CLUSTERS_0_BOOTSTRAPSERVERS=kafka:9092
      - KAFKA_CLUSTERS_0_SCHEMAREGISTRY=http://schema-registry:8081
    ports:
      - "8080:8080"
    depends_on:
      - kafka
      - schema-registry
    networks:
      - kafka-network
    restart: unless-stopped

volumes:
  zookeeper-data:
  zookeeper-log:
  kafka-data:

networks:
  kafka-network:
    driver: bridge
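
The two advertised listeners serve different clients: containers on kafka-network reach the broker as kafka:9092, while host applications use the published localhost:29092. A quick check of both paths (a sketch, assuming the stack is up):

```shell
# Inside the compose network, clients resolve the broker as kafka:9092
docker compose exec kafka kafka-topics --bootstrap-server kafka:9092 --list

# The PLAINTEXT_HOST listener is what host-side clients connect to; the
# same CLI can exercise it via the broker's own localhost:29092 binding
docker compose exec kafka kafka-topics --bootstrap-server localhost:29092 --list
```

If a host client fails while the in-network check succeeds, the advertised listener (not the port mapping) is usually at fault.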

.env Template

.env
# Kafka
# Kafka at localhost:29092 (external)
# Kafka UI at http://localhost:8080
# Schema Registry at http://localhost:8081

Usage Notes

  1. Kafka broker at localhost:29092
  2. Kafka UI at http://localhost:8080
  3. Schema Registry at http://localhost:8081
  4. Topic management via UI
  5. Avro schema support
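
Avro support can be verified against Schema Registry's standard REST API on port 8081. A sketch (the subject name `orders-value` is illustrative; the `-value` suffix follows the default topic-name strategy):

```shell
# Register a minimal Avro schema for the value side of an "orders" topic
curl -s -X POST http://localhost:8081/subjects/orders-value/versions \
  -H "Content-Type: application/vnd.schemaregistry.v1+json" \
  -d '{"schema": "{\"type\":\"record\",\"name\":\"Order\",\"fields\":[{\"name\":\"id\",\"type\":\"string\"}]}"}'

# List all registered subjects
curl -s http://localhost:8081/subjects

# Fetch the latest registered version of the schema
curl -s http://localhost:8081/subjects/orders-value/versions/latest
```

Registering an incompatible change to the same subject is rejected by default, which is the compatibility checking mentioned under Key Features.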

Individual Services (4 services)

Copy individual services to mix and match with your existing compose files.

zookeeper
zookeeper:
  image: confluentinc/cp-zookeeper:latest
  environment:
    - ZOOKEEPER_CLIENT_PORT=2181
    - ZOOKEEPER_TICK_TIME=2000
  volumes:
    - zookeeper-data:/var/lib/zookeeper/data
    - zookeeper-log:/var/lib/zookeeper/log
  networks:
    - kafka-network
  restart: unless-stopped
kafka
kafka:
  image: confluentinc/cp-kafka:latest
  environment:
    - KAFKA_BROKER_ID=1
    - KAFKA_ZOOKEEPER_CONNECT=zookeeper:2181
    - KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092
    - KAFKA_LISTENER_SECURITY_PROTOCOL_MAP=PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
    - KAFKA_INTER_BROKER_LISTENER_NAME=PLAINTEXT
    - KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR=1
  volumes:
    - kafka-data:/var/lib/kafka/data
  ports:
    - "29092:29092"
  depends_on:
    - zookeeper
  networks:
    - kafka-network
  restart: unless-stopped
schema-registry
schema-registry:
  image: confluentinc/cp-schema-registry:latest
  environment:
    - SCHEMA_REGISTRY_HOST_NAME=schema-registry
    - SCHEMA_REGISTRY_KAFKASTORE_BOOTSTRAP_SERVERS=kafka:9092
  ports:
    - "8081:8081"
  depends_on:
    - kafka
  networks:
    - kafka-network
  restart: unless-stopped
kafka-ui
kafka-ui:
  image: provectuslabs/kafka-ui:latest
  environment:
    - KAFKA_CLUSTERS_0_NAME=local
    - KAFKA_CLUSTERS_0_BOOTSTRAPSERVERS=kafka:9092
    - KAFKA_CLUSTERS_0_SCHEMAREGISTRY=http://schema-registry:8081
  ports:
    - "8080:8080"
  depends_on:
    - kafka
    - schema-registry
  networks:
    - kafka-network
  restart: unless-stopped

Quick Start

terminal
# 1. Create the compose file
cat > docker-compose.yml << 'EOF'
services:
  zookeeper:
    image: confluentinc/cp-zookeeper:latest
    environment:
      - ZOOKEEPER_CLIENT_PORT=2181
      - ZOOKEEPER_TICK_TIME=2000
    volumes:
      - zookeeper-data:/var/lib/zookeeper/data
      - zookeeper-log:/var/lib/zookeeper/log
    networks:
      - kafka-network
    restart: unless-stopped

  kafka:
    image: confluentinc/cp-kafka:latest
    environment:
      - KAFKA_BROKER_ID=1
      - KAFKA_ZOOKEEPER_CONNECT=zookeeper:2181
      - KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092
      - KAFKA_LISTENER_SECURITY_PROTOCOL_MAP=PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
      - KAFKA_INTER_BROKER_LISTENER_NAME=PLAINTEXT
      - KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR=1
    volumes:
      - kafka-data:/var/lib/kafka/data
    ports:
      - "29092:29092"
    depends_on:
      - zookeeper
    networks:
      - kafka-network
    restart: unless-stopped

  schema-registry:
    image: confluentinc/cp-schema-registry:latest
    environment:
      - SCHEMA_REGISTRY_HOST_NAME=schema-registry
      - SCHEMA_REGISTRY_KAFKASTORE_BOOTSTRAP_SERVERS=kafka:9092
    ports:
      - "8081:8081"
    depends_on:
      - kafka
    networks:
      - kafka-network
    restart: unless-stopped

  kafka-ui:
    image: provectuslabs/kafka-ui:latest
    environment:
      - KAFKA_CLUSTERS_0_NAME=local
      - KAFKA_CLUSTERS_0_BOOTSTRAPSERVERS=kafka:9092
      - KAFKA_CLUSTERS_0_SCHEMAREGISTRY=http://schema-registry:8081
    ports:
      - "8080:8080"
    depends_on:
      - kafka
      - schema-registry
    networks:
      - kafka-network
    restart: unless-stopped

volumes:
  zookeeper-data:
  zookeeper-log:
  kafka-data:

networks:
  kafka-network:
    driver: bridge
EOF

# 2. Create the .env file
cat > .env << 'EOF'
# Kafka
# Kafka at localhost:29092 (external)
# Kafka UI at http://localhost:8080
# Schema Registry at http://localhost:8081
EOF

# 3. Start the services
docker compose up -d

# 4. View logs
docker compose logs -f

One-Liner

Run this command to download and set up the recipe in one step:

terminal
curl -fsSL https://docker.recipes/api/recipes/kafka-complete/run | bash

Troubleshooting

  • Error 'Connection to node -1 could not be established': Verify KAFKA_ADVERTISED_LISTENERS includes correct hostname and port mapping
  • ZooKeeper connection timeouts: Increase ZOOKEEPER_TICK_TIME to 4000 for slower systems or networks
  • Schema Registry 'Subject not found' errors: Ensure topics exist before registering schemas and verify KAFKASTORE_BOOTSTRAP_SERVERS connectivity
  • Kafka UI showing 'Cluster unavailable': Check that kafka service is fully started before kafka-ui by adding health checks or delays
  • High memory usage warnings: Adjust JVM heap settings using KAFKA_HEAP_OPTS environment variable
  • Topic creation failures: Verify KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR matches available broker count in cluster
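
When diagnosing the issues above, a few checks narrow down which layer is failing (a sketch, assuming the service names from this recipe):

```shell
# Tail broker logs for listener or connection errors
docker compose logs --tail 100 kafka

# Verify the broker responds and reports its supported API versions
docker compose exec kafka kafka-broker-api-versions --bootstrap-server kafka:9092

# Confirm the broker registered itself in ZooKeeper (a healthy single-broker
# cluster lists the id configured via KAFKA_BROKER_ID)
docker compose exec zookeeper zookeeper-shell localhost:2181 ls /brokers/ids
```

If the ZooKeeper check succeeds but clients still cannot connect, the problem is almost always the advertised listener configuration rather than the broker itself.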


