ELK Stack Complete
Complete Elasticsearch, Logstash, and Kibana stack for log aggregation and analysis.
Overview
Elasticsearch is a distributed, RESTful search and analytics engine built on Apache Lucene, originally developed by Elastic (formerly Elasticsearch N.V.) to handle massive volumes of data with near real-time search capabilities. As the core component of the Elastic Stack, Elasticsearch provides powerful full-text search, complex aggregations, and machine learning-powered anomaly detection, making it a go-to solution for log analytics, application search, and business intelligence.

This complete ELK stack combines Elasticsearch's search engine with Kibana's visualization dashboard, Logstash's data processing pipeline, and Filebeat's lightweight log shipping agent to create a comprehensive log aggregation and analysis platform. The four components work in concert: Filebeat collects and ships logs from various sources, Logstash processes and transforms the data through configurable pipelines, Elasticsearch indexes and stores the processed logs with full-text search capabilities, and Kibana provides rich visualizations and dashboards for data exploration.

DevOps engineers, security analysts, and system administrators should deploy this stack when they need centralized logging with powerful search and visualization capabilities. The combination offers enterprise-grade log management that scales from small applications to massive distributed systems, providing real-time insights into system performance, security events, and business metrics through Elasticsearch's advanced analytics and Kibana's intuitive interface.
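To illustrate the near real-time indexing and search described above, the sketch below indexes a sample log document over Elasticsearch's REST API and immediately searches for it. It assumes the stack from this recipe is running with security disabled; the logs-demo index name and the document fields are purely illustrative.
terminal
# Index a sample log document (refresh=true makes it searchable immediately)
curl -X POST "http://localhost:9200/logs-demo/_doc?refresh=true" \
  -H 'Content-Type: application/json' \
  -d '{"@timestamp": "2024-01-01T12:00:00Z", "level": "error", "message": "connection refused"}'

# Full-text search for the document that was just indexed
curl "http://localhost:9200/logs-demo/_search?q=message:refused&pretty"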
Key Features
- Full-text search with relevance scoring across all ingested log data
- Near real-time indexing and search capabilities for immediate log analysis
- Interactive Kibana dashboards with drill-down capabilities and custom visualizations
- Flexible Logstash pipeline processing with grok patterns and data transformation filters
- Lightweight Filebeat log shipping with automatic Docker container log discovery
- Distributed architecture with horizontal scaling and automatic sharding
- Advanced aggregations for time-series analysis and statistical computations
- Machine learning anomaly detection for proactive monitoring and alerting
Common Use Cases
- Centralized application logging for microservices architectures with distributed tracing
- Security information and event management (SIEM) for threat detection and compliance
- Infrastructure monitoring and performance analysis across multiple servers and services
- Business analytics and user behavior tracking through application event logs
- Troubleshooting production issues with comprehensive log search and correlation
- DevOps observability platform for CI/CD pipeline monitoring and deployment tracking
- IoT data ingestion and real-time analytics for sensor data and device telemetry
Prerequisites
- Minimum 4GB RAM available for Elasticsearch JVM heap allocation and optimal performance
- Docker and Docker Compose installed with sufficient disk space for log retention
- Ports 9200, 9300, 5601, 5044, and 9600 available for inter-service communication
- Understanding of log formats and basic knowledge of Logstash grok patterns
- Familiarity with Elasticsearch query DSL for advanced search and aggregations (see the example query after this list)
- Network access configuration for log sources that will ship data to the stack
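For readers less familiar with the query DSL mentioned above, here is a minimal sketch of the kind of request Kibana issues behind the scenes: a full-text match combined with a time-series aggregation. The logs-* index pattern and the message/@timestamp field names are assumptions; adjust them to however your pipeline names indices and fields.
terminal
# Search for error messages and bucket them per hour
curl -X GET "http://localhost:9200/logs-*/_search?pretty" \
  -H 'Content-Type: application/json' \
  -d '{
    "query": { "match": { "message": "error" } },
    "aggs": {
      "errors_over_time": {
        "date_histogram": { "field": "@timestamp", "fixed_interval": "1h" }
      }
    }
  }'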
For development & testing: review security settings, change default credentials, and test thoroughly before production use.
docker-compose.yml
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:8.11.0
    container_name: elasticsearch
    environment:
      - discovery.type=single-node
      - xpack.security.enabled=false
      - "ES_JAVA_OPTS=-Xms1g -Xmx1g"
    volumes:
      - elasticsearch_data:/usr/share/elasticsearch/data
    ports:
      - "9200:9200"
      - "9300:9300"
    networks:
      - elk-network
    healthcheck:
      test: ["CMD-SHELL", "curl -f http://localhost:9200 || exit 1"]
      interval: 30s
      timeout: 10s
      retries: 5

  kibana:
    image: docker.elastic.co/kibana/kibana:8.11.0
    container_name: kibana
    environment:
      - ELASTICSEARCH_HOSTS=http://elasticsearch:9200
    ports:
      - "5601:5601"
    depends_on:
      elasticsearch:
        condition: service_healthy
    networks:
      - elk-network

  logstash:
    image: docker.elastic.co/logstash/logstash:8.11.0
    container_name: logstash
    volumes:
      - ./logstash/pipeline:/usr/share/logstash/pipeline:ro
    ports:
      - "5044:5044"
      - "9600:9600"
    environment:
      - "LS_JAVA_OPTS=-Xms256m -Xmx256m"
    depends_on:
      elasticsearch:
        condition: service_healthy
    networks:
      - elk-network

  filebeat:
    image: docker.elastic.co/beats/filebeat:8.11.0
    container_name: filebeat
    user: root
    volumes:
      - ./filebeat/filebeat.yml:/usr/share/filebeat/filebeat.yml:ro
      - /var/lib/docker/containers:/var/lib/docker/containers:ro
      - /var/run/docker.sock:/var/run/docker.sock:ro
    depends_on:
      - logstash
    networks:
      - elk-network

volumes:
  elasticsearch_data:

networks:
  elk-network:
    driver: bridge

.env Template
.env
# ELK Stack
ES_JAVA_OPTS=-Xms1g -Xmx1g
LS_JAVA_OPTS=-Xms256m -Xmx256m
ELASTIC_VERSION=8.11.0

Usage Notes
- Kibana UI at http://localhost:5601
- Create logstash/pipeline/logstash.conf (a starter pipeline sketch is shown below)
- Create filebeat/filebeat.yml config (an example follows the filebeat service definition below)
- Requires minimum 4GB RAM
- Use for centralized logging
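A minimal sketch of logstash/pipeline/logstash.conf that fits this compose file: it listens for Beats traffic on port 5044, parses messages with a grok pattern, and writes to the elasticsearch container. The COMBINEDAPACHELOG pattern and the daily logs-* index name are assumptions; adapt or remove them to match your own log format.
logstash/pipeline/logstash.conf
input {
  beats {
    port => 5044
  }
}

filter {
  # Illustrative: parse Apache-style access logs; replace with a pattern
  # matching your own log format, or drop the filter entirely.
  grok {
    match => { "message" => "%{COMBINEDAPACHELOG}" }
  }
}

output {
  elasticsearch {
    hosts => ["http://elasticsearch:9200"]
    index => "logs-%{+YYYY.MM.dd}"
  }
}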
Individual Services (4 services)
Copy individual services to mix and match with your existing compose files.
elasticsearch
elasticsearch:
  image: docker.elastic.co/elasticsearch/elasticsearch:8.11.0
  container_name: elasticsearch
  environment:
    - discovery.type=single-node
    - xpack.security.enabled=false
    - ES_JAVA_OPTS=-Xms1g -Xmx1g
  volumes:
    - elasticsearch_data:/usr/share/elasticsearch/data
  ports:
    - "9200:9200"
    - "9300:9300"
  networks:
    - elk-network
  healthcheck:
    test:
      - CMD-SHELL
      - curl -f http://localhost:9200 || exit 1
    interval: 30s
    timeout: 10s
    retries: 5
kibana
kibana:
  image: docker.elastic.co/kibana/kibana:8.11.0
  container_name: kibana
  environment:
    - ELASTICSEARCH_HOSTS=http://elasticsearch:9200
  ports:
    - "5601:5601"
  depends_on:
    elasticsearch:
      condition: service_healthy
  networks:
    - elk-network
logstash
logstash:
  image: docker.elastic.co/logstash/logstash:8.11.0
  container_name: logstash
  volumes:
    - ./logstash/pipeline:/usr/share/logstash/pipeline:ro
  ports:
    - "5044:5044"
    - "9600:9600"
  environment:
    - LS_JAVA_OPTS=-Xms256m -Xmx256m
  depends_on:
    elasticsearch:
      condition: service_healthy
  networks:
    - elk-network
filebeat
filebeat:
  image: docker.elastic.co/beats/filebeat:8.11.0
  container_name: filebeat
  user: root
  volumes:
    - ./filebeat/filebeat.yml:/usr/share/filebeat/filebeat.yml:ro
    - /var/lib/docker/containers:/var/lib/docker/containers:ro
    - /var/run/docker.sock:/var/run/docker.sock:ro
  depends_on:
    - logstash
  networks:
    - elk-network
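A minimal filebeat.yml sketch that matches the volume mounts above: it tails Docker container logs from the mounted log directory, attaches container metadata via the mounted Docker socket, and forwards everything to Logstash on port 5044. The paths and Logstash host come from this recipe; tune them if you change the compose file.
filebeat/filebeat.yml
filebeat.inputs:
  - type: container
    paths:
      - /var/lib/docker/containers/*/*.log

processors:
  # Uses the mounted Docker socket to add container name and image fields
  - add_docker_metadata: ~

output.logstash:
  hosts: ["logstash:5044"]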
Quick Start
terminal
# 1. Create the compose file
cat > docker-compose.yml << 'EOF'
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:8.11.0
    container_name: elasticsearch
    environment:
      - discovery.type=single-node
      - xpack.security.enabled=false
      - "ES_JAVA_OPTS=-Xms1g -Xmx1g"
    volumes:
      - elasticsearch_data:/usr/share/elasticsearch/data
    ports:
      - "9200:9200"
      - "9300:9300"
    networks:
      - elk-network
    healthcheck:
      test: ["CMD-SHELL", "curl -f http://localhost:9200 || exit 1"]
      interval: 30s
      timeout: 10s
      retries: 5

  kibana:
    image: docker.elastic.co/kibana/kibana:8.11.0
    container_name: kibana
    environment:
      - ELASTICSEARCH_HOSTS=http://elasticsearch:9200
    ports:
      - "5601:5601"
    depends_on:
      elasticsearch:
        condition: service_healthy
    networks:
      - elk-network

  logstash:
    image: docker.elastic.co/logstash/logstash:8.11.0
    container_name: logstash
    volumes:
      - ./logstash/pipeline:/usr/share/logstash/pipeline:ro
    ports:
      - "5044:5044"
      - "9600:9600"
    environment:
      - "LS_JAVA_OPTS=-Xms256m -Xmx256m"
    depends_on:
      elasticsearch:
        condition: service_healthy
    networks:
      - elk-network

  filebeat:
    image: docker.elastic.co/beats/filebeat:8.11.0
    container_name: filebeat
    user: root
    volumes:
      - ./filebeat/filebeat.yml:/usr/share/filebeat/filebeat.yml:ro
      - /var/lib/docker/containers:/var/lib/docker/containers:ro
      - /var/run/docker.sock:/var/run/docker.sock:ro
    depends_on:
      - logstash
    networks:
      - elk-network

volumes:
  elasticsearch_data:

networks:
  elk-network:
    driver: bridge
EOF

# 2. Create the .env file
cat > .env << 'EOF'
# ELK Stack
ES_JAVA_OPTS=-Xms1g -Xmx1g
LS_JAVA_OPTS=-Xms256m -Xmx256m
ELASTIC_VERSION=8.11.0
EOF

# 3. Start the services
docker compose up -d

# 4. View logs
docker compose logs -f
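Once the containers are up (Elasticsearch can take a minute to pass its health check), the following checks confirm the stack is reachable. They assume the default ports from the compose file above and security disabled.
terminal
# Elasticsearch should report green or yellow status
curl "http://localhost:9200/_cluster/health?pretty"

# Kibana's status API should eventually report the service as available
curl "http://localhost:5601/api/status"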
One-Liner
Run this command to download and set up the recipe in one step:
terminal
curl -fsSL https://docker.recipes/api/recipes/elasticsearch-kibana-logstash/run | bash

Troubleshooting
- Elasticsearch 'cluster_block_exception' with disk watermark: Increase available disk space or adjust cluster.routing.allocation.disk.watermark settings
- Kibana shows 'Elasticsearch cluster did not respond' error: Verify Elasticsearch health check passes and container networking allows communication on port 9200
- Logstash pipeline fails to start with config errors: Validate pipeline configuration syntax in logstash.conf and ensure proper grok pattern formatting
- Filebeat not shipping logs with permission denied errors: Run Filebeat container with proper user permissions and verify Docker socket access
- High memory usage causing container crashes: Adjust ES_JAVA_OPTS and LS_JAVA_OPTS heap sizes based on available system memory
- Elasticsearch yellow cluster status with unassigned shards: Configure index templates with appropriate replica settings for single-node deployment
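To address the yellow-status item above on a single-node deployment, one option is an index template that sets zero replicas for the log indices. The logs-* pattern and the template name below are assumptions; match them to whatever index names your pipeline writes.
terminal
# Apply zero replicas to future log indices on a single-node cluster
curl -X PUT "http://localhost:9200/_index_template/logs-single-node" \
  -H 'Content-Type: application/json' \
  -d '{
    "index_patterns": ["logs-*"],
    "template": { "settings": { "number_of_replicas": 0 } }
  }'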
Components
elasticsearch, kibana, logstash, filebeat
Tags
#elk #elasticsearch #kibana #logstash #logging #analytics
Category
Monitoring & Observability