docker.recipes

ELK Stack Complete

advanced

Complete Elasticsearch, Logstash, and Kibana stack for log aggregation and analysis.

Overview

Elasticsearch is a distributed, RESTful search and analytics engine built on Apache Lucene, originally developed by Elastic (formerly Elasticsearch N.V.) to handle massive volumes of data with near real-time search. As the core component of the Elastic Stack, it provides full-text search, complex aggregations, and machine learning-powered anomaly detection, capabilities that have made it a go-to solution for log analytics, application search, and business intelligence across organizations worldwide.

This recipe combines Elasticsearch's search engine with Kibana's visualization dashboard, Logstash's data processing pipeline, and Filebeat's lightweight log shipping agent to create a complete log aggregation and analysis platform. The four components work in concert: Filebeat collects and ships logs from various sources, Logstash processes and transforms the data through configurable pipelines, Elasticsearch indexes and stores the processed logs with full-text search capabilities, and Kibana provides rich visualizations and dashboards for data exploration.

DevOps engineers, security analysts, and system administrators should deploy this stack when they need centralized logging with powerful search and visualization. It scales from small applications to large distributed systems, providing real-time insight into system performance, security events, and business metrics through Elasticsearch's analytics and Kibana's interface.

Key Features

  • Full-text search with relevance scoring across all ingested log data
  • Near real-time indexing and search capabilities for immediate log analysis
  • Interactive Kibana dashboards with drill-down capabilities and custom visualizations
  • Flexible Logstash pipeline processing with grok patterns and data transformation filters
  • Lightweight Filebeat log shipping with automatic Docker container log discovery
  • Distributed architecture with horizontal scaling and automatic sharding
  • Advanced aggregations for time-series analysis and statistical computations (see the example query after this list)
  • Machine learning anomaly detection for proactive monitoring and alerting
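
As a quick illustration of the aggregation support mentioned above, the following request buckets log events per hour. This is a sketch: the logs-* index pattern and the @timestamp field are assumptions that match the example Logstash output shown later in this recipe.

terminal
curl -s -X GET "http://localhost:9200/logs-*/_search" -H 'Content-Type: application/json' -d '
{
  "size": 0,
  "aggs": {
    "events_per_hour": {
      "date_histogram": { "field": "@timestamp", "calendar_interval": "1h" }
    }
  }
}'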

Common Use Cases

  • Centralized application logging for microservices architectures with distributed tracing
  • Security information and event management (SIEM) for threat detection and compliance
  • Infrastructure monitoring and performance analysis across multiple servers and services
  • Business analytics and user behavior tracking through application event logs
  • Troubleshooting production issues with comprehensive log search and correlation
  • DevOps observability platform for CI/CD pipeline monitoring and deployment tracking
  • IoT data ingestion and real-time analytics for sensor data and device telemetry

Prerequisites

  • Minimum 4GB RAM available for Elasticsearch JVM heap allocation and optimal performance
  • Docker and Docker Compose installed with sufficient disk space for log retention
  • Ports 9200, 9300, 5601, 5044, and 9600 available for inter-service communication
  • Understanding of log formats and basic knowledge of Logstash grok patterns
  • Familiarity with Elasticsearch query DSL for advanced search and aggregations (a starter query follows this list)
  • Network access configuration for log sources that will ship data to the stack
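
For the query DSL prerequisite, a minimal search is enough to get started. The logs-* index pattern and the message field are assumptions; adjust them to whatever your Logstash pipeline actually writes.

terminal
curl -s -X GET "http://localhost:9200/logs-*/_search" -H 'Content-Type: application/json' -d '
{
  "query": { "match": { "message": "error" } },
  "size": 5
}'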

This recipe is intended for development and testing. Review security settings, change default credentials, and test thoroughly before production use. See Terms

docker-compose.yml

docker-compose.yml
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:8.11.0
    container_name: elasticsearch
    environment:
      - discovery.type=single-node
      - xpack.security.enabled=false
      - "ES_JAVA_OPTS=-Xms1g -Xmx1g"
    volumes:
      - elasticsearch_data:/usr/share/elasticsearch/data
    ports:
      - "9200:9200"
      - "9300:9300"
    networks:
      - elk-network
    healthcheck:
      test: ["CMD-SHELL", "curl -f http://localhost:9200 || exit 1"]
      interval: 30s
      timeout: 10s
      retries: 5

  kibana:
    image: docker.elastic.co/kibana/kibana:8.11.0
    container_name: kibana
    environment:
      - ELASTICSEARCH_HOSTS=http://elasticsearch:9200
    ports:
      - "5601:5601"
    depends_on:
      elasticsearch:
        condition: service_healthy
    networks:
      - elk-network

  logstash:
    image: docker.elastic.co/logstash/logstash:8.11.0
    container_name: logstash
    volumes:
      - ./logstash/pipeline:/usr/share/logstash/pipeline:ro
    ports:
      - "5044:5044"
      - "9600:9600"
    environment:
      - "LS_JAVA_OPTS=-Xms256m -Xmx256m"
    depends_on:
      elasticsearch:
        condition: service_healthy
    networks:
      - elk-network

  filebeat:
    image: docker.elastic.co/beats/filebeat:8.11.0
    container_name: filebeat
    user: root
    volumes:
      - ./filebeat/filebeat.yml:/usr/share/filebeat/filebeat.yml:ro
      - /var/lib/docker/containers:/var/lib/docker/containers:ro
      - /var/run/docker.sock:/var/run/docker.sock:ro
    depends_on:
      - logstash
    networks:
      - elk-network

volumes:
  elasticsearch_data:

networks:
  elk-network:
    driver: bridge

.env Template

.env
# ELK Stack
ES_JAVA_OPTS=-Xms1g -Xmx1g
LS_JAVA_OPTS=-Xms256m -Xmx256m
ELASTIC_VERSION=8.11.0
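
As written, the compose file hardcodes the image tag and the Java heap options, so the values in .env are not actually consumed. If you want .env to drive them, interpolate the variables in docker-compose.yml; for example, for the elasticsearch service (repeat the same pattern for kibana, logstash, and filebeat):

docker-compose.yml (excerpt)
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:${ELASTIC_VERSION:-8.11.0}
    environment:
      - "ES_JAVA_OPTS=${ES_JAVA_OPTS:--Xms1g -Xmx1g}"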

Usage Notes

  1. The Kibana UI is available at http://localhost:5601
  2. Create logstash/pipeline/logstash.conf (a starter template follows this list)
  3. Create filebeat/filebeat.yml (a starter template follows this list)
  4. The stack requires a minimum of 4GB RAM
  5. Intended for centralized logging of application and container logs
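
The two files referenced above are mounted by the compose file but not created by it. The snippets below are minimal starting points, not production-ready configurations: the beats port, index name, and container log path are assumptions you may want to adjust.

logstash/pipeline/logstash.conf
# Receive events from Filebeat on port 5044
input {
  beats {
    port => 5044
  }
}

# Forward everything to Elasticsearch; add filter {} blocks (e.g. grok) as needed
output {
  elasticsearch {
    hosts => ["http://elasticsearch:9200"]
    index => "logs-%{+YYYY.MM.dd}"
  }
}

filebeat/filebeat.yml
# Collect logs from all Docker containers on the host
filebeat.inputs:
  - type: container
    paths:
      - /var/lib/docker/containers/*/*.log

# Ship to Logstash rather than directly to Elasticsearch
output.logstash:
  hosts: ["logstash:5044"]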

Individual Services (4 services)

Copy individual services to mix and match with your existing compose files.

elasticsearch
elasticsearch:
  image: docker.elastic.co/elasticsearch/elasticsearch:8.11.0
  container_name: elasticsearch
  environment:
    - discovery.type=single-node
    - xpack.security.enabled=false
    - ES_JAVA_OPTS=-Xms1g -Xmx1g
  volumes:
    - elasticsearch_data:/usr/share/elasticsearch/data
  ports:
    - "9200:9200"
    - "9300:9300"
  networks:
    - elk-network
  healthcheck:
    test:
      - CMD-SHELL
      - curl -f http://localhost:9200 || exit 1
    interval: 30s
    timeout: 10s
    retries: 5
kibana
kibana:
  image: docker.elastic.co/kibana/kibana:8.11.0
  container_name: kibana
  environment:
    - ELASTICSEARCH_HOSTS=http://elasticsearch:9200
  ports:
    - "5601:5601"
  depends_on:
    elasticsearch:
      condition: service_healthy
  networks:
    - elk-network
logstash
logstash:
  image: docker.elastic.co/logstash/logstash:8.11.0
  container_name: logstash
  volumes:
    - ./logstash/pipeline:/usr/share/logstash/pipeline:ro
  ports:
    - "5044:5044"
    - "9600:9600"
  environment:
    - LS_JAVA_OPTS=-Xms256m -Xmx256m
  depends_on:
    elasticsearch:
      condition: service_healthy
  networks:
    - elk-network
filebeat
filebeat:
  image: docker.elastic.co/beats/filebeat:8.11.0
  container_name: filebeat
  user: root
  volumes:
    - ./filebeat/filebeat.yml:/usr/share/filebeat/filebeat.yml:ro
    - /var/lib/docker/containers:/var/lib/docker/containers:ro
    - /var/run/docker.sock:/var/run/docker.sock:ro
  depends_on:
    - logstash
  networks:
    - elk-network

Quick Start

terminal
# 1. Create the compose file
cat > docker-compose.yml << 'EOF'
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:8.11.0
    container_name: elasticsearch
    environment:
      - discovery.type=single-node
      - xpack.security.enabled=false
      - "ES_JAVA_OPTS=-Xms1g -Xmx1g"
    volumes:
      - elasticsearch_data:/usr/share/elasticsearch/data
    ports:
      - "9200:9200"
      - "9300:9300"
    networks:
      - elk-network
    healthcheck:
      test: ["CMD-SHELL", "curl -f http://localhost:9200 || exit 1"]
      interval: 30s
      timeout: 10s
      retries: 5

  kibana:
    image: docker.elastic.co/kibana/kibana:8.11.0
    container_name: kibana
    environment:
      - ELASTICSEARCH_HOSTS=http://elasticsearch:9200
    ports:
      - "5601:5601"
    depends_on:
      elasticsearch:
        condition: service_healthy
    networks:
      - elk-network

  logstash:
    image: docker.elastic.co/logstash/logstash:8.11.0
    container_name: logstash
    volumes:
      - ./logstash/pipeline:/usr/share/logstash/pipeline:ro
    ports:
      - "5044:5044"
      - "9600:9600"
    environment:
      - "LS_JAVA_OPTS=-Xms256m -Xmx256m"
    depends_on:
      elasticsearch:
        condition: service_healthy
    networks:
      - elk-network

  filebeat:
    image: docker.elastic.co/beats/filebeat:8.11.0
    container_name: filebeat
    user: root
    volumes:
      - ./filebeat/filebeat.yml:/usr/share/filebeat/filebeat.yml:ro
      - /var/lib/docker/containers:/var/lib/docker/containers:ro
      - /var/run/docker.sock:/var/run/docker.sock:ro
    depends_on:
      - logstash
    networks:
      - elk-network

volumes:
  elasticsearch_data:

networks:
  elk-network:
    driver: bridge
EOF

# 2. Create the .env file
cat > .env << 'EOF'
# ELK Stack
ES_JAVA_OPTS=-Xms1g -Xmx1g
LS_JAVA_OPTS=-Xms256m -Xmx256m
ELASTIC_VERSION=8.11.0
EOF

# 3. Start the services
docker compose up -d

# 4. View logs
docker compose logs -f
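
Startup can take a minute or two. A few quick checks once the containers are running, assuming the default ports used in this recipe:

terminal
# Elasticsearch should return cluster metadata once healthy
curl -s http://localhost:9200

# The Logstash monitoring API on port 9600 reports node and pipeline status
curl -s "http://localhost:9600/_node/stats?pretty"

# Kibana is served at http://localhost:5601 once it connects to Elasticsearch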

One-Liner

Run this command to download and set up the recipe in one step:

terminal
curl -fsSL https://docker.recipes/api/recipes/elasticsearch-kibana-logstash/run | bash

Troubleshooting

  • Elasticsearch 'cluster_block_exception' with disk watermark: Increase available disk space or adjust cluster.routing.allocation.disk.watermark settings
  • Kibana shows 'Elasticsearch cluster did not respond' error: Verify Elasticsearch health check passes and container networking allows communication on port 9200
  • Logstash pipeline fails to start with config errors: Validate pipeline configuration syntax in logstash.conf and ensure proper grok pattern formatting
  • Filebeat not shipping logs with permission denied errors: Run Filebeat container with proper user permissions and verify Docker socket access
  • High memory usage causing container crashes: Adjust ES_JAVA_OPTS and LS_JAVA_OPTS heap sizes based on available system memory
  • Elasticsearch yellow cluster status with unassigned shards: Configure index templates with appropriate replica settings for single-node deployment
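
For the disk watermark and yellow-status items above, the following commands help diagnose and, on a single-node deployment, resolve the issue. The logs-* index pattern is just an example; target your own indices.

terminal
# Check cluster health, disk allocation, and why shards are unassigned
curl -s "http://localhost:9200/_cluster/health?pretty"
curl -s "http://localhost:9200/_cat/allocation?v"
curl -s "http://localhost:9200/_cat/shards?v&h=index,shard,prirep,state,unassigned.reason"

# On a single node, replicas can never be assigned; dropping them to 0 clears yellow status
curl -X PUT "http://localhost:9200/logs-*/_settings" \
  -H 'Content-Type: application/json' \
  -d '{"index": {"number_of_replicas": 0}}'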

