docker.recipes

Fluentd Log Aggregator

intermediate

Unified logging layer for collecting and forwarding logs.

Overview

Fluentd is a data collection and unified logging layer that acts as a centralized hub for collecting, filtering, and forwarding log data from multiple sources. Originally developed by Treasure Data and now a Cloud Native Computing Foundation project, Fluentd uses a plugin-based architecture to support over 500 data sources and output destinations. Its flexible JSON-based event routing system makes it ideal for creating unified logging pipelines across complex distributed systems.

This stack combines Fluentd with Elasticsearch and Kibana to create a comprehensive log analytics platform. Fluentd collects logs from Docker containers, applications, and system sources, processes them through configurable filters, and forwards structured data to Elasticsearch for indexing and storage. Kibana provides the visualization layer, enabling teams to create dashboards, search through logs, and analyze patterns. The integration leverages Elasticsearch's full-text search capabilities and real-time indexing to make log data immediately searchable.

Development teams, DevOps engineers, and system administrators who need centralized log management across microservices, containerized applications, or hybrid infrastructure will benefit from this configuration. The combination excels at handling high-volume log streams, correlating events across services, and providing real-time insights into application behavior. Unlike simple log shipping tools, Fluentd's buffering and retry mechanisms ensure log reliability, while its parsing capabilities can transform unstructured logs into searchable JSON documents.

Key Features

  • Plugin-based input system supporting Docker logs, HTTP endpoints, TCP/UDP streams, and file tailing
  • Configurable log parsing and transformation with regex, JSON, and custom format processors
  • Buffering and retry mechanisms with memory and file-based queuing for reliable log delivery
  • Tag-based routing system for directing different log types to appropriate destinations
  • Real-time log forwarding to Elasticsearch with automatic index creation and mapping
  • Kibana integration for log visualization, dashboard creation, and alerting capabilities
  • Multi-format input support including syslog, Apache access logs, and application-specific formats
  • Built-in monitoring endpoint on port 9880 for health checks and metrics collection (an example request follows this list)
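
For example, once the stack is up you can exercise the HTTP endpoint on 9880 with curl. This assumes your fluent.conf enables the http input on that port (the port this recipe publishes); the tag myapp.test is an arbitrary placeholder:

terminal
# Send a test event to the Fluentd HTTP input; the URL path becomes the event tag
curl -X POST -d 'json={"message":"hello from the http input"}' http://localhost:9880/myapp.test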

Common Use Cases

  • Centralized logging for microservices architectures with container-based applications
  • Application performance monitoring and debugging across development and production environments
  • Security information and event management (SIEM) for analyzing access logs and system events
  • Infrastructure monitoring for servers, network devices, and cloud services
  • Compliance logging and audit trail management for regulated industries
  • Real-time log analysis and alerting for detecting application errors and anomalies
  • Development team log aggregation for troubleshooting distributed system issues

Prerequisites

  • Docker host with at least 2GB available RAM for Elasticsearch indexing and search operations
  • Understanding of log formats and parsing requirements for your applications
  • Basic knowledge of Elasticsearch query syntax and Kibana dashboard creation
  • Network access to ports 24224 (Fluentd), 5601 (Kibana), and 9880 (Fluentd HTTP)
  • Fluentd configuration file (fluent.conf) defining input sources, filters, and output destinations (a minimal sketch follows this list)
  • Sufficient disk space for log retention based on your data volume and retention policies
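
The compose file bind-mounts ./fluent.conf, so create it before starting the stack. Below is a minimal sketch, assuming you want the forward input on 24224, the HTTP input on 9880, and all events routed to the bundled Elasticsearch. Note that the stock fluent/fluentd image does not bundle the Elasticsearch output plugin; you would typically build a small custom image on top of it that runs gem install fluent-plugin-elasticsearch.

fluent.conf
# Accept events from the Docker fluentd log driver / forward protocol
<source>
  @type forward
  port 24224
  bind 0.0.0.0
</source>

# Accept events over HTTP (handy for curl-based testing)
<source>
  @type http
  port 9880
  bind 0.0.0.0
</source>

# Route every tag to the bundled Elasticsearch service
<match **>
  @type elasticsearch        # requires the fluent-plugin-elasticsearch gem
  host elasticsearch
  port 9200
  logstash_format true       # daily logstash-* indices, easy to pick up in Kibana
  <buffer>
    flush_interval 5s
  </buffer>
</match>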

For development & testing. Review security settings, change default credentials, and test thoroughly before production use. See Terms

docker-compose.yml

docker-compose.yml
services:
  fluentd:
    image: fluent/fluentd:v1.16-debian
    container_name: fluentd
    volumes:
      - ./fluent.conf:/fluentd/etc/fluent.conf:ro
      - fluentd-logs:/fluentd/log
    ports:
      - "24224:24224"
      - "24224:24224/udp"
      - "9880:9880"
    depends_on:
      - elasticsearch
    networks:
      - fluentd-network
    restart: unless-stopped

  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:8.11.0
    container_name: fluentd-elasticsearch
    environment:
      - discovery.type=single-node
      - xpack.security.enabled=false
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    volumes:
      - elasticsearch-data:/usr/share/elasticsearch/data
    networks:
      - fluentd-network
    restart: unless-stopped

  kibana:
    image: docker.elastic.co/kibana/kibana:8.11.0
    container_name: fluentd-kibana
    environment:
      - ELASTICSEARCH_HOSTS=http://elasticsearch:9200
    ports:
      - "5601:5601"
    depends_on:
      - elasticsearch
    networks:
      - fluentd-network
    restart: unless-stopped

volumes:
  fluentd-logs:
  elasticsearch-data:

networks:
  fluentd-network:
    driver: bridge

.env Template

.env
# Fluentd Aggregator
# Create fluent.conf with input/output configuration

Usage Notes

  1. Fluentd forward input on :24224
  2. Fluentd HTTP input on :9880
  3. Kibana at http://localhost:5601
  4. Configure the Docker fluentd log driver to ship container logs to the stack (example below)
  5. Extensive plugin ecosystem available for additional inputs, filters, and outputs
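
To ship a container's stdout/stderr into this stack (usage note 4), point Docker's fluentd logging driver at the published forward port. A minimal sketch; the myapp service name and nginx:alpine image are placeholders for your own application:

docker-compose.yml
myapp:
  image: nginx:alpine
  logging:
    driver: fluentd
    options:
      fluentd-address: localhost:24224   # resolved by the Docker daemon on the host
      tag: docker.myapp                  # becomes the Fluentd tag used for routing
      fluentd-async: "true"              # don't block container startup if Fluentd is down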

Individual Services (3 services)

Copy individual services to mix and match with your existing compose files (remember to also carry over the named volumes and network, shown after the snippets).

fluentd
fluentd:
  image: fluent/fluentd:v1.16-debian
  container_name: fluentd
  volumes:
    - ./fluent.conf:/fluentd/etc/fluent.conf:ro
    - fluentd-logs:/fluentd/log
  ports:
    - "24224:24224"
    - "24224:24224/udp"
    - "9880:9880"
  depends_on:
    - elasticsearch
  networks:
    - fluentd-network
  restart: unless-stopped
elasticsearch
elasticsearch:
  image: docker.elastic.co/elasticsearch/elasticsearch:8.11.0
  container_name: fluentd-elasticsearch
  environment:
    - discovery.type=single-node
    - xpack.security.enabled=false
    - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
  volumes:
    - elasticsearch-data:/usr/share/elasticsearch/data
  networks:
    - fluentd-network
  restart: unless-stopped
kibana
kibana:
  image: docker.elastic.co/kibana/kibana:8.11.0
  container_name: fluentd-kibana
  environment:
    - ELASTICSEARCH_HOSTS=http://elasticsearch:9200
  ports:
    - "5601:5601"
  depends_on:
    - elasticsearch
  networks:
    - fluentd-network
  restart: unless-stopped
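
These snippets reference the recipe's named resources (fluentd-network, fluentd-logs, elasticsearch-data). When merging a service into an existing compose file, carry over the matching top-level declarations as well:

docker-compose.yml
volumes:
  fluentd-logs:
  elasticsearch-data:

networks:
  fluentd-network:
    driver: bridge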

Quick Start

terminal
# 0. Make sure fluent.conf exists in this directory first -- the compose file
#    bind-mounts it into the fluentd container (see the sketch under Prerequisites)

# 1. Create the compose file
cat > docker-compose.yml << 'EOF'
services:
  fluentd:
    image: fluent/fluentd:v1.16-debian
    container_name: fluentd
    volumes:
      - ./fluent.conf:/fluentd/etc/fluent.conf:ro
      - fluentd-logs:/fluentd/log
    ports:
      - "24224:24224"
      - "24224:24224/udp"
      - "9880:9880"
    depends_on:
      - elasticsearch
    networks:
      - fluentd-network
    restart: unless-stopped

  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:8.11.0
    container_name: fluentd-elasticsearch
    environment:
      - discovery.type=single-node
      - xpack.security.enabled=false
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    volumes:
      - elasticsearch-data:/usr/share/elasticsearch/data
    networks:
      - fluentd-network
    restart: unless-stopped

  kibana:
    image: docker.elastic.co/kibana/kibana:8.11.0
    container_name: fluentd-kibana
    environment:
      - ELASTICSEARCH_HOSTS=http://elasticsearch:9200
    ports:
      - "5601:5601"
    depends_on:
      - elasticsearch
    networks:
      - fluentd-network
    restart: unless-stopped

volumes:
  fluentd-logs:
  elasticsearch-data:

networks:
  fluentd-network:
    driver: bridge
EOF

# 2. Create the .env file
cat > .env << 'EOF'
# Fluentd Aggregator
# Create fluent.conf with input/output configuration
EOF

# 3. Start the services
docker compose up -d

# 4. View logs
docker compose logs -f
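
Optionally, verify the pipeline once the containers are up. Elasticsearch's port 9200 is not published to the host in this recipe, so query it from inside the container; this assumes your fluent.conf forwards events to Elasticsearch as sketched earlier.

terminal
# Check cluster health and list the indices Fluentd has created
docker compose exec elasticsearch curl -s 'http://localhost:9200/_cluster/health?pretty'
docker compose exec elasticsearch curl -s 'http://localhost:9200/_cat/indices?v'

# Then open http://localhost:5601 and create a data view matching those indices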

One-Liner

Run this command to download and set up the recipe in one step:

terminal
curl -fsSL https://docker.recipes/api/recipes/fluentd-aggregator/run | bash

Troubleshooting

  • Fluentd 'no patterns matched' error: Check your fluent.conf tag routing and ensure log sources match configured input patterns
  • Elasticsearch connection refused: Verify the elasticsearch service is running and accessible on the fluentd-network, check ES_JAVA_OPTS memory settings
  • Logs not appearing in Kibana: Confirm Elasticsearch indices are created, refresh index patterns in Kibana Management section
  • High memory usage in Elasticsearch: Adjust ES_JAVA_OPTS heap size based on available system memory, enable index lifecycle management
  • Fluentd buffer overflow warnings: Increase buffer size in fluent.conf or add file-based buffering for high-volume log sources (see the sketch after this list)
  • Permission denied on fluentd-logs volume: Ensure proper file permissions for the fluent user (UID 100) on mounted volumes
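
For the buffer overflow case, one option is switching the Elasticsearch match to file-based buffering. A sketch with illustrative starting limits; the buffer path sits inside the fluentd-logs volume already mounted at /fluentd/log:

fluent.conf
<match **>
  @type elasticsearch
  host elasticsearch
  port 9200
  logstash_format true
  <buffer>
    @type file                   # persist chunks across restarts
    path /fluentd/log/buffer     # inside the fluentd-logs volume
    chunk_limit_size 8MB
    total_limit_size 512MB
    flush_interval 10s
    retry_max_interval 30s
    overflow_action block        # apply back-pressure instead of dropping events
  </buffer>
</match>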


Download Recipe Kit

Get all files in a ready-to-deploy package

Includes docker-compose.yml, .env template, README, and license
