Fluentd Log Aggregator
Unified logging layer for collecting and forwarding logs.
Overview
Fluentd is a data collection and unified logging layer that acts as a centralized hub for collecting, filtering, and forwarding log data from multiple sources. Originally developed by Treasure Data and part of the Cloud Native Computing Foundation, Fluentd uses a plugin-based architecture to support over 500 data sources and output destinations. Its flexible JSON-based event routing system makes it ideal for creating unified logging pipelines across complex distributed systems.
This stack combines Fluentd with Elasticsearch and Kibana to create a comprehensive log analytics platform. Fluentd collects logs from Docker containers, applications, and system sources, processes them through configurable filters, and forwards structured data to Elasticsearch for indexing and storage. Kibana provides the visualization layer, enabling teams to create dashboards, search through logs, and analyze patterns. The integration leverages Elasticsearch's full-text search capabilities and real-time indexing to make log data immediately searchable.
Development teams, DevOps engineers, and system administrators who need centralized log management across microservices, containerized applications, or hybrid infrastructure will benefit from this configuration. The combination excels at handling high-volume log streams, correlating events across services, and providing real-time insights into application behavior. Unlike simple log shipping tools, Fluentd's buffering and retry mechanisms ensure log reliability, while its parsing capabilities can transform unstructured logs into searchable JSON documents.
Key Features
- Plugin-based input system supporting Docker logs, HTTP endpoints, TCP/UDP streams, and file tailing
- Configurable log parsing and transformation with regex, JSON, and custom format processors
- Buffering and retry mechanisms with memory and file-based queuing for reliable log delivery
- Tag-based routing system for directing different log types to appropriate destinations
- Real-time log forwarding to Elasticsearch with automatic index creation and mapping
- Kibana integration for log visualization, dashboard creation, and alerting capabilities
- Multi-format input support including syslog, Apache access logs, and application-specific formats
- HTTP input endpoint on port 9880 for ad-hoc event injection and health checks
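The features above depend on a `fluent.conf` that wires inputs to outputs. A minimal sketch matching this stack's ports (forward input on 24224, HTTP input on 9880, Elasticsearch output) might look like the following. Note this is an illustrative example, and the Elasticsearch output requires the `fluent-plugin-elasticsearch` plugin, which the stock `fluent/fluentd` image does not bundle; you would need a custom image that installs it.

```
# Accept events from the Docker fluentd log driver and other forwarders
<source>
  @type forward
  port 24224
  bind 0.0.0.0
</source>

# Accept ad-hoc events over HTTP (POST /<tag> with json=...)
<source>
  @type http
  port 9880
  bind 0.0.0.0
</source>

# Ship everything to Elasticsearch with file-based buffering for reliability
<match **>
  @type elasticsearch
  host elasticsearch
  port 9200
  logstash_format true
  logstash_prefix fluentd
  <buffer>
    @type file
    path /fluentd/log/buffer
    flush_interval 5s
  </buffer>
</match>
```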
Common Use Cases
- Centralized logging for microservices architectures with container-based applications
- Application performance monitoring and debugging across development and production environments
- Security information and event management (SIEM) for analyzing access logs and system events
- Infrastructure monitoring for servers, network devices, and cloud services
- Compliance logging and audit trail management for regulated industries
- Real-time log analysis and alerting for detecting application errors and anomalies
- Development team log aggregation for troubleshooting distributed system issues
Prerequisites
- Docker host with at least 2GB available RAM for Elasticsearch indexing and search operations
- Understanding of log formats and parsing requirements for your applications
- Basic knowledge of Elasticsearch query syntax and Kibana dashboard creation
- Network access to ports 24224 (Fluentd), 5601 (Kibana), and 9880 (Fluentd HTTP)
- Fluentd configuration file (fluent.conf) defining input sources, filters, and output destinations
- Sufficient disk space for log retention based on your data volume and retention policies
For development and testing only. Review security settings, change default credentials, and test thoroughly before production use.
docker-compose.yml
```yaml
services:
  fluentd:
    image: fluent/fluentd:v1.16-debian
    container_name: fluentd
    volumes:
      - ./fluent.conf:/fluentd/etc/fluent.conf:ro
      - fluentd-logs:/fluentd/log
    ports:
      - "24224:24224"
      - "24224:24224/udp"
      - "9880:9880"
    depends_on:
      - elasticsearch
    networks:
      - fluentd-network
    restart: unless-stopped

  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:8.11.0
    container_name: fluentd-elasticsearch
    environment:
      - discovery.type=single-node
      - xpack.security.enabled=false
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    volumes:
      - elasticsearch-data:/usr/share/elasticsearch/data
    networks:
      - fluentd-network
    restart: unless-stopped

  kibana:
    image: docker.elastic.co/kibana/kibana:8.11.0
    container_name: fluentd-kibana
    environment:
      - ELASTICSEARCH_HOSTS=http://elasticsearch:9200
    ports:
      - "5601:5601"
    depends_on:
      - elasticsearch
    networks:
      - fluentd-network
    restart: unless-stopped

volumes:
  fluentd-logs:
  elasticsearch-data:

networks:
  fluentd-network:
    driver: bridge
```
.env Template
.env
```
# Fluentd Aggregator
# Create fluent.conf with input/output configuration
```
Usage Notes
- Fluentd on :24224 (forward input)
- HTTP input on :9880
- Kibana at http://localhost:5601
- Configure the Docker fluentd log driver to ship container logs
- Plugin ecosystem available
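To act on the "configure Docker log driver" note above, containers can ship their stdout/stderr straight to this stack via Docker's built-in `fluentd` log driver. A quick sketch, assuming the stack is already up and port 24224 is reachable on localhost:

```shell
# Run a throwaway container whose output is routed to Fluentd
# rather than the default json-file log driver.
docker run --rm \
  --log-driver=fluentd \
  --log-opt fluentd-address=localhost:24224 \
  --log-opt tag=docker.demo \
  alpine echo "hello from a container"
```

The `tag` value is what your `fluent.conf` `<match>` patterns route on; `docker.demo` here is just an illustrative name.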
Individual Services (3 services)
Copy individual services to mix and match with your existing compose files.
fluentd
```yaml
fluentd:
  image: fluent/fluentd:v1.16-debian
  container_name: fluentd
  volumes:
    - ./fluent.conf:/fluentd/etc/fluent.conf:ro
    - fluentd-logs:/fluentd/log
  ports:
    - "24224:24224"
    - "24224:24224/udp"
    - "9880:9880"
  depends_on:
    - elasticsearch
  networks:
    - fluentd-network
  restart: unless-stopped
```
elasticsearch
```yaml
elasticsearch:
  image: docker.elastic.co/elasticsearch/elasticsearch:8.11.0
  container_name: fluentd-elasticsearch
  environment:
    - discovery.type=single-node
    - xpack.security.enabled=false
    - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
  volumes:
    - elasticsearch-data:/usr/share/elasticsearch/data
  networks:
    - fluentd-network
  restart: unless-stopped
```
kibana
```yaml
kibana:
  image: docker.elastic.co/kibana/kibana:8.11.0
  container_name: fluentd-kibana
  environment:
    - ELASTICSEARCH_HOSTS=http://elasticsearch:9200
  ports:
    - "5601:5601"
  depends_on:
    - elasticsearch
  networks:
    - fluentd-network
  restart: unless-stopped
```
Quick Start
terminal
```shell
# 1. Create the compose file
cat > docker-compose.yml << 'EOF'
services:
  fluentd:
    image: fluent/fluentd:v1.16-debian
    container_name: fluentd
    volumes:
      - ./fluent.conf:/fluentd/etc/fluent.conf:ro
      - fluentd-logs:/fluentd/log
    ports:
      - "24224:24224"
      - "24224:24224/udp"
      - "9880:9880"
    depends_on:
      - elasticsearch
    networks:
      - fluentd-network
    restart: unless-stopped

  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:8.11.0
    container_name: fluentd-elasticsearch
    environment:
      - discovery.type=single-node
      - xpack.security.enabled=false
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    volumes:
      - elasticsearch-data:/usr/share/elasticsearch/data
    networks:
      - fluentd-network
    restart: unless-stopped

  kibana:
    image: docker.elastic.co/kibana/kibana:8.11.0
    container_name: fluentd-kibana
    environment:
      - ELASTICSEARCH_HOSTS=http://elasticsearch:9200
    ports:
      - "5601:5601"
    depends_on:
      - elasticsearch
    networks:
      - fluentd-network
    restart: unless-stopped

volumes:
  fluentd-logs:
  elasticsearch-data:

networks:
  fluentd-network:
    driver: bridge
EOF

# 2. Create the .env file
cat > .env << 'EOF'
# Fluentd Aggregator
# Create fluent.conf with input/output configuration
EOF

# 3. Start the services
docker compose up -d

# 4. View logs
docker compose logs -f
```
One-Liner
Run this command to download and set up the recipe in one step:
terminal
```shell
curl -fsSL https://docker.recipes/api/recipes/fluentd-aggregator/run | bash
```
Troubleshooting
- Fluentd 'no patterns matched' error: Check your fluent.conf tag routing and ensure log sources match configured input patterns
- Elasticsearch connection refused: Verify the elasticsearch service is running and accessible on the fluentd-network, check ES_JAVA_OPTS memory settings
- Logs not appearing in Kibana: Confirm Elasticsearch indices are created, refresh index patterns in Kibana Management section
- High memory usage in Elasticsearch: Adjust ES_JAVA_OPTS heap size based on available system memory, enable index lifecycle management
- Fluentd buffer overflow warnings: Increase buffer size in fluent.conf or add file-based buffering for high-volume log sources
- Permission denied on fluentd-logs volume: Ensure proper file permissions for the fluent user (UID 100) on mounted volumes
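When debugging the "logs not appearing in Kibana" case above, it helps to verify each pipeline stage independently. A smoke-test sketch, assuming your `fluent.conf` defines an `http` input on port 9880 and an Elasticsearch output:

```shell
# 1. Inject a test event through Fluentd's HTTP input.
#    The path segment ("debug.test" here, a made-up tag) becomes the event tag.
curl -s -X POST -d 'json={"message":"hello fluentd"}' http://localhost:9880/debug.test

# 2. Confirm Elasticsearch received and indexed data
#    (index names depend on your output plugin configuration).
curl -s 'http://localhost:9200/_cat/indices?v'
```

If step 1 succeeds but step 2 shows no indices, the problem is in the Fluentd output/buffer configuration rather than in Kibana.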
Components
fluentd, elasticsearch, kibana
Tags
#logging #fluentd #log-aggregation #unified-logging
Category
Monitoring & Observability