docker.recipes

Vector

intermediate

High-performance observability data pipeline for logs, metrics, and traces.

Overview

Vector is a high-performance observability data pipeline, written in Rust, that collects, transforms, and routes logs, metrics, and traces. Originally developed by Timber.io (now part of Datadog), Vector was designed to address the performance and reliability problems of traditional log-processing pipelines; the project's own benchmarks claim up to 10x the throughput and roughly a quarter of the memory usage of alternatives like Logstash or Fluentd. Its architecture emphasizes memory safety, end-to-end acknowledgements to guard against data loss, and high throughput with a small resource footprint.

This Docker deployment creates a centralized observability pipeline that collects data from multiple sources, transforms it with the Vector Remap Language (VRL), and routes it to destinations such as Elasticsearch, Prometheus, or cloud storage. Vector acts as a universal data router: it can parse different log formats, enrich events with metadata, filter noise, and convert between observability data types (for example, deriving metrics from logs). The configuration exposes the API endpoint for health monitoring and a custom port for data ingestion.

This setup suits platform engineers building modern observability stacks, SREs managing distributed systems, and organizations consolidating their telemetry pipeline without vendor lock-in. Because a single Vector pipeline handles application logs, infrastructure metrics, and distributed traces, it reduces operational complexity while improving data quality and cutting costs through intelligent sampling and filtering.

Key Features

  • Vector Remap Language (VRL) for powerful real-time data transformation with compile-time guarantees
  • Zero-data-loss architecture with built-in buffering, retries, and acknowledgment tracking
  • Multi-protocol data ingestion supporting syslog, HTTP, gRPC, Kafka, and file tailing
  • Built-in observability with Prometheus metrics, structured logging, and health check endpoints
  • Memory-safe Rust implementation delivering consistent sub-millisecond latencies under load
  • Unified processing for logs, metrics, and traces eliminating the need for separate pipelines
  • Hot-reloading configuration changes without dropping data or restarting the service
  • Adaptive concurrency and intelligent backpressure handling for optimal resource utilization
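
To give a feel for the VRL transformations mentioned above, here is a hedged sketch of a remap transform as it might appear in vector.yaml. The source id `app_logs` and the field names (`message`, `level`) are hypothetical and depend on your actual sources and log format:

```yaml
transforms:
  clean_logs:
    type: remap
    inputs: ["app_logs"]   # hypothetical source id
    source: |
      # Parse the raw message as JSON; the `!` variant aborts
      # (dropping the event by default) if parsing fails
      . = parse_json!(string!(.message))
      # Enrich with static metadata
      .pipeline = "vector-demo"
      # Drop noisy debug events
      if .level == "debug" { abort }
```

Because VRL programs are compiled and type-checked at startup, mistakes such as referencing a fallible function without error handling are caught before any data flows.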

Common Use Cases

  • Centralizing log collection from Kubernetes clusters and forwarding to multiple SIEM platforms
  • Building cost-effective observability pipelines by pre-filtering and sampling high-volume telemetry data
  • Migrating from proprietary logging solutions while maintaining compatibility with existing dashboards
  • Creating compliance-ready audit trails with data enrichment and structured formatting
  • Implementing real-time alerting by parsing logs and converting critical events to metrics
  • Reducing cloud logging costs by intelligently routing data based on severity and content
  • Building hybrid cloud observability by routing on-premises data to cloud-native monitoring tools

Prerequisites

  • Minimum 512MB RAM for basic log processing, 2GB+ recommended for high-throughput scenarios
  • Docker host with available ports 8686 (API/metrics) and 9000 (data ingestion)
  • Valid vector.yaml configuration file defining sources, transforms, and sinks
  • Read access to log directories if collecting from local filesystem (/var/log mounted)
  • Network connectivity to destination systems like Elasticsearch, Kafka, or cloud APIs
  • Understanding of Vector Remap Language syntax for data transformation requirements
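
A minimal ./vector/vector.yaml satisfying these prerequisites might look like the sketch below. It assumes syslog over TCP on port 9000 and prints events to the container's stdout; the component names (`app_logs`, `parse`, `console_out`) are illustrative, and real deployments would swap the console sink for Elasticsearch, Kafka, or similar:

```yaml
api:
  enabled: true
  address: "0.0.0.0:8686"   # health and metrics endpoint

sources:
  app_logs:
    type: syslog
    mode: tcp
    address: "0.0.0.0:9000"  # matches the compose port mapping

transforms:
  parse:
    type: remap
    inputs: ["app_logs"]
    source: |
      # Tag each event so downstream systems can identify the pipeline
      .pipeline = "vector-demo"

sinks:
  console_out:
    type: console
    inputs: ["parse"]
    encoding:
      codec: json
```

With this file in place, events sent to port 9000 appear as JSON in `docker compose logs -f`.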

For development & testing. Review security settings, change default credentials, and test thoroughly before production use.

docker-compose.yml

services:
  vector:
    image: timberio/vector:latest-alpine
    container_name: vector
    restart: unless-stopped
    volumes:
      - ./vector/vector.yaml:/etc/vector/vector.yaml:ro
      - /var/log:/var/log:ro
    ports:
      - "8686:8686"
      - "9000:9000"
    networks:
      - vector-network

networks:
  vector-network:
    driver: bridge

.env Template

.env
# Vector configuration file required

Usage Notes

  1. Docs: https://vector.dev/docs/
  2. API/metrics at http://localhost:8686
  3. Create ./vector/vector.yaml configuration before starting
  4. VRL (Vector Remap Language) for powerful log transformation
  5. Written in Rust - 10x faster, 4x less memory than alternatives, per the project's benchmarks
  6. Supports logs, metrics, and traces - unified observability pipeline

Quick Start

terminal
# 1. Create the compose file
cat > docker-compose.yml << 'EOF'
services:
  vector:
    image: timberio/vector:latest-alpine
    container_name: vector
    restart: unless-stopped
    volumes:
      - ./vector/vector.yaml:/etc/vector/vector.yaml:ro
      - /var/log:/var/log:ro
    ports:
      - "8686:8686"
      - "9000:9000"
    networks:
      - vector-network

networks:
  vector-network:
    driver: bridge
EOF

# 2. Create the .env file
cat > .env << 'EOF'
# Vector configuration file required
EOF

# 3. Start the services
docker compose up -d

# 4. View logs
docker compose logs -f

One-Liner

Run this command to download and set up the recipe in one step:

terminal
curl -fsSL https://docker.recipes/api/recipes/vector/run | bash

Troubleshooting

  • Configuration validation failed: Use 'vector validate /etc/vector/vector.yaml' to check syntax and test transforms
  • High memory usage with file tailing: Increase 'max_line_bytes' and implement log rotation to prevent unbounded growth
  • Connection refused to sinks: Verify destination connectivity and check Vector's internal metrics at localhost:8686/metrics
  • VRL transform compilation errors: Review VRL documentation and use Vector's built-in type checking for field validation
  • Data loss during high throughput: Increase buffer sizes in vector.yaml and monitor buffer utilization metrics
  • Permission denied reading log files: Ensure container has proper file permissions or run with appropriate user/group mapping
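
For the data-loss item above, buffering is configured per sink. The fragment below is a hedged sketch of a disk buffer on a hypothetical Elasticsearch sink; the endpoint and component names are assumptions, and disk buffers require Vector's data directory (/var/lib/vector in the official image) to be writable and ideally persisted via a volume:

```yaml
sinks:
  elasticsearch_out:
    type: elasticsearch
    inputs: ["parse"]                         # hypothetical transform id
    endpoints: ["http://elasticsearch:9200"]  # hypothetical destination
    buffer:
      type: disk            # survives restarts, unlike the default in-memory buffer
      max_size: 268435488   # ~256 MiB; minimum allowed disk buffer size
      when_full: block      # apply backpressure instead of dropping events
```

Buffer utilization is visible in the internal metrics at localhost:8686, which makes it straightforward to confirm whether the buffer, rather than the sink, is the bottleneck.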

