Parseable
Cloud-native log analytics platform with S3 storage backend.
Overview
Parseable is a modern, cloud-native log analytics platform designed as a lightweight alternative to traditional solutions like Elasticsearch and Loki. Built in Rust for high performance and resource efficiency, Parseable focuses on providing real-time log analytics with SQL-based querying capabilities and flexible storage backends including S3, MinIO, and local storage. The platform emphasizes simplicity and cost-effectiveness while maintaining enterprise-grade features for log aggregation, analysis, and visualization.
This Docker deployment configures Parseable in local-store mode, creating a self-contained log analytics environment that stores data directly on the container filesystem using Docker volumes. The setup includes dedicated storage areas for both active data processing and staging operations, with a web-based interface accessible on port 8000 for log management and querying. The configuration uses basic authentication and provides RESTful APIs for log ingestion from various sources including applications, servers, and containerized workloads.
This stack is ideal for development teams, DevOps engineers, and system administrators who need efficient log analytics without the complexity and resource overhead of larger platforms like the ELK stack. Small to medium-sized organizations, startups, and homelab enthusiasts will find this particularly valuable for centralized logging, application debugging, and system monitoring where SQL familiarity is preferred over complex query languages.
Key Features
- SQL-based log querying with standard syntax for familiar data exploration and analysis
- RESTful API endpoints for programmatic log ingestion from applications and services
- Real-time log analytics with instant search and filtering capabilities
- Lightweight Rust-based architecture requiring minimal system resources compared to Java-based alternatives
- Web-based dashboard for log visualization, stream management, and query execution
- JSON-native log processing with automatic field detection and indexing
- Local storage backend with staging area for efficient data processing workflows
- Built-in authentication system with configurable user credentials for secure access
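As a rough illustration of the ingestion and SQL query features above, the sketch below assumes the default credentials from the .env template and a hypothetical stream named demo. The logstream endpoint matches the usage notes further down; the query endpoint path, payload field names, and time-range format are assumptions based on Parseable's documented API and may differ between versions, so verify them against the docs.
# Create a stream named "demo" (some versions auto-create streams on first ingest)
curl -u admin:changeme -X PUT http://localhost:8000/api/v1/logstream/demo

# Ingest a JSON array of events into the stream
curl -u admin:changeme -X POST http://localhost:8000/api/v1/logstream/demo \
  -H 'Content-Type: application/json' \
  -d '[{"level":"error","service":"checkout","message":"payment gateway timeout"}]'

# Run a SQL query over a time range (payload fields are assumptions; see the docs)
curl -u admin:changeme -X POST http://localhost:8000/api/v1/query \
  -H 'Content-Type: application/json' \
  -d '{"query":"SELECT * FROM demo WHERE level = '\''error'\''","startTime":"10m","endTime":"now"}'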
Common Use Cases
- Application debugging and error tracking for development teams working with microservices
- Centralized logging for small to medium-sized container orchestration environments
- Cost-effective log analytics for startups looking to replace expensive SaaS logging solutions
- Development environment log aggregation where SQL skills are more common than Elasticsearch expertise
- Homelab monitoring and system administration for self-hosted infrastructure
- CI/CD pipeline log analysis and build troubleshooting with structured query capabilities
- IoT device log collection and analysis for edge computing scenarios with resource constraints
Prerequisites
- Docker Engine 20.10+ and Docker Compose V2 for container orchestration
- Minimum 512MB RAM available for the Parseable container and data processing
- Port 8000 available on the host system for web interface access
- At least 2GB free disk space for Docker volumes and log data storage
- Basic understanding of SQL syntax for effective log querying and analysis
- Network connectivity for pulling the parseable/parseable Docker image from Docker Hub
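The prerequisites above can be confirmed on the host with standard Docker and Linux commands before deploying:
# Confirm Docker Engine 20.10+ and Compose V2 are installed
docker --version
docker compose version

# Confirm nothing is already listening on port 8000 (no output means the port is free)
ss -ltn | grep ':8000'

# Check free disk space where Docker stores volumes (typically /var/lib/docker on Linux)
df -h /var/lib/docker 2>/dev/null || df -h /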
For development and testing only. Review security settings, change default credentials, and test thoroughly before production use.
docker-compose.yml
services:
  parseable:
    image: parseable/parseable:latest
    container_name: parseable
    restart: unless-stopped
    command:
      - parseable
      - local-store
    environment:
      P_USERNAME: ${P_USERNAME}
      P_PASSWORD: ${P_PASSWORD}
    volumes:
      - parseable_data:/parseable/data
      - parseable_staging:/parseable/staging
    ports:
      - "8000:8000"
    networks:
      - parseable-network

volumes:
  parseable_data:
  parseable_staging:

networks:
  parseable-network:
    driver: bridge
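Before bringing the stack up, docker compose config renders the file with the values from the .env file interpolated, which is a quick way to confirm that the P_USERNAME and P_PASSWORD variables resolve as expected:
# Render the effective configuration with .env interpolation applied
docker compose config

# Show only the environment section of the rendered service definition
docker compose config | grep -A 3 'environment:'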
.env Template
.env
P_USERNAME=admin
P_PASSWORD=changeme
Usage Notes
- Docs: https://www.parseable.io/docs/
- UI at http://localhost:8000 - login with P_USERNAME/P_PASSWORD
- Ingest logs: POST /api/v1/logstream/{stream} with JSON body
- Query with SQL: SELECT * FROM stream WHERE field = 'value'
- Production: use S3/MinIO backend instead of local-store (see the sketch after this list)
- Lightweight Elasticsearch/Loki alternative for log analytics
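For the production note above, the following is a minimal sketch of what an S3/MinIO-backed service could look like. The s3-store subcommand and the P_S3_* variable names follow Parseable's S3 mode documentation but should be treated as assumptions and verified against the docs for your version; the MinIO endpoint, bucket name, and region are placeholders, and the remaining sections (volumes, networks) stay as in the local-store compose file above.
services:
  parseable:
    image: parseable/parseable:latest
    container_name: parseable
    restart: unless-stopped
    command:
      - parseable
      - s3-store                           # assumption: S3 mode subcommand, see Parseable docs
    environment:
      P_USERNAME: ${P_USERNAME}
      P_PASSWORD: ${P_PASSWORD}
      P_S3_URL: http://minio:9000          # placeholder: MinIO or S3 endpoint
      P_S3_ACCESS_KEY: ${S3_ACCESS_KEY}
      P_S3_SECRET_KEY: ${S3_SECRET_KEY}
      P_S3_BUCKET: parseable               # placeholder bucket name
      P_S3_REGION: us-east-1               # placeholder region
    volumes:
      - parseable_staging:/parseable/staging   # staging remains local even with an S3 backend
    ports:
      - "8000:8000"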
Quick Start
terminal
# 1. Create the compose file
cat > docker-compose.yml << 'EOF'
services:
  parseable:
    image: parseable/parseable:latest
    container_name: parseable
    restart: unless-stopped
    command:
      - parseable
      - local-store
    environment:
      P_USERNAME: ${P_USERNAME}
      P_PASSWORD: ${P_PASSWORD}
    volumes:
      - parseable_data:/parseable/data
      - parseable_staging:/parseable/staging
    ports:
      - "8000:8000"
    networks:
      - parseable-network

volumes:
  parseable_data:
  parseable_staging:

networks:
  parseable-network:
    driver: bridge
EOF

# 2. Create the .env file
cat > .env << 'EOF'
P_USERNAME=admin
P_PASSWORD=changeme
EOF

# 3. Start the services
docker compose up -d

# 4. View logs
docker compose logs -f
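Once the container is up, a couple of quick checks can confirm the API is reachable and the configured credentials work. The liveness path below follows Parseable's documented health endpoint but is an assumption; the logstream listing should simply return an empty list on a fresh install.
# Check the health endpoint (path is an assumption; adjust if your version differs)
curl -fsS http://localhost:8000/api/v1/liveness && echo "parseable is live"

# Confirm the credentials from .env by listing log streams
curl -fsS -u admin:changeme http://localhost:8000/api/v1/logstream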
One-Liner
Run this command to download and set up the recipe in one step:
terminal
curl -fsSL https://docker.recipes/api/recipes/parseable/run | bash
Troubleshooting
- Container exits with 'permission denied' error: Ensure Docker has proper permissions to create and write to the mounted volumes, or run with appropriate user mapping
- Web interface shows 401 Unauthorized: Verify P_USERNAME and P_PASSWORD environment variables are set correctly in your .env file and match login credentials
- Log ingestion API returns 404 errors: Check that log streams are being sent to the correct endpoint format /api/v1/logstream/{stream_name} with proper HTTP POST method
- Query performance is slow with large datasets: Consider switching to S3 backend for production use as local storage may have I/O limitations with high volume logs
- Container fails to start with 'address already in use': Another service is using port 8000, either stop the conflicting service or change the port mapping in the docker-compose file
- Data persistence issues after container restart: Verify Docker volumes are properly mounted and the parseable user has write permissions to the volume mount points
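Most of the issues above can be narrowed down with a few standard Docker commands:
# Inspect container logs for startup errors (permissions, credentials, port binds)
docker compose logs parseable

# List the named volumes (compose prefixes them with the project/directory name)
docker volume ls | grep parseable

# Check ownership and contents of the data directory inside the running container
docker exec parseable ls -la /parseable/data

# Identify the process holding port 8000 if the container cannot bind it
ss -ltnp | grep ':8000'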