HAProxy Load Balancer
HAProxy for high-availability load balancing.
Overview
HAProxy is a battle-tested, high-performance TCP/HTTP load balancer and reverse proxy that has been the backbone of high-availability web infrastructure for over two decades. Known for its exceptional reliability, low latency, and robust feature set, HAProxy excels at distributing traffic across multiple backend servers while providing advanced health checking, SSL termination, and traffic shaping capabilities. Unlike application-focused reverse proxies, HAProxy is purpose-built for load balancing, with detailed per-request timing metrics and sophisticated routing algorithms.
This deployment creates a complete load balancing demonstration environment with HAProxy distributing traffic across three separate NGINX web servers (web-1, web-2, and web-3). Each NGINX instance serves as an independent backend with its own content directory, allowing you to see load balancing in action as requests are distributed using HAProxy's algorithms. The HAProxy service exposes both the main load-balanced application on port 80 and a comprehensive statistics dashboard on port 8404 for real-time monitoring of backend health and traffic distribution.
This configuration is ideal for DevOps engineers learning load balancing concepts, developers testing application behavior under load distribution, and system administrators prototyping high-availability web architectures. The setup provides hands-on experience with HAProxy's configuration syntax, backend health monitoring, and traffic routing while demonstrating how multiple identical services can work together to provide redundancy and increased capacity.
Key Features
- Layer 4 and Layer 7 load balancing with configurable algorithms (round-robin, least connections, source IP hashing)
- Real-time health checks for backend servers with automatic failover and recovery
- Comprehensive statistics dashboard with live traffic metrics, response times, and server status
- Advanced traffic routing with ACLs for path-based, host-based, and header-based routing decisions (see the configuration sketch after this list)
- Session persistence support through stick tables for maintaining client-server affinity
- Built-in rate limiting and connection throttling to protect backend servers from overload
- Zero-downtime configuration reloads for updating backend servers without service interruption
- Detailed logging with customizable log formats for traffic analysis and debugging
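Several of these features (ACL routing, stick-table persistence, rate limiting) are not exercised by the minimal configuration this recipe ships with. The fragment below is only a sketch of how they could be expressed: the frontend and backend names, the /api path, the admin.example.com host, and the 20-requests-per-10-seconds threshold are assumptions, and "mode http" is expected to be set in a defaults section.
haproxy.cfg (illustrative fragment)
# Assumes "mode http" in a defaults section; names and thresholds are examples only
frontend fe_main
    bind *:80
    # Path-based and host-based routing ACLs
    acl is_api   path_beg /api
    acl is_admin hdr(host) -i admin.example.com
    # Rate limiting: track per-client request rate, reject clients above 20 req/10s
    stick-table type ip size 100k expire 30s store http_req_rate(10s)
    http-request track-sc0 src
    http-request deny deny_status 429 if { sc_http_req_rate(0) gt 20 }
    use_backend be_api if is_api or is_admin
    default_backend be_web

backend be_api
    server web-3 web-3:80 check

backend be_web
    balance roundrobin
    # Stick-table persistence: pin each client IP to the server it first reached
    stick-table type ip size 100k expire 30m
    stick on src
    server web-1 web-1:80 check
    server web-2 web-2:80 check
    server web-3 web-3:80 check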
Common Use Cases
- High-availability web application deployments requiring automatic failover between multiple servers
- Performance testing environments to evaluate application behavior under distributed load
- Blue-green deployment strategies where traffic can be gradually shifted between application versions
- Multi-tenant applications requiring traffic routing based on domain names or URL paths
- Database connection pooling and load distribution for read replicas in database clusters
- API gateway implementations requiring sophisticated routing and rate limiting capabilities
- Educational environments for learning load balancing concepts and HAProxy configuration management
Prerequisites
- Minimum 512MB RAM available (HAProxy: 64MB, each NGINX instance: 128MB)
- Basic understanding of HTTP protocols, load balancing concepts, and reverse proxy functionality
- Familiarity with HAProxy configuration file syntax and backend server definitions
- Network ports 80 and 8404 available on the host system for load balancer and statistics access
- Understanding of health check mechanisms and how to interpret HAProxy statistics and logs
For development and testing only. Review security settings, change default credentials, and test thoroughly before production use.
docker-compose.yml
docker-compose.yml
services:
  haproxy:
    image: haproxy:lts-alpine
    container_name: haproxy
    restart: unless-stopped
    ports:
      - "${HAPROXY_HTTP_PORT:-80}:80"
      - "${HAPROXY_STATS_PORT:-8404}:8404"
    volumes:
      - ./haproxy.cfg:/usr/local/etc/haproxy/haproxy.cfg:ro

  web-1:
    image: nginx:alpine
    container_name: web-1
    restart: unless-stopped
    volumes:
      - ./html-1:/usr/share/nginx/html:ro

  web-2:
    image: nginx:alpine
    container_name: web-2
    restart: unless-stopped
    volumes:
      - ./html-2:/usr/share/nginx/html:ro

  web-3:
    image: nginx:alpine
    container_name: web-3
    restart: unless-stopped
    volumes:
      - ./html-3:/usr/share/nginx/html:ro
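The compose file above mounts ./haproxy.cfg, which is not reproduced on this page. A minimal sketch that matches the three web-* services, exposes the stats dashboard on port 8404 at /stats, and enables HTTP health checks could look like the following; the timeout values and log settings are assumptions:
haproxy.cfg
global
    log stdout format raw local0

defaults
    mode http
    log global
    option httplog
    timeout connect 5s
    timeout client 30s
    timeout server 30s

frontend fe_main
    bind *:80
    default_backend be_web

backend be_web
    balance roundrobin
    # Layer 7 health check against each NGINX backend
    option httpchk GET /
    server web-1 web-1:80 check
    server web-2 web-2:80 check
    server web-3 web-3:80 check

frontend stats
    bind *:8404
    stats enable
    stats uri /stats
    stats refresh 10s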
.env Template
.env
# HAProxy
HAPROXY_HTTP_PORT=80
HAPROXY_STATS_PORT=8404
Usage Notes
- HAProxy at http://localhost
- Stats dashboard at http://localhost:8404/stats
- Backends are configured in haproxy.cfg
- Health checks included; verify distribution with the commands below
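Once the stack is running, the distribution and backend health can be checked from the host. This assumes the html-1/2/3 index pages differ so responses are distinguishable, and that the stats URI is /stats as noted above:
terminal
# Send ten requests through the load balancer; with round-robin the
# responses should rotate across web-1, web-2, and web-3
for i in $(seq 1 10); do curl -s http://localhost/; done

# Dump backend status from the statistics endpoint as CSV
# (fields 1, 2, and 18 are proxy name, server name, and status)
curl -s "http://localhost:8404/stats;csv" | cut -d',' -f1,2,18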
Individual Services (4 services)
Copy individual services to mix and match with your existing compose files.
haproxy
haproxy:
  image: haproxy:lts-alpine
  container_name: haproxy
  restart: unless-stopped
  ports:
    - "${HAPROXY_HTTP_PORT:-80}:80"
    - "${HAPROXY_STATS_PORT:-8404}:8404"
  volumes:
    - ./haproxy.cfg:/usr/local/etc/haproxy/haproxy.cfg:ro
web-1
web-1:
  image: nginx:alpine
  container_name: web-1
  restart: unless-stopped
  volumes:
    - ./html-1:/usr/share/nginx/html:ro
web-2
web-2:
  image: nginx:alpine
  container_name: web-2
  restart: unless-stopped
  volumes:
    - ./html-2:/usr/share/nginx/html:ro
web-3
web-3:
  image: nginx:alpine
  container_name: web-3
  restart: unless-stopped
  volumes:
    - ./html-3:/usr/share/nginx/html:ro
Quick Start
terminal
# 1. Create the compose file
cat > docker-compose.yml << 'EOF'
services:
  haproxy:
    image: haproxy:lts-alpine
    container_name: haproxy
    restart: unless-stopped
    ports:
      - "${HAPROXY_HTTP_PORT:-80}:80"
      - "${HAPROXY_STATS_PORT:-8404}:8404"
    volumes:
      - ./haproxy.cfg:/usr/local/etc/haproxy/haproxy.cfg:ro

  web-1:
    image: nginx:alpine
    container_name: web-1
    restart: unless-stopped
    volumes:
      - ./html-1:/usr/share/nginx/html:ro

  web-2:
    image: nginx:alpine
    container_name: web-2
    restart: unless-stopped
    volumes:
      - ./html-2:/usr/share/nginx/html:ro

  web-3:
    image: nginx:alpine
    container_name: web-3
    restart: unless-stopped
    volumes:
      - ./html-3:/usr/share/nginx/html:ro
EOF

# 2. Create the .env file
cat > .env << 'EOF'
# HAProxy
HAPROXY_HTTP_PORT=80
HAPROXY_STATS_PORT=8404
EOF

# 3. Start the services
docker compose up -d

# 4. View logs
docker compose logs -f
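The quick-start script only writes docker-compose.yml and .env; the mounted haproxy.cfg and the three html-* directories referenced by the compose file also have to exist before step 3. One way to create them (the page content is arbitrary):
terminal
# Create per-backend content so each server's response is distinguishable
for i in 1 2 3; do
  mkdir -p html-$i
  echo "<h1>Hello from web-$i</h1>" > html-$i/index.html
done

# Create haproxy.cfg (see the configuration sketch earlier on this page),
# then validate it with the same image the stack uses
docker run --rm -v "$PWD/haproxy.cfg:/usr/local/etc/haproxy/haproxy.cfg:ro" \
  haproxy:lts-alpine haproxy -c -f /usr/local/etc/haproxy/haproxy.cfg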
One-Liner
Run this command to download and set up the recipe in one step:
terminal
curl -fsSL https://docker.recipes/api/recipes/haproxy-loadbalancer-stack/run | bash
Troubleshooting
- 503 Service Unavailable errors: Check that the web-1, web-2, and web-3 containers are running and healthy using docker ps (see the diagnostic commands after this list)
- HAProxy fails to start with configuration errors: Validate haproxy.cfg syntax using haproxy -c -f haproxy.cfg before container restart
- Backend servers marked as DOWN in stats: Verify health check URLs are accessible and NGINX containers are responding on port 80
- Statistics page not accessible: Confirm HAPROXY_STATS_PORT environment variable is set correctly and port 8404 is not blocked
- Uneven traffic distribution: Review load balancing algorithm in haproxy.cfg and check if session persistence is affecting distribution
- Connection timeouts to backends: Adjust timeout values in HAProxy configuration and verify network connectivity between containers
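A few diagnostic commands covering the cases above; the container names match this recipe, and the stats path assumes the /stats URI shown earlier:
terminal
# Confirm the load balancer and all three backends are running
docker ps --filter "name=haproxy" --filter "name=web-"

# Validate the mounted configuration inside the running container
docker compose exec haproxy haproxy -c -f /usr/local/etc/haproxy/haproxy.cfg

# Follow HAProxy logs to catch health-check UP/DOWN transitions
docker compose logs -f haproxy

# Query a backend directly from inside the HAProxy container
# (busybox wget ships with the alpine-based image)
docker compose exec haproxy wget -qO- http://web-1:80/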