
HAProxy Load Balancer

intermediate

HAProxy for high-availability load balancing.

Overview

HAProxy is a battle-tested, high-performance TCP/HTTP load balancer and reverse proxy that has been the backbone of high-availability web infrastructure for over two decades. Known for its exceptional reliability, low latency, and robust feature set, HAProxy excels at distributing traffic across multiple backend servers while providing advanced health checking, SSL termination, and traffic shaping capabilities. Unlike application-focused reverse proxies, HAProxy is purpose-built for load balancing, with microsecond-precision performance monitoring and sophisticated routing algorithms.

This deployment creates a complete load balancing demonstration environment with HAProxy distributing traffic across three separate NGINX web servers (web-1, web-2, and web-3). Each NGINX instance serves as an independent backend with its own content directory, allowing you to see load balancing in action as requests are distributed using HAProxy's algorithms. The HAProxy service exposes both the main load-balanced application on port 80 and a comprehensive statistics dashboard on port 8404 for real-time monitoring of backend health and traffic distribution.

This configuration is ideal for DevOps engineers learning load balancing concepts, developers testing application behavior under load distribution, and system administrators prototyping high-availability web architectures. The setup provides hands-on experience with HAProxy's configuration syntax, backend health monitoring, and traffic routing, while demonstrating how multiple identical services can work together to provide redundancy and increased capacity.

Key Features

  • Layer 4 and Layer 7 load balancing with configurable algorithms (round-robin, least connections, source IP hashing)
  • Real-time health checks for backend servers with automatic failover and recovery
  • Comprehensive statistics dashboard with live traffic metrics, response times, and server status
  • Advanced traffic routing with ACLs for path-based, host-based, and header-based routing decisions (illustrated in the configuration sketch after this list)
  • Session persistence support through stick tables for maintaining client-server affinity
  • Built-in rate limiting and connection throttling to protect backend servers from overload
  • Zero-downtime configuration reloads for updating backend servers without service interruption
  • Detailed logging with customizable log formats for traffic analysis and debugging
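
Several of the features listed above (ACL-based routing, choice of balancing algorithm, and stick-table persistence) come together in a few lines of configuration. The fragment below is a minimal sketch rather than part of the shipped recipe: the /api path prefix and the api_servers backend are assumed purely for illustration.

haproxy.cfg (fragment)
frontend http_in
    bind *:80
    # Path-based routing: requests under /api go to a dedicated backend
    acl is_api path_beg /api
    use_backend api_servers if is_api
    default_backend web_servers

backend web_servers
    balance roundrobin
    # Stick table keyed by client source IP keeps a client on the same server
    stick-table type ip size 100k expire 30m
    stick on src
    server web-1 web-1:80 check
    server web-2 web-2:80 check
    server web-3 web-3:80 check

backend api_servers
    # Least-connections algorithm as an alternative to round-robin
    balance leastconn
    server web-3 web-3:80 check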

Common Use Cases

  • High-availability web application deployments requiring automatic failover between multiple servers
  • Performance testing environments to evaluate application behavior under distributed load
  • Blue-green deployment strategies where traffic can be gradually shifted between application versions
  • Multi-tenant applications requiring traffic routing based on domain names or URL paths
  • Database connection pooling and load distribution for read replicas in database clusters
  • API gateway implementations requiring sophisticated routing and rate limiting capabilities
  • Educational environments for learning load balancing concepts and HAProxy configuration management

Prerequisites

  • Minimum 512MB RAM available (HAProxy: 64MB, each NGINX instance: 128MB)
  • Basic understanding of HTTP protocols, load balancing concepts, and reverse proxy functionality
  • Familiarity with HAProxy configuration file syntax and backend server definitions
  • Network ports 80 and 8404 available on the host system for load balancer and statistics access
  • Understanding of health check mechanisms and how to interpret HAProxy statistics and logs

For development and testing only. Review security settings, change default credentials, and test thoroughly before production use.

docker-compose.yml

docker-compose.yml
services:
  haproxy:
    image: haproxy:lts-alpine
    container_name: haproxy
    restart: unless-stopped
    ports:
      - "${HAPROXY_HTTP_PORT:-80}:80"
      - "${HAPROXY_STATS_PORT:-8404}:8404"
    volumes:
      - ./haproxy.cfg:/usr/local/etc/haproxy/haproxy.cfg:ro

  web-1:
    image: nginx:alpine
    container_name: web-1
    restart: unless-stopped
    volumes:
      - ./html-1:/usr/share/nginx/html:ro

  web-2:
    image: nginx:alpine
    container_name: web-2
    restart: unless-stopped
    volumes:
      - ./html-2:/usr/share/nginx/html:ro

  web-3:
    image: nginx:alpine
    container_name: web-3
    restart: unless-stopped
    volumes:
      - ./html-3:/usr/share/nginx/html:ro

.env Template

.env
# HAProxy
HAPROXY_HTTP_PORT=80
HAPROXY_STATS_PORT=8404
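
haproxy.cfg Example

The compose file mounts ./haproxy.cfg from the project directory, but its contents are not included in the recipe. The configuration below is a minimal working sketch under that assumption: it round-robins HTTP traffic across web-1, web-2, and web-3 (resolved through Docker's internal DNS), health-checks each backend, and serves the statistics dashboard on port 8404 at /stats.

haproxy.cfg
global
    log stdout format raw local0

defaults
    mode http
    log global
    option httplog
    timeout connect 5s
    timeout client 30s
    timeout server 30s

frontend http_in
    bind *:80
    default_backend web_servers

backend web_servers
    balance roundrobin
    option httpchk GET /
    server web-1 web-1:80 check
    server web-2 web-2:80 check
    server web-3 web-3:80 check

frontend stats
    bind *:8404
    stats enable
    stats uri /stats
    stats refresh 10s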

Usage Notes

  1. HAProxy serves the load-balanced application at http://localhost
  2. The statistics dashboard is available at http://localhost:8404/stats
  3. Edit haproxy.cfg to define backends and the balancing algorithm (see the example above)
  4. Health checks are enabled via the check keyword on each server line; unhealthy backends are removed from rotation automatically

Individual Services (4 services)

Copy individual services to mix and match with your existing compose files.

haproxy
haproxy:
  image: haproxy:lts-alpine
  container_name: haproxy
  restart: unless-stopped
  ports:
    - ${HAPROXY_HTTP_PORT:-80}:80
    - ${HAPROXY_STATS_PORT:-8404}:8404
  volumes:
    - ./haproxy.cfg:/usr/local/etc/haproxy/haproxy.cfg:ro
web-1
web-1:
  image: nginx:alpine
  container_name: web-1
  restart: unless-stopped
  volumes:
    - ./html-1:/usr/share/nginx/html:ro
web-2
web-2:
  image: nginx:alpine
  container_name: web-2
  restart: unless-stopped
  volumes:
    - ./html-2:/usr/share/nginx/html:ro
web-3
web-3:
  image: nginx:alpine
  container_name: web-3
  restart: unless-stopped
  volumes:
    - ./html-3:/usr/share/nginx/html:ro

Quick Start

terminal
# 1. Create the compose file
cat > docker-compose.yml << 'EOF'
services:
  haproxy:
    image: haproxy:lts-alpine
    container_name: haproxy
    restart: unless-stopped
    ports:
      - "${HAPROXY_HTTP_PORT:-80}:80"
      - "${HAPROXY_STATS_PORT:-8404}:8404"
    volumes:
      - ./haproxy.cfg:/usr/local/etc/haproxy/haproxy.cfg:ro

  web-1:
    image: nginx:alpine
    container_name: web-1
    restart: unless-stopped
    volumes:
      - ./html-1:/usr/share/nginx/html:ro

  web-2:
    image: nginx:alpine
    container_name: web-2
    restart: unless-stopped
    volumes:
      - ./html-2:/usr/share/nginx/html:ro

  web-3:
    image: nginx:alpine
    container_name: web-3
    restart: unless-stopped
    volumes:
      - ./html-3:/usr/share/nginx/html:ro
EOF

# 2. Create the .env file
cat > .env << 'EOF'
# HAProxy
HAPROXY_HTTP_PORT=80
HAPROXY_STATS_PORT=8404
EOF

# 3. Start the services
docker compose up -d

# 4. View logs
docker compose logs -f
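
The commands above create only docker-compose.yml and .env; the stack also expects a haproxy.cfg (see the example earlier on this page) and the three content directories mounted into the NGINX containers. A simple, assumed way to populate the backends so each response identifies its server, and then to watch the round-robin distribution, is:

terminal
# Give each backend a distinct index page so you can see which server answered
for i in 1 2 3; do
  mkdir -p html-$i
  echo "<h1>Response from web-$i</h1>" > html-$i/index.html
done

# Repeated requests should rotate across web-1, web-2, and web-3
for i in $(seq 1 6); do curl -s http://localhost/; done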

One-Liner

Run this command to download and set up the recipe in one step:

terminal
curl -fsSL https://docker.recipes/api/recipes/haproxy-loadbalancer-stack/run | bash

Troubleshooting

  • 503 Service Unavailable errors: Check that web-1, web-2, and web-3 containers are running and healthy using docker ps
  • HAProxy fails to start with configuration errors: Validate haproxy.cfg syntax with haproxy -c -f haproxy.cfg before restarting the container (see the command sketch after this list)
  • Backend servers marked as DOWN in stats: Verify health check URLs are accessible and NGINX containers are responding on port 80
  • Statistics page not accessible: Confirm HAPROXY_STATS_PORT environment variable is set correctly and port 8404 is not blocked
  • Uneven traffic distribution: Review load balancing algorithm in haproxy.cfg and check if session persistence is affecting distribution
  • Connection timeouts to backends: Adjust timeout values in HAProxy configuration and verify network connectivity between containers
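
If HAProxy is not installed on the host, the configuration can be validated with the same image the stack uses, and with the official image (which runs HAProxy in master-worker mode) the running container can be reloaded gracefully by sending it SIGHUP. These commands assume the default container name haproxy and a haproxy.cfg in the current directory:

terminal
# Validate haproxy.cfg syntax using the stack's own image (no local HAProxy install needed)
docker run --rm -v "$(pwd)/haproxy.cfg:/usr/local/etc/haproxy/haproxy.cfg:ro" \
  haproxy:lts-alpine haproxy -c -f /usr/local/etc/haproxy/haproxy.cfg

# Gracefully reload the running load balancer after editing haproxy.cfg
docker kill -s HUP haproxy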


Download Recipe Kit

Get all files in a ready-to-deploy package

Includes docker-compose.yml, .env template, README, and license
