Load Balancer Configuration for High Availability

Load balancers are critical infrastructure components that distribute traffic across multiple servers, ensuring high availability, fault tolerance, and optimal performance. A properly configured load balancer can handle millions of requests per day while providing seamless failover and zero-downtime deployments. This comprehensive guide explores load balancer architectures, configuration strategies, and best practices for production environments.


Understanding Load Balancing

Load balancing distributes incoming network traffic across multiple backend servers to ensure no single server becomes overwhelmed. This improves application responsiveness, availability, and scalability[1].

Load Balancing Benefits

  • High Availability: Automatic failover when servers fail
  • Scalability: Add/remove servers without downtime
  • Performance: Distribute load for optimal resource utilization
  • Flexibility: Rolling updates and maintenance without service disruption
  • SSL Termination: Offload encryption from backend servers

Load Balancing Algorithms

Algorithm             Description                               Use Case                   Session Persistence
Round Robin           Distributes requests sequentially         Homogeneous servers        No
Least Connections     Sends to server with fewest connections   Varying request durations  No
IP Hash               Hashes client IP to a specific server     Session persistence        Yes
Weighted Round Robin  Distributes based on server capacity      Heterogeneous servers      No
Least Response Time   Sends to fastest-responding server        Performance-critical apps  No
URL Hash              Hashes URL to a specific server           Cache efficiency           No
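The first three algorithms in the table can be sketched in a few lines of Python. This is an illustration of the selection logic only (the server names and connection counts are made up), not any load balancer's actual implementation:

```python
from itertools import cycle

servers = ["web1", "web2", "web3"]

# Round robin: cycle through the servers in order
rr = cycle(servers)
rr_picks = [next(rr) for _ in range(4)]  # wraps back to web1 on the 4th pick

# Least connections: pick the server with the fewest active connections
active = {"web1": 12, "web2": 3, "web3": 7}
lc_pick = min(active, key=active.get)

# Weighted round robin: each server appears in the rotation
# proportionally to its weight
weights = {"web1": 3, "web2": 1}
wrr_pool = [s for s, w in weights.items() for _ in range(w)]
# web1 receives three requests for every one that web2 receives
```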

HAProxy Configuration

HAProxy is a widely deployed open-source load balancer, known for its reliability and performance.

Basic HAProxy Setup

# Install HAProxy
apt-get update
apt-get install haproxy

# Enable HAProxy at boot
systemctl enable haproxy

Global Configuration

# /etc/haproxy/haproxy.cfg

global
    log /dev/log local0
    log /dev/log local1 notice
    chroot /var/lib/haproxy
    stats socket /run/haproxy/admin.sock mode 660 level admin
    stats timeout 30s
    user haproxy
    group haproxy
    daemon
    
    # Default SSL material locations
    ca-base /etc/ssl/certs
    crt-base /etc/ssl/private
    
    # Modern SSL configuration
    ssl-default-bind-ciphers ECDHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384
    ssl-default-bind-options ssl-min-ver TLSv1.2 no-tls-tickets
    
    # Performance tuning
    maxconn 50000
    tune.ssl.default-dh-param 2048
    nbthread 4  # Worker threads (nbproc was removed in HAProxy 2.5)
    cpu-map auto:1/1-4 0-3  # Pin threads to CPU cores

defaults
    log     global
    mode    http
    option  httplog
    option  dontlognull
    timeout connect 5000
    timeout client  50000
    timeout server  50000
    errorfile 400 /etc/haproxy/errors/400.http
    errorfile 403 /etc/haproxy/errors/403.http
    errorfile 408 /etc/haproxy/errors/408.http
    errorfile 500 /etc/haproxy/errors/500.http
    errorfile 502 /etc/haproxy/errors/502.http
    errorfile 503 /etc/haproxy/errors/503.http
    errorfile 504 /etc/haproxy/errors/504.http

Frontend Configuration

frontend http_front
    bind *:80
    bind *:443 ssl crt /etc/haproxy/certs/example.com.pem
    
    # Redirect HTTP to HTTPS
    http-request redirect scheme https unless { ssl_fc }
    
    # ACLs for routing
    acl is_api path_beg /api
    acl is_admin path_beg /admin
    acl is_static path_end .jpg .png .gif .css .js
    
    # Security headers
    http-response set-header X-Frame-Options SAMEORIGIN
    http-response set-header X-Content-Type-Options nosniff
    http-response set-header X-XSS-Protection "1; mode=block"
    
    # Use backend based on ACLs
    use_backend api_servers if is_api
    use_backend admin_servers if is_admin
    use_backend static_servers if is_static
    default_backend web_servers
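The ACL routing above is just a prefix/suffix match from request path to backend pool. A minimal Python sketch of the same decision logic (the backend names mirror the config; the function itself is illustrative, not part of HAProxy):

```python
def pick_backend(path: str) -> str:
    """Mirror the frontend ACLs: path_beg for /api and /admin,
    path_end for static assets, default_backend otherwise."""
    if path.startswith("/api"):
        return "api_servers"
    if path.startswith("/admin"):
        return "admin_servers"
    if path.endswith((".jpg", ".png", ".gif", ".css", ".js")):
        return "static_servers"
    return "web_servers"
```

Note that, as in HAProxy, rule order matters: `/api/style.css` matches the `/api` prefix rule before the static-suffix rule is ever consulted.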

Backend Configuration

backend web_servers
    balance roundrobin
    option httpchk GET /health
    http-check expect status 200
    
    # Health check defaults (must precede the server lines they apply to;
    # default-server only affects servers defined after it)
    default-server inter 3s fall 3 rise 2
    
    # Server definitions
    server web1 10.0.1.10:80 check weight 100 maxconn 500
    server web2 10.0.1.11:80 check weight 100 maxconn 500
    server web3 10.0.1.12:80 check weight 100 maxconn 500 backup
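The `inter 3s fall 3 rise 2` semantics mean a server is marked DOWN after 3 consecutive failed checks and brought back UP after 2 consecutive successes. That transition logic can be modeled as a small state machine (a simulation of the behavior, not HAProxy's code):

```python
class HealthTracker:
    """Simulate fall/rise health-check transitions."""

    def __init__(self, fall: int = 3, rise: int = 2):
        self.fall, self.rise = fall, rise
        self.up = True
        self.streak = 0  # consecutive results counted toward a transition

    def record(self, ok: bool) -> bool:
        """Record one check result; return whether the server is UP."""
        if ok == self.up:
            self.streak = 0  # a result confirming current state resets the streak
        else:
            self.streak += 1
            if self.up and self.streak >= self.fall:
                self.up, self.streak = False, 0   # mark DOWN
            elif not self.up and self.streak >= self.rise:
                self.up, self.streak = True, 0    # mark UP
        return self.up

t = HealthTracker(fall=3, rise=2)
history = [t.record(ok) for ok in [False, False, False, True, True]]
# three failures mark the server DOWN; two successes bring it back UP
```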

backend api_servers
    balance leastconn
    option httpchk GET /api/health
    http-check expect status 200
    
    # Cookie-based session persistence
    cookie SERVERID insert indirect nocache
    
    server api1 10.0.2.10:8080 check cookie api1
    server api2 10.0.2.11:8080 check cookie api2
    server api3 10.0.2.12:8080 check cookie api3

backend static_servers
    balance uri
    hash-type consistent
    option httpchk HEAD /
    
    server static1 10.0.3.10:80 check
    server static2 10.0.3.11:80 check
    
    # Caching headers for static content
    http-response set-header Cache-Control "public, max-age=31536000"

backend admin_servers
    balance source  # IP hash for session persistence
    option httpchk GET /admin/health
    
    # Restrict access by IP
    acl admin_ips src 203.0.113.0/24 198.51.100.0/24
    http-request deny unless admin_ips
    
    server admin1 10.0.4.10:80 check
    server admin2 10.0.4.11:80 check


Statistics and Monitoring

listen stats
    bind *:8404
    stats enable
    stats uri /
    stats refresh 30s
    stats auth admin:SecurePassword123
    stats admin if TRUE
    
    # Show backend server status
    stats show-legends
    stats show-desc HAProxy Statistics

Nginx as Load Balancer

Nginx provides excellent load balancing with simpler configuration than HAProxy.

Nginx Load Balancer Configuration

# /etc/nginx/nginx.conf

http {
    # Upstream backend servers
    upstream web_backend {
        least_conn;
        
        server 10.0.1.10:80 weight=3 max_fails=3 fail_timeout=30s;
        server 10.0.1.11:80 weight=2 max_fails=3 fail_timeout=30s;
        server 10.0.1.12:80 weight=1 backup;
        
        # Keepalive connections to backends
        keepalive 32;
        keepalive_timeout 60s;
        keepalive_requests 100;
    }
    
    upstream api_backend {
        # IP hash for session persistence
        ip_hash;
        
        server 10.0.2.10:8080 max_fails=2 fail_timeout=10s;
        server 10.0.2.11:8080 max_fails=2 fail_timeout=10s;
        server 10.0.2.12:8080 max_fails=2 fail_timeout=10s;
        
        keepalive 16;
    }
    
    # Health check (requires nginx-plus or third-party module)
    # For open source, use passive health checks via max_fails
    
    server {
        listen 80;
        listen 443 ssl http2;
        
        server_name example.com;
        
        ssl_certificate /etc/ssl/certs/example.com.crt;
        ssl_certificate_key /etc/ssl/private/example.com.key;
        
        # Security headers
        add_header X-Frame-Options "SAMEORIGIN" always;
        add_header X-Content-Type-Options "nosniff" always;
        add_header X-XSS-Protection "1; mode=block" always;
        
        location / {
            proxy_pass http://web_backend;
            proxy_http_version 1.1;
            proxy_set_header Connection "";
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
            
            # Timeouts
            proxy_connect_timeout 5s;
            proxy_send_timeout 60s;
            proxy_read_timeout 60s;
            
            # Buffering
            proxy_buffering on;
            proxy_buffer_size 4k;
            proxy_buffers 8 4k;
        }
        
        location /api/ {
            proxy_pass http://api_backend;
            proxy_http_version 1.1;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            
            # No buffering for streaming APIs
            proxy_buffering off;
            
            # WebSocket support (don't also set Connection ""; the two
            # directives conflict -- in production, set Connection via a
            # map on $http_upgrade so plain requests aren't affected)
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection "upgrade";
        }
        
        # Health check endpoint
        location /health {
            access_log off;
            default_type text/plain;  # add_header Content-Type would add a duplicate header
            return 200 "healthy\n";
        }
    }
}

Active Health Checks

HAProxy Health Checks

backend web_servers
    # Legacy syntax (works on all versions): embed the Host header in the request line
    option httpchk GET /health HTTP/1.1\r\nHost:\ example.com
    http-check expect status 200
    
    # Modern syntax (HAProxy 2.2+) -- use this instead of the lines above,
    # not in addition to them: a later "option httpchk" overrides an earlier one
    # option httpchk
    # http-check send meth GET uri /health ver HTTP/1.1 hdr Host example.com
    # http-check expect status 200
    # http-check expect rstring ^healthy$
    
    server web1 10.0.1.10:80 check inter 5s fall 3 rise 2

Custom Health Check Script

#!/bin/bash
# /usr/local/bin/backend-health-check.sh

SERVER="$1"
PORT="$2"

# Check HTTP response
response=$(curl -s -o /dev/null -w "%{http_code}" "http://$SERVER:$PORT/health" --max-time 5)

if [ "$response" = "200" ]; then
    # Additional check: reject servers under heavy load (requires bc)
    load=$(ssh "$SERVER" "awk '{print \$1}' /proc/loadavg")
    if (( $(echo "$load < 10.0" | bc -l) )); then
        echo "healthy"
        exit 0
    fi
fi

echo "unhealthy"
exit 1

Session Persistence

Cookie-Based Persistence (HAProxy)

backend web_servers
    balance roundrobin
    
    # Insert cookie
    cookie SERVERID insert indirect nocache httponly secure
    
    server web1 10.0.1.10:80 check cookie web1
    server web2 10.0.1.11:80 check cookie web2

IP Hash Persistence (Nginx)

upstream backend {
    ip_hash;
    server 10.0.1.10:80;
    server 10.0.1.11:80;
}

Sticky Sessions with Consistent Hashing

backend api_servers
    balance uri
    hash-type consistent
    
    # Hash based on URI path
    server api1 10.0.2.10:8080 check
    server api2 10.0.2.11:8080 check
    server api3 10.0.2.12:8080 check
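The advantage of `hash-type consistent` over a plain modulo hash is what happens when a server leaves the pool: with modulo hashing most keys get remapped, while on a hash ring only the departed server's share of keys moves. A sketch using a ring with virtual nodes (illustrative only; neither HAProxy's nor nginx's exact algorithm):

```python
import hashlib
from bisect import bisect

def h(key: str) -> int:
    return int(hashlib.md5(key.encode()).hexdigest(), 16)

def build_ring(servers, vnodes=100):
    """Place vnodes points per server on a sorted hash ring."""
    return sorted((h(f"{s}#{i}"), s) for s in servers for i in range(vnodes))

def lookup(ring, key):
    """A key is owned by the first ring point at or after its hash (with wraparound)."""
    hashes = [point for point, _ in ring]
    idx = bisect(hashes, h(key)) % len(ring)
    return ring[idx][1]

r3 = build_ring(["api1", "api2", "api3"])
r2 = build_ring(["api1", "api2"])  # api3 removed from the pool

keys = [f"/item/{i}" for i in range(1000)]
moved = sum(lookup(r3, k) != lookup(r2, k) for k in keys)
# Only keys that hashed to api3 move (roughly a third); with modulo
# hashing, removing one of three servers would remap about two thirds.
```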

SSL/TLS Termination

HAProxy SSL Configuration

frontend https_front
    bind *:443 ssl crt /etc/haproxy/certs/example.com.pem alpn h2,http/1.1 ciphers ECDHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384
    
    # HSTS header (HTTP-to-HTTPS redirects belong in the :80 frontend;
    # ssl_fc is always true here, so a redirect rule would never fire)
    http-response set-header Strict-Transport-Security "max-age=31536000; includeSubDomains; preload"
    
    # Note: ssl-default-bind-ciphers is a global-section directive and is
    # not valid in a frontend; per-listener ciphers go on the bind line as above
    
    default_backend web_servers

SSL Certificate Management

# Combine certificate and private key for HAProxy
cat /etc/letsencrypt/live/example.com/fullchain.pem \
    /etc/letsencrypt/live/example.com/privkey.pem \
    > /etc/haproxy/certs/example.com.pem

# Set proper permissions
chmod 600 /etc/haproxy/certs/example.com.pem
chown haproxy:haproxy /etc/haproxy/certs/example.com.pem

# Reload HAProxy without downtime
systemctl reload haproxy

High Availability Setup

Active-Passive with Keepalived

# Install keepalived
apt-get install keepalived

# /etc/keepalived/keepalived.conf (Master)
vrrp_script check_haproxy {
    script "/usr/bin/killall -0 haproxy"
    interval 2
    weight 2
}

vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 51
    priority 101
    advert_int 1
    
    authentication {
        auth_type PASS
        auth_pass SecurePassword123
    }
    
    virtual_ipaddress {
        192.168.1.100/24
    }
    
    track_script {
        check_haproxy
    }
}

# /etc/keepalived/keepalived.conf (Backup)
vrrp_instance VI_1 {
    state BACKUP
    interface eth0
    virtual_router_id 51
    priority 100
    advert_int 1
    
    authentication {
        auth_type PASS
        auth_pass SecurePassword123
    }
    
    virtual_ipaddress {
        192.168.1.100/24
    }
    
    track_script {
        check_haproxy
    }
}
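The VRRP election above is priority-driven: the node with the highest effective priority holds the virtual IP, and the track script's `weight 2` bumps a node's effective priority only while HAProxy is alive. A toy model of that arithmetic (an illustration of the election rule, not keepalived's implementation):

```python
def vrrp_master(nodes):
    """nodes: name -> (base_priority, tracked_check_ok, track_weight).
    Effective priority = base + weight when the tracked check passes;
    the node with the highest effective priority holds the virtual IP."""
    def effective(name):
        base, ok, weight = nodes[name]
        return base + (weight if ok else 0)
    return max(nodes, key=effective)

cluster = {"lb1": (101, True, 2), "lb2": (100, True, 2)}
holder_before = vrrp_master(cluster)   # lb1: 103 vs 102, so lb1 holds the VIP
cluster["lb1"] = (101, False, 2)       # HAProxy dies on lb1: its check fails
holder_after = vrrp_master(cluster)    # now 101 vs 102: the VIP moves to lb2
```

This is why the master's base priority (101) exceeds the backup's (100) by less than the track weight (2): losing the tracked service must be enough to flip the election.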

Active-Active with DNS Round Robin

; DNS zone file: two A records for the same name
example.com.  IN  A  192.168.1.10  ; Load Balancer 1
example.com.  IN  A  192.168.1.11  ; Load Balancer 2

; Both load balancers stay active; resolvers rotate through the records

Rate Limiting and DDoS Protection

HAProxy Rate Limiting

frontend http_front
    # Track request rate and concurrent connections per client IP
    stick-table type ip size 100k expire 30s store http_req_rate(10s),conn_cur
    
    # Rate limit rules
    http-request track-sc0 src
    http-request deny if { sc_http_req_rate(0) gt 100 }
    
    # Connection limit per IP (conn_cur must be in the stick-table store list)
    acl too_many_connections src_conn_cur gt 20
    http-request deny if too_many_connections
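The `http_req_rate(10s)` counter is a request rate measured over a sliding window. The idea can be modeled per client IP in a few lines (a simulation of the concept, not HAProxy's stick-table internals):

```python
from collections import defaultdict, deque

class RateLimiter:
    """Deny a client once it exceeds `limit` requests per `window` seconds."""

    def __init__(self, limit: int = 100, window: float = 10.0):
        self.limit, self.window = limit, window
        self.hits = defaultdict(deque)  # ip -> timestamps of recent requests

    def allow(self, ip: str, now: float) -> bool:
        q = self.hits[ip]
        while q and now - q[0] >= self.window:
            q.popleft()              # drop requests that aged out of the window
        if len(q) >= self.limit:
            return False             # over the limit: deny the request
        q.append(now)
        return True

rl = RateLimiter(limit=5, window=10.0)
decisions = [rl.allow("203.0.113.7", float(t)) for t in range(8)]
# the first 5 requests are allowed; the next 3 fall in the same
# 10-second window and are denied
```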

Nginx Rate Limiting

# Define rate limit zones (http context)
limit_req_zone $binary_remote_addr zone=general:10m rate=10r/s;
limit_conn_zone $binary_remote_addr zone=conn_limit:10m;

server {
    location / {
        # Rate limiting: allow bursts of 20 above the 10 r/s base rate
        limit_req zone=general burst=20 nodelay;
        limit_conn conn_limit 10;
        
        # Bandwidth limit per connection (1 MB/s)
        limit_rate 1m;
        
        proxy_pass http://backend;
    }
}

Monitoring and Observability

Prometheus Integration

# Built-in Prometheus exporter (HAProxy 2.0+)
frontend stats
    bind *:8405
    http-request use-service prometheus-exporter if { path /metrics }
    stats enable
    stats uri /stats

# Alternative: standalone haproxy_exporter scraping the stats CSV
wget https://github.com/prometheus/haproxy_exporter/releases/download/v0.13.0/haproxy_exporter-0.13.0.linux-amd64.tar.gz
tar xvf haproxy_exporter-0.13.0.linux-amd64.tar.gz
./haproxy_exporter --haproxy.scrape-uri="http://admin:SecurePassword123@localhost:8404/;csv"

Logging Best Practices

global
    log /dev/log local0 info
    log /dev/log local1 notice

frontend http_front
    # Custom log format
    log-format "%ci:%cp [%tr] %ft %b/%s %TR/%Tw/%Tc/%Tr/%Ta %ST %B %CC %CS %tsc %ac/%fc/%bc/%sc/%rc %sq/%bq %hr %hs %{+Q}r"
    
    option httplog
    option logasap  # Log as soon as possible, without waiting for the transfer to finish

Performance Tuning

System-Level Optimization

# Increase file descriptor limits
echo "* soft nofile 65535" >> /etc/security/limits.conf
echo "* hard nofile 65535" >> /etc/security/limits.conf

# Kernel tuning for high concurrency
cat >> /etc/sysctl.conf << EOF
net.ipv4.tcp_max_syn_backlog = 8192
net.core.somaxconn = 4096
net.core.netdev_max_backlog = 5000
net.ipv4.tcp_fin_timeout = 15
net.ipv4.tcp_keepalive_time = 300
net.ipv4.tcp_tw_reuse = 1
EOF

sysctl -p

HAProxy Performance Tuning

global
    maxconn 50000
    tune.ssl.default-dh-param 2048
    tune.bufsize 32768
    tune.maxrewrite 1024
    nbthread 4
    
defaults
    maxconn 40000
    
backend web_servers
    server web1 10.0.1.10:80 check maxconn 5000

Zero-Downtime Deployments

Rolling Updates

#!/bin/bash
# rolling-update.sh

SERVERS=("web1" "web2" "web3")

for server in "${SERVERS[@]}"; do
    echo "Draining $server..."
    
    # Disable server via the admin socket configured in the global section
    echo "set server web_servers/$server state maint" | \
        socat stdio /run/haproxy/admin.sock
    
    # Wait for in-flight connections to drain
    sleep 30
    
    # Deploy new version
    ssh "$server" "systemctl restart myapp"
    
    # Wait up to 60 seconds for the health endpoint to come back
    for i in {1..30}; do
        if curl -fs "http://$server/health" > /dev/null 2>&1; then
            break
        fi
        sleep 2
    done
    
    # Re-enable server
    echo "set server web_servers/$server state ready" | \
        socat stdio /run/haproxy/admin.sock
    
    echo "$server updated and back online"
done

Conclusion

Load balancers are essential for modern high-availability architectures. Proper configuration delivers concrete benefits:

  • 99.99% uptime with redundant load balancers
  • Linear scalability by adding backend servers
  • Zero-downtime deployments with rolling updates
  • Improved performance through intelligent traffic distribution
  • Enhanced security with rate limiting and SSL termination

Implementation recommendations:

  • Use HAProxy for complex routing, fine-grained ACLs, and advanced features
  • Use Nginx when you also want web serving and caching alongside load balancing
  • Implement active health checks with appropriate intervals
  • Configure session persistence based on application requirements
  • Deploy redundant load balancers with keepalived or DNS
  • Monitor metrics with Prometheus and Grafana
  • Test failover scenarios regularly
  • Document runbooks for common operational tasks

Properly configured load balancers can handle millions of requests per day while providing seamless failover and optimal resource utilization. The investment in a robust load balancing architecture pays dividends in reliability and user experience.

References

[1] HAProxy Technologies. (2024). HAProxy Documentation. Available at: https://www.haproxy.org/documentation.html (Accessed: November 2025)

[2] NGINX, Inc. (2024). NGINX Load Balancing. Available at: https://docs.nginx.com/nginx/admin-guide/load-balancer/ (Accessed: November 2025)

[3] Tarreau, W. (2023). HAProxy: The Reliable, High Performance TCP/HTTP Load Balancer. Available at: https://www.haproxy.com/ (Accessed: November 2025)

[4] Shkuro, Y. (2019). Mastering Distributed Tracing. Packt Publishing. Available at: https://www.packtpub.com/product/mastering-distributed-tracing/9781788628464 (Accessed: November 2025)

Thank you for reading! If you have any feedback or comments, please send them to [email protected].