
NGINX as a Load Balancer Cheatsheet
🔁 1. Round-Robin Load Balancing
upstream app_servers {
    server 127.0.0.1:3001;
    server 127.0.0.1:3002;
    server 127.0.0.1:3003;
}

server {
    listen 80;

    location / {
        proxy_pass http://app_servers;
    }
}
Default behavior: NGINX cycles through the servers in order, sending each new request to the next server in the list.
Simple and fast; ideal when all backends have roughly equal capacity.
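For backends with unequal capacity, the default round-robin can be weighted. A minimal sketch (the weight value is illustrative):

```nginx
upstream app_servers {
    # 3001 receives roughly 3x the requests of each other server
    server 127.0.0.1:3001 weight=3;
    server 127.0.0.1:3002;
    server 127.0.0.1:3003;
}
```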
📉 2. Least Connections Method
upstream app_servers {
    least_conn;
    server 127.0.0.1:3001;
    server 127.0.0.1:3002;
    server 127.0.0.1:3003;
}
Ideal when backend servers have varying response times or workloads.
Reduces overload on busy servers.
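least_conn can also be combined with server weights (weighted least connections). A sketch, with an illustrative weight:

```nginx
upstream app_servers {
    least_conn;
    # with equal connection counts, the heavier server is preferred
    server 127.0.0.1:3001 weight=2;
    server 127.0.0.1:3002;
}
```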
📍 3. IP Hashing for Sticky Sessions
upstream app_servers {
    ip_hash;
    server 127.0.0.1:3001;
    server 127.0.0.1:3002;
    server 127.0.0.1:3003;
}
Hashes the client IP address so each client is consistently routed to the same backend.
Ensures session consistency (e.g., login sessions).
Not reliable when clients sit behind a proxy or CDN unless the real client IP is restored with ngx_http_realip_module.
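When NGINX is behind another proxy, the real client IP can be restored before hashing. A sketch, assuming the upstream proxy lives in 10.0.0.0/8 and sets X-Forwarded-For (the address range is illustrative):

```nginx
# requires ngx_http_realip_module (included in most standard builds)
set_real_ip_from 10.0.0.0/8;       # trust addresses of the fronting proxy
real_ip_header X-Forwarded-For;    # take the client IP from this header
real_ip_recursive on;              # skip trusted addresses in the chain
```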
🧠 4. Custom Sticky Sessions via Cookie
map $cookie_user_id $sticky_backend {
    default backend1;
    "u123"  backend2;
}

upstream backend1 {
    server 127.0.0.1:3001;
}

upstream backend2 {
    server 127.0.0.1:3002;
}

server {
    listen 80;

    location / {
        proxy_pass http://$sticky_backend;
    }
}
Gives more fine-grained control than ip_hash.
Custom routing logic can be based on a user ID, token, or any other cookie value.
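The map above assumes the client already sends a user_id cookie; typically the backend issues it, but NGINX can set a fallback itself. A sketch (the cookie name and value are illustrative):

```nginx
server {
    listen 80;

    location / {
        proxy_pass http://$sticky_backend;
        # issue the cookie if the client does not have one yet
        if ($cookie_user_id = "") {
            add_header Set-Cookie "user_id=u123; Path=/";
        }
    }
}
```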
🏥 5. Basic Health Checks (NGINX Plus or OpenResty)
NGINX OSS (passive):
upstream app_servers {
    server 127.0.0.1:3001 max_fails=3 fail_timeout=30s;
    server 127.0.0.1:3002 max_fails=3 fail_timeout=30s;
    server 127.0.0.1:3003 max_fails=3 fail_timeout=30s;
}
- Passive only: a server is marked unavailable after max_fails failed attempts within fail_timeout, then retried once fail_timeout expires.
NGINX Plus (active):
upstream app_servers {
    zone backends 64k;
    server 127.0.0.1:3001;
    server 127.0.0.1:3002;
}

server {
    location / {
        proxy_pass http://app_servers;
        health_check interval=5 fails=3 passes=2;
    }
}
Requires NGINX Plus. The health_check directive (placed in the proxying location, not the upstream block) actively probes backends on the configured interval.
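NGINX Plus health checks can also validate the response content, not just connectivity, via a match block. A sketch (the URI, status range, and body pattern are illustrative):

```nginx
# define what a healthy response looks like
match server_ok {
    status 200-399;
    body !~ "maintenance";
}

server {
    location / {
        proxy_pass http://app_servers;
        health_check uri=/healthz match=server_ok;
    }
}
```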
🚫 6. Marking Servers as Backup
upstream app_servers {
    server 127.0.0.1:3001;
    server 127.0.0.1:3002;
    server 127.0.0.1:3003 backup;
}
The third server is only used if the others fail.
Useful for fault tolerance.
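Relatedly, a server can be taken out of rotation for maintenance with the down parameter; it stays listed, which keeps ip_hash assignments for the other servers stable. A sketch:

```nginx
upstream app_servers {
    server 127.0.0.1:3001;
    server 127.0.0.1:3002 down;    # temporarily removed from rotation
    server 127.0.0.1:3003 backup;
}
```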
📦 7. Load Balancing Multiple Applications
upstream app1_servers {
    server 127.0.0.1:3001;
    server 127.0.0.1:3002;
}

upstream app2_servers {
    server 127.0.0.1:4001;
    server 127.0.0.1:4002;
}

server {
    location /app1/ {
        proxy_pass http://app1_servers;
    }

    location /app2/ {
        proxy_pass http://app2_servers;
    }
}
Isolate backend groups per application.
Clean, scalable architecture.
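One gotcha: proxy_pass without a URI forwards the full path (/app1/...) to the backend. If the apps expect to be served from /, a trailing slash strips the location prefix. A sketch:

```nginx
location /app1/ {
    # /app1/users is forwarded as http://app1_servers/users
    proxy_pass http://app1_servers/;
}
```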
⏱ 8. Timeout and Retry Controls
proxy_connect_timeout 5s;
proxy_send_timeout 10s;
proxy_read_timeout 10s;
proxy_next_upstream error timeout http_500;
proxy_next_upstream: retry the request on the next server when the current one errors, times out, or returns HTTP 500.
Improves reliability during intermittent failures.
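Retries can be bounded so a failing request does not cascade across every backend. A sketch (the values are illustrative):

```nginx
proxy_next_upstream error timeout http_500;
proxy_next_upstream_tries 2;        # try at most 2 servers per request
proxy_next_upstream_timeout 10s;    # stop retrying after 10s total
```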
🔍 9. Logging Which Backend Was Used
log_format upstreamlog '$remote_addr to $upstream_addr via $request';
access_log /var/log/nginx/upstream.log upstreamlog;
Helps with debugging and traffic analysis.
$upstream_addr shows which backend served each request. (log_format must be defined in the http context.)
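The format can be extended with timing and status variables for deeper analysis. A sketch (the format name and log path are illustrative):

```nginx
log_format upstream_detail '$remote_addr -> $upstream_addr '
                           '[$time_local] "$request" '
                           'status=$upstream_status '
                           'rt=$request_time urt=$upstream_response_time';
access_log /var/log/nginx/upstream_detail.log upstream_detail;
```

$upstream_response_time isolates backend latency from total request time, which helps spot a single slow server in the pool.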
📊 10. Load Testing the Balancer
ab -n 1000 -c 100 http://localhost/
wrk -t4 -c100 -d30s http://localhost/
Validate that the load balancer distributes traffic and handles concurrency properly.
Monitor CPU, memory, and response times on both NGINX and the backends during the test.