Nginx Complete Guide 2025: Reverse Proxy, Load Balancing, SSL, Caching & Security


1. What is Nginx

Nginx (pronounced "engine-x") is a high-performance web server originally developed by Igor Sysoev in 2004 to address the C10K problem. Today it holds the number one web server market share globally, serving not just as a web server but as a reverse proxy, load balancer, HTTP cache, and API gateway.

1.1 Nginx vs Apache

| Feature                | Nginx                                      | Apache                     |
|------------------------|--------------------------------------------|----------------------------|
| Architecture           | Event-driven (async)                       | Process/thread-based       |
| Concurrent Connections | Tens of thousands to hundreds of thousands | Thousands (depends on MPM) |
| Memory Usage           | Few KB per connection                      | Few MB per connection      |
| Static Files           | Very fast                                  | Fast                       |
| Dynamic Content        | Proxy (FastCGI/uWSGI)                      | Built-in modules (mod_php) |
| Configuration          | Centralized                                | Distributed (.htaccess)    |
| URL Rewriting          | location blocks                            | mod_rewrite                |
| Load Balancing         | Built-in                                   | Separate module needed     |
| Market Share           | ~34% (#1)                                  | ~29% (#2)                  |

1.2 Event-Driven Architecture

Apache (Process Model):
┌─────────────┐
│   Master    │
├─────────────┤
│ Worker 1    │ → Client A (1 process per connection)
│ Worker 2    │ → Client B
│ Worker 3    │ → Client C
│ ...         │
│ Worker 1000 │ → Client 1000
└─────────────┘
1000 connections = 1000 processes/threads = high memory usage

Nginx (Event Model):
┌─────────────┐
│   Master    │
├─────────────┤
│ Worker 1    │ → handles thousands of connections via epoll/kqueue
│ Worker 2    │ → non-blocking I/O
│ Worker 3    │ →
│ Worker 4    │ → one worker per CPU core
└─────────────┘
10000 connections = 4 workers = very low memory usage

Core Principles:

  • Master Process: Reads configuration, manages workers, handles logs
  • Worker Process: Handles actual requests (one per CPU core)
  • epoll/kqueue: OS-level event notification mechanisms
  • Non-blocking I/O: Processes other requests without waiting
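The non-blocking model above can be sketched in a few lines of Python: one "worker" registers many sockets with the OS event interface (`selectors` picks epoll on Linux, kqueue on BSD/macOS) and only services the sockets the kernel reports as readable. A toy illustration of the principle, not Nginx internals:

```python
import selectors
import socket

# One "worker" watches many sockets; the kernel (epoll/kqueue behind
# selectors.DefaultSelector) reports which ones are readable.
sel = selectors.DefaultSelector()

pairs = [socket.socketpair() for _ in range(3)]   # stand-ins for 3 clients
for client_end, server_end in pairs:
    server_end.setblocking(False)                 # non-blocking I/O
    sel.register(server_end, selectors.EVENT_READ)

pairs[1][0].send(b"GET /")                        # only client 1 sends data

ready = sel.select(timeout=1)                     # wake only for ready sockets
messages = [key.fileobj.recv(1024) for key, _ in ready]
print(messages)                                   # data from the one ready socket
```

The worker never blocks on an idle client; it sleeps in `select()` until any registered socket has work, which is why a handful of workers can multiplex thousands of connections.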

1.3 Installation

# Ubuntu/Debian
sudo apt update
sudo apt install nginx

# CentOS/RHEL
sudo yum install epel-release
sudo yum install nginx

# macOS
brew install nginx

# Docker
docker run -d -p 80:80 -p 443:443 \
  -v /path/to/nginx.conf:/etc/nginx/nginx.conf:ro \
  -v /path/to/certs:/etc/nginx/certs:ro \
  --name nginx nginx:alpine

# Status check
sudo systemctl status nginx
nginx -t  # Validate config syntax
nginx -V  # Show compile options

2. Configuration Structure

2.1 File Layout

/etc/nginx/
├── nginx.conf              # Main configuration
├── conf.d/                 # Additional configs (*.conf auto-loaded)
│   ├── default.conf
│   └── myapp.conf
├── sites-available/        # Available sites (Debian family)
│   └── mysite.conf
├── sites-enabled/          # Enabled sites (symlinks)
│   └── mysite.conf -> ../sites-available/mysite.conf
├── mime.types              # MIME type mappings
├── fastcgi_params          # FastCGI parameters
└── snippets/               # Reusable config fragments
    ├── ssl-params.conf
    └── proxy-params.conf

2.2 Configuration Block Hierarchy

# /etc/nginx/nginx.conf

# Global context
user nginx;
worker_processes auto;          # Auto-set to CPU core count
worker_rlimit_nofile 65535;     # Max file descriptors per worker
error_log /var/log/nginx/error.log warn;
pid /run/nginx.pid;

# Events context
events {
    worker_connections 10240;   # Max concurrent connections per worker
    multi_accept on;            # Accept multiple connections at once
    use epoll;                  # Linux: epoll, BSD: kqueue
}

# HTTP context
http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    # Log format
    log_format main '$remote_addr - $remote_user [$time_local] '
                    '"$request" $status $body_bytes_sent '
                    '"$http_referer" "$http_user_agent" '
                    '$request_time $upstream_response_time';

    access_log /var/log/nginx/access.log main;

    # Performance settings
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;
    client_max_body_size 50m;

    # Gzip compression
    gzip on;
    gzip_vary on;
    gzip_min_length 1024;
    gzip_comp_level 5;
    gzip_types text/plain text/css application/json
               application/javascript text/xml application/xml
               application/xml+rss text/javascript image/svg+xml;

    # Include server blocks
    include /etc/nginx/conf.d/*.conf;
}

2.3 Server and Location Blocks

server {
    listen 80;
    listen [::]:80;
    server_name example.com www.example.com;

    # 301 redirect (HTTP to HTTPS)
    return 301 https://$server_name$request_uri;
}

server {
    listen 443 ssl http2;
    listen [::]:443 ssl http2;
    server_name example.com www.example.com;

    # SSL configuration
    ssl_certificate /etc/nginx/certs/fullchain.pem;
    ssl_certificate_key /etc/nginx/certs/privkey.pem;

    # Root directory
    root /var/www/html;
    index index.html index.htm;

    # Location priority (highest to lowest)
    # 1. Exact match (=)
    location = /favicon.ico {
        log_not_found off;
        access_log off;
    }

    # 2. Preferential prefix (^~)
    location ^~ /static/ {
        alias /var/www/static/;
        expires 30d;
        add_header Cache-Control "public, immutable";
    }

    # 3. Regex (~, ~*) - case sensitive/insensitive
    location ~* \.(jpg|jpeg|png|gif|ico|css|js|woff2)$ {
        expires 7d;
        add_header Cache-Control "public";
    }

    # 4. Prefix match (none or /)
    location / {
        try_files $uri $uri/ /index.html;
    }

    # API proxy
    location /api/ {
        proxy_pass http://backend;
        include snippets/proxy-params.conf;
    }

    # Error pages
    error_page 404 /404.html;
    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root /usr/share/nginx/html;
    }
}

2.4 Location Matching Priority

Priority (highest to lowest):
1. = (exact match)            location = /path
2. ^~ (preferential prefix)   location ^~ /path
3. ~ (regex, case-sensitive)  location ~ \.php$
4. ~* (regex, case-insensitive) location ~* \.(jpg|png)$
5. /path (prefix match)       location /path
6. / (default)                location /
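As a sanity check on the priority rules, here is a simplified Python model of the selection order (a hypothetical helper; the real parser also handles nested and named locations): exact match wins outright, a `^~` longest prefix suppresses the regex scan, otherwise the first matching regex in file order wins, and the longest plain prefix is the fallback.

```python
import re

# (modifier, pattern) pairs mirroring the location blocks above
locations = [
    ("=",  "/favicon.ico"),
    ("^~", "/static/"),
    ("~*", r"\.(jpg|png|css|js)$"),
    ("",   "/api/"),
    ("",   "/"),
]

def match_location(uri):
    """Simplified sketch of nginx location selection (not the real parser)."""
    # 1. Exact match wins immediately.
    for mod, pat in locations:
        if mod == "=" and uri == pat:
            return (mod, pat)
    # 2. Find the longest matching prefix (both "" and "^~").
    prefixes = [(m, p) for m, p in locations if m in ("", "^~") and uri.startswith(p)]
    best = max(prefixes, key=lambda mp: len(mp[1]), default=None)
    # 3. A "^~" longest prefix suppresses the regex scan.
    if best and best[0] == "^~":
        return best
    # 4. Regexes are tried in file order; the first match wins.
    for mod, pat in locations:
        if mod in ("~", "~*"):
            flags = re.IGNORECASE if mod == "~*" else 0
            if re.search(pat, uri, flags):
                return (mod, pat)
    # 5. Otherwise fall back to the longest prefix match.
    return best

print(match_location("/favicon.ico"))    # ('=', '/favicon.ico')
print(match_location("/static/app.js"))  # ('^~', '/static/') — regex skipped
print(match_location("/img/logo.png"))   # ('~*', '\\.(jpg|png|css|js)$')
print(match_location("/api/users"))      # ('', '/api/')
```

Note the second case: `/static/app.js` matches the `~*` regex too, but the `^~` prefix wins, which is exactly why `^~` is used for static asset directories.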

3. Reverse Proxy Configuration

3.1 Basic Reverse Proxy

# snippets/proxy-params.conf
proxy_http_version 1.1;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header Connection "";

proxy_connect_timeout 60s;
proxy_send_timeout 60s;
proxy_read_timeout 60s;

proxy_buffering on;
proxy_buffer_size 4k;
proxy_buffers 8 4k;
proxy_busy_buffers_size 8k;

server {
    listen 443 ssl http2;
    server_name api.example.com;

    ssl_certificate /etc/nginx/certs/fullchain.pem;
    ssl_certificate_key /etc/nginx/certs/privkey.pem;

    location / {
        proxy_pass http://127.0.0.1:3000;
        include snippets/proxy-params.conf;
    }
}

3.2 How Reverse Proxy Works

Client                    Nginx (Reverse Proxy)              Backend
  │                              │                              │
  │  GET /api/users              │                              │
  │─────────────────────────────>│                              │
  │                              │  GET /api/users              │
  │                              │  Host: api.example.com       │
  │                              │  X-Real-IP: 203.0.113.1      │
  │                              │  X-Forwarded-For: 203.0.113.1│
  │                              │─────────────────────────────>│
  │                              │                              │
  │                              │  200 OK                      │
  │                              │<─────────────────────────────│
  │  200 OK                      │                              │
  │<─────────────────────────────│                              │
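On the backend side, the client address has to be recovered from these headers. Since `proxy_add_x_forwarded_for` appends `$remote_addr` to any existing `X-Forwarded-For`, the header is a left-to-right chain (client, proxy1, proxy2, ...). A hypothetical helper (the `trusted_proxies` set is an assumption for illustration) walks the chain from the right, skipping proxies we trust:

```python
def client_ip(headers, trusted_proxies=frozenset({"10.0.0.5"})):
    """Pick the real client IP from X-Forwarded-For (hypothetical helper)."""
    chain = [ip.strip()
             for ip in headers.get("X-Forwarded-For", "").split(",")
             if ip.strip()]
    # Walk from the right: the last untrusted hop is the client.
    for ip in reversed(chain):
        if ip not in trusted_proxies:
            return ip
    return headers.get("X-Real-IP")      # fall back to Nginx's X-Real-IP

hdrs = {"X-Forwarded-For": "203.0.113.1, 10.0.0.5",
        "X-Real-IP": "203.0.113.1"}
print(client_ip(hdrs))  # 203.0.113.1
```

Trusting the leftmost entry blindly is a classic spoofing bug, since clients can send their own `X-Forwarded-For`; only hops appended by proxies you control are trustworthy.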

3.3 Path Rewriting

# /api/v1/users -> forwarded to backend as /users
location /api/v1/ {
    rewrite ^/api/v1/(.*)$ /$1 break;
    proxy_pass http://backend;
    include snippets/proxy-params.conf;
}

# Or include URI in proxy_pass
location /api/v1/ {
    proxy_pass http://backend/;  # Note the trailing slash!
    include snippets/proxy-params.conf;
}

# Conditional redirect
location /old-page {
    return 301 /new-page;
}

# Regex-based rewrite
rewrite ^/blog/(\d{4})/(\d{2})/(.*)$ /posts/$3?year=$1&month=$2 permanent;
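Nginx rewrites use PCRE, so the blog pattern above can be dry-run with Python's `re` module (the syntax is compatible for this pattern) before touching the live config:

```python
import re

# Same pattern as the nginx rewrite:
# /blog/YYYY/MM/slug -> /posts/slug?year=YYYY&month=MM
pattern = re.compile(r"^/blog/(\d{4})/(\d{2})/(.*)$")
rewritten = pattern.sub(r"/posts/\3?year=\1&month=\2", "/blog/2024/05/hello-nginx")
print(rewritten)  # /posts/hello-nginx?year=2024&month=05
```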

4. Load Balancing

4.1 Upstream Configuration

# Default Round Robin
upstream backend {
    server 10.0.0.1:8080;
    server 10.0.0.2:8080;
    server 10.0.0.3:8080;
}

# Least Connections - route to server with fewest active connections
upstream backend_least {
    least_conn;
    server 10.0.0.1:8080;
    server 10.0.0.2:8080;
    server 10.0.0.3:8080;
}

# IP Hash - same client IP always goes to same server (session affinity)
upstream backend_iphash {
    ip_hash;
    server 10.0.0.1:8080;
    server 10.0.0.2:8080;
    server 10.0.0.3:8080;
}

# Weighted distribution
upstream backend_weighted {
    server 10.0.0.1:8080 weight=5;   # 50% traffic
    server 10.0.0.2:8080 weight=3;   # 30% traffic
    server 10.0.0.3:8080 weight=2;   # 20% traffic
}

# Advanced configuration
upstream backend_advanced {
    least_conn;
    server 10.0.0.1:8080 weight=3 max_fails=3 fail_timeout=30s;
    server 10.0.0.2:8080 weight=2 max_fails=3 fail_timeout=30s;
    server 10.0.0.3:8080 backup;      # Used only when others fail
    server 10.0.0.4:8080 down;        # Temporarily disabled

    keepalive 32;                      # Upstream connection pool
}
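For weighted upstreams, open-source Nginx uses a "smooth" weighted round-robin that interleaves picks instead of sending bursts to the heaviest server. A minimal sketch of the algorithm (no failure handling, hypothetical server names):

```python
from collections import Counter

class SmoothWRR:
    """Sketch of smooth weighted round-robin, as used by nginx for
    weighted upstreams (simplified: no max_fails/backup handling)."""
    def __init__(self, servers):                  # {"addr": weight}
        self.weights = dict(servers)
        self.current = {name: 0 for name in servers}
        self.total = sum(servers.values())

    def pick(self):
        for name, w in self.weights.items():
            self.current[name] += w               # credit every peer
        best = max(self.current, key=self.current.get)
        self.current[best] -= self.total          # charge the winner
        return best

lb = SmoothWRR({"10.0.0.1": 5, "10.0.0.2": 3, "10.0.0.3": 2})
picks = [lb.pick() for _ in range(10)]
print(Counter(picks))  # exactly a 5/3/2 split, interleaved rather than bursty
```

Over any 10 consecutive picks the split is exactly 5/3/2, matching the weight comments in the config above.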

4.2 Load Balancing Algorithm Comparison

| Algorithm         | Description                              | Pros                          | Cons                          | Best For               |
|-------------------|------------------------------------------|-------------------------------|-------------------------------|------------------------|
| Round Robin       | Sequential distribution (default)        | Simple, even distribution     | Ignores server capacity       | Identical servers      |
| Least Connections | Routes to server with fewest connections | Load-aware                    | Can overwhelm new servers     | Variable request times |
| IP Hash           | Client IP-based routing                  | Session persistence           | Potential uneven distribution | Session-based apps     |
| Weight            | Weight-based distribution                | Reflects capacity differences | Manual configuration          | Mixed server specs     |
| Random            | Random selection (Nginx Plus)            | Good for distributed setups   | Unpredictable                 | Large clusters         |

4.3 Health Checks

# Passive health check (OSS)
upstream backend {
    server 10.0.0.1:8080 max_fails=3 fail_timeout=30s;
    server 10.0.0.2:8080 max_fails=3 fail_timeout=30s;
}

5. SSL/TLS Configuration

5.1 Let's Encrypt + Certbot

# Install Certbot and obtain certificate
sudo apt install certbot python3-certbot-nginx
sudo certbot --nginx -d example.com -d www.example.com

# Verify auto-renewal
sudo certbot renew --dry-run

# Add auto-renewal to crontab
# 0 0,12 * * * certbot renew --quiet --post-hook "systemctl reload nginx"

5.2 Hardened SSL Configuration

# snippets/ssl-params.conf

# Protocols - only TLS 1.2 and 1.3
ssl_protocols TLSv1.2 TLSv1.3;

# Cipher suites
ssl_ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384;
ssl_prefer_server_ciphers on;

# DH parameters
ssl_dhparam /etc/nginx/certs/dhparam.pem;

# SSL session cache
ssl_session_cache shared:SSL:10m;
ssl_session_timeout 1d;
ssl_session_tickets off;

# OCSP Stapling
ssl_stapling on;
ssl_stapling_verify on;
ssl_trusted_certificate /etc/nginx/certs/chain.pem;
resolver 1.1.1.1 8.8.8.8 valid=300s;

# HSTS (Strict Transport Security)
add_header Strict-Transport-Security "max-age=63072000; includeSubDomains; preload" always;
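The protocol floor set by `ssl_protocols` can be mirrored client-side with Python's `ssl` module, for example inside a monitoring script that probes your own endpoint (sketch only; no connection is made here):

```python
import ssl

# A client context restricted to TLS 1.2+ — the same floor the
# ssl_protocols directive above enforces server-side.
ctx = ssl.create_default_context()
ctx.minimum_version = ssl.TLSVersion.TLSv1_2

print(ctx.minimum_version == ssl.TLSVersion.TLSv1_2)  # True
```

A handshake attempted through this context against a server still offering only TLS 1.0/1.1 would fail, which is exactly the behavior the hardened config intends.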

5.3 SSL Termination Pattern

Client (HTTPS)           Nginx (SSL Termination)         Backend (HTTP)
     │                          │                              │
     │  HTTPS (TLS 1.3)         │                              │
     │─────────────────────────>│                              │
     │                          │  HTTP (plain)                │
     │                          │─────────────────────────────>│
     │                          │                              │
     │                          │  HTTP response               │
     │                          │<─────────────────────────────│
     │  HTTPS response          │                              │
     │<─────────────────────────│                              │

server {
    listen 443 ssl http2;
    server_name example.com;

    ssl_certificate /etc/nginx/certs/fullchain.pem;
    ssl_certificate_key /etc/nginx/certs/privkey.pem;
    include snippets/ssl-params.conf;

    location / {
        proxy_pass http://backend;
        proxy_set_header X-Forwarded-Proto https;
        include snippets/proxy-params.conf;
    }
}

6. Caching (Proxy Cache)

6.1 Basic Cache Configuration

http {
    # Define cache zone
    proxy_cache_path /var/cache/nginx
        levels=1:2
        keys_zone=my_cache:10m      # 10MB memory (key storage)
        max_size=10g                 # Max disk size
        inactive=60m                 # Remove after 60 min unused
        use_temp_path=off;

    server {
        listen 443 ssl http2;
        server_name example.com;

        # Enable caching
        location /api/ {
            proxy_cache my_cache;
            proxy_cache_valid 200 302 10m;    # Cache 200, 302 for 10 min
            proxy_cache_valid 404 1m;          # Cache 404 for 1 min
            proxy_cache_use_stale error timeout updating
                                   http_500 http_502 http_503 http_504;
            proxy_cache_background_update on;
            proxy_cache_lock on;

            # Cache key
            proxy_cache_key "$scheme$request_method$host$request_uri";

            # Add cache status header
            add_header X-Cache-Status $upstream_cache_status;

            proxy_pass http://backend;
            include snippets/proxy-params.conf;
        }

        # Static file caching (browser)
        location ~* \.(jpg|jpeg|png|gif|ico|css|js|woff2)$ {
            root /var/www/static;
            expires 30d;
            add_header Cache-Control "public, immutable";
            access_log off;
        }
    }
}

6.2 Cache Status Values

| Status      | Description                                 |
|-------------|---------------------------------------------|
| HIT         | Served from cache                           |
| MISS        | Not in cache, fetched from backend          |
| EXPIRED     | Cache expired, refreshed from backend       |
| STALE       | Expired cache served per stale policy       |
| UPDATING    | Serving stale while updating in background  |
| REVALIDATED | Backend returned 304, existing cache reused |
| BYPASS      | Cache bypassed, direct backend request      |
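A toy lookup illustrating how the first three statuses arise (real Nginx also derives STALE/UPDATING/REVALIDATED from upstream state; the key below follows the `proxy_cache_key` format from 6.1):

```python
def cache_status(cache, key, now, ttl=600):
    """Toy cache lookup returning a subset of X-Cache-Status values.
    `cache` maps key -> time the response was stored."""
    stored_at = cache.get(key)
    if stored_at is None:
        return "MISS"                    # fetch from backend, then store
    if now - stored_at > ttl:
        return "EXPIRED"                 # refresh from backend
    return "HIT"                         # served from cache

# $scheme$request_method$host$request_uri, as configured above
key = "httpsGETexample.com/api/users"
cache = {}
print(cache_status(cache, key, now=0))    # MISS
cache[key] = 0                            # backend response stored at t=0
print(cache_status(cache, key, now=300))  # HIT (within the 10-minute TTL)
print(cache_status(cache, key, now=700))  # EXPIRED
```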

6.3 Cache Bypass and Purge

location /api/ {
    proxy_cache my_cache;

    # Skip cache when session cookie exists
    proxy_cache_bypass $cookie_session;
    proxy_no_cache $cookie_session;

    # Bypass on Cache-Control: no-cache
    proxy_cache_bypass $http_cache_control;

    proxy_pass http://backend;
}

7. Rate Limiting

7.1 Basic Rate Limiting

http {
    # Define limit zones
    # 10MB memory, 10 requests per second per IP
    limit_req_zone $binary_remote_addr zone=api_limit:10m rate=10r/s;

    # 1 request per second per IP (login protection)
    limit_req_zone $binary_remote_addr zone=login_limit:10m rate=1r/s;

    # API key-based limiting
    limit_req_zone $http_x_api_key zone=apikey_limit:10m rate=100r/s;

    server {
        # API endpoint - allow burst
        location /api/ {
            limit_req zone=api_limit burst=20 nodelay;
            limit_req_status 429;

            proxy_pass http://backend;
        }

        # Login - strict limiting
        location /auth/login {
            limit_req zone=login_limit burst=5;
            limit_req_status 429;

            proxy_pass http://backend;
        }
    }
}

7.2 Connection Limiting

http {
    # Limit concurrent connections per IP
    limit_conn_zone $binary_remote_addr zone=conn_limit:10m;

    server {
        # Max 100 concurrent connections per IP
        limit_conn conn_limit 100;

        # Download bandwidth limiting
        location /downloads/ {
            limit_conn conn_limit 5;        # 5 concurrent downloads
            limit_rate 500k;                # 500KB/s per connection
            limit_rate_after 10m;           # No limit for first 10MB
        }
    }
}

7.3 Understanding Rate Limiting Behavior

rate=10r/s, burst=20, nodelay:

Time  | Requests | Processed | Explanation
------|----------|-----------|----------------------------------
0.0s  |    25    |    21     | 1 (current slot) + 20 (burst) = 21;
      |          |           | 21 processed immediately, 4 get 429
0.1s  |     5    |     1     | 1 slot recovered (0.1s * 10r/s)
1.0s  |     5    |     5     | ~10 burst slots recovered by then
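The table can be reproduced with a token-bucket approximation of the limiter (Nginx actually tracks "excess" in a leaky-bucket counter, but the arithmetic matches for the `nodelay` case):

```python
class RateLimiter:
    """Token-bucket sketch of `rate=10r/s burst=20 nodelay`
    (an approximation of nginx's leaky-bucket excess counter)."""
    def __init__(self, rate=10.0, burst=20):
        self.rate = rate
        self.capacity = burst + 1          # 1 "current" slot + 20 burst
        self.tokens = float(self.capacity)
        self.last = 0.0

    def allow(self, now):
        # Refill at `rate` tokens/second, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True                    # processed immediately (nodelay)
        return False                       # 429 Too Many Requests

rl = RateLimiter()
burst0 = sum(rl.allow(0.0) for _ in range(25))
print(burst0)                              # 21 accepted, 4 rejected
print(rl.allow(0.1))                       # True: one slot recovered
print(rl.allow(0.1))                       # False: still over the limit
```

Without `nodelay` the arithmetic is the same, but accepted burst requests are released at 1 per 100 ms instead of immediately.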

8. WebSocket Proxy

8.1 WebSocket Configuration

# Map for WebSocket upgrade
map $http_upgrade $connection_upgrade {
    default upgrade;
    ''      close;
}

upstream websocket_backend {
    server 10.0.0.1:8080;
    server 10.0.0.2:8080;

    # WebSocket requires sticky sessions
    ip_hash;
}

server {
    listen 443 ssl http2;
    server_name ws.example.com;

    ssl_certificate /etc/nginx/certs/fullchain.pem;
    ssl_certificate_key /etc/nginx/certs/privkey.pem;

    location /ws/ {
        proxy_pass http://websocket_backend;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $connection_upgrade;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

        # WebSocket timeout (default 60s)
        proxy_read_timeout 3600s;
        proxy_send_timeout 3600s;
    }
}

8.2 WebSocket Handshake Flow

Client                    Nginx                    Backend
  │                         │                         │
  │  GET /ws/ HTTP/1.1      │                         │
  │  Upgrade: websocket     │                         │
  │  Connection: Upgrade    │                         │
  │────────────────────────>│                         │
  │                         │  GET /ws/ HTTP/1.1      │
  │                         │  Upgrade: websocket     │
  │                         │  Connection: Upgrade    │
  │                         │────────────────────────>│
  │                         │                         │
  │                         │  101 Switching Protocols│
  │                         │<────────────────────────│
  │  101 Switching Protocols│                         │
  │<────────────────────────│                         │
  │                         │                         │
  │<── WebSocket frames ───>│<── WebSocket frames ───>│
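The 101 response is validated cryptographically: the server proves it understood the upgrade by hashing the client's Sec-WebSocket-Key with a fixed GUID (RFC 6455). The key/accept pair below is the example from the RFC itself:

```python
import base64
import hashlib

GUID = "258EAFA5-E914-47DA-95CA-C5AB0DC85B11"   # fixed by RFC 6455

def websocket_accept(sec_websocket_key):
    """Compute the Sec-WebSocket-Accept value for a 101 response."""
    digest = hashlib.sha1((sec_websocket_key + GUID).encode()).digest()
    return base64.b64encode(digest).decode()

# Test vector from RFC 6455 section 1.3
print(websocket_accept("dGhlIHNhbXBsZSBub25jZQ=="))
# s3pPLMBiTxaQ9kYGzzhZRbK+xOo=
```

Nginx proxies these handshake headers through untouched; the backend computes the accept value, which is why the `Upgrade`/`Connection` headers must be forwarded explicitly as in the config above.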

9. Security Configuration

9.1 Security Headers

server {
    # XSS Protection (legacy; modern browsers ignore this header and rely on CSP)
    add_header X-XSS-Protection "1; mode=block" always;

    # Prevent MIME type sniffing
    add_header X-Content-Type-Options "nosniff" always;

    # Clickjacking prevention
    add_header X-Frame-Options "SAMEORIGIN" always;

    # Referrer Policy
    add_header Referrer-Policy "strict-origin-when-cross-origin" always;

    # Content Security Policy
    add_header Content-Security-Policy "default-src 'self'; script-src 'self' 'unsafe-inline' cdn.example.com; style-src 'self' 'unsafe-inline'; img-src 'self' data: cdn.example.com; font-src 'self' fonts.gstatic.com;" always;

    # Permissions Policy
    add_header Permissions-Policy "camera=(), microphone=(), geolocation=()" always;
}

9.2 Access Control

server {
    # IP-based access control
    location /admin/ {
        allow 10.0.0.0/8;
        allow 192.168.0.0/16;
        deny all;

        proxy_pass http://backend;
    }

    # Basic Authentication
    location /internal/ {
        auth_basic "Restricted Access";
        auth_basic_user_file /etc/nginx/.htpasswd;

        proxy_pass http://backend;
    }

    # Allow only specific HTTP methods
    location /api/ {
        limit_except GET POST PUT DELETE {
            deny all;
        }

        proxy_pass http://backend;
    }

    # Hide server information
    server_tokens off;

    # Block hidden files
    location ~ /\. {
        deny all;
        access_log off;
        log_not_found off;
    }
}
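The allow/deny rules are evaluated top to bottom and the first match wins. Python's `ipaddress` module makes that easy to model (a hypothetical `RULES` list mirroring the `/admin/` block above):

```python
import ipaddress

# First-match evaluation, like nginx allow/deny in the /admin/ block.
RULES = [
    ("allow", ipaddress.ip_network("10.0.0.0/8")),
    ("allow", ipaddress.ip_network("192.168.0.0/16")),
    ("deny",  ipaddress.ip_network("0.0.0.0/0")),   # deny all
]

def is_allowed(client_ip):
    addr = ipaddress.ip_address(client_ip)
    for action, net in RULES:
        if addr in net:
            return action == "allow"     # first matching rule decides
    return False

print(is_allowed("10.1.2.3"))     # True  (matches 10.0.0.0/8)
print(is_allowed("203.0.113.9"))  # False (falls through to deny all)
```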

9.3 Basic DDoS Protection

http {
    # Connection limits
    limit_conn_zone $binary_remote_addr zone=conn_per_ip:10m;
    limit_req_zone $binary_remote_addr zone=req_per_ip:10m rate=30r/s;

    # Request body size limits
    client_max_body_size 10m;
    client_body_buffer_size 128k;
    client_header_buffer_size 1k;
    large_client_header_buffers 4 4k;

    # Timeout settings
    client_body_timeout 12;
    client_header_timeout 12;
    send_timeout 10;

    server {
        limit_conn conn_per_ip 50;
        limit_req zone=req_per_ip burst=50 nodelay;
    }
}

10. Compression and Performance Optimization

10.1 Detailed Gzip Configuration

http {
    gzip on;
    gzip_vary on;
    gzip_proxied any;
    gzip_comp_level 5;          # 1-9 (5 is optimal balance)
    gzip_min_length 1024;       # Skip files under 1KB
    gzip_buffers 16 8k;
    gzip_http_version 1.1;
    gzip_types
        text/plain
        text/css
        text/xml
        text/javascript
        application/json
        application/javascript
        application/xml
        application/xml+rss
        application/atom+xml
        image/svg+xml
        font/opentype
        font/ttf
        font/woff
        font/woff2;

    # Use pre-compressed files (generated at build time)
    gzip_static on;
}
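`gzip_min_length` exists because compressing tiny responses can make them bigger: the gzip header and trailer alone cost around 20 bytes. A quick demonstration with Python's gzip module:

```python
import gzip

tiny = b"ok"                                   # far below gzip_min_length
page = b"<li>item</li>" * 500                  # repetitive HTML compresses well

print(len(gzip.compress(tiny)), len(tiny))     # compressed output is LARGER
print(len(gzip.compress(page)), len(page))     # large text shrinks dramatically
```

This is why the config skips responses under 1 KB: below that point the container overhead and CPU cost outweigh any savings.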

10.2 Brotli Compression (Nginx Module)

# Brotli is 20-30% more efficient than Gzip
# Requires separate module installation
load_module modules/ngx_http_brotli_filter_module.so;
load_module modules/ngx_http_brotli_static_module.so;

http {
    brotli on;
    brotli_comp_level 6;
    brotli_types text/plain text/css application/json
                 application/javascript text/xml application/xml
                 application/xml+rss text/javascript image/svg+xml;
    brotli_static on;
}

10.3 Performance Tuning Checklist

worker_processes auto;                # CPU core count
worker_rlimit_nofile 65535;

events {
    worker_connections 10240;
    multi_accept on;
    use epoll;
}

http {
    # File transfer optimization
    sendfile on;
    tcp_nopush on;                    # Use with sendfile
    tcp_nodelay on;                   # Effective with keepalive

    # Timeouts
    keepalive_timeout 65;
    keepalive_requests 1000;

    # Buffers
    client_body_buffer_size 16k;
    client_header_buffer_size 1k;
    client_max_body_size 50m;
    large_client_header_buffers 4 8k;

    # File cache
    open_file_cache max=10000 inactive=20s;
    open_file_cache_valid 30s;
    open_file_cache_min_uses 2;
    open_file_cache_errors on;

    # Upstream connection pool
    upstream backend {
        server 10.0.0.1:8080;         # a pooled upstream needs at least one server
        keepalive 32;
        keepalive_requests 100;
        keepalive_timeout 60s;
    }
}

11. Docker and Kubernetes Integration

11.1 Docker Compose with Nginx

# docker-compose.yml
version: '3.8'

services:
  nginx:
    image: nginx:alpine
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./nginx/nginx.conf:/etc/nginx/nginx.conf:ro
      - ./nginx/conf.d:/etc/nginx/conf.d:ro
      - ./nginx/certs:/etc/nginx/certs:ro
      - ./nginx/cache:/var/cache/nginx
    depends_on:
      - app1
      - app2
    networks:
      - webnet
    restart: unless-stopped

  app1:
    image: myapp:latest
    expose:
      - "3000"
    networks:
      - webnet

  app2:
    image: myapp:latest
    expose:
      - "3000"
    networks:
      - webnet

networks:
  webnet:
    driver: bridge

# nginx/conf.d/default.conf
upstream app {
    server app1:3000;
    server app2:3000;
    keepalive 16;
}

server {
    listen 80;
    server_name _;
    return 301 https://$host$request_uri;
}

server {
    listen 443 ssl http2;
    server_name example.com;

    ssl_certificate /etc/nginx/certs/fullchain.pem;
    ssl_certificate_key /etc/nginx/certs/privkey.pem;

    location / {
        proxy_pass http://app;
        include /etc/nginx/snippets/proxy-params.conf;
    }
}

11.2 Kubernetes Ingress (Nginx Ingress Controller)

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/limit-rps: "10"
    nginx.ingress.kubernetes.io/proxy-body-size: "50m"
    nginx.ingress.kubernetes.io/proxy-read-timeout: "60"
    nginx.ingress.kubernetes.io/proxy-send-timeout: "60"
    cert-manager.io/cluster-issuer: "letsencrypt-prod"
spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - example.com
        - api.example.com
      secretName: example-tls
  rules:
    - host: example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: frontend-svc
                port:
                  number: 80
    - host: api.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: backend-svc
                port:
                  number: 8080

12. Nginx vs Traefik vs Caddy

| Feature           | Nginx                      | Traefik                      | Caddy                     |
|-------------------|----------------------------|------------------------------|---------------------------|
| Configuration     | File-based                 | Dynamic (Docker labels, K8s) | Caddyfile / JSON API      |
| Auto HTTPS        | Manual (Certbot)           | Built-in (ACME)              | Built-in (ACME)           |
| Service Discovery | Manual                     | Docker/K8s auto              | Limited                   |
| Performance       | Top tier                   | Good                         | Good                      |
| HTTP/3            | Module support             | Built-in                     | Built-in                  |
| Dashboard         | None (Nginx Plus paid)     | Built-in web UI              | Built-in API              |
| Config Reload     | nginx -s reload            | Auto hot reload              | Auto hot reload           |
| Community         | Very large                 | Growing                      | Growing                   |
| Best For          | General purpose, high perf | Microservices, Docker        | Small scale, simple setup |
| License           | BSD                        | MIT                          | Apache 2.0                |

12.1 Selection Guide

Choose Nginx when:

  • You need top-tier performance and stability
  • Complex reverse proxy rules are required
  • Serving legacy systems or static files
  • You need the largest community and documentation

Choose Traefik when:

  • You need automatic service discovery in Docker/Kubernetes
  • Routing rules change dynamically in a microservices environment
  • Built-in dashboard and metrics are important

Choose Caddy when:

  • Automatic HTTPS is the top priority
  • You want quick setup with simple configuration
  • Small projects or development environments

13. Interview Quiz

Q1. Why is Nginx's event-driven architecture advantageous over Apache's process model?

Apache's traditional Prefork/Worker MPM allocates a process or thread per connection. 10,000 concurrent connections require 10,000 processes/threads, each consuming several MB of memory.

Nginx uses an event loop where a small number of worker processes (typically matching CPU core count) handle tens of thousands of connections asynchronously through OS event mechanisms like epoll/kqueue.

Key differences:

  • Memory efficiency: Nginx uses a few KB per connection vs Apache's few MB
  • Context switching: Nginx minimizes it vs Apache's process/thread switching overhead
  • C10K problem: Nginx was designed from the ground up to solve this
  • Caveat: Event model can be disadvantageous for CPU-intensive tasks

Q2. What is the difference between proxy_pass with and without a trailing slash?

This is one of the most common Nginx configuration mistakes.

proxy_pass http://backend; (no trailing slash): The request URI is passed through as-is. A /api/users request forwards as /api/users to the backend.

proxy_pass http://backend/; (with trailing slash): The matched location part is stripped and the remainder is forwarded.

Example with location /api/:

  • No slash: /api/users request goes to http://backend/api/users
  • With slash: /api/users request goes to http://backend/users

Misunderstanding this causes 404 errors or incorrect routing.
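The rule can be captured in a small (hypothetical) helper: if proxy_pass carries a URI part — anything after the host, including a bare trailing slash — the matched location prefix is replaced by that URI part; otherwise the request URI passes through unchanged. (Simplified: ignores regex locations and query strings.)

```python
def forwarded_uri(request_uri, location, proxy_pass):
    """Sketch of the proxy_pass trailing-slash rule (simplified)."""
    host_and_path = proxy_pass.split("//", 1)[1]
    if "/" in host_and_path:
        # proxy_pass has a URI part: the matched location prefix is
        # replaced by it.
        uri_part = host_and_path.split("/", 1)[1]
        return "/" + uri_part + request_uri[len(location):]
    # No URI part: the request URI is passed through unchanged.
    return request_uri

print(forwarded_uri("/api/users", "/api/", "http://backend"))     # /api/users
print(forwarded_uri("/api/users", "/api/", "http://backend/"))    # /users
print(forwarded_uri("/api/users", "/api/", "http://backend/v2/")) # /v2/users
```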

Q3. What are the roles of burst and nodelay in Nginx rate limiting?

limit_req zone=api rate=10r/s burst=20 nodelay;

rate=10r/s: Allows 10 requests per second. Internally uses a token bucket allowing 1 request per 100ms.

burst=20: Queues up to 20 excess requests beyond the rate. Without burst, requests exceeding the rate get an immediate 429.

nodelay: Processes burst-queued requests immediately without delay. Without nodelay, requests are processed sequentially at the rate, causing wait times.

Combined effects:

  • rate=10r/s burst=20: Handles up to 21 instantly, but burst requests are delayed
  • rate=10r/s burst=20 nodelay: Processes up to 21 immediately, excess gets 429

Q4. What is SSL Termination and why use it?

SSL Termination handles HTTPS encryption/decryption at the Nginx (reverse proxy) level, communicating with backend servers over plain HTTP.

Benefits:

  1. Reduced backend load: SSL handshakes and encryption/decryption are CPU-intensive; centralizing them at Nginx frees backend CPU for application work
  2. Centralized certificate management: All certificates managed in one place
  3. Simplified backends: Backend applications do not need to handle SSL
  4. Performance optimization: Leverages Nginx SSL session cache, OCSP Stapling, etc.

Security considerations:

  • Internal network between Nginx and backends must be secure
  • mTLS (Mutual TLS) can be applied for internal communication if needed
  • X-Forwarded-Proto header communicates the original protocol to backends

Q5. What is the role of upstream keepalive and what is an appropriate value?

keepalive is a connection pool that caches idle connections between Nginx and upstream (backend) servers.

upstream backend {
    server 10.0.0.1:8080;
    server 10.0.0.2:8080;
    keepalive 32;
}

Without keepalive: Every request triggers a TCP 3-way handshake. Under high load, TIME_WAIT sockets accumulate rapidly, potentially causing port exhaustion.

With keepalive: Reuses existing connections, eliminating TCP handshake overhead. This reduces latency and increases throughput.

Appropriate value:

  • Start with roughly 2x your concurrent connection count
  • Too high wastes memory; too low reduces connection reuse benefits
  • Must set proxy_http_version 1.1; and proxy_set_header Connection ""; for it to work

14. References

  1. Nginx Official Documentation
  2. Nginx Admin Guide
  3. Nginx Plus Feature Comparison
  4. Let's Encrypt / Certbot
  5. Nginx Ingress Controller
  6. Traefik Documentation
  7. Caddy Documentation
  8. Mozilla SSL Configuration Generator
  9. Nginx Performance Tuning Guide
  10. C10K Problem
  11. Nginx Cookbook (O'Reilly)
  12. DigitalOcean Nginx Tutorials
  13. Nginx Security Best Practices