- 1. Nginx Architecture and Configuration Structure
- 2. Virtual Host / Server Block Configuration
- 3. Reverse Proxy Configuration
- 4. Load Balancing
- 5. SSL/TLS Configuration
- 6. Caching Configuration
- 7. Rate Limiting and Connection Limiting
- 8. Gzip/Brotli Compression
- 9. Security Headers
- 10. Access Control
- 11. Logging Configuration
- 12. Performance Tuning
- 13. URL Rewriting and Redirection
- 14. Static File Serving Optimization
- 15. Health Checks and Monitoring
- Production Configuration Checklist
- References
1. Nginx Architecture and Configuration Structure
1.1 Event-Driven Architecture: Master-Worker Model
Nginx adopts a fundamentally different event-driven architecture compared to Apache httpd's process/thread-based model. This design philosophy is the core reason why Nginx can handle hundreds of thousands of concurrent connections on a single server.
┌─────────────────────────────────────────────────────────┐
│                     Master Process                      │
│   - Read and validate configuration files               │
│   - Create/manage Worker processes (fork)               │
│   - Port binding (80, 443)                              │
│   - Signal handling (reload, stop, reopen)              │
└──────┬──────────────┬──────────────┬──────────────┬─────┘
       │              │              │              │
┌──────▼─────┐ ┌──────▼─────┐ ┌──────▼─────┐ ┌──────▼─────┐
│  Worker 0  │ │  Worker 1  │ │  Worker 2  │ │  Worker 3  │
│ Event Loop │ │ Event Loop │ │ Event Loop │ │ Event Loop │
│  epoll/kq  │ │  epoll/kq  │ │  epoll/kq  │ │  epoll/kq  │
│ 1000s conn │ │ 1000s conn │ │ 1000s conn │ │ 1000s conn │
└────────────┘ └────────────┘ └────────────┘ └────────────┘
The Master Process runs with root privileges and is responsible for parsing configuration files, binding ports, and managing Worker processes. It creates Workers via the fork() system call, and during configuration reload, it spawns new Workers and gracefully shuts down the old ones without dropping existing connections.
The Worker Process is the core unit that handles actual client requests. Each Worker runs an independent event loop, leveraging the OS I/O multiplexing mechanism (epoll on Linux, kqueue on FreeBSD/macOS) to handle thousands of connections concurrently without blocking. This dramatically reduces context switching and memory overhead compared to per-connection thread allocation.
1.2 nginx.conf Configuration Context Structure
Nginx configuration follows a hierarchical context structure. Child contexts inherit settings from the parent, and redeclaring the same directive in a child context overrides the parent value.
# ============================================
# Main Context (Global Settings)
# ============================================
user nginx; # User running Worker processes
worker_processes auto; # Create Workers matching CPU core count
error_log /var/log/nginx/error.log warn;
pid /run/nginx.pid;
# ============================================
# Events Context (Connection Handling)
# ============================================
events {
    worker_connections 1024;    # Max concurrent connections per Worker
    multi_accept on;            # Accept multiple connections at once
    use epoll;                  # Use epoll on Linux (default)
}
# ============================================
# HTTP Context (HTTP Protocol Settings)
# ============================================
http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    # ========================================
    # Server Context (Virtual Host)
    # ========================================
    server {
        listen 80;
        server_name example.com;

        # ====================================
        # Location Context (URL Path Matching)
        # ====================================
        location / {
            root /var/www/html;
            index index.html;
        }

        location /api/ {
            proxy_pass http://backend;
        }
    }
}

# ============================================
# Stream Context (TCP/UDP Proxy, L4)
# ============================================
stream {
    server {
        listen 3306;
        proxy_pass mysql_backend;
    }
}
Context Hierarchy Summary:
| Context | Location | Purpose |
|---|---|---|
| Main | Top-level | Global settings (user, worker, pid, error_log) |
| Events | Inside Main | Connection handling mechanism (worker_connections) |
| HTTP | Inside Main | All HTTP protocol-related settings |
| Server | Inside HTTP | Virtual host (per-domain settings) |
| Location | Inside Server | Request handling rules per URL path |
| Upstream | Inside HTTP | Backend server group (load balancing) |
| Stream | Inside Main | TCP/UDP L4 proxy |
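The inheritance rule in practice: a directive set in `http` applies everywhere below it until a child context redeclares it. A minimal sketch (the paths are illustrative):

```nginx
http {
    gzip on;                          # inherited by every server/location below
    root /var/www/default;

    server {
        server_name example.com;
        root /var/www/example.com;    # overrides the http-level root

        location /legacy/ {
            gzip off;                 # overrides gzip for this path only
        }
    }
}
```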
1.3 Configuration File Structure Best Practice
In production environments, rather than putting everything into a single nginx.conf, the configuration is split into modular files that are easier to manage.
/etc/nginx/
├── nginx.conf # Main config (split with include)
├── conf.d/ # Common config snippets
│ ├── ssl-params.conf # SSL/TLS common parameters
│ ├── proxy-params.conf # Reverse proxy common headers
│ ├── security-headers.conf # Security headers
│ └── gzip.conf # Compression settings
├── sites-available/ # Per-site config files
│ ├── example.com.conf
│ ├── api.example.com.conf
│ └── admin.example.com.conf
├── sites-enabled/ # Active sites (symlinks)
│ ├── example.com.conf -> ../sites-available/example.com.conf
│ └── api.example.com.conf -> ../sites-available/api.example.com.conf
└── snippets/ # Reusable config fragments
├── letsencrypt.conf
└── fastcgi-php.conf
# nginx.conf main file
http {
    include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/sites-enabled/*.conf;
}
2. Virtual Host / Server Block Configuration
Nginx Server Blocks are the equivalent of Apache Virtual Hosts, allowing you to host multiple domains on a single server.
2.1 Basic Server Block Configuration
# /etc/nginx/sites-available/example.com.conf

# -- Primary domain --
server {
    listen 80;
    listen [::]:80;    # IPv6 support
    server_name example.com www.example.com;

    root /var/www/example.com/html;
    index index.html index.htm;

    # Separate access logs per domain
    access_log /var/log/nginx/example.com.access.log;
    error_log  /var/log/nginx/example.com.error.log;

    location / {
        try_files $uri $uri/ =404;
    }
}

# -- Second domain --
server {
    listen 80;
    listen [::]:80;
    server_name blog.example.com;

    root /var/www/blog.example.com/html;
    index index.html;

    location / {
        try_files $uri $uri/ =404;
    }
}
2.2 Default Server (catch-all)
This is the default server block that handles requests for undefined domains. For security, responding with 444 (connection drop) is recommended.
server {
    listen 80 default_server;
    listen [::]:80 default_server;
    server_name _;    # "_" is just an unmatchable placeholder; default_server does the catching

    # Immediately close connections for undefined host requests
    return 444;
}
2.3 Server Name Matching Priority
Nginx follows this priority order for server_name matching:
1. Exact name: `server_name example.com`
2. Leading wildcard: `server_name *.example.com`
3. Trailing wildcard: `server_name example.*`
4. Regular expression: `server_name ~^(?<subdomain>.+)\.example\.com$`
5. `default_server`: when none of the above match
# Capture subdomain with regex
server {
    listen 80;
    server_name ~^(?<subdomain>.+)\.example\.com$;

    location / {
        root /var/www/$subdomain;
    }
}
2.4 Site Enable/Disable
# Enable site
sudo ln -s /etc/nginx/sites-available/example.com.conf \
/etc/nginx/sites-enabled/example.com.conf
# Validate configuration
sudo nginx -t
# Reload (zero downtime)
sudo systemctl reload nginx
3. Reverse Proxy Configuration
3.1 Basic Reverse Proxy
server {
    listen 80;
    server_name app.example.com;

    location / {
        proxy_pass http://127.0.0.1:3000;

        # -- Essential proxy headers --
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Port $server_port;
    }
}
Purpose of each header:
| Header | Purpose |
|---|---|
| Host | Pass the original request Host header |
| X-Real-IP | Actual client IP (identify the original IP behind the proxy) |
| X-Forwarded-For | Accumulated client IP list through the proxy chain |
| X-Forwarded-Proto | Original protocol (http/https) -- used for redirect decisions |
| X-Forwarded-Host | Original Host header |
| X-Forwarded-Port | Original port |
3.2 Reusable Proxy Parameter Snippet
# /etc/nginx/conf.d/proxy-params.conf
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header X-Forwarded-Host $host;
proxy_set_header X-Forwarded-Port $server_port;
proxy_http_version 1.1;
proxy_connect_timeout 60s;
proxy_send_timeout 60s;
proxy_read_timeout 60s;
proxy_buffering on;
# Reuse via include in a Server Block
location / {
    proxy_pass http://backend;
    include /etc/nginx/conf.d/proxy-params.conf;
}
3.3 WebSocket Proxy
WebSocket uses the HTTP Upgrade mechanism, so the Upgrade and Connection hop-by-hop headers must be explicitly forwarded. Nginx does not forward these headers by default.
# -- Dynamically set the Connection header with map --
map $http_upgrade $connection_upgrade {
    default upgrade;
    ''      close;
}

server {
    listen 80;
    server_name ws.example.com;

    location /ws/ {
        proxy_pass http://127.0.0.1:8080;

        # WebSocket required settings
        proxy_http_version 1.1;                          # HTTP/1.1 required (Upgrade support)
        proxy_set_header Upgrade $http_upgrade;          # Forward the client's Upgrade header
        proxy_set_header Connection $connection_upgrade; # Dynamic Connection header

        # Standard proxy headers
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

        # WebSocket connections are long-lived; extend timeouts
        proxy_read_timeout 86400s;    # 24 hours (the 60s default would drop idle connections)
        proxy_send_timeout 86400s;
    }
}
3.4 Path-Based Routing (Microservices)
server {
    listen 80;
    server_name api.example.com;

    # User service
    location /api/users/ {
        proxy_pass http://user-service:3001/;    # Note the trailing /: strips /api/users/ before forwarding
        include /etc/nginx/conf.d/proxy-params.conf;
    }

    # Order service
    location /api/orders/ {
        proxy_pass http://order-service:3002/;
        include /etc/nginx/conf.d/proxy-params.conf;
    }

    # Payment service
    location /api/payments/ {
        proxy_pass http://payment-service:3003/;
        include /etc/nginx/conf.d/proxy-params.conf;
        proxy_read_timeout 120s;                 # Extended timeout for payments
    }
}
Note: If the `proxy_pass` URL ends with `/`, the portion matching the `location` prefix is stripped. A request to `/api/users/123` is forwarded to `http://user-service:3001/123`. Without the trailing `/`, the full URI is forwarded as-is.
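Both behaviors side by side (the `assets` upstream name is illustrative):

```nginx
location /api/users/ {
    # Trailing slash: the matched prefix is replaced
    proxy_pass http://user-service:3001/;    # /api/users/123 -> http://user-service:3001/123
}

location /static/ {
    # No trailing slash (and no URI part): the full original URI is kept
    proxy_pass http://assets;                # /static/app.js -> http://assets/static/app.js
}
```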
4. Load Balancing
4.1 Upstream Block and Algorithms
# ============================================
# 1. Round Robin (Default)
# Distributes requests sequentially
# ============================================
upstream backend_roundrobin {
    server 10.0.0.1:8080;    # Default weight 1
    server 10.0.0.2:8080;
    server 10.0.0.3:8080;
}

# ============================================
# 2. Weighted Round Robin
# Distributes proportionally based on server capacity
# ============================================
upstream backend_weighted {
    server 10.0.0.1:8080 weight=5;    # 5/8 of requests
    server 10.0.0.2:8080 weight=2;    # 2/8 of requests
    server 10.0.0.3:8080 weight=1;    # 1/8 of requests
}

# ============================================
# 3. Least Connections
# Forwards to the server with the fewest active connections
# ============================================
upstream backend_leastconn {
    least_conn;
    server 10.0.0.1:8080;
    server 10.0.0.2:8080;
    server 10.0.0.3:8080;
}

# ============================================
# 4. IP Hash (Session Affinity)
# Same client IP -> same server
# ============================================
upstream backend_iphash {
    ip_hash;
    server 10.0.0.1:8080;
    server 10.0.0.2:8080;
    server 10.0.0.3:8080;
}

# ============================================
# 5. Generic Hash (Custom Hash)
# Hash based on an arbitrary variable
# ============================================
upstream backend_hash {
    hash $request_uri consistent;    # URI-based + consistent hashing
    server 10.0.0.1:8080;
    server 10.0.0.2:8080;
    server 10.0.0.3:8080;
}
Algorithm Selection Guide:
| Algorithm | Suitable For | Considerations |
|---|---|---|
| Round Robin | Stateless services, identical servers | Imbalance if server specs differ |
| Weighted | Servers with different specs | Manual weight management needed |
| Least Connections | High variance in request processing time | Similar to Round Robin for short requests |
| IP Hash | Session affinity needed (legacy apps) | Redistribution occurs on server add/remove |
| Generic Hash | Cache optimization (same URI -> same server) | Consistent hashing recommended |
4.2 Server Status and Backup
upstream backend {
    least_conn;
    server 10.0.0.1:8080;           # Active server
    server 10.0.0.2:8080;           # Active server
    server 10.0.0.3:8080 backup;    # Backup: used only when all active servers are down
    server 10.0.0.4:8080 down;      # Disabled (maintenance)
    server 10.0.0.5:8080 max_fails=3 fail_timeout=30s;
    # max_fails=3: marked unhealthy after 3 failures within fail_timeout
    # fail_timeout=30s: excluded from requests for 30s after being marked unhealthy
}
4.3 Keepalive Connection Pool
Reuses TCP connections to backend servers to reduce connection setup/teardown overhead.
upstream backend {
    server 10.0.0.1:8080;
    server 10.0.0.2:8080;

    keepalive 32;              # Number of idle connections to maintain per Worker
    keepalive_requests 1000;   # Max requests per keepalive connection
    keepalive_timeout 60s;     # How long an idle connection is kept
}

server {
    location / {
        proxy_pass http://backend;
        proxy_http_version 1.1;          # HTTP/1.1 required for keepalive
        proxy_set_header Connection "";  # Empty value (instead of "close") enables keepalive
    }
}
5. SSL/TLS Configuration
5.1 Basic HTTPS Setup
server {
    listen 443 ssl http2;       # nginx >= 1.25.1 prefers the separate "http2 on;" directive
    listen [::]:443 ssl http2;
    server_name example.com www.example.com;

    # -- Certificates --
    ssl_certificate     /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;

    # -- Protocols --
    ssl_protocols TLSv1.2 TLSv1.3;    # Disable TLS 1.0, 1.1

    # -- Cipher Suites (for TLS 1.2) --
    ssl_ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305;
    ssl_prefer_server_ciphers off;    # Recommended off for TLS 1.3

    # -- Elliptic Curves --
    ssl_ecdh_curve X25519:secp384r1:secp256r1;

    # -- Session Reuse --
    ssl_session_cache shared:SSL:10m;    # 10MB = ~40,000 sessions
    ssl_session_timeout 1d;              # Session validity period
    ssl_session_tickets off;             # Ensure Forward Secrecy

    root /var/www/example.com/html;
    index index.html;
}
5.2 HTTP to HTTPS Redirect
server {
    listen 80;
    listen [::]:80;
    server_name example.com www.example.com;

    # 301 redirect all HTTP requests to HTTPS
    return 301 https://$host$request_uri;
}
5.3 HSTS (HTTP Strict Transport Security)
# Add inside HTTPS server block
add_header Strict-Transport-Security "max-age=63072000; includeSubDomains; preload" always;
- `max-age=63072000`: use HTTPS only, for 2 years (minimum 1 year recommended)
- `includeSubDomains`: apply to all subdomains as well
- `preload`: qualifies for browser HSTS Preload list registration
- `always`: send the header even on error responses (4xx, 5xx)
5.4 OCSP Stapling
OCSP Stapling has the server handle certificate validity verification on behalf of the client, eliminating the client's direct CA lookup. This improves initial connection speed and protects privacy.
ssl_stapling on;
ssl_stapling_verify on;
# Trust chain for OCSP response verification (includes Let's Encrypt intermediate cert)
ssl_trusted_certificate /etc/letsencrypt/live/example.com/chain.pem;
# DNS resolver for OCSP responder lookup
resolver 8.8.8.8 8.8.4.4 valid=300s;
resolver_timeout 5s;
5.5 Production SSL Integrated Snippet
# /etc/nginx/conf.d/ssl-params.conf
ssl_protocols TLSv1.2 TLSv1.3;
ssl_ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305;
ssl_prefer_server_ciphers off;
ssl_ecdh_curve X25519:secp384r1:secp256r1;
ssl_session_cache shared:SSL:10m;
ssl_session_timeout 1d;
ssl_session_tickets off;
ssl_stapling on;
ssl_stapling_verify on;
resolver 8.8.8.8 8.8.4.4 valid=300s;
resolver_timeout 5s;
add_header Strict-Transport-Security "max-age=63072000; includeSubDomains; preload" always;
# Include in Server Block
server {
    listen 443 ssl http2;
    server_name example.com;

    ssl_certificate     /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;
    include /etc/nginx/conf.d/ssl-params.conf;

    # ... remaining configuration
}
5.6 Let's Encrypt Auto-Renewal (Certbot)
# Issue certificate
sudo certbot --nginx -d example.com -d www.example.com
# Test auto-renewal
sudo certbot renew --dry-run
# Cron auto-renewal (certbot already sets this up, but explicit)
echo "0 0,12 * * * root certbot renew --quiet --deploy-hook 'systemctl reload nginx'" \
| sudo tee /etc/cron.d/certbot-renew
6. Caching Configuration
6.1 Proxy Cache (Reverse Proxy Cache)
# -- Define cache path in HTTP Context --
proxy_cache_path /var/cache/nginx/proxy
                 levels=1:2                  # 2-level directory structure (file distribution)
                 keys_zone=proxy_cache:10m   # Shared memory for cache keys (1MB ~ 8,000 keys)
                 max_size=1g                 # Maximum disk cache size
                 inactive=60m                # Remove cache entries unused for 60 minutes
                 use_temp_path=off;          # Write directly to the cache path (no temp files -> performance gain)

server {
    listen 80;
    server_name app.example.com;

    location / {
        proxy_pass http://backend;

        proxy_cache proxy_cache;          # Specify the cache zone
        proxy_cache_valid 200 302 10m;    # Cache 200, 302 responses for 10 min
        proxy_cache_valid 404 1m;         # Cache 404 responses for 1 min
        proxy_cache_use_stale error timeout updating
                              http_500 http_502 http_503 http_504;
                                          # Serve stale cache on backend errors
        proxy_cache_lock on;              # Only one request goes to the backend on concurrent misses
        proxy_cache_min_uses 2;           # Only cache URLs requested 2+ times

        # Cache status header for debugging
        add_header X-Cache-Status $upstream_cache_status;
    }

    # Cache bypass (for admins)
    location /api/ {
        proxy_pass http://backend;
        proxy_cache proxy_cache;

        # Bypass when the Cookie has nocache or the request carries Cache-Control: no-cache
        proxy_cache_bypass $http_cache_control;
        proxy_no_cache $cookie_nocache;
    }
}
X-Cache-Status values:
| Status | Meaning |
|---|---|
| HIT | Served directly from cache |
| MISS | No cache entry -> requested from the backend |
| EXPIRED | Expired cache -> re-requested from the backend |
| STALE | Expired but served via the stale policy |
| UPDATING | Stale cache served while being updated |
| BYPASS | Cache was bypassed |
6.2 FastCGI Cache (PHP etc.)
fastcgi_cache_path /var/cache/nginx/fastcgi
                   levels=1:2
                   keys_zone=fastcgi_cache:10m
                   max_size=512m
                   inactive=30m;

server {
    listen 80;
    server_name wordpress.example.com;

    # Define cache bypass conditions
    set $skip_cache 0;

    # Do not cache POST requests
    if ($request_method = POST) {
        set $skip_cache 1;
    }

    # Do not cache requests with query strings
    if ($query_string != "") {
        set $skip_cache 1;
    }

    # Exclude WordPress admin pages from cache
    if ($request_uri ~* "/wp-admin/|/xmlrpc\.php|wp-.*\.php") {
        set $skip_cache 1;
    }

    # Exclude logged-in users from cache
    if ($http_cookie ~* "comment_author|wordpress_[a-f0-9]+|wp-postpass|wordpress_no_cache|wordpress_logged_in") {
        set $skip_cache 1;
    }

    location ~ \.php$ {
        fastcgi_pass unix:/run/php/php-fpm.sock;
        fastcgi_index index.php;
        include fastcgi_params;

        fastcgi_cache fastcgi_cache;
        fastcgi_cache_valid 200 30m;
        fastcgi_cache_bypass $skip_cache;
        fastcgi_no_cache $skip_cache;
        fastcgi_cache_use_stale error timeout updating http_500;

        add_header X-FastCGI-Cache $upstream_cache_status;
    }
}
6.3 Browser Caching (Static Resources)
# -- Long-term browser cache for static files --
location ~* \.(jpg|jpeg|png|gif|ico|webp|avif|svg)$ {
    expires 30d;                                   # Set the Expires header
    add_header Cache-Control "public, immutable";  # Browser uses the cache without revalidation
    access_log off;                                # Disable static file logging (I/O savings)
}

location ~* \.(css|js)$ {
    expires 7d;
    add_header Cache-Control "public";
}

location ~* \.(woff|woff2|ttf|eot)$ {
    expires 365d;
    add_header Cache-Control "public, immutable";
    add_header Access-Control-Allow-Origin "*";    # Font CORS
}

# -- Short cache or no-cache for HTML --
location ~* \.html$ {
    expires -1;    # Expires in the past -> no caching
    add_header Cache-Control "no-store, no-cache, must-revalidate";
}
7. Rate Limiting and Connection Limiting
7.1 Rate Limiting (Request Rate Limiting)
Nginx Rate Limiting uses the Leaky Bucket algorithm. Requests enter the bucket (zone) and are processed at a configured rate.
# -- Define Zone in HTTP Context --
# Key: client IP
# Shared memory: 10MB (approx. 160,000 IP addresses)
# Rate: 10 requests per second
limit_req_zone $binary_remote_addr zone=general:10m rate=10r/s;
# Login endpoint: 1 per second (Brute Force prevention)
limit_req_zone $binary_remote_addr zone=login:10m rate=1r/s;
# API endpoint: 50 per second
limit_req_zone $binary_remote_addr zone=api:10m rate=50r/s;
# Response code and log level when the rate limit is exceeded
limit_req_status 429;        # 429 Too Many Requests instead of the default 503
limit_req_log_level warn;
server {
    listen 80;
    server_name example.com;

    # -- General pages --
    location / {
        limit_req zone=general burst=20 nodelay;
        # burst=20: allow up to 20 excess requests momentarily
        # nodelay: process requests within the burst range immediately, without delay
        proxy_pass http://backend;
    }

    # -- Login page --
    location /login {
        limit_req zone=login burst=5 nodelay;
        proxy_pass http://backend;
    }

    # -- API --
    location /api/ {
        limit_req zone=api burst=100 nodelay;
        proxy_pass http://backend;
    }
}
burst and nodelay behavior:
Rate: 10r/s, Burst: 20
30 requests arrive at time 0:
├── 21 admitted: 1 within the rate + 20 into the burst queue
│     Without nodelay: the queued 20 are drained one per 100ms (1/rate), i.e. over 2 seconds
│     With nodelay:    all 21 are processed immediately (queue slots are still consumed)
└── 9 rejected with 429
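Several `limit_req` directives may be applied in the same context, in which case every zone must admit the request. This allows layering a steady per-IP rate under a coarse per-server ceiling; a sketch:

```nginx
# http context
limit_req_zone $binary_remote_addr zone=per_ip:10m     rate=10r/s;
limit_req_zone $server_name        zone=per_server:1m  rate=1000r/s;

location /api/ {
    limit_req zone=per_ip     burst=20 nodelay;
    limit_req zone=per_server burst=200;    # both limits are enforced
    proxy_pass http://backend;
}
```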
7.2 Connection Limiting (Concurrent Connection Limiting)
# Limit concurrent connections per client IP
limit_conn_zone $binary_remote_addr zone=conn_per_ip:10m;
# Limit total concurrent connections per server
limit_conn_zone $server_name zone=conn_per_server:10m;
limit_conn_status 429;
limit_conn_log_level warn;
server {
    listen 80;
    server_name example.com;

    # Max 100 concurrent connections per IP
    limit_conn conn_per_ip 100;

    # Max 10,000 concurrent connections per server
    limit_conn conn_per_server 10000;

    # Download bandwidth limiting (optional)
    location /downloads/ {
        limit_conn conn_per_ip 5;    # Limit downloads to 5 per IP
        limit_rate 500k;             # 500KB/s speed limit per connection
        limit_rate_after 10m;        # No limit for the first 10MB, then limited
    }
}
7.3 IP Whitelist Combined with Rate Limiting
# Exempt internal network from Rate Limiting
geo $limit {
    default         1;
    10.0.0.0/8      0;    # Internal network
    192.168.0.0/16  0;    # Internal network
    172.16.0.0/12   0;    # Internal network
}

map $limit $limit_key {
    0 "";                     # Empty key -> rate limiting not applied
    1 $binary_remote_addr;    # External IP -> rate limiting applied
}
limit_req_zone $limit_key zone=api:10m rate=10r/s;
8. Gzip/Brotli Compression
8.1 Gzip Compression Settings
# /etc/nginx/conf.d/gzip.conf
gzip on;
gzip_vary on; # Add Vary: Accept-Encoding header
gzip_proxied any; # Compress proxy responses too
gzip_comp_level 6; # Compression level (1-9, 6 balances performance/compression)
gzip_min_length 1000; # Files under 1KB have no compression benefit -> exclude
gzip_buffers 16 8k; # Compression buffers
gzip_types
    text/plain
    text/css
    text/javascript
    text/xml
    application/javascript
    application/json
    application/xml
    application/rss+xml
    application/atom+xml
    application/vnd.ms-fontobject
    font/opentype
    font/ttf
    image/svg+xml
    image/x-icon;
# Exclude already compressed files (images, videos not in gzip_types)
gzip_disable "msie6"; # Disable for IE6 (legacy)
8.2 Brotli Compression Settings
Brotli provides 15-25% better compression ratios than Gzip. Most modern browsers support it; on the Nginx side it requires the third-party ngx_brotli module, which is not bundled with stock Nginx.
# Brotli dynamic compression
brotli on;
brotli_comp_level 6; # Level 6 recommended for dynamic compression (11 causes CPU overload)
brotli_min_length 1000;
brotli_types
    text/plain
    text/css
    text/javascript
    text/xml
    application/javascript
    application/json
    application/xml
    application/rss+xml
    font/opentype
    font/ttf
    image/svg+xml;
# Brotli static compression (serve pre-compressed .br files)
brotli_static on;
8.3 Dual Compression Strategy
Serve Brotli to browsers that support it, fall back to Gzip for those that do not.
# Pre-compress static files during build (CI/CD pipeline)
# gzip -k -9 dist/**/*.{js,css,html,json,svg}
# brotli -k -q 11 dist/**/*.{js,css,html,json,svg}
# Nginx configuration
brotli_static on; # Serve .br files first if available
gzip_static on; # Serve .gz files (when Brotli not supported)
gzip on; # Dynamic gzip if no pre-compressed file exists
Compression Performance Comparison:
| Algorithm | Compression Ratio (typical JS) | CPU Load (dynamic) | Browser Support |
|---|---|---|---|
| Gzip L6 | 70-75% | Low | 99%+ |
| Brotli L6 | 75-80% | Medium | 96%+ |
| Brotli L11 | 80-85% | High (static only) | 96%+ |
9. Security Headers
9.1 Comprehensive Security Header Configuration
# /etc/nginx/conf.d/security-headers.conf
# -- Clickjacking Prevention --
add_header X-Frame-Options "DENY" always;
# DENY: No site can embed via iframe
# SAMEORIGIN: Only same domain allowed in iframe
# Note: CSP frame-ancestors is the more modern alternative
# -- MIME Type Sniffing Prevention --
add_header X-Content-Type-Options "nosniff" always;
# Prevent browsers from ignoring Content-Type and making their own determination
# -- XSS Filter (Legacy, modern browsers use CSP) --
add_header X-XSS-Protection "0" always;
# Latest recommendation: "0" (disabled) -- CSP is safer and more accurate
# -- Referrer Information Control --
add_header Referrer-Policy "strict-origin-when-cross-origin" always;
# Same domain: Send full URL
# Different domain: Send only origin (domain)
# HTTP to HTTPS downgrade: Send nothing
# -- Permissions Policy (camera, microphone, location control) --
add_header Permissions-Policy "camera=(), microphone=(), geolocation=(), payment=()" always;
# Disable all features (only enable what is needed)
# -- HSTS --
add_header Strict-Transport-Security "max-age=63072000; includeSubDomains; preload" always;
# -- Cross-Origin Policies --
add_header Cross-Origin-Opener-Policy "same-origin" always;
add_header Cross-Origin-Embedder-Policy "require-corp" always;
add_header Cross-Origin-Resource-Policy "same-origin" always;
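One nginx-specific pitfall when using such a snippet: `add_header` directives are inherited from the enclosing level only if the current level defines none of its own, so a single `add_header` inside a `location` silently drops every header from the included snippet. Re-include it where needed:

```nginx
server {
    include /etc/nginx/conf.d/security-headers.conf;

    location /downloads/ {
        # This add_header cancels inheritance of ALL server-level add_header
        # directives, so the snippet must be included again here
        add_header Content-Disposition "attachment";
        include /etc/nginx/conf.d/security-headers.conf;
    }
}
```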
9.2 Content Security Policy (CSP)
CSP is the most powerful security header but complex to configure. It is recommended to start with Report-Only mode to monitor violations, then gradually apply the policy.
# -- Step 1: Report-Only mode (report violations without blocking) --
add_header Content-Security-Policy-Report-Only
    "default-src 'self';
     script-src 'self' https://cdn.example.com;
     style-src 'self' 'unsafe-inline' https://fonts.googleapis.com;
     img-src 'self' data: https:;
     font-src 'self' https://fonts.gstatic.com;
     connect-src 'self' https://api.example.com;
     frame-ancestors 'none';
     base-uri 'self';
     form-action 'self';
     report-uri /csp-report;" always;

# -- Step 2: Enforce (block on violation) --
add_header Content-Security-Policy
    "default-src 'self';
     script-src 'self' https://cdn.example.com;
     style-src 'self' 'unsafe-inline' https://fonts.googleapis.com;
     img-src 'self' data: https:;
     font-src 'self' https://fonts.gstatic.com;
     connect-src 'self' https://api.example.com;
     frame-ancestors 'none';
     base-uri 'self';
     form-action 'self';" always;
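The `report-uri /csp-report` directive assumes something answers at that path. A minimal receiving location (forwarding the browser's POSTed JSON report to the backend is one option) might look like:

```nginx
location = /csp-report {
    # Browsers POST JSON violation reports here
    access_log /var/log/nginx/csp-report.log;
    proxy_pass http://backend;
}
```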
9.3 Integrated Application in Server Block
server {
    listen 443 ssl http2;
    server_name example.com;

    include /etc/nginx/conf.d/ssl-params.conf;
    include /etc/nginx/conf.d/security-headers.conf;

    # Additional security-related settings
    server_tokens off;            # Hide Nginx version information
    more_clear_headers Server;    # Remove the Server header (requires the headers-more module)

    # ...
}
10. Access Control
10.1 IP-Based Access Control
# -- Admin page: Allow only specific IPs --
location /admin/ {
    allow 10.0.0.0/8;      # Internal network
    allow 203.0.113.50;    # Specific admin IP
    deny  all;             # Deny all others

    proxy_pass http://backend;
}

# -- Block specific IPs --
location / {
    deny  192.168.1.100;   # Block a specific IP
    deny  10.0.0.0/24;     # Block a subnet
    allow all;             # Allow all others
    # Note: the order of allow/deny matters! The first matching rule wins

    proxy_pass http://backend;
}
10.2 HTTP Basic Authentication
# Create htpasswd file
sudo apt install apache2-utils # Debian/Ubuntu
# sudo yum install httpd-tools # RHEL/CentOS
# Create user (-c: create new file, -B: bcrypt hash)
sudo htpasswd -cB /etc/nginx/.htpasswd admin
# Add user
sudo htpasswd -B /etc/nginx/.htpasswd developer
# -- Apply Basic Auth to a specific path --
location /admin/ {
    auth_basic "Administrator Area";              # Auth prompt message
    auth_basic_user_file /etc/nginx/.htpasswd;    # Password file

    proxy_pass http://backend;
}

# -- Exempt a specific path from auth --
location /admin/health {
    auth_basic off;    # Health checks exempt from auth
    proxy_pass http://backend;
}
10.3 IP + Auth Combined (satisfy directive)
location /admin/ {
    # satisfy any -> access allowed if the IP is permitted OR auth succeeds
    # satisfy all -> access allowed only when BOTH IP and auth checks pass
    satisfy any;

    # IP whitelist
    allow 10.0.0.0/8;
    deny  all;

    # Basic Auth (for access from outside the IP whitelist)
    auth_basic "Restricted";
    auth_basic_user_file /etc/nginx/.htpasswd;

    proxy_pass http://backend;
}
10.4 GeoIP-Based Access Control
# Requires the GeoIP2 module (ngx_http_geoip2_module)
geoip2 /usr/share/GeoIP/GeoLite2-Country.mmdb {
    auto_reload 60m;
    $geoip2_data_country_code country iso_code;
}

# Block specific countries
map $geoip2_data_country_code $allowed_country {
    default yes;
    CN      no;    # China
    RU      no;    # Russia
}

server {
    if ($allowed_country = no) {
        return 403;
    }
}
11. Logging Configuration
11.1 Basic Log Configuration
# -- Access Log --
# The predefined combined format, shown for reference -- nginx defines it
# internally, so redeclaring the name "combined" causes a duplicate-name error:
#
#   log_format combined '$remote_addr - $remote_user [$time_local] '
#                       '"$request" $status $body_bytes_sent '
#                       '"$http_referer" "$http_user_agent"';

access_log /var/log/nginx/access.log combined;

# -- Error Log --
# Levels: debug, info, notice, warn, error, crit, alert, emerg
error_log /var/log/nginx/error.log warn;
11.2 JSON Log Format (Log Analysis Tool Integration)
JSON-formatted structured logging is essential for integration with analysis tools like Elasticsearch, Datadog, and Splunk.
log_format json_combined escape=json
    '{'
    '"time":"$time_iso8601",'
    '"remote_addr":"$remote_addr",'
    '"remote_user":"$remote_user",'
    '"request_method":"$request_method",'
    '"request_uri":"$request_uri",'
    '"server_protocol":"$server_protocol",'
    '"status":$status,'
    '"body_bytes_sent":$body_bytes_sent,'
    '"request_time":$request_time,'
    '"http_referer":"$http_referer",'
    '"http_user_agent":"$http_user_agent",'
    '"http_x_forwarded_for":"$http_x_forwarded_for",'
    '"upstream_addr":"$upstream_addr",'
    '"upstream_status":"$upstream_status",'
    '"upstream_response_time":"$upstream_response_time",'
    '"ssl_protocol":"$ssl_protocol",'
    '"ssl_cipher":"$ssl_cipher",'
    '"request_id":"$request_id"'
    '}';

access_log /var/log/nginx/access.json.log json_combined;
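The `$request_id` field becomes far more useful when the same ID also reaches the backend, letting application logs be joined with access logs:

```nginx
location / {
    proxy_pass http://backend;
    proxy_set_header X-Request-ID $request_id;    # propagate the access-log ID downstream
}
```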
11.3 Conditional Logging
# -- Exclude health check request logs --
map $request_uri $loggable {
    ~*^/health  0;
    ~*^/ready   0;
    ~*^/metrics 0;
    default     1;
}

access_log /var/log/nginx/access.log combined if=$loggable;

# -- Log only error requests separately --
map $status $is_error {
    ~^[45]  1;
    default 0;
}

access_log /var/log/nginx/error_requests.log combined if=$is_error;

# -- Log slow requests (over 1 second) --
map $request_time $is_slow {
    ~^[1-9] 1;    # $request_time starts with a non-zero digit -> 1 second or more
    default 0;
}

access_log /var/log/nginx/slow_requests.log json_combined if=$is_slow;
11.4 Per-Domain Log Separation
server {
    server_name example.com;
    access_log /var/log/nginx/example.com.access.log json_combined;
    error_log  /var/log/nginx/example.com.error.log warn;
}

server {
    server_name api.example.com;
    access_log /var/log/nginx/api.example.com.access.log json_combined;
    error_log  /var/log/nginx/api.example.com.error.log warn;
}
11.5 Log Rotation (logrotate)
# /etc/logrotate.d/nginx
/var/log/nginx/*.log {
    daily                     # Rotate daily
    missingok                 # No error if a log file is missing
    rotate 14                 # Keep 14 days
    compress                  # gzip compression
    delaycompress             # Don't compress the most recently rotated file
    notifempty                # Don't rotate empty files
    create 0640 nginx adm     # New file permissions
    sharedscripts
    postrotate
        # Signal Nginx to reopen log files
        if [ -f /run/nginx.pid ]; then
            kill -USR1 $(cat /run/nginx.pid)
        fi
    endscript
}
12. Performance Tuning
12.1 Worker Processes and Connections
# -- Main Context --
worker_processes auto; # Match CPU core count (manual: 4, 8, etc.)
worker_rlimit_nofile 65535; # Max file descriptors per Worker
events {
worker_connections 4096; # Max concurrent connections per Worker
multi_accept on; # Accept multiple connections per event loop
use epoll; # Linux: epoll (default)
}
Maximum Concurrent Connections Formula:
Max connections = worker_processes x worker_connections
Example: 4 workers x 4096 connections = 16,384 concurrent connections
With a reverse proxy (each client consumes two connections: one to the client, one to the backend):
Actual concurrent clients = 16,384 / 2 = 8,192
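The same back-of-the-envelope calculation, as a quick sketch:

```python
worker_processes = 4          # what "auto" resolves to on a 4-core machine
worker_connections = 4096

max_connections = worker_processes * worker_connections
assert max_connections == 16384

# When reverse proxying, each client costs two connections:
# one client<->Nginx, one Nginx<->backend.
max_proxied_clients = max_connections // 2
assert max_proxied_clients == 8192
```

Remember that worker_rlimit_nofile must leave headroom, since every connection holds at least one file descriptor.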
12.2 Keepalive Settings
http {
# -- Client Keepalive --
keepalive_timeout 65; # Client keepalive duration (seconds)
keepalive_requests 1000; # Max requests per keepalive connection
# -- Timeouts --
client_body_timeout 12; # Client request body receive timeout
client_header_timeout 12; # Client request header receive timeout
send_timeout 10; # Response send timeout to client
reset_timedout_connection on; # Immediately reset timed-out connections (free memory)
}
12.3 Buffer Settings
http {
# -- Client Request Buffers --
client_body_buffer_size 16k; # Request body buffer (writes to disk when exceeded)
client_header_buffer_size 1k; # Request header buffer
client_max_body_size 100m; # Maximum upload size (default 1MB)
large_client_header_buffers 4 16k; # Buffer for large headers
# -- Proxy Buffers --
proxy_buffers 16 32k; # Backend response storage buffers
proxy_buffer_size 16k; # First response (header) buffer
proxy_busy_buffers_size 64k; # Buffer size being sent to client
proxy_temp_file_write_size 64k; # Temp file write size to disk
}
12.4 Comprehensive Production Performance Tuning
# /etc/nginx/nginx.conf -- Production Optimization
user nginx;
worker_processes auto;
worker_rlimit_nofile 65535;
pid /run/nginx.pid;
events {
worker_connections 4096;
multi_accept on;
use epoll;
}
http {
# -- MIME & Basic Settings --
include /etc/nginx/mime.types;
default_type application/octet-stream;
charset utf-8;
server_tokens off; # Hide version information
# -- File Transfer Optimization --
sendfile on; # Kernel-level file transfer
tcp_nopush on; # Used with sendfile: packet optimization
tcp_nodelay on; # Disable Nagle algorithm
aio threads; # Async file I/O via thread pool (plain "aio on" only takes effect with directio on Linux)
# -- Keepalive --
keepalive_timeout 65;
keepalive_requests 1000;
# -- Timeouts --
client_body_timeout 12;
client_header_timeout 12;
send_timeout 10;
reset_timedout_connection on;
# -- Buffers --
client_body_buffer_size 16k;
client_header_buffer_size 1k;
client_max_body_size 100m;
large_client_header_buffers 4 16k;
# -- Open File Cache --
open_file_cache max=10000 inactive=30s;
open_file_cache_valid 60s;
open_file_cache_min_uses 2;
open_file_cache_errors on;
# -- Compression, SSL, Logging includes --
include /etc/nginx/conf.d/*.conf;
include /etc/nginx/sites-enabled/*.conf;
}
13. URL Rewriting and Redirection
13.1 return Directive (Recommended)
return is simpler and more efficient than rewrite. For most URL change cases, return should be considered first.
# -- HTTP to HTTPS Redirect --
server {
listen 80;
server_name example.com www.example.com;
return 301 https://$host$request_uri;
}
# -- www to non-www normalization --
server {
listen 443 ssl http2;
server_name www.example.com;
return 301 https://example.com$request_uri;
}
# -- Domain change redirect --
server {
listen 80;
listen 443 ssl http2;
server_name old-domain.com www.old-domain.com;
return 301 https://new-domain.com$request_uri;
}
# -- Specific path redirect --
location /old-page {
return 301 /new-page;
}
# -- Maintenance mode --
location / {
return 503; # Service Unavailable
}
# Combine with error_page
error_page 503 /maintenance.html;
location = /maintenance.html {
root /var/www/html;
internal;
}
13.2 rewrite Directive
rewrite is used when regex-based URL transformation is needed.
# -- Basic syntax --
# rewrite regex replacement [flag];
# flag: last | break | redirect (302) | permanent (301)
# -- Convert to versionless API path --
rewrite ^/api/v1/(.*)$ /api/$1 last;
# /api/v1/users -> /api/users (internal rewrite)
# last: Start new location matching
# -- Remove extensions (Clean URL) --
rewrite ^/(.*)\.html$ /$1 permanent;
# /about.html -> /about (301 redirect)
# -- Rewrite with query string --
rewrite ^/search/(.*)$ /search?q=$1? last;
# Appending ? at end removes original query string
# -- Multilingual URLs --
rewrite ^/ko/(.*)$ /$1?lang=ko last;
rewrite ^/en/(.*)$ /$1?lang=en last;
rewrite ^/ja/(.*)$ /$1?lang=ja last;
rewrite flag comparison:
| Flag | Behavior | Use Case |
|---|---|---|
| last | Rewrite, then restart location matching | Internal routing change |
| break | Rewrite, then continue within the current location | Transformation within the current block |
| redirect | 302 temporary redirect | Temporary move |
| permanent | 301 permanent redirect | Permanent move |
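The rewrite directives above are plain regex substitutions; the API-versioning example can be checked with the identical pattern in Python (a sketch of the matching only — Nginx applies it to the normalized URI, before the query string):

```python
import re

# Same pattern and replacement as: rewrite ^/api/v1/(.*)$ /api/$1 last;
def strip_api_version(uri: str) -> str:
    return re.sub(r"^/api/v1/(.*)$", r"/api/\1", uri)

assert strip_api_version("/api/v1/users") == "/api/users"
assert strip_api_version("/api/v1/users/42/orders") == "/api/users/42/orders"
assert strip_api_version("/health") == "/health"  # non-matching URI is untouched
```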
13.3 try_files Directive
try_files sequentially checks for file/directory existence and is essential for SPAs and frameworks.
# -- SPA (React, Vue, Angular, etc.) --
location / {
root /var/www/spa;
try_files $uri $uri/ /index.html;
# 1. $uri: Check if requested file exists
# 2. $uri/: Check if directory exists
# 3. /index.html: If none above exist, serve index.html (SPA routing)
}
# -- Next.js / Nuxt.js --
location / {
try_files $uri $uri/ @proxy;
}
location @proxy {
proxy_pass http://127.0.0.1:3000;
include /etc/nginx/conf.d/proxy-params.conf;
}
# -- PHP (WordPress, Laravel) --
location / {
try_files $uri $uri/ /index.php?$args;
}
# -- Static files first -> backend fallback --
location / {
root /var/www/static;
try_files $uri @backend;
}
location @backend {
proxy_pass http://app_server;
}
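The resolution order of try_files can be modeled as a small lookup helper (an illustrative function, not an Nginx API): given which paths exist under the document root, the first existing candidate wins, and the last argument is the unconditional fallback.

```python
def try_files(candidates, fallback, existing):
    """Mimic try_files: serve the first candidate that exists, else fall back."""
    existing = set(existing)
    for path in candidates:
        if path in existing:
            return path
    return fallback

on_disk = {"/assets/app.js", "/index.html"}

# try_files $uri $uri/ /index.html;  for an SPA route like /dashboard
assert try_files(["/dashboard", "/dashboard/"], "/index.html", on_disk) == "/index.html"

# A real static asset short-circuits before the fallback.
assert try_files(["/assets/app.js", "/assets/app.js/"], "/index.html", on_disk) == "/assets/app.js"
```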
13.4 Conditional Redirects
# -- Mobile device redirect --
if ($http_user_agent ~* "(Android|iPhone|iPad)") {
return 302 https://m.example.com$request_uri;
}
# -- Based on specific query parameter --
if ($arg_redirect) {
return 302 $arg_redirect;
}
# -- Manage complex redirects with map --
map $request_uri $redirect_uri {
/old-blog/post-1 /blog/new-post-1;
/old-blog/post-2 /blog/new-post-2;
/products/legacy /shop/all;
default "";
}
server {
if ($redirect_uri) {
return 301 $redirect_uri;
}
}
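Since map performs a static lookup, the redirect table above behaves like a dictionary lookup whose default is the empty string (falsy, so the if block is skipped):

```python
# The same table as the map block, as a plain dictionary.
redirects = {
    "/old-blog/post-1": "/blog/new-post-1",
    "/old-blog/post-2": "/blog/new-post-2",
    "/products/legacy": "/shop/all",
}

def redirect_target(request_uri: str) -> str:
    return redirects.get(request_uri, "")   # map's default ""

assert redirect_target("/old-blog/post-1") == "/blog/new-post-1"
assert redirect_target("/anything-else") == ""   # no 301 issued
```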
14. Static File Serving Optimization
14.1 Core File Transfer Directives
http {
# -- sendfile --
# Use the kernel's sendfile() system call to transfer files directly to sockets
# Eliminates user-space memory copy, reducing CPU usage and context switching
sendfile on;
# -- tcp_nopush --
# Works with sendfile: sends HTTP headers and file start in a single packet
# Reduces network packet count for better bandwidth efficiency
tcp_nopush on;
# -- tcp_nodelay --
# Disable Nagle algorithm: send small packets immediately
# Reduces latency on keepalive connections (can be used with tcp_nopush)
tcp_nodelay on;
}
Transfer Method Comparison:
sendfile off (default):
Disk -> Kernel Buffer -> User Memory (Nginx) -> Kernel Socket Buffer -> Network
[read] [copy] [write]
sendfile on:
Disk -> Kernel Buffer ─────────────────────────-> Kernel Socket Buffer -> Network
[read] [zero-copy: no CPU involvement] [transfer]
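The zero-copy path in the diagram is the sendfile() system call itself, which Python exposes as os.sendfile. A minimal Linux-only demonstration transferring a file between two descriptors without the payload passing through this process's user-space buffers:

```python
import os
import tempfile

with tempfile.TemporaryDirectory() as d:
    src_path = os.path.join(d, "src.bin")
    dst_path = os.path.join(d, "dst.bin")

    with open(src_path, "wb") as f:
        f.write(b"hello from the kernel\n")

    # The kernel copies directly from the source fd to the destination fd.
    with open(src_path, "rb") as fin, open(dst_path, "wb") as fout:
        sent = os.sendfile(fout.fileno(), fin.fileno(), 0, 1024)

    with open(dst_path, "rb") as f:
        data = f.read()

assert data == b"hello from the kernel\n"
```

On Linux the destination may be a regular file; on macOS/BSD sendfile only writes to sockets, which is also the case Nginx actually uses.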
14.2 Open File Cache
Caches file descriptors, sizes, and modification times of frequently requested files to minimize filesystem lookups.
http {
# Cache up to 10,000 file info entries, remove after 30s of inactivity
open_file_cache max=10000 inactive=30s;
# Re-validate cached info every 60 seconds
open_file_cache_valid 60s;
# Only cache files requested 2+ times (filter one-time requests)
open_file_cache_min_uses 2;
# Cache file-not-found (ENOENT) errors too (prevent repeated lookups for missing files)
open_file_cache_errors on;
}
14.3 Comprehensive Static File Serving Configuration
server {
listen 443 ssl http2;
server_name static.example.com;
root /var/www/static;
# -- Images --
location ~* \.(jpg|jpeg|png|gif|ico|webp|avif|svg)$ {
expires 30d;
add_header Cache-Control "public, immutable";
add_header Vary "Accept-Encoding";
access_log off;
log_not_found off; # Disable 404 logging too
# Image-specific limits
limit_rate 2m; # 2MB/s per connection
}
# -- CSS/JS (with cache busting strategy) --
location ~* \.(css|js)$ {
expires 365d; # Long-term cache if hash-based filenames
add_header Cache-Control "public, immutable";
gzip_static on;
brotli_static on; # requires the ngx_brotli module
}
# -- Fonts --
location ~* \.(woff|woff2|ttf|eot|otf)$ {
expires 365d;
add_header Cache-Control "public, immutable";
add_header Access-Control-Allow-Origin "*";
}
# -- Media files --
location ~* \.(mp4|webm|ogg|mp3|wav)$ {
expires 30d;
add_header Cache-Control "public";
# Range request support (video seeking)
add_header Accept-Ranges bytes;
}
# -- Prevent directory listing --
location / {
autoindex off;
}
# -- Block access to hidden files --
location ~ /\. {
deny all;
access_log off;
log_not_found off;
}
}
14.4 HTTP/2 Server Push (Optional)
# Pre-send key resources before client requests them
location = /index.html {
http2_push /css/main.css;
http2_push /js/app.js;
http2_push /images/logo.webp;
}
Note: HTTP/2 Server Push has been dropped by major browsers (Chrome removed it in version 106) and removed from Nginx itself as of 1.25.1; 103 Early Hints is emerging as the alternative.
15. Health Checks and Monitoring
15.1 Stub Status (Basic Monitoring)
The stub_status module included in Nginx OSS provides real-time connection status.
server {
listen 8080; # Separate port
server_name localhost;
# Allow access only from internal network
allow 10.0.0.0/8;
allow 127.0.0.1;
deny all;
location /nginx_status {
stub_status;
}
location /health {
access_log off;
default_type text/plain;
return 200 "OK\n";
}
}
stub_status output example:
Active connections: 291
server accepts handled requests
16630948 16630948 31070465
Reading: 6 Writing: 179 Waiting: 106
| Field | Meaning |
|---|---|
| Active connections | Current active connections (Reading + Writing + Waiting) |
| accepts | Total accepted connections |
| handled | Total handled connections (equal to accepts when healthy) |
| requests | Total processed requests |
| Reading | Connections reading client request headers |
| Writing | Connections sending a response to the client |
| Waiting | Idle keepalive connections |
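Because this output has a fixed shape, scraping it is a small regex job; a parser assuming exactly the format shown above:

```python
import re

sample = """Active connections: 291
server accepts handled requests
 16630948 16630948 31070465
Reading: 6 Writing: 179 Waiting: 106
"""

def parse_stub_status(text: str) -> dict:
    # The seven integers always appear in the same order.
    nums = [int(n) for n in re.findall(r"\d+", text)]
    keys = ["active", "accepts", "handled", "requests",
            "reading", "writing", "waiting"]
    return dict(zip(keys, nums))

stats = parse_stub_status(sample)
assert stats["accepts"] == stats["handled"]   # no dropped connections
assert stats["active"] == stats["reading"] + stats["writing"] + stats["waiting"]
```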
15.2 Passive Health Check (Upstream Monitoring)
Nginx OSS supports only passive health checks based on actual traffic.
upstream backend {
server 10.0.0.1:8080 max_fails=3 fail_timeout=30s;
server 10.0.0.2:8080 max_fails=3 fail_timeout=30s;
server 10.0.0.3:8080 max_fails=3 fail_timeout=30s backup;
# max_fails=3: Marked unavailable after 3 failed attempts within the fail_timeout window
# fail_timeout=30s: No requests are sent to the server for 30s after it is marked unavailable
# After 30s, requests are forwarded again to check for recovery
}
server {
location / {
proxy_pass http://backend;
# Define what responses count as "failure"
proxy_next_upstream error timeout http_500 http_502 http_503 http_504;
proxy_next_upstream_timeout 10s; # Max time to try next server
proxy_next_upstream_tries 3; # Max retry count
}
}
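The max_fails/fail_timeout bookkeeping can be sketched as a tiny state machine per upstream peer (an illustrative model of the semantics, not Nginx's actual implementation):

```python
import time

class PassiveHealth:
    """Model of max_fails / fail_timeout for a single upstream server."""

    def __init__(self, max_fails=3, fail_timeout=30.0, clock=time.monotonic):
        self.max_fails = max_fails
        self.fail_timeout = fail_timeout
        self.clock = clock
        self.fails = 0
        self.window_start = None

    def record_failure(self):
        now = self.clock()
        # Failures are counted within a rolling fail_timeout window.
        if self.window_start is None or now - self.window_start > self.fail_timeout:
            self.window_start, self.fails = now, 0
        self.fails += 1

    def available(self):
        if self.fails >= self.max_fails:
            # Down for fail_timeout, then eligible for requests again.
            return self.clock() - self.window_start > self.fail_timeout
        return True

t = [0.0]                                   # fake clock for the demo
peer = PassiveHealth(clock=lambda: t[0])
for _ in range(3):
    peer.record_failure()
assert not peer.available()                 # marked down after 3 failures
t[0] = 31.0
assert peer.available()                     # traffic resumes once 30s have passed
```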
15.3 Active Health Check (NGINX Plus or External Solutions)
# -- Active health check in NGINX Plus --
upstream backend {
zone backend_zone 64k; # Shared memory zone required
server 10.0.0.1:8080;
server 10.0.0.2:8080;
}
server {
location / {
proxy_pass http://backend;
health_check interval=5s # Check every 5 seconds
fails=3 # Marked unhealthy after 3 failures
passes=2 # Recovered after 2 successes
uri=/health; # Health check endpoint
}
}
15.4 Prometheus Integration (nginx-prometheus-exporter)
# docker-compose.yml
services:
nginx-exporter:
image: nginx/nginx-prometheus-exporter:1.3
command:
- --nginx.scrape-uri=http://nginx:8080/nginx_status
ports:
- '9113:9113'
depends_on:
- nginx
# prometheus.yml
scrape_configs:
- job_name: 'nginx'
static_configs:
- targets: ['nginx-exporter:9113']
15.5 Custom Health Check Endpoints
# -- Liveness Probe (Nginx itself operational check) --
location = /healthz {
access_log off;
default_type text/plain;
return 200 "alive\n";
}
# -- Readiness Probe (including backend connection check) --
location = /readyz {
access_log off;
proxy_pass http://backend/health;
proxy_connect_timeout 2s;
proxy_read_timeout 2s;
# Return 503 on backend response failure
error_page 502 503 504 = @not_ready;
}
location @not_ready {
default_type text/plain;
return 503 "not ready\n";
}
# Usage in Kubernetes
apiVersion: v1
kind: Pod
spec:
containers:
- name: nginx
livenessProbe:
httpGet:
path: /healthz
port: 8080
initialDelaySeconds: 5
periodSeconds: 10
readinessProbe:
httpGet:
path: /readyz
port: 8080
initialDelaySeconds: 5
periodSeconds: 5
Production Configuration Checklist
A summary of essential items to verify when deploying Nginx to production environments.
| Category | Item | Status |
|---|---|---|
| Architecture | worker_processes auto configured | [ ] |
| Architecture | worker_connections set to appropriate value | [ ] |
| SSL/TLS | TLSv1.2 + TLSv1.3 only enabled | [ ] |
| SSL/TLS | Strong Cipher Suites configured | [ ] |
| SSL/TLS | OCSP Stapling enabled | [ ] |
| SSL/TLS | HSTS header configured | [ ] |
| SSL/TLS | HTTP to HTTPS redirect | [ ] |
| Security | server_tokens off | [ ] |
| Security | Security headers (CSP, X-Frame-Options, etc.) configured | [ ] |
| Security | Rate Limiting configured | [ ] |
| Security | Admin page access control | [ ] |
| Performance | sendfile, tcp_nopush, tcp_nodelay enabled | [ ] |
| Performance | Gzip/Brotli compression configured | [ ] |
| Performance | open_file_cache configured | [ ] |
| Performance | Static file browser caching configured | [ ] |
| Performance | Upstream keepalive configured | [ ] |
| Caching | proxy_cache or fastcgi_cache configured | [ ] |
| Caching | proxy_cache_use_stale configured | [ ] |
| Monitoring | stub_status enabled (internal only) | [ ] |
| Monitoring | Health check endpoints configured | [ ] |
| Logging | JSON log format configured | [ ] |
| Logging | Log rotation configured | [ ] |
| Logging | Health check/static file logging excluded | [ ] |
| Proxy | Essential proxy headers configured | [ ] |
| Proxy | WebSocket proxy configured (if needed) | [ ] |
| Load Balancing | Appropriate algorithm selected | [ ] |
| Load Balancing | Backup server configured | [ ] |
References
- Nginx Official Documentation
- Nginx Beginner's Guide
- DigitalOcean - Understanding Nginx Configuration File Structure
- Inside NGINX: How We Designed for Performance and Scale
- NGINX Reverse Proxy Guide
- Nginx WebSocket Proxying
- NGINX HTTP Load Balancing
- NGINX TLS 1.3 Hardening Guide
- A Guide to Caching with NGINX
- Rate Limiting with NGINX
- NGINX Gzip Compression Guide
- Tuning NGINX for Performance
- Creating NGINX Rewrite Rules
- NGINX sendfile, tcp_nopush, tcp_nodelay Explained
- NGINX Health Checks
- Mozilla SSL Configuration Generator