Nginx Complete Guide 2025: Reverse Proxy, Load Balancing, SSL, Caching & Security


1. What is Nginx

Nginx (pronounced "engine-x") is a high-performance web server created by Igor Sysoev in 2004 to solve the C10K problem. Today it holds the largest web server market share worldwide and goes well beyond serving static pages, acting as a reverse proxy, load balancer, HTTP cache, and API gateway.

1.1 Nginx vs Apache

Feature                | Nginx                          | Apache
Architecture           | Event-driven (asynchronous)    | Process/thread-based
Concurrent connections | Tens to hundreds of thousands  | Thousands (depends on MPM)
Memory usage           | A few KB per connection        | A few MB per connection
Static files           | Very fast                      | Fast
Dynamic content        | Proxied (FastCGI/uWSGI)        | Built-in modules (mod_php)
Configuration          | Centralized                    | Can be distributed via .htaccess
URL rewriting          | location blocks                | mod_rewrite
Load balancing         | Built-in                       | Separate module required
Market share           | ~34% (#1)                      | ~29% (#2)

1.2 Event-Driven Architecture

Apache (process model):
┌─────────────┐
│   Master    │
├─────────────┤
│  Worker 1   │ → Client A (1 process per connection)
│  Worker 2   │ → Client B
│  Worker 3   │ → Client C
│     ...     │
│ Worker 1000 │ → Client 1000
└─────────────┘
1000 connections = 1000 processes/threads = high memory usage

Nginx (event model):
┌─────────────┐
│   Master    │
├─────────────┤
│  Worker 1   │ → handles thousands of connections via epoll/kqueue
│  Worker 2   │ → (non-blocking I/O)
│  Worker 3   │ →
│  Worker 4   │ → (one worker per CPU core)
└─────────────┘
10000 connections = 4 workers = very low memory usage

Core principles:

  • Master process: reads configuration, manages workers, handles logs
  • Worker processes: handle the actual requests (one per CPU core)
  • epoll/kqueue: OS-level event notification mechanisms
  • Non-blocking I/O: instead of waiting on one request, a worker moves on to others
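
The event-loop idea above can be sketched with Python's selectors module, which wraps the same epoll/kqueue APIs nginx uses. This is an illustrative toy, not nginx's actual implementation (nginx is written in C):

```python
import selectors
import socket

# One loop watching many sockets, no thread per connection.
sel = selectors.DefaultSelector()
a, b = socket.socketpair()              # stand-in for a client connection
a.setblocking(False)
b.setblocking(False)
sel.register(b, selectors.EVENT_READ)   # "tell me when b is readable"

a.send(b"hello")                        # make b readable
events = sel.select(timeout=1)          # wait for readiness, not for data
for key, _mask in events:
    data = key.fileobj.recv(1024)       # guaranteed not to block now
    print(data)                         # b'hello'

sel.close(); a.close(); b.close()
```

A real worker would keep looping over `sel.select()`, registering new client sockets as they connect.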

1.3 Installing Nginx

# Ubuntu/Debian
sudo apt update
sudo apt install nginx

# CentOS/RHEL
sudo yum install epel-release
sudo yum install nginx

# macOS
brew install nginx

# Docker
docker run -d -p 80:80 -p 443:443 \
  -v /path/to/nginx.conf:/etc/nginx/nginx.conf:ro \
  -v /path/to/certs:/etc/nginx/certs:ro \
  --name nginx nginx:alpine

# Check status
sudo systemctl status nginx
nginx -t  # validate configuration syntax
nginx -V  # show compile-time options

2. Configuration Structure

2.1 Configuration File Layout

/etc/nginx/
├── nginx.conf              # Main configuration
├── conf.d/                 # Additional configs (*.conf auto-loaded)
│   ├── default.conf
│   └── myapp.conf
├── sites-available/        # Available sites (Debian family)
│   └── mysite.conf
├── sites-enabled/          # Enabled sites (symlinks)
│   └── mysite.conf -> ../sites-available/mysite.conf
├── mime.types              # MIME type mappings
├── fastcgi_params          # FastCGI parameters
└── snippets/               # Reusable config fragments
    ├── ssl-params.conf
    └── proxy-params.conf

2.2 Configuration Block Hierarchy

# /etc/nginx/nginx.conf

# Global context
user nginx;
worker_processes auto;          # Auto-set to the CPU core count
worker_rlimit_nofile 65535;     # Max file descriptors per worker
error_log /var/log/nginx/error.log warn;
pid /run/nginx.pid;

# Events context
events {
    worker_connections 10240;   # Max concurrent connections per worker
    multi_accept on;            # Accept multiple connections at once
    use epoll;                  # Linux: epoll, BSD: kqueue
}

# HTTP context
http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    # Log format
    log_format main '$remote_addr - $remote_user [$time_local] '
                    '"$request" $status $body_bytes_sent '
                    '"$http_referer" "$http_user_agent" '
                    '$request_time $upstream_response_time';

    access_log /var/log/nginx/access.log main;

    # Performance settings
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;
    client_max_body_size 50m;

    # Gzip compression
    gzip on;
    gzip_vary on;
    gzip_min_length 1024;
    gzip_comp_level 5;
    gzip_types text/plain text/css application/json
               application/javascript text/xml application/xml
               application/xml+rss text/javascript image/svg+xml;

    # Include server blocks
    include /etc/nginx/conf.d/*.conf;
}

2.3 Server and Location Blocks

server {
    listen 80;
    listen [::]:80;
    server_name example.com www.example.com;

    # 301 redirect (HTTP → HTTPS)
    return 301 https://$server_name$request_uri;
}

server {
    listen 443 ssl http2;
    listen [::]:443 ssl http2;
    server_name example.com www.example.com;

    # SSL configuration
    ssl_certificate /etc/nginx/certs/fullchain.pem;
    ssl_certificate_key /etc/nginx/certs/privkey.pem;

    # Root directory
    root /var/www/html;
    index index.html index.htm;

    # Location priority (highest first)
    # 1. Exact match (=)
    location = /favicon.ico {
        log_not_found off;
        access_log off;
    }

    # 2. Preferential prefix (^~)
    location ^~ /static/ {
        alias /var/www/static/;
        expires 30d;
        add_header Cache-Control "public, immutable";
    }

    # 3. Regex (~, ~*) - case-sensitive / case-insensitive
    location ~* \.(jpg|jpeg|png|gif|ico|css|js|woff2)$ {
        expires 7d;
        add_header Cache-Control "public";
    }

    # 4. Prefix match (none or /)
    location / {
        try_files $uri $uri/ /index.html;
    }

    # API proxy
    location /api/ {
        proxy_pass http://backend;
        include snippets/proxy-params.conf;
    }

    # Error pages
    error_page 404 /404.html;
    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root /usr/share/nginx/html;
    }
}

2.4 Location Matching Priority

Priority (highest to lowest):
1. = (exact match)              location = /path
2. ^~ (preferential prefix)     location ^~ /path
3. ~ (regex, case-sensitive)    location ~ \.php$
4. ~* (regex, case-insensitive) location ~* \.(jpg|png)$
5. /path (prefix match)         location /path
6. / (default)                  location /
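
The priority rules above can be modeled in a few lines. This is a simplified sketch (it ignores nested and named locations), where `locations` is a list of (modifier, pattern) pairs in configuration-file order:

```python
import re

def match_location(uri, locations):
    # 1. '=' exact match wins immediately
    for mod, pat in locations:
        if mod == '=' and uri == pat:
            return (mod, pat)
    # Find the longest matching prefix ('' or '^~')
    best = None
    for mod, pat in locations:
        if mod in ('', '^~') and uri.startswith(pat):
            if best is None or len(pat) > len(best[1]):
                best = (mod, pat)
    # 2. '^~' on the longest prefix suppresses the regex phase
    if best and best[0] == '^~':
        return best
    # 3./4. first matching regex in config order
    for mod, pat in locations:
        if mod == '~' and re.search(pat, uri):
            return (mod, pat)
        if mod == '~*' and re.search(pat, uri, re.IGNORECASE):
            return (mod, pat)
    # 5./6. fall back to the longest prefix match
    return best

locs = [('=', '/favicon.ico'),
        ('^~', '/static/'),
        ('~*', r'\.(jpg|png|css|js)$'),
        ('', '/')]
print(match_location('/static/app.js', locs))  # ('^~', '/static/')
print(match_location('/about', locs))          # ('', '/')
```

Note how `/static/app.js` is caught by `^~ /static/` even though the `~*` regex would also match: the preferential prefix skips the regex phase entirely.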

3. Reverse Proxy Configuration

3.1 Basic Reverse Proxy

# snippets/proxy-params.conf
proxy_http_version 1.1;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header Connection "";

proxy_connect_timeout 60s;
proxy_send_timeout 60s;
proxy_read_timeout 60s;

proxy_buffering on;
proxy_buffer_size 4k;
proxy_buffers 8 4k;
proxy_busy_buffers_size 8k;

server {
    listen 443 ssl http2;
    server_name api.example.com;

    ssl_certificate /etc/nginx/certs/fullchain.pem;
    ssl_certificate_key /etc/nginx/certs/privkey.pem;

    location / {
        proxy_pass http://127.0.0.1:3000;
        include snippets/proxy-params.conf;
    }
}

3.2 How the Proxy Works

Client                      Nginx (Reverse Proxy)          Backend
  │                                │                           │
  │  GET /api/users                │                           │
  │───────────────────────────────>│                           │
  │                                │  GET /api/users           │
  │                                │  Host: api.example.com    │
  │                                │  X-Real-IP: 203.0.113.1   │
  │                                │  X-Forwarded-For: 203.0.113.1
  │                                │──────────────────────────>│
  │                                │                           │
  │                                │  200 OK                   │
  │                                │<──────────────────────────│
  │  200 OK                        │                           │
  │<───────────────────────────────│                           │

3.3 Path Rewriting

# /api/v1/users → forwarded to the backend as /users
location /api/v1/ {
    rewrite ^/api/v1/(.*)$ /$1 break;
    proxy_pass http://backend;
    include snippets/proxy-params.conf;
}

# Or include a URI in proxy_pass
location /api/v1/ {
    proxy_pass http://backend/;  # note the trailing slash!
    include snippets/proxy-params.conf;
}

# Simple redirect
location /old-page {
    return 301 /new-page;
}

# Regex-based rewrite
rewrite ^/blog/(\d{4})/(\d{2})/(.*)$ /posts/$3?year=$1&month=$2 permanent;

4. Load Balancing

4.1 Upstream Configuration

# Basic round robin (default)
upstream backend {
    server 10.0.0.1:8080;
    server 10.0.0.2:8080;
    server 10.0.0.3:8080;
}

# Least connections - route to the server with the fewest active connections
upstream backend_least {
    least_conn;
    server 10.0.0.1:8080;
    server 10.0.0.2:8080;
    server 10.0.0.3:8080;
}

# IP hash - the same client IP always hits the same server (session affinity)
upstream backend_iphash {
    ip_hash;
    server 10.0.0.1:8080;
    server 10.0.0.2:8080;
    server 10.0.0.3:8080;
}

# Weighted
upstream backend_weighted {
    server 10.0.0.1:8080 weight=5;   # 50% of traffic
    server 10.0.0.2:8080 weight=3;   # 30% of traffic
    server 10.0.0.3:8080 weight=2;   # 20% of traffic
}

# Advanced options
upstream backend_advanced {
    least_conn;
    server 10.0.0.1:8080 weight=3 max_fails=3 fail_timeout=30s;
    server 10.0.0.2:8080 weight=2 max_fails=3 fail_timeout=30s;
    server 10.0.0.3:8080 backup;      # used only when the others fail
    server 10.0.0.4:8080 down;        # temporarily disabled

    keepalive 32;                      # upstream connection pool
}

4.2 Load-Balancing Algorithm Comparison

Algorithm         | How it works                  | Pros                      | Cons                         | Best for
Round Robin       | Sequential rotation (default) | Simple, even distribution | Ignores capacity differences | Identical servers
Least Connections | Fewest active connections     | Balances actual load      | Can pile onto a new server   | Uneven request durations
IP Hash           | Keyed on client IP            | Session affinity          | Can distribute unevenly      | Session-based apps
Weight            | Weighted distribution         | Reflects capacity gaps    | Manual tuning                | Mixed server specs
Random            | Random selection              | Suits distributed setups  | Less predictable             | Large clusters
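
The weighted strategy is worth a closer look: nginx uses a "smooth weighted round-robin" that interleaves picks instead of sending runs of requests to one server. A sketch of that algorithm (each pick boosts every server by its weight, takes the current maximum, then penalizes the winner by the total weight):

```python
def smooth_wrr(servers, n):
    """servers: dict of name -> weight; returns a list of n picks."""
    current = {s: 0 for s in servers}
    total = sum(servers.values())
    picks = []
    for _ in range(n):
        for s, w in servers.items():
            current[s] += w               # everyone gains its weight
        best = max(current, key=current.get)
        current[best] -= total            # winner pays the total back
        picks.append(best)
    return picks

# weight=5/3/2: over 10 requests each server gets 5, 3 and 2 picks,
# spread out smoothly rather than in one run per server.
print(smooth_wrr({'10.0.0.1': 5, '10.0.0.2': 3, '10.0.0.3': 2}, 10))
```

After every `total` picks, each server has been chosen exactly `weight` times, matching the 50/30/20 split in the config above.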

4.3 Health Checks

# Passive health checks (open-source Nginx)
upstream backend {
    server 10.0.0.1:8080 max_fails=3 fail_timeout=30s;
    server 10.0.0.2:8080 max_fails=3 fail_timeout=30s;
}

# Active health checks (Nginx Plus or a third-party module)
# upstream backend {
#     zone backend_zone 64k;
#     server 10.0.0.1:8080;
#     server 10.0.0.2:8080;
#     health_check interval=10 fails=3 passes=2 uri=/health;
# }

5. SSL/TLS Configuration

5.1 Let's Encrypt + Certbot

# Install Certbot and issue certificates
sudo apt install certbot python3-certbot-nginx
sudo certbot --nginx -d example.com -d www.example.com

# Test automatic renewal
sudo certbot renew --dry-run

# Add automatic renewal to crontab
# 0 0,12 * * * certbot renew --quiet --post-hook "systemctl reload nginx"

5.2 Hardened SSL Configuration

# snippets/ssl-params.conf

# Protocols - allow only TLS 1.2 and 1.3
ssl_protocols TLSv1.2 TLSv1.3;

# Cipher suites
ssl_ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384;
ssl_prefer_server_ciphers on;

# DH parameters (generate with: openssl dhparam -out dhparam.pem 2048)
ssl_dhparam /etc/nginx/certs/dhparam.pem;

# SSL session cache
ssl_session_cache shared:SSL:10m;
ssl_session_timeout 1d;
ssl_session_tickets off;

# OCSP Stapling
ssl_stapling on;
ssl_stapling_verify on;
ssl_trusted_certificate /etc/nginx/certs/chain.pem;
resolver 1.1.1.1 8.8.8.8 valid=300s;

# HSTS (Strict Transport Security)
add_header Strict-Transport-Security "max-age=63072000; includeSubDomains; preload" always;

5.3 SSL Termination Pattern

Client (HTTPS)          Nginx (SSL Termination)        Backend (HTTP)
     │                          │                            │
     │  HTTPS (TLS 1.3)         │                            │
     │─────────────────────────>│                            │
     │                          │  HTTP (plain)              │
     │                          │───────────────────────────>│
     │                          │  HTTP response             │
     │                          │<───────────────────────────│
     │  HTTPS response          │                            │
     │<─────────────────────────│                            │

server {
    listen 443 ssl http2;
    server_name example.com;

    ssl_certificate /etc/nginx/certs/fullchain.pem;
    ssl_certificate_key /etc/nginx/certs/privkey.pem;
    include snippets/ssl-params.conf;

    location / {
        proxy_pass http://backend;
        proxy_set_header X-Forwarded-Proto https;
        include snippets/proxy-params.conf;
    }
}

6. Caching (Proxy Cache)

6.1 Basic Cache Configuration

http {
    # Define the cache zone
    proxy_cache_path /var/cache/nginx
        levels=1:2
        keys_zone=my_cache:10m      # 10 MB of memory for cache keys
        max_size=10g                 # max size on disk
        inactive=60m                 # evict entries unused for 60 minutes
        use_temp_path=off;

    server {
        listen 443 ssl http2;
        server_name example.com;

        # Enable caching
        location /api/ {
            proxy_cache my_cache;
            proxy_cache_valid 200 302 10m;    # cache 200/302 responses for 10 minutes
            proxy_cache_valid 404 1m;          # cache 404 responses for 1 minute
            proxy_cache_use_stale error timeout updating
                                   http_500 http_502 http_503 http_504;
            proxy_cache_background_update on;
            proxy_cache_lock on;

            # Cache key
            proxy_cache_key "$scheme$request_method$host$request_uri";

            # Expose the cache status as a header
            add_header X-Cache-Status $upstream_cache_status;

            proxy_pass http://backend;
            include snippets/proxy-params.conf;
        }

        # Static file caching (browser)
        location ~* \.(jpg|jpeg|png|gif|ico|css|js|woff2)$ {
            root /var/www/static;
            expires 30d;
            add_header Cache-Control "public, immutable";
            access_log off;
        }
    }
}

6.2 Cache Status Values

Status      | Meaning
HIT         | Served from the cache
MISS        | Not cached; fetched from the backend
EXPIRED     | Entry expired; refreshed from the backend
STALE       | Expired entry served under a use_stale policy
UPDATING    | Stale entry served while a background refresh runs
REVALIDATED | Backend returned 304; the existing entry was reused
BYPASS      | A bypass rule sent the request straight to the backend

6.3 Cache Bypass and Purging

# Skip the cache under specific conditions
location /api/ {
    proxy_cache my_cache;

    # Don't cache when a session cookie is present
    proxy_cache_bypass $cookie_session;
    proxy_no_cache $cookie_session;

    # Bypass when the request carries a Cache-Control header
    # (any non-empty value triggers proxy_cache_bypass)
    proxy_cache_bypass $http_cache_control;

    # Purge via a dedicated method
    # proxy_cache_purge $purge_method;  # Nginx Plus

    proxy_pass http://backend;
}

7. Rate Limiting

7.1 Basic Rate Limiting

http {
    # Define limit zones
    # 10 MB of memory, 10 requests per second per IP
    limit_req_zone $binary_remote_addr zone=api_limit:10m rate=10r/s;

    # 1 request per second per IP (login protection)
    limit_req_zone $binary_remote_addr zone=login_limit:10m rate=1r/s;

    # Per-API-key limiting
    limit_req_zone $http_x_api_key zone=apikey_limit:10m rate=100r/s;

    server {
        # API endpoints - allow bursts
        location /api/ {
            limit_req zone=api_limit burst=20 nodelay;
            limit_req_status 429;

            proxy_pass http://backend;
        }

        # Login - strict limit
        location /auth/login {
            limit_req zone=login_limit burst=5;
            limit_req_status 429;

            proxy_pass http://backend;
        }
    }
}

7.2 Connection Limits

http {
    # Limit concurrent connections per IP
    limit_conn_zone $binary_remote_addr zone=conn_limit:10m;

    server {
        # At most 100 concurrent connections per IP
        limit_conn conn_limit 100;

        # Throttle download bandwidth
        location /downloads/ {
            limit_conn conn_limit 5;        # 5 concurrent downloads
            limit_rate 500k;                # 500 KB/s per connection
            limit_rate_after 10m;           # first 10 MB unthrottled
        }
    }
}

7.3 Understanding Rate-Limiting Behavior

rate=10r/s, burst=20, nodelay:

Time  │ Requests │ Handled │ Notes
──────│──────────│─────────│────────────────────────────────────────────
0.0s  │    25    │   21    │ 1 (rate token) + 20 (burst) accepted at once;
      │          │         │ the remaining 4 receive 429
0.1s  │     5    │    1    │ burst bucket has 1 free slot (0.1s × 10r/s = 1)
1.0s  │     5    │    5    │ burst bucket has recovered 10 slots
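
The timeline above can be reproduced with a toy model of limit_req's token bucket for rate=10r/s, burst=20, nodelay. This is illustrative only, not nginx's actual code; `excess` tracks how far ahead of the allowed rate the client currently is:

```python
RATE, BURST = 10.0, 20

def simulate(arrivals):
    """arrivals: sorted request timestamps (seconds) -> status codes."""
    excess, last, out = 0.0, None, []
    for t in arrivals:
        if last is not None:
            excess = max(0.0, excess - (t - last) * RATE)  # drain at RATE
        last = t
        if excess + 1 > BURST + 1:   # beyond the rate token + burst slots
            out.append(429)          # rejected requests don't add excess
        else:
            excess += 1              # admitted (nodelay: served at once)
            out.append(200)
    return out

# 25 simultaneous requests: 1 + 20 burst = 21 accepted, 4 rejected
statuses = simulate([0.0] * 25)
print(statuses.count(200), statuses.count(429))  # 21 4
```

Spacing requests at least 100 ms apart keeps `excess` at zero, so a well-behaved client never touches the burst budget.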

8. WebSocket Proxying

8.1 WebSocket Configuration

# map to switch the Connection header for WebSocket upgrades
map $http_upgrade $connection_upgrade {
    default upgrade;
    ''      close;
}

upstream websocket_backend {
    server 10.0.0.1:8080;
    server 10.0.0.2:8080;

    # WebSockets need sticky sessions
    ip_hash;
}

server {
    listen 443 ssl http2;
    server_name ws.example.com;

    ssl_certificate /etc/nginx/certs/fullchain.pem;
    ssl_certificate_key /etc/nginx/certs/privkey.pem;

    location /ws/ {
        proxy_pass http://websocket_backend;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $connection_upgrade;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

        # WebSocket timeouts (default is 60s)
        proxy_read_timeout 3600s;
        proxy_send_timeout 3600s;
    }
}

8.2 WebSocket Handshake Flow

Client                    Nginx                    Backend
  │                         │                         │
  │  GET /ws/ HTTP/1.1      │                         │
  │  Upgrade: websocket     │                         │
  │  Connection: Upgrade    │                         │
  │────────────────────────>│                         │
  │                         │  GET /ws/ HTTP/1.1      │
  │                         │  Upgrade: websocket     │
  │                         │  Connection: Upgrade    │
  │                         │────────────────────────>│
  │                         │                         │
  │                         │ 101 Switching Protocols │
  │                         │<────────────────────────│
  │ 101 Switching Protocols │                         │
  │<────────────────────────│                         │
  │                         │                         │
  │  ← WebSocket frames →   │  ← WebSocket frames →   │

9. Security Configuration

9.1 Security Headers

server {
    # XSS protection (legacy header; modern browsers rely on CSP)
    add_header X-XSS-Protection "1; mode=block" always;

    # Prevent MIME-type sniffing
    add_header X-Content-Type-Options "nosniff" always;

    # Prevent clickjacking
    add_header X-Frame-Options "SAMEORIGIN" always;

    # Referrer policy
    add_header Referrer-Policy "strict-origin-when-cross-origin" always;

    # Content Security Policy
    add_header Content-Security-Policy "default-src 'self'; script-src 'self' 'unsafe-inline' cdn.example.com; style-src 'self' 'unsafe-inline'; img-src 'self' data: cdn.example.com; font-src 'self' fonts.gstatic.com;" always;

    # Permissions Policy
    add_header Permissions-Policy "camera=(), microphone=(), geolocation=()" always;
}

9.2 Access Control

server {
    # IP-based access control
    location /admin/ {
        allow 10.0.0.0/8;
        allow 192.168.0.0/16;
        deny all;

        proxy_pass http://backend;
    }

    # Basic Authentication
    location /internal/ {
        auth_basic "Restricted Access";
        auth_basic_user_file /etc/nginx/.htpasswd;

        proxy_pass http://backend;
    }

    # Allow only specific HTTP methods (GET also permits HEAD)
    location /api/ {
        limit_except GET POST PUT DELETE {
            deny all;
        }

        proxy_pass http://backend;
    }

    # Hide the Nginx version in headers and error pages
    server_tokens off;

    # Block access to hidden files
    location ~ /\. {
        deny all;
        access_log off;
        log_not_found off;
    }
}

9.3 Basic DDoS Mitigation

http {
    # Connection and request limits
    limit_conn_zone $binary_remote_addr zone=conn_per_ip:10m;
    limit_req_zone $binary_remote_addr zone=req_per_ip:10m rate=30r/s;

    # Request body size limits
    client_max_body_size 10m;
    client_body_buffer_size 128k;
    client_header_buffer_size 1k;
    large_client_header_buffers 4 4k;

    # Timeouts
    client_body_timeout 12;
    client_header_timeout 12;
    send_timeout 10;

    server {
        limit_conn conn_per_ip 50;
        limit_req zone=req_per_ip burst=50 nodelay;

        # Block bots by User-Agent
        if ($http_user_agent ~* (bot|crawler|spider|scraper)) {
            return 403;
        }

        # Block empty User-Agent
        if ($http_user_agent = "") {
            return 403;
        }
    }
}

10. Gzip Compression and Performance Optimization

10.1 Detailed Gzip Settings

http {
    gzip on;
    gzip_vary on;
    gzip_proxied any;
    gzip_comp_level 5;          # 1-9 (5 is a good balance)
    gzip_min_length 1024;       # skip responses smaller than 1 KB
    gzip_buffers 16 8k;
    gzip_http_version 1.1;
    gzip_types
        text/plain
        text/css
        text/xml
        text/javascript
        application/json
        application/javascript
        application/xml
        application/xml+rss
        application/atom+xml
        image/svg+xml
        font/opentype
        font/ttf
        font/woff
        font/woff2;

    # Serve pre-compressed .gz files (generated at build time)
    gzip_static on;
}

10.2 Brotli Compression (Nginx module)

# Brotli typically compresses 20-30% better than gzip
# Requires the separate ngx_brotli module
load_module modules/ngx_http_brotli_filter_module.so;
load_module modules/ngx_http_brotli_static_module.so;

http {
    brotli on;
    brotli_comp_level 6;
    brotli_types text/plain text/css application/json
                 application/javascript text/xml application/xml
                 application/xml+rss text/javascript image/svg+xml;
    brotli_static on;
}

10.3 Performance Tuning Checklist

# /etc/nginx/nginx.conf

worker_processes auto;                # one per CPU core
worker_rlimit_nofile 65535;

events {
    worker_connections 10240;
    multi_accept on;
    use epoll;
}

http {
    # File transfer optimization
    sendfile on;
    tcp_nopush on;                    # use together with sendfile
    tcp_nodelay on;                   # effective on keep-alive connections

    # Timeouts
    keepalive_timeout 65;
    keepalive_requests 1000;

    # Buffers
    client_body_buffer_size 16k;
    client_header_buffer_size 1k;
    client_max_body_size 50m;
    large_client_header_buffers 4 8k;

    # File descriptor cache
    open_file_cache max=10000 inactive=20s;
    open_file_cache_valid 30s;
    open_file_cache_min_uses 2;
    open_file_cache_errors on;

    # Upstream connection pool
    upstream backend {
        server 10.0.0.1:8080;         # an upstream needs at least one server
        keepalive 32;
        keepalive_requests 100;
        keepalive_timeout 60s;
    }
}

11. Docker & Kubernetes Integration

11.1 Nginx with Docker Compose

# docker-compose.yml
version: '3.8'

services:
  nginx:
    image: nginx:alpine
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./nginx/nginx.conf:/etc/nginx/nginx.conf:ro
      - ./nginx/conf.d:/etc/nginx/conf.d:ro
      - ./nginx/certs:/etc/nginx/certs:ro
      - ./nginx/cache:/var/cache/nginx
    depends_on:
      - app1
      - app2
    networks:
      - webnet
    restart: unless-stopped
    deploy:
      resources:
        limits:
          cpus: '2'
          memory: 512M

  app1:
    image: myapp:latest
    expose:
      - "3000"
    networks:
      - webnet

  app2:
    image: myapp:latest
    expose:
      - "3000"
    networks:
      - webnet

networks:
  webnet:
    driver: bridge

# nginx/conf.d/default.conf
upstream app {
    server app1:3000;
    server app2:3000;
    keepalive 16;
}

server {
    listen 80;
    server_name _;
    return 301 https://$host$request_uri;
}

server {
    listen 443 ssl http2;
    server_name example.com;

    ssl_certificate /etc/nginx/certs/fullchain.pem;
    ssl_certificate_key /etc/nginx/certs/privkey.pem;

    location / {
        proxy_pass http://app;
        include /etc/nginx/snippets/proxy-params.conf;
    }
}

11.2 Kubernetes Ingress (Nginx Ingress Controller)

# Install the Nginx Ingress Controller
# kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.9.0/deploy/static/provider/cloud/deploy.yaml

# Ingress resource definition
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/limit-rps: "10"
    nginx.ingress.kubernetes.io/proxy-body-size: "50m"
    nginx.ingress.kubernetes.io/proxy-read-timeout: "60"
    nginx.ingress.kubernetes.io/proxy-send-timeout: "60"
    cert-manager.io/cluster-issuer: "letsencrypt-prod"
spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - example.com
        - api.example.com
      secretName: example-tls
  rules:
    - host: example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: frontend-svc
                port:
                  number: 80
    - host: api.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: backend-svc
                port:
                  number: 8080

12. Nginx vs Traefik vs Caddy

Feature           | Nginx                      | Traefik                      | Caddy
Configuration     | File-based                 | Dynamic (Docker labels, K8s) | Caddyfile / JSON API
Automatic HTTPS   | Manual (Certbot)           | Built-in (ACME)              | Built-in (ACME)
Service discovery | Manual                     | Automatic (Docker/K8s)       | Limited
Performance       | Top tier                   | Good                         | Good
HTTP/3            | Via module                 | Built-in                     | Built-in
Dashboard         | None (paid in Nginx Plus)  | Built-in web UI              | Built-in API
Config reload     | nginx -s reload            | Automatic hot reload         | Automatic hot reload
Community         | Very large                 | Growing                      | Growing
Best fit          | General purpose, high perf | Microservices, Docker        | Small setups, simplicity
License           | BSD                        | MIT                          | Apache 2.0

12.1 Selection Guide

Choose Nginx when:

  • You need top-tier performance and stability
  • You need complex reverse-proxy rules
  • You're serving legacy systems or lots of static files
  • You want a large community and extensive documentation

Choose Traefik when:

  • You need automatic service discovery in Docker/Kubernetes
  • Routing rules change dynamically, as in microservice environments
  • You want a built-in dashboard and metrics

Choose Caddy when:

  • Automatic HTTPS matters most
  • You want to get started quickly with minimal configuration
  • You're running a small project or development environment

13. Interview Prep Quiz

Q1. Why does Nginx's event-driven architecture outperform Apache's process model?

Apache's traditional prefork/worker MPMs allocate a process or thread per connection: 10,000 concurrent connections mean 10,000 processes/threads, each consuming several MB of memory.

Nginx instead runs an event loop: a small number of worker processes (typically one per CPU core) handle tens of thousands of connections asynchronously through OS event mechanisms such as epoll/kqueue.

Key differences:

  • Memory efficiency: a few KB per connection in Nginx vs a few MB in Apache
  • Context switching: minimal in Nginx vs heavy process/thread switching overhead in Apache
  • C10K problem: Nginx was designed from the ground up to solve it
  • Caveat: the event model can be at a disadvantage for CPU-bound work

Q2. What difference does the trailing slash (/) in proxy_pass make?

This is one of the most common mistakes in Nginx configuration.

proxy_pass http://backend; (no slash): the request URI is passed through unchanged. A request for /api/users reaches the backend as /api/users.

proxy_pass http://backend/; (with slash): the part matched by the location is stripped and the remainder is appended.

Example, with location /api/:

  • No slash: /api/users is forwarded as http://backend/api/users
  • With slash: /api/users is forwarded as http://backend/users

Misunderstanding this difference leads to 404 errors and misrouted requests.
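
The two mappings can be captured in a few lines. This is a toy illustration of the trailing-slash rule (simplified: real nginx also normalizes and re-encodes the URI):

```python
def map_uri(location, proxy_pass_path, request_uri):
    """proxy_pass_path: URI part of proxy_pass, '' if none was given."""
    if not proxy_pass_path:              # proxy_pass http://backend;
        return request_uri               # passed through unchanged
    # proxy_pass http://backend/ (or /prefix): the part matched by
    # the location is replaced with the proxy_pass URI
    return proxy_pass_path + request_uri[len(location):]

print(map_uri('/api/', '',  '/api/users'))   # /api/users
print(map_uri('/api/', '/', '/api/users'))   # /users
```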

Q3. What do the burst and nodelay options do in Nginx rate limiting?

limit_req zone=api rate=10r/s burst=20 nodelay;

rate=10r/s: allows up to 10 requests per second. Internally this is a token-bucket scheme that admits one request every 100ms.

burst=20: queues up to 20 requests that exceed the rate. Without burst, anything over the rate gets an immediate 429.

nodelay: serves queued burst requests immediately rather than pacing them. Without nodelay, queued requests are released at the configured rate, so they wait.

Combined effects:

  • rate=10r/s burst=20: up to 21 requests absorbed at once, but the burst requests are delayed
  • rate=10r/s burst=20 nodelay: up to 21 requests served immediately; anything beyond gets 429

Q4. What is SSL termination and why is it used?

SSL termination means the reverse proxy (Nginx) handles all HTTPS encryption and decryption, while traffic between Nginx and the backend servers travels over plain HTTP.

Benefits:

  1. Lower backend load: SSL handshakes and crypto are CPU-intensive, so concentrate them in Nginx
  2. Centralized certificate management: all certificates live in one place
  3. Simpler backends: application servers never have to deal with SSL
  4. Performance: Nginx's SSL session cache, OCSP stapling, and similar optimizations apply

Security considerations:

  • The internal network between Nginx and the backends must be trusted
  • If needed, apply mTLS (mutual TLS) to the internal traffic as well
  • Pass the original protocol to the backend via the X-Forwarded-Proto header

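
A minimal sketch of the internal mTLS option using nginx's proxy_ssl_* directives (the certificate paths are placeholders, not from the original article):

```nginx
location / {
    proxy_pass https://backend;

    # Present a client certificate to the upstream...
    proxy_ssl_certificate         /etc/nginx/certs/internal-client.pem;
    proxy_ssl_certificate_key     /etc/nginx/certs/internal-client.key;

    # ...and verify the upstream's certificate against an internal CA
    proxy_ssl_trusted_certificate /etc/nginx/certs/internal-ca.pem;
    proxy_ssl_verify on;
}
```
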
Q5. What does upstream keepalive do, and what is a sensible value?

keepalive is a connection pool that caches idle connections between Nginx and the upstream (backend) servers.

upstream backend {
    server 10.0.0.1:8080;
    server 10.0.0.2:8080;
    keepalive 32;
}

Without keepalive: every request pays for a TCP 3-way handshake. Under heavy load, sockets pile up in TIME_WAIT and ephemeral ports can run out.

With keepalive: connections are reused, eliminating the handshake overhead, which cuts latency and raises throughput.

Choosing a value:

  • Roughly twice the number of concurrent upstream connections is a good starting point
  • Too high wastes memory; too low loses most of the reuse benefit
  • Requires proxy_http_version 1.1; and proxy_set_header Connection ""; to take effect

14. References

  1. Nginx official documentation
  2. Nginx Admin Guide
  3. Nginx Plus feature comparison
  4. Let's Encrypt / Certbot
  5. Nginx Ingress Controller
  6. Traefik official documentation
  7. Caddy official documentation
  8. Mozilla SSL Configuration Generator
  9. Nginx performance tuning guide
  10. The C10K Problem
  11. Nginx Cookbook (O'Reilly)
  12. DigitalOcean Nginx tutorials
  13. Nginx Security Best Practices

Nginx Complete Guide 2025: Reverse Proxy, Load Balancing, SSL, Caching & Security

TOC

1. What is Nginx

Nginx (pronounced "engine-x") is a high-performance web server originally developed by Igor Sysoev in 2004 to address the C10K problem. Today it holds the number one web server market share globally, serving not just as a web server but as a reverse proxy, load balancer, HTTP cache, and API gateway.

1.1 Nginx vs Apache

FeatureNginxApache
ArchitectureEvent-driven (async)Process/thread-based
Concurrent ConnectionsTens of thousands to hundreds of thousandsThousands (depends on MPM)
Memory UsageFew KB per connectionFew MB per connection
Static FilesVery fastFast
Dynamic ContentProxy (FastCGI/uWSGI)Built-in modules (mod_php)
ConfigurationCentralized.htaccess distributed
URL Rewritinglocation blocksmod_rewrite
Load BalancingBuilt-inSeparate module needed
Market Share~34% (#1)~29% (#2)

1.2 Event-Driven Architecture

Apache (Process Model):
┌────────────┐
Master├────────────┤
Worker 1   │ → Client A (1 process per connection)
Worker 2   │ → Client B
Worker 3   │ → Client C
...Worker 1000│ → Client 1000
└────────────┘
1000 connections = 1000 processes/threads = high memory usage

Nginx (Event Model):
┌────────────┐
Master├────────────┤
Worker 1   │ → Handles thousands via epoll/kqueue
Worker 2 (non-blocking I/O)
Worker 3   │ →
Worker 4 (one per CPU core)
└────────────┘
10000 connections = 4 workers = very low memory usage

Core Principles:

  • Master Process: Reads configuration, manages workers, handles logs
  • Worker Process: Handles actual requests (one per CPU core)
  • epoll/kqueue: OS-level event notification mechanisms
  • Non-blocking I/O: Processes other requests without waiting

1.3 Installation

# Ubuntu/Debian
sudo apt update
sudo apt install nginx

# CentOS/RHEL
sudo yum install epel-release
sudo yum install nginx

# macOS
brew install nginx

# Docker
docker run -d -p 80:80 -p 443:443 \
  -v /path/to/nginx.conf:/etc/nginx/nginx.conf:ro \
  -v /path/to/certs:/etc/nginx/certs:ro \
  --name nginx nginx:alpine

# Status check
sudo systemctl status nginx
nginx -t  # Validate config syntax
nginx -V  # Show compile options

2. Configuration Structure

2.1 File Layout

/etc/nginx/
├── nginx.conf              # Main configuration
├── conf.d/                 # Additional configs (*.conf auto-loaded)
│   ├── default.conf
│   └── myapp.conf
├── sites-available/        # Available sites (Debian family)
│   └── mysite.conf
├── sites-enabled/          # Enabled sites (symlinks)
│   └── mysite.conf -> ../sites-available/mysite.conf
├── mime.types              # MIME type mappings
├── fastcgi_params          # FastCGI parameters
└── snippets/               # Reusable config fragments
    ├── ssl-params.conf
    └── proxy-params.conf

2.2 Configuration Block Hierarchy

# /etc/nginx/nginx.conf

# Global context
user nginx;
worker_processes auto;          # Auto-set to CPU core count
worker_rlimit_nofile 65535;     # Max file descriptors per worker
error_log /var/log/nginx/error.log warn;
pid /run/nginx.pid;

# Events context
events {
    worker_connections 10240;   # Max concurrent connections per worker
    multi_accept on;            # Accept multiple connections at once
    use epoll;                  # Linux: epoll, BSD: kqueue
}

# HTTP context
http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    # Log format
    log_format main '$remote_addr - $remote_user [$time_local] '
                    '"$request" $status $body_bytes_sent '
                    '"$http_referer" "$http_user_agent" '
                    '$request_time $upstream_response_time';

    access_log /var/log/nginx/access.log main;

    # Performance settings
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;
    client_max_body_size 50m;

    # Gzip compression
    gzip on;
    gzip_vary on;
    gzip_min_length 1024;
    gzip_comp_level 5;
    gzip_types text/plain text/css application/json
               application/javascript text/xml application/xml
               application/xml+rss text/javascript image/svg+xml;

    # Include server blocks
    include /etc/nginx/conf.d/*.conf;
}

2.3 Server and Location Blocks

server {
    listen 80;
    listen [::]:80;
    server_name example.com www.example.com;

    # 301 redirect (HTTP to HTTPS)
    return 301 https://$server_name$request_uri;
}

server {
    listen 443 ssl http2;
    listen [::]:443 ssl http2;
    server_name example.com www.example.com;

    # SSL configuration
    ssl_certificate /etc/nginx/certs/fullchain.pem;
    ssl_certificate_key /etc/nginx/certs/privkey.pem;

    # Root directory
    root /var/www/html;
    index index.html index.htm;

    # Location priority (highest to lowest)
    # 1. Exact match (=)
    location = /favicon.ico {
        log_not_found off;
        access_log off;
    }

    # 2. Preferential prefix (^~)
    location ^~ /static/ {
        alias /var/www/static/;
        expires 30d;
        add_header Cache-Control "public, immutable";
    }

    # 3. Regex (~, ~*) - case sensitive/insensitive
    location ~* \.(jpg|jpeg|png|gif|ico|css|js|woff2)$ {
        expires 7d;
        add_header Cache-Control "public";
    }

    # 4. Prefix match (none or /)
    location / {
        try_files $uri $uri/ /index.html;
    }

    # API proxy
    location /api/ {
        proxy_pass http://backend;
        include snippets/proxy-params.conf;
    }

    # Error pages
    error_page 404 /404.html;
    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root /usr/share/nginx/html;
    }
}

2.4 Location Matching Priority

Priority (highest to lowest):
1. = (exact match)            location = /path
2. ^~ (preferential prefix)   location ^~ /path
3. ~ (regex, case-sensitive)  location ~ \.php$
4. ~* (regex, case-insensitive) location ~* \.(jpg|png)$
5. /path (prefix match)       location /path
6. / (default)                location /
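
The priority order above is easiest to internalize against concrete URIs. Using the four locations from the example server block in 2.3, the matches work out as follows (URIs are illustrative):

```nginx
# Given the locations from the example above, these URIs match as follows:
#   /favicon.ico      -> location = /favicon.ico       (exact match wins outright)
#   /static/app.css   -> location ^~ /static/          (^~ stops the regex search)
#   /images/logo.png  -> location ~* \.(jpg|...)$      (regex beats the plain / prefix)
#   /about            -> location /                    (fallback prefix match)
```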

3. Reverse Proxy Configuration

3.1 Basic Reverse Proxy

# snippets/proxy-params.conf
proxy_http_version 1.1;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header Connection "";

proxy_connect_timeout 60s;
proxy_send_timeout 60s;
proxy_read_timeout 60s;

proxy_buffering on;
proxy_buffer_size 4k;
proxy_buffers 8 4k;
proxy_busy_buffers_size 8k;

server {
    listen 443 ssl http2;
    server_name api.example.com;

    ssl_certificate /etc/nginx/certs/fullchain.pem;
    ssl_certificate_key /etc/nginx/certs/privkey.pem;

    location / {
        proxy_pass http://127.0.0.1:3000;
        include snippets/proxy-params.conf;
    }
}

3.2 How Reverse Proxy Works

Client                    Nginx (Reverse Proxy)              Backend
  │                              │                              │
  │  GET /api/users              │                              │
  │─────────────────────────────>│                              │
  │                              │  GET /api/users              │
  │                              │  Host: api.example.com       │
  │                              │  X-Real-IP: 203.0.113.1      │
  │                              │  X-Forwarded-For: 203.0.113.1│
  │                              │─────────────────────────────>│
  │                              │                              │
  │                              │           200 OK             │
  │                              │<─────────────────────────────│
  │           200 OK             │                              │
  │<─────────────────────────────│                              │

3.3 Path Rewriting

# /api/v1/users -> forwarded to backend as /users
location /api/v1/ {
    rewrite ^/api/v1/(.*)$ /$1 break;
    proxy_pass http://backend;
    include snippets/proxy-params.conf;
}

# Or include URI in proxy_pass
location /api/v1/ {
    proxy_pass http://backend/;  # Note the trailing slash!
    include snippets/proxy-params.conf;
}

# Conditional redirect
location /old-page {
    return 301 /new-page;
}

# Regex-based rewrite
rewrite ^/blog/(\d{4})/(\d{2})/(.*)$ /posts/$3?year=$1&month=$2 permanent;
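
Walking through the regex rewrite above with a sample URI (the URI is illustrative; `permanent` issues a 301):

```nginx
# Worked example for the rewrite above:
#   GET /blog/2025/03/hello-world
#     captures: $1=2025  $2=03  $3=hello-world
#   -> 301 Moved Permanently: /posts/hello-world?year=2025&month=03
# Note: any original query string is appended after these args unless
# the replacement ends with "?".
```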

4. Load Balancing

4.1 Upstream Configuration

# Default Round Robin
upstream backend {
    server 10.0.0.1:8080;
    server 10.0.0.2:8080;
    server 10.0.0.3:8080;
}

# Least Connections - route to server with fewest active connections
upstream backend_least {
    least_conn;
    server 10.0.0.1:8080;
    server 10.0.0.2:8080;
    server 10.0.0.3:8080;
}

# IP Hash - same client IP always goes to same server (session affinity)
upstream backend_iphash {
    ip_hash;
    server 10.0.0.1:8080;
    server 10.0.0.2:8080;
    server 10.0.0.3:8080;
}

# Weighted distribution
upstream backend_weighted {
    server 10.0.0.1:8080 weight=5;   # 50% traffic
    server 10.0.0.2:8080 weight=3;   # 30% traffic
    server 10.0.0.3:8080 weight=2;   # 20% traffic
}

# Advanced configuration
upstream backend_advanced {
    least_conn;
    server 10.0.0.1:8080 weight=3 max_fails=3 fail_timeout=30s;
    server 10.0.0.2:8080 weight=2 max_fails=3 fail_timeout=30s;
    server 10.0.0.3:8080 backup;      # Used only when others fail
    server 10.0.0.4:8080 down;        # Temporarily disabled

    keepalive 32;                      # Upstream connection pool
}

4.2 Load Balancing Algorithm Comparison

Algorithm         | Description                             | Pros                       | Cons                       | Best For
------------------|-----------------------------------------|----------------------------|----------------------------|------------------------
Round Robin       | Sequential distribution (default)       | Simple, even distribution  | Ignores server capacity    | Identical servers
Least Connections | Fewest active connections first         | Load-aware                 | Can overwhelm new servers  | Variable request times
IP Hash           | Client IP-based routing                 | Session persistence        | Potentially uneven         | Session-based apps
Weight            | Weight-based distribution               | Reflects capacity          | Manual configuration       | Mixed server specs
Random            | Random selection (OSS since 1.15.1)     | Good for distributed LBs   | Unpredictable              | Large clusters

4.3 Health Checks

# Passive health check (OSS)
upstream backend {
    server 10.0.0.1:8080 max_fails=3 fail_timeout=30s;
    server 10.0.0.2:8080 max_fails=3 fail_timeout=30s;
}
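
Active health checks (probing the backend on a timer instead of observing real traffic) are a commercial Nginx Plus feature; open-source Nginx relies on the passive `max_fails`/`fail_timeout` mechanism above. A sketch of the Plus directive, with the `/healthz` path as an assumption:

```nginx
# Active health check (Nginx Plus only).
location / {
    proxy_pass http://backend;
    # Probe /healthz every 5s; mark down after 3 failures, up after 2 passes.
    health_check uri=/healthz interval=5s fails=3 passes=2;
}
```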

5. SSL/TLS Configuration

5.1 Let's Encrypt + Certbot

# Install Certbot and obtain certificate
sudo apt install certbot python3-certbot-nginx
sudo certbot --nginx -d example.com -d www.example.com

# Verify auto-renewal
sudo certbot renew --dry-run

# Add auto-renewal to crontab
# 0 0,12 * * * certbot renew --quiet --post-hook "systemctl reload nginx"
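
`certbot --nginx` edits the configuration itself, but the webroot plugin needs an HTTP location that serves ACME challenge files. A minimal sketch — the `/var/www/certbot` webroot path is an assumption:

```nginx
# Keep port 80 open for ACME HTTP-01 challenges even when
# everything else redirects to HTTPS.
server {
    listen 80;
    server_name example.com www.example.com;

    location /.well-known/acme-challenge/ {
        root /var/www/certbot;
    }

    location / {
        return 301 https://$host$request_uri;
    }
}
```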

5.2 Hardened SSL Configuration

# snippets/ssl-params.conf

# Protocols - only TLS 1.2 and 1.3
ssl_protocols TLSv1.2 TLSv1.3;

# Cipher suites
ssl_ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384;
ssl_prefer_server_ciphers on;

# DH parameters (generate the file once:
#   sudo openssl dhparam -out /etc/nginx/certs/dhparam.pem 2048)
ssl_dhparam /etc/nginx/certs/dhparam.pem;

# SSL session cache
ssl_session_cache shared:SSL:10m;
ssl_session_timeout 1d;
ssl_session_tickets off;

# OCSP Stapling
ssl_stapling on;
ssl_stapling_verify on;
ssl_trusted_certificate /etc/nginx/certs/chain.pem;
resolver 1.1.1.1 8.8.8.8 valid=300s;

# HSTS (Strict Transport Security)
add_header Strict-Transport-Security "max-age=63072000; includeSubDomains; preload" always;

5.3 SSL Termination Pattern

Client (HTTPS)           Nginx (SSL Termination)         Backend (HTTP)
     │                          │                              │
     │  HTTPS (TLS 1.3)         │                              │
     │─────────────────────────>│                              │
     │                          │  HTTP (plain)                │
     │                          │─────────────────────────────>│
     │                          │                              │
     │                          │  HTTP Response               │
     │                          │<─────────────────────────────│
     │  HTTPS Response          │                              │
     │<─────────────────────────│                              │

server {
    listen 443 ssl http2;
    server_name example.com;

    ssl_certificate /etc/nginx/certs/fullchain.pem;
    ssl_certificate_key /etc/nginx/certs/privkey.pem;
    include snippets/ssl-params.conf;

    location / {
        proxy_pass http://backend;
        proxy_set_header X-Forwarded-Proto https;
        include snippets/proxy-params.conf;
    }
}

6. Caching (Proxy Cache)

6.1 Basic Cache Configuration

http {
    # Define cache zone
    proxy_cache_path /var/cache/nginx
        levels=1:2
        keys_zone=my_cache:10m      # 10MB memory (key storage)
        max_size=10g                 # Max disk size
        inactive=60m                 # Remove after 60 min unused
        use_temp_path=off;

    server {
        listen 443 ssl http2;
        server_name example.com;

        # Enable caching
        location /api/ {
            proxy_cache my_cache;
            proxy_cache_valid 200 302 10m;    # Cache 200, 302 for 10 min
            proxy_cache_valid 404 1m;          # Cache 404 for 1 min
            proxy_cache_use_stale error timeout updating
                                   http_500 http_502 http_503 http_504;
            proxy_cache_background_update on;
            proxy_cache_lock on;

            # Cache key
            proxy_cache_key "$scheme$request_method$host$request_uri";

            # Add cache status header
            add_header X-Cache-Status $upstream_cache_status;

            proxy_pass http://backend;
            include snippets/proxy-params.conf;
        }

        # Static file caching (browser)
        location ~* \.(jpg|jpeg|png|gif|ico|css|js|woff2)$ {
            root /var/www/static;
            expires 30d;
            add_header Cache-Control "public, immutable";
            access_log off;
        }
    }
}

6.2 Cache Status Values

Status      | Description
------------|-----------------------------------------------------
HIT         | Served from cache
MISS        | Not in cache, fetched from backend
EXPIRED     | Cache expired, refreshed from backend
STALE       | Expired cache served per stale policy
UPDATING    | Serving stale while updating in background
REVALIDATED | Backend returned 304, existing cache reused
BYPASS      | Cache bypassed, direct backend request

6.3 Cache Bypass and Purge

location /api/ {
    proxy_cache my_cache;

    # Skip cache when session cookie exists
    proxy_cache_bypass $cookie_session;
    proxy_no_cache $cookie_session;

    # Bypass on Cache-Control: no-cache
    proxy_cache_bypass $http_cache_control;

    proxy_pass http://backend;
}
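
Open-source Nginx has no built-in purge; `proxy_cache_purge` comes from Nginx Plus or the third-party ngx_cache_purge module. A sketch assuming that module is compiled in — note the purge key must be built exactly like `proxy_cache_key`:

```nginx
# Requires the third-party ngx_cache_purge module (not in stock nginx).
# GET /purge/api/users evicts the entry cached for GET /api/users.
location ~ ^/purge(/.*) {
    allow 127.0.0.1;      # restrict purging to trusted hosts
    deny all;
    proxy_cache_purge my_cache "$scheme$request_method$host$1";
}
```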

7. Rate Limiting

7.1 Basic Rate Limiting

http {
    # Define limit zones
    # 10MB memory, 10 requests per second per IP
    limit_req_zone $binary_remote_addr zone=api_limit:10m rate=10r/s;

    # 1 request per second per IP (login protection)
    limit_req_zone $binary_remote_addr zone=login_limit:10m rate=1r/s;

    # API key-based limiting
    limit_req_zone $http_x_api_key zone=apikey_limit:10m rate=100r/s;

    server {
        # API endpoint - allow burst
        location /api/ {
            limit_req zone=api_limit burst=20 nodelay;
            limit_req_status 429;

            proxy_pass http://backend;
        }

        # Login - strict limiting
        location /auth/login {
            limit_req zone=login_limit burst=5;
            limit_req_status 429;

            proxy_pass http://backend;
        }
    }
}

7.2 Connection Limiting

http {
    # Limit concurrent connections per IP
    limit_conn_zone $binary_remote_addr zone=conn_limit:10m;

    server {
        # Max 100 concurrent connections per IP
        limit_conn conn_limit 100;

        # Download bandwidth limiting
        location /downloads/ {
            limit_conn conn_limit 5;        # 5 concurrent downloads
            limit_rate 500k;                # 500KB/s per connection
            limit_rate_after 10m;           # No limit for first 10MB
        }
    }
}

7.3 Understanding Rate Limiting Behavior

rate=10r/s, burst=20, nodelay:

Time  | Requests | Processed | Explanation
------|----------|-----------|----------------------------------
0.0s  |    25    |    21     | 1 (on-rate) + 20 (burst) = 21 max
      |          |           | 21 processed at once, 4 get 429
0.1s  |     5    |     1     | 1 burst slot freed (0.1s * 10r/s)
1.0s  |     5    |     5     | ~10 burst slots freed by 1.0s
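
Rejected requests get a bare 429 page by default (per `limit_req_status` above). For API clients it can help to return a JSON body instead; a minimal sketch using a named location:

```nginx
location /api/ {
    limit_req zone=api_limit burst=20 nodelay;
    limit_req_status 429;
    error_page 429 = @throttled;     # route rejections to the handler below

    proxy_pass http://backend;
}

location @throttled {
    default_type application/json;
    return 429 '{"error":"rate limit exceeded, retry later"}';
}
```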

8. WebSocket Proxy

8.1 WebSocket Configuration

# Map for WebSocket upgrade
map $http_upgrade $connection_upgrade {
    default upgrade;
    ''      close;
}

upstream websocket_backend {
    server 10.0.0.1:8080;
    server 10.0.0.2:8080;

    # WebSocket requires sticky sessions
    ip_hash;
}

server {
    listen 443 ssl http2;
    server_name ws.example.com;

    ssl_certificate /etc/nginx/certs/fullchain.pem;
    ssl_certificate_key /etc/nginx/certs/privkey.pem;

    location /ws/ {
        proxy_pass http://websocket_backend;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $connection_upgrade;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

        # WebSocket timeout (default 60s)
        proxy_read_timeout 3600s;
        proxy_send_timeout 3600s;
    }
}

8.2 WebSocket Handshake Flow

Client                    Nginx                    Backend
  │                         │                         │
  │  GET /ws/ HTTP/1.1      │                         │
  │  Upgrade: websocket     │                         │
  │  Connection: Upgrade    │                         │
  │────────────────────────>│                         │
  │                         │  GET /ws/ HTTP/1.1      │
  │                         │  Upgrade: websocket     │
  │                         │  Connection: Upgrade    │
  │                         │────────────────────────>│
  │                         │                         │
  │                         │ 101 Switching Protocols │
  │                         │<────────────────────────│
  │ 101 Switching Protocols │                         │
  │<────────────────────────│                         │
  │                         │                         │
  │<-- WebSocket frames --->│<-- WebSocket frames --->│

9. Security Configuration

9.1 Security Headers

server {
    # XSS Protection
    add_header X-XSS-Protection "1; mode=block" always;

    # Prevent MIME type sniffing
    add_header X-Content-Type-Options "nosniff" always;

    # Clickjacking prevention
    add_header X-Frame-Options "SAMEORIGIN" always;

    # Referrer Policy
    add_header Referrer-Policy "strict-origin-when-cross-origin" always;

    # Content Security Policy
    add_header Content-Security-Policy "default-src 'self'; script-src 'self' 'unsafe-inline' cdn.example.com; style-src 'self' 'unsafe-inline'; img-src 'self' data: cdn.example.com; font-src 'self' fonts.gstatic.com;" always;

    # Permissions Policy
    add_header Permissions-Policy "camera=(), microphone=(), geolocation=()" always;
}

9.2 Access Control

server {
    # IP-based access control
    location /admin/ {
        allow 10.0.0.0/8;
        allow 192.168.0.0/16;
        deny all;

        proxy_pass http://backend;
    }

    # Basic Authentication
    location /internal/ {
        auth_basic "Restricted Access";
        auth_basic_user_file /etc/nginx/.htpasswd;

        proxy_pass http://backend;
    }
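
The `.htpasswd` file referenced above is usually created with `htpasswd` from apache2-utils, but it can also be generated with plain openssl; a sketch (username and password are placeholders, and the file is written to the current directory rather than /etc/nginx):

```shell
# Create an auth_basic user file using openssl's apr1 (MD5) scheme,
# which nginx understands; no apache2-utils needed.
printf 'admin:%s\n' "$(openssl passwd -apr1 'changeme')" > htpasswd
cat htpasswd
```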

    # Allow only specific HTTP methods
    location /api/ {
        limit_except GET POST PUT DELETE {
            deny all;
        }

        proxy_pass http://backend;
    }

    # Hide server information
    server_tokens off;

    # Block hidden files
    location ~ /\. {
        deny all;
        access_log off;
        log_not_found off;
    }
}

9.3 Basic DDoS Protection

http {
    # Connection limits
    limit_conn_zone $binary_remote_addr zone=conn_per_ip:10m;
    limit_req_zone $binary_remote_addr zone=req_per_ip:10m rate=30r/s;

    # Request body size limits
    client_max_body_size 10m;
    client_body_buffer_size 128k;
    client_header_buffer_size 1k;
    large_client_header_buffers 4 4k;

    # Timeout settings
    client_body_timeout 12;
    client_header_timeout 12;
    send_timeout 10;

    server {
        limit_conn conn_per_ip 50;
        limit_req zone=req_per_ip burst=50 nodelay;
    }
}
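
Trusted networks (health checkers, internal monitors) can be exempted from these limits: `limit_req` skips any request whose key evaluates to an empty string. A sketch using `geo` and `map`, with illustrative CIDR ranges:

```nginx
geo $limit {
    default        1;
    10.0.0.0/8     0;        # internal network: not rate limited
    192.168.0.0/16 0;
}

map $limit $limit_key {
    0 "";                    # empty key -> request is not accounted
    1 $binary_remote_addr;
}

limit_req_zone $limit_key zone=req_per_ip_wl:10m rate=30r/s;
```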

10. Compression and Performance Optimization

10.1 Detailed Gzip Configuration

http {
    gzip on;
    gzip_vary on;
    gzip_proxied any;
    gzip_comp_level 5;          # 1-9 (5 is optimal balance)
    gzip_min_length 1024;       # Skip files under 1KB
    gzip_buffers 16 8k;
    gzip_http_version 1.1;
    gzip_types
        text/plain
        text/css
        text/xml
        text/javascript
        application/json
        application/javascript
        application/xml
        application/xml+rss
        application/atom+xml
        image/svg+xml
        font/opentype
        font/ttf
        font/woff
        font/woff2;

    # Use pre-compressed files (generated at build time)
    gzip_static on;
}
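
`gzip_static` serves a pre-built `app.js.gz` sitting next to `app.js` instead of compressing on every request. A minimal sketch of producing one at build time (file name and contents are illustrative):

```shell
# Pre-compress a static asset so gzip_static can serve app.js.gz directly.
printf 'console.log("hello");\n' > app.js
gzip -kf -9 app.js      # -k keeps the original app.js, -9 best compression
ls app.js app.js.gz
```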

10.2 Brotli Compression (Nginx Module)

# Brotli is 20-30% more efficient than Gzip
# Requires separate module installation
load_module modules/ngx_http_brotli_filter_module.so;
load_module modules/ngx_http_brotli_static_module.so;

http {
    brotli on;
    brotli_comp_level 6;
    brotli_types text/plain text/css application/json
                 application/javascript text/xml application/xml
                 application/xml+rss text/javascript image/svg+xml;
    brotli_static on;
}

10.3 Performance Tuning Checklist

worker_processes auto;                # CPU core count
worker_rlimit_nofile 65535;

events {
    worker_connections 10240;
    multi_accept on;
    use epoll;
}

http {
    # File transfer optimization
    sendfile on;
    tcp_nopush on;                    # Use with sendfile
    tcp_nodelay on;                   # Effective with keepalive

    # Timeouts
    keepalive_timeout 65;
    keepalive_requests 1000;

    # Buffers
    client_body_buffer_size 16k;
    client_header_buffer_size 1k;
    client_max_body_size 50m;
    large_client_header_buffers 4 8k;

    # File cache
    open_file_cache max=10000 inactive=20s;
    open_file_cache_valid 30s;
    open_file_cache_min_uses 2;
    open_file_cache_errors on;

    # Upstream connection pool (requires proxy_http_version 1.1 and
    # proxy_set_header Connection "" on the proxied locations)
    upstream backend {
        server 10.0.0.1:8080;    # keepalive needs at least one server entry
        keepalive 32;
        keepalive_requests 100;
        keepalive_timeout 60s;
    }
}

11. Docker and Kubernetes Integration

11.1 Docker Compose with Nginx

# docker-compose.yml
version: '3.8'

services:
  nginx:
    image: nginx:alpine
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./nginx/nginx.conf:/etc/nginx/nginx.conf:ro
      - ./nginx/conf.d:/etc/nginx/conf.d:ro
      - ./nginx/certs:/etc/nginx/certs:ro
      - ./nginx/cache:/var/cache/nginx
    depends_on:
      - app1
      - app2
    networks:
      - webnet
    restart: unless-stopped

  app1:
    image: myapp:latest
    expose:
      - "3000"
    networks:
      - webnet

  app2:
    image: myapp:latest
    expose:
      - "3000"
    networks:
      - webnet

networks:
  webnet:
    driver: bridge

# nginx/conf.d/default.conf
upstream app {
    server app1:3000;
    server app2:3000;
    keepalive 16;
}

server {
    listen 80;
    server_name _;
    return 301 https://$host$request_uri;
}

server {
    listen 443 ssl http2;
    server_name example.com;

    ssl_certificate /etc/nginx/certs/fullchain.pem;
    ssl_certificate_key /etc/nginx/certs/privkey.pem;

    location / {
        proxy_pass http://app;
        include /etc/nginx/snippets/proxy-params.conf;
    }
}
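
One Docker-specific caveat: nginx resolves hostnames like `app1` once at startup, so a recreated container with a new IP breaks the upstream until a reload. A sketch that re-resolves at request time via Docker's embedded DNS (127.0.0.11 is Docker's default resolver):

```nginx
server {
    listen 80;

    # Docker's embedded DNS; re-resolve every 10s instead of caching forever.
    resolver 127.0.0.11 valid=10s;

    location / {
        # proxy_pass with a variable forces resolution at request time.
        set $upstream http://app1:3000;
        proxy_pass $upstream;
    }
}
```

The trade-off: `proxy_pass` with a variable bypasses the `upstream` block, so this pattern suits single-service targets or DNS-based balancing rather than the weighted pools shown earlier.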

11.2 Kubernetes Ingress (Nginx Ingress Controller)

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/rate-limit: "10"
    nginx.ingress.kubernetes.io/rate-limit-window: "1m"
    nginx.ingress.kubernetes.io/proxy-body-size: "50m"
    nginx.ingress.kubernetes.io/proxy-read-timeout: "60"
    nginx.ingress.kubernetes.io/proxy-send-timeout: "60"
    cert-manager.io/cluster-issuer: "letsencrypt-prod"
spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - example.com
        - api.example.com
      secretName: example-tls
  rules:
    - host: example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: frontend-svc
                port:
                  number: 80
    - host: api.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: backend-svc
                port:
                  number: 8080

12. Nginx vs Traefik vs Caddy

Feature           | Nginx                       | Traefik                      | Caddy
------------------|-----------------------------|------------------------------|-------------------------
Configuration     | File-based                  | Dynamic (Docker labels, K8s) | Caddyfile / JSON API
Auto HTTPS        | Manual (Certbot)            | Built-in (ACME)              | Built-in (ACME)
Service Discovery | Manual                      | Docker/K8s auto              | Limited
Performance       | Top tier                    | Good                         | Good
HTTP/3            | Built-in (1.25+)            | Built-in                     | Built-in
Dashboard         | None (Nginx Plus paid)      | Built-in web UI              | Built-in API
Config Reload     | nginx -s reload             | Auto hot reload              | Auto hot reload
Community         | Very large                  | Growing                      | Growing
Best For          | General purpose, high perf  | Microservices, Docker        | Small scale, simple setup
License           | BSD                         | MIT                          | Apache 2.0

12.1 Selection Guide

Choose Nginx when:

  • You need top-tier performance and stability
  • Complex reverse proxy rules are required
  • Serving legacy systems or static files
  • You need the largest community and documentation

Choose Traefik when:

  • You need automatic service discovery in Docker/Kubernetes
  • Routing rules change dynamically in a microservices environment
  • Built-in dashboard and metrics are important

Choose Caddy when:

  • Automatic HTTPS is the top priority
  • You want quick setup with simple configuration
  • Small projects or development environments

13. Interview Quiz

Q1. Why is Nginx's event-driven architecture advantageous over Apache's process model?

Apache's traditional Prefork/Worker MPM allocates a process or thread per connection. 10,000 concurrent connections require 10,000 processes/threads, each consuming several MB of memory.

Nginx uses an event loop where a small number of worker processes (typically matching CPU core count) handle tens of thousands of connections asynchronously through OS event mechanisms like epoll/kqueue.

Key differences:

  • Memory efficiency: Nginx uses a few KB per connection vs Apache's few MB
  • Context switching: Nginx minimizes it vs Apache's process/thread switching overhead
  • C10K problem: Nginx was designed from the ground up to solve this
  • Caveat: the event model can be disadvantageous for CPU-intensive tasks

Q2. What is the difference between proxy_pass with and without a trailing slash?

This is one of the most common Nginx configuration mistakes.

proxy_pass http://backend; (no trailing slash): The request URI is passed through as-is. A /api/users request forwards as /api/users to the backend.

proxy_pass http://backend/; (with trailing slash): The matched location part is stripped and the remainder is forwarded.

Example with location /api/:

  • No slash: /api/users request goes to http://backend/api/users
  • With slash: /api/users request goes to http://backend/users

Misunderstanding this causes 404 errors or incorrect routing.

Q3. What are the roles of burst and nodelay in Nginx rate limiting?

limit_req zone=api rate=10r/s burst=20 nodelay;

rate=10r/s: Allows 10 requests per second. Internally uses a token bucket allowing 1 request per 100ms.

burst=20: Queues up to 20 excess requests beyond the rate. Without burst, requests exceeding the rate get an immediate 429.

nodelay: Processes burst-queued requests immediately without delay. Without nodelay, requests are processed sequentially at the rate, causing wait times.

Combined effects:

  • rate=10r/s burst=20: Handles up to 21 instantly, but burst requests are delayed
  • rate=10r/s burst=20 nodelay: Processes up to 21 immediately, excess gets 429

Q4. What is SSL Termination and why use it?

SSL Termination handles HTTPS encryption/decryption at the Nginx (reverse proxy) level, communicating with backend servers over plain HTTP.

Benefits:

  1. Reduced backend load: SSL handshakes and encryption/decryption are CPU-intensive; centralizing them at Nginx frees backend CPU for application work
  2. Centralized certificate management: All certificates managed in one place
  3. Simplified backends: Backend applications do not need to handle SSL
  4. Performance optimization: Leverages Nginx SSL session cache, OCSP Stapling, etc.

Security considerations:

  • Internal network between Nginx and backends must be secure
  • mTLS (Mutual TLS) can be applied for internal communication if needed
  • X-Forwarded-Proto header communicates the original protocol to backends

Q5. What is the role of upstream keepalive and what is an appropriate value?

keepalive is a connection pool that caches idle connections between Nginx and upstream (backend) servers.

upstream backend {
    server 10.0.0.1:8080;
    server 10.0.0.2:8080;
    keepalive 32;
}

Without keepalive: Every request triggers a TCP 3-way handshake. Under high load, TIME_WAIT sockets accumulate rapidly, potentially causing port exhaustion.

With keepalive: Reuses existing connections, eliminating TCP handshake overhead. This reduces latency and increases throughput.

Appropriate value:

  • Start with roughly 2x your concurrent connection count
  • Too high wastes memory; too low reduces connection reuse benefits
  • Must set proxy_http_version 1.1; and proxy_set_header Connection ""; for it to work

14. References

  1. Nginx Official Documentation
  2. Nginx Admin Guide
  3. Nginx Plus Feature Comparison
  4. Let's Encrypt / Certbot
  5. Nginx Ingress Controller
  6. Traefik Documentation
  7. Caddy Documentation
  8. Mozilla SSL Configuration Generator
  9. Nginx Performance Tuning Guide
  10. C10K Problem
  11. Nginx Cookbook (O'Reilly)
  12. DigitalOcean Nginx Tutorials
  13. Nginx Security Best Practices