Linux Kernel Parameter Tuning Guide: sysctl + Boot Params, Safe Changes and Rollback

Introduction

Kernel parameter tuning is a task that directly impacts server performance and stability. A single incorrect value can trigger OOM kills, drop network connections, or create security vulnerabilities.

This article distinguishes between sysctl (runtime parameters) and boot parameters (kernel command line), covering the meaning, recommended values, and application methods for each, and presents safe change procedures and rollback strategies for production environments.


1. Two Paths for Kernel Parameters

| Category | sysctl (Runtime) | Boot Params (Boot Time) |
| --- | --- | --- |
| When Applied | Immediately (no reboot) | At next boot |
| Config File | /etc/sysctl.d/*.conf | /etc/default/grub -> grub.cfg |
| Check Command | sysctl <param> | cat /proc/cmdline |
| Persist | sysctl.d + sysctl -p | grub2-mkconfig / update-grub |
| Rollback | Restore previous values | Select previous GRUB entry |
| Scope | Items under /proc/sys/ | All kernel boot options |
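In practice the distinction looks like this (the parameter names are just examples):

```shell
# Runtime (sysctl): read and change immediately, no reboot
cat /proc/sys/net/core/somaxconn        # same value as `sysctl -n net.core.somaxconn`
# sysctl -w net.core.somaxconn=8192     # lost at reboot unless persisted in /etc/sysctl.d/

# Boot time: inspect only; changing it means editing GRUB and rebooting
grep -o 'transparent_hugepage=[^ ]*' /proc/cmdline || echo "transparent_hugepage: kernel default"
```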

2. Safe Change Procedures (Production Protocol)

2.1 Pre-Change Checklist

#!/usr/bin/env bash
# pre-tuning-check.sh - Back up state before tuning

BACKUP_DIR="/root/kernel-tuning-backup/$(date +%Y%m%d-%H%M%S)"
mkdir -p "$BACKUP_DIR"

# 1. Full sysctl dump
sysctl -a > "$BACKUP_DIR/sysctl-before.txt" 2>/dev/null

# 2. Current boot parameters
cat /proc/cmdline > "$BACKUP_DIR/cmdline-before.txt"

# 3. GRUB config backup
cp /etc/default/grub "$BACKUP_DIR/grub-before"
[[ -d /etc/sysctl.d ]] && cp -r /etc/sysctl.d "$BACKUP_DIR/sysctl.d-before"

# 4. System state snapshot
free -h > "$BACKUP_DIR/memory-before.txt"
ss -s > "$BACKUP_DIR/socket-stats-before.txt"
vmstat 1 5 > "$BACKUP_DIR/vmstat-before.txt"
cat /proc/net/sockstat > "$BACKUP_DIR/sockstat-before.txt"

# Stable 'latest' symlink so the rollback commands below can reference this backup
ln -sfn "$BACKUP_DIR" /root/kernel-tuning-backup/latest

echo "Backup complete: $BACKUP_DIR"

2.2 Change Steps

1. Test in staging environment
2. Apply to 1 canary server -> Monitor (minimum 24 hours)
3. If no issues, rolling apply by group
4. Compare metrics after application (before vs after)
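For step 4, a simple diff of two `sysctl -a` dumps is enough to see exactly which parameters changed. A minimal sketch (the script name and file paths are illustrative):

```shell
#!/usr/bin/env bash
# compare-sysctl.sh - print only parameters that differ between two
# `sysctl -a` dumps (e.g. the before/after files from pre-tuning-check.sh)
compare_sysctl() {
  # "<" lines show the old value, ">" lines the new value
  diff <(sort "$1") <(sort "$2") | grep '^[<>]' || true
}

# Example usage:
# compare_sysctl "$BACKUP_DIR/sysctl-before.txt" sysctl-after.txt
```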

2.3 Rollback Procedure

# sysctl rollback - Restore specific parameter from backup
# (cut + xargs instead of awk '{print $3}' so multi-value parameters
#  like tcp_rmem are restored whole)
PARAM="net.core.somaxconn"
OLD_VALUE=$(grep "^${PARAM} " /root/kernel-tuning-backup/latest/sysctl-before.txt | cut -d= -f2- | xargs)
sysctl -w "${PARAM}=${OLD_VALUE}"

# Full sysctl rollback - the dump is already in sysctl.conf "key = value"
# format, so feed it straight back; -e skips unknown keys, and errors on
# read-only keys are expected and harmless
sysctl -e -p /root/kernel-tuning-backup/latest/sysctl-before.txt 2>/dev/null

# Boot parameter rollback - Restore previous GRUB config
cp /root/kernel-tuning-backup/latest/grub-before /etc/default/grub
grub2-mkconfig -o /boot/grub2/grub.cfg  # RHEL
# update-grub                            # Ubuntu

3. Network Tuning

3.1 TCP Connection Management

# /etc/sysctl.d/10-network.conf

# TCP backlog - Essential for high-traffic servers
net.core.somaxconn = 65535
net.ipv4.tcp_max_syn_backlog = 65535

# TCP socket buffers (bytes)
# min / default / max
net.core.rmem_default = 262144
net.core.rmem_max = 16777216
net.core.wmem_default = 262144
net.core.wmem_max = 16777216
net.ipv4.tcp_rmem = 4096 262144 16777216
net.ipv4.tcp_wmem = 4096 262144 16777216

# TCP congestion control
net.ipv4.tcp_congestion_control = bbr
net.core.default_qdisc = fq

# TIME_WAIT management
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_fin_timeout = 15
net.ipv4.tcp_max_tw_buckets = 2000000

# Keepalive (servers behind load balancers)
net.ipv4.tcp_keepalive_time = 60
net.ipv4.tcp_keepalive_intvl = 10
net.ipv4.tcp_keepalive_probes = 6

3.2 Network Parameter Reference Table

| Parameter | Default | Recommended | Description |
| --- | --- | --- | --- |
| net.core.somaxconn | 4096 | 65535 | Max listen() backlog |
| net.ipv4.tcp_max_syn_backlog | 1024 | 65535 | Max SYN queue size |
| net.core.rmem_max | 212992 | 16MB | Max receive socket buffer |
| net.core.wmem_max | 212992 | 16MB | Max send socket buffer |
| net.ipv4.tcp_congestion_control | cubic | bbr | Congestion control algo |
| net.ipv4.tcp_tw_reuse | 0 (2) | 1 | TIME_WAIT socket reuse |
| net.ipv4.tcp_fin_timeout | 60 | 15 | FIN-WAIT-2 timeout |
| net.ipv4.tcp_keepalive_time | 7200 | 60 | Keepalive start time (sec) |
| net.ipv4.ip_local_port_range | 32768-60999 | 1024-65535 | Outbound port range |

3.3 Enabling BBR

# Load BBR kernel module
modprobe tcp_bbr
echo "tcp_bbr" >> /etc/modules-load.d/bbr.conf

# Apply sysctl
sysctl -w net.core.default_qdisc=fq
sysctl -w net.ipv4.tcp_congestion_control=bbr

# Verify
sysctl net.ipv4.tcp_congestion_control
# net.ipv4.tcp_congestion_control = bbr

BBR vs CUBIC: BBR uses bandwidth estimation-based congestion control rather than packet loss-based, significantly improving performance especially on long-distance, high-latency networks.
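Before switching, it is worth confirming that the running kernel actually offers BBR (it requires kernel 4.9+, and the module may not be loaded yet). A quick check:

```shell
# List the congestion control algorithms the kernel currently offers;
# if "bbr" is missing, `modprobe tcp_bbr` and re-check
avail=$(cat /proc/sys/net/ipv4/tcp_available_congestion_control)
echo "available: $avail"
case " $avail " in
  *" bbr "*) echo "bbr: ready to enable" ;;
  *)         echo "bbr: not loaded - run: modprobe tcp_bbr" ;;
esac
```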


4. Memory Tuning

4.1 Virtual Memory Management

# /etc/sysctl.d/20-memory.conf

# Swap tendency (0=minimal, 100=aggressive)
# DB servers: 1-10, Web servers: 10-30
vm.swappiness = 10

# Dirty page ratio - disk write delay
vm.dirty_ratio = 40              # Max dirty page ratio relative to total memory
vm.dirty_background_ratio = 10   # Background flush start ratio

# OOM-related
vm.overcommit_memory = 0         # 0=default(heuristic), 1=always allow, 2=restrict
vm.panic_on_oom = 0              # Whether to panic on OOM (0=run OOM Killer)

# Max memory map areas (Elasticsearch, MongoDB, etc.)
vm.max_map_count = 262144

# Filesystem cache release (emergency only)
# echo 3 > /proc/sys/vm/drop_caches  # 1=pagecache, 2=dentries+inodes, 3=all
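Because the dirty ratios are percentages of total memory, it helps to see what they translate to in bytes on a given machine (the ratios below mirror the config above):

```shell
# Translate vm.dirty_ratio (40) and vm.dirty_background_ratio (10)
# into absolute sizes for this host
mem_kb=$(awk '/^MemTotal:/ {print $2}' /proc/meminfo)
for ratio in 40 10; do
  echo "${ratio}% of RAM = $(( mem_kb * ratio / 100 / 1024 )) MiB"
done
# On large-memory hosts consider vm.dirty_bytes / vm.dirty_background_bytes
# to set absolute thresholds instead of percentages
```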

4.2 Memory Parameter Guide

| Parameter | DB Server | Web/API Server | ML Workload |
| --- | --- | --- | --- |
| vm.swappiness | 1~5 | 10~30 | 1 |
| vm.dirty_ratio | 40 | 20 | 40 |
| vm.dirty_background_ratio | 10 | 5 | 10 |
| vm.overcommit_memory | 0 | 0 | 1 |
| vm.max_map_count | 262144 | 65530 | 262144 |

4.3 Huge Pages

# Transparent Huge Pages (THP) - Recommended to disable for DB
# Set via boot parameter
# Add to GRUB_CMDLINE_LINUX:
# transparent_hugepage=never

# Runtime check/change
cat /sys/kernel/mm/transparent_hugepage/enabled
echo never > /sys/kernel/mm/transparent_hugepage/enabled

# Static Huge Pages (Oracle DB, DPDK, etc.)
# /etc/sysctl.d/20-memory.conf
vm.nr_hugepages = 1024  # 2MB * 1024 = 2GB

# Verify
grep -i huge /proc/meminfo

5. Filesystem and I/O Tuning

# /etc/sysctl.d/30-fs.conf

# Max open files (system-wide)
fs.file-max = 2097152

# inotify watch limits (IDE, file watch services)
fs.inotify.max_user_watches = 524288
fs.inotify.max_user_instances = 8192

# AIO (Asynchronous I/O) max request count
fs.aio-max-nr = 1048576

ulimit Integration

# /etc/security/limits.d/99-app.conf
# Must be configured together with sysctl's fs.file-max to take effect

*    soft    nofile    1048576
*    hard    nofile    1048576
*    soft    nproc     65535
*    hard    nproc     65535
*    soft    memlock   unlimited
*    hard    memlock   unlimited
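A process can only open files up to the smaller of its `nofile` limit and the remaining system-wide headroom, so verify both sides after configuring:

```shell
# System-wide ceiling vs per-process limit - both must allow the target
echo "system-wide max (fs.file-max) : $(cat /proc/sys/fs/file-max)"
echo "per-process max (nofile)      : $(ulimit -n)"
# Allocated / unused / max file handles right now:
cat /proc/sys/fs/file-nr
```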

I/O Scheduler Configuration

# Check current scheduler
cat /sys/block/sda/queue/scheduler

# SSD: none or mq-deadline recommended
echo mq-deadline > /sys/block/sda/queue/scheduler

# Persistent config (udev rule)
# /etc/udev/rules.d/60-scheduler.rules
# ACTION=="add|change", KERNEL=="sd*", ATTR{queue/rotational}=="0", ATTR{queue/scheduler}="mq-deadline"
# ACTION=="add|change", KERNEL=="sd*", ATTR{queue/rotational}=="1", ATTR{queue/scheduler}="bfq"

| Scheduler | Disk Type | Characteristics |
| --- | --- | --- |
| none (noop) | NVMe SSD | Minimal overhead |
| mq-deadline | SATA SSD | Guaranteed latency |
| bfq | HDD | Fair bandwidth allocation |
| kyber | Fast SSD | Read/write latency control |
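To audit an entire host rather than one device, loop over sysfs (the active scheduler is the name shown in brackets):

```shell
# Show the active I/O scheduler for every block device on the host
for f in /sys/block/*/queue/scheduler; do
  [ -e "$f" ] || continue
  dev=${f#/sys/block/}; dev=${dev%%/*}
  printf '%-10s %s\n' "$dev" "$(cat "$f")"
done
```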

6. Security Hardening

# /etc/sysctl.d/40-security.conf

# ASLR (Address Space Layout Randomization)
kernel.randomize_va_space = 2  # 0=off, 1=partial, 2=full

# SysRq restriction (allow emergency recovery only)
kernel.sysrq = 176  # Bitmask: sync + remount-ro + reboot

# Core dump restriction
kernel.core_pattern = |/bin/false
fs.suid_dumpable = 0

# dmesg access restriction
kernel.dmesg_restrict = 1

# Kernel pointer hiding
kernel.kptr_restrict = 2

# BPF restriction (unprivileged users)
kernel.unprivileged_bpf_disabled = 1

# Network security
net.ipv4.conf.all.rp_filter = 1           # Reverse Path Filtering
net.ipv4.conf.all.accept_redirects = 0     # Reject ICMP redirects
net.ipv4.conf.all.send_redirects = 0
net.ipv6.conf.all.accept_redirects = 0
net.ipv4.conf.all.accept_source_route = 0  # Reject Source Routing
net.ipv4.conf.all.log_martians = 1         # Log suspicious packets
net.ipv4.icmp_echo_ignore_broadcasts = 1   # Smurf attack defense

# IP forwarding (disable unless router/container host)
net.ipv4.ip_forward = 0
# For container hosts:
# net.ipv4.ip_forward = 1

Security Parameter Checklist

| Parameter | Secure Value | CIS Benchmark | Notes |
| --- | --- | --- | --- |
| kernel.randomize_va_space | 2 | Required | Full ASLR enabled |
| kernel.dmesg_restrict | 1 | Recommended | Block dmesg for regular users |
| kernel.kptr_restrict | 2 | Recommended | Prevent kernel address exposure |
| net.ipv4.conf.all.rp_filter | 1 | Required | Prevent IP spoofing |
| net.ipv4.conf.all.accept_redirects | 0 | Required | Prevent MITM attacks |
| net.ipv4.conf.all.log_martians | 1 | Recommended | Audit abnormal packets |
| fs.suid_dumpable | 0 | Required | Prevent SUID core dumps |

7. Boot Parameters (Kernel Command Line)

7.1 Configuration Method

# Check current boot parameters
cat /proc/cmdline

# RHEL / Rocky
vi /etc/default/grub
# GRUB_CMDLINE_LINUX="... parameters_to_add"
grub2-mkconfig -o /boot/grub2/grub.cfg

# Ubuntu
vi /etc/default/grub
# GRUB_CMDLINE_LINUX_DEFAULT="... parameters_to_add"
update-grub

7.2 Key Boot Parameters

| Parameter | Value | Purpose |
| --- | --- | --- |
| transparent_hugepage=never | never / always / madvise | Disable THP for DB servers |
| mitigations=auto | off / auto / auto,nosmt | CPU vulnerability mitigation |
| numa_balancing=disable | disable / enable | NUMA auto-balancing |
| isolcpus=2-7 | CPU list | Isolate specific CPUs from scheduler |
| nohz_full=2-7 | CPU list | Tick-less mode (real-time workloads) |
| intel_iommu=on | on / off | Enable IOMMU (SR-IOV, VFIO) |
| iommu=pt | pt / off | IOMMU pass-through |
| default_hugepagesz=1G | 2M / 1G | Default Huge Page size |
| hugepagesz=1G hugepages=16 | size + count | 1GB Huge Page allocation |
| crashkernel=256M | size | kdump memory reservation |
| audit=1 | 0 / 1 | Kernel audit logging |
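After editing GRUB and rebooting, confirm the parameters actually made it onto the kernel command line (the parameter names below are examples):

```shell
# Verify boot parameters took effect after reboot
for p in transparent_hugepage mitigations isolcpus; do
  grep -o "${p}=[^ ]*" /proc/cmdline || echo "${p}: not set (kernel default)"
done
```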

7.3 CPU Vulnerability Mitigation vs Performance

# Check currently applied mitigations
grep -r . /sys/devices/system/cpu/vulnerabilities/ 2>/dev/null

# Disable mitigations (benchmark/isolated environments only!)
# Add to GRUB_CMDLINE_LINUX:
# mitigations=off

# Performance impact (varies by workload)
# mitigations=auto: 5~30% overhead on syscall-intensive workloads
# mitigations=off: Security risk - not recommended for production

Warning: mitigations=off disables all security mitigations including Spectre/Meltdown/MDS. Use only in isolated benchmark environments and never in production.


8. Workload-Specific Tuning Profiles

8.1 Web Server / API Server

# /etc/sysctl.d/99-web-server.conf

# Network
net.core.somaxconn = 65535
net.ipv4.tcp_max_syn_backlog = 65535
net.ipv4.ip_local_port_range = 1024 65535
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_fin_timeout = 15
net.ipv4.tcp_keepalive_time = 60
net.ipv4.tcp_keepalive_intvl = 10
net.ipv4.tcp_keepalive_probes = 6
net.ipv4.tcp_congestion_control = bbr
net.core.default_qdisc = fq
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216

# Files
fs.file-max = 2097152

# Memory
vm.swappiness = 10

8.2 Database Server

# /etc/sysctl.d/99-database.conf

# Memory
vm.swappiness = 1
vm.dirty_ratio = 40
vm.dirty_background_ratio = 10
vm.overcommit_memory = 0
vm.max_map_count = 262144

# Files
fs.file-max = 2097152
fs.aio-max-nr = 1048576

# Network (primarily internal communication)
net.core.somaxconn = 65535
net.ipv4.tcp_keepalive_time = 60

# Huge Pages (PostgreSQL, Oracle, etc.)
# vm.nr_hugepages calculation: shared_buffers / 2MB + some headroom
# Example: shared_buffers=8GB -> vm.nr_hugepages = 4200
vm.nr_hugepages = 4200
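The 4200 figure in the comment can be reproduced with shell arithmetic; the headroom allowance here is an example value, not a fixed rule:

```shell
# shared_buffers / huge-page size, plus headroom for other shared segments
shared_buffers_mb=8192    # 8 GiB
hugepage_mb=2             # standard 2 MiB huge pages
headroom=104              # rough allowance for WAL buffers etc. (example value)
echo "vm.nr_hugepages = $(( shared_buffers_mb / hugepage_mb + headroom ))"
# -> vm.nr_hugepages = 4200
```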

Boot parameters:

transparent_hugepage=never

8.3 Container Host (Docker/K8s)

# /etc/sysctl.d/99-container-host.conf

# IP forwarding required
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1

# Network
net.core.somaxconn = 65535
net.ipv4.ip_local_port_range = 1024 65535
net.netfilter.nf_conntrack_max = 1048576

# inotify (when running many Pods)
fs.inotify.max_user_watches = 524288
fs.inotify.max_user_instances = 8192

# PID limit
kernel.pid_max = 4194304

# Files
fs.file-max = 2097152
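Note that the net.bridge.* keys above only exist once the br_netfilter kernel module is loaded; applying the file on a fresh host fails otherwise. A quick pre-check:

```shell
# br_netfilter must be loaded before the bridge-nf-call-* sysctls exist
if [ -e /proc/sys/net/bridge/bridge-nf-call-iptables ]; then
  echo "br_netfilter loaded - safe to apply"
else
  echo "load it first: modprobe br_netfilter (persist in /etc/modules-load.d/)"
fi
```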

9. Automation and Verification

Managing sysctl with Ansible

# roles/sysctl/tasks/main.yml
- name: Apply sysctl parameters
  ansible.posix.sysctl:
    name: "{{ item.key }}"
    value: "{{ item.value }}"
    sysctl_file: /etc/sysctl.d/99-tuning.conf
    reload: true
    state: present
  loop: "{{ sysctl_params | dict2items }}"

# roles/sysctl/defaults/main.yml
sysctl_params:
  net.core.somaxconn: 65535
  net.ipv4.tcp_max_syn_backlog: 65535
  vm.swappiness: 10
  fs.file-max: 2097152

Post-Application Verification Script

#!/usr/bin/env bash
# verify-tuning.sh - Verify tuning values

declare -A EXPECTED=(
  ["net.core.somaxconn"]="65535"
  ["net.ipv4.tcp_congestion_control"]="bbr"
  ["vm.swappiness"]="10"
  ["fs.file-max"]="2097152"
)

FAILED=0
for param in "${!EXPECTED[@]}"; do
  actual=$(sysctl -n "$param" 2>/dev/null)
  expected="${EXPECTED[$param]}"
  if [[ "$actual" != "$expected" ]]; then
    echo "FAIL: $param = $actual (expected: $expected)"
    (( FAILED++ ))
  else
    echo "OK:   $param = $actual"
  fi
done

echo "---"
if (( FAILED > 0 )); then
  echo "Verification failed: ${FAILED} item(s)"
  exit 1
else
  echo "All parameters verified successfully"
fi

10. Troubleshooting

| Symptom | Check Command | Related Parameter |
| --- | --- | --- |
| "Too many open files" | ulimit -n, sysctl fs.file-max | fs.file-max, limits.conf |
| "Connection refused" (backlog full) | ss -lnt, netstat -s \| grep overflow | net.core.somaxconn |
| TIME_WAIT explosion | ss -s | tcp_tw_reuse, tcp_fin_timeout |
| Frequent OOM kills | dmesg \| grep -i oom, /proc/meminfo | vm.swappiness, vm.overcommit_memory |
| High I/O wait | iostat -x 1, vmstat 1 | vm.dirty_ratio, I/O scheduler |
| "Cannot allocate memory" (mmap) | sysctl vm.max_map_count | vm.max_map_count |
| nf_conntrack table full | dmesg \| grep conntrack, sysctl net.netfilter.nf_conntrack_count | nf_conntrack_max |
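As a worked example for the backlog-full row: on listening sockets, `ss` shows Send-Q as the configured backlog and Recv-Q as connections waiting to be accepted, so a Recv-Q pinned at Send-Q means the accept queue is overflowing.

```shell
# Listening sockets: Recv-Q stuck at Send-Q => accept queue overflow
ss -lnt 2>/dev/null | head -20
# Cumulative overflow counters since boot:
netstat -s 2>/dev/null | grep -iE 'overflowed|SYNs to LISTEN' || true
```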

Conclusion

Here are the key principles of kernel parameter tuning:

  1. Measure first, tune later: Do not change values unless a bottleneck has been identified.
  2. One at a time: Changing multiple parameters simultaneously makes it impossible to isolate effects.
  3. Always back up: Record current values before making changes. Never make changes without a rollback path.
  4. Canary deployment: Do not apply to all servers at once -- verify on 1-2 servers first.
  5. Document everything: Record why you changed to this value and what effect was observed.

Proper tuning can dramatically improve server performance, but incorrect tuning can directly cause outages. Always approach tuning safely, incrementally, and measurably.