From Shell Basics to Advanced Operations: A Practical Shell Guide for Engineers

Introduction

SSH into a server, write CI/CD pipelines, analyze logs, run deployment scripts. An engineer's day starts on the Shell and ends on the Shell. Yet surprisingly, many developers repeat "commands that work" without deeply understanding how the Shell actually works.

This article starts from Bash/Zsh basic syntax and covers pipelines, process substitution, signal handling, and performance optimization -- Shell techniques engineers need to know, with a focus on practical examples.


1. Choosing a Shell: Bash vs Zsh vs Fish

Feature              | Bash                   | Zsh                        | Fish
Built-in             | Most Linux distros     | macOS (Catalina+)          | Separate install
POSIX compatibility  | Nearly complete        | Nearly complete            | Not compatible
Auto-completion      | Basic                  | Powerful with plugins      | Best built-in
Script compatibility | Standard               | Bash compat mode available | Unique syntax
Prompt customization | Manual PS1             | Oh My Zsh / Powerlevel10k  | Built-in config
Recommended for      | Server scripts, CI/CD  | Local dev environment      | Personal terminal

Practical rule: Write server scripts with #!/usr/bin/env bash, and use Zsh for your local interactive shell.
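One practical consequence of this rule: macOS still ships Bash 3.2 by default, so a script that relies on Bash 4+ features (associative arrays, `${var,,}`, mapfile) should fail fast with a version guard. A minimal sketch; the version threshold and messages are illustrative:

```shell
#!/usr/bin/env bash
# Guard: fail fast when the interpreter is too old for Bash 4+ features
# (associative arrays, ${var,,} case conversion, mapfile).
if (( BASH_VERSINFO[0] < 4 )); then
  echo "Bash 4+ required, found $BASH_VERSION" >&2
  exit 1
fi
echo "running on Bash $BASH_VERSION"
```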


2. Fundamentals: Variables, Conditionals, Loops

2.1 Variable Declaration and Scope

# Local variable (current shell only)
APP_NAME="my-service"

# Environment variable (passed to child processes)
export DB_HOST="db.prod.internal"

# readonly - Prevent accidental overwrite
readonly CONFIG_PATH="/etc/app/config.yaml"

# Variable default value patterns
: "${LOG_LEVEL:=info}"          # Assign info if unset
: "${TIMEOUT:?TIMEOUT env var required}"  # Error and exit if unset
echo "${USER:-unknown}"         # Print unknown if unset (no assignment)
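One more expansion completes the family above: ${var:+word} is the inverse of :- and expands to word only when the variable IS set and non-empty, which is handy for building optional command flags. The variable names below are illustrative:

```shell
# ${var:+word} - expand to word only when var is set and non-empty
VERBOSE=1
verbose_flag="${VERBOSE:+--verbose}"   # VERBOSE set   -> "--verbose"
unset VERBOSE
empty_flag="${VERBOSE:+--verbose}"     # VERBOSE unset -> "" (safe even under set -u)
echo "flag='$verbose_flag' empty='$empty_flag'"
```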

2.2 Conditional Patterns

# String comparison - use [[ ]] (Bash/Zsh extension)
if [[ "$ENV" == "production" ]]; then
  echo "Production mode"
elif [[ "$ENV" =~ ^(staging|dev)$ ]]; then
  echo "Non-production environment: $ENV"
else
  echo "Unknown environment"
fi

# File tests
[[ -f /etc/hosts ]]   # File exists
[[ -d /var/log ]]     # Directory exists
[[ -r "$file" ]]      # Read permission
[[ -s "$file" ]]      # File size > 0
[[ "$f1" -nt "$f2" ]] # f1 is newer than f2

# Arithmetic comparison - use (( ))
if (( retries > 3 )); then
  echo "Retry limit exceeded"
fi
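Alongside [[ ]] and (( )), case handles multi-way string dispatch with glob patterns and is POSIX compatible. A small sketch; the environment names are illustrative:

```shell
# case - glob-based multi-way branching
ENV="staging"
case "$ENV" in
  production)   mode="prod" ;;
  staging|dev)  mode="nonprod" ;;   # multiple patterns per branch
  feature-*)    mode="preview" ;;   # glob patterns work too
  *)            mode="unknown" ;;
esac
echo "mode=$mode"   # -> mode=nonprod
```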

2.3 Loop Patterns

# Iterate over file list - use glob (never parse ls!)
for f in /var/log/*.log; do
  [[ -f "$f" ]] || continue
  echo "Processing: $f ($(wc -l < "$f") lines)"
done

# C-style for
for (( i=0; i<10; i++ )); do
  curl -s "http://api.local/health" > /dev/null && break
  sleep 1
done

# while + read - Process file/command output line by line
while IFS=',' read -r name email role; do
  echo "Creating user: $name ($role)"
done < users.csv

# Infinite loop + exit condition
while true; do
  status=$(curl -s -o /dev/null -w '%{http_code}' http://api/health)
  [[ "$status" == "200" ]] && break
  sleep 5
done

3. Pipeline Deep Dive

3.1 Pipeline Fundamentals

A pipe (|) connects the stdout of the preceding command to the stdin of the following command. The stages run concurrently, each in its own subshell (a separate process).

# Top 10 connecting IPs
awk '{print $1}' /var/log/nginx/access.log \
  | sort \
  | uniq -c \
  | sort -rn \
  | head -10

# pipefail - Detect mid-pipeline failures
set -o pipefail
curl -s "$URL" | jq '.items[]' | wc -l
# If curl fails, the entire pipeline exit code != 0
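When you need to know which stage failed rather than just that one did, Bash records every stage's exit code in the PIPESTATUS array (Bash-specific; Zsh's equivalent is the lowercase pipestatus):

```shell
# PIPESTATUS holds the exit code of each stage of the last pipeline
false | true | true
codes=("${PIPESTATUS[@]}")   # snapshot immediately - the next command overwrites it
echo "stage exit codes: ${codes[*]}"   # -> stage exit codes: 1 0 0
```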

3.2 Process Substitution

Process substitution (<(command)) exposes a command's output as a file-like argument, so the outputs of several commands can be fed to a single consumer that expects filenames.

# Compare package lists from two servers
diff <(ssh server1 'rpm -qa | sort') <(ssh server2 'rpm -qa | sort')

# Compare two API responses
diff <(curl -s api-v1/users | jq -S .) <(curl -s api-v2/users | jq -S .)

# tee + process substitution: Send one stream to multiple destinations simultaneously
tee >(grep 'ERROR' > errors.log) < access.log \
  | tee >(awk '{print $1}' | sort -u > unique_ips.txt) \
  | wc -l

3.3 Advanced Redirection Patterns

# Capture stderr only
errors=$(command 2>&1 1>/dev/null)

# Redirect both stdout + stderr to file
command &> output.log        # Bash shorthand (not POSIX)
command > output.log 2>&1    # POSIX compatible

# Here String
grep "pattern" <<< "$variable"

# File Descriptor usage
exec 3>/tmp/audit.log         # Open FD 3
echo "Task started: $(date)" >&3
do_something
echo "Task completed: $(date)" >&3
exec 3>&-                     # Close FD 3
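A here document (<<) is the multi-line sibling of the here string above, feeding an inline block to stdin; quoting the delimiter ('EOF') disables expansion inside the block. The sample data is illustrative:

```shell
# Here document: count the lines matching "host" in an inline block
matches=$(grep -c 'host' <<'EOF'
host-a
host-b
other
EOF
)
echo "$matches"   # -> 2
```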

4. Functions and Error Handling

4.1 Function Definition Patterns

# Defensive function structure
log() {
  local level="${1:?level required (INFO|WARN|ERROR)}"
  local message="${2:?message required}"
  printf '[%s] [%s] %s\n' "$(date '+%Y-%m-%d %H:%M:%S')" "$level" "$message" >&2
}

retry() {
  local max_attempts="${1:?}"
  local delay="${2:?}"
  shift 2
  local attempt=1

  until "$@"; do
    if (( attempt >= max_attempts )); then
      log ERROR "Command failed ($max_attempts attempts): $*"
      return 1
    fi
    log WARN "Retry $attempt/$max_attempts (after ${delay}s): $*"
    sleep "$delay"
    (( attempt++ ))
  done
}

# Usage
retry 5 3 curl -sf http://api.internal/health

4.2 Safe Script Header

#!/usr/bin/env bash
set -euo pipefail
IFS=$'\n\t'

# set -e: Exit immediately on command failure
# set -u: Error on undefined variable usage
# set -o pipefail: Detect mid-pipeline failures
# IFS: Restrict word splitting to newline and tab

# Cleanup trap
cleanup() {
  local exit_code=$?
  rm -f "$TMPFILE"
  log INFO "Exiting (exit code: $exit_code)"
  exit "$exit_code"
}
trap cleanup EXIT
trap 'log ERROR "Error at line $LINENO"; exit 1' ERR

TMPFILE=$(mktemp)

5. Text Processing Pipelines

5.1 Tool Comparison Table

Tool      | Purpose                             | Speed     | Complexity
grep      | Pattern matching, filtering         | Very fast | Low
sed       | Stream editing, substitution        | Fast      | Medium
awk       | Field-based processing, aggregation | Fast      | High
jq        | JSON processing                     | Fast      | Medium
yq        | YAML processing                     | Moderate  | Medium
cut/paste | Simple field extraction, merging    | Very fast | Low
xargs     | stdin-to-arguments conversion       | Fast      | Medium

5.2 Practical Examples

# 1. Top 10 request paths with 5xx errors from log
awk '$9 ~ /^5[0-9]{2}$/ {print $7}' access.log \
  | sort | uniq -c | sort -rn | head -10

# 2. Extract specific fields from JSON API response + convert to CSV
curl -s https://api.example.com/users \
  | jq -r '.[] | [.id, .name, .email] | @csv'

# 3. Batch update image tags in YAML config
yq -i '.spec.template.spec.containers[].image |= sub("v1\\.2\\.3", "v1.2.4")' \
  k8s/deployment.yaml

# 4. Parallel search of large logs (xargs + grep)
find /var/log -name '*.log' -mtime -1 -print0 \
  | xargs -0 -P4 grep -l 'OutOfMemoryError'

# 5. Sum of 3rd column in CSV
awk -F',' '{sum += $3} END {printf "Total: %.2f\n", sum}' sales.csv

6. Signal Handling and Process Management

6.1 Key Signals

Signal  | Number | Default action | Purpose
SIGHUP  | 1      | Terminate      | Daemon config reload
SIGINT  | 2      | Terminate      | Ctrl+C
SIGQUIT | 3      | Core dump      | Ctrl+\
SIGKILL | 9      | Force kill     | Cannot be trapped
SIGTERM | 15     | Terminate      | Graceful shutdown request
SIGUSR1 | 10     | Terminate      | User-defined (e.g. log level change)
SIGSTOP | 19     | Suspend        | Cannot be trapped

(Numbers are the common Linux x86-64 values and differ on other platforms; prefer signal names over numbers in scripts.)
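The "log level change" use of SIGUSR1 can be sketched like this; the handler and variable names are illustrative:

```shell
# Toggle a log level at runtime via SIGUSR1, without restarting the process
LOG_LEVEL=info
toggle_level() {
  [[ "$LOG_LEVEL" == "info" ]] && LOG_LEVEL=debug || LOG_LEVEL=info
}
trap toggle_level USR1

kill -USR1 $$              # send the signal to ourselves for demonstration
echo "level: $LOG_LEVEL"   # -> level: debug
```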

6.2 Graceful Shutdown Pattern

#!/usr/bin/env bash
set -euo pipefail

RUNNING=true
CHILD_PID=""

shutdown() {
  log INFO "Shutdown signal received, starting graceful shutdown"
  RUNNING=false
  if [[ -n "$CHILD_PID" ]]; then
    kill -TERM "$CHILD_PID" 2>/dev/null || true
    wait "$CHILD_PID" 2>/dev/null || true
  fi
}

trap shutdown SIGTERM SIGINT

while $RUNNING; do   # "$RUNNING" expands to the true/false builtin
  process_job &
  CHILD_PID=$!
  wait "$CHILD_PID" || true
  CHILD_PID=""
  sleep 5
done

log INFO "Clean shutdown complete"

6.3 Job Control

# Background execution + wait for completion
build_frontend &
pid1=$!
build_backend &
pid2=$!

wait "$pid1" "$pid2"
echo "Build complete"

# nohup - Keep running after session ends
nohup long_task.sh > /var/log/task.log 2>&1 &
disown

# timeout - Limit command execution time
timeout 30s curl -s http://slow-api.com/data

7. Arrays and Associative Arrays

# Indexed array
servers=("web01" "web02" "web03" "db01")
echo "Server count: ${#servers[@]}"
echo "First: ${servers[0]}"
echo "All: ${servers[*]}"   # [*] joins into one word for printing; use [@] for iteration

# Array slice
web_servers=("${servers[@]:0:3}")

# Append to array
servers+=("cache01")

# Associative array (Bash 4+)
declare -A service_ports
service_ports=(
  [nginx]=80
  [api]=8080
  [redis]=6379
  [postgres]=5432
)

for svc in "${!service_ports[@]}"; do
  echo "$svc -> ${service_ports[$svc]}"
done

# Safely construct commands with arrays
curl_opts=(
  -s
  --max-time 10
  --retry 3
  -H "Authorization: Bearer $TOKEN"
  -H "Content-Type: application/json"
)
curl "${curl_opts[@]}" "$API_URL"
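Not shown above: mapfile (alias readarray, Bash 4+) loads command output into an indexed array in one step, avoiding a manual while-read loop. The host names are illustrative:

```shell
# mapfile -t strips trailing newlines while reading lines into an array
mapfile -t hosts < <(printf 'web01\nweb02\ndb01\n')
echo "count: ${#hosts[@]}"    # -> count: 3
echo "second: ${hosts[1]}"    # -> second: web02
```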

8. Advanced Patterns

8.1 Subshell vs Command Group

# Subshell () - Separate process, does not modify parent variables
(cd /tmp && tar czf backup.tar.gz /var/data)
# Current directory unchanged

# Command Group {} - Runs in the current shell
{
  echo "=== System Info ==="
  uname -a
  free -h
  df -h
} > system_report.txt
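The scoping difference above is easy to verify directly:

```shell
count=0
( count=99 )       # subshell: the assignment dies with the child process
echo "$count"      # -> 0
{ count=99; }      # command group: runs in the current shell
echo "$count"      # -> 99
```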

8.2 Dynamic Variable Names (nameref)

# Bash 4.3+ nameref
setup_db() {
  local -n result=$1  # nameref
  result="postgresql://localhost:5432/app"
}

setup_db DB_URL
echo "$DB_URL"  # postgresql://localhost:5432/app

8.3 Parallel Execution Patterns

# Parallel processing with GNU parallel
parallel -j10 'ssh {} "df -h / | tail -1"' < server_list.txt

# xargs parallel
find . -name '*.png' -print0 \
  | xargs -0 -P "$(nproc)" -I{} convert {} -resize 50% resized/{}

# wait + array for parallel control
pids=()
for host in web0{1..5}; do
  deploy.sh "$host" &
  pids+=($!)
done

failed=0
for pid in "${pids[@]}"; do
  wait "$pid" || failed=$((failed + 1))  # not (( failed++ )): it returns 1 when failed is 0, tripping set -e
done
echo "Deployment complete: $failed failure(s)"

9. Performance Optimization Checklist

External commands in loops
  Slow: for f in ...; do cat "$f" | grep ...; done
  Fast: grep -r ... /path/
Excessive subshells
  Slow: result=$(echo "$var" | sed ...)
  Fast: result="${var//old/new}"
Unnecessary pipes
  Slow: cat file | grep pattern
  Fast: grep pattern file
Sort then unique
  Slow: sort | uniq
  Fast: sort -u
Line count of a large file
  Slow: cat file | wc -l
  Fast: wc -l < file
File existence check
  Slow: ls /path/file 2>/dev/null
  Fast: [[ -f /path/file ]]
Extract from a string
  Slow: echo "$s" | cut -d. -f1
  Fast: "${s%%.*}" (parameter expansion)

Key Parameter Expansion Patterns

file="/var/log/nginx/access.log"

echo "${file##*/}"    # access.log (strip path)
echo "${file%.*}"     # /var/log/nginx/access (strip extension)
echo "${file%%/*}"    # (empty string, before first /)
echo "${file%.log}.bak"  # /var/log/nginx/access.bak

version="v1.2.3-rc1"
echo "${version#v}"       # 1.2.3-rc1
echo "${version%-*}"      # v1.2.3
echo "${version^^}"       # V1.2.3-RC1 (uppercase)
echo "${version,,}"       # v1.2.3-rc1 (lowercase)
echo "${#version}"        # 10 (string length)
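The ${var//old/new} form recommended in the section 9 checklist belongs in this family too: a single / replaces the first match, // replaces all matches. The sample strings are illustrative:

```shell
path="/opt/app/releases/current"
echo "${path/releases/archive}"   # first match: /opt/app/archive/current
csv="a.b.c"
echo "${csv//./-}"                # all matches: a-b-c
```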

10. Practical Script Template

Deployment Script

#!/usr/bin/env bash
set -euo pipefail

readonly SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
readonly APP_NAME="${1:?Usage: $0 <app-name> <version>}"
readonly VERSION="${2:?Usage: $0 <app-name> <version>}"
readonly DEPLOY_ENV="${DEPLOY_ENV:-staging}"
readonly LOG_FILE="/var/log/deploy/${APP_NAME}-$(date +%Y%m%d-%H%M%S).log"

# --- Logging ---
log()  { printf '[%s] [%-5s] %s\n' "$(date +%T)" "$1" "$2" | tee -a "$LOG_FILE" >&2; }
info() { log INFO "$1"; }
warn() { log WARN "$1"; }
die()  { log ERROR "$1"; exit 1; }

# --- Pre-flight checks ---
preflight() {
  info "Starting pre-flight checks"
  command -v docker   >/dev/null || die "docker is not installed"
  command -v kubectl  >/dev/null || die "kubectl is not installed"

  local context
  context=$(kubectl config current-context)
  [[ "$context" == *"$DEPLOY_ENV"* ]] || die "kubectl context($context) does not match $DEPLOY_ENV"
  info "Pre-flight checks passed (context: $context)"
}

# --- Deploy ---
deploy() {
  info "Starting deployment: $APP_NAME:$VERSION -> $DEPLOY_ENV"

  kubectl set image "deployment/$APP_NAME" \
    "$APP_NAME=registry.internal/$APP_NAME:$VERSION" \
    --record   # deprecated in recent kubectl releases; kept here for older clusters

  info "Waiting for rollout..."
  if ! kubectl rollout status "deployment/$APP_NAME" --timeout=300s; then
    warn "Rollout failed, executing rollback"
    kubectl rollout undo "deployment/$APP_NAME"
    die "Deployment failed -> Rollback complete"
  fi

  info "Deployment successful"
}

# --- Main ---
main() {
  mkdir -p "$(dirname "$LOG_FILE")"
  info "=== Deploying $APP_NAME $VERSION ($DEPLOY_ENV) ==="
  preflight
  deploy
  info "=== Deployment complete ==="
}

main "$@"

Final Checklist

  • Did you declare set -euo pipefail at the top of the script?
  • Did you wrap all variables in double quotes ("$var")?
  • Did you avoid passing external input (user input, filenames) directly to commands?
  • Did you ensure temp file/process cleanup with trap?
  • Did you minimize unnecessary external command calls inside loops?
  • Did you pass static analysis with ShellCheck (shellcheck script.sh)?
  • If POSIX compatibility is needed, did you avoid Bash-specific syntax?

Shell is a tool that is "fast when you know it, dangerous when you don't." Build a strong foundation in the basics and make safe patterns a habit, and you can confidently solve problems in any server environment.
