The Complete Guide to vLLM & Ollama: LLM Serving Engine Setup, Parameters, and Environment Variables


Part 1: vLLM

1. Introduction to vLLM

vLLM is a high-performance LLM inference and serving engine developed at UC Berkeley. Since its release alongside the PagedAttention paper in 2023, it has established itself as the de facto standard for production LLM serving. As of March 2026, the latest version is v0.16.x, and the transition to the V1 architecture is underway.

1.1 Core Principles of PagedAttention

In traditional LLM inference, KV Cache is allocated in contiguous GPU memory blocks per sequence. This approach pre-reserves memory based on the maximum sequence length, resulting in 60-80% memory waste in practice.

PagedAttention introduces the operating system's Virtual Memory Paging concept to KV Cache management.

┌─────────────────────────────────────────────────┐
│ Traditional KV Cache                            │
│   Seq 1: [used][used][used][waste][waste][waste]│
│   Seq 2: [used][waste][waste][waste][waste]     │
│   Seq 3: [used][used][waste][waste][waste]      │
│              → 60~80% memory waste              │
├─────────────────────────────────────────────────┤
│ PagedAttention KV Cache                         │
│   Physical blocks: [B0][B1][B2][B3][B4][B5]...  │
│   Block table (logical → physical):             │
│     Seq 1: [B0, B3, B5]                         │
│     Seq 2: [B1, B4]                             │
│     Seq 3: [B2, B6]                             │
│              → < 4% memory waste                │
└─────────────────────────────────────────────────┘

The core mechanisms are as follows.

  • Fixed-size blocks: KV Cache is split into fixed-size blocks (default 16 tokens)
  • Block Table: Maintains a table mapping logical block numbers of sequences to physical block addresses
  • Dynamic allocation: Physical blocks are allocated only as needed during token generation
  • Copy-on-Write: When branching sequences (e.g., Beam Search), physical blocks are shared and copied only when modification is needed
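
The block-table mechanism above can be sketched in a few lines of Python. This is a toy model for illustration only; the class name and block counts are invented, and vLLM's real allocator is far more involved.

```python
# Toy sketch of PagedAttention-style block allocation (illustrative only;
# vLLM's real allocator is far more involved).
BLOCK_SIZE = 16  # tokens per KV-cache block (vLLM's default)

class BlockAllocator:
    def __init__(self, num_physical_blocks):
        self.free = list(range(num_physical_blocks))
        self.block_tables = {}  # seq_id -> list of physical block ids

    def append_token(self, seq_id, seq_len):
        """Allocate a new physical block only when the sequence crosses
        a block boundary: allocation is dynamic, not reserved up front."""
        table = self.block_tables.setdefault(seq_id, [])
        blocks_needed = -(-seq_len // BLOCK_SIZE)   # ceil division
        while len(table) < blocks_needed:
            table.append(self.free.pop(0))

alloc = BlockAllocator(num_physical_blocks=8)
for token_count in range(1, 21):     # sequence 0 grows to 20 tokens
    alloc.append_token(0, token_count)
# 20 tokens occupy ceil(20/16) = 2 blocks; the other 6 blocks stay free.
```

Because nothing is reserved for the maximum sequence length, the waste is bounded by at most one partially filled block per sequence.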

1.2 Continuous Batching

Traditional Static Batching waits until all sequences in a batch complete. Continuous Batching removes completed sequences and inserts new requests at every decoding step.

Static Batching:
Step 1: [Seq1, Seq2, Seq3, Seq4]
Step 2: [Seq1, Seq2, Seq3, Seq4]   ← Seq2 finishes, but the batch keeps waiting
Step 3: [Seq1, ___, Seq3, Seq4]    ← Seq2's slot sits idle
...
Step N: the next batch starts only after every sequence completes

Continuous Batching:
Step 1: [Seq1, Seq2, Seq3, Seq4]
Step 2: [Seq1, Seq5, Seq3, Seq4]   ← Seq5 admitted as soon as Seq2 completes
Step 3: [Seq1, Seq5, Seq6, Seq4]   ← Seq6 admitted as soon as Seq3 completes
→ Minimizes GPU idle time, maximizes throughput
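
The difference can be simulated with a toy scheduler (the slot count and sequence lengths below are invented for illustration):

```python
from collections import deque

def continuous_batching(lengths, max_batch):
    """Toy continuous-batching simulation: finished sequences leave and
    waiting requests are admitted at every decoding step."""
    waiting = deque(enumerate(lengths))    # (seq_id, tokens_remaining)
    running, steps, completed = [], 0, []
    while waiting or running:
        # admit new requests into freed slots at every decoding step
        while waiting and len(running) < max_batch:
            running.append(list(waiting.popleft()))
        steps += 1
        for seq in running:
            seq[1] -= 1                    # one decode step per sequence
        completed += [s[0] for s in running if s[1] == 0]
        running = [s for s in running if s[1] > 0]
    return steps, completed

# Five requests, four slots: finishes in 3 steps. A static batch of the
# first four would need 3 steps, then 2 more for the fifth request.
steps, order = continuous_batching([3, 1, 2, 3, 2], max_batch=4)
```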

1.3 Supported Models

vLLM supports virtually all major Transformer-based LLM architectures.

| Category | Supported Models |
| --- | --- |
| Meta Llama family | Llama 2, Llama 3, Llama 3.1, Llama 3.2, Llama 3.3, Llama 4 |
| Mistral family | Mistral 7B, Mixtral 8x7B, Mixtral 8x22B, Mistral Large, Mistral Small |
| Qwen family | Qwen, Qwen 1.5, Qwen 2, Qwen 2.5, Qwen 3, QwQ |
| Google family | Gemma, Gemma 2, Gemma 3 |
| DeepSeek family | DeepSeek V2, DeepSeek V3, DeepSeek-R1 |
| Others | Phi-3/4, Yi, InternLM 2/3, Command R, DBRX, Falcon, StarCoder 2 |
| Multimodal | LLaVA, InternVL, Pixtral, Qwen-VL, MiniCPM-V |
| Embedding | E5-Mistral, GTE-Qwen, Jina Embeddings |

1.4 LLM Serving Engine Comparison

| Item | vLLM | TGI | TensorRT-LLM | llama.cpp |
| --- | --- | --- | --- | --- |
| Developer | UC Berkeley / vLLM Project | Hugging Face | NVIDIA | Georgi Gerganov |
| Language | Python/C++/CUDA | Rust/Python | C++/CUDA | C/C++ |
| Core technology | PagedAttention | Continuous Batching | FP8/INT4 kernel optimization | GGUF quantization |
| Multi-GPU | TP + PP | TP | TP + PP | Limited |
| Quantization | AWQ, GPTQ, FP8, BnB | AWQ, GPTQ, BnB | FP8, INT4, INT8 | GGUF (Q2~Q8) |
| API compat | OpenAI-compatible | OpenAI-compatible | Triton | Custom API |
| Install difficulty | Medium | Medium | High | Low |
| Production ready | Very high | High | Very high | Low~Medium |
| Community | Very active | Active | NVIDIA-led | Very active |

2. vLLM Installation and Startup

2.1 pip Installation

# Basic installation (CUDA 12.x)
pip install vllm

# Specific version installation
pip install vllm==0.16.0

# CUDA 11.8 environment
pip install vllm --extra-index-url https://download.pytorch.org/whl/cu118

2.2 conda Installation

conda create -n vllm python=3.11 -y
conda activate vllm
pip install vllm

2.3 Docker Installation

# Official Docker image (NVIDIA GPU)
docker run --runtime nvidia --gpus all \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HUGGING_FACE_HUB_TOKEN=<hf_token>" \
  -p 8000:8000 \
  --ipc=host \
  vllm/vllm-openai:latest \
  --model meta-llama/Llama-3.1-8B-Instruct

# ROCm (AMD GPU)
docker run --device /dev/kfd --device /dev/dri \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  -p 8000:8000 \
  vllm/vllm-openai:latest-rocm \
  --model meta-llama/Llama-3.1-8B-Instruct

2.4 Basic Server Start

# vllm serve command (recommended)
vllm serve meta-llama/Llama-3.1-8B-Instruct \
  --host 0.0.0.0 \
  --port 8000

# Direct Python module execution (legacy)
python -m vllm.entrypoints.openai.api_server \
  --model meta-llama/Llama-3.1-8B-Instruct \
  --host 0.0.0.0 \
  --port 8000

# Start with YAML config file
vllm serve meta-llama/Llama-3.1-8B-Instruct \
  --config config.yaml

config.yaml example:

# vLLM server configuration file
host: '0.0.0.0'
port: 8000
tensor_parallel_size: 2
gpu_memory_utilization: 0.90
max_model_len: 8192
dtype: 'auto'
enforce_eager: false
enable_prefix_caching: true

2.5 Offline Batch Inference

You can perform batch inference directly from Python code without starting a server.

from vllm import LLM, SamplingParams

# Load model
llm = LLM(
    model="meta-llama/Llama-3.1-8B-Instruct",
    tensor_parallel_size=1,
    gpu_memory_utilization=0.90,
)

# Set sampling parameters
sampling_params = SamplingParams(
    temperature=0.7,
    top_p=0.9,
    max_tokens=512,
)

# Prompt list
prompts = [
    "Explain PagedAttention in simple terms.",
    "What is continuous batching?",
    "Compare vLLM and TensorRT-LLM.",
]

# Run batch inference
outputs = llm.generate(prompts, sampling_params)

for output in outputs:
    prompt = output.prompt
    generated = output.outputs[0].text
    print(f"Prompt: {prompt!r}")
    print(f"Output: {generated!r}\n")

2.6 OpenAI-Compatible API Server

The vLLM server provides OpenAI API-compatible endpoints.

# Start server
vllm serve meta-llama/Llama-3.1-8B-Instruct \
  --served-model-name llama-3.1-8b \
  --api-key my-secret-key

# Call Chat Completion with curl
curl http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer my-secret-key" \
  -d '{
    "model": "llama-3.1-8b",
    "messages": [
      {"role": "system", "content": "You are a helpful assistant."},
      {"role": "user", "content": "What is PagedAttention?"}
    ],
    "temperature": 0.7,
    "max_tokens": 512
  }'

# Call with OpenAI SDK
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",
    api_key="my-secret-key",
)

response = client.chat.completions.create(
    model="llama-3.1-8b",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Explain the advantages of vLLM."},
    ],
    temperature=0.7,
    max_tokens=512,
)

print(response.choices[0].message.content)

3. Complete vLLM CLI Arguments Reference

Here is a categorized summary of key CLI arguments that can be passed to vllm serve. You can check the full list with vllm serve --help, or query by group with vllm serve --help=ModelConfig.

3.1 Model Arguments

| Argument | Type | Default | Description |
| --- | --- | --- | --- |
| --model | str | facebook/opt-125m | HuggingFace model ID or local path |
| --tokenizer | str | None (same as model) | Specify a separate tokenizer |
| --revision | str | None | Specific Git revision of the model (branch, tag, commit hash) |
| --tokenizer-revision | str | None | Specific revision of the tokenizer |
| --dtype | str | "auto" | Model weight data type (auto, float16, bfloat16, float32) |
| --max-model-len | int | None (follows model config) | Maximum sequence length (sum of input + output tokens) |
| --trust-remote-code | flag | False | Allow HuggingFace remote code execution |
| --download-dir | str | None | Model download directory |
| --load-format | str | "auto" | Model load format (auto, pt, safetensors, npcache, dummy, bitsandbytes) |
| --config-format | str | "auto" | Model configuration format (auto, hf, mistral) |
| --seed | int | 0 | Random seed for reproducibility |

3.2 Server and API Arguments

| Argument | Type | Default | Description |
| --- | --- | --- | --- |
| --host | str | "0.0.0.0" | Host address to bind |
| --port | int | 8000 | Server port number |
| --uvicorn-log-level | str | "info" | Uvicorn log level |
| --api-key | str | None | API authentication key (Bearer token) |
| --served-model-name | str | None | Model name exposed by the API (uses the --model value if unset) |
| --chat-template | str | None | Jinja2 chat template file path or string |
| --response-role | str | "assistant" | Role in chat completion responses |
| --ssl-keyfile | str | None | SSL key file path |
| --ssl-certfile | str | None | SSL certificate file path |
| --allowed-origins | list | ["*"] | CORS allowed origins |
| --middleware | list | None | FastAPI middleware classes |
| --max-log-len | int | None | Maximum prompt/output length in logs |
| --disable-log-requests | flag | False | Disable request logging |

3.3 Parallelism Arguments

| Argument | Type | Default | Description |
| --- | --- | --- | --- |
| --tensor-parallel-size (-tp) | int | 1 | Number of GPUs for Tensor Parallelism |
| --pipeline-parallel-size (-pp) | int | 1 | Number of Pipeline Parallelism stages |
| --distributed-executor-backend | str | None | Distributed execution backend (ray, mp) |
| --ray-workers-use-nsight | flag | False | Use the Nsight profiler with Ray workers |
| --data-parallel-size (-dp) | int | 1 | Number of Data Parallelism processes |

Usage examples:

# 4-GPU Tensor Parallelism
vllm serve meta-llama/Llama-3.1-70B-Instruct \
  --tensor-parallel-size 4

# 2-GPU Tensor + 2-way Pipeline (4 GPUs total)
vllm serve meta-llama/Llama-3.1-70B-Instruct \
  --tensor-parallel-size 2 \
  --pipeline-parallel-size 2

# Ray distributed backend (multi-node)
vllm serve meta-llama/Llama-3.1-70B-Instruct \
  --tensor-parallel-size 8 \
  --distributed-executor-backend ray

3.4 Memory and Performance Arguments

| Argument | Type | Default | Description |
| --- | --- | --- | --- |
| --gpu-memory-utilization | float | 0.90 | GPU memory usage ratio (0.0~1.0) |
| --max-num-seqs | int | 256 | Maximum concurrent sequences |
| --max-num-batched-tokens | int | None (auto) | Maximum tokens processed per step |
| --block-size | int | 16 | PagedAttention block size (in tokens) |
| --swap-space | float | 4 | CPU swap space size (GiB) |
| --enforce-eager | flag | False | Disable CUDA Graphs and force eager mode |
| --max-seq-len-to-capture | int | 8192 | Maximum sequence length for CUDA Graph capture |
| --disable-custom-all-reduce | flag | False | Disable the custom all-reduce kernel |
| --enable-prefix-caching | flag | True (V1) | Enable automatic prefix caching |
| --enable-chunked-prefill | flag | True (V1) | Enable chunked prefill |
| --num-scheduler-steps | int | 1 | Decoding steps per scheduler call (multi-step scheduling) |
| --kv-cache-dtype | str | "auto" | KV cache data type (auto, fp8, fp8_e5m2, fp8_e4m3) |

Usage examples:

# Memory optimization settings
vllm serve meta-llama/Llama-3.1-8B-Instruct \
  --gpu-memory-utilization 0.95 \
  --max-num-seqs 128 \
  --max-model-len 4096 \
  --enable-prefix-caching \
  --enable-chunked-prefill

# Eager mode (debugging/compatibility)
vllm serve meta-llama/Llama-3.1-8B-Instruct \
  --enforce-eager \
  --gpu-memory-utilization 0.85
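
As a rule of thumb for tuning these memory settings, the per-token KV-cache footprint can be estimated from the model config. A back-of-the-envelope sketch; the Llama-3.1-8B-style figures below (32 layers, 8 KV heads under GQA, head_dim 128, fp16) are taken as assumptions:

```python
# Rough KV-cache sizing: 2 tensors (K and V) per layer, one head_dim
# vector per KV head, at bytes_per_elem precision.
def kv_bytes_per_token(num_layers, num_kv_heads, head_dim, bytes_per_elem=2):
    return 2 * num_layers * num_kv_heads * head_dim * bytes_per_elem

# Assumed Llama-3.1-8B-style config: 32 layers, 8 KV heads, head_dim 128, fp16
per_token = kv_bytes_per_token(32, 8, 128, 2)   # 131072 bytes = 128 KiB/token
# An 8192-token context then needs ~1 GiB of KV cache per sequence:
per_seq_gib = per_token * 8192 / 2**30
```

Numbers like these explain why lowering --max-model-len or --max-num-seqs is often the quickest way to fit a model that OOMs at startup.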

3.5 Quantization Arguments

| Argument | Type | Default | Description |
| --- | --- | --- | --- |
| --quantization (-q) | str | None | Quantization method to use |
| --load-format | str | "auto" | Model load format |

--quantization supported values:

| Value | Description | Notes |
| --- | --- | --- |
| awq | AWQ (Activation-aware Weight Quantization) | 4-bit, fast inference |
| gptq | GPTQ (Post-Training Quantization) | 4-bit, ExLlamaV2 kernel |
| gptq_marlin | GPTQ + Marlin kernel | 4-bit, faster kernel |
| awq_marlin | AWQ + Marlin kernel | 4-bit, faster kernel |
| squeezellm | SqueezeLLM | Sparse quantization |
| fp8 | FP8 (8-bit floating point) | H100/MI300X and above |
| bitsandbytes | BitsAndBytes | 4-bit NF4 |
| gguf | GGUF format | llama.cpp-compatible |
| compressed-tensors | Compressed Tensors | General purpose |
| experts_int8 | MoE expert INT8 | MoE models only |

Usage examples:

# AWQ quantized model
vllm serve TheBloke/Llama-2-7B-AWQ \
  --quantization awq

# GPTQ quantized model
vllm serve TheBloke/Llama-2-7B-GPTQ \
  --quantization gptq

# FP8 quantization (H100 and above)
vllm serve neuralmagic/Meta-Llama-3.1-8B-Instruct-FP8 \
  --quantization fp8

# BitsAndBytes 4-bit (GPU memory saving)
vllm serve meta-llama/Llama-3.1-8B-Instruct \
  --quantization bitsandbytes \
  --load-format bitsandbytes

3.6 LoRA Arguments

| Argument | Type | Default | Description |
| --- | --- | --- | --- |
| --enable-lora | flag | False | Enable LoRA adapter serving |
| --max-loras | int | 1 | Maximum number of simultaneously loaded LoRAs |
| --max-lora-rank | int | 16 | Maximum LoRA rank |
| --lora-extra-vocab-size | int | 256 | Extra vocabulary size for LoRA adapters |
| --lora-modules | list | None | LoRA adapter list (name=path format) |
| --long-lora-scaling-factors | list | None | Long LoRA scaling factors |

Usage example:

# LoRA adapter serving
vllm serve meta-llama/Llama-3.1-8B-Instruct \
  --enable-lora \
  --max-loras 4 \
  --max-lora-rank 64 \
  --lora-modules \
    adapter1=/path/to/lora1 \
    adapter2=/path/to/lora2

3.7 Speculative Decoding Arguments

| Argument | Type | Default | Description |
| --- | --- | --- | --- |
| --speculative-model | str | None | Draft model (a small model, or [ngram]) |
| --num-speculative-tokens | int | None | Number of tokens to generate speculatively |
| --speculative-draft-tensor-parallel-size | int | None | TP size for the draft model |
| --speculative-disable-by-batch-size | int | None | Disable speculation when batch size exceeds this threshold |
| --ngram-prompt-lookup-max | int | None | Maximum lookup window for n-gram speculation |
| --ngram-prompt-lookup-min | int | None | Minimum lookup window for n-gram speculation |

Usage examples:

# Using a separate draft model
vllm serve meta-llama/Llama-3.1-70B-Instruct \
  --speculative-model meta-llama/Llama-3.2-1B-Instruct \
  --num-speculative-tokens 5 \
  --tensor-parallel-size 4

# N-gram based speculative decoding (no additional model needed)
vllm serve meta-llama/Llama-3.1-8B-Instruct \
  --speculative-model "[ngram]" \
  --num-speculative-tokens 5 \
  --ngram-prompt-lookup-max 4

4. vLLM Sampling Parameters

vLLM supports OpenAI API-compatible parameters plus additional advanced parameters.

4.1 Complete Parameter Reference

| Parameter | Type | Default | Range | Description |
| --- | --- | --- | --- | --- |
| temperature | float | 1.0 | >= 0.0 | Lower is more deterministic, higher is more creative; 0 = greedy |
| top_p | float | 1.0 | (0.0, 1.0] | Nucleus sampling: sample only from the top tokens by cumulative probability |
| top_k | int | -1 | -1 or >= 1 | Consider only the top k tokens; -1 disables |
| min_p | float | 0.0 | [0.0, 1.0] | Minimum probability threshold, relative to the most likely token |
| frequency_penalty | float | 0.0 | [-2.0, 2.0] | Frequency-based penalty; positive values suppress repetition |
| presence_penalty | float | 0.0 | [-2.0, 2.0] | Presence-based penalty; penalizes tokens that have appeared at least once |
| repetition_penalty | float | 1.0 | > 0.0 | Repetition penalty (1.0 disables; > 1.0 suppresses) |
| max_tokens | int | 16 | >= 1 | Maximum tokens to generate |
| stop | list | None | - | List of stop strings |
| seed | int | None | - | Random seed (ensures reproducibility) |
| n | int | 1 | >= 1 | Number of responses per prompt |
| best_of | int | None | >= n | Generate best_of candidates and return the best |
| use_beam_search | bool | False | - | Enable beam search |
| logprobs | int | None | [0, 20] | Number of per-token log probabilities to return |
| prompt_logprobs | int | None | [0, 20] | Number of prompt-token log probabilities to return |
| skip_special_tokens | bool | True | - | Whether to skip special tokens in the output |
| spaces_between_special_tokens | bool | True | - | Insert spaces between special tokens |
| guided_json | object | None | - | JSON Schema-based structured output |
| guided_regex | str | None | - | Regex-based structured output |
| guided_choice | list | None | - | Choice-based structured output |
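
As a concrete illustration of what top_p does, the filtering step can be sketched in pure Python (the probabilities are invented; vLLM's implementation operates on logit tensors):

```python
# Minimal sketch of nucleus (top_p) filtering over a token distribution.
def top_p_filter(probs, top_p):
    """Keep the smallest set of tokens whose cumulative probability
    reaches top_p, then renormalize the survivors."""
    ranked = sorted(probs.items(), key=lambda kv: kv[1], reverse=True)
    kept, cum = {}, 0.0
    for token, p in ranked:
        kept[token] = p
        cum += p
        if cum >= top_p:
            break
    total = sum(kept.values())
    return {t: p / total for t, p in kept.items()}

probs = {"the": 0.5, "a": 0.3, "an": 0.15, "x": 0.05}
filtered = top_p_filter(probs, top_p=0.9)   # drops the 0.05 tail token
```

With top_p = 0.9, the cumulative mass 0.5 + 0.3 + 0.15 crosses the threshold at three tokens, so the low-probability tail is never sampled.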

4.2 API Call Examples with curl

# Basic Chat Completion
curl http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "meta-llama/Llama-3.1-8B-Instruct",
    "messages": [
      {"role": "user", "content": "What is the population of Seoul?"}
    ],
    "temperature": 0.3,
    "top_p": 0.9,
    "max_tokens": 256,
    "frequency_penalty": 0.5,
    "seed": 42
  }'

# Structured Output (JSON mode)
curl http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "meta-llama/Llama-3.1-8B-Instruct",
    "messages": [
      {"role": "user", "content": "Give me the population of Seoul, Busan, and Daegu in JSON"}
    ],
    "response_format": {
      "type": "json_schema",
      "json_schema": {
        "name": "city_population",
        "schema": {
          "type": "object",
          "properties": {
            "cities": {
              "type": "array",
              "items": {
                "type": "object",
                "properties": {
                  "name": {"type": "string"},
                  "population": {"type": "integer"}
                },
                "required": ["name", "population"]
              }
            }
          },
          "required": ["cities"]
        }
      }
    },
    "temperature": 0.1,
    "max_tokens": 512
  }'

# Returning logprobs
curl http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "meta-llama/Llama-3.1-8B-Instruct",
    "messages": [
      {"role": "user", "content": "1+1=?"}
    ],
    "logprobs": true,
    "top_logprobs": 5,
    "max_tokens": 10
  }'

4.3 Python requests Example

import requests
import json

url = "http://localhost:8000/v1/chat/completions"
headers = {"Content-Type": "application/json"}

payload = {
    "model": "meta-llama/Llama-3.1-8B-Instruct",
    "messages": [
        {"role": "system", "content": "You are a helpful Korean assistant."},
        {"role": "user", "content": "What is quantum computing?"},
    ],
    "temperature": 0.7,
    "top_p": 0.9,
    "top_k": 50,
    "max_tokens": 1024,
    "repetition_penalty": 1.1,
    "stop": ["\n\n\n"],
}

response = requests.post(url, headers=headers, json=payload)
result = response.json()
print(result["choices"][0]["message"]["content"])

4.4 Streaming Example with OpenAI SDK

from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="token")

# Streaming response
stream = client.chat.completions.create(
    model="meta-llama/Llama-3.1-8B-Instruct",
    messages=[
        {"role": "user", "content": "Implement quicksort in Python"},
    ],
    temperature=0.2,
    max_tokens=2048,
    stream=True,
)

for chunk in stream:
    if chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="", flush=True)
print()

5. Complete vLLM Environment Variables Reference

vLLM controls runtime behavior through various environment variables. Here is a categorized summary of key environment variables.

5.1 Core Environment Variables

| Environment Variable | Default | Description |
| --- | --- | --- |
| VLLM_TARGET_DEVICE | "cuda" | Target device (cuda, rocm, neuron, cpu, xpu) |
| VLLM_USE_V1 | True | Use the V1 code path |
| VLLM_WORKER_MULTIPROC_METHOD | "fork" | Multiprocessing start method (spawn, fork) |
| VLLM_ALLOW_LONG_MAX_MODEL_LEN | False | Allow max_model_len longer than the model config |
| CUDA_VISIBLE_DEVICES | None | GPU device numbers to use |

5.2 Attention and Kernel Variables

| Environment Variable | Default | Description |
| --- | --- | --- |
| VLLM_ATTENTION_BACKEND | None | Attention backend (deprecated; use --attention-backend from v0.14) |
| VLLM_USE_TRITON_FLASH_ATTN | True | Use Triton Flash Attention |
| VLLM_FLASH_ATTN_VERSION | None | Force a Flash Attention version (2 or 3) |
| VLLM_USE_FLASHINFER_SAMPLER | None | Use the FlashInfer sampler |
| VLLM_FLASHINFER_FORCE_TENSOR_CORES | False | Force FlashInfer tensor-core usage |
| VLLM_USE_TRITON_AWQ | False | Use the Triton AWQ kernel |
| VLLM_USE_DEEP_GEMM | False | Use the DeepGEMM kernel (MoE operations) |
| VLLM_MLA_DISABLE | False | Disable the MLA attention optimization |

5.3 Logging and Debugging Variables

| Environment Variable | Default | Description |
| --- | --- | --- |
| VLLM_CONFIGURE_LOGGING | 1 | Auto-configure vLLM logging (0 to disable) |
| VLLM_LOGGING_LEVEL | "INFO" | Default logging level |
| VLLM_LOGGING_CONFIG_PATH | None | Custom logging config file path |
| VLLM_LOGGING_PREFIX | "" | Prefix prepended to log messages |
| VLLM_LOG_BATCHSIZE_INTERVAL | -1 | Batch-size logging interval in seconds (-1 disables) |
| VLLM_TRACE_FUNCTION | 0 | Enable function-call tracing |
| VLLM_DEBUG_LOG_API_SERVER_RESPONSE | False | Debug-log API server responses |

5.4 Distributed and Parallelism Variables

| Environment Variable | Default | Description |
| --- | --- | --- |
| VLLM_HOST_IP | "" | Node IP for distributed setups |
| VLLM_PORT | 0 | Distributed communication port |
| VLLM_NCCL_SO_PATH | None | NCCL library file path |
| NCCL_DEBUG | None | NCCL debug level (INFO, WARN, TRACE) |
| NCCL_SOCKET_IFNAME | None | Network interface for NCCL communication |
| VLLM_PP_LAYER_PARTITION | None | Pipeline Parallelism layer partition strategy |
| VLLM_DP_RANK | 0 | Data Parallel process rank |
| VLLM_DP_SIZE | 1 | Data Parallel world size |
| VLLM_DP_MASTER_IP | "127.0.0.1" | Data Parallel master node IP |
| VLLM_DP_MASTER_PORT | 0 | Data Parallel master node port |
| VLLM_USE_RAY_SPMD_WORKER | False | Use Ray SPMD worker execution |
| VLLM_USE_RAY_COMPILED_DAG | False | Use the Ray Compiled Graph API |
| VLLM_SKIP_P2P_CHECK | False | Skip the GPU P2P capability check |

5.5 HuggingFace and External Services

| Environment Variable | Default | Description |
| --- | --- | --- |
| HF_TOKEN | None | HuggingFace API token |
| HUGGING_FACE_HUB_TOKEN | None | HuggingFace Hub token (legacy) |
| VLLM_USE_MODELSCOPE | False | Load models from ModelScope |
| VLLM_API_KEY | None | vLLM API server auth key |
| VLLM_NO_USAGE_STATS | False | Disable usage-stats collection |
| VLLM_DO_NOT_TRACK | False | Opt out of tracking |

5.6 Cache and Paths

| Environment Variable | Default | Description |
| --- | --- | --- |
| VLLM_CONFIG_ROOT | ~/.config/vllm | Config file root directory |
| VLLM_CACHE_ROOT | ~/.cache/vllm | Cache file root directory |
| VLLM_ASSETS_CACHE | ~/.cache/vllm/assets | Downloaded-assets cache path |
| VLLM_RPC_BASE_PATH | System temp dir | IPC multiprocessing path |

5.7 Environment Variable Usage Examples

# Multi-GPU + logging + HF token setup
export CUDA_VISIBLE_DEVICES=0,1,2,3
export HF_TOKEN="hf_xxxxxxxxxxxx"
export VLLM_LOGGING_LEVEL="DEBUG"
export VLLM_WORKER_MULTIPROC_METHOD="spawn"

vllm serve meta-llama/Llama-3.1-70B-Instruct \
  --tensor-parallel-size 4 \
  --gpu-memory-utilization 0.90

# Passing environment variables in Docker
docker run --runtime nvidia --gpus all \
  -e CUDA_VISIBLE_DEVICES=0,1 \
  -e HF_TOKEN="hf_xxxxxxxxxxxx" \
  -e VLLM_LOGGING_LEVEL="INFO" \
  -e VLLM_WORKER_MULTIPROC_METHOD="spawn" \
  -p 8000:8000 \
  --ipc=host \
  vllm/vllm-openai:latest \
  --model meta-llama/Llama-3.1-8B-Instruct \
  --tensor-parallel-size 2

6. Advanced vLLM Configuration

6.1 Multi-GPU Setup

Tensor Parallelism (TP): Distributes each layer of the model across multiple GPUs. The most commonly used approach on a single node.

# TP=4 (distribute model across 4 GPUs)
vllm serve meta-llama/Llama-3.1-70B-Instruct \
  --tensor-parallel-size 4 \
  --gpu-memory-utilization 0.90

Pipeline Parallelism (PP): Places model layers sequentially across multiple GPUs. Advantageous in slow interconnect environments.

# PP=2, TP=2 (4 GPUs total, 2x2 configuration)
vllm serve meta-llama/Llama-3.1-70B-Instruct \
  --tensor-parallel-size 2 \
  --pipeline-parallel-size 2

Multi-node setup (using Ray):

# Master node
ray start --head --port=6379

# Worker node
ray start --address=<master-ip>:6379

# Run vLLM (from master)
vllm serve meta-llama/Llama-3.1-405B-Instruct \
  --tensor-parallel-size 8 \
  --pipeline-parallel-size 2 \
  --distributed-executor-backend ray

6.2 Quantization Details

AWQ (Activation-aware Weight Quantization):

# Using pre-quantized AWQ model
vllm serve TheBloke/Llama-2-13B-chat-AWQ \
  --quantization awq \
  --max-model-len 4096

# Faster with Marlin kernel (SM 80+ GPU)
vllm serve TheBloke/Llama-2-13B-chat-AWQ \
  --quantization awq_marlin

GPTQ (Post-Training Quantization):

# GPTQ model (ExLlamaV2 kernel auto-used)
vllm serve TheBloke/Llama-2-13B-chat-GPTQ \
  --quantization gptq

# Using Marlin kernel
vllm serve TheBloke/Llama-2-13B-chat-GPTQ \
  --quantization gptq_marlin

FP8 (8-bit Floating Point): Hardware acceleration supported on H100, MI300x and above GPUs.

# Pre-quantized FP8 model
vllm serve neuralmagic/Meta-Llama-3.1-8B-Instruct-FP8 \
  --quantization fp8

# Dynamic FP8 quantization (no pre-quantization needed)
vllm serve meta-llama/Llama-3.1-8B-Instruct \
  --quantization fp8 \
  --kv-cache-dtype fp8

BitsAndBytes 4-bit NF4: Instant quantization without calibration data.

vllm serve meta-llama/Llama-3.1-8B-Instruct \
  --quantization bitsandbytes \
  --load-format bitsandbytes \
  --enforce-eager  # BnB requires Eager mode

6.3 LoRA Serving

# Enable LoRA adapters
vllm serve meta-llama/Llama-3.1-8B-Instruct \
  --enable-lora \
  --max-loras 4 \
  --max-lora-rank 64 \
  --lora-modules \
    korean-chat=/path/to/korean-lora \
    code-assist=/path/to/code-lora

Specifying a LoRA model in API calls:

from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="token")

# Use a specific LoRA adapter
response = client.chat.completions.create(
    model="korean-chat",  # LoRA adapter name
    messages=[{"role": "user", "content": "Hello!"}],
    temperature=0.7,
    max_tokens=256,
)

6.4 Prefix Caching & Chunked Prefill

Automatic Prefix Caching: Reuses KV Cache for common prompt prefixes to reduce TTFT. Especially effective when many requests share the same system prompt.

# Enabled by default in v1, requires explicit flag in v0
vllm serve meta-llama/Llama-3.1-8B-Instruct \
  --enable-prefix-caching
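
Prefix-cache reuse is block-granular: only the portion of a shared prefix that fills whole blocks can be matched and reused. A toy sketch of the reusable-token count, assuming the default 16-token block size:

```python
# Prefix-cache reuse is block-granular: only whole 16-token blocks of a
# shared prefix can be matched and reused across requests.
BLOCK_SIZE = 16

def cached_prefix_tokens(shared_prefix_len, block_size=BLOCK_SIZE):
    """Tokens whose KV cache can be reused: the shared prefix rounded
    down to a whole number of blocks."""
    return (shared_prefix_len // block_size) * block_size

# A 500-token system prompt shared by every request:
reused = cached_prefix_tokens(500)   # 31 full blocks -> 496 tokens reused
```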

Chunked Prefill: Splits long prompts into chunks and interleaves Prefill and Decode. Prevents long prompts from blocking Decode of shorter requests.

vllm serve meta-llama/Llama-3.1-8B-Instruct \
  --enable-chunked-prefill \
  --max-num-batched-tokens 2048

6.5 Structured Output (Guided Decoding)

# JSON Schema-based structured output
curl http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "meta-llama/Llama-3.1-8B-Instruct",
    "messages": [
      {"role": "user", "content": "Please provide Seoul weather info in JSON"}
    ],
    "response_format": {
      "type": "json_schema",
      "json_schema": {
        "name": "weather_info",
        "schema": {
          "type": "object",
          "properties": {
            "city": {"type": "string"},
            "temperature_celsius": {"type": "number"},
            "condition": {"type": "string"},
            "humidity_percent": {"type": "integer"}
          },
          "required": ["city", "temperature_celsius", "condition"]
        }
      }
    }
  }'

# Regex-based output (Completion API)
curl http://localhost:8000/v1/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "meta-llama/Llama-3.1-8B-Instruct",
    "prompt": "Generate a valid email address:",
    "guided_regex": "[a-zA-Z0-9._%+-]+@[a-zA-Z0-9.-]+\\.[a-zA-Z]{2,}",
    "max_tokens": 50
  }'
# (with the OpenAI Python SDK, pass guided_regex via extra_body instead;
#  in a raw JSON request it goes at the top level as above)

6.6 Docker Deployment

# docker-compose.yaml
version: '3.8'

services:
  vllm:
    image: vllm/vllm-openai:latest
    runtime: nvidia
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all
              capabilities: [gpu]
    ports:
      - '8000:8000'
    volumes:
      - ~/.cache/huggingface:/root/.cache/huggingface
    environment:
      - HF_TOKEN=${HF_TOKEN}
      - VLLM_LOGGING_LEVEL=INFO
      - VLLM_WORKER_MULTIPROC_METHOD=spawn
    ipc: host
    command: >
      --model meta-llama/Llama-3.1-8B-Instruct
      --host 0.0.0.0
      --port 8000
      --tensor-parallel-size 2
      --gpu-memory-utilization 0.90
      --max-model-len 8192
      --enable-prefix-caching
    healthcheck:
      test: ['CMD', 'curl', '-f', 'http://localhost:8000/health']
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 120s

# Run with Docker Compose
HF_TOKEN=hf_xxxx docker compose up -d

# Check logs
docker compose logs -f vllm

6.7 Kubernetes Deployment

# vllm-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: vllm-llama3
  namespace: ai-serving
  labels:
    app: vllm
spec:
  replicas: 1
  selector:
    matchLabels:
      app: vllm
  template:
    metadata:
      labels:
        app: vllm
    spec:
      containers:
        - name: vllm
          image: vllm/vllm-openai:latest
          ports:
            - containerPort: 8000
              name: http
          args:
            - '--model'
            - 'meta-llama/Llama-3.1-8B-Instruct'
            - '--host'
            - '0.0.0.0'
            - '--port'
            - '8000'
            - '--tensor-parallel-size'
            - '2'
            - '--gpu-memory-utilization'
            - '0.90'
            - '--max-model-len'
            - '8192'
          env:
            - name: HF_TOKEN
              valueFrom:
                secretKeyRef:
                  name: hf-secret
                  key: token
            - name: VLLM_WORKER_MULTIPROC_METHOD
              value: 'spawn'
          resources:
            limits:
              nvidia.com/gpu: '2'
            requests:
              nvidia.com/gpu: '2'
              memory: '32Gi'
              cpu: '8'
          volumeMounts:
            - name: shm
              mountPath: /dev/shm
            - name: model-cache
              mountPath: /root/.cache/huggingface
          readinessProbe:
            httpGet:
              path: /health
              port: 8000
            initialDelaySeconds: 120
            periodSeconds: 10
          livenessProbe:
            httpGet:
              path: /health
              port: 8000
            initialDelaySeconds: 180
            periodSeconds: 30
      volumes:
        - name: shm
          emptyDir:
            medium: Memory
            sizeLimit: 2Gi
        - name: model-cache
          persistentVolumeClaim:
            claimName: model-cache-pvc
      nodeSelector:
        nvidia.com/gpu.product: 'NVIDIA-A100-SXM4-80GB'
---
apiVersion: v1
kind: Service
metadata:
  name: vllm-service
  namespace: ai-serving
spec:
  selector:
    app: vllm
  ports:
    - port: 8000
      targetPort: 8000
      name: http
  type: ClusterIP
---
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: vllm-hpa
  namespace: ai-serving
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: vllm-llama3
  minReplicas: 1
  maxReplicas: 4
  metrics:
    - type: Pods
      pods:
        metric:
          name: vllm_num_requests_running
        target:
          type: AverageValue
          averageValue: '50'

Part 2: Ollama

7. Introduction to Ollama

Ollama is an open-source tool that makes it easy to run LLMs locally. Much as Docker pulls and runs container images, a single command such as ollama run llama3.1 downloads a model and drops you straight into a chat session.

7.1 Architecture Features

  • GGUF-based: Uses llama.cpp's GGUF (GPT-Generated Unified Format) quantized models
  • llama.cpp engine: Internally uses llama.cpp as the inference engine
  • Single binary: Go server + llama.cpp C++ engine distributed as a single binary
  • Automatic GPU acceleration: Auto-detects NVIDIA CUDA, AMD ROCm, Apple Metal for GPU offloading
  • Model registry: Pull/push pre-quantized models from ollama.com/library like Docker Hub
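
Everything the CLI does goes through the local HTTP API on port 11434. A minimal sketch of calling the native /api/generate endpoint using only the standard library (the model name is an example; the actual call requires a running ollama serve):

```python
import json
import urllib.request

# Build a request against Ollama's native /api/generate endpoint.
def build_generate_request(model, prompt, stream=False):
    payload = {"model": model, "prompt": prompt, "stream": stream}
    return urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )

def generate(model, prompt):
    """Send the request and return the generated text
    (needs a running `ollama serve`)."""
    with urllib.request.urlopen(build_generate_request(model, prompt)) as r:
        return json.loads(r.read())["response"]

req = build_generate_request("llama3.1", "What is PagedAttention?")
```

With "stream": false the server returns a single JSON object whose "response" field holds the full completion.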

7.2 Supported Models

| Category | Models | Size |
| --- | --- | --- |
| Meta Llama | llama3.1, llama3.2, llama3.3 | 1B ~ 405B |
| Mistral | mistral, mixtral | 7B ~ 8x22B |
| Google | gemma, gemma2, gemma3 | 2B ~ 27B |
| Microsoft | phi3, phi4 | 3.8B ~ 14B |
| DeepSeek | deepseek-r1, deepseek-v3, deepseek-coder-v2 | 1.5B ~ 671B |
| Qwen | qwen, qwen2, qwen2.5, qwen3 | 0.5B ~ 72B |
| Code | codellama, starcoder2, qwen2.5-coder | 3B ~ 34B |
| Embedding | nomic-embed-text, mxbai-embed-large, all-minilm | - |
| Multimodal | llava, bakllava, llama3.2-vision | 7B ~ 90B |

8. Ollama Installation and Startup

8.1 Platform-Specific Installation

macOS:

# Homebrew
brew install ollama

# Or download the macOS app from ollama.com
# (the install.sh script used below for Linux does not target macOS)

Linux:

# Official install script (recommended)
curl -fsSL https://ollama.com/install.sh | sh

# Or manual installation (writing to /usr/local/bin requires root)
sudo curl -L https://ollama.com/download/ollama-linux-amd64 \
  -o /usr/local/bin/ollama
sudo chmod +x /usr/local/bin/ollama

Windows:

Download and run the Windows installer from the official website (ollama.com).

Docker:

# CPU only
docker run -d -v ollama:/root/.ollama -p 11434:11434 \
  --name ollama ollama/ollama

# NVIDIA GPU
docker run -d --gpus=all -v ollama:/root/.ollama -p 11434:11434 \
  --name ollama ollama/ollama

# AMD GPU (ROCm)
docker run -d --device /dev/kfd --device /dev/dri \
  -v ollama:/root/.ollama -p 11434:11434 \
  --name ollama ollama/ollama:rocm

8.2 Basic Usage

# Start server (if not auto-started in background)
ollama serve

# Download model and start chatting
ollama run llama3.1

# Specify a tag (size/quantization)
ollama run llama3.1:8b
ollama run llama3.1:70b-instruct-q4_K_M
ollama run qwen2.5:32b-instruct-q5_K_M

# Download model only (without running)
ollama pull llama3.1:8b

# One-line prompt
ollama run llama3.1 "What is PagedAttention?"

9. Complete Ollama CLI Commands Reference

9.1 Command Summary

| Command | Description | Key Options |
| --- | --- | --- |
| ollama serve | Start Ollama server | Check env vars with --help |
| ollama run <model> | Run model (auto-pulls if missing) | --verbose, --nowordwrap, --format json |
| ollama pull <model> | Download model | --insecure |
| ollama push <model> | Upload model to registry | --insecure |
| ollama create <model> | Create custom model from Modelfile | -f <Modelfile>, --quantize |
| ollama list / ollama ls | List installed models | - |
| ollama show <model> | Show model details | --modelfile, --parameters, --system, --template, --license |
| ollama cp <src> <dst> | Copy model | - |
| ollama rm <model> | Delete model | - |
| ollama ps | List running models | - |
| ollama stop <model> | Stop a running model | - |
| ollama signin | Sign in to ollama.com | - |
| ollama signout | Sign out from ollama.com | - |

9.2 Detailed Command Examples

ollama serve - Start server:

# Default start (localhost:11434)
ollama serve

# Change bind address via environment variable
OLLAMA_HOST=0.0.0.0:11434 ollama serve

# Debug mode
OLLAMA_DEBUG=1 ollama serve

ollama run - Run model:

# Interactive mode
ollama run llama3.1

# One-line prompt
ollama run llama3.1 "Explain quantum computing"

# JSON format output
ollama run llama3.1 "List 3 Korean cities" --format json

# Multimodal (image input)
ollama run llama3.2-vision "What's in this image? /path/to/image.png"

# Verbose mode (display performance stats)
ollama run llama3.1 --verbose

# With system prompt
ollama run llama3.1 --system "You are a Korean translator."

ollama create - Create custom model:

# Create from Modelfile
ollama create my-model -f ./Modelfile

# Create from GGUF file
ollama create my-model -f ./Modelfile-from-gguf

# Quantization conversion
ollama create my-model-q4 --quantize q4_K_M -f ./Modelfile

ollama show - Check model info:

# Full info
ollama show llama3.1

# Output Modelfile
ollama show llama3.1 --modelfile

# Check parameters
ollama show llama3.1 --parameters

# Check system prompt
ollama show llama3.1 --system

# Check template
ollama show llama3.1 --template

ollama ps - Running models:

$ ollama ps
NAME              ID            SIZE     PROCESSOR    UNTIL
llama3.1:8b       af2e33d4e25  6.7 GB   100% GPU     4 minutes from now
qwen2.5:7b        845dbda0ea48  4.7 GB   100% GPU     3 minutes from now

10. Ollama API Endpoints

Ollama provides both a REST API and an OpenAI-compatible API. The default address is http://localhost:11434.

10.1 Native API Endpoints

| Endpoint | Method | Description |
| --- | --- | --- |
| /api/generate | POST | Text Completion generation |
| /api/chat | POST | Chat Completion generation |
| /api/embed | POST | Generate embedding vectors |
| /api/tags | GET | List local models |
| /api/show | POST | Model details |
| /api/pull | POST | Download model |
| /api/push | POST | Upload model |
| /api/create | POST | Create custom model |
| /api/copy | POST | Copy model |
| /api/delete | DELETE | Delete model |
| /api/ps | GET | List running models |
| /api/version | GET | Ollama version info |
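The read-only endpoints are easy to script against. As a sketch (assuming a local server on the default port; `human_size` and `list_models` are illustrative helper names, not part of any Ollama SDK), the following inventories installed models via /api/tags:

```python
import requests

def human_size(n: float) -> str:
    """Render a byte count in human-readable units."""
    for unit in ("B", "KB", "MB", "GB", "TB"):
        if n < 1024:
            return f"{n:.1f} {unit}"
        n /= 1024
    return f"{n:.1f} PB"

def list_models(host: str = "http://localhost:11434") -> list[tuple[str, str]]:
    """Return (name, size) pairs for every locally installed model."""
    resp = requests.get(f"{host}/api/tags")
    resp.raise_for_status()
    return [(m["name"], human_size(m["size"])) for m in resp.json()["models"]]
```

With the server running, `list_models()` gives roughly the same view as `ollama list`, which makes it handy for dashboards or pre-flight checks in scripts.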

10.2 OpenAI-Compatible Endpoints

| Endpoint | Method | Description |
| --- | --- | --- |
| /v1/chat/completions | POST | OpenAI Chat Completion compatible |
| /v1/completions | POST | OpenAI Completion compatible |
| /v1/models | GET | Model list (OpenAI format) |
| /v1/embeddings | POST | Embeddings (OpenAI format) |

10.3 API Call Examples

Generate (Completion):

# Basic generation
curl http://localhost:11434/api/generate -d '{
  "model": "llama3.1",
  "prompt": "Why is the sky blue?",
  "stream": false
}'

# Streaming (default)
curl http://localhost:11434/api/generate -d '{
  "model": "llama3.1",
  "prompt": "Write a haiku about coding",
  "options": {
    "temperature": 0.7,
    "num_predict": 100
  }
}'

# JSON format output
curl http://localhost:11434/api/generate -d '{
  "model": "llama3.1",
  "prompt": "List 3 programming languages as JSON",
  "format": "json",
  "stream": false
}'

Chat (Conversation):

curl http://localhost:11434/api/chat -d '{
  "model": "llama3.1",
  "messages": [
    {"role": "system", "content": "You are a helpful Korean assistant."},
    {"role": "user", "content": "Recommend tourist spots in Seoul."}
  ],
  "stream": false,
  "options": {
    "temperature": 0.8,
    "top_p": 0.9,
    "num_ctx": 4096,
    "num_predict": 512
  }
}'

Embed (Embeddings):

# Single text embedding
curl http://localhost:11434/api/embed -d '{
  "model": "nomic-embed-text",
  "input": "Hello, world!"
}'

# Multiple text embeddings
curl http://localhost:11434/api/embed -d '{
  "model": "nomic-embed-text",
  "input": ["Hello world", "Goodbye world"]
}'
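To show what the embedding vectors are for, here is a minimal sketch (assuming a local Ollama server with nomic-embed-text pulled; the `cosine` and `embed` helpers are our own names) that fetches embeddings via /api/embed and scores their similarity:

```python
import math

import requests

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def embed(texts: list[str], model: str = "nomic-embed-text") -> list[list[float]]:
    """Fetch embeddings for a batch of texts from a local Ollama server."""
    resp = requests.post(
        "http://localhost:11434/api/embed",
        json={"model": model, "input": texts},
    )
    resp.raise_for_status()
    return resp.json()["embeddings"]
```

With the server running, `cosine(*embed(["Hello world", "Goodbye world"]))` scores the pair; semantically related texts should score noticeably higher than unrelated ones, which is the basis of RAG retrieval.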

OpenAI-Compatible API:

# OpenAI format Chat Completion
curl http://localhost:11434/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "llama3.1",
    "messages": [
      {"role": "user", "content": "Hello!"}
    ],
    "temperature": 0.7,
    "max_tokens": 256
  }'

# Model list
curl http://localhost:11434/v1/models

Calling from Python:

import requests

# Generate API
response = requests.post("http://localhost:11434/api/generate", json={
    "model": "llama3.1",
    "prompt": "Explain Docker in Korean",
    "stream": False,
    "options": {
        "temperature": 0.7,
        "num_predict": 512,
    },
})
print(response.json()["response"])

# Chat API
response = requests.post("http://localhost:11434/api/chat", json={
    "model": "llama3.1",
    "messages": [
        {"role": "user", "content": "What is Kubernetes?"},
    ],
    "stream": False,
})
print(response.json()["message"]["content"])

# Using Ollama with OpenAI SDK
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:11434/v1",
    api_key="ollama",  # Ollama doesn't require an API key, any value works
)

response = client.chat.completions.create(
    model="llama3.1",
    messages=[
        {"role": "user", "content": "Explain Python's GIL."},
    ],
    temperature=0.7,
    max_tokens=512,
)
print(response.choices[0].message.content)

11. Ollama Parameters (Modelfile & API)

11.1 Modelfile Structure

A Modelfile defines an Ollama custom model. It has a structure similar to a Dockerfile.

# Specify base model (required)
FROM llama3.1:8b

# Parameter settings
PARAMETER temperature 0.7
PARAMETER top_p 0.9
PARAMETER top_k 40
PARAMETER num_ctx 4096
PARAMETER num_predict 512
PARAMETER repeat_penalty 1.1
PARAMETER stop "<|eot_id|>"
PARAMETER stop "<|end_of_text|>"

# System prompt
SYSTEM """
You are a friendly Korean AI assistant.
You provide accurate and concise answers, explaining with examples when needed.
"""

# Chat template (Go template syntax)
TEMPLATE """
{{- if .System }}<|start_header_id|>system<|end_header_id|>
{{ .System }}<|eot_id|>{{ end }}
{{- range .Messages }}<|start_header_id|>{{ .Role }}<|end_header_id|>
{{ .Content }}<|eot_id|>{{ end }}<|start_header_id|>assistant<|end_header_id|>
"""

# LoRA adapter (optional)
ADAPTER /path/to/lora-adapter.gguf

# License info (optional)
LICENSE """
Apache 2.0
"""

| Directive | Description | Required |
| --- | --- | --- |
| FROM | Base model (model name or GGUF file path) | Required |
| PARAMETER | Model parameter settings | Optional |
| TEMPLATE | Prompt template | Optional |
| SYSTEM | System prompt | Optional |
| ADAPTER | LoRA/QLoRA adapter path | Optional |
| LICENSE | License information | Optional |
| MESSAGE | Pre-set conversation history | Optional |

11.2 PARAMETER Options Detail

| Parameter | Type | Default | Range/Description |
| --- | --- | --- | --- |
| temperature | float | 0.8 | 0.0~2.0. Higher is more creative, lower is more deterministic |
| top_p | float | 0.9 | 0.0~1.0. Nucleus sampling probability threshold |
| top_k | int | 40 | 1~100. Consider only top k tokens |
| min_p | float | 0.0 | 0.0~1.0. Minimum probability filtering |
| num_predict | int | -1 | Maximum tokens to generate (-1: unlimited, -2: until context fills) |
| num_ctx | int | 2048 | Context window size (in tokens) |
| repeat_penalty | float | 1.1 | Repetition penalty (1.0 disables) |
| repeat_last_n | int | 64 | Repetition check range (0: disabled, -1: num_ctx) |
| seed | int | 0 | Random seed (0 means different results each time) |
| stop | string | - | Stop string (multiple can be specified) |
| num_gpu | int | auto | Number of layers to offload to GPU (0: CPU only) |
| num_thread | int | auto | Number of CPU threads |
| num_batch | int | 512 | Prompt processing batch size |
| mirostat | int | 0 | Mirostat sampling (0: disabled, 1: Mirostat, 2: Mirostat 2.0) |
| mirostat_eta | float | 0.1 | Mirostat learning rate |
| mirostat_tau | float | 5.0 | Mirostat target entropy |
| tfs_z | float | 1.0 | Tail-Free Sampling (1.0 disables) |
| typical_p | float | 1.0 | Locally Typical Sampling (1.0 disables) |
| use_mlock | bool | false | Lock model in memory (prevent swap) |
| num_keep | int | 0 | Number of tokens to keep during context recycling |
| penalize_newline | bool | true | Apply penalty to newline tokens |

11.3 Using Parameters in API

Pass parameters via the options field in API calls.

curl http://localhost:11434/api/chat -d '{
  "model": "llama3.1",
  "messages": [
    {"role": "user", "content": "Hello"}
  ],
  "options": {
    "temperature": 0.3,
    "top_p": 0.9,
    "top_k": 50,
    "num_ctx": 8192,
    "num_predict": 1024,
    "repeat_penalty": 1.2,
    "seed": 42,
    "stop": ["<|eot_id|>"]
  }
}'

12. Complete Ollama Environment Variables Reference

12.1 Server and Network

| Environment Variable | Default | Description |
| --- | --- | --- |
| OLLAMA_HOST | 127.0.0.1:11434 | Server bind address and port |
| OLLAMA_ORIGINS | None | CORS allowed origins (comma-separated) |
| OLLAMA_KEEP_ALIVE | 5m | Idle time before model unload (5m, 1h, -1=permanent) |
| OLLAMA_MAX_QUEUE | 512 | Maximum queue size (requests rejected when exceeded) |
| OLLAMA_NUM_PARALLEL | 1 | Concurrent requests per model |
| OLLAMA_MAX_LOADED_MODELS | 3 × GPU count (1 on CPU) | Maximum simultaneously loaded models |
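OLLAMA_KEEP_ALIVE accepts either a bare number of seconds or a duration string, with a negative value meaning the model is never unloaded. A small interpreter sketch of those semantics (our own helper for illustration, not part of Ollama):

```python
def keep_alive_seconds(value: str) -> float:
    """Interpret an OLLAMA_KEEP_ALIVE-style value as seconds.

    Bare integers are seconds; "s"/"m"/"h" suffixes scale accordingly;
    any negative number means the model stays loaded indefinitely.
    """
    units = {"s": 1, "m": 60, "h": 3600}
    v = value.strip()
    if v.lstrip("-").isdigit():
        n = int(v)
        return float("inf") if n < 0 else float(n)
    return float(v[:-1]) * units[v[-1]]
```

For example, the default "5m" unloads an idle model after 300 seconds, while "-1" (used in several configs below) pins it in memory.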

12.2 Storage and Paths

| Environment Variable | Default | Description |
| --- | --- | --- |
| OLLAMA_MODELS | OS default path | Model storage directory |
| OLLAMA_TMPDIR | System temp | Temporary file directory |
| OLLAMA_NOPRUNE | None | Disable unused blob cleanup at startup |

Default model storage paths by platform:

| OS | Default Path |
| --- | --- |
| macOS | ~/.ollama/models |
| Linux | /usr/share/ollama/.ollama/models |
| Windows | C:\Users\<user>\.ollama\models |

12.3 GPU and Performance

| Environment Variable | Default | Description |
| --- | --- | --- |
| OLLAMA_FLASH_ATTENTION | 0 | Enable Flash Attention (set to 1) |
| OLLAMA_KV_CACHE_TYPE | f16 | KV Cache quantization type (f16, q8_0, q4_0) |
| OLLAMA_GPU_OVERHEAD | 0 | VRAM to reserve per GPU (bytes) |
| OLLAMA_LLM_LIBRARY | auto | Force specific LLM library |
| CUDA_VISIBLE_DEVICES | All GPUs | NVIDIA GPU device numbers to use |
| ROCR_VISIBLE_DEVICES | All GPUs | AMD GPU device numbers to use |
| GPU_DEVICE_ORDINAL | All GPUs | GPU order to use |

12.4 Logging and Debug

| Environment Variable | Default | Description |
| --- | --- | --- |
| OLLAMA_DEBUG | 0 | Enable debug logging (set to 1) |
| OLLAMA_NOHISTORY | 0 | Disable readline history in interactive mode |

12.5 Context and Inference

| Environment Variable | Default | Description |
| --- | --- | --- |
| OLLAMA_CONTEXT_LENGTH | 4096 | Default context window size |
| OLLAMA_NO_CLOUD | 0 | Disable cloud features (set to 1) |
| HTTPS_PROXY / HTTP_PROXY | None | Proxy server settings |
| NO_PROXY | None | Proxy bypass hosts |

12.6 How to Set Environment Variables

macOS (launchctl):

# Set environment variables
launchctl setenv OLLAMA_HOST "0.0.0.0:11434"
launchctl setenv OLLAMA_MODELS "/Volumes/ExternalSSD/ollama/models"
launchctl setenv OLLAMA_FLASH_ATTENTION "1"
launchctl setenv OLLAMA_KV_CACHE_TYPE "q8_0"
launchctl setenv OLLAMA_NUM_PARALLEL "4"
launchctl setenv OLLAMA_KEEP_ALIVE "-1"

# Restart Ollama
brew services restart ollama

Linux (systemd):

# Create systemd service override
sudo systemctl edit ollama

# Add the following in the editor:
[Service]
Environment="OLLAMA_HOST=0.0.0.0:11434"
Environment="OLLAMA_MODELS=/data/ollama/models"
Environment="OLLAMA_FLASH_ATTENTION=1"
Environment="OLLAMA_KV_CACHE_TYPE=q8_0"
Environment="OLLAMA_NUM_PARALLEL=4"
Environment="OLLAMA_KEEP_ALIVE=-1"
Environment="OLLAMA_MAX_LOADED_MODELS=3"
Environment="CUDA_VISIBLE_DEVICES=0,1"

# Restart service
sudo systemctl daemon-reload
sudo systemctl restart ollama

Docker:

docker run -d --gpus=all \
  -e OLLAMA_HOST=0.0.0.0:11434 \
  -e OLLAMA_FLASH_ATTENTION=1 \
  -e OLLAMA_KV_CACHE_TYPE=q8_0 \
  -e OLLAMA_NUM_PARALLEL=4 \
  -e OLLAMA_KEEP_ALIVE=-1 \
  -v /data/ollama:/root/.ollama \
  -p 11434:11434 \
  --name ollama \
  ollama/ollama

13. Advanced Ollama Usage

13.1 Modelfile Writing Guide

Korean Assistant Model:

FROM llama3.1:8b

PARAMETER temperature 0.7
PARAMETER top_p 0.9
PARAMETER num_ctx 4096
PARAMETER num_predict 1024
PARAMETER repeat_penalty 1.15
PARAMETER stop "<|eot_id|>"
PARAMETER stop "<|end_of_text|>"

SYSTEM """
You are a Korean AI assistant well-versed in Korean culture and history.
You always respond accurately and kindly in Korean, using English technical terms alongside when needed.
You provide answers in a structured format.
"""

MESSAGE user Hello, please introduce yourself.
MESSAGE assistant Hello! I'm an AI assistant specialized in Korean. I can help with various topics including Korean culture, history, technology, and more. Feel free to ask me anything!

# Create model
ollama create korean-assistant -f ./Modelfile-korean

# Run
ollama run korean-assistant "Tell me about the three grand palaces of Seoul"

Code Review Model:

FROM qwen2.5-coder:7b

PARAMETER temperature 0.2
PARAMETER top_p 0.85
PARAMETER num_ctx 8192
PARAMETER num_predict 2048

SYSTEM """
You are an expert code reviewer. Analyze code for:
1. Bugs and potential issues
2. Performance improvements
3. Security vulnerabilities
4. Code style and best practices

Provide specific, actionable feedback with corrected code examples.
"""

Quantization Level Selection Guide:

| Quantization | Size Ratio | Quality | Speed | Recommended Use |
| --- | --- | --- | --- | --- |
| Q2_K | ~30% | Low | Very Fast | Testing only |
| Q3_K_M | ~37% | Fair | Fast | Memory-constrained |
| Q4_0 | ~42% | Good | Fast | General use (default) |
| Q4_K_M | ~45% | Good+ | Fast | General use (recommended) |
| Q5_K_M | ~53% | Great | Medium | Quality-focused |
| Q6_K | ~62% | Excellent | Medium | High quality required |
| Q8_0 | ~80% | Best | Slow | Near-original quality |
| F16 | 100% | Original | Slow | Baseline/benchmark |
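The size ratios above follow directly from bits per weight: a quantized file is roughly parameters × bits / 8 bytes, plus overhead for quantization scales and metadata. A back-of-the-envelope sketch (our own helper; the bits-per-weight figures are approximate and real GGUF files run slightly larger):

```python
def est_size_gb(params_billions: float, bits_per_weight: float) -> float:
    """Rough quantized model size in GB: parameters x bits / 8."""
    return params_billions * bits_per_weight / 8

# Llama 3.1 8B at common quantization levels (approximate bits per weight)
for name, bpw in [("Q4_K_M", 4.85), ("Q5_K_M", 5.7), ("Q8_0", 8.5), ("F16", 16.0)]:
    print(f"{name}: ~{est_size_gb(8, bpw):.1f} GB")
```

This lines up with the table: an 8B model lands around 5 GB at Q4_K_M and 16 GB at F16, before KV Cache is added on top.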

13.2 GPU Acceleration Setup

NVIDIA GPU:

# Check NVIDIA driver
nvidia-smi

# Use specific GPU only
CUDA_VISIBLE_DEVICES=0 ollama serve

# Multi-GPU
CUDA_VISIBLE_DEVICES=0,1 ollama serve

AMD GPU (ROCm):

# Check ROCm driver
rocm-smi

# Specify GPU
ROCR_VISIBLE_DEVICES=0 ollama serve

Apple Silicon (Metal):

On macOS, Metal GPU acceleration is automatically enabled. No separate configuration needed.

# Check GPU usage (Processor column in ollama ps)
ollama ps
# NAME           ID            SIZE    PROCESSOR     UNTIL
# llama3.1:8b    af2e33d4e25   6.7 GB  100% GPU      4 minutes from now

13.3 Docker Deployment

# docker-compose.yaml
version: '3.8'

services:
  ollama:
    image: ollama/ollama
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all
              capabilities: [gpu]
    ports:
      - '11434:11434'
    volumes:
      - ollama_data:/root/.ollama
    environment:
      - OLLAMA_HOST=0.0.0.0:11434
      - OLLAMA_FLASH_ATTENTION=1
      - OLLAMA_KV_CACHE_TYPE=q8_0
      - OLLAMA_NUM_PARALLEL=4
      - OLLAMA_KEEP_ALIVE=24h
    restart: unless-stopped
    healthcheck:
      test: ['CMD', 'curl', '-f', 'http://localhost:11434/api/version']
      interval: 30s
      timeout: 5s
      retries: 3

  # Model initialization (optional)
  ollama-init:
    image: curlimages/curl:latest
    depends_on:
      ollama:
        condition: service_healthy
    entrypoint: >
      sh -c "
        curl -s http://ollama:11434/api/pull -d '{\"name\": \"llama3.1:8b\"}' &&
        curl -s http://ollama:11434/api/pull -d '{\"name\": \"nomic-embed-text\"}'
      "

volumes:
  ollama_data:

13.4 Multimodal Model Usage

# Run LLaVA model
ollama run llava "What's in this image? /path/to/photo.jpg"

# Llama 3.2 Vision
ollama run llama3.2-vision "Describe this image in Korean. /path/to/image.png"

Calling from Python with a base64-encoded image:

import requests
import base64

# Encode image to base64
with open("image.jpg", "rb") as f:
    image_base64 = base64.b64encode(f.read()).decode("utf-8")

response = requests.post("http://localhost:11434/api/chat", json={
    "model": "llava",
    "messages": [
        {
            "role": "user",
            "content": "What's in this image?",
            "images": [image_base64],
        }
    ],
    "stream": False,
})
print(response.json()["message"]["content"])

13.5 Tool Calling / Function Calling

Ollama supports OpenAI-compatible Tool Calling.

from openai import OpenAI
import json

client = OpenAI(base_url="http://localhost:11434/v1", api_key="ollama")

tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Get current weather for a city",
            "parameters": {
                "type": "object",
                "properties": {
                    "city": {"type": "string", "description": "City name"},
                    "unit": {
                        "type": "string",
                        "enum": ["celsius", "fahrenheit"],
                    },
                },
                "required": ["city"],
            },
        },
    }
]

response = client.chat.completions.create(
    model="llama3.1",
    messages=[
        {"role": "user", "content": "What's the current weather in Seoul?"}
    ],
    tools=tools,
    tool_choice="auto",
)

message = response.choices[0].message
if message.tool_calls:
    for tool_call in message.tool_calls:
        print(f"Function: {tool_call.function.name}")
        print(f"Arguments: {tool_call.function.arguments}")

Part 3: Comparison and Practice

14. vLLM vs Ollama Comparison

14.1 Comprehensive Comparison Table

| Item | vLLM | Ollama |
| --- | --- | --- |
| Primary Use | Production API serving, high-throughput inference | Local development, prototyping, personal use |
| Engine | Custom engine (PagedAttention) | llama.cpp |
| Model Format | HF Safetensors, AWQ, GPTQ, FP8 | GGUF (quantized) |
| API | OpenAI compatible | Native + OpenAI compatible |
| Install Difficulty | Medium (Python/CUDA env required) | Very Easy (single binary) |
| GPU Required | Nearly essential (NVIDIA/AMD) | Optional (runs on CPU) |
| Multi-GPU | TP + PP (up to hundreds of GPUs) | Auto-distributed (limited) |
| Concurrency | Hundreds~thousands of requests | Default 1~4 parallel |
| Quantization | AWQ, GPTQ, FP8, BnB | GGUF Q2~Q8, F16 |
| Continuous Batching | Supported | Not supported (llama.cpp limitation) |
| PagedAttention | Core technology | Not supported |
| Prefix Caching | Supported (automatic) | Not supported |
| LoRA Serving | Multi-LoRA concurrent serving | Single LoRA |
| Structured Output | JSON Schema, Regex, Grammar | JSON mode |
| Speculative Decoding | Supported (Draft model, N-gram) | Not supported |
| Streaming | Supported | Supported |
| Docker Deployment | Official image (GPU) | Official image (CPU/GPU) |
| Kubernetes | Official guide + Production Stack | Community Helm Chart |
| Memory Efficiency | Very high (less than 4% waste) | High (GGUF quantization) |
| License | Apache 2.0 | MIT |

14.2 Throughput Comparison (Llama 3.1 8B, RTX 4090)

| Concurrent Users | vLLM (tokens/s) | Ollama (tokens/s) | Ratio |
| --- | --- | --- | --- |
| 1 | ~140 | ~65 | 2.2x |
| 5 | ~500 | ~120 | 4.2x |
| 10 | ~800 | ~150 | 5.3x |
| 50 | ~1,200 | ~150 | 8.0x |
| 100 | ~1,500 | ~150 (queued) | 10.0x |

In Red Hat's benchmark, vLLM showed 793 TPS vs Ollama 41 TPS on the same hardware -- a 19x difference. This varies depending on concurrent requests, batch size, and model size.


15. Performance Benchmarks

15.1 Throughput Comparison

| Metric | vLLM | Ollama | Notes |
| --- | --- | --- | --- |
| Single Request TPS | 100~140 tok/s | 50~70 tok/s | RTX 4090, Llama 3.1 8B |
| 10 Concurrent Total TPS | 700~900 tok/s | 120~200 tok/s | Continuous Batching effect |
| 50 Concurrent Total TPS | 1,000~1,500 tok/s | ~150 tok/s | Ollama queues requests |
| Batch Inference (1K prompts) | 2,000~3,000 tok/s | Not supported | vLLM offline inference |

15.2 Latency Comparison

| Metric | vLLM | Ollama | Notes |
| --- | --- | --- | --- |
| TTFT (Time To First Token) | 50~200 ms | 100~500 ms | Varies by prompt length |
| TPOT (Time Per Output Token) | 7~15 ms | 15~25 ms | Single request basis |
| P99 Latency | 80~150 ms | 500~700 ms | 10 concurrent requests |
| Model Loading Time | 30~120 sec | 5~30 sec | GGUF loads faster |

15.3 Memory Usage Comparison (Llama 3.1 8B)

| Configuration | vLLM GPU Memory | Ollama GPU Memory | Notes |
| --- | --- | --- | --- |
| FP16 | ~16 GB | N/A | vLLM default |
| FP8 | ~9 GB | N/A | H100 only |
| AWQ 4-bit | ~5 GB | N/A | vLLM quantized |
| GPTQ 4-bit | ~5 GB | N/A | vLLM quantized |
| Q4_K_M (GGUF) | N/A | ~5.5 GB | Ollama default |
| Q5_K_M (GGUF) | N/A | ~6.2 GB | Higher quality |
| Q8_0 (GGUF) | N/A | ~9 GB | Best quantization quality |
| KV Cache included (4K ctx) | +0.5~2 GB | +0.5~1.5 GB | Proportional to sequences |
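The KV Cache row can be sanity-checked with arithmetic: per token, the cache stores one key and one value vector for every layer, i.e. 2 × layers × kv_heads × head_dim × bytes_per_element. A worked sketch using Llama 3.1 8B's published shape (32 layers, 8 KV heads via GQA, head dim 128):

```python
def kv_cache_bytes(layers: int, kv_heads: int, head_dim: int,
                   dtype_bytes: int, tokens: int) -> int:
    """Total KV Cache size: key + value, per layer, per token."""
    return 2 * layers * kv_heads * head_dim * dtype_bytes * tokens

# Llama 3.1 8B, fp16 cache (2 bytes/element), 4K context
per_token = kv_cache_bytes(32, 8, 128, 2, 1)     # 131072 bytes = 128 KiB/token
full_ctx = kv_cache_bytes(32, 8, 128, 2, 4096)   # 512 MiB for one sequence
print(per_token, full_ctx / 2**20)
```

One 4K-token sequence thus costs about 0.5 GB in fp16, consistent with the table's "+0.5~2 GB" once multiple concurrent sequences (or a q8_0/q4_0 cache type, which halves or quarters it) are factored in.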

16. Use Case Recommendations

16.1 Individual Developer Local Environment

Recommended: Ollama

# Install and use immediately
ollama run llama3.1

# VS Code + Continue extension integration
# Set Ollama endpoint in settings.json

Reason: Simple installation, runs on CPU, supports macOS/Windows/Linux. Easy integration with IDE extensions.

16.2 Production API Serving

Recommended: vLLM

vllm serve meta-llama/Llama-3.1-70B-Instruct \
  --tensor-parallel-size 4 \
  --gpu-memory-utilization 0.90 \
  --max-num-seqs 256 \
  --enable-prefix-caching \
  --enable-chunked-prefill \
  --api-key ${API_KEY}

Reason: Overwhelming concurrent request handling with Continuous Batching. High memory efficiency with PagedAttention. Mature multi-GPU support, Kubernetes deployment, and monitoring integration.

16.3 Edge/IoT Environments

Recommended: Ollama + High Quantization

# Small model + high quantization
ollama run phi3:3.8b-mini-instruct-4k-q4_0

# Or Qwen 0.5B
ollama run qwen2.5:0.5b

Reason: Simple deployment as single binary. Runs on low-spec hardware with GGUF quantization. CPU-only inference support.

16.4 Large-Scale Batch Inference

Recommended: vLLM Offline Inference

from vllm import LLM, SamplingParams

llm = LLM(
    model="meta-llama/Llama-3.1-8B-Instruct",
    tensor_parallel_size=2,
    gpu_memory_utilization=0.95,
)

# Process thousands of prompts at once
prompts = load_prompts_from_file("prompts.jsonl")  # 10,000+ prompts
sampling_params = SamplingParams(temperature=0.0, max_tokens=512)

outputs = llm.generate(prompts, sampling_params)
save_outputs(outputs, "results.jsonl")

Reason: Batch scheduling that maximizes GPU memory utilization. Efficiently processes thousands to tens of thousands of prompts.

16.5 RAG Pipeline

Both work -- choose based on situation:

# Ollama-based RAG (development/small-scale)
from langchain_ollama import OllamaLLM, OllamaEmbeddings

llm = OllamaLLM(model="llama3.1")
embeddings = OllamaEmbeddings(model="nomic-embed-text")

# vLLM-based RAG (production)
from langchain_openai import ChatOpenAI, OpenAIEmbeddings

llm = ChatOpenAI(
    base_url="http://vllm-server:8000/v1",
    api_key="token",
    model="meta-llama/Llama-3.1-8B-Instruct",
)

17. Request Tracing Integration

Tracking LLM requests in production environments is essential for debugging, auditing, and performance monitoring.

17.1 vLLM Request ID Tracking

vLLM automatically generates a request_id in its OpenAI API-compatible server. To pass a custom ID, use extra_body.

from openai import OpenAI
import uuid

client = OpenAI(base_url="http://localhost:8000/v1", api_key="token")

# Pass custom request_id
xid = str(uuid.uuid4())

response = client.chat.completions.create(
    model="meta-llama/Llama-3.1-8B-Instruct",
    messages=[{"role": "user", "content": "Hello"}],
    extra_headers={"X-Request-ID": xid},
)

print(f"XID: {xid}")
print(f"Response ID: {response.id}")

17.2 Ollama Request Tracking

Ollama's native API does not support a separate request ID, so handle it at the reverse proxy level.

import requests
import uuid

xid = str(uuid.uuid4())

response = requests.post(
    "http://localhost:11434/api/chat",
    headers={"X-Request-ID": xid},
    json={
        "model": "llama3.1",
        "messages": [{"role": "user", "content": "Hello"}],
        "stream": False,
    },
)

# Include xid in logging
import logging
logger = logging.getLogger(__name__)
logger.info(f"[xid={xid}] Response: {response.status_code}")

17.3 X-Request-ID Forwarding at API Gateway

NGINX Configuration:

# Log format including the request id
log_format combined_with_xid '$remote_addr - $remote_user [$time_local] '
    '"$request" $status $body_bytes_sent '
    '"$http_referer" "$http_user_agent" '
    'xid="$xid"';

upstream vllm_backend {
    server vllm-server:8000;
}

server {
    listen 80;

    location /v1/ {
        # Use the client-supplied X-Request-ID, falling back to
        # NGINX's built-in random $request_id when none is given
        set $xid $http_x_request_id;
        if ($xid = "") {
            set $xid $request_id;
        }

        proxy_pass http://vllm_backend;
        proxy_set_header X-Request-ID $xid;
        proxy_set_header Host $host;

        # Echo X-Request-ID in the response headers
        add_header X-Request-ID $xid always;

        # Include the request id in the access log
        access_log /var/log/nginx/vllm_access.log combined_with_xid;
    }
}

17.4 OpenTelemetry Integration

# vLLM + OpenTelemetry distributed tracing
from opentelemetry import trace
from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor

# Initialize Tracer
provider = TracerProvider()
exporter = OTLPSpanExporter(endpoint="http://jaeger:4317")
provider.add_span_processor(BatchSpanProcessor(exporter))
trace.set_tracer_provider(provider)
tracer = trace.get_tracer(__name__)

# Wrap LLM call as a Span
def call_llm(prompt: str, xid: str) -> str:
    with tracer.start_as_current_span("llm_inference") as span:
        span.set_attribute("xid", xid)
        span.set_attribute("model", "llama-3.1-8b")
        span.set_attribute("prompt_length", len(prompt))

        response = client.chat.completions.create(
            model="meta-llama/Llama-3.1-8B-Instruct",
            messages=[{"role": "user", "content": prompt}],
            extra_headers={"X-Request-ID": xid},
        )

        result = response.choices[0].message.content
        span.set_attribute("response_length", len(result))
        span.set_attribute("tokens_used", response.usage.total_tokens)

        return result

17.5 xid Usage Patterns in Logging

Python Example:

import logging
import uuid
from contextvars import ContextVar

# Manage xid with Context Variable
request_xid: ContextVar[str] = ContextVar("request_xid", default="")

class XIDFilter(logging.Filter):
    def filter(self, record):
        record.xid = request_xid.get("")
        return True

# Logger setup
handler = logging.StreamHandler()
handler.setFormatter(logging.Formatter(
    "%(asctime)s [%(levelname)s] [xid=%(xid)s] %(message)s"
))
handler.addFilter(XIDFilter())

logger = logging.getLogger("llm_service")
logger.addHandler(handler)
logger.setLevel(logging.INFO)

# Usage
async def handle_request(prompt: str):
    xid = str(uuid.uuid4())
    request_xid.set(xid)

    logger.info(f"Received prompt: {prompt[:50]}...")

    response = await call_llm(prompt, xid)

    logger.info(f"Generated {len(response)} chars")
    return {"xid": xid, "response": response}

Go Example:

package main

import (
    "context"
    "log/slog"
    "net/http"

    "github.com/google/uuid"
)

type contextKey string
const xidKey contextKey = "xid"

// XID Middleware
func xidMiddleware(next http.Handler) http.Handler {
    return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
        xid := r.Header.Get("X-Request-ID")
        if xid == "" {
            xid = uuid.New().String()
        }

        ctx := context.WithValue(r.Context(), xidKey, xid)
        w.Header().Set("X-Request-ID", xid)

        slog.Info("request received",
            "xid", xid,
            "method", r.Method,
            "path", r.URL.Path,
        )

        next.ServeHTTP(w, r.WithContext(ctx))
    })
}

// Ollama call function
func callOllama(ctx context.Context, prompt string) (string, error) {
    xid, _ := ctx.Value(xidKey).(string)

    slog.Info("calling ollama",
        "xid", xid,
        "prompt_len", len(prompt),
    )

    // ... Ollama API call logic producing the response text ...
    var response string

    slog.Info("ollama response received",
        "xid", xid,
        "response_len", len(response),
    )

    return response, nil
}

18. References

vLLM

Ollama

Papers and Technical Resources