- Authors
  - Youngju Kim (@fjvbn20031)
- Introduction
- What is KubeVirt?
- Architecture Deep Dive
- CRDs (Custom Resource Definitions)
- CDI (Containerized Data Importer)
- Networking
- Storage
- Live Migration
- Installation Guide
- Use Cases
- Conclusion
Introduction
Containers and virtual machines (VMs) are not substitutes but complements. VMs remain essential for legacy applications, Windows workloads, and scenarios requiring kernel-level customization. KubeVirt extends the Kubernetes API to manage VMs just like Pods.
What is KubeVirt?
KubeVirt is a Kubernetes-native VM orchestration solution. It extends the Kubernetes API so that VM lifecycles can be managed with kubectl.
Key Features
- Define VMs using Kubernetes CRDs (Custom Resource Definitions)
- Reuse existing Kubernetes infrastructure (networking, storage, monitoring)
- Containers and VMs coexist in the same cluster
- Built on the proven libvirt/QEMU/KVM virtualization stack
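In practice, these features mean day-to-day VM management looks like ordinary Kubernetes work. A sketch (assuming KubeVirt and the virtctl CLI are already installed, and a VirtualMachine named `my-vm` exists):

```shell
# List VM definitions and running instances, just like any other resource
kubectl get vms
kubectl get vmis

# Start and stop a VM via the virtctl helper
virtctl start my-vm
virtctl stop my-vm

# Open a serial console or VNC session to the guest
virtctl console my-vm
virtctl vnc my-vm
```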
CNCF Project Status
KubeVirt is a major project under the Cloud Native Computing Foundation (CNCF).
| Item | Details |
|---|---|
| CNCF Level | Incubating (applying for Graduation) |
| Adopters | 41+ organizations |
| CNCF Ranking | Top 20 project |
| GitHub Stars | 5,000+ |
| Key Adopters | Red Hat, NVIDIA, ARM, CoreWeave |
Architecture Deep Dive
The diagram below shows the overall KubeVirt architecture, spanning the Kubernetes control plane and the worker nodes.
+------------------------------------------------------------------+
| Kubernetes Control Plane |
| +------------+ +----------------+ +-------------------------+ |
| | API Server | | etcd | | Controller Manager | |
| +------+-----+ +----------------+ +-------------------------+ |
| | |
| +------v------------------+ |
| | virt-api | <-- API extension, validation |
| | (Deployment) | |
| +------+------------------+ |
| | |
| +------v------------------+ |
| | virt-controller | <-- Watches VM/VMI, manages lifecycle|
| | (Deployment) | |
| +-------------------------+ |
+------------------------------------------------------------------+
+------------------------------------------------------------------+
| Worker Node |
| +-------------------------+ |
| | virt-handler | <-- Node-level DaemonSet |
| | (DaemonSet) | |
| +------+------------------+ |
| | |
| +------v------------------+ +-------------------------------+ |
| | virt-launcher Pod | | virt-launcher Pod | |
| | +------------------+ | | +-------------------------+ | |
| | | libvirt | | | | libvirt | | |
| | | +------------+ | | | | +-------------------+ | | |
| | | | QEMU/KVM | | | | | | QEMU/KVM | | | |
| | | | (VM 1) | | | | | | (VM 2) | | | |
| | | +------------+ | | | | +-------------------+ | | |
| | +------------------+ | | +-------------------------+ | |
| +-------------------------+ +-------------------------------+ |
+------------------------------------------------------------------+
Component Details
virt-api
Serves as the API server extension point, handling all VM-related API requests.
- Admission Webhook for VM spec validation
- Provides Subresource APIs (start, stop, restart, migrate)
- VNC/Console proxy endpoints
virt-controller
A cluster-level Deployment that watches VM and VMI resources.
- Detects VM CRD changes and creates/deletes VMIs
- Requests virt-launcher Pod scheduling
- Orchestrates live migrations
- Synchronizes VM state
virt-handler
A node-level DaemonSet that manages actual VM lifecycles.
- Communicates directly with libvirt
- Generates and applies VM domain XML
- Configures node-level networking
- Handles device hotplug
virt-launcher
A Pod created for each individual VM.
- Runs the libvirt daemon
- Manages QEMU/KVM processes
- Provides VM console access
- Cleans up VM on Pod termination
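The split between these components is easy to observe on a live cluster. A quick sanity check (assuming KubeVirt is installed in the `kubevirt` namespace) might look like:

```shell
# Control-plane components run as Deployments in the kubevirt namespace
kubectl get deployments -n kubevirt

# virt-handler runs as a DaemonSet, one Pod per worker node
kubectl get daemonset virt-handler -n kubevirt

# Each running VM gets its own virt-launcher Pod in the VM's namespace
kubectl get pods -l kubevirt.io=virt-launcher -A
```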
CRDs (Custom Resource Definitions)
VirtualMachine (VM)
The VM CRD is a declarative definition of a virtual machine, similar to how a Deployment relates to Pods.
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: my-vm
  namespace: default
spec:
  running: true
  template:
    metadata:
      labels:
        app: my-vm
    spec:
      domain:
        cpu:
          cores: 2
        memory:
          guest: 4Gi
        devices:
          disks:
            - name: rootdisk
              disk:
                bus: virtio
            - name: cloudinitdisk
              disk:
                bus: virtio
          interfaces:
            - name: default
              masquerade: {}
      networks:
        - name: default
          pod: {}
      volumes:
        - name: rootdisk
          dataVolume:
            name: my-vm-dv
        - name: cloudinitdisk
          cloudInitNoCloud:
            userData: |
              #cloud-config
              hostname: my-vm
              ssh_authorized_keys:
                - ssh-rsa AAAA...
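Assuming the manifest above is saved as `my-vm.yaml` (a hypothetical filename), creating and reaching the VM could look like:

```shell
# Create the VirtualMachine object; spec.running: true starts it immediately
kubectl apply -f my-vm.yaml

# Watch the controller create the backing VMI and virt-launcher Pod
kubectl get vm,vmi,pods

# Connect to the guest's serial console (exit with Ctrl+])
virtctl console my-vm
```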
VirtualMachineInstance (VMI)
The VMI represents a running VM instance. Similar to a Pod, it reflects the actual running state.
apiVersion: kubevirt.io/v1
kind: VirtualMachineInstance
metadata:
  name: my-vmi
spec:
  domain:
    cpu:
      cores: 1
    memory:
      guest: 2Gi
    devices:
      disks:
        - name: rootdisk
          disk:
            bus: virtio
  volumes:
    - name: rootdisk
      containerDisk:
        image: quay.io/kubevirt/fedora-cloud-container-disk-demo:latest
VirtualMachineInstanceReplicaSet
Runs multiple replicas of identical VMIs.
apiVersion: kubevirt.io/v1
kind: VirtualMachineInstanceReplicaSet
metadata:
  name: my-vmirs
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web-vm
  template:
    metadata:
      labels:
        app: web-vm
    spec:
      domain:
        cpu:
          cores: 1
        memory:
          guest: 1Gi
        devices:
          disks:
            - name: rootdisk
              disk:
                bus: virtio
      volumes:
        - name: rootdisk
          containerDisk:
            image: quay.io/kubevirt/fedora-cloud-container-disk-demo:latest
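Because a VirtualMachineInstanceReplicaSet implements the scale subresource, it can be resized like a regular ReplicaSet. A sketch against the manifest above:

```shell
# Scale the replica set from 3 to 5 VMIs
kubectl scale vmirs my-vmirs --replicas=5

# Verify that new VMIs matching the selector were created
kubectl get vmis -l app=web-vm
```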
VirtualMachineInstanceMigration
Live migrates a VMI to another node.
apiVersion: kubevirt.io/v1
kind: VirtualMachineInstanceMigration
metadata:
  name: migration-job
spec:
  vmiName: my-vmi
CDI (Containerized Data Importer)
CDI is the component that imports VM disk images into Kubernetes PVCs.
DataVolume CRD
apiVersion: cdi.kubevirt.io/v1beta1
kind: DataVolume
metadata:
  name: ubuntu-dv
spec:
  source:
    http:
      url: 'https://cloud-images.ubuntu.com/jammy/current/jammy-server-cloudimg-amd64.img'
  storage:
    accessModes:
      - ReadWriteOnce
    resources:
      requests:
        storage: 10Gi
    storageClassName: local-path
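An import can be monitored through the DataVolume's phase. Assuming the manifest above is saved as `ubuntu-dv.yaml` (a hypothetical filename), a check might look like:

```shell
# Create the DataVolume and watch CDI pull the image into a PVC
kubectl apply -f ubuntu-dv.yaml
kubectl get dv ubuntu-dv
# Phase progresses toward Succeeded while the importer Pod runs

# The resulting PVC carries the imported disk
kubectl get pvc ubuntu-dv
```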
Supported Image Sources
CDI can import disk images from multiple sources.
+--------------------+       +-------+       +--------------------+
| HTTP/HTTPS URL     | ----> |       |       |                    |
+--------------------+       |       |       |                    |
+--------------------+       |       |       |  PersistentVolume  |
| Container Registry | ----> |  CDI  | ----> |  Claim (PVC)       |
+--------------------+       |       |       |                    |
+--------------------+       |       |       |                    |
| S3 Bucket          | ----> |       |       |                    |
+--------------------+       |       |       |                    |
+--------------------+       |       |       |                    |
| oVirt / VMware     | ----> |       |       |                    |
+--------------------+       +-------+       +--------------------+
Import from Container Registry
apiVersion: cdi.kubevirt.io/v1beta1
kind: DataVolume
metadata:
  name: registry-dv
spec:
  source:
    registry:
      url: 'docker://quay.io/kubevirt/fedora-cloud-container-disk-demo:latest'
  storage:
    accessModes:
      - ReadWriteOnce
    resources:
      requests:
        storage: 5Gi
Import from S3
apiVersion: cdi.kubevirt.io/v1beta1
kind: DataVolume
metadata:
  name: s3-dv
spec:
  source:
    s3:
      url: 's3://my-bucket/images/rhel9.qcow2'
      secretRef: s3-credentials
  storage:
    accessModes:
      - ReadWriteOnce
    resources:
      requests:
        storage: 20Gi
Networking
Default Pod Network
By default, VMs attach to the Pod network. Masquerade mode, which NATs the VM behind the Pod's IP address, is the recommended binding.
spec:
  domain:
    devices:
      interfaces:
        - name: default
          masquerade: {}
  networks:
    - name: default
      pod: {}
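With masquerade mode the VM is reachable through its Pod's IP, so it can be exposed with an ordinary Kubernetes Service. For example, to reach SSH on the guest (a sketch assuming a VM named `my-vm`):

```shell
# Expose guest port 22 as a NodePort Service
virtctl expose vm my-vm --name my-vm-ssh --port 22 --type NodePort

# Find the allocated node port, then ssh to <node-ip>:<node-port>
kubectl get svc my-vm-ssh
```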
Multiple Networks with Multus
Multus CNI allows attaching multiple network interfaces to VMs.
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: br-vlan100
spec:
  config: |
    {
      "cniVersion": "0.3.1",
      "type": "bridge",
      "bridge": "br-vlan100",
      "vlan": 100,
      "ipam": {
        "type": "host-local",
        "subnet": "10.100.0.0/24"
      }
    }
---
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: multi-net-vm
spec:
  running: true
  template:
    spec:
      domain:
        devices:
          interfaces:
            - name: default
              masquerade: {}
            - name: vlan100
              bridge: {}
      networks:
        - name: default
          pod: {}
        - name: vlan100
          multus:
            networkName: br-vlan100
Storage
VM disks are managed through PVCs (PersistentVolumeClaims).
Storage Options Comparison
| Storage Type | Use Case | Persistence |
|---|---|---|
| PVC | Persistent disks | Persistent |
| DataVolume | Image import via CDI | Persistent |
| containerDisk | Read-only boot images | Ephemeral |
| emptyDisk | Temporary data | Ephemeral |
| cloudInitNoCloud | Initial configuration | Ephemeral |
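These volume types can be freely combined in a single VM. A minimal sketch (assuming the demo containerDisk image used earlier) mixing an ephemeral boot disk with an emptyDisk scratch volume:

```shell
# A VMI with a containerDisk boot volume plus an emptyDisk scratch volume;
# both are ephemeral and vanish when the VMI is deleted.
cat <<EOF | kubectl apply -f -
apiVersion: kubevirt.io/v1
kind: VirtualMachineInstance
metadata:
  name: scratch-vmi
spec:
  domain:
    memory:
      guest: 1Gi
    devices:
      disks:
        - name: rootdisk
          disk:
            bus: virtio
        - name: scratch
          disk:
            bus: virtio
  volumes:
    - name: rootdisk
      containerDisk:
        image: quay.io/kubevirt/fedora-cloud-container-disk-demo:latest
    - name: scratch
      emptyDisk:
        capacity: 2Gi
EOF
```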
Live Migration
Live migration moves a running VM to another node with virtually no downtime.
Requirements
- PVC with ReadWriteMany (RWX) access mode
- Shared storage (Ceph, NFS, etc.)
- Identical CPU feature sets between source and destination nodes
Running a Migration
# Start migration via virtctl
virtctl migrate my-vm
# Or create a VirtualMachineInstanceMigration CR
cat <<EOF | kubectl apply -f -
apiVersion: kubevirt.io/v1
kind: VirtualMachineInstanceMigration
metadata:
  name: migration-my-vm
spec:
  vmiName: my-vm
EOF
# Check migration status
kubectl get vmim migration-my-vm -o yaml
Migration Policy Configuration
apiVersion: migrations.kubevirt.io/v1alpha1
kind: MigrationPolicy
metadata:
  name: high-priority-policy
spec:
  bandwidthPerMigration: 1Gi
  completionTimeoutPerGiB: 800
  allowAutoConverge: true
  selectors:
    namespaceSelector:
      matchLabels:
        migration-policy: high-priority
Installation Guide
Operator Installation
# Check latest version
export KUBEVIRT_VERSION=$(curl -s https://api.github.com/repos/kubevirt/kubevirt/releases/latest | grep tag_name | cut -d '"' -f 4)
# Deploy Operator
kubectl apply -f "https://github.com/kubevirt/kubevirt/releases/download/$KUBEVIRT_VERSION/kubevirt-operator.yaml"
# Create KubeVirt CR
kubectl apply -f "https://github.com/kubevirt/kubevirt/releases/download/$KUBEVIRT_VERSION/kubevirt-cr.yaml"
# Wait for installation
kubectl wait kv kubevirt -n kubevirt --for=condition=Available --timeout=300s
virtctl CLI Installation
# Download virtctl
curl -L -o virtctl \
"https://github.com/kubevirt/kubevirt/releases/download/$KUBEVIRT_VERSION/virtctl-$KUBEVIRT_VERSION-linux-amd64"
chmod +x virtctl
sudo mv virtctl /usr/local/bin/
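A quick check confirms the binary works. As an alternative (assuming the krew plugin manager is installed), virtctl is also distributed as a kubectl plugin:

```shell
# Verify the install; prints the client version
virtctl version

# Alternative: install virtctl as a kubectl plugin via krew
kubectl krew install virt
kubectl virt --help
```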
Use Cases
1. Hybrid Container/VM Workloads
Run microservices (containers) alongside legacy monoliths (VMs) in a single cluster.
2. Legacy Application Migration
Progressively move existing VMs from VMware or OpenStack to Kubernetes.
3. Windows Workloads
Run .NET Framework applications and Active Directory servers on Kubernetes clusters.
4. Development/Test Environments
Rapidly provision diverse OS environments for testing purposes.
Conclusion
KubeVirt bridges the gap between containers and VMs, enabling unified infrastructure management. As a CNCF project with an active community, it is also the core technology behind Red Hat OpenShift Virtualization.
In the next post, we will explore how the NVIDIA GPU Operator automates GPU management on Kubernetes.
Quiz: KubeVirt Knowledge Check
Q1. Which CRD is responsible for the declarative definition of a VM in KubeVirt?
A) VirtualMachineInstance B) VirtualMachine C) VirtualMachineInstanceReplicaSet D) VirtualMachineInstanceMigration
Answer: B) VirtualMachine - Similar to a Deployment, it declares the desired state of a VM.
Q2. What is the role of virt-launcher?
A) Cluster-level VM resource watching B) API request validation C) A per-VM Pod that runs libvirt/QEMU D) Node-level communication with libvirt
Answer: C) virt-launcher is created as one Pod per VM, managing libvirt and QEMU/KVM processes internally.
Q3. Which image source is NOT supported by CDI DataVolume?
A) HTTP/HTTPS URL B) Container Registry C) Git Repository D) S3 Bucket
Answer: C) Git repositories are not supported as CDI sources. HTTP, Registry, S3, oVirt, and VMware are supported.
Q4. What is a mandatory requirement for live migration?
A) ReadWriteOnce PVC B) ReadWriteMany PVC + shared storage C) emptyDisk volume D) containerDisk volume
Answer: B) Live migration requires shared storage (RWX) accessible from both source and destination nodes simultaneously.