- Authors
- Youngju Kim (@fjvbn20031)
- Introduction
- Why Must virt-handler Exist on Every Node
- Core Responsibilities Revealed by vm.go
- The Big Picture of Operation Flow
- How It Communicates with Launcher Pods
- Why Status Reflection Is So Complex
- Role in Networking and Storage
- Why Device Manager and Heartbeat Are Included
- It Even Handles Cgroup and Permission Adjustment
- There's Even a Separate Migration Target Controller
- Common Misconceptions
- What to Look at First When Debugging
- Conclusion
Introduction
If virt-controller is cluster-wide orchestration, virt-handler is the node-local agent that takes over the actual VM lifecycle on each node. In the Kubernetes world, its role is somewhat similar to that of kubelet. It doesn't replace kubelet; rather, think of it as a KubeVirt-specific execution coordinator that operates alongside kubelet for the new VMI workload type.
docs/components.md also describes virt-handler as a DaemonSet that runs one instance per host. Looking at the actual code in pkg/virt-handler/vm.go, you can see this description is quite accurate.
Why Must virt-handler Exist on Every Node
There are many tasks that cannot be properly handled by only looking at cluster-wide state:
- Reading actual libvirt domain state
- Communicating with QEMU inside launcher Pods
- Checking node-local devices, cgroups, SELinux, mount state
- Host-side auxiliary work for network and disk attach
- Migration source and target switching
These tasks require node-local context. Therefore KubeVirt places a virt-handler on each node.
Core Responsibilities Revealed by vm.go
Looking at the VirtualMachineController struct in pkg/virt-handler/vm.go, it becomes clear that this component is not a simple status watcher. It contains the following responsibilities:
- Launcher client management
- Container disk mounter
- Hotplug volume mounter
- Network setup integration
- Cgroup manager integration
- Device manager
- Heartbeat
- Migration proxy integration
- Domain informer and VMI informer handling
In other words, virt-handler is not "a controller that slightly updates VMI status" but rather a comprehensive agent that configures the VM execution environment on the node.
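To make the breadth of these responsibilities concrete, here is a simplified sketch of how such a controller struct hangs together. The field and interface names below are illustrative paraphrases of what pkg/virt-handler/vm.go bundles, not the exact upstream identifiers.

```go
package main

import "fmt"

// Illustrative paraphrase of the controller struct in
// pkg/virt-handler/vm.go; names are simplified, not the real API.

type LauncherClient interface{ SyncVirtualMachine() error }
type Mounter interface{ Mount(vmiUID string) error }

type VirtualMachineController struct {
	launcherClients      map[string]LauncherClient // one command client per VMI
	containerDiskMounter Mounter                   // container disk setup
	hotplugVolumeMounter Mounter                   // hotplug volume setup
	networkConfigurator  func(vmiUID string) error // netsetup integration
	deviceManagerRunning bool                      // device plugin controller state
	heartbeatInterval    int                       // seconds between node heartbeats
}

func main() {
	c := VirtualMachineController{
		launcherClients:   map[string]LauncherClient{},
		heartbeatInterval: 60,
	}
	fmt.Println(len(c.launcherClients), c.heartbeatInterval) // prints "0 60"
}
```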
The Big Picture of Operation Flow
Simplified sequentially, what virt-handler does is:
- Watch VMIs assigned to this node.
- Look at the launcher Pod and domain state corresponding to that VMI.
- Perform necessary storage, device, and network preparation.
- Communicate with the libvirt control plane inside the launcher Pod.
- Reflect domain state back to VMI status.
- Coordinate state transitions like termination, restart, migration.
The important thing here is that virt-handler is a translation channel between the Kubernetes API and the libvirt world.
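The translation step can be sketched as a decision function: given what the cluster wants and what libvirt reports, pick the next node-local action. This is a minimal toy, assuming hypothetical types; the real loop in vm.go is driven by informers and a work queue.

```go
package main

import "fmt"

// Toy shapes for "desired" and "observed" state. The real objects are
// far richer (phases, conditions, migration state, and so on).
type vmiSpec struct{ Running bool }
type domainState struct{ Exists, Running bool }

// decideAction maps desired VMI state plus observed domain state to the
// next node-local action, mirroring virt-handler's translation role.
func decideAction(vmi vmiSpec, dom domainState) string {
	switch {
	case vmi.Running && !dom.Exists:
		return "sync-and-start" // ask the launcher to define and start the domain
	case vmi.Running && dom.Exists && !dom.Running:
		return "start"
	case !vmi.Running && dom.Exists:
		return "shutdown"
	default:
		return "update-status" // reflect domain state back into VMI status
	}
}

func main() {
	fmt.Println(decideAction(vmiSpec{Running: true}, domainState{})) // prints "sync-and-start"
}
```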
How It Communicates with Launcher Pods
vm.go works through the launcherClients map together with the cmd-client and handler-launcher-com layers. In other words, virt-handler communicates with a command server inside the launcher Pod to deliver VM commands.
Typical request types that travel this path include:
- VM execution or synchronization
- Shutdown
- Pause or unpause
- Migration start
- Migration finalize
- Guest-related queries
The important thing is that virt-handler does not fork QEMU directly. Instead, it manipulates the libvirt domain manager through the command server inside the launcher. This layer separation is necessary for clean division between Pod sandbox internal execution and node-side orchestration.
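The request types listed above suggest the shape of the client interface that crosses this boundary. The sketch below is a hedged paraphrase: method names are invented for illustration, and the fake implementation stands in for the real per-VMI unix-socket client.

```go
package main

import "fmt"

// Illustrative command interface between virt-handler and the command
// server in the launcher Pod. Names paraphrase the cmd-client calls;
// they are not the exact upstream signatures.
type LauncherCmdClient interface {
	SyncVirtualMachine() error
	ShutdownVirtualMachine() error
	PauseVirtualMachine() error
	UnpauseVirtualMachine() error
	MigrateVirtualMachine() error
	FinalizeMigration() error
}

// fakeClient records the last command, showing the dispatch shape
// without a real socket.
type fakeClient struct{ lastCmd string }

func (f *fakeClient) SyncVirtualMachine() error     { f.lastCmd = "sync"; return nil }
func (f *fakeClient) ShutdownVirtualMachine() error { f.lastCmd = "shutdown"; return nil }
func (f *fakeClient) PauseVirtualMachine() error    { f.lastCmd = "pause"; return nil }
func (f *fakeClient) UnpauseVirtualMachine() error  { f.lastCmd = "unpause"; return nil }
func (f *fakeClient) MigrateVirtualMachine() error  { f.lastCmd = "migrate"; return nil }
func (f *fakeClient) FinalizeMigration() error      { f.lastCmd = "finalize"; return nil }

func main() {
	var c LauncherCmdClient = &fakeClient{}
	_ = c.SyncVirtualMachine()
	fmt.Println(c.(*fakeClient).lastCmd) // prints "sync"
}
```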
Why Status Reflection Is So Complex
virt-handler watches both the VMI informer and domain informer. The reason is simple:
- VMI contains the state the cluster desires.
- Domain contains the actual execution state that libvirt currently knows about.
When these two differ, virt-handler must reconcile them in between. For example:
- VMI should be Running but domain doesn't exist
- Domain is up but VMI status hasn't been reflected yet
- During migration, domain detection differs between source and target
Reconciling these discrepancies is the core of virt-handler.
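A toy version of the status-reflection half of this work: translate the observed domain state into the VMI phase the cluster should see. "Scheduled", "Running", and "Succeeded" are real VMI phases, but this mapping is heavily simplified; the real code covers many more states and reasons.

```go
package main

import "fmt"

// vmiPhaseFor is a simplified translation from observed libvirt domain
// state to a VMI phase. The real reflection logic in virt-handler also
// accounts for migration, failures, and guest shutdown reasons.
func vmiPhaseFor(domainExists bool, domainStatus string) string {
	switch {
	case !domainExists:
		return "Scheduled" // launcher is up but no domain defined yet
	case domainStatus == "Running":
		return "Running"
	case domainStatus == "Shutoff":
		return "Succeeded"
	default:
		return "Unknown"
	}
}

func main() {
	fmt.Println(vmiPhaseFor(true, "Running")) // prints "Running"
}
```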
Role in Networking and Storage
Many people think that since the Pod already received networking and storage, virt-handler doesn't have much to do. In reality, that's not the case.
Networking
vm.go connects with netsetup, domainspec, and network/vmispec. This means virt-handler is responsible for parts of guest NIC configuration, domain network spec, and host-side wiring.
Especially on the migration target side, it must determine whether the target Pod is actually ready to receive the guest. Network readiness status is very important at this point.
Storage
Looking at dependencies like containerDiskMounter, hotplugVolumeMounter, host-disk, and container-disk, you can see that virt-handler carries significant responsibility for node-side mount assistance and attach paths.
Just because a volume is attached to a Pod doesn't mean guest disk preparation is finished. You must bridge all the way to the disk paths and hotplug states that the guest understands.
Why Device Manager and Heartbeat Are Included
This is strong evidence that virt-handler is not a simple watcher.
Device Manager
Hardware acceleration and special devices require appropriate permissions and exposure. deviceManagerController manages these resource exposures and states.
Heartbeat
Per-node capability and health state must be reflected to the cluster for scheduling and execution to align. Heartbeat handles this role.
In other words, virt-handler also has the role of continuously reporting whether the node "is in a state capable of running VMs."
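The heartbeat idea can be sketched as a timer that periodically computes node metadata from local checks. The kubevirt.io/schedulable label is one KubeVirt actually maintains on nodes; everything else here (the function, the interval, the KVM check) is simplified for illustration.

```go
package main

import (
	"fmt"
	"time"
)

// heartbeatLabels computes the node labels a heartbeat tick would
// report, based on whether the node can run VMs (e.g. /dev/kvm exists).
func heartbeatLabels(kvmPresent bool) map[string]string {
	return map[string]string{
		"kubevirt.io/schedulable": fmt.Sprintf("%t", kvmPresent),
	}
}

func main() {
	// A short fake interval so the example terminates quickly; the real
	// heartbeat runs on the order of tens of seconds.
	ticker := time.NewTicker(10 * time.Millisecond)
	defer ticker.Stop()
	for i := 0; i < 2; i++ {
		<-ticker.C
		fmt.Println(heartbeatLabels(true)["kubevirt.io/schedulable"]) // prints "true" each tick
	}
}
```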
It Even Handles Cgroup and Permission Adjustment
vm.go contains comments about device permission handling in cgroup v2 unified mode. What you can learn here is that KubeVirt is not simply launching VMs with a few lines of Go code, but rather carefully handling the Linux host's resource constraints and security model.
For example, if permissions are wrong during the device probing process, QEMU startup itself can break. So virt-handler becomes a buffer layer that absorbs differences between runtimes and kernels.
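To see why cgroup mode matters here, consider what a single device rule looks like. In cgroup v1 it is a line written to devices.allow; in v2 unified mode the same intent becomes an eBPF device filter, which is why vm.go needs dedicated handling. The struct below is a sketch, not KubeVirt's actual type; 10:232 is the real char-device number for /dev/kvm.

```go
package main

import "fmt"

// deviceRule captures the intent of a cgroup device permission: allow
// this process tree to access one device node with given modes.
type deviceRule struct {
	Type         string // "c" for a character device
	Major, Minor int64
	Access       string // e.g. "rwm" = read, write, mknod
}

// v1Line renders the rule in cgroup v1 devices.allow syntax. Under
// cgroup v2 there is no such file; the rule must be compiled into an
// eBPF program attached to the cgroup instead.
func (r deviceRule) v1Line() string {
	return fmt.Sprintf("%s %d:%d %s", r.Type, r.Major, r.Minor, r.Access)
}

func main() {
	kvm := deviceRule{Type: "c", Major: 10, Minor: 232, Access: "rwm"}
	fmt.Println(kvm.v1Line()) // prints "c 10:232 rwm"
}
```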
There's Even a Separate Migration Target Controller
Looking at pkg/virt-handler/migration-target.go, a dedicated controller for migration targets exists separately. This means migration is not a simple "state copy" but requires separate preparation and detection on the target node.
On the target side:
- Target domain detection
- Domain ready timestamp recording
- Resource adjustment
- Finalize invocation
All of these tasks are needed. In other words, migration is neither solely virt-controller's responsibility nor solely virt-launcher's. virt-handler firmly takes on the intermediate orchestration.
Common Misconceptions
Misconception 1: virt-handler Is Just a Simple Status Reporter
No. It communicates with launchers, handles devices and volumes, and coordinates migration and network preparation.
Misconception 2: Node-Local State Is Barely Important
No. VM execution ultimately touches the host kernel and hypervisor. Accurate execution and recovery are difficult without node-local context.
Misconception 3: kubelet Alone Is Sufficient
kubelet, which only manages Pod lifecycle, and virt-handler, which understands VM domain lifecycle, have different roles.
What to Look at First When Debugging
- If the launcher Pod is Running but the guest isn't visible, check virt-handler logs.
- If the migration target isn't ready, check migration-target side events and state.
- For hotplug or device issues, check node-local mount, cgroup, and device manager traces.
- If VMI status and actual domain state are misaligned, check the domain informer reflection flow.
Conclusion
virt-handler is the center of KubeVirt's node execution plane. This component watches VMI and domain together, communicates with launcher Pods, and coordinates node-local networking, storage, devices, and migration state. Once Kubernetes places the Pod on a node, a significant portion of creating the VM-like lifecycle afterward falls to virt-handler.
In the next article, we will examine the opposite side that virt-handler delivers commands to -- how the virt-launcher, libvirt, and QEMU stack is organized inside the Pod.