Author: Youngju Kim (@fjvbn20031)

Contents
- Introduction
- How to Understand the virt-launcher Pod
- What cmd/virt-launcher/virt-launcher.go Does
- Why a Command Server Is Needed
- The Responsibility Scope Shown by the DomainManager Interface
- What libvirt Does vs What QEMU Does
- What Readiness and Socket Switching Mean
- Why Guest Agent and Event Loop Are Also in the Launcher
- Why the Pod Might Not Die Immediately on Termination
- Common Misconceptions
- Debugging Hints
- Conclusion
Introduction
Now let's look at KubeVirt's most critical execution point. virt-controller creates Pods, and virt-handler coordinates on the node. So where does the actual VM run? The answer is inside the virt-launcher Pod.
More precisely, the virt-launcher Pod is an execution shell for a single VM, and within it libvirt and QEMU actually run the guest. cmd/virt-launcher/virt-launcher.go and pkg/virt-launcher/virtwrap/manager.go are the center of this execution layer.
How to Understand the virt-launcher Pod
Many beginner explanations end with "virt-launcher launches the VM." But a more accurate explanation is:
- The virt-launcher Pod provides the namespace and cgroup boundaries needed for VM execution.
- The virt-launcher process inside the Pod prepares a command server and a domain manager.
- A libvirt connection is created.
- A QEMU domain is defined and started.
- Events, the guest agent, and stats continue to be monitored afterward.
In other words, virt-launcher is not a simple container entrypoint but the local runtime of the VM execution control plane.
What cmd/virt-launcher/virt-launcher.go Does
Looking at this entry point file, the initialization sequence is quite clear.
1. Register libvirt Event Implementation
At process start, libvirt.EventRegisterDefaultImpl() is called. This is needed for subsequent domain event monitoring.
2. Initialize Various Directories
initializeDirs prepares various directories needed for the guest runtime, including cloud-init, ignition, container disk, hotplug disk, secret, config map, and downward metrics.
An important fact is already visible here: VM booting is not simply launching a single QEMU binary. It is accompanied by preparation work that organizes multiple disk sources and metadata channels.
3. Create libvirt Connection
createLibvirtConnection connects to qemu:///system by default, and in non-root mode uses the session URI instead. In other words, virt-launcher doesn't simply link the libvirt library -- it attaches as a client to the actual libvirt control plane to define domains.
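The system-vs-session choice fits in a tiny helper. launcherLibvirtURI is a hypothetical name; the real selection logic in createLibvirtConnection is more involved, but the root/non-root distinction is the core of it.

```go
package main

import "fmt"

// launcherLibvirtURI sketches the connection-URI choice: root launchers
// attach to the system libvirt instance, non-root launchers to a session
// instance. Only the distinction is modeled here, not KubeVirt's full logic.
func launcherLibvirtURI(nonRoot bool) string {
	if nonRoot {
		return "qemu:///session"
	}
	return "qemu:///system"
}

func main() {
	fmt.Println(launcherLibvirtURI(false)) // qemu:///system
	fmt.Println(launcherLibvirtURI(true))  // qemu:///session
}
```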
4. Start Command Server
startCmdServer launches a unix socket-based command server. virt-handler calls the domain manager on the launcher side through this socket.
5. Start Domain Event Monitoring
startDomainEventMonitoring runs the libvirt event loop and starts the notifier. Guest state changes, termination, and agent connectivity are reflected here.
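Reduced to channels, the monitoring loop looks like the sketch below. DomainEvent and startEventMonitor are illustrative stand-ins for libvirt's callback registration and KubeVirt's notifier, not real KubeVirt types.

```go
package main

import "fmt"

// DomainEvent stands in for a libvirt domain lifecycle event.
type DomainEvent struct {
	Domain string
	State  string // e.g. "Running", "Shutoff", "AgentConnected"
}

// startEventMonitor drains events and invokes a notifier callback, the way
// startDomainEventMonitoring feeds libvirt callbacks into the notifier.
// The returned channel closes once the event stream ends.
func startEventMonitor(events <-chan DomainEvent, notify func(DomainEvent)) <-chan struct{} {
	done := make(chan struct{})
	go func() {
		defer close(done)
		for ev := range events {
			notify(ev)
		}
	}()
	return done
}

func main() {
	events := make(chan DomainEvent, 3)
	events <- DomainEvent{"vmi-a", "Running"}
	events <- DomainEvent{"vmi-a", "AgentConnected"}
	events <- DomainEvent{"vmi-a", "Shutoff"}
	close(events)

	var seen []string
	done := startEventMonitor(events, func(ev DomainEvent) {
		seen = append(seen, ev.State)
	})
	<-done
	fmt.Println(seen) // [Running AgentConnected Shutoff]
}
```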
Why a Command Server Is Needed
This is a key point for understanding KubeVirt's architecture. virt-handler is a node agent, but QEMU is inside the launcher Pod. The two processes are in different isolation boundaries, so they cannot make direct function calls. Therefore KubeVirt places a command server and client layer.
The flow is roughly:
- virt-handler reconciles a VMI.
- virt-handler sends a request to the socket inside the launcher Pod.
- The command server calls a DomainManager method.
- LibvirtDomainManager defines, starts, pauses, or migrates the domain through the libvirt API.
Thanks to this architecture, cluster node-side orchestration and Pod-internal hypervisor control are separated.
The Responsibility Scope Shown by the DomainManager Interface
Looking at the DomainManager interface in pkg/virt-launcher/virtwrap/manager.go, it becomes clear that the launcher is not just responsible for boot:
Key methods include:
- SyncVMI
- PauseVMI
- UnpauseVMI
- KillVMI
- DeleteVMI
- MigrateVMI
- PrepareMigrationTarget
- FinalizeVirtualMachineMigration
- HotplugHostDevices
- GetDomainStats
- GuestPing
In other words, the launcher is a local hypervisor adapter responsible for the entire guest lifecycle.
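The interface shape makes the adapter role concrete. Below is an abridged sketch: the real interface in pkg/virt-launcher/virtwrap/manager.go takes VMI objects and option structs, which are simplified to strings here so the sketch compiles on its own.

```go
package main

import "fmt"

// DomainManager, abridged. The real signatures take *v1.VirtualMachineInstance
// and option structs; strings keep this sketch self-contained.
type DomainManager interface {
	SyncVMI(vmi string) error
	PauseVMI(vmi string) error
	UnpauseVMI(vmi string) error
	KillVMI(vmi string) error
	DeleteVMI(vmi string) error
	MigrateVMI(vmi string) error
	GuestPing(domain string) error
}

// fakeManager records calls instead of touching libvirt. Any hypervisor
// adapter satisfying the interface can sit behind the command server,
// which is what makes the launcher testable in isolation.
type fakeManager struct{ calls []string }

func (f *fakeManager) SyncVMI(v string) error    { f.calls = append(f.calls, "Sync:"+v); return nil }
func (f *fakeManager) PauseVMI(v string) error   { f.calls = append(f.calls, "Pause:"+v); return nil }
func (f *fakeManager) UnpauseVMI(v string) error { f.calls = append(f.calls, "Unpause:"+v); return nil }
func (f *fakeManager) KillVMI(v string) error    { f.calls = append(f.calls, "Kill:"+v); return nil }
func (f *fakeManager) DeleteVMI(v string) error  { f.calls = append(f.calls, "Delete:"+v); return nil }
func (f *fakeManager) MigrateVMI(v string) error { f.calls = append(f.calls, "Migrate:"+v); return nil }
func (f *fakeManager) GuestPing(d string) error  { f.calls = append(f.calls, "Ping:"+d); return nil }

func main() {
	var m DomainManager = &fakeManager{}
	_ = m.SyncVMI("vmi-a")
	_ = m.PauseVMI("vmi-a")
	fmt.Println(m.(*fakeManager).calls) // [Sync:vmi-a Pause:vmi-a]
}
```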
What libvirt Does vs What QEMU Does
These two are often lumped together, but their roles differ.
libvirt
- Provides domain definition and management API
- Maintains XML-based domain model
- Provides migration API
- Provides stats and event collection interfaces
QEMU
- Actual guest CPU, memory, disk, NIC emulation
- Execution engine combined with KVM
- virtio device implementations
KubeVirt generally uses libvirt as the primary control interface, letting libvirt internally manage QEMU.
What Readiness and Socket Switching Mean
markReady renames the uninitialized socket name to the actual socket name. This small action is important because it allows virt-handler to distinguish whether the launcher is still initializing or is actually ready to receive commands.
In other words, the launcher does not assume "VM control ready" just because the Pod is Running. Command plane readiness exists separately.
Why Guest Agent and Event Loop Are Also in the Launcher
The responsibility of handling guest internal information also lies heavily with the launcher. Agent poller, guest info, filesystem, user, and time sync functions are visible in the manager.go interface and virt-launcher.go initialization path.
This is natural: the launcher sits closest to the guest communication channel exposed by libvirt and QEMU. It makes more sense for the launcher to handle this than for the node agent virt-handler to access it directly.
Why the Pod Might Not Die Immediately on Termination
As docs/components.md also mentions, when Kubernetes attempts to terminate the virt-launcher Pod, if the guest has not yet shut down, the launcher forwards the signal to the guest side and waits for graceful shutdown as long as possible.
This behavior is very important. Otherwise, Pod termination would be almost identical to guest forced termination, increasing the risk of data corruption. KubeVirt reveals at this point that Pod lifecycle and VM lifecycle are not exactly the same.
Common Misconceptions
Misconception 1: virt-launcher Is Just a Regular Container Without Sidecars
No. This Pod is the VM execution boundary, and internally contains libvirt event loop, command server, and guest agent related logic.
Misconception 2: QEMU Is Directly Launched by virt-handler
No. virt-handler calls the launcher's command server, and the actual domain definition and execution is performed by the domain manager inside the launcher through libvirt.
Misconception 3: If Pod Is Running, the Guest Is Already Normal
Not necessarily. Launcher preparation, libvirt connection, domain definition, and agent initialization are separate stages.
Debugging Hints
- If the launcher Pod is alive but the guest isn't starting, check command server readiness and libvirt connection.
- If guest agent related information is empty, check the launcher-side poller and event notifier.
- If termination takes long, check guest shutdown delivery and grace period together.
- For migration issues, check the PrepareMigrationTarget, MigrateVMI, and FinalizeVirtualMachineMigration paths separately.
Conclusion
virt-launcher is the layer closest to execution in KubeVirt. This Pod provides isolation boundaries for the VM, and the launcher process inside sets up libvirt connection, command server, event monitor, and guest agent poller. On top of that, LibvirtDomainManager controls the QEMU domain lifecycle. Ultimately, the most concrete substance of the phrase "VM runs on a Pod" is this launcher Pod's internal execution structure.
In the next article, we will look at what transformation process the VMI spec received by the launcher goes through to become libvirt domain XML and actual guest device configuration.