virt-controller Deep Dive: Creating VM Pods with Informers, Queues, and Reconcile

Author: Youngju Kim (@fjvbn20031)

Contents
- Introduction
- virt-controller Is Not a Single Controller
- Why So Many Informers
- The Typical Pattern Shown in watch/vmi/vmi.go
- How Launcher Pods Are Created
- Why the Controller Knows About Network and Storage Directly
- Why the Expectations Pattern Matters
- When Does Responsibility Pass to virt-handler
- Why There Can Be Two Pods During Migration
- The Full Orchestration Picture from application.go
- Common Misconceptions
- Symptoms and Locations Operators Should Check
- Conclusion
Introduction
The component closest to a "cluster-wide brain" in KubeVirt is virt-controller. While the actual VM execution happens on the node side with virt-handler and virt-launcher, the center that orchestrates when to create which Pod and when to advance which state to the next stage is virt-controller.
This article focuses on pkg/virt-controller/watch/application.go, pkg/virt-controller/watch/vmi/vmi.go, and pkg/virt-controller/watch/migration/migration.go to examine what informers this controller connects, what queues it runs, and how it creates launcher Pods.
virt-controller Is Not a Single Controller
The name alone might suggest a single process with a single reconcile loop, but in reality multiple watchers and controllers are bundled together. Looking at application.go, the following families are co-located:
- VMI controller
- VM controller
- Migration controller
- Replica set, pool, clone controllers
- Snapshot, export, backup related controllers
- Node, workload update, disruption budget related controllers
In other words, virt-controller is KubeVirt's cluster-wide orchestration collection.
Why So Many Informers
Looking at the NewController signature, even just the VMI controller receives multiple informers:
- VMI informer
- VM informer
- Pod informer
- PVC informer
- Migration informer
- Storage class informer
- DataVolume, CDI informer
- KubeVirt CR informer
The reason for so many is that a VMI's desired state is not determined by VMI spec alone.
- Is storage ready?
- Does a Pod already exist?
- Is migration in progress?
- Which feature gates has the cluster config enabled?
- How should network annotations be generated?
The controller must synthesize all of this surrounding state to accurately create the launcher Pod.
The Typical Pattern Shown in watch/vmi/vmi.go
The VMI controller closely resembles the standard Kubernetes controller pattern.
1. Receive Events
Register event handlers on VMI, Pod, DataVolume, PVC, VM, and KubeVirt objects.
2. Enqueue Keys
Actual work is not done directly in event handlers but is passed to a queue.
3. Sync Loop Reads Current State
Reads VMI, Pod, PVC, and migration state from indexers and stores, comparing desired state with current state.
4. Create Necessary Pods or Status
Launcher Pod creation, annotation updates, network status updates, and expectation management happen here.
The advantage of this pattern is that it mitigates race conditions and enables idempotent reconciliation.
How Launcher Pods Are Created
The VMI controller does not manually assemble Pod YAML. It calls methods like RenderLaunchManifest through a templateService. In other words, the controller makes the "a Pod is needed" decision, and the template service renders the actual launcher Pod spec.
This separation is quite important:
- The controller focuses on state judgment
- The template service focuses on Pod spec composition
Thanks to this, variations like migration Pods and hotplug attachment Pods can be handled through separate rendering paths.
Why the Controller Knows About Network and Storage Directly
Many people ask here: "Don't network and storage attach at the node? Why does the controller have an annotation generator?"
The answer is pre-wiring work.
Looking at watch/vmi/vmi.go, the following dependencies exist:
- Network annotations generator
- Storage annotations generator
- Network status updater
- Network spec validator
- Migration evaluator
The controller is not simply creating Pods -- it also prepares annotations and state so that when the Pod arrives at a node, KubeVirt and the CNI side have the context they need.
Why the Expectations Pattern Matters
When reading Kubernetes controllers, you frequently encounter the expectations pattern. KubeVirt is no different.
The VMI controller has:
- Pod expectations
- VMI expectations
- PVC expectations
This pattern solves the stale-cache problem: "I just sent a Pod creation request, so don't act again before the informer cache reflects it." It prevents duplicate creation when the controller re-syncs and reads a cache that does not yet contain the object it just created.
In large-scale clusters, this pattern is very important. Because a single VMI has more complex surrounding resources than a regular Pod, duplicate creation causes much greater confusion.
When Does Responsibility Pass to virt-handler
docs/components.md explains that responsibility shifts to virt-handler after the Pod is scheduled to a node and nodeName is determined. The actual mental model is almost exactly this:
- virt-controller creates the launcher Pod.
- Kubernetes schedules the Pod to a specific node.
- The controller reflects Pod and VMI state.
- The node's virt-handler sees that VMI and takes over VM launch.
In other words, virt-controller is a handoff coordinator between the cluster scheduler and the node agent.
Why There Can Be Two Pods During Migration
This is what distinguishes KubeVirt's controller from a normal Pod controller. A VMI normally appears as "one VMI, one launcher Pod," but during migration a target Pod is additionally created.
So the migration controller side includes:
- Target Pod creation
- Source and target state tracking
- Concurrent migration limits
- Unschedulable timeout management
Looking at pkg/virt-controller/watch/migration/migration.go, you see mechanisms like pending timeout, priority queue, policy store, and handoff map. This is because migration is not a simple Pod reschedule but a complex operation moving a live guest.
The Full Orchestration Picture from application.go
Looking at the VirtControllerApp struct, you get a sense of what KubeVirt manages cluster-wide:
- Informer factory
- Various resource informers
- Template service
- Cluster config
- Migration controller
- Backup, export, snapshot controllers
The important fact you can learn here is that virt-controller has a much broader role than just being a VM Pod creator. It is the central orchestration layer for the entire VM lifecycle.
Common Misconceptions
Misconception 1: virt-controller Directly Runs VMs
No. virt-controller creates Pods and coordinates state. The actual VM process launch happens on the node side.
Misconception 2: The Controller Only Needs to Watch VMIs
No. It must look at Pods, PVCs, migrations, cluster config, and CDI state together to accurately create launcher Pods.
Misconception 3: Migration Is Just Rescheduling a Pod
No. Since it must maintain live guest state, network, and disk visibility while preparing a target Pod, a separate migration controller is needed.
Symptoms and Locations Operators Should Check
VMI Exists but Launcher Pod Isn't Created
- virt-controller logs
- DataVolume or PVC readiness state
- Network spec validation errors
Migration Is Stuck in Pending for Too Long
- Whether target Pod scheduling is possible
- Migration controller timeout
- Cluster-wide parallel migration limits
Pod Exists but State Is Misaligned
- Informer cache reflection delay
- Expectation-related retries
- VMI status patch conflicts
Conclusion
virt-controller is KubeVirt's cluster-wide coordinator. This component reads the world around VMIs through various informers, creates launcher Pods through queue-based reconcile, and hands off responsibility to virt-handler at the appropriate time. The reason migration is complex also becomes apparent here -- a VMI is not a simple Pod but a virtualization workload that the controller creates by weaving together multiple resources.
In the next article, we will examine how virt-handler, having received this handoff, takes over the VM lifecycle on the node.