KubeVirt Source Code Reading Map: Where to Start to Grasp the Entire Structure

By Youngju Kim (@fjvbn20031)
Introduction
KubeVirt has a large codebase and deep layers. When you first open the repo, virt-api, virt-controller, virt-handler, virt-launcher, pkg/network, and staging all appear at once, making it hard to know where to start reading.
However, if you get the reading order right, the structure becomes quite clear. In this final post of the series, we lay out a source code reading map organized around which questions each package answers on the way to understanding KubeVirt.
Questions to Start With
When reading KubeVirt code, the order of questions matters more than the file tree. I recommend the following order:
- What Kubernetes objects represent a VM?
- Who watches those objects and creates Pods?
- After arriving on a node, who runs libvirt and QEMU?
- How does the Pod network become the guest network?
- How does migration proceed in the control plane and data plane respectively?
- What kernel primitives are actually used?
Reading in this order lets you see KubeVirt not as "a huge Go repo" but as a "VM execution pipeline."
Step 1: Start with API Types
The first place to read is staging/src/kubevirt.io/api/core/v1.
These files in particular are key:
- types.go
- schema.go
Three things to understand here:
- VirtualMachine
- VirtualMachineInstance
- VirtualMachineInstanceMigration
Reading this layer first makes it clear what the controller code reconciles later on. In particular, reading the status, conditions, migrationState, and interface types up front makes almost all subsequent logic easier to follow.
In other words, API types are not documentation -- they are the dictionary for interpreting the rest of the code.
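To make the shape of that dictionary concrete, here is a heavily trimmed sketch of the three core types. The field names follow the general shape of kubevirt.io/api/core/v1, but these are simplified stand-ins, not the real definitions in types.go (for example, the real VirtualMachine uses a RunStrategy rather than a plain bool):

```go
package main

import "fmt"

// VirtualMachine owns a VMI and carries the "should it be running" intent.
// Simplified: the real API expresses this via RunStrategy, not a bool.
type VirtualMachine struct {
	Name    string
	Running bool
}

// VirtualMachineInstance is the running VM: Spec is desired, Status is observed.
type VirtualMachineInstance struct {
	Name   string
	Spec   VMISpec
	Status VMIStatus
}

type VMISpec struct {
	CPUCores uint32
	MemoryMB uint64
}

type VMIStatus struct {
	Phase      string // e.g. Pending, Scheduling, Running
	Conditions []string
	NodeName   string
}

// VirtualMachineInstanceMigration asks the system to move a VMI to another node.
type VirtualMachineInstanceMigration struct {
	VMIName string
	Phase   string // e.g. Pending, Scheduling, Running, Succeeded
}

func main() {
	vmi := VirtualMachineInstance{
		Name:   "demo",
		Spec:   VMISpec{CPUCores: 2, MemoryMB: 2048},
		Status: VMIStatus{Phase: "Running", NodeName: "node-1"},
	}
	fmt.Println(vmi.Name, vmi.Status.Phase)
}
```

Keeping this spec/status split in mind is enough to read most of the controller code: controllers consume Spec and write Status.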
Step 2: See How User Intent Is Standardized in virt-api
Next is virt-api and the admission layer. The question here is:
"What validation and defaulting does the user's spec go through to become an object the controller can process?"
Reading this step reveals:
- Which field combinations are allowed
- What subresources exist
- Whether VMIs run directly or are managed through VMs
In other words, virt-api is not an execution engine -- it is the gate that transforms user declarations into safe control-plane inputs.
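The gate metaphor can be sketched as two functions: one that fills in omitted fields (what a mutating webhook does) and one that rejects impossible combinations (what a validating webhook does). All names here are invented for illustration; the real logic lives in virt-api's admission webhooks:

```go
package main

import (
	"errors"
	"fmt"
)

type VMISpec struct {
	CPUCores   uint32
	MemoryMB   uint64
	Interfaces []string // e.g. "masquerade", "bridge"
}

// defaultSpec fills omitted fields, the way a mutating webhook would.
func defaultSpec(s *VMISpec) {
	if s.CPUCores == 0 {
		s.CPUCores = 1
	}
	if len(s.Interfaces) == 0 {
		s.Interfaces = []string{"masquerade"}
	}
}

// validateSpec rejects bad field combinations, the way a validating webhook would.
func validateSpec(s *VMISpec) error {
	if s.MemoryMB == 0 {
		return errors.New("memory must be set")
	}
	for _, iface := range s.Interfaces {
		if iface != "masquerade" && iface != "bridge" {
			return fmt.Errorf("unsupported interface binding: %s", iface)
		}
	}
	return nil
}

func main() {
	spec := VMISpec{MemoryMB: 1024}
	defaultSpec(&spec)
	if err := validateSpec(&spec); err != nil {
		fmt.Println("rejected:", err)
		return
	}
	fmt.Printf("admitted: %+v\n", spec)
}
```

The payoff of reading this layer is knowing that everything downstream can assume a defaulted, validated spec.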
Step 3: Read Pod Creation and Handoff in virt-controller/watch
Next is pkg/virt-controller/watch. The question changes here:
"Who turns this declaration into launcher Pods and migration Pods?"
These flows in particular are important:
- VMI controller
- Migration controller
- Template service
Key points to understand:
- Informers and work queues
- Owner references and expectations
- Launcher Pod manifest rendering
- Migration target Pod creation
This section is where KubeVirt uses the classic Kubernetes controller pattern most faithfully.
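The core of that pattern can be reduced to a toy: events put keys on a work queue, and a level-triggered reconcile function makes the Pod world match the VMI world for one key at a time. Real code uses client-go informers, workqueues, owner references, and expectations; this sketch keeps only the shape:

```go
package main

import "fmt"

// store stands in for the informer caches: which VMIs exist, and which
// launcher Pods exist for them. Keys are "namespace/name" strings.
type store struct {
	vmis map[string]bool
	pods map[string]bool
}

// reconcile is level-triggered: it looks at current state, not at the event
// that enqueued the key, and converges Pods toward VMIs.
func (s *store) reconcile(key string) string {
	switch {
	case s.vmis[key] && !s.pods[key]:
		s.pods[key] = true // render and create the launcher Pod
		return "created launcher pod for " + key
	case !s.vmis[key] && s.pods[key]:
		delete(s.pods, key) // VMI gone: garbage-collect the Pod
		return "deleted launcher pod for " + key
	default:
		return "in sync: " + key
	}
}

func main() {
	s := &store{vmis: map[string]bool{"default/testvmi": true}, pods: map[string]bool{}}
	// Processing the same key twice is harmless: the second pass is a no-op.
	for _, key := range []string{"default/testvmi", "default/testvmi"} {
		fmt.Println(s.reconcile(key))
	}
}
```

The idempotence shown by the second pass is exactly why requeues and duplicate events are safe in the real controllers.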
Step 4: Reading virt-handler Reveals "Who Actually Works on the Node"
Many people first feel the substance of KubeVirt here. pkg/virt-handler is the node-local agent, and a significant portion of VM operations happens here.
Good starting files:
- vm.go
- migration-target.go
- migration-proxy/migration-proxy.go
- seccomp/seccomp.go
Questions answered at this layer:
- How does it sync VMIs and domains on the node?
- What RPC does it use to communicate with the launcher?
- How does it prepare migration source and target?
- How does it handle cgroups, devices, networking, and seccomp?
In other words, virt-handler is closest to being KubeVirt's "on-site foreman."
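The foreman's core job is a sync loop: compare the VMI state the cluster wants with the libvirt domain state the launcher reports, and pick an action. This is a toy reduction of pkg/virt-handler/vm.go; the action names are invented:

```go
package main

import "fmt"

type action string

const (
	actStart    action = "start domain via launcher RPC"
	actShutdown action = "shut down domain"
	actNothing  action = "nothing to do"
)

// sync compares desired state (from the VMI object) with observed state
// (from the launcher's domain notifications) and decides what to do next.
func sync(vmiWantsRunning, domainRunning bool) action {
	switch {
	case vmiWantsRunning && !domainRunning:
		return actStart
	case !vmiWantsRunning && domainRunning:
		return actShutdown
	default:
		return actNothing
	}
}

func main() {
	fmt.Println(sync(true, false)) // VMI scheduled here, no domain yet
	fmt.Println(sync(true, true))  // steady state
	fmt.Println(sync(false, true)) // VMI deleted, domain still up
}
```

Everything else in virt-handler (cgroups, devices, seccomp, migration preparation) hangs off decisions of this shape.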
Step 5: Read Actual VM Execution Path in cmd/virt-launcher and virtwrap
To see how a VM actually starts, you must descend to cmd/virt-launcher/virt-launcher.go and pkg/virt-launcher/virtwrap.
The important question here is:
"How is the VMI spec transformed into a libvirt domain and QEMU execution state?"
Key points:
- libvirt connection creation
- Command server startup
- Domain manager implementation
- Guest agent polling
- Event collection
- Domain stats collection
A package that must be read alongside is converter. pkg/virt-launcher/virtwrap/converter/converter.go is the core layer that translates the VMI spec into libvirt domain XML.
In other words, virtwrap is the code closest to the hypervisor in KubeVirt.
Step 6: Read Networking by Bundling pkg/network
KubeVirt networking cannot be understood from a single file. It is better to read these packages together:
- setup
- multus
- controllers
- dhcp
- migration
The question is simple:
"How does the guest use the network the Pod already received?"
What to look at:
- Pod NIC reading
- TAP and bridge preparation
- Masquerade address calculation
- DHCP responses
- Multus annotation generation
- Interface status reflection
In other words, KubeVirt networking does not replace CNI -- it is code that reprocesses Pod network into guest-visible network.
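One concrete piece of that reprocessing is masquerade address calculation. KubeVirt's masquerade binding defaults to the 10.0.2.0/24 VM network, using the first host address on the pod side as the gateway and handing the second to the guest over DHCP. This sketch mirrors the concept with net/netip, not the exact KubeVirt implementation:

```go
package main

import (
	"fmt"
	"net/netip"
)

// masqueradeAddrs derives the pod-side gateway address and the guest address
// from the VM network CIDR: network+1 for the gateway, network+2 for the guest.
func masqueradeAddrs(cidr string) (gateway, guest netip.Addr, err error) {
	prefix, err := netip.ParsePrefix(cidr)
	if err != nil {
		return netip.Addr{}, netip.Addr{}, err
	}
	gateway = prefix.Addr().Next() // network address + 1
	guest = gateway.Next()         // network address + 2
	return gateway, guest, nil
}

func main() {
	gw, guest, err := masqueradeAddrs("10.0.2.0/24")
	if err != nil {
		panic(err)
	}
	fmt.Println("gateway:", gw, "guest:", guest) // 10.0.2.1 and 10.0.2.2
}
```

The dhcp package then advertises exactly these addresses to the guest, which is why the guest sees a stable network regardless of what the Pod received from CNI.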
Step 7: Migration Must Be Read by Going Back and Forth Between Controller and Launcher
Live migration does not end in a single file. You must read it by going back and forth:
- Control plane: pkg/virt-controller/watch/migration/migration.go
- Node plane: pkg/virt-handler/migration-target.go
- Transport support: pkg/virt-handler/migration-proxy/migration-proxy.go
- Hypervisor plane: pkg/virt-launcher/virtwrap/live-migration-source.go
Good questions to attach while reading:
- Who created the target Pod?
- Where are sync address and port filled in?
- Who triggers the transition from pre-copy to post-copy?
- Who decides abort and timeout?
In other words, migration must be read vertically through controller, node agent, and launcher.
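The vertical reading can be modeled as a relay: each layer advances the migration one phase and contributes one piece of state. The phase names and addresses below are illustrative, not the actual API constants:

```go
package main

import "fmt"

type migration struct {
	Phase       string
	TargetPod   string // contributed by virt-controller
	SyncAddress string // contributed by the target virt-handler / migration-proxy
}

// controllerStep: the control plane creates the target Pod.
func controllerStep(m *migration) {
	m.TargetPod = "virt-launcher-target"
	m.Phase = "Scheduling"
}

// handlerStep: the target node agent wires up the proxied migration endpoint.
func handlerStep(m *migration) {
	m.SyncAddress = "127.0.0.1:49152" // illustrative proxied endpoint
	m.Phase = "PreparingTarget"
}

// launcherStep: libvirt on the source drives pre-copy toward the sync address.
func launcherStep(m *migration) {
	m.Phase = "Running"
}

func main() {
	m := &migration{Phase: "Pending"}
	for _, step := range []func(*migration){controllerStep, handlerStep, launcherStep} {
		step(m)
		fmt.Println(m.Phase, m.TargetPod, m.SyncAddress)
	}
}
```

Asking "which layer wrote this field?" while reading the real migrationState is the fastest way to keep the four files straight.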
Step 8: Read Host Integration Code from a Kernel Primitive Perspective
Finally, it is good to re-read the host integration code together.
- cmd/virt-chroot
- pkg/virt-handler/cgroup
- pkg/virt-handler/selinux
- pkg/network/driver/virtchroot
At this point, read based on Linux primitives rather than Kubernetes abstractions.
- chroot
- Namespace entry
- cgroup v1, v2
- SELinux label switching
- TAP creation
- Device access control
After reaching this step, the technical answer to "why VMs could be implemented on top of Pods" is nearly complete.
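A small example of the kernel-primitive mindset: before code like pkg/virt-handler/cgroup can do anything, it must know whether the host runs cgroup v1 or the unified v2 hierarchy. A common heuristic is checking for the cgroup.controllers file at the unified mount point. This is a generic sketch of that idea, not KubeVirt's exact detection code; the filesystem check is injected so the logic stays testable:

```go
package main

import (
	"fmt"
	"os"
)

const unifiedMountPoint = "/sys/fs/cgroup"

// cgroupVersion reports "v2" if the unified hierarchy exposes
// cgroup.controllers at the mount point, and "v1" otherwise.
func cgroupVersion(exists func(string) bool) string {
	if exists(unifiedMountPoint + "/cgroup.controllers") {
		return "v2"
	}
	return "v1"
}

func main() {
	realExists := func(p string) bool {
		_, err := os.Stat(p)
		return err == nil
	}
	fmt.Println("host cgroup version:", cgroupVersion(realExists))
}
```

Every path below this point (device access, TAP creation, SELinux labels) branches on answers like this one, which is why reading with Linux primitives in mind pays off.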
My Recommended Actual Reading Order
If reading from start to finish once, I recommend this order:
1. staging/src/kubevirt.io/api/core/v1/types.go
2. staging/src/kubevirt.io/api/core/v1/schema.go
3. pkg/virt-controller/watch/vmi/vmi.go
4. pkg/virt-controller/watch/migration/migration.go
5. pkg/virt-handler/vm.go
6. pkg/virt-handler/migration-target.go
7. cmd/virt-launcher/virt-launcher.go
8. pkg/virt-launcher/virtwrap/manager.go
9. pkg/virt-launcher/virtwrap/converter/converter.go
10. pkg/network/setup/network.go
11. pkg/network/setup/podnic.go
12. pkg/network/controllers/vmi.go
13. pkg/network/multus/annotation.go
14. pkg/virt-launcher/virtwrap/live-migration-source.go
15. pkg/virt-handler/migration-proxy/migration-proxy.go
16. pkg/virt-handler/seccomp/seccomp.go
This order flows from the control plane down through node, launcher, network, migration, and kernel integration.
Special Notes While Reading
1. Look at Handoff Points Rather Than Package Units
KubeVirt is a system with many handoffs.
- From API to controller
- From controller to launcher Pod
- From controller to virt-handler
- From virt-handler to launcher RPC
- From launcher to libvirt and QEMU
Therefore, tracking "who hands off to the next stage" is faster for understanding than following package boundaries.
2. Keep Revisiting Status Field Definitions
When lost while reading code, returning to the status structs in types.go is helpful. In practice, much of the controller logic is code that fills or interprets status.
3. Always Think About Source and Target Simultaneously for Migration
People accustomed to single VM lifecycles often get confused reading migration code. This is because source Pod, target Pod, source node, target node, source state, and target state all exist simultaneously.
Summarizing the Entire Series in One Sentence
KubeVirt is not another hypervisor placed on top of Kubernetes -- it is an orchestrator that precisely stitches together Kubernetes' declarative control plane, Linux's virtualization primitives, and libvirt with QEMU.
Conclusion
In this series, we followed KubeVirt in a single thread from API types through controller, node agent, launcher, networking, migration, kernel, security, observability, and failure modes. While it looks complex at first, it is actually a structure of transforming declarations into Pods, starting QEMU inside Pods, attaching networks and devices with Linux primitives, and wrapping migration and status reporting in controller patterns.
Ultimately, the answer to "how were VMs possible on top of Pods" is not one piece of magic but lies in precisely understanding the handoffs across multiple layers. The KubeVirt source code is the map that most honestly shows those handoffs.