Authors

- Youngju Kim (@fjvbn20031)
- Introduction
- What Does virt-api Do
- Why Is a Separate API Layer Needed
- The Character of virt-api as Seen in Source
- How a Migrate Request Actually Transforms
- Why This Approach Matters
- Why Start or Stop Looks Different from Migration
- Why Subresources Are Needed
- Difference Between virt-api and virt-handler
- API Layer Symptoms to Watch in Practice
- Debugging Order Operators Should Remember
- Conclusion
Introduction
When we call KubeVirt "a Kubernetes extension for VMs," the most visible surface of that extension is the API. Users send VM, VMI, and migration requests through kubectl or clients. But these requests don't go directly down to the node. In between, there is validation, defaulting, subresource branching, permission checking, and work object creation.
The primary component responsible for this layer is virt-api. docs/components.md also describes virt-api-server as the entry point for the virtualization flow.
What Does virt-api Do
Simply put, virt-api is responsible for:
- Providing entry points for KubeVirt-related APIs
- Validation and defaulting of VMI and VM related requests
- Handling subresource requests like start, stop, migrate
- Creating other CRs internally when needed
The important point here is that virt-api does not directly launch VMs. virt-api focuses on translating user intent into Kubernetes object operations.
Why Is a Separate API Layer Needed
It might seem sufficient to just have Kubernetes CRDs. But actual operations have the following requirements:
- Action-oriented requests that are hard to express with just create or update
- Pre-validation that checks guest state and prevents conflicts
- Requests that need to create separate work objects, like migration
- Subresource flows like console, VNC, restart
For these reasons, KubeVirt doesn't stop at CRD definitions alone but places a separate API processing layer.
The Character of virt-api as Seen in Source
Looking at pkg/virt-api/rest/lifecycle.go, you can see various request handlers for start, stop, migrate, reboot, backup, etc. The patterns here are quite consistent:
- Look up the target VM or VMI.
- Validate whether the current state allows the action.
- Instead of executing directly, perform a status patch or work object creation.
- Delegate actual execution to the controller or virt-handler layer afterward.
In other words, virt-api is the starting point of orchestration, not the final execution point.
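The four steps above can be sketched as a simplified handler. The types and function below are hypothetical stand-ins; the real handlers in pkg/virt-api/rest/lifecycle.go operate against the Kubernetes API and return HTTP responses, but the shape of the logic is the same: look up, validate, create a work object.

```go
package main

import (
	"errors"
	"fmt"
)

// VMI is a simplified stand-in for a VirtualMachineInstance (hypothetical type).
type VMI struct {
	Name  string
	Phase string // e.g. "Running", "Succeeded"
}

// Migration is a simplified stand-in for a VirtualMachineInstanceMigration.
type Migration struct {
	GenerateName string
	VMIName      string
}

var errConflict = errors.New("conflict: VMI is not running")

// migrateHandler mirrors the virt-api pattern: look up the target,
// validate its current state, then create a declarative work object
// instead of executing the migration itself.
func migrateHandler(store map[string]*VMI, name string) (*Migration, error) {
	vmi, ok := store[name] // 1. look up the VMI
	if !ok {
		return nil, fmt.Errorf("VMI %q not found", name)
	}
	if vmi.Phase != "Running" { // 2. validate that the action is allowed
		return nil, errConflict
	}
	// 3. hand the controller a work item; execution happens elsewhere
	return &Migration{GenerateName: "kubevirt-migrate-vm-", VMIName: vmi.Name}, nil
}

func main() {
	store := map[string]*VMI{"demo-vm": {Name: "demo-vm", Phase: "Running"}}
	m, err := migrateHandler(store, "demo-vm")
	fmt.Println(m.VMIName, err)
}
```

Note that the happy path ends with object creation, not with any call into a node agent; that handoff is the whole point of the layer.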
How a Migrate Request Actually Transforms
The best example is the migrate handler. Looking at MigrateVMRequestHandler, the internal flow is very clear.
1. Look Up VM and VMI
First, it checks whether the VM exists and whether the VMI for that VM exists.
2. Validate That the Current VMI Is in Running State
A VMI that is not running cannot be migrated, so it returns a conflict.
3. Create a Migration CR
This is the key. The handler does not start migration directly. Instead, it creates a VirtualMachineInstanceMigration object.
```yaml
apiVersion: kubevirt.io/v1
kind: VirtualMachineInstanceMigration
metadata:
  generateName: kubevirt-migrate-vm-
spec:
  vmiName: demo-vm
```
In the actual code, options like AddedNodeSelector may also be attached. From this moment, the migration is no longer an API request but a declarative work item for the controller to consume.
Why This Approach Matters
The advantages of this structure are significant.
1. Work Persists as Kubernetes Objects
Since migration becomes a separate CR, audit tracking and status checking are easy.
2. Controllers Can Process Asynchronously
The API server can return accepted without completing the work immediately.
3. Policy and Validation Are Further Separated
The API checks request validity, while actual capacity and policy decisions continue to be handled by the controller.
Why Start or Stop Looks Different from Migration
Not all actions are translated into migration CR creation. For example, a stop request may be expressed as a VM status patch or state change request depending on the case. patchVMStatusStopped and getChangeRequestJson in lifecycle.go demonstrate this pattern well.
In other words, KubeVirt chooses the most natural Kubernetes expression for each action:
- Long-running move operations: separate migration CR
- VM lifecycle transitions: status patch or state change request
- Operations requiring immediate node-side connection: virt-handler URI-based subresources
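The status-patch path can also be sketched. The snippet below is a simplified illustration of the idea behind getChangeRequestJson, not the actual KubeVirt code: the stop request is recorded as a JSON Patch (RFC 6902) against the VM status, and the field names here are a reduced, assumed version of the real stateChangeRequests shape.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// StateChangeRequest is a reduced version of the entries virt-api records
// on vm.status.stateChangeRequests (field set simplified for illustration).
type StateChangeRequest struct {
	Action string `json:"action"`
	UID    string `json:"uid,omitempty"`
}

// patchOp is a single JSON Patch operation (RFC 6902).
type patchOp struct {
	Op    string      `json:"op"`
	Path  string      `json:"path"`
	Value interface{} `json:"value"`
}

// stopPatch builds a JSON patch expressing "stop this VM" as a recorded
// state change request, rather than as an imperative call to the node.
func stopPatch(vmiUID string) ([]byte, error) {
	patch := []patchOp{{
		Op:    "add",
		Path:  "/status/stateChangeRequests",
		Value: []StateChangeRequest{{Action: "Stop", UID: vmiUID}},
	}}
	return json.Marshal(patch)
}

func main() {
	b, _ := stopPatch("1234-abcd")
	fmt.Println(string(b))
}
```

The design point is the same as with migration: the API layer leaves a durable, declarative record, and the controller acts on it asynchronously.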
Why Subresources Are Needed
Simple spec updates alone cannot naturally express requests like "start migration now," "perform a soft reboot now," or "connect to console." So KubeVirt uses subresources.
The important design points here are:
- Requests must pass authentication and validation at the API layer
- Runtime state must be checked to quickly return conflicts
- Actual downstream operations must be delegated to the appropriate component
This is similar to why Kubernetes itself has subresources like scale, exec, log, and eviction.
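Concretely, these actions are served under the subresources.kubevirt.io API group rather than as fields on the VM spec. A minimal sketch of the request path virt-api handles (path layout shown for the v1 group version; treat the exact format as illustrative):

```go
package main

import "fmt"

// subresourceURL builds the request path for a VM subresource action
// such as start, stop, or migrate, served by virt-api under the
// subresources.kubevirt.io API group.
func subresourceURL(namespace, vm, action string) string {
	return fmt.Sprintf(
		"/apis/subresources.kubevirt.io/v1/namespaces/%s/virtualmachines/%s/%s",
		namespace, vm, action)
}

func main() {
	fmt.Println(subresourceURL("default", "demo-vm", "migrate"))
}
```

Because the action is a distinct endpoint, it can carry its own authentication, validation, and conflict semantics, independent of spec updates on the VM object.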
Difference Between virt-api and virt-handler
This is an easy-to-confuse point, so let's separate them:
virt-api
- Close to the user and Kubernetes API
- Validation, defaulting, subresource entry point
- Status patching or work object creation
virt-handler
- Close to the node
- Communicates with actual launcher Pods
- Reflects domain state
- Involved in VM execution, termination, migration handoff
Both may feel like they "control VMs," but virt-api is the control plane entry while virt-handler is closer to the node execution plane.
API Layer Symptoms to Watch in Practice
Migrate Request Fails Immediately
This is most likely an API-layer conflict. Typical causes:
- VMI is not Running
- Paused state
- Action not permitted condition
Request Is Accepted but Nothing Happens
In this case, the API passed but may be stuck at the controller stage due to capacity, policy, or pending pod issues.
Confusion About Why Some Requests Are Patches and Others Are CR Creations
The key question is whether the request is a state transition on an existing object or an independent unit of work.
Debugging Order Operators Should Remember
- See which subresource the user's request entered through.
- Check if virt-api returned a conflict.
- If accepted, see what additional objects were created.
- Only then go down to the controller or node agent.
Working through this order in reverse makes it easy to miss where the problem actually is.
Conclusion
The core role of virt-api is not to directly run VMs but to convert user requests into verifiable Kubernetes operations. Migrate requests are translated into migration CR creation, stop requests into status change patches, and some runtime actions into virt-handler connections. Understanding this layer makes it clear why KubeVirt behaves in a Kubernetes-like manner, and why requests and execution are temporally separated.
In the next article, we will examine the informer, queue, and reconcile structure of virt-controller that connects these requests and objects to actual launcher Pod creation.