- Authors
  - Youngju Kim (@fjvbn20031)
- Introduction
- Why a Proxy Is Needed
- What Source and Target Each Do
- How Ports Are Used
- Why the Target Listener Can Use Random Ports
- Why Socket Files Are Needed
- Where TLS Is Applied
- Why a Separate Migration Network Can Be Used
- Why Block Migration and Direct Migration Differ
- When Operators Should Look at the Proxy
- Common Misconceptions
- Conclusion
Introduction
When thinking about live migration, it is easy to simply say "source QEMU copies memory to target QEMU." But in an actual Kubernetes environment, there are far more complex transport issues in between.
- libvirt inside the launcher primarily uses Unix sockets and local resources
- The target Pod is dynamically created
- Source and target are on different nodes
- An additional TLS layer is needed
- Port requirements differ for block migration
To solve this problem, KubeVirt uses the migration proxy layer in pkg/virt-handler/migration-proxy/migration-proxy.go.
Why a Proxy Is Needed
The core reason is that it is difficult to bind the local libvirt control socket directly, one-to-one, to a cluster network path.
Inside the launcher, libvirt and QEMU operate based on Unix sockets, local files, and Pod namespace. But migration requires network connectivity between source and target nodes. Therefore, KubeVirt places a proxy in between that:
- Opens TCP listeners on the target side
- Provides local Unix sockets on the source side
- Connects the two sides via TLS or plain transport
In other words, the migration proxy is an address translator and transport shim between source and target.
What Source and Target Each Do
Looking at the ProxyManager interface, the roles are clear.
Target side
- StartTargetListener
- GetTargetListenerPorts
- StopTargetListener
The target node opens listeners to accept incoming TCP connections from outside.
Source side
- StartSourceListener
- GetSourceListenerFiles
- StopSourceListener
The source node creates local Unix socket files and forwards them to the target address and port.
Thanks to this structure, from the launcher's perspective, it sees a familiar local endpoint, but the actual transport goes over the network to the target.
How Ports Are Used
The code defines default migration ports as follows:
- Direct migration port: 49152
- Block migration port: 49153
GetMigrationPortsList determines the required set of ports based on whether block migration is involved. In other words, when disks must be moved as well, the port requirements grow.
The key insight for operators is that live migration is a network operation with explicit ports and listeners, not "invisible internal communication."
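The selection logic can be sketched roughly as follows. The constants match the default ports quoted above; the function is modeled on what GetMigrationPortsList does, with a simplified signature, and is not the exact KubeVirt code.

```go
package main

import "fmt"

const (
	// Default ports quoted in the post; 49152 is the start of
	// libvirt's conventional migration port range.
	LibvirtDirectMigrationPort = 49152
	LibvirtBlockMigrationPort  = 49153
)

// migrationPortsList mirrors the idea behind GetMigrationPortsList:
// block migration adds a second port on top of the direct-migration port.
func migrationPortsList(isBlockMigration bool) []int {
	ports := []int{LibvirtDirectMigrationPort}
	if isBlockMigration {
		ports = append(ports, LibvirtBlockMigrationPort)
	}
	return ports
}

func main() {
	fmt.Println(migrationPortsList(false)) // direct only
	fmt.Println(migrationPortsList(true))  // direct + block
}
```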
Why the Target Listener Can Use Random Ports
StartTargetListener creates proxies for target Unix files and can pass 0 as the TCP bind port to let the kernel pick a free port. GetTargetListenerPorts then reports the actually bound ports to the source side.
The advantage of this approach is that it reduces port conflicts and allows listeners to be opened more flexibly in the dynamic environment of Pods and nodes.
In other words, it is a structure that separates fixed logical ports from actual bind ports.
Why Socket Files Are Needed
Looking at SourceUnixFile on the source side, the migration proxy creates source socket files under a base directory. The reason is that this fits well with the flow where libvirt inside the launcher still operates based on local Unix endpoints.
This means the migration proxy does not abandon the existing local runtime model for networking -- it wraps the Unix socket model into a network-capable form.
Where TLS Is Applied
migration-proxy.go separately manages server TLS config, client TLS config, and migration TLS config. Also, if the cluster migration configuration has a DisableTLS option, the additional TLS layer can be turned off on the target listener.
However, both the code and API descriptions consistently imply that DisableTLS is usually a bad idea.
This is because migration traffic carries guest memory state and execution state, making network path protection important.
In other words, the migration proxy is not just a forwarder but also a security boundary.
Why a Separate Migration Network Can Be Used
The KubeVirt configuration has an option to route migration traffic through a separate network instead of the default Pod network. This is because migration traffic may compete with application Pod networks, or may need to be separated for bandwidth and security reasons.
Understanding this from the proxy perspective is straightforward. Since the proxy is already a transport shim between source and target, it is easy to change which network the transport uses.
Why Block Migration and Direct Migration Differ
Direct migration primarily involves memory and execution state, while block migration can also include disk-related data paths. Therefore, more ports are needed, and overall bandwidth and timeout calculations become more sensitive.
This is why the proxy layer manages port maps that distinguish between the two.
When Operators Should Look at the Proxy
In the following situations, the migration proxy should be the first suspect:
- Target Pod is alive but migration connection does not start
- TLS handshake issues between source and target
- Only specific ports fail in block migration
- Connection problems after migration network separation
In these cases, looking only at libvirt errors is insufficient -- you should also check whether listeners are open, whether source sockets were created, and whether target port mappings are correct.
Common Misconceptions
Misconception 1: Source and target QEMU just connect directly
In Kubernetes environments, an intermediate transport shim is needed. The migration proxy handles this role.
Misconception 2: Migration ports are always fixed
Logical ports are defined, but actual bind ports and mappings can be dynamic.
Misconception 3: TLS is optional so turning it off is not a big deal
Migration state is sensitive. Usually the additional TLS layer should be maintained.
Conclusion
KubeVirt's migration proxy is a key layer that adapts live migration transport for the Kubernetes environment. The target side opens TCP listeners, the source side provides Unix sockets, and the two are connected via TLS and port mapping. Thanks to this structure, the local libvirt model inside the launcher and the cross-cluster network migration model are cleanly bridged.
In the next post, we will look at the resource issues directly connected to migration -- how CPU, memory, NUMA, and hugepages are interpreted in Pod scheduling and the guest hardware model.