
How Disks and Volumes Attach to VMs


Introduction

In container workloads, once a volume is mounted to a Pod, the job is mostly done. VMs are different: from the guest's perspective, what must become visible is not a file mount but a virtual disk device. KubeVirt's storage layer therefore acts as a translation layer bridging the Pod volume model and the guest disk model.

This article examines the paths through which container disks, PVCs, DataVolumes, hostDisks, generated volumes, and hotplug volumes attach to guests.

The Most Important Fact: Storage Is Interpreted Twice

In KubeVirt, storage goes through at least two stages:

  1. Kubernetes prepares the volume source at the Pod level.
  2. KubeVirt reinterprets it as a guest-visible disk.

In other words, "the PVC is visible to the launcher Pod" and "a disk appears inside the guest" are not the same statement.
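The two stages are visible directly in a VMI manifest: `spec.volumes` declares what Kubernetes prepares at the Pod level, while `spec.domain.devices.disks` declares what the guest should see, and the two are linked by name. A minimal sketch (all names are illustrative):

```yaml
apiVersion: kubevirt.io/v1
kind: VirtualMachineInstance
metadata:
  name: demo-vm              # illustrative name
spec:
  domain:
    devices:
      disks:
        - name: rootdisk     # stage 2: guest-visible disk device
          disk:
            bus: virtio
  volumes:
    - name: rootdisk         # stage 1: Pod-level volume source
      persistentVolumeClaim:
        claimName: root-pvc
```

Either half can be healthy while the other is broken, which is why the two statements above must be checked separately.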

What Volume Types Exist

Looking at schema.go, the converter, and the storage-related packages, the volume types that appear most often are:

  • PersistentVolumeClaim
  • DataVolume
  • ContainerDisk
  • HostDisk
  • CloudInitNoCloud
  • CloudInitConfigDrive
  • ConfigMap
  • Secret
  • ServiceAccount
  • Hotplug volume

From the guest's perspective, these are not all the same kind of disk. Some carry persistent data, some carry boot metadata, and some are not even copy targets during migration.
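In the manifest, all of these appear as entries under `spec.volumes`; only the source stanza differs. A few illustrative entries (claim names and paths are placeholders):

```yaml
volumes:
  - name: data
    persistentVolumeClaim:
      claimName: data-pvc
  - name: root
    dataVolume:
      name: root-dv                      # CDI-managed; ultimately backed by a PVC
  - name: scratch
    hostDisk:
      path: /var/lib/vm-scratch/disk.img
      type: DiskOrCreate                 # create the image file if it does not exist
      capacity: 1Gi
  - name: app-secret
    secret:
      secretName: app-secret             # surfaced to the guest as a small disk
```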

Why Container Disk Is Special

Looking at cmd/virt-launcher/virt-launcher.go and pkg/virt-handler/container-disk, container disks are prepared through dedicated local directories and a mounter. This is a KubeVirt-specific pattern for distributing the OS image itself like container image layers.

The advantage is easy distribution. The disadvantage is that the lifecycle differs from a normal PVC-backed persistent root disk: container disk contents are ephemeral, so guest writes do not survive a restart.

Think of container disks as "VM disk images delivered like containers."
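In the spec, a container disk is nothing more than an OCI image reference; by convention the image carries the disk file under `/disk`, and the mounter makes it available to the launcher. The image below is a real public example image:

```yaml
volumes:
  - name: os
    containerDisk:
      image: quay.io/containerdisks/fedora:latest
```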

How PVC and DataVolume Are Handled

PVC

PVC is the persistent storage model that Kubernetes already knows well. KubeVirt attaches it to the Pod and then connects it as a guest disk.

DataVolume

DataVolume is a higher-level abstraction connected to CDI. It typically imports or clones an image and ultimately provides storage in PVC form.

This is why virt-controller also checks the DataVolume's readiness state: a guest disk can only be attached once the underlying storage is ready to attach to the Pod.
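A DataVolume expresses "fetch this image into a PVC" declaratively; CDI performs the import, and the VMI then references the result by name. A sketch (the URL and size are placeholders):

```yaml
apiVersion: cdi.kubevirt.io/v1beta1
kind: DataVolume
metadata:
  name: root-dv                                    # illustrative
spec:
  source:
    http:
      url: https://example.com/images/fedora.qcow2 # placeholder import source
  storage:
    resources:
      requests:
        storage: 10Gi
```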

The Converter Transforms Volume Source to Disk Source

As seen in the previous article, the converter combines volumes and disks. What's important on the storage side is determining the actual source path and mode.

For example, the same PVC can:

  • Be referenced by file path if filesystem-based
  • Need to be treated as a device file if in block mode

This difference affects both how the disk is attached to the guest and whether it can be migrated.
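The mode is a property of the PVC, not of KubeVirt; the converter only reacts to what it finds. For example, a block-mode claim suitable for live migration might look like:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: block-root           # illustrative
spec:
  accessModes:
    - ReadWriteMany          # RWX shared storage is what live migration typically needs
  volumeMode: Block          # surfaced to the launcher Pod as a device file
  resources:
    requests:
      storage: 20Gi
```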

Why Generated Volumes Are Treated Separately

Looking at classifyVolumesForMigration in live-migration-source.go, there is a classification called generated volume. This includes:

  • Config map
  • Secret
  • Downward API
  • Service account
  • Cloud-init
  • Container disk

These volumes are different from regular shared PVCs: they carry guest boot auxiliary information or data generated at runtime. They must not be treated the same way during migration.

KubeVirt therefore classifies volumes not simply as "storage or not" but into finer classes:

  • Shared volume
  • Generated volume
  • Local volume to migrate
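A generated volume looks like any other entry in the spec, but its contents are materialized locally by KubeVirt rather than backed by storage, which is exactly why it can simply be regenerated on the migration target instead of being copied. For example:

```yaml
volumes:
  - name: cloudinit
    # Rendered locally into a NoCloud ISO; regenerated on the target, never copied.
    cloudInitNoCloud:
      userData: |
        #cloud-config
        hostname: demo-vm
```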

Why the Difference Between Local and Shared Disks Matters

From a live migration perspective, this difference is decisive.

Shared Disk

If both nodes can see the same storage, the disk itself doesn't need to be copied. Only memory state and execution state need to be moved.

Local Disk

If data exists only on the source node, disk contents must also be moved during migration. This is where block migration or volume migration problems arise.

live-migration-source.go calculates which disks are actual copy targets based on this volume classification.

Why Hotplug Volume Is More Complex

Regular volumes only need to be prepared before the VM starts; hotplug is the problem of inserting a new disk into a running guest.

This requires a chain of steps:

  • VMI spec change detection
  • Launcher Pod or attachment Pod assistance
  • Node-side mount
  • libvirt device addition

pkg/storage/hotplug and pkg/virt-handler/hotplug-disk are the core of this area.

Hotplug is simultaneously a storage problem and a runtime device update problem.
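Operationally, this chain is usually triggered with `virtctl addvolume`; declaratively, the same intent is expressed by marking a PVC-backed volume as hotpluggable. A sketch of the resulting VMI fragment (names are illustrative; field shapes follow the KubeVirt v1 API):

```yaml
spec:
  domain:
    devices:
      disks:
        - name: extra-data
          disk:
            bus: scsi          # hotplugged disks typically attach on a SCSI bus
  volumes:
    - name: extra-data
      persistentVolumeClaim:
        claimName: extra-pvc
        hotpluggable: true     # routes the volume through the attachment-Pod path
```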

The Storage Reality Shown by virt-launcher Initialization Code

initializeDirs in virt-launcher.go prepares multiple disk directories:

  • cloud-init
  • ignition
  • container disk
  • hotplug disk
  • config map disk
  • secret disk
  • service account disk

Just from this list, you can see that "disk" in KubeVirt has a much broader meaning than simple block storage. The guest sees these as boot disks, metadata disks, auxiliary channels, additional volumes, etc.

Why Storage Is Always at the Center of Migration Problems

In production, a significant share of migration failures stems from differences in storage visibility rather than from the network or CPU.

Typical questions include:

  • Can the target node see the same PVC?
  • Is the volume mode block or filesystem?
  • Can generated volumes be recreated on the target?
  • Are there hotplug disks that were never cleaned up?

This is exactly why KubeVirt carefully performs volume classification and target preparation before migration.

Common Misconceptions

Misconception 1: If Mounted to the Pod, Guest Disk Is Automatic

No. The guest must receive a disk device through the libvirt domain.

Misconception 2: PVC and DataVolume Are Almost the Same from the Guest's Perspective

Partially true, but the preparation process and the higher-level orchestration differ: a DataVolume carries an additional import/clone lifecycle.

Misconception 3: All Disks Are Treated the Same During Migration

No. The shared, generated, and local-to-migrate classification is very important.

Debugging Points Operators Should Check First

  • If the guest disk isn't visible, separately check Pod mount and libvirt disk attach.
  • If migration fails, check shared storage visibility and local volume presence.
  • If hotplug fails, check in order: spec reflection, node mount, libvirt device update.
  • If there are cloud-init or secret disk issues, check the generated volume preparation path.

Conclusion

KubeVirt's storage is a translation problem between "volumes attached to Pods" and "disks seen by guests." PVCs and DataVolumes handle persistent data paths, container disks and cloud-init handle boot auxiliary paths, and hotplug handles runtime device change paths. And when migration enters the picture, each volume takes on more specific meanings like shared, generated, and local-to-migrate.

Starting from the next article, we will move to the networking layer and examine the basic principles of converting Pod networks to VM networks step by step.