VirtualMachinePool

A VirtualMachinePool tries to ensure that a specified number of VirtualMachine replicas and their respective VirtualMachineInstances are in the ready state at any time. In other words, a VirtualMachinePool makes sure that a VirtualMachine or a set of VirtualMachines is always up and ready.

No state is kept and no guarantees are made about the maximum number of VirtualMachineInstance replicas running at any time. For example, the VirtualMachinePool may decide to create new replicas if VMs that may still be running enter an unknown state.

Using VirtualMachinePool

The VirtualMachinePool allows us to specify a virtualMachineTemplate in spec.virtualMachineTemplate. It consists of ObjectMetadata in spec.virtualMachineTemplate.metadata and a VirtualMachineSpec in spec.virtualMachineTemplate.spec. The virtual machine specification is identical to that of a standalone VirtualMachine workload.

spec.replicas can be used to specify the desired number of replicas. If unspecified, the default value is 1. This value can be updated at any time; the controller will react to the changes.
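For example, the replica count of an existing pool can be changed with a merge patch (a sketch; the pool name vm-pool-cirros matches the example used throughout this page):

```shell
# Set the desired replica count on an existing VirtualMachinePool
kubectl patch vmpool vm-pool-cirros --type merge --patch '{"spec":{"replicas":5}}'
```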

spec.selector is used by the controller to keep track of managed virtual machines. The selector specified there must be able to match the virtual machine labels as specified in spec.virtualMachineTemplate.metadata.labels. If the selector does not match these labels, or if the labels are empty, the controller does nothing except log an error. The user is responsible for avoiding the creation of other virtual machines or VirtualMachinePools which may conflict with the selector and the template labels.

Creating a VirtualMachinePool

VirtualMachinePool is part of the KubeVirt API pool.kubevirt.io/v1alpha1.

The example below shows how to create a simple VirtualMachinePool:

Example

    apiVersion: pool.kubevirt.io/v1alpha1
    kind: VirtualMachinePool
    metadata:
      name: vm-pool-cirros
    spec:
      replicas: 3
      selector:
        matchLabels:
          kubevirt.io/vmpool: vm-pool-cirros
      virtualMachineTemplate:
        metadata:
          creationTimestamp: null
          labels:
            kubevirt.io/vmpool: vm-pool-cirros
        spec:
          runStrategy: Always
          template:
            metadata:
              creationTimestamp: null
              labels:
                kubevirt.io/vmpool: vm-pool-cirros
            spec:
              domain:
                devices:
                  disks:
                  - disk:
                      bus: virtio
                    name: containerdisk
                resources:
                  requests:
                    memory: 128Mi
              terminationGracePeriodSeconds: 0
              volumes:
              - containerDisk:
                  image: kubevirt/cirros-container-disk-demo:latest
                name: containerdisk

Saving this manifest into vm-pool-cirros.yaml and submitting it to Kubernetes will create three virtual machines based on the template.

    $ kubectl create -f vm-pool-cirros.yaml
    virtualmachinepool.pool.kubevirt.io/vm-pool-cirros created
    $ kubectl describe vmpool vm-pool-cirros
    Name:         vm-pool-cirros
    Namespace:    default
    Labels:       <none>
    Annotations:  <none>
    API Version:  pool.kubevirt.io/v1alpha1
    Kind:         VirtualMachinePool
    Metadata:
      Creation Timestamp:  2023-02-09T18:30:08Z
      Generation:          1
      Managed Fields:
        Manager:      kubectl-create
        Operation:    Update
        Time:         2023-02-09T18:30:08Z
        API Version:  pool.kubevirt.io/v1alpha1
        Fields Type:  FieldsV1
        fieldsV1:
          f:status:
            .:
            f:labelSelector:
            f:readyReplicas:
            f:replicas:
        Manager:      virt-controller
        Operation:    Update
        Subresource:  status
        Time:         2023-02-09T18:30:44Z
      Resource Version:  6606
      UID:               ba51daf4-f99f-433c-89e5-93f39bc9989d
    Spec:
      Replicas:  3
      Selector:
        Match Labels:
          kubevirt.io/vmpool:  vm-pool-cirros
      Virtual Machine Template:
        Metadata:
          Creation Timestamp:  <nil>
          Labels:
            kubevirt.io/vmpool:  vm-pool-cirros
        Spec:
          Run Strategy:  Always
          Template:
            Metadata:
              Creation Timestamp:  <nil>
              Labels:
                kubevirt.io/vmpool:  vm-pool-cirros
            Spec:
              Domain:
                Devices:
                  Disks:
                    Disk:
                      Bus:  virtio
                    Name:   containerdisk
                Resources:
                  Requests:
                    Memory:  128Mi
              Termination Grace Period Seconds:  0
              Volumes:
                Container Disk:
                  Image:  kubevirt/cirros-container-disk-demo:latest
                Name:     containerdisk
    Status:
      Label Selector:  kubevirt.io/vmpool=vm-pool-cirros
      Ready Replicas:  2
      Replicas:        3
    Events:
      Type    Reason            Age  From                           Message
      ----    ------            ---  ----                           -------
      Normal  SuccessfulCreate  17s  virtualmachinepool-controller  Created VM default/vm-pool-cirros-0
      Normal  SuccessfulCreate  17s  virtualmachinepool-controller  Created VM default/vm-pool-cirros-2
      Normal  SuccessfulCreate  17s  virtualmachinepool-controller  Created VM default/vm-pool-cirros-1

Replicas is 3 and Ready Replicas is 2. This means that at the time this status was captured, three VirtualMachines had already been created, but only two were running and ready.
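The individual replicas can also be inspected directly by filtering on the selector label (a sketch, using the label from the example above):

```shell
# List the pool's VirtualMachines and VirtualMachineInstances by label
kubectl get vm -l kubevirt.io/vmpool=vm-pool-cirros
kubectl get vmi -l kubevirt.io/vmpool=vm-pool-cirros
```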

Scaling via the Scale Subresource

Note: This requires KubeVirt 0.59 or newer.

The VirtualMachinePool supports the scale subresource. As a consequence, it can be scaled via kubectl:

    $ kubectl scale vmpool vm-pool-cirros --replicas 5

Removing a VirtualMachine from VirtualMachinePool

It is also possible to remove a VirtualMachine from its VirtualMachinePool.

In this scenario, the ownerReferences entry needs to be removed from the VirtualMachine. This can be achieved either with kubectl edit or kubectl patch. Using kubectl patch, it would look like this:

    kubectl patch vm vm-pool-cirros-0 --type merge --patch '{"metadata":{"ownerReferences":null}}'

Note: You may want to update your VirtualMachine labels as well to avoid impact on selectors.
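For example, the pool label from the example above can be removed so that the pool's selector no longer matches the detached VM (the trailing "-" removes the label):

```shell
# Remove the pool label from the detached VM
kubectl label vm vm-pool-cirros-0 kubevirt.io/vmpool-
```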

Using the Horizontal Pod Autoscaler

Note: This requires KubeVirt 0.59 or newer.

The HorizontalPodAutoscaler (HPA) can be used with a VirtualMachinePool. Simply reference it in the spec of the autoscaler:

    apiVersion: autoscaling/v1
    kind: HorizontalPodAutoscaler
    metadata:
      creationTimestamp: null
      name: vm-pool-cirros
    spec:
      maxReplicas: 10
      minReplicas: 3
      scaleTargetRef:
        apiVersion: pool.kubevirt.io/v1alpha1
        kind: VirtualMachinePool
        name: vm-pool-cirros
      targetCPUUtilizationPercentage: 50

or use kubectl autoscale to define the HPA via the command line:

    $ kubectl autoscale vmpool vm-pool-cirros --min=3 --max=10 --cpu-percent=50

Exposing a VirtualMachinePool as a Service

A VirtualMachinePool may be exposed as a service. When this is done, one of the VirtualMachine replicas will be picked for the actual delivery of the service.

For example, exposing SSH port (22) as a ClusterIP service:

    apiVersion: v1
    kind: Service
    metadata:
      name: vm-pool-cirros-ssh
    spec:
      type: ClusterIP
      selector:
        kubevirt.io/vmpool: vm-pool-cirros
      ports:
      - protocol: TCP
        port: 2222
        targetPort: 22

Saving this manifest into vm-pool-cirros-ssh.yaml and submitting it to Kubernetes will create the ClusterIP service listening on port 2222 and forwarding to port 22.
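From inside the cluster, the service could then be reached via its DNS name (a sketch; assumes the service lives in the default namespace and that the guest accepts SSH password logins, as the cirros demo image does):

```shell
# SSH to whichever replica the service picks, through the ClusterIP service
ssh cirros@vm-pool-cirros-ssh.default.svc.cluster.local -p 2222
```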

See Service Objects for more details.

Using Persistent Storage

Note: DataVolumes are part of the Containerized Data Importer (CDI).

Using dataVolumeTemplates within spec.virtualMachineTemplate.spec results in the creation of unique persistent storage for each VM within a VMPool. The name of each entry in spec.virtualMachineTemplate.spec.dataVolumeTemplates has the VM's sequential postfix appended to it when the VM is created. This makes each VM a completely unique stateful workload.
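A minimal sketch of a pool with per-VM persistent storage is shown below. It assumes CDI is installed; the pool name, image URL, and storage size are illustrative. With this manifest, each VM (e.g. vm-pool-fedora-0) would get its own DataVolume (e.g. fedora-disk-0):

```yaml
apiVersion: pool.kubevirt.io/v1alpha1
kind: VirtualMachinePool
metadata:
  name: vm-pool-fedora
spec:
  replicas: 2
  selector:
    matchLabels:
      kubevirt.io/vmpool: vm-pool-fedora
  virtualMachineTemplate:
    metadata:
      labels:
        kubevirt.io/vmpool: vm-pool-fedora
    spec:
      runStrategy: Always
      dataVolumeTemplates:
      - metadata:
          name: fedora-disk         # each VM gets its sequential postfix appended
        spec:
          pvc:
            accessModes:
            - ReadWriteOnce
            resources:
              requests:
                storage: 5Gi
          source:
            registry:
              url: docker://quay.io/containerdisks/fedora:latest
      template:
        metadata:
          labels:
            kubevirt.io/vmpool: vm-pool-fedora
        spec:
          domain:
            devices:
              disks:
              - disk:
                  bus: virtio
                name: datavolumedisk
            resources:
              requests:
                memory: 1Gi
          volumes:
          - dataVolume:
              name: fedora-disk     # resolved per VM to fedora-disk-0, fedora-disk-1, ...
            name: datavolumedisk
```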

Using Unique CloudInit and ConfigMap Volumes with VirtualMachinePools

By default, any Secret or ConfigMap references in a spec.virtualMachineTemplate.spec.template volume section are used directly as is, without any modification to the naming. This means that if you specify a Secret in a cloudInitNoCloud volume, every VM instance spawned from the VirtualMachinePool with this volume will get the exact same Secret for its cloud-init user data.

This default behavior can be modified by setting the AppendPostfixToSecretReferences and AppendPostfixToConfigMapReferences booleans to true on the VMPool spec. When these booleans are enabled, referenced Secret and ConfigMap names will have the VM's sequential postfix appended to them. This allows you to pre-generate unique per-VM Secret and ConfigMap data for a VirtualMachinePool ahead of time, in a way that is predictably assigned to the VMs within the pool.
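In manifest form this might look like the following sketch. It assumes the camelCase field names appendPostfixToSecretReferences and appendPostfixToConfigMapReferences on the pool spec, and that Secrets named my-cloud-init-0, my-cloud-init-1, ... have been created ahead of time:

```yaml
apiVersion: pool.kubevirt.io/v1alpha1
kind: VirtualMachinePool
metadata:
  name: vm-pool-cirros
spec:
  replicas: 2
  appendPostfixToSecretReferences: true      # assumed camelCase spelling
  appendPostfixToConfigMapReferences: true   # assumed camelCase spelling
  selector:
    matchLabels:
      kubevirt.io/vmpool: vm-pool-cirros
  virtualMachineTemplate:
    metadata:
      labels:
        kubevirt.io/vmpool: vm-pool-cirros
    spec:
      runStrategy: Always
      template:
        metadata:
          labels:
            kubevirt.io/vmpool: vm-pool-cirros
        spec:
          domain:
            devices:
              disks:
              - disk:
                  bus: virtio
                name: cloudinitdisk
          volumes:
          - cloudInitNoCloud:
              secretRef:
                name: my-cloud-init   # VM 0 uses my-cloud-init-0, VM 1 uses my-cloud-init-1, ...
            name: cloudinitdisk
```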