VirtualMachineInstanceReplicaSet

A VirtualMachineInstanceReplicaSet tries to ensure that a specified number of VirtualMachineInstance replicas are running at any time. In other words, a VirtualMachineInstanceReplicaSet makes sure that a VirtualMachineInstance or a homogeneous set of VirtualMachineInstances is always up and ready. It is very similar to a Kubernetes ReplicaSet.

No state is kept, and no guarantees are made about the maximum number of VirtualMachineInstance replicas which are up at any time. For example, the VirtualMachineInstanceReplicaSet may decide to create new replicas if VMs which are possibly still running enter an unknown state.

Using VirtualMachineInstanceReplicaSet

The VirtualMachineInstanceReplicaSet allows us to specify a VirtualMachineInstanceTemplate in spec.template. It consists of ObjectMetadata in spec.template.metadata and a VirtualMachineInstanceSpec in spec.template.spec. The specification of the virtual machine is identical to the specification of a standalone VirtualMachineInstance workload.

spec.replicas can be used to specify how many replicas are wanted. If unspecified, the default value is 1. The value can be updated at any time; the controller will react to the change.
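
For example, assuming the testreplicaset manifest shown later in this document, the replica count can be changed in place with a patch (the dedicated scale subresource, shown later, achieves the same):

  $ kubectl patch vmirs testreplicaset --type merge -p '{"spec":{"replicas":5}}'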

spec.selector is used by the controller to keep track of managed virtual machines. The selector must match the virtual machine labels specified in spec.template.metadata.labels. If the selector does not match these labels, or if they are empty, the controller will do nothing except log an error. The user is responsible for not creating other virtual machines or VirtualMachineInstanceReplicaSets which conflict with the selector and the template labels.
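
A fragment illustrating the required agreement between the selector and the template labels (the label key and value are placeholders):

  spec:
    replicas: 2
    selector:
      matchLabels:
        myvmi: myvmi        # must match the template labels below
    template:
      metadata:
        labels:
          myvmi: myvmi      # must match the selector above
      spec:
        # VirtualMachineInstanceSpec, identical to a standalone VirtualMachineInstance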

Exposing a VirtualMachineInstanceReplicaSet as a Service

A VirtualMachineInstanceReplicaSet can be exposed as a service. When this is done, one of the VirtualMachineInstance replicas will be picked for the actual delivery of the service.

For example, exposing the SSH port (22) of a VirtualMachineInstanceReplicaSet as a ClusterIP service using virtctl:

  $ virtctl expose vmirs vmi-ephemeral --name vmiservice --port 27017 --target-port 22

All service exposure options that apply to a VirtualMachineInstance apply to a VirtualMachineInstanceReplicaSet. See Exposing VirtualMachineInstance for more details.
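
Alternatively, a Service object can be written by hand; a minimal sketch, assuming the myvmi: myvmi template labels from the example later in this document:

  apiVersion: v1
  kind: Service
  metadata:
    name: vmiservice
  spec:
    selector:
      myvmi: myvmi        # the template labels, which are also applied to the virt-launcher pods
    ports:
    - port: 27017
      targetPort: 22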

When to use a VirtualMachineInstanceReplicaSet

Note: The base assumption is that referenced disks are read-only or that the VMIs are writing internally to a tmpfs. The most obvious volume sources for VirtualMachineInstanceReplicaSets which KubeVirt supports are listed below. If other types are used, data corruption is possible.

Using VirtualMachineInstanceReplicaSet is the right choice when one wants many identical VMs and does not care about maintaining any disk state after the VMs are terminated.

Volume types which work well in combination with a VirtualMachineInstanceReplicaSet are listed below; a short combined sketch follows the list:

  • cloudInitNoCloud
  • ephemeral
  • containerDisk
  • emptyDisk
  • configMap
  • secret
  • any other type, if the VMI writes internally to a tmpfs
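
A sketch combining three of these volume types in a template's volumes section (the image, user data, and disk size are illustrative):

  volumes:
  - name: containerdisk
    containerDisk:
      image: kubevirt/cirros-container-disk-demo:latest
  - name: cloudinitdisk
    cloudInitNoCloud:
      userData: |
        #cloud-config
        password: mypassword
        chpasswd: { expire: False }
  - name: scratch
    emptyDisk:
      capacity: 2Gi

Each volume must be matched by a disk entry of the same name under spec.template.spec.domain.devices.disks.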

Fast starting ephemeral Virtual Machines

This use-case involves small and fast booting VMs with little provisioning performed during initialization.

In this scenario, migrations are not important. Redistributing VM workloads between Nodes can be achieved simply by deleting managed VirtualMachineInstances which are running on an overloaded Node. Such a VirtualMachineInstance can be evicted by deleting the VirtualMachineInstance object directly (KubeVirt-aware workload redistribution) or by deleting the corresponding Pod in which the virtual machine runs (Kubernetes-only workload redistribution).
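
Both eviction styles, sketched with one of the replica names from the example below (the virt-launcher pod name suffix is illustrative):

  # KubeVirt-aware: delete the VirtualMachineInstance itself
  $ kubectl delete vmi testh8998

  # Kubernetes-only: delete the pod the virtual machine runs in
  $ kubectl delete pod virt-launcher-testh8998-abcde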

Slow starting ephemeral Virtual Machines

In this use-case one has large, slow-booting VMs, and complex or resource-intensive provisioning is done during boot. More specifically, the timespan between the creation of a new VM and the time it enters the ready state is long.

In this scenario, one still does not care about the state, but since re-provisioning VMs is expensive, migrations are important. Workload redistribution between Nodes can be achieved by migrating VirtualMachineInstances to different Nodes. A workload redistributor needs to be aware of KubeVirt and create migrations, instead of evicting VirtualMachineInstances by deletion.
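
Such a migration is requested by creating a VirtualMachineInstanceMigration object; a minimal sketch, assuming a managed replica named testh8998:

  apiVersion: kubevirt.io/v1
  kind: VirtualMachineInstanceMigration
  metadata:
    name: migrate-testh8998
  spec:
    vmiName: testh8998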

Note: The simplest way to get a migratable ephemeral VirtualMachineInstance is to use local storage based on ContainerDisks in combination with a file-based backing store. However, migratable backing store support has not officially landed in KubeVirt yet and is untested.

Example

  apiVersion: kubevirt.io/v1
  kind: VirtualMachineInstanceReplicaSet
  metadata:
    name: testreplicaset
  spec:
    replicas: 3
    selector:
      matchLabels:
        myvmi: myvmi
    template:
      metadata:
        name: test
        labels:
          myvmi: myvmi
      spec:
        domain:
          devices:
            disks:
            - disk:
              name: containerdisk
          resources:
            requests:
              memory: 64M
        volumes:
        - name: containerdisk
          containerDisk:
            image: kubevirt/cirros-container-disk-demo:latest

Saving this manifest into testreplicaset.yaml and submitting it to Kubernetes will create three virtual machines based on the template.

  $ kubectl create -f testreplicaset.yaml
  virtualmachineinstancereplicaset "testreplicaset" created
  $ kubectl describe vmirs testreplicaset
  Name:         testreplicaset
  Namespace:    default
  Labels:       <none>
  Annotations:  <none>
  API Version:  kubevirt.io/v1
  Kind:         VirtualMachineInstanceReplicaSet
  Metadata:
    Cluster Name:
    Creation Timestamp:  2018-01-03T12:42:30Z
    Generation:          0
    Resource Version:    6380
    Self Link:           /apis/kubevirt.io/v1/namespaces/default/virtualmachineinstancereplicasets/testreplicaset
    UID:                 903a9ea0-f083-11e7-9094-525400ee45b0
  Spec:
    Replicas:  3
    Selector:
      Match Labels:
        Myvmi:  myvmi
    Template:
      Metadata:
        Creation Timestamp:  <nil>
        Labels:
          Myvmi:  myvmi
        Name:  test
      Spec:
        Domain:
          Devices:
            Disks:
              Disk:
                Name:         containerdisk
                Volume Name:  containerdisk
          Resources:
            Requests:
              Memory:  64M
        Volumes:
          Name:  containerdisk
          Container Disk:
            Image:  kubevirt/cirros-container-disk-demo:latest
  Status:
    Conditions:      <nil>
    Ready Replicas:  2
    Replicas:        3
  Events:
    Type    Reason            Age  From                                         Message
    ----    ------            ---  ----                                         -------
    Normal  SuccessfulCreate  13s  virtualmachineinstancereplicaset-controller  Created virtual machine: testh8998
    Normal  SuccessfulCreate  13s  virtualmachineinstancereplicaset-controller  Created virtual machine: testf474w
    Normal  SuccessfulCreate  13s  virtualmachineinstancereplicaset-controller  Created virtual machine: test5lvkd

Replicas is 3 and Ready Replicas is 2. This means that at the moment the status was taken, three VirtualMachineInstances had already been created, but only two were running and ready.
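
To watch readiness converge, the replica set and its managed instances can be listed (assuming the labels from the example; output columns vary with the KubeVirt version):

  $ kubectl get vmirs testreplicaset
  $ kubectl get vmis -l myvmi=myvmi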

Scaling via the Scale Subresource

Note: This requires the CustomResourceSubresources feature gate to be enabled for clusters prior to Kubernetes 1.11.

The VirtualMachineInstanceReplicaSet supports the scale subresource. As a consequence it is possible to scale it via kubectl:

  $ kubectl scale vmirs myvmirs --replicas 5

Using the Horizontal Pod Autoscaler

Note: This requires a cluster at Kubernetes version 1.11 or newer.

The HorizontalPodAutoscaler (HPA) can be used with a VirtualMachineInstanceReplicaSet. Simply reference it in the spec of the autoscaler:

  apiVersion: autoscaling/v1
  kind: HorizontalPodAutoscaler
  metadata:
    name: myhpa
  spec:
    scaleTargetRef:
      kind: VirtualMachineInstanceReplicaSet
      name: vmi-replicaset-cirros
      apiVersion: kubevirt.io/v1
    minReplicas: 3
    maxReplicas: 10
    targetCPUUtilizationPercentage: 50

or use kubectl autoscale to define the HPA via the command line:

  $ kubectl autoscale vmirs vmi-replicaset-cirros --min=3 --max=10