Virtual hardware

Fine-tuning different aspects of the hardware which are not device related (BIOS, mainboard, etc.) is sometimes necessary to allow guest operating systems to properly boot and reboot.

Machine Type

QEMU is able to work with two different classes of chipsets for x86_64, so-called machine types. The x86_64 chipsets are i440fx (also called pc) and q35. They are versioned based on qemu-system-${ARCH}, following the format pc-${machine_type}-${qemu_version}, e.g. pc-i440fx-2.10 and pc-q35-2.10.

KubeVirt defaults to QEMU’s newest q35 machine type. If a custom machine type is desired, it is configurable through the following structure:

```yaml
metadata:
  name: myvmi
spec:
  domain:
    machine:
      # This value indicates QEMU machine type.
      type: pc-q35-2.10
    resources:
      requests:
        memory: 512M
    devices:
      disks:
        - name: myimage
          disk: {}
  volumes:
    - name: myimage
      persistentVolumeClaim:
        claimName: myclaim
```

A comparison of the machine types' internals can be found in the QEMU wiki.

BIOS/UEFI

All virtual machines use BIOS by default for booting.

It is possible to utilize UEFI/OVMF by setting a value via spec.domain.firmware.bootloader:

```yaml
apiVersion: kubevirt.io/v1alpha3
kind: VirtualMachineInstance
metadata:
  labels:
    special: vmi-alpine-efi
  name: vmi-alpine-efi
spec:
  domain:
    devices:
      disks:
        - disk:
            bus: virtio
          name: containerdisk
    firmware:
      # this sets the bootloader type
      bootloader:
        efi: {}
```

SecureBoot is not yet supported.

SMBIOS Firmware

In order to provide a consistent view of the virtualized hardware for the guest OS, the SMBIOS UUID can be set to a constant value via spec.domain.firmware.uuid:

```yaml
metadata:
  name: myvmi
spec:
  domain:
    firmware:
      # this sets the UUID
      uuid: 5d307ca9-b3ef-428c-8861-06e72d69f223
      serial: e4686d2c-6e8d-4335-b8fd-81bee22f4815
    resources:
      requests:
        memory: 512M
    devices:
      disks:
        - name: myimage
          disk: {}
  volumes:
    - name: myimage
      persistentVolumeClaim:
        claimName: myclaim
```

In addition, the SMBIOS serial number can be set to a constant value via spec.domain.firmware.serial, as demonstrated above.

CPU

Note: This is not related to scheduling decisions or resource assignment.

Topology

Setting the number of CPU cores is possible via spec.domain.cpu.cores. The following VM will have a CPU with 3 cores:

```yaml
metadata:
  name: myvmi
spec:
  domain:
    cpu:
      # this sets the cores
      cores: 3
    resources:
      requests:
        memory: 512M
    devices:
      disks:
        - name: myimage
          disk: {}
  volumes:
    - name: myimage
      persistentVolumeClaim:
        claimName: myclaim
```

Enabling CPU compatibility enforcement

To enable CPU compatibility enforcement, the CPUNodeDiscovery feature gate must be enabled in the KubeVirt CR.

This feature gate allows KubeVirt to derive node selectors from the VM's CPU model and CPU features. With these node selectors, the VM can only be scheduled on nodes that support its CPU model and features.
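As an illustration, the feature gate could be enabled like this, assuming a KubeVirt version where feature gates are configured under spec.configuration.developerConfiguration.featureGates (older releases used the kubevirt-config ConfigMap's feature-gates key instead):

```yaml
apiVersion: kubevirt.io/v1alpha3
kind: KubeVirt
metadata:
  name: kubevirt
  namespace: kubevirt
spec:
  configuration:
    developerConfiguration:
      # enable CPU model/feature based scheduling constraints
      featureGates:
        - CPUNodeDiscovery
```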

Labeling nodes with CPU models and CPU features

To properly label the nodes, users can either use the KubeVirt node-labeller, which creates all the necessary labels, or create the node labels themselves.

The KubeVirt node-labeller creates three types of labels: CPU models, CPU features and KVM info. It uses libvirt to retrieve all CPU models and CPU features supported on the host and creates labels from them. KubeVirt can then schedule a VM on a node that supports the VM's CPU model and features.
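For illustration only, labels created by the node-labeller might look like the excerpt below; the actual models and features depend on the host CPU, and the "true" values are an assumption consistent with the node selector format described later in this section:

```yaml
# hypothetical excerpt of a labelled node's metadata
metadata:
  labels:
    cpu-model.node.kubevirt.io/Haswell: "true"
    cpu-model.node.kubevirt.io/IvyBridge: "true"
    cpu-feature.node.kubevirt.io/aes: "true"
    cpu-feature.node.kubevirt.io/avx: "true"
```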

The node-labeller supports a list of obsolete CPU models and a minimal baseline CPU model for features. Both can be set via the KubeVirt CR:

```yaml
apiVersion: kubevirt.io/v1alpha3
kind: KubeVirt
metadata:
  name: kubevirt
  namespace: kubevirt
spec:
  ...
  configuration:
    minCPUModel: "Penryn"
    obsoleteCPUModels:
      - "486"
      - "pentium"
    ...
```

Obsolete CPU models are not turned into labels. If the KubeVirt CR does not set obsoleteCPUModels or minCPUModel, the labeller uses default values: "pentium, pentium2, pentium3, pentiumpro, coreduo, n270, core2duo, Conroe, athlon, phenom, kvm32, kvm64, qemu32, qemu64" for obsoleteCPUModels and "Penryn" for minCPUModel. minCPUModel defines the baseline CPU model; the features of this model are treated as basic features and are not included in the label list. Feature labels are created from the difference between a newer CPU model's feature set and the basic feature set. For example, if Haswell has aes, apic and clflush, and Penryn has apic and clflush, the difference is aes, so a label is created only for the aes feature.

Users can change obsoleteCPUModels or minCPUModel by adding or removing CPU models in the KubeVirt CR configuration. KubeVirt then updates the nodes with the new labels.

Model

Note: Be sure that the CPU model of the node where you run a VM belongs to the same or a newer CPU family.

Note: If the CPU model is not defined, the VM will get the CPU model closest to the one used on the node where the VM is running.

Note: CPU model is case sensitive.

Setting the CPU model is possible via spec.domain.cpu.model. The following VM will have a CPU with the Conroe model:

```yaml
apiVersion: kubevirt.io/v1alpha3
kind: VirtualMachineInstance
metadata:
  name: myvmi
spec:
  domain:
    cpu:
      # this sets the CPU model
      model: Conroe
    ...
```

You can check the list of available models here.

When the CPUNodeDiscovery feature gate is enabled and the VM has a CPU model set, KubeVirt creates a node selector in the format cpu-model.node.kubevirt.io/<cpuModel>, e.g. cpu-model.node.kubevirt.io/Conroe. When the VM does not have a CPU model set, no node selector is created.
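For illustration, the constraint derived from model: Conroe would resemble the following node selector on the VMI's launcher pod (a sketch; the "true" value is assumed to match the label set by the node-labeller):

```yaml
# hypothetical node selector derived from model: Conroe
nodeSelector:
  cpu-model.node.kubevirt.io/Conroe: "true"
```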

Enabling a default cluster CPU model

To enable a default CPU model, users may add the cpuModel field to the KubeVirt CR.

```yaml
apiVersion: kubevirt.io/v1alpha3
kind: KubeVirt
metadata:
  name: kubevirt
  namespace: kubevirt
spec:
  ...
  configuration:
    cpuModel: "EPYC"
    ...
```

The default CPU model is applied when the VMI does not specify a CPU model. When the VMI has a CPU model set, the VMI's CPU model takes precedence. When neither the default CPU model nor the VMI's CPU model is set, host-model is used. The default CPU model can be changed while KubeVirt is running. When the CPUNodeDiscovery feature gate is enabled, KubeVirt also creates a node selector for the default CPU model.

CPU model special cases

As special cases you can set spec.domain.cpu.model to:

  • host-passthrough, to pass the node's CPU through to the VM:

```yaml
metadata:
  name: myvmi
spec:
  domain:
    cpu:
      # this passes the node CPU through to the VM
      model: host-passthrough
    ...
```

  • host-model, to get a CPU on the VM close to the node's:

```yaml
metadata:
  name: myvmi
spec:
  domain:
    cpu:
      # this sets the VM CPU close to the node one
      model: host-model
    ...
```

See the CPU API reference for more details.

Features

Setting CPU features is possible via spec.domain.cpu.features, which can contain zero or more CPU features:

```yaml
metadata:
  name: myvmi
spec:
  domain:
    cpu:
      # this sets the CPU features
      features:
        # this is the feature's name
        - name: "apic"
          # this is the feature's policy
          policy: "require"
    ...
```

Note: The policy attribute can either be omitted or set to one of the following policies: force, require, optional, disable, forbid.

Note: In case a policy is omitted for a feature, it will default to require.

Behaviour according to Policies:

  • All policies will be passed to libvirt during virtual machine creation.
  • In case the feature gate "CPUNodeDiscovery" is enabled and the policy is omitted or has the "require" value, the virtual machine can be scheduled only on nodes that support this feature.
  • In case the feature gate "CPUNodeDiscovery" is enabled and the policy has the "forbid" value, the virtual machine will not be scheduled on nodes that support this feature.

A full description of features and policies can be found here.

When the CPUNodeDiscovery feature gate is enabled, KubeVirt creates node selectors from CPU features in the format cpu-feature.node.kubevirt.io/<cpuFeature>, e.g. cpu-feature.node.kubevirt.io/apic. When the VM does not have CPU features set, no node selectors are created.
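Analogously to the CPU model case, a sketch of the constraint derived from the apic feature in the example above (again assuming a "true" label value):

```yaml
# hypothetical node selector derived from the "apic" feature with policy "require"
nodeSelector:
  cpu-feature.node.kubevirt.io/apic: "true"
```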

Clock

Guest time

Sets the virtualized hardware clock inside the VM to a specific time. Available options are

  • utc

  • timezone

See the Clock API Reference for all possible configuration options.

utc

If utc is specified, the VM’s clock will be set to UTC.

```yaml
metadata:
  name: myvmi
spec:
  domain:
    clock:
      utc: {}
    resources:
      requests:
        memory: 512M
    devices:
      disks:
        - name: myimage
          disk: {}
  volumes:
    - name: myimage
      persistentVolumeClaim:
        claimName: myclaim
```

timezone

If timezone is specified, the VM’s clock will be set to the specified local time.

```yaml
metadata:
  name: myvmi
spec:
  domain:
    clock:
      timezone: "America/New_York"
    resources:
      requests:
        memory: 512M
    devices:
      disks:
        - name: myimage
          disk: {}
  volumes:
    - name: myimage
      persistentVolumeClaim:
        claimName: myclaim
```

Timers

  • pit

  • rtc

  • kvm

  • hyperv

A pretty common timer configuration for VMs looks like this:

```yaml
metadata:
  name: myvmi
spec:
  domain:
    clock:
      utc: {}
      # here are the timers
      timer:
        hpet:
          present: false
        pit:
          tickPolicy: delay
        rtc:
          tickPolicy: catchup
        hyperv: {}
    resources:
      requests:
        memory: 512M
    devices:
      disks:
        - name: myimage
          disk: {}
  volumes:
    - name: myimage
      persistentVolumeClaim:
        claimName: myclaim
```

hpet is disabled, pit and rtc are configured to use a specific tickPolicy. Finally, hyperv is made available too.

See the Timer API Reference for all possible configuration options.

Note: Timers can be part of a machine type. Thus it may be necessary to explicitly disable them. We may in the future decide to add them via cluster-level defaulting, if they are part of a QEMU machine definition.

Random number generator (RNG)

You may want to use entropy collected by your cluster nodes inside your guest. KubeVirt allows adding a virtio RNG device to a virtual machine as follows.

```yaml
metadata:
  name: vmi-with-rng
spec:
  domain:
    devices:
      rng: {}
```

For Linux guests, the virtio-rng kernel module should be loaded early in the boot process to acquire access to the entropy source. Other systems may require similar adjustments to work with the virtio RNG device.

Note: Some guest operating systems or user payloads may require the RNG device with enough entropy and may fail to boot without it. For example, fresh Fedora images with newer kernels (4.16.4+) may require the virtio RNG device to be present to boot to login.

Video and Graphics Device

By default, a minimal video and graphics device configuration will be applied to the VirtualMachineInstance. The video device is VGA compatible and comes with a memory size of 16 MB. This device allows connecting to the OS via VNC.

It is possible to not attach it by setting spec.domain.devices.autoattachGraphicsDevice to false:

```yaml
metadata:
  name: myvmi
spec:
  domain:
    devices:
      autoattachGraphicsDevice: false
      disks:
        - name: myimage
          disk: {}
  volumes:
    - name: myimage
      persistentVolumeClaim:
        claimName: myclaim
```

VMIs without graphics and video devices are often referred to as headless VMIs.

When running a large number of small VMs, this can help increase the VMI density per node, since no memory needs to be reserved for video.

Features

KubeVirt supports a range of virtualization features which may be tweaked in order to allow non-Linux based operating systems to properly boot. Most noteworthy are

  • acpi

  • apic

  • hyperv

A common feature configuration is shown by the following example:

```yaml
apiVersion: kubevirt.io/v1alpha3
kind: VirtualMachineInstance
metadata:
  name: myvmi
spec:
  domain:
    # typical features
    features:
      acpi: {}
      apic: {}
      hyperv:
        relaxed: {}
        vapic: {}
        spinlocks:
          spinlocks: 8191
    resources:
      requests:
        memory: 512M
    devices:
      disks:
        - name: myimage
          disk: {}
  volumes:
    - name: myimage
      persistentVolumeClaim:
        claimName: myclaim
```

See the Features API Reference for all available features and configuration options.

Resources Requests and Limits

An optional resource request can be specified by the users to allow the scheduler to make a better decision in finding the most suitable Node to place the VM.

```yaml
apiVersion: kubevirt.io/v1alpha3
kind: VirtualMachineInstance
metadata:
  name: myvmi
spec:
  domain:
    resources:
      requests:
        memory: "1Gi"
        cpu: "2"
      limits:
        memory: "2Gi"
        cpu: "4"
    devices:
      disks:
        - name: myimage
          disk: {}
  volumes:
    - name: myimage
      persistentVolumeClaim:
        claimName: myclaim
```

CPU

Specifying CPU limits will determine the amount of CPU shares set on the control group the VM is running in; in other words, the amount of time the VM's CPUs can execute on the assigned resources when there is competition for CPU resources.

For more information please refer to how Pods with resource limits are run.

Memory Overhead

Various VM resources, such as a video adapter, IOThreads, and supplementary system software, consume additional memory from the Node, beyond the requested memory intended for the guest OS consumption. In order to provide a better estimate for the scheduler, this memory overhead will be calculated and added to the requested memory.

Please see how Pods with resource requests are scheduled for additional information on resource requests and limits.

Hugepages

KubeVirt gives you the possibility to use hugepages as backing memory for your VM. You need to provide the desired amount of memory in resources.requests.memory and the hugepage size to use in memory.hugepages.pageSize; for the x86_64 architecture this can for example be 2Mi.

```yaml
apiVersion: kubevirt.io/v1alpha3
kind: VirtualMachineInstance
metadata:
  name: myvmi
spec:
  domain:
    resources:
      requests:
        memory: "64Mi"
    memory:
      hugepages:
        pageSize: "2Mi"
    devices:
      disks:
        - name: myimage
          disk: {}
  volumes:
    - name: myimage
      persistentVolumeClaim:
        claimName: myclaim
```

In the above example the VM will have 64Mi of memory, but instead of regular memory it will use 2Mi-sized hugepages from the node.

Limitations

  • a node must have pre-allocated hugepages (see the sketch after this list)

  • the hugepage size cannot be bigger than the requested memory

  • the requested memory must be divisible by the hugepage size
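As a sketch of the first limitation: hugepages pre-allocated on a node are exposed as a dedicated resource in the node's capacity and allocatable fields, which is what the hugepages-backed VM memory is scheduled against. The amounts below are made-up example values:

```yaml
# hypothetical excerpt of a node's status with pre-allocated 2Mi hugepages
status:
  allocatable:
    hugepages-1Gi: "0"
    hugepages-2Mi: 512Mi
    memory: 16121356Ki
```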

Input Devices

Tablet

KubeVirt supports input devices. The only supported type is tablet. The tablet input device supports only the virtio and usb buses. The bus can be empty; in that case, usb will be selected.

```yaml
apiVersion: kubevirt.io/v1alpha3
kind: VirtualMachineInstance
metadata:
  name: myvmi
spec:
  domain:
    devices:
      inputs:
        - type: tablet
          bus: virtio
          name: tablet1
      disks:
        - name: myimage
          disk: {}
  volumes:
    - name: myimage
      persistentVolumeClaim:
        claimName: myclaim
```