Installation

KubeVirt is a virtualization add-on to Kubernetes and this guide assumes that a Kubernetes cluster is already installed.

If installed on OKD, the web console is extended for management of virtual machines.

Requirements

A few requirements need to be met before you can begin:

  • Kubernetes cluster or derivative (such as OpenShift) based on one of the latest three Kubernetes releases that are out at the time the KubeVirt release is made.
  • Kubernetes apiserver must have --allow-privileged=true in order to run KubeVirt’s privileged DaemonSet (a quick way to check this is sketched after this list).
  • kubectl client utility
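
A quick way to verify the --allow-privileged flag, assuming a kubeadm-style cluster where the apiserver runs as a static pod in kube-system (adjust the query for your distribution), is sketched here:

  # Expect output containing --allow-privileged=true; the component=kube-apiserver label is how kubeadm labels the static pod
  $ kubectl -n kube-system get pod -l component=kube-apiserver \
      -o jsonpath='{.items[0].spec.containers[0].command}' | tr ',' '\n' | grep allow-privileged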

Container Runtime Support

KubeVirt is currently supported on the following container runtimes:

  • containerd
  • crio (with runv)

Other container runtimes, which do not use virtualization features, should work too. However, the mentioned ones are the main target.

Integration with AppArmor

In most scenarios, KubeVirt runs normally on systems with AppArmor. However, there are several known use cases that may require additional user interaction.

  • On a system with AppArmor enabled, the locally installed profiles may block the execution of the KubeVirt privileged containers. That usually results in initialization failure of the virt-handler pod:

    $ kubectl get pods -n kubevirt
    NAME                               READY   STATUS       RESTARTS         AGE
    virt-api-77df5c4f87-7mqv4          1/1     Running      1 (17m ago)      27m
    virt-api-77df5c4f87-wcq44          1/1     Running      1 (17m ago)      27m
    virt-controller-749d8d99d4-56gb7   1/1     Running      1 (17m ago)      27m
    virt-controller-749d8d99d4-78j6x   1/1     Running      1 (17m ago)      27m
    virt-handler-4w99d                 0/1     Init:Error   14 (5m18s ago)   27m
    virt-operator-564f568975-g9wh4     1/1     Running      1 (17m ago)      31m
    virt-operator-564f568975-wnpz8     1/1     Running      1 (17m ago)      31m
    $ kubectl logs -n kubevirt virt-handler-4w99d virt-launcher
    error: failed to get emulator capabilities
    error: internal error: Failed to start QEMU binary /usr/libexec/qemu-kvm for probing: libvirt: error : cannot execute binary /usr/libexec/qemu-kvm: Permission denied
    $ journalctl -b | grep DEN
    ...
    May 18 16:44:20 debian audit[6316]: AVC apparmor="DENIED" operation="exec" profile="libvirtd" name="/usr/libexec/qemu-kvm" pid=6316 comm="rpc-worker" requested_mask="x" denied_mask="x" fsuid=107 ouid=0
    May 18 16:44:20 debian kernel: audit: type=1400 audit(1652888660.539:39): apparmor="DENIED" operation="exec" profile="libvirtd" name="/usr/libexec/qemu-kvm" pid=6316 comm="rpc-worker" requested_mask="x" denied_mask="x" fsuid=107 ouid=0
    ...

    Here, the host AppArmor profile for libvirtd does not allow the execution of the /usr/libexec/qemu-kvm binary. In the future this will hopefully work out of the box (tracking issue), but until then there are a couple of possible workarounds.

    The first (and simplest) one is to remove the libvirt package from the host: assuming the host is a dedicated Kubernetes node, you likely won’t need it anyway.

    If you actually need libvirt to be present on the host, then you can add the following rule to the AppArmor profile for libvirtd (usually /etc/apparmor.d/usr.sbin.libvirtd):

    # vim /etc/apparmor.d/usr.sbin.libvirtd
    ...
    /usr/libexec/qemu-kvm PUx,
    ...
    # apparmor_parser -r /etc/apparmor.d/usr.sbin.libvirtd   # or systemctl reload apparmor.service
  • The default AppArmor profile used by the container runtimes usually denies the mount call for the workloads. That may prevent running VMs with VirtIO-FS. This is a known issue. The current workaround is to run such a VM as unconfined by adding the following annotation to the VM or VMI object:

    annotations:
      container.apparmor.security.beta.kubernetes.io/compute: unconfined
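
    For a VirtualMachine object, one way to apply this without hand-editing the manifest is a merge patch that places the annotation on the VMI template, so it propagates to the launcher pod. This is only a sketch; the VirtualMachine name testvm is illustrative:

    $ kubectl patch vm testvm --type merge --patch \
        '{"spec":{"template":{"metadata":{"annotations":{"container.apparmor.security.beta.kubernetes.io/compute":"unconfined"}}}}}'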

Validate Hardware Virtualization Support

Hardware with virtualization support is recommended. You can use virt-host-validate to ensure that your hosts are capable of running virtualization workloads:

  $ virt-host-validate qemu
  QEMU: Checking for hardware virtualization                : PASS
  QEMU: Checking if device /dev/kvm exists                  : PASS
  QEMU: Checking if device /dev/kvm is accessible           : PASS
  QEMU: Checking if device /dev/vhost-net exists            : PASS
  QEMU: Checking if device /dev/net/tun exists              : PASS
  ...
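
If virt-host-validate is not available on the node, a rough first check on x86 hosts is to confirm that the CPU exposes virtualization extensions and that /dev/kvm is present. This is only a sketch, not a replacement for the full validation:

  # Count CPU virtualization flags (vmx for Intel, svm for AMD) and confirm /dev/kvm exists
  $ grep -E -c '(vmx|svm)' /proc/cpuinfo
  $ ls -l /dev/kvm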

SELinux support

SELinux-enabled nodes need the container-selinux package installed. The minimum version is documented inside the kubevirt/kubevirt repository, in docs/getting-started.md, under “SELinux support”.

For (older) release branches that don’t specify a container-selinux version, version 2.170.0 or newer is recommended.
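
On rpm-based nodes you can check which version is installed with the package manager. A quick sketch; the exact command depends on your distribution:

  $ rpm -q container-selinux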

Installing KubeVirt on Kubernetes

KubeVirt can be installed using the KubeVirt operator, which manages the lifecycle of all the KubeVirt core components. Below is an example of how to install KubeVirt’s latest official release; this method supports deploying KubeVirt on both x86_64 and Arm64 platforms.

  # Point at latest release
  $ export RELEASE=$(curl https://storage.googleapis.com/kubevirt-prow/release/kubevirt/kubevirt/stable.txt)
  # Deploy the KubeVirt operator
  $ kubectl apply -f https://github.com/kubevirt/kubevirt/releases/download/${RELEASE}/kubevirt-operator.yaml
  # Create the KubeVirt CR (instance deployment request) which triggers the actual installation
  $ kubectl apply -f https://github.com/kubevirt/kubevirt/releases/download/${RELEASE}/kubevirt-cr.yaml
  # wait until all KubeVirt components are up
  $ kubectl -n kubevirt wait kv kubevirt --for condition=Available
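
Once the wait completes, you can optionally confirm the rollout status reported on the KubeVirt CR; it should reach the Deployed phase:

  # Optional check: the KubeVirt CR reports its rollout status in .status.phase
  $ kubectl get kubevirt.kubevirt.io/kubevirt -n kubevirt -o=jsonpath="{.status.phase}"
  Deployed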

If hardware virtualization is not available, a software emulation fallback can be enabled by setting spec.configuration.developerConfiguration.useEmulation to true in the KubeVirt CR, as follows:

  $ kubectl edit -n kubevirt kubevirt kubevirt

Add the following to the KubeVirt CR that opens in the editor:

  spec:
    ...
    configuration:
      developerConfiguration:
        useEmulation: true
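
If you prefer not to edit the CR interactively, the same setting can be applied with a merge patch. A sketch with the same effect as the edit above:

  $ kubectl patch -n kubevirt kubevirt kubevirt --type merge \
      --patch '{"spec":{"configuration":{"developerConfiguration":{"useEmulation":true}}}}'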

Note: Prior to release v0.20.0 the condition for the kubectl wait command was named “Ready” instead of “Available”

Note: Prior to KubeVirt 0.34.2 a ConfigMap called kubevirt-config in the install namespace was used to configure KubeVirt. Since 0.34.2 this method is deprecated. If the ConfigMap still exists it takes precedence over the configuration in the CR, but it will not receive future updates and you should migrate any custom configurations to spec.configuration on the KubeVirt CR.
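
To check whether your cluster still carries the deprecated ConfigMap (and therefore needs this migration), look for it in the install namespace:

  $ kubectl get configmap kubevirt-config -n kubevirt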

All new components will be deployed under the kubevirt namespace:

  kubectl get pods -n kubevirt
  NAME                               READY   STATUS    RESTARTS   AGE
  virt-api-6d4fc3cf8a-b2ere          1/1     Running   0          1m
  virt-controller-5d9fc8cf8b-n5trt   1/1     Running   0          1m
  virt-handler-vwdjx                 1/1     Running   0          1m
  ...

Installing KubeVirt on OKD

The following SCC needs to be added prior to KubeVirt deployment:

  $ oc adm policy add-scc-to-user privileged -n kubevirt -z kubevirt-operator
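
On many OKD versions the grant is recorded in the users list of the privileged SCC, so a quick (version-dependent) sanity check is:

  $ oc get scc privileged -o yaml | grep kubevirt-operator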

Once privileges are granted, KubeVirt can be deployed as described above.

Web user interface on OKD

No additional steps are required to extend OKD’s web console for KubeVirt.

The virtualization extension is automatically enabled when KubeVirt deployment is detected.

From Service Catalog as an APB

You can find KubeVirt in the OKD Service Catalog and install it from there. In order to do that please follow the documentation in the KubeVirt APB repository.

Installing KubeVirt on k3OS

The following configuration needs to be added to all nodes prior to KubeVirt deployment:

  k3os:
    modules:
      - kvm
      - vhost_net

Once nodes are restarted with this configuration, KubeVirt can be deployed as described above.
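
After the nodes come back up, you can verify that the required kernel modules were actually loaded:

  $ lsmod | grep -E 'kvm|vhost_net'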

Installing the Daily Developer Builds

KubeVirt publishes a developer build daily from the current main branch. You can see when the last build happened by looking at our nightly-build-jobs.

To install the latest developer build, run the following commands:

  $ LATEST=$(curl -L https://storage.googleapis.com/kubevirt-prow/devel/nightly/release/kubevirt/kubevirt/latest)
  $ kubectl apply -f https://storage.googleapis.com/kubevirt-prow/devel/nightly/release/kubevirt/kubevirt/${LATEST}/kubevirt-operator.yaml
  $ kubectl apply -f https://storage.googleapis.com/kubevirt-prow/devel/nightly/release/kubevirt/kubevirt/${LATEST}/kubevirt-cr.yaml

To find out which commit this build is based on, run:

  $ LATEST=$(curl -L https://storage.googleapis.com/kubevirt-prow/devel/nightly/release/kubevirt/kubevirt/latest)
  $ curl https://storage.googleapis.com/kubevirt-prow/devel/nightly/release/kubevirt/kubevirt/${LATEST}/commit
  d358cf085b5a86cc4fa516215f8b757a4e61def2

ARM64 developer builds

ARM64 developer builds can be installed like this:

  $ LATEST=$(curl -L https://storage.googleapis.com/kubevirt-prow/devel/nightly/release/kubevirt/kubevirt/latest-arm64)
  $ kubectl apply -f https://storage.googleapis.com/kubevirt-prow/devel/nightly/release/kubevirt/kubevirt/${LATEST}/kubevirt-operator-arm64.yaml
  $ kubectl apply -f https://storage.googleapis.com/kubevirt-prow/devel/nightly/release/kubevirt/kubevirt/${LATEST}/kubevirt-cr-arm64.yaml

s390x developer builds

s390x developer builds can be installed like this:

  $ LATEST=$(curl -L https://storage.googleapis.com/kubevirt-prow/devel/nightly/release/kubevirt/kubevirt/latest-s390x)
  $ kubectl apply -f https://storage.googleapis.com/kubevirt-prow/devel/nightly/release/kubevirt/kubevirt/${LATEST}/kubevirt-operator-s390x.yaml
  $ kubectl apply -f https://storage.googleapis.com/kubevirt-prow/devel/nightly/release/kubevirt/kubevirt/${LATEST}/kubevirt-cr-s390x.yaml

Deploying from Source

See the Developer Getting Started Guide to understand how to build and deploy KubeVirt from source.

Installing network plugins (optional)

KubeVirt alone does not bring any additional network plugins; it just allows users to utilize them. If you want to attach your VMs to multiple networks (Multus CNI) or have full control over L2 (OVS CNI), you need to deploy the respective network plugins. For more information, refer to the OVS CNI installation guide (a minimal attachment example is sketched after the note below).

Note: KubeVirt Ansible network playbook installs these plugins by default.
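
As an illustration of what such an attachment looks like once Multus and the OVS CNI plugin are deployed, the following sketch creates a NetworkAttachmentDefinition for a secondary L2 network; the name ovs-net and the bridge br1 are illustrative assumptions, not values the plugins ship with:

  # Sketch only: requires Multus and OVS CNI to be installed, and an OVS bridge "br1" on the nodes
  $ cat <<EOF | kubectl apply -f -
  apiVersion: k8s.cni.cncf.io/v1
  kind: NetworkAttachmentDefinition
  metadata:
    name: ovs-net
  spec:
    config: '{"cniVersion": "0.3.1", "type": "ovs", "bridge": "br1"}'
  EOF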

Restricting KubeVirt components node placement

You can restrict the placement of the KubeVirt components across your cluster nodes by editing the KubeVirt CR:

  • The placement of the KubeVirt control plane components (virt-controller, virt-api) is governed by the .spec.infra.nodePlacement field in the KubeVirt CR.
  • The placement of the virt-handler DaemonSet pods (and consequently, the placement of the VM workloads scheduled to the cluster) is governed by the .spec.workloads.nodePlacement field in the KubeVirt CR.

For each of these .nodePlacement objects, the .affinity, .nodeSelector and .tolerations sub-fields can be configured. See the description in the API reference for further information about using these fields.

For example, to restrict the virt-controller and virt-api pods to only run on the control-plane nodes:

  kubectl patch -n kubevirt kubevirt kubevirt --type merge --patch '{"spec": {"infra": {"nodePlacement": {"nodeSelector": {"node-role.kubernetes.io/control-plane": ""}}}}}'

To restrict the virt-handler pods to only run on nodes with the “region=primary” label:

  kubectl patch -n kubevirt kubevirt kubevirt --type merge --patch '{"spec": {"workloads": {"nodePlacement": {"nodeSelector": {"region": "primary"}}}}}'
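
Tolerations (and affinity) are set the same way. For example, the following sketch would let virt-handler and the VM workloads tolerate an illustrative dedicated=virtualization:NoSchedule taint:

  kubectl patch -n kubevirt kubevirt kubevirt --type merge --patch '{"spec": {"workloads": {"nodePlacement": {"tolerations": [{"key": "dedicated", "operator": "Equal", "value": "virtualization", "effect": "NoSchedule"}]}}}}'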