Interfaces and Networks

Connecting a virtual machine to a network consists of two parts. First, networks are specified in spec.networks. Then, interfaces backed by the networks are added to the VM by specifying them in spec.domain.devices.interfaces.

Each interface must have a corresponding network with the same name.

An interface defines a virtual network interface of a virtual machine (also called the frontend). A network specifies which logical or physical device the interface is connected to (also called the backend).

There are multiple ways of configuring an interface as well as a network.

All possible configuration options are available in the Interface API Reference and Network API Reference.
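For example, a minimal VM spec (a sketch distilled from the examples below) pairs one interface with a network of the same name:

    kind: VM
    spec:
      domain:
        devices:
          interfaces:
          - name: default   # frontend: the vNIC seen by the guest
            masquerade: {}
      networks:
      - name: default       # backend: name must match the interface above
        pod: {}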

Backend

Network backends are configured in spec.networks. A network must have a unique name. Additional fields declare which logical or physical device the network relates to.

Each network should declare its type by defining one of the following fields:

| Type   | Description                              |
|--------|------------------------------------------|
| pod    | Default Kubernetes network               |
| multus | Secondary network provided using Multus  |

pod

A pod network represents the default pod eth0 interface, configured by the cluster network solution, that is present in each pod.

    kind: VM
    spec:
      domain:
        devices:
          interfaces:
          - name: default
            masquerade: {}
      networks:
      - name: default
        pod: {} # Stock pod network

multus

It is also possible to connect VMIs to secondary networks using Multus. This assumes that multus is installed across your cluster and a corresponding NetworkAttachmentDefinition CRD was created.

The following example defines a network which uses the ovs-cni plugin, which will connect the VMI to Open vSwitch’s bridge br1 and VLAN 100. Other CNI plugins such as ptp, bridge, macvlan or Flannel might be used as well. For their installation and usage refer to the respective project documentation.

First the NetworkAttachmentDefinition needs to be created. That is usually done by an administrator. Users can then reference the definition.

    apiVersion: "k8s.cni.cncf.io/v1"
    kind: NetworkAttachmentDefinition
    metadata:
      name: ovs-vlan-100
    spec:
      config: '{
          "cniVersion": "0.3.1",
          "type": "ovs",
          "bridge": "br1",
          "vlan": 100
        }'

With the following definition, the VMI will be connected to the default pod network and to the secondary Open vSwitch network.

    kind: VM
    spec:
      domain:
        devices:
          interfaces:
          - name: default
            masquerade: {}
            bootOrder: 1 # attempt to boot from an external tftp server
            dhcpOptions:
              bootFileName: default_image.bin
              tftpServerName: tftp.example.com
          - name: ovs-net
            bridge: {}
            bootOrder: 2 # if the first attempt fails, try to PXE-boot from this L2 network
      networks:
      - name: default
        pod: {} # Stock pod network
      - name: ovs-net
        multus: # Secondary multus network
          networkName: ovs-vlan-100

It is also possible to use a Multus network as the default pod network. A version of multus after this Pull Request is required (currently master).

Note the following:

  • A multus default network and a pod network type are mutually exclusive.

  • The virt-launcher pod that starts the VMI will not have the pod network configured.

  • The multus delegate chosen as default must return at least one IP address.

Create a NetworkAttachmentDefinition with IPAM.

    apiVersion: "k8s.cni.cncf.io/v1"
    kind: NetworkAttachmentDefinition
    metadata:
      name: macvlan-test
    spec:
      config: '{
          "type": "macvlan",
          "master": "eth0",
          "mode": "bridge",
          "ipam": {
            "type": "host-local",
            "subnet": "10.250.250.0/24"
          }
        }'

Define a VMI with a Multus network as the default.

    kind: VM
    spec:
      domain:
        devices:
          interfaces:
          - name: test1
            bridge: {}
      networks:
      - name: test1
        multus: # Multus network as default
          default: true
          networkName: macvlan-test

Frontend

Network interfaces are configured in spec.domain.devices.interfaces. They describe properties of virtual interfaces as “seen” inside guest instances. The same network backend may be connected to a virtual machine in multiple different ways, each with their own connectivity guarantees and characteristics.

Each interface should declare its type by defining one of the following fields:

| Type       | Description                                     |
|------------|-------------------------------------------------|
| bridge     | Connect using a linux bridge                    |
| slirp      | Connect using QEMU user networking mode         |
| sriov      | Pass through an SR-IOV PCI device via vfio      |
| masquerade | Connect using iptables rules to NAT the traffic |

Each interface may also have additional configuration fields that modify properties “seen” inside guest instances, as listed below:

| Name       | Format                                                  | Default value | Description                                                                  |
|------------|---------------------------------------------------------|---------------|------------------------------------------------------------------------------|
| model      | One of: e1000, e1000e, ne2k_pci, pcnet, rtl8139, virtio | virtio        | NIC type                                                                     |
| macAddress | ff:ff:ff:ff:ff:ff or FF-FF-FF-FF-FF-FF                  |               | MAC address as seen inside the guest system, for example: de:ad:00:00:be:af |
| ports      |                                                         | empty         | List of ports to be forwarded to the virtual machine.                        |
| pciAddress | 0000:81:00.1                                            |               | Set network interface PCI address, for example: 0000:81:00.1                |

    kind: VM
    spec:
      domain:
        devices:
          interfaces:
          - name: default
            model: e1000 # expose e1000 NIC to the guest
            masquerade: {} # connect through a masquerade
            ports:
            - name: http
              port: 80
      networks:
      - name: default
        pod: {}
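For illustration, the remaining fields from the table can be combined on the same interface. This is a sketch; the MAC and PCI address values are simply the examples listed in the table above:

    kind: VM
    spec:
      domain:
        devices:
          interfaces:
          - name: default
            masquerade: {}
            macAddress: de:ad:00:00:be:af # custom MAC as seen inside the guest
            pciAddress: "0000:81:00.1"    # fixed PCI address for the vNIC
      networks:
      - name: default
        pod: {}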

Note: If a specific MAC address is configured for a virtual machine interface, it’s passed to the underlying CNI plugin that is expected to configure the backend to allow for this particular MAC address. Not every plugin has native support for custom MAC addresses.

Note: For some CNI plugins without native support for custom MAC addresses, there is a workaround, which is to use the tuning CNI plugin to adjust the pod interface MAC address. It can be used as follows:

    apiVersion: "k8s.cni.cncf.io/v1"
    kind: NetworkAttachmentDefinition
    metadata:
      name: ptp-mac
    spec:
      config: '{
          "cniVersion": "0.3.1",
          "name": "ptp-mac",
          "plugins": [
            {
              "type": "ptp",
              "ipam": {
                "type": "host-local",
                "subnet": "10.1.1.0/24"
              }
            },
            {
              "type": "tuning"
            }
          ]
        }'

This approach may not work for all plugins. For example, OKD SDN is not compatible with the tuning plugin.

  • Plugins that handle custom MAC addresses natively: ovs, bridge.

  • Plugins that are compatible with tuning plugin: flannel, ptp.

  • Plugins that don’t need special MAC address treatment: sriov (in vfio mode).

Ports

Declares the ports the virtual machine listens on.

Note: When using the slirp interface only the configured ports will be forwarded to the virtual machine.

| Name     | Format    | Required | Description         |
|----------|-----------|----------|---------------------|
| name     |           | no       | Name                |
| port     | 1 - 65535 | yes      | Port to expose      |
| protocol | TCP,UDP   | no       | Connection protocol |
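Putting the table together, a sketch of a slirp interface that forwards only HTTP might look like this (name and protocol are optional):

    kind: VM
    spec:
      domain:
        devices:
          interfaces:
          - name: default
            slirp: {}
            ports:
            - name: http     # optional
              port: 80       # required, 1 - 65535
              protocol: TCP  # optional, TCP or UDP
      networks:
      - name: default
        pod: {}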

Tip: Use the e1000 model if your guest image doesn’t ship with virtio drivers.

Note: Windows machines need the latest virtio network driver to configure the correct MTU on the interface.

If spec.domain.devices.interfaces is omitted, the virtual machine is connected using the default pod network interface of bridge type. If you’d like to have a virtual machine instance without any network connectivity, you can use the autoattachPodInterface field as follows:

    kind: VM
    spec:
      domain:
        devices:
          autoattachPodInterface: false

bridge

In bridge mode, virtual machines are connected to the network backend through a linux “bridge”. The pod network IPv4 address is delegated to the virtual machine via DHCPv4. The virtual machine should be configured to use DHCP to acquire IPv4 addresses.

Note: If a specific MAC address is not configured in the virtual machine interface spec the MAC address from the relevant pod interface is delegated to the virtual machine.

    kind: VM
    spec:
      domain:
        devices:
          interfaces:
          - name: red
            bridge: {} # connect through a bridge
      networks:
      - name: red
        multus:
          networkName: red

At this time, bridge mode doesn’t support additional configuration fields.

Note: due to IPv4 address delegation, in bridge mode the pod doesn’t have an IP address configured, which may introduce issues with third-party solutions that may rely on it. For example, Istio may not work in this mode.

Note: the cluster admin can forbid the use of the bridge interface type for pod networks via a designated configuration flag. To do so, set the following option to false:

    apiVersion: kubevirt.io/v1alpha3
    kind: KubeVirt
    metadata:
      name: kubevirt
      namespace: kubevirt
    spec:
      configuration:
        network:
          permitBridgeInterfaceOnPodNetwork: false

Note: binding the pod network using the bridge interface type may cause issues. Besides the third-party issue mentioned in the note above, live migration is not allowed with a pod network binding of bridge interface type, and some CNI plugins might not allow the use of a custom MAC address for your VM instances. If you think you may be affected by any of these issues, consider changing the default interface type to masquerade and disabling the bridge type for the pod network, as shown in the example above.
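As a sketch of such a configuration, the cluster-wide default interface type can typically be switched to masquerade alongside the flag above; the defaultNetworkInterface field shown here is an assumption, so verify it against your KubeVirt version:

    apiVersion: kubevirt.io/v1alpha3
    kind: KubeVirt
    metadata:
      name: kubevirt
      namespace: kubevirt
    spec:
      configuration:
        network:
          defaultNetworkInterface: masquerade      # assumed field name; sets the default interface type
          permitBridgeInterfaceOnPodNetwork: false # forbids bridge on the pod network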

slirp

In slirp mode, virtual machines are connected to the network backend using QEMU user networking mode. In this mode, QEMU allocates internal IP addresses to virtual machines and hides them behind NAT.

    kind: VM
    spec:
      domain:
        devices:
          interfaces:
          - name: red
            slirp: {} # connect using SLIRP mode
      networks:
      - name: red
        pod: {}

At this time, slirp mode doesn’t support additional configuration fields.

Note: in slirp mode, the only supported protocols are TCP and UDP. ICMP is not supported.

More information about SLIRP mode can be found in the QEMU Wiki.

masquerade

In masquerade mode, KubeVirt allocates internal IP addresses to virtual machines and hides them behind NAT. All the traffic exiting virtual machines is “NAT’ed” using pod IP addresses. A guest operating system should be configured to use DHCP to acquire IPv4 addresses.

To allow traffic of specific ports into virtual machines, the template ports section of the interface should be configured as follows. If the ports section is missing, all ports are forwarded into the VM.

    kind: VM
    spec:
      domain:
        devices:
          interfaces:
          - name: red
            masquerade: {} # connect using masquerade mode
            ports:
            - port: 80 # allow incoming traffic on port 80 to get into the virtual machine
      networks:
      - name: red
        pod: {}

Note: Masquerade is only allowed to connect to the pod network.

Note: The network CIDR can be configured in the pod network section using the vmNetworkCIDR attribute.
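For example, a sketch that overrides the masquerade CIDR on the pod network (10.11.12.0/24 is an arbitrary range chosen for illustration):

    kind: VM
    spec:
      domain:
        devices:
          interfaces:
          - name: red
            masquerade: {}
      networks:
      - name: red
        pod:
          vmNetworkCIDR: 10.11.12.0/24 # CIDR used for the VM addresses instead of the default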

masquerade - IPv4 and IPv6 dual-stack support

Masquerade mode can be used in IPv4 and IPv6 dual-stack clusters to provide a VM with IP connectivity over both protocols. This support is currently experimental.

As with the IPv4 masquerade mode, the VM can be contacted using the pod’s IP addresses - in this case one IPv4 and one IPv6 address. Outgoing traffic is also “NAT’ed” to the pod’s respective IP address from the given family.

Unlike in IPv4, the configuration of the IPv6 address and the default route is not automatic; it should be configured via cloud-init, as shown below:

    kind: VM
    spec:
      domain:
        devices:
          disks:
          - disk:
              bus: virtio
            name: cloudinitdisk
          interfaces:
          - name: red
            masquerade: {} # connect using masquerade mode
            ports:
            - port: 80 # allow incoming traffic on port 80 to get into the virtual machine
      networks:
      - name: red
        pod: {}
      volumes:
      - cloudInitNoCloud:
          networkData: |
            version: 2
            ethernets:
              eth0:
                dhcp4: true
                addresses: [ fd10:0:2::2/120 ]
                gateway6: fd10:0:2::1
          userData: |-
            #!/bin/bash
            echo "fedora" |passwd fedora --stdin
        name: cloudinitdisk

Note: The IPv6 address for the VM and default gateway must be the ones shown above.

virtio-net multiqueue

Setting networkInterfaceMultiqueue to true enables the multi-queue functionality, increasing the number of vhost queues, for interfaces configured with the virtio model.

    kind: VM
    spec:
      domain:
        devices:
          networkInterfaceMultiqueue: true

Users of a Virtual Machine with multiple vCPUs may benefit from increased network throughput and performance.

Currently, the number of queues is determined by the number of vCPUs of the VM. This is because multi-queue support optimizes RX interrupt affinity and TX queue selection in order to make a specific queue private to a specific vCPU.

Without enabling the feature, network performance does not scale as the number of vCPUs increases. Guests cannot transmit or receive packets in parallel, as virtio-net has only one TX and RX queue.
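As an illustration, a sketch of a VM that requests four vCPUs together with multiqueue, so that each virtio interface gets four queues:

    kind: VM
    spec:
      domain:
        cpu:
          cores: 4                         # queue count follows the vCPU count
        devices:
          networkInterfaceMultiqueue: true
          interfaces:
          - name: default
            model: virtio                  # multiqueue only applies to virtio interfaces
            masquerade: {}
      networks:
      - name: default
        pod: {}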

NOTE: Although the virtio-net multiqueue feature provides a performance benefit, it has some limitations and therefore should not be unconditionally enabled.

Some known limitations:

  • The guest OS is limited to ~200 MSI vectors. Each NIC queue requires an MSI vector, as does any virtio device or assigned PCI device. Defining an instance with multiple virtio NICs and vCPUs might hit the guest MSI limit.

  • virtio-net multiqueue works well for incoming traffic, but can occasionally degrade performance for outgoing traffic. Specifically, this may occur when sending packets smaller than 1,500 bytes over a TCP stream.

  • Enabling virtio-net multiqueue increases the total network throughput, but it also increases CPU consumption.

  • Enabling virtio-net multiqueue in the host QEMU config does not enable the functionality in the guest OS. The guest OS administrator needs to manually turn it on for each guest NIC that requires this feature, using ethtool.

  • MSI vectors are still consumed (wasted) if multiqueue is enabled in the host but has not been enabled in the guest OS by the administrator.

  • In case the number of vNICs in a guest instance is proportional to the number of vCPUs, enabling the multiqueue feature is less important.

  • Each virtio-net queue consumes 64 KiB of kernel memory for the vhost driver.

NOTE: Virtio-net multiqueue should be enabled in the guest OS manually, using ethtool. For example: ethtool -L <NIC> combined #num_of_queues
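One way to automate that step from inside the guest is via cloud-init. The fragment below is a sketch that assumes the guest NIC is eth0 and that four queues are wanted:

    volumes:
    - cloudInitNoCloud:
        userData: |-
          #!/bin/bash
          # enable 4 combined queues on the guest NIC (adjust NIC name and count)
          ethtool -L eth0 combined 4
      name: cloudinitdisk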

For more information, please refer to KVM/QEMU MultiQueue.

sriov

In sriov mode, virtual machines are directly exposed to an SR-IOV PCI device, usually allocated by the Intel SR-IOV device plugin. The device is passed through into the guest operating system as a host device, using the vfio userspace interface, to maintain high networking performance.

How to expose SR-IOV VFs to KubeVirt

To simplify the procedure, please use the OpenShift SR-IOV operator to deploy and configure the SR-IOV components in your cluster. For how to use the operator, please refer to its documentation.

Note: KubeVirt relies on the VFIO userspace driver to pass PCI devices into the VMI guest. Because of that, when configuring SR-IOV operator policies, make sure you define a pool of VF resources that uses driver: vfio.

Once the operator is deployed, an SriovNetworkNodePolicy must be provisioned, in which the list of SR-IOV devices to expose (with their respective configurations) is defined.

Please refer to the following SriovNetworkNodePolicy for an example:

    apiVersion: sriovnetwork.openshift.io/v1
    kind: SriovNetworkNodePolicy
    metadata:
      name: policy-1
      namespace: sriov-network-operator
    spec:
      deviceType: vfio-pci
      mtu: 9000
      nicSelector:
        pfNames:
        - ens1f0
      nodeSelector:
        sriov: "true"
      numVfs: 8
      priority: 90
      resourceName: sriov-nic

The policy above will configure the SR-IOV device plugin, allowing the PF named ens1f0 to be exposed on the SR-IOV-capable nodes as a resource named sriov-nic.

Start an SR-IOV VM

Once all the SR-IOV components are deployed, you need to indicate how the SR-IOV network should be configured. Refer to the following SriovNetwork for an example:

    apiVersion: sriovnetwork.openshift.io/v1
    kind: SriovNetwork
    metadata:
      name: sriov-net
      namespace: sriov-network-operator
    spec:
      ipam: |
        {}
      networkNamespace: default
      resourceName: sriov-nic
      spoofChk: "off"

Finally, to create a VM that will attach to the aforementioned Network, refer to the following VMI spec:

    ---
    apiVersion: kubevirt.io/v1alpha3
    kind: VirtualMachineInstance
    metadata:
      labels:
        special: vmi-perf
      name: vmi-perf
    spec:
      domain:
        cpu:
          sockets: 2
          cores: 1
          threads: 1
          dedicatedCpuPlacement: true
        resources:
          requests:
            memory: "4Gi"
          limits:
            memory: "4Gi"
        devices:
          disks:
          - disk:
              bus: virtio
            name: containerdisk
          - disk:
              bus: virtio
            name: cloudinitdisk
          interfaces:
          - masquerade: {}
            name: default
          - name: sriov-net
            sriov: {}
          rng: {}
        machine:
          type: ""
      networks:
      - name: default
        pod: {}
      - multus:
          networkName: default/sriov-net
        name: sriov-net
      terminationGracePeriodSeconds: 0
      volumes:
      - containerDisk:
          image: docker.io/kubevirt/fedora-cloud-container-disk-demo:latest
        name: containerdisk
      - cloudInitNoCloud:
          userData: |
            #!/bin/bash
            echo "centos" |passwd centos --stdin
            dhclient eth1
        name: cloudinitdisk

Note: for some NICs (e.g. Mellanox), the kernel module needs to be installed in the guest VM.

Note: Placement on dedicated CPUs can only be achieved if the Kubernetes CPU manager is running on the SR-IOV capable workers. For further details please refer to the dedicated cpu resources documentation.

Macvtap

In macvtap mode, virtual machines are directly exposed to the Kubernetes nodes’ L2 network. This is achieved by ‘extending’ an existing network interface with a virtual device that has its own MAC address.

Macvtap interfaces are feature gated; to enable the feature, follow these instructions to activate the Macvtap feature gate (case sensitive).
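Depending on your KubeVirt version, the feature gate is usually listed in the KubeVirt CR (older releases used the kubevirt-config ConfigMap instead); a sketch of the CR-based form:

    apiVersion: kubevirt.io/v1alpha3
    kind: KubeVirt
    metadata:
      name: kubevirt
      namespace: kubevirt
    spec:
      configuration:
        developerConfiguration:
          featureGates:
          - Macvtap # case sensitive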

Limitations

How to expose a host interface to the macvtap device plugin

To simplify the procedure, please use the Cluster Network Addons Operator to deploy and configure the macvtap components in your cluster.

The aforementioned operator effectively deploys the macvtap-cni CNI / device plugin combo.

There are two alternatives for configuring which host interfaces get exposed to the user, enabling them to create macvtap interfaces on top of them:

  • select the host interfaces: indicates which host interfaces are exposed.

  • expose all interfaces: all interfaces of all hosts are exposed.

Both options are configured via the macvtap-deviceplugin-config ConfigMap, and more information on how to configure it can be found in the macvtap-cni repo.

Below is a minimal example in which the eth0 interface of the Kubernetes nodes is exposed via the master attribute.

    kind: ConfigMap
    apiVersion: v1
    metadata:
      name: macvtap-deviceplugin-config
    data:
      DP_MACVTAP_CONF: |
        [
          {
            "name" : "dataplane",
            "master" : "eth0",
            "mode" : "bridge",
            "capacity" : 50
          }
        ]

This step can be omitted, since the default configuration of the aforementioned ConfigMap is to expose all host interfaces (which is represented by the following configuration):

    kind: ConfigMap
    apiVersion: v1
    metadata:
      name: macvtap-deviceplugin-config
    data:
      DP_MACVTAP_CONF: '[]'

Start a VM with macvtap interfaces

Once the macvtap components are deployed, you need to indicate how the macvtap network should be configured. Refer to the following NetworkAttachmentDefinition for a simple example:

    ---
    kind: NetworkAttachmentDefinition
    apiVersion: k8s.cni.cncf.io/v1
    metadata:
      name: macvtapnetwork
      annotations:
        k8s.v1.cni.cncf.io/resourceName: macvtap.network.kubevirt.io/eth0
    spec:
      config: '{
          "cniVersion": "0.3.1",
          "name": "macvtapnetwork",
          "type": "macvtap",
          "mtu": 1500
        }'

The requested k8s.v1.cni.cncf.io/resourceName annotation must point to an exposed host interface (via the master attribute, on the macvtap-deviceplugin-config ConfigMap).

Finally, to create a VM that will attach to the aforementioned Network, refer to the following VMI spec:

    ---
    apiVersion: kubevirt.io/v1alpha3
    kind: VirtualMachineInstance
    metadata:
      labels:
        special: vmi-host-network
      name: vmi-host-network
    spec:
      domain:
        devices:
          disks:
          - disk:
              bus: virtio
            name: containerdisk
          - disk:
              bus: virtio
            name: cloudinitdisk
          interfaces:
          - macvtap: {}
            name: hostnetwork
          rng: {}
        machine:
          type: ""
        resources:
          requests:
            memory: 1024M
      networks:
      - multus:
          networkName: macvtapnetwork
        name: hostnetwork
      terminationGracePeriodSeconds: 0
      volumes:
      - containerDisk:
          image: docker.io/kubevirt/fedora-cloud-container-disk-demo:devel
        name: containerdisk
      - cloudInitNoCloud:
          userData: |-
            #!/bin/bash
            echo "fedora" |passwd fedora --stdin
        name: cloudinitdisk

The requested multus networkName - i.e. macvtapnetwork - must match the name of the provisioned NetworkAttachmentDefinition.

Note: VMIs with macvtap interfaces can be migrated, but their MAC addresses must be statically set.
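For example, a sketch of a macvtap interface with a statically assigned MAC address (the address below is illustrative):

    interfaces:
    - name: hostnetwork
      macvtap: {}
      macAddress: "02:00:00:00:00:01" # statically set so it is preserved across migration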

Security

MAC spoof check

MAC spoofing refers to the ability to generate traffic with an arbitrary source MAC address. An attacker may use this option to generate attacks on the network.

In order to protect against such scenarios, it is possible to enable mac-spoof-check in CNI plugins that support it.

The primary pod network, which is served by the cluster network provider, is not covered by this documentation. Please refer to the relevant provider to check how to enable spoof checking. The following text refers to secondary networks, served using Multus.

There are two known CNI plugins that support mac-spoof-check:

  • sriov-cni: through the spoofchk parameter.
  • cnv-bridge: through the macspoofchk parameter.

Note: cnv-bridge is provided by CNAO. The bridge-cni is planned to support the macspoofchk option as well.

The configuration is done on the NetworkAttachmentDefinition by the operator, and any interface that refers to it will have this feature enabled.

Below is an example of using the cnv-bridge CNI with macspoofchk enabled:

    apiVersion: "k8s.cni.cncf.io/v1"
    kind: NetworkAttachmentDefinition
    metadata:
      name: br-spoof-check
    spec:
      config: '{
          "cniVersion": "0.3.1",
          "name": "br-spoof-check",
          "type": "cnv-bridge",
          "bridge": "br10",
          "macspoofchk": true
        }'

On the VMI, the network section should point to this NetworkAttachmentDefinition by name:

    networks:
    - name: default
      pod: {}
    - multus:
        networkName: br-spoof-check
      name: br10

Limitations

  • The cnv-bridge CNI supports mac-spoof-check through nftables, therefore the node must support nftables and have the nft binary deployed.