Network Binding Plugins

[v1.1.0, Alpha feature]

A modular plugin that integrates with Kubevirt to implement a network binding.

Overview

Network Connectivity

In order for a VM to have access to external network(s), several layers need to be defined and configured, depending on the required connectivity characteristics.

These layers include:

  • Host connectivity: Network provider.
  • Host to Pod connectivity: CNI.
  • Pod to domain connectivity: Network Binding.

This guide focuses on the Network Binding portion.

Network Binding

The network binding defines how the VM network interface is wired from the VM pod, through the domain, to the guest.

The network binding includes:

  • Domain vNIC configuration.
  • Pod network configuration (optional).
  • Services to deliver network details to the guest (optional), e.g. a DHCP server that passes the IP configuration to the guest.

Plugins

Network bindings have been part of the Kubevirt core API and codebase. With the growing number of network bindings and the frequent requests to tweak and change the existing ones, a decision has been made to create a network binding plugin infrastructure.

The plugin infrastructure provides means to compose a network binding plugin and integrate it into Kubevirt in a modular manner.

Kubevirt provides several network binding plugins as references. The following plugins are available:

  • slirp [v1.1.0]
  • passt [v1.1.0]
  • macvtap [v1.1.1]

Definition & Flow

A network binding plugin configuration consists of the following steps:

  • Deploy the network binding's optional components:
      • Binding CNI plugin.
      • Binding NetworkAttachmentDefinition manifest.
      • Access to the sidecar image.
  • Enable the NetworkBindingPlugins Feature Gate (FG).
  • Register the network binding.
  • Assign the binding to a VM network interface.

Deployment

Depending on the plugin, some components need to be deployed in the cluster. Not all network binding plugins require all of these components; therefore, the following steps are optional.

  • Binding CNI plugin: When the pod network stack needs to be changed (and a core domain attachment is not a fit), a custom CNI plugin is composed to serve the network binding plugin.

This binary needs to be deployed on each node of the cluster, like any other CNI plugin.

The binary can be built from source or consumed from an existing artifact.

Note: The location of the CNI plugin binaries depends on the platform used and its configuration. A frequently used path for such binaries is /opt/cni/bin/.
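
As an illustration, a minimal sketch of building the binding CNI binary and placing it on each node; the build path, node names, and distribution method are assumptions (in practice a DaemonSet or the node provisioning tooling is often used):

    # Build the binding CNI binary from its sources (path and output name are hypothetical).
    # The file name must match the "type" field used later in the NetworkAttachmentDefinition,
    # e.g. cni-passt-binding-plugin.
    go build -o cni-passt-binding-plugin ./cmd/cni-passt-binding-plugin

    # Copy the binary into the CNI plugin directory on every node
    # (adjust the path to your platform's configuration).
    for node in node01 node02 node03; do
        scp cni-passt-binding-plugin "${node}:/opt/cni/bin/"
    done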

  • Binding NetworkAttachmentDefinition: It references the binding CNI plugin, with optional configuration settings. The manifest needs to be deployed on the cluster, in a namespace accessible to the VM and its pod.

Example:

    apiVersion: "k8s.cni.cncf.io/v1"
    kind: NetworkAttachmentDefinition
    metadata:
      name: netbindingpasst
    spec:
      config: '{
        "cniVersion": "1.0.0",
        "name": "netbindingpasst",
        "plugins": [
          {
            "type": "cni-passt-binding-plugin"
          }
        ]
      }'

Note: It is possible to deploy the NetworkAttachmentDefinition in the default namespace, where all other namespaces can access it. Nevertheless, for security reasons it is recommended to define the NetworkAttachmentDefinition in the same namespace the VM resides in.
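
For example, assuming the manifest above is saved as netbindingpasst-nad.yaml (a hypothetical file name), it can be applied to the VM's namespace:

    # Create the NetworkAttachmentDefinition in the namespace where the VM resides.
    kubectl apply -n <vm-namespace> -f netbindingpasst-nad.yaml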

  • Multus: In order for the network binding CNI plugin and the NetworkAttachmentDefinition to operate, Multus needs to be deployed on the cluster. For more information, check the Quickstart Installation Guide.

  • Sidecar image: When a core domain attachment is not a fit, a sidecar is used to set up the domain vNIC configuration. In more complex scenarios, the sidecar also runs services such as a DHCP server to deliver IP information to the guest.

The sidecar image is built and usually pushed to an image registry for consumption. Therefore, the cluster needs to have access to the image.

The image can be built from source and pushed to an accessible registry, or consumed from a registry that already contains it.
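
A rough sketch of building and pushing such an image; the registry, repository, and tag below are placeholders, not Kubevirt artifacts:

    # Build the sidecar image and push it to a registry the cluster can pull from
    # (all names below are hypothetical).
    docker build -t registry.example.com/acme/net-binding-sidecar:v0.1.0 .
    docker push registry.example.com/acme/net-binding-sidecar:v0.1.0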

  • Feature Gate: The network binding plugin infrastructure is currently (v1.1.0) in Alpha stage, protected by a feature gate (FG) named NetworkBindingPlugins.

It is therefore necessary to set the FG in the Kubevirt CR.

Example (valid when the FG subtree is already defined):

    kubectl patch kubevirts -n kubevirt kubevirt --type=json -p='[{"op": "add", "path": "/spec/configuration/developerConfiguration/featureGates/-", "value": "NetworkBindingPlugins"}]'
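
If the featureGates subtree is not defined yet, a merge patch such as the following sketch can create it; note that it replaces any existing featureGates list, so include every gate that should remain enabled:

    kubectl patch kubevirts -n kubevirt kubevirt --type=merge -p='{"spec": {"configuration": {"developerConfiguration": {"featureGates": ["NetworkBindingPlugins"]}}}}'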

Register

In order to use a network binding plugin, the cluster admin needs to register the binding. Registration is done by adding the binding name, with all its parameters, to the Kubevirt CR.

The following (optional) parameters are currently supported (as of v1.1.1):

  • networkAttachmentDefinition: Specifies, in the <namespace>/<name> format, the NetworkAttachmentDefinition that defines the CNI plugin and the configuration the binding plugin uses. Used when the binding plugin needs to change the pod network namespace.
  • sidecarImage: Specifies a container image in a registry. Used when the binding plugin needs to modify the domain vNIC configuration or when a service needs to be executed (e.g. a DHCP server).
  • domainAttachmentType: Specifies the name of a core domain attachment type, a possible alternative to a sidecar for configuring the domain vNIC. At the moment (v1.1.1) only a single type is supported: tap.

When both domainAttachmentType and sidecarImage are specified, the domain is first configured according to the domainAttachmentType, and the sidecarImage may then modify it.

Note: In some deployments the Kubevirt CR is controlled by an external controller (e.g. HCO). In such cases, make sure to configure the wrapper operator/controller so that the changes are preserved.

Example (the passt binding):

    kubectl patch kubevirts -n kubevirt kubevirt --type=json -p='[{"op": "add", "path": "/spec/configuration/network", "value": {
        "binding": {
            "passt": {
                "networkAttachmentDefinition": "default/netbindingpasst",
                "sidecarImage": "quay.io/kubevirt/network-passt-binding:20231205_29a16d5c9"
            }
        }
    }}]'
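
For comparison, a binding that relies only on a core domain attachment (for example macvtap, which uses the tap type) can be registered without a sidecarImage or networkAttachmentDefinition. A sketch, assuming nothing else is configured under spec/configuration/network (the patch replaces that whole subtree):

    kubectl patch kubevirts -n kubevirt kubevirt --type=json -p='[{"op": "add", "path": "/spec/configuration/network", "value": {
        "binding": {
            "macvtap": {
                "domainAttachmentType": "tap"
            }
        }
    }}]'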

VM Network Interface

When configuring the VM/VMI network interface, the binding plugin name can be specified. If it exists in the Kubevirt CR, it will be used to set up the network interface.

Example (passt binding):

    ---
    apiVersion: kubevirt.io/v1
    kind: VirtualMachine
    metadata:
      labels:
        kubevirt.io/vm: vm-net-binding-passt
      name: vm-net-binding-passt
    spec:
      running: true
      template:
        metadata:
          labels:
            kubevirt.io/vm: vm-net-binding-passt
        spec:
          domain:
            devices:
              disks:
              - disk:
                  bus: virtio
                name: containerdisk
              - disk:
                  bus: virtio
                name: cloudinitdisk
              interfaces:
              - name: passtnet
                binding:
                  name: passt
              rng: {}
            resources:
              requests:
                memory: 1024M
          networks:
          - name: passtnet
            pod: {}
          terminationGracePeriodSeconds: 0
          volumes:
          - containerDisk:
              image: quay.io/kubevirt/fedora-with-test-tooling-container-disk:v1.1.0
            name: containerdisk
          - cloudInitNoCloud:
              networkData: |
                version: 2
                ethernets:
                  eth0:
                    dhcp4: true
            name: cloudinitdisk
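
With the binding registered and the manifest above saved as, say, vm-net-binding-passt.yaml (a hypothetical file name), the VM can be created and inspected:

    # Create the VM; since spec.running is true, a VMI is started immediately.
    kubectl apply -f vm-net-binding-passt.yaml

    # Attach to the serial console to verify the guest received its IP configuration.
    virtctl console vm-net-binding-passt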

Available network binding plugins