Configuring PXE booting for virtual machines
PXE booting, or network booting, is available in OKD Virtualization. Network booting allows a computer to boot and load an operating system or other program without requiring a locally attached storage device. For example, you can use it to choose your desired OS image from a PXE server when deploying a new host.
Prerequisites
A Linux bridge must be connected.
The PXE server must be connected to the same VLAN as the bridge.
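To check that the bridge exists before you begin, you can inspect the node network state. This is a sketch, assuming the Kubernetes NMState Operator is installed and a node named node01 (a hypothetical name):
$ oc get nns node01 -o yaml
Look for an interface named br1 with type: linux-bridge and state: up in the status.currentState output.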
OKD Virtualization networking glossary
OKD Virtualization provides advanced networking functionality by using custom resources and plug-ins.
The following terms are used throughout OKD Virtualization documentation:
Container Network Interface (CNI)
a Cloud Native Computing Foundation project, focused on container network connectivity. OKD Virtualization uses CNI plug-ins to build upon the basic Kubernetes networking functionality.
Multus
a “meta” CNI plug-in that allows multiple CNIs to exist so that a pod or virtual machine can use the interfaces it needs.
Custom resource definition (CRD)
a Kubernetes API resource that allows you to define custom resources, or an object defined by using the CRD API resource.
Network attachment definition (NAD)
a CRD introduced by the Multus project that allows you to attach pods, virtual machines, and virtual machine instances to one or more networks.
Node network configuration policy (NNCP)
a description of the requested network configuration on nodes. You update the node network configuration, including adding and removing interfaces, by applying a NodeNetworkConfigurationPolicy manifest to the cluster. For a minimal example, see the sketch after this glossary.
Preboot eXecution Environment (PXE)
an interface that enables an administrator to boot a client machine from a server over the network. Network booting allows you to remotely load operating systems and other software onto the client.
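As a concrete illustration of the NNCP term above, here is a minimal sketch of a NodeNetworkConfigurationPolicy manifest that creates a Linux bridge like the one this section requires. It assumes the Kubernetes NMState Operator is installed; the policy name br1-eth1-policy and the port eth1 are hypothetical and depend on your node hardware:
apiVersion: nmstate.io/v1
kind: NodeNetworkConfigurationPolicy
metadata:
  name: br1-eth1-policy   # hypothetical policy name
spec:
  desiredState:
    interfaces:
    - name: br1           # bridge name referenced by the NAD later in this section
      type: linux-bridge
      state: up
      bridge:
        options:
          stp:
            enabled: false
        port:
        - name: eth1      # assumption: the node NIC that reaches the PXE VLAN
Apply the manifest with oc apply -f and monitor its rollout with oc get nncp.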
PXE booting with a specified MAC address
As an administrator, you can boot a client over the network by first creating a NetworkAttachmentDefinition object for your PXE network. Then, reference the network attachment definition in your virtual machine instance configuration file before you start the virtual machine instance. You can also specify a MAC address in the virtual machine instance configuration file, if required by the PXE server.
Prerequisites
A Linux bridge must be connected.
The PXE server must be connected to the same VLAN as the bridge.
Procedure
Configure a PXE network on the cluster:
Create the network attachment definition file for PXE network pxe-net-conf:
apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
  name: pxe-net-conf
spec:
  config: '{
      "cniVersion": "0.3.1",
      "name": "pxe-net-conf",
      "plugins": [
        {
          "type": "cnv-bridge",
          "bridge": "br1",
          "vlan": 1 (1)
        },
        {
          "type": "cnv-tuning" (2)
        }
      ]
    }'
(1) Optional: The VLAN tag.
(2) The cnv-tuning plug-in provides support for custom MAC addresses.
The virtual machine instance will be attached to the bridge br1 through an access port with the requested VLAN.
Create the network attachment definition by using the file you created in the previous step:
$ oc create -f pxe-net-conf.yaml
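Optionally, confirm that the object exists before moving on; this verification step is a sketch, not part of the original procedure:
$ oc get network-attachment-definitions pxe-net-conf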
Edit the virtual machine instance configuration file to include the details of the interface and network.
Specify the network and MAC address, if required by the PXE server. If the MAC address is not specified, a value is assigned automatically.
Ensure that bootOrder is set to 1 so that the interface boots first. In this example, the interface is connected to a network called <pxe-net>:
interfaces:
- masquerade: {}
  name: default
- bridge: {}
  name: pxe-net
  macAddress: de:00:00:00:00:de
  bootOrder: 1
Boot order is global for interfaces and disks.
Assign a boot device number to the disk to ensure proper booting after operating system provisioning.
Set the disk bootOrder value to 2:
devices:
  disks:
  - disk:
      bus: virtio
    name: containerdisk
    bootOrder: 2
Specify that the network is connected to the previously created network attachment definition. In this scenario, <pxe-net> is connected to the network attachment definition called <pxe-net-conf>:
networks:
- name: default
  pod: {}
- name: pxe-net
  multus:
    networkName: pxe-net-conf
Create the virtual machine instance:
$ oc create -f vmi-pxe-boot.yaml
Example output
virtualmachineinstance.kubevirt.io "vmi-pxe-boot" created
Wait for the virtual machine instance to run:
$ oc get vmi vmi-pxe-boot -o yaml | grep -i phase
phase: Running
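As an alternative to polling manually, you can block until the phase becomes Running. This sketch assumes a recent oc client, because jsonpath support in oc wait is not available in older versions:
$ oc wait vmi/vmi-pxe-boot --for=jsonpath='{.status.phase}'=Running --timeout=5m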
View the virtual machine instance using VNC:
$ virtctl vnc vmi-pxe-boot
Watch the boot screen to verify that the PXE boot is successful.
Log in to the virtual machine instance:
$ virtctl console vmi-pxe-boot
Verify the interfaces and MAC address on the virtual machine and that the interface connected to the bridge has the specified MAC address. In this case, we used eth1 for the PXE boot, without an IP address. The other interface, eth0, got an IP address from OKD.
$ ip addr
Example output
...
3: eth1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
link/ether de:00:00:00:00:de brd ff:ff:ff:ff:ff:ff
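To cross-check the MAC address against the manifest from outside the guest, a jsonpath query such as the following can help; this is a sketch, not part of the original procedure:
$ oc get vmi vmi-pxe-boot -o jsonpath='{.spec.domain.devices.interfaces[?(@.name=="pxe-net")].macAddress}'
The output should match the link/ether value shown above.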
Template: Virtual machine configuration file for PXE booting
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  creationTimestamp: null
  labels:
    special: vm-pxe-boot
  name: vm-pxe-boot
spec:
  template:
    metadata:
      labels:
        special: vm-pxe-boot
    spec:
      domain:
        devices:
          disks:
          - disk:
              bus: virtio
            name: containerdisk
            bootOrder: 2
          - disk:
              bus: virtio
            name: cloudinitdisk
          interfaces:
          - masquerade: {}
            name: default
          - bridge: {}
            name: pxe-net
            macAddress: de:00:00:00:00:de
            bootOrder: 1
        machine:
          type: ""
        resources:
          requests:
            memory: 1024M
      networks:
      - name: default
        pod: {}
      - multus:
          networkName: pxe-net-conf
        name: pxe-net
      terminationGracePeriodSeconds: 180
      volumes:
      - name: containerdisk
        containerDisk:
          image: kubevirt/fedora-cloud-container-disk-demo
      - cloudInitNoCloud:
          userData: |
            #!/bin/bash
            echo "fedora" | passwd fedora --stdin
        name: cloudinitdisk
status: {}
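To use this template, save it to a file such as vm-pxe-boot.yaml (a hypothetical name), create the virtual machine, and then start it:
$ oc create -f vm-pxe-boot.yaml
$ virtctl start vm-pxe-boot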