Managed DHCP

Available as of v1.3.0

Beginning with v1.3.0, you can configure IP pool information and serve IP addresses to VMs running on Harvester clusters using the embedded Managed DHCP feature. This feature, which is an alternative to the standalone DHCP server, leverages the vm-dhcp-controller add-on to simplify guest cluster deployment.

note

Harvester uses the planned infrastructure network, so you must ensure that network connectivity is available and plan the IP pools in advance.

Install and Enable the vm-dhcp-controller Add-On

The vm-dhcp-controller add-on is not packaged in the Harvester ISO, but you can download it from the experimental-addons repository. You can install the add-on by running the following command:

  kubectl apply -f https://raw.githubusercontent.com/harvester/experimental-addons/main/harvester-vm-dhcp-controller/harvester-vm-dhcp-controller.yaml

After installation, enable the add-on either on the Dashboard screen of the Harvester UI or by using the command-line tool kubectl.

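If you prefer kubectl, you can enable the add-on by patching the corresponding Addon object. The following is a minimal sketch; it assumes the manifest above created an Addon object named harvester-vm-dhcp-controller in the harvester-system namespace, so verify the actual name and namespace on your cluster first.

  # Enable the add-on by setting spec.enabled to true on the Addon object
  kubectl -n harvester-system patch addons.harvesterhci.io harvester-vm-dhcp-controller \
    --type merge -p '{"spec": {"enabled": true}}'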

Usage

  1. On the Dashboard screen of the Harvester UI, create a VM Network.


  2. Create an IPPool object using the command-line tool kubectl.

    cat <<EOF | kubectl apply -f -
    apiVersion: network.harvesterhci.io/v1alpha1
    kind: IPPool
    metadata:
      name: net-48
      namespace: default
    spec:
      ipv4Config:
        serverIP: 192.168.48.77
        cidr: 192.168.48.0/24
        pool:
          start: 192.168.48.81
          end: 192.168.48.90
          exclude:
          - 192.168.48.81
          - 192.168.48.90
        router: 192.168.48.1
        dns:
        - 1.1.1.1
        leaseTime: 300
      networkName: default/net-48
    EOF
  3. Create a VM that is connected to the VM Network you previously created.


  4. Wait for the corresponding VirtualMachineNetworkConfig object to be created and for the MAC address of the VM’s network interface to be applied to the object.
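
    Instead of polling manually, you can block until the controller reports the Allocated condition, which is visible in the object's status (as shown in the next step). A minimal sketch, assuming the VM is named test-vm in the default namespace and the object has already been created:

      # Wait for the vm-dhcp-controller to mark the network config as Allocated
      kubectl -n default wait virtualmachinenetworkconfigs.network test-vm \
        --for=condition=Allocated --timeout=120s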

  5. Check the .status field of the IPPool and VirtualMachineNetworkConfig objects, and verify that the IP address is allocated and assigned to the MAC address.

    $ kubectl get ippools.network net-48 -o yaml
    apiVersion: network.harvesterhci.io/v1alpha1
    kind: IPPool
    metadata:
      creationTimestamp: "2024-02-15T13:17:21Z"
      finalizers:
      - wrangler.cattle.io/vm-dhcp-ippool-controller
      generation: 1
      name: net-48
      namespace: default
      resourceVersion: "826813"
      uid: 5efd44b7-3796-4f02-947e-3949cb4c8e3d
    spec:
      ipv4Config:
        cidr: 192.168.48.0/24
        dns:
        - 1.1.1.1
        leaseTime: 300
        pool:
          end: 192.168.48.90
          exclude:
          - 192.168.48.81
          - 192.168.48.90
          start: 192.168.48.81
        router: 192.168.48.1
        serverIP: 192.168.48.77
      networkName: default/net-48
    status:
      agentPodRef:
        name: default-net-48-agent
        namespace: harvester-system
      conditions:
      - lastUpdateTime: "2024-02-15T13:17:21Z"
        status: "True"
        type: Registered
      - lastUpdateTime: "2024-02-15T13:17:21Z"
        status: "True"
        type: CacheReady
      - lastUpdateTime: "2024-02-15T13:17:30Z"
        status: "True"
        type: AgentReady
      - lastUpdateTime: "2024-02-15T13:17:21Z"
        status: "False"
        type: Stopped
      ipv4:
        allocated:
          192.168.48.81: EXCLUDED
          192.168.48.84: ca:70:82:e6:84:6e
          192.168.48.90: EXCLUDED
        available: 7
        used: 1
      lastUpdate: "2024-02-15T13:48:20Z"
    $ kubectl get virtualmachinenetworkconfigs.network test-vm -o yaml
    apiVersion: network.harvesterhci.io/v1alpha1
    kind: VirtualMachineNetworkConfig
    metadata:
      creationTimestamp: "2024-02-15T13:48:02Z"
      finalizers:
      - wrangler.cattle.io/vm-dhcp-vmnetcfg-controller
      generation: 2
      labels:
        harvesterhci.io/vmName: test-vm
      name: test-vm
      namespace: default
      ownerReferences:
      - apiVersion: kubevirt.io/v1
        kind: VirtualMachine
        name: test-vm
        uid: a9f8ce12-fd6c-4bd2-b266-245d8e77dae3
      resourceVersion: "826809"
      uid: 556440c7-eeeb-4daf-9c98-60ab39688ba8
    spec:
      networkConfig:
      - macAddress: ca:70:82:e6:84:6e
        networkName: default/net-48
      vmName: test-vm
    status:
      conditions:
      - lastUpdateTime: "2024-02-15T13:48:20Z"
        status: "True"
        type: Allocated
      - lastUpdateTime: "2024-02-15T13:48:02Z"
        status: "False"
        type: Disabled
      networkConfig:
      - allocatedIPAddress: 192.168.48.84
        macAddress: ca:70:82:e6:84:6e
        networkName: default/net-48
        state: Allocated
  6. Check the VM’s serial console and verify that the IP address is correctly configured on the network interface (via DHCP).

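    In a Linux guest, one way to confirm the lease from the serial console is to inspect the interface addresses and routes (exact commands may vary by guest image):

      # Show the IPv4 address assigned via DHCP and the default route
      ip -4 addr show
      ip route show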

vm-dhcp-controller Pods and CRDs

When the vm-dhcp-controller add-on is enabled, the following types of pods run:

  • Controller: Reconciles CRD objects to determine allocation and mapping between IP and MAC addresses. The results are persisted in the IPPool objects.
  • Webhook: Validates and mutates CRD objects when receiving requests (create, update, and delete).
  • Agent: Serves DHCP requests and ensures that the internal DHCP lease store is up to date. This is accomplished by syncing the specific IPPool object that the agent is associated with. Agents are spawned on-demand whenever you create new IPPool objects.
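
These pods run in the harvester-system namespace. As a sketch, you can list them with a name filter (pod names depend on the deployment), and look up the agent pod for a pool through the agentPodRef field shown in the IPPool status above:

  # List vm-dhcp-controller pods (names are deployment-dependent)
  kubectl -n harvester-system get pods | grep -i vm-dhcp

  # Inspect the agent pod referenced by the IPPool status (agentPodRef)
  kubectl -n harvester-system get pod default-net-48-agent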

The vm-dhcp-controller introduces the following new CRDs.

  • IPPool (ippl)
  • VirtualMachineNetworkConfig (vmnetcfg)
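
You can confirm that the CRDs are registered and list objects using either the full resource names or the short names. For example:

  # List the resources registered under the network.harvesterhci.io API group
  kubectl api-resources --api-group=network.harvesterhci.io

  # The short names also work
  kubectl get ippl
  kubectl get vmnetcfg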

IPPool CRD

The IPPool CRD allows you to define IP pool information. You must map each IPPool object to a specific NetworkAttachmentDefinition (NAD) object, which must be created beforehand.

note

Multiple CRDs named “IPPool” are used in the Harvester ecosystem, including a similarly-named CRD in the loadbalancer.harvesterhci.io API group. To avoid issues, ensure that you are working with the IPPool CRD in the network.harvesterhci.io API group. For more information about IPPool CRD operations in relation to load balancers, see IP Pool.
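
When in doubt, qualify the resource name with the API group so that kubectl resolves the intended CRD. For example:

  # IP pools used by Managed DHCP (network.harvesterhci.io)
  kubectl get ippools.network.harvesterhci.io

  # IP pools used by load balancers (loadbalancer.harvesterhci.io)
  kubectl get ippools.loadbalancer.harvesterhci.io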

Example:

  apiVersion: network.harvesterhci.io/v1alpha1
  kind: IPPool
  metadata:
    name: example
    namespace: default
  spec:
    ipv4Config:
      serverIP: 192.168.100.2   # The DHCP server's IP address
      cidr: 192.168.100.0/24    # The subnet information, must be in the CIDR form
      pool:
        start: 192.168.100.101
        end: 192.168.100.200
        exclude:
        - 192.168.100.151
        - 192.168.100.187
      router: 192.168.100.1     # The default gateway, if any
      dns:
      - 1.1.1.1
      domainName: example.com
      domainSearch:
      - example.com
      ntp:
      - pool.ntp.org
      leaseTime: 300
    networkName: default/example  # The namespaced name of the NAD object

After the IPPool object is created, the controller reconciliation process initializes the IP allocation module and spawns the agent pod for the network.

  $ kubectl get ippools.network example
  NAME      NETWORK           AVAILABLE   USED   REGISTERED   CACHEREADY   AGENTREADY
  example   default/example   98          0      True         True         True

VirtualMachineNetworkConfig CRD

The VirtualMachineNetworkConfig CRD represents a request for IP address issuance. Each object records the MAC addresses of a VM's network interfaces and, through its networkName fields, references the NetworkAttachmentDefinition (NAD) objects that back the attached networks.

A sample VirtualMachineNetworkConfig object looks like the following:

  apiVersion: network.harvesterhci.io/v1alpha1
  kind: VirtualMachineNetworkConfig
  metadata:
    name: test-vm
    namespace: default
  spec:
    networkConfig:
    - macAddress: 22:37:37:82:93:7d
      networkName: default/example
    vmName: test-vm

After the VirtualMachineNetworkConfig object is created, the controller attempts to retrieve a list of unused IP addresses from the IP allocation module for each recorded MAC address. The IP-MAC mapping is then updated in the VirtualMachineNetworkConfig object and the corresponding IPPool objects.
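
After allocation, you can read the assigned address directly from the object's status. A minimal sketch, assuming the object is named test-vm in the default namespace:

  # Print the IP address(es) allocated for the VM's interfaces
  kubectl -n default get virtualmachinenetworkconfigs.network test-vm \
    -o jsonpath='{.status.networkConfig[*].allocatedIPAddress}'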

note

Manual creation of VirtualMachineNetworkConfig objects for VMs is unnecessary in most cases because vm-dhcp-controller handles that task during the VirtualMachine reconciliation process. Automatically created VirtualMachineNetworkConfig objects are deleted when the corresponding VirtualMachine objects are removed.