Creating a Windows MachineSet object on Nutanix
You can create a Windows MachineSet object to serve a specific purpose in your OKD cluster on Nutanix. For example, you might create infrastructure Windows machine sets and related machines so that you can move supporting Windows workloads to the new Windows machines.
Prerequisites
You installed the Windows Machine Config Operator (WMCO) using Operator Lifecycle Manager (OLM).
You are using a supported Windows Server as the operating system image.
Machine API overview
The Machine API is a combination of primary resources that are based on the upstream Cluster API project and custom OKD resources.
For OKD 4 clusters, the Machine API performs all node host provisioning management actions after the cluster installation finishes. Because of this system, OKD 4 offers an elastic, dynamic provisioning method on top of public or private cloud infrastructure.
The two primary resources are:
Machines
A fundamental unit that describes the host for a node. A machine has a providerSpec specification, which describes the types of compute nodes that are offered for different cloud platforms. For example, a machine type for a compute node might define a specific machine type and required metadata.
Machine sets
MachineSet resources are groups of compute machines. Compute machine sets are to compute machines as replica sets are to pods. If you need more compute machines or must scale them down, you change the replicas field on the MachineSet resource to meet your compute need.
Control plane machines cannot be managed by compute machine sets. Control plane machine sets provide management capabilities for supported control plane machines that are similar to what compute machine sets provide for compute machines. For more information, see “Managing control plane machines”.
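For example, to grow or shrink an existing compute machine set you can change its replicas field with the oc scale command; <machineset_name> is a placeholder for a machine set in the openshift-machine-api namespace, and the replica count shown is illustrative:
$ oc scale --replicas=2 machineset <machineset_name> -n openshift-machine-api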
The following custom resources add more capabilities to your cluster:
Machine autoscaler
The MachineAutoscaler resource automatically scales compute machines in a cloud. You can set the minimum and maximum scaling boundaries for nodes in a specified compute machine set, and the machine autoscaler maintains that range of nodes.
The MachineAutoscaler object takes effect after a ClusterAutoscaler object exists. Both ClusterAutoscaler and MachineAutoscaler resources are made available by the ClusterAutoscalerOperator object.
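For illustration, a minimal MachineAutoscaler definition might look like the following sketch; the name and the target machine set worker-us-east-1a are placeholders, and the replica bounds are examples only:
apiVersion: autoscaling.openshift.io/v1beta1
kind: MachineAutoscaler
metadata:
  name: worker-us-east-1a
  namespace: openshift-machine-api
spec:
  minReplicas: 1
  maxReplicas: 12
  scaleTargetRef:
    apiVersion: machine.openshift.io/v1beta1
    kind: MachineSet
    name: worker-us-east-1a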
Cluster autoscaler
This resource is based on the upstream cluster autoscaler project. In the OKD implementation, it is integrated with the Machine API by extending the compute machine set API. You can use the cluster autoscaler to manage your cluster in the following ways:
Set cluster-wide scaling limits for resources such as cores, nodes, memory, and GPU
Set the priority so that the cluster prioritizes pods and new nodes are not brought online for less important pods
Set the scaling policy so that you can scale up nodes but not scale them down
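As a sketch of these capabilities, a ClusterAutoscaler resource that sets cluster-wide resource limits, a pod priority cutoff, and scale-down behavior might look like the following; the resource is cluster-scoped and is named default, and the limit and timing values shown are illustrative:
apiVersion: autoscaling.openshift.io/v1
kind: ClusterAutoscaler
metadata:
  name: default
spec:
  podPriorityThreshold: -10
  resourceLimits:
    maxNodesTotal: 24
    cores:
      min: 8
      max: 128
    memory:
      min: 4
      max: 256
  scaleDown:
    enabled: true
    delayAfterAdd: 10m
    delayAfterDelete: 5m
    delayAfterFailure: 30s
    unneededTime: 5m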
Machine health check
The MachineHealthCheck resource detects when a machine is unhealthy, deletes it, and, on supported platforms, creates a new machine.
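For example, a MachineHealthCheck resource that watches the machines of a single compute machine set could look like the following sketch; the name, selector label, timeouts, and maxUnhealthy value are illustrative:
apiVersion: machine.openshift.io/v1beta1
kind: MachineHealthCheck
metadata:
  name: example
  namespace: openshift-machine-api
spec:
  selector:
    matchLabels:
      machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role>
  unhealthyConditions:
  - type: Ready
    status: "False"
    timeout: "300s"
  - type: Ready
    status: Unknown
    timeout: "300s"
  maxUnhealthy: "40%"
  nodeStartupTimeout: 10m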
In OKD version 3.11, you could not roll out a multi-zone architecture easily because the cluster did not manage machine provisioning. Beginning with OKD version 4.1, this process is easier. Each compute machine set is scoped to a single zone, so the installation program distributes compute machine sets across availability zones on your behalf. Because your compute is dynamic, in the event of a zone failure you always have a zone to which you can rebalance your machines. In global Azure regions that do not have multiple availability zones, you can use availability sets to ensure high availability. The autoscaler provides best-effort balancing over the life of a cluster.
Sample YAML for a Windows MachineSet object on Nutanix
This sample YAML defines a Windows MachineSet object running on Nutanix that the Windows Machine Config Operator (WMCO) can react upon.
apiVersion: machine.openshift.io/v1beta1
kind: MachineSet
metadata:
  labels:
    machine.openshift.io/cluster-api-cluster: <infrastructure_id> (1)
  name: <infrastructure_id>-windows-worker-<zone> (2)
  namespace: openshift-machine-api
spec:
  replicas: 1
  selector:
    matchLabels:
      machine.openshift.io/cluster-api-cluster: <infrastructure_id> (1)
      machine.openshift.io/cluster-api-machineset: <infrastructure_id>-windows-worker-<zone> (2)
  template:
    metadata:
      labels:
        machine.openshift.io/cluster-api-cluster: <infrastructure_id> (1)
        machine.openshift.io/cluster-api-machine-role: worker
        machine.openshift.io/cluster-api-machine-type: worker
        machine.openshift.io/cluster-api-machineset: <infrastructure_id>-windows-worker-<zone> (2)
        machine.openshift.io/os-id: Windows (3)
    spec:
      metadata:
        labels:
          node-role.kubernetes.io/worker: "" (4)
      providerSpec:
        value:
          apiVersion: machine.openshift.io/v1
          bootType: "" (5)
          categories: null
          cluster: (6)
            type: uuid
            uuid: <cluster_uuid>
          credentialsSecret:
            name: nutanix-credentials (7)
          image: (8)
            name: <image_id>
            type: name
          kind: NutanixMachineProviderConfig (9)
          memorySize: 16Gi (10)
          project:
            type: ""
          subnets: (11)
          - type: uuid
            uuid: <subnet_uuid>
          systemDiskSize: 120Gi (12)
          userDataSecret:
            name: windows-user-data (13)
          vcpuSockets: 4 (14)
          vcpusPerSocket: 1 (15)
1 Specify the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. You can obtain the infrastructure ID by running the following command:
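$ oc get -o jsonpath='{.status.infrastructureName}{"\n"}' infrastructure cluster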
2 Specify the infrastructure ID, worker label, and zone.
3 Configure the compute machine set as a Windows machine.
4 Configure the Windows node as a compute machine.
5 Specifies the boot type that the compute machines use. For more information about boot types, see Understanding UEFI, Secure Boot, and TPM in the Virtualized Environment. Valid values are Legacy, SecureBoot, or UEFI. The default is Legacy.
6 Specifies a Nutanix Prism Element cluster configuration. In this example, the cluster type is uuid, so there is a uuid stanza.
7 Specifies the secret name for the cluster. Do not change this value.
8 Specifies the image to use. Use an image from an existing default compute machine set for the cluster.
9 Specifies the cloud provider platform type. Do not change this value.
10 Specifies the amount of memory for the machine in Gi.
11 Specifies a subnet configuration. In this example, the subnet type is uuid, so there is a uuid stanza.
12 Specifies the size of the system disk in Gi.
13 Specifies the name of the secret in the user data YAML file that is in the openshift-machine-api namespace. Use the value that the installation program populates in the default compute machine set.
14 Specifies the number of vCPU sockets.
15 Specifies the number of vCPUs per socket.
Creating a compute machine set
In addition to the compute machine sets created by the installation program, you can create your own to dynamically manage the machine compute resources for specific workloads of your choice.
Prerequisites
Deploy an OKD cluster.
Install the OpenShift CLI (oc).
Log in to oc as a user with cluster-admin permission.
Procedure
Create a new YAML file that contains the compute machine set custom resource (CR) sample and is named <file_name>.yaml.
Ensure that you set the <clusterID> and <role> parameter values.
Optional: If you are not sure which value to set for a specific field, you can check an existing compute machine set from your cluster.
To list the compute machine sets in your cluster, run the following command:
$ oc get machinesets -n openshift-machine-api
Example output
NAME DESIRED CURRENT READY AVAILABLE AGE
agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m
agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m
agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m
agl030519-vplxk-worker-us-east-1d 0 0 55m
agl030519-vplxk-worker-us-east-1e 0 0 55m
agl030519-vplxk-worker-us-east-1f 0 0 55m
To view values of a specific compute machine set custom resource (CR), run the following command:
$ oc get machineset <machineset_name> \
-n openshift-machine-api -o yaml
Example output
apiVersion: machine.openshift.io/v1beta1
kind: MachineSet
metadata:
  labels:
    machine.openshift.io/cluster-api-cluster: <infrastructure_id> (1)
  name: <infrastructure_id>-<role> (2)
  namespace: openshift-machine-api
spec:
  replicas: 1
  selector:
    matchLabels:
      machine.openshift.io/cluster-api-cluster: <infrastructure_id>
      machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role>
  template:
    metadata:
      labels:
        machine.openshift.io/cluster-api-cluster: <infrastructure_id>
        machine.openshift.io/cluster-api-machine-role: <role>
        machine.openshift.io/cluster-api-machine-type: <role>
        machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role>
    spec:
      providerSpec: (3)
        ...
1 The cluster infrastructure ID.
2 A default node label. For clusters that have user-provisioned infrastructure, a compute machine set can only create worker and infra type machines.
3 The values in the <providerSpec> section of the compute machine set CR are platform-specific. For more information about <providerSpec> parameters in the CR, see the sample compute machine set CR configuration for your provider.
Create a MachineSet CR by running the following command:
$ oc create -f <file_name>.yaml
Verification
View the list of compute machine sets by running the following command:
$ oc get machineset -n openshift-machine-api
Example output
NAME DESIRED CURRENT READY AVAILABLE AGE
agl030519-vplxk-infra-us-east-1a 1 1 1 1 11m
agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m
agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m
agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m
agl030519-vplxk-worker-us-east-1d 0 0 55m
agl030519-vplxk-worker-us-east-1e 0 0 55m
agl030519-vplxk-worker-us-east-1f 0 0 55m
When the new compute machine set is available, the DESIRED and CURRENT values match. If the compute machine set is not available, wait a few minutes and run the command again.
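You can also confirm that the machines backing the new compute machine set are being provisioned by listing the Machine resources in the openshift-machine-api namespace:
$ oc get machine -n openshift-machine-api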