- Creating a compute machine set on AWS
- Machine API overview
- Sample YAML for a compute machine set custom resource on AWS
- Creating a compute machine set
- Machine set options for the Amazon EC2 Instance Metadata Service
- Machine sets that deploy machines as Dedicated Instances
- Machine sets that deploy machines as Spot Instances
- Adding a GPU node to an existing OKD cluster
- Deploying the Node Feature Discovery Operator
Creating a compute machine set on AWS
You can create a different compute machine set to serve a specific purpose in your OKD cluster on Amazon Web Services (AWS). For example, you might create infrastructure machine sets and related machines so that you can move supporting workloads to the new machines.
This process is not applicable for clusters with manually provisioned machines. You can use the advanced machine management and scaling capabilities only in clusters where the Machine API is operational.
Machine API overview
The Machine API is a combination of primary resources that are based on the upstream Cluster API project and custom OKD resources.
For OKD 4.12 clusters, the Machine API performs all node host provisioning management actions after the cluster installation finishes. Because of this system, OKD 4.12 offers an elastic, dynamic provisioning method on top of public or private cloud infrastructure.
The two primary resources are:
Machines
A fundamental unit that describes the host for a node. A machine has a providerSpec specification, which describes the types of compute nodes that are offered for different cloud platforms. For example, a machine type for a worker node on Amazon Web Services (AWS) might define a specific machine type and required metadata.
Machine sets
MachineSet resources are groups of compute machines. Compute machine sets are to compute machines as replica sets are to pods. If you need more compute machines or must scale them down, you change the replicas field on the MachineSet resource to meet your compute need.
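For example, assuming a machine set named <machineset_name>, you can change the replicas field directly from the CLI:
$ oc scale machineset <machineset_name> --replicas=2 -n openshift-machine-api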
Control plane machines cannot be managed by compute machine sets. Control plane machine sets provide management capabilities for supported control plane machines that are similar to what compute machine sets provide for compute machines. For more information, see “Managing control plane machines”.
The following custom resources add more capabilities to your cluster:
Machine autoscaler
The MachineAutoscaler resource automatically scales compute machines in a cloud. You can set the minimum and maximum scaling boundaries for nodes in a specified compute machine set, and the machine autoscaler maintains that range of nodes.
The MachineAutoscaler object takes effect after a ClusterAutoscaler object exists. Both ClusterAutoscaler and MachineAutoscaler resources are made available by the ClusterAutoscalerOperator object.
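As an illustrative sketch (the machine set name and replica bounds below are placeholder values, not taken from this document), a MachineAutoscaler that keeps a machine set between 1 and 12 machines looks like this:
apiVersion: autoscaling.openshift.io/v1beta1
kind: MachineAutoscaler
metadata:
  name: worker-us-east-1a
  namespace: openshift-machine-api
spec:
  minReplicas: 1
  maxReplicas: 12
  scaleTargetRef:
    apiVersion: machine.openshift.io/v1beta1
    kind: MachineSet
    name: worker-us-east-1a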
Cluster autoscaler
This resource is based on the upstream cluster autoscaler project. In the OKD implementation, it is integrated with the Machine API by extending the compute machine set API. You can use the cluster autoscaler to manage your cluster in the following ways; a minimal example follows the list:
Set cluster-wide scaling limits for resources such as cores, nodes, memory, and GPU
Set the priority so that the cluster prioritizes pods and new nodes are not brought online for less important pods
Set the scaling policy so that you can scale up nodes but not scale them down
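For example, a minimal ClusterAutoscaler sketch that caps the total node count and enables scale-down; the limits shown are placeholder values you would tune for your cluster:
apiVersion: autoscaling.openshift.io/v1
kind: ClusterAutoscaler
metadata:
  name: default
spec:
  resourceLimits:
    maxNodesTotal: 24
  scaleDown:
    enabled: true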
Machine health check
The MachineHealthCheck resource detects when a machine is unhealthy, deletes it, and, on supported platforms, makes a new machine.
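A hedged sketch of a MachineHealthCheck that remediates worker machines whose node reports NotReady for five minutes; the selector, timeout, and maxUnhealthy values are illustrative:
apiVersion: machine.openshift.io/v1beta1
kind: MachineHealthCheck
metadata:
  name: example
  namespace: openshift-machine-api
spec:
  selector:
    matchLabels:
      machine.openshift.io/cluster-api-machine-role: worker
  unhealthyConditions:
  - type: Ready
    status: "False"
    timeout: 300s
  maxUnhealthy: 40%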
In OKD version 3.11, you could not roll out a multi-zone architecture easily because the cluster did not manage machine provisioning. Beginning with OKD version 4.1, this process is easier. Because each compute machine set is scoped to a single zone, the installation program distributes compute machine sets across availability zones on your behalf. Because your compute is dynamic, if a zone fails, you always have another zone into which you can rebalance your machines. In global Azure regions that do not have multiple availability zones, you can use availability sets to ensure high availability. The autoscaler provides best-effort balancing over the life of a cluster.
Sample YAML for a compute machine set custom resource on AWS
This sample YAML defines a compute machine set that runs in the us-east-1a Amazon Web Services (AWS) zone and creates nodes that are labeled with node-role.kubernetes.io/<role>: "".
In this sample, <infrastructure_id> is the infrastructure ID label that is based on the cluster ID that you set when you provisioned the cluster, and <role> is the node label to add.
apiVersion: machine.openshift.io/v1beta1
kind: MachineSet
metadata:
  labels:
    machine.openshift.io/cluster-api-cluster: <infrastructure_id> (1)
  name: <infrastructure_id>-<role>-<zone> (2)
  namespace: openshift-machine-api
spec:
  replicas: 1
  selector:
    matchLabels:
      machine.openshift.io/cluster-api-cluster: <infrastructure_id> (1)
      machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role>-<zone> (2)
  template:
    metadata:
      labels:
        machine.openshift.io/cluster-api-cluster: <infrastructure_id> (1)
        machine.openshift.io/cluster-api-machine-role: <role> (3)
        machine.openshift.io/cluster-api-machine-type: <role> (3)
        machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role>-<zone> (2)
    spec:
      metadata:
        labels:
          node-role.kubernetes.io/<role>: "" (3)
      providerSpec:
        value:
          ami:
            id: ami-046fe691f52a953f9 (4)
          apiVersion: awsproviderconfig.openshift.io/v1beta1
          blockDevices:
            - ebs:
                iops: 0
                volumeSize: 120
                volumeType: gp2
          credentialsSecret:
            name: aws-cloud-credentials
          deviceIndex: 0
          iamInstanceProfile:
            id: <infrastructure_id>-worker-profile (1)
          instanceType: m6i.large
          kind: AWSMachineProviderConfig
          placement:
            availabilityZone: <zone> (6)
            region: <region> (7)
          securityGroups:
            - filters:
                - name: tag:Name
                  values:
                    - <infrastructure_id>-worker-sg (1)
          subnet:
            filters:
              - name: tag:Name
                values:
                  - <infrastructure_id>-private-<zone> (8)
          tags:
            - name: kubernetes.io/cluster/<infrastructure_id> (1)
              value: owned
            - name: <custom_tag_name> (5)
              value: <custom_tag_value> (5)
          userDataSecret:
            name: worker-user-data
1 Specify the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. If you have the OpenShift CLI installed, you can obtain the infrastructure ID by running the following command:
$ oc get -o jsonpath='{.status.infrastructureName}{"\n"}' infrastructure cluster
2 Specify the infrastructure ID, role node label, and zone.
3 Specify the role node label to add.
4 Specify a valid Fedora CoreOS (FCOS) Amazon Machine Image (AMI) for your AWS zone for your OKD nodes. If you want to use an AWS Marketplace image, you must complete the OKD subscription from the AWS Marketplace to obtain an AMI ID for your region.
5 Optional: Specify custom tag data for your cluster. For example, you might add an admin contact email address by specifying a name:value pair of Email:admin-email@example.com.
6 Specify the zone, for example, us-east-1a.
7 Specify the region, for example, us-east-1.
8 Specify the infrastructure ID and zone.
Creating a compute machine set
In addition to the ones created by the installation program, you can create your own compute machine sets to dynamically manage the machine compute resources for specific workloads of your choice.
Prerequisites
Deploy an OKD cluster.
Install the OpenShift CLI (oc).
Log in to oc as a user with cluster-admin permission.
Procedure
Create a new YAML file that contains the compute machine set custom resource (CR) sample and is named <file_name>.yaml. Ensure that you set the <clusterID> and <role> parameter values.
If you are not sure which value to set for a specific field, you can check an existing compute machine set from your cluster:
$ oc get machinesets -n openshift-machine-api
Example output
NAME DESIRED CURRENT READY AVAILABLE AGE
agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m
agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m
agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m
agl030519-vplxk-worker-us-east-1d 0 0 55m
agl030519-vplxk-worker-us-east-1e 0 0 55m
agl030519-vplxk-worker-us-east-1f 0 0 55m
Check values of a specific compute machine set:
$ oc get machineset <machineset_name> -n openshift-machine-api -o yaml
Example output
...
template:
  metadata:
    labels:
      machine.openshift.io/cluster-api-cluster: agl030519-vplxk (1)
      machine.openshift.io/cluster-api-machine-role: worker (2)
      machine.openshift.io/cluster-api-machine-type: worker
      machine.openshift.io/cluster-api-machineset: agl030519-vplxk-worker-us-east-1a
1 The cluster ID.
2 A default node label.
Create the new MachineSet CR:
$ oc create -f <file_name>.yaml
View the list of compute machine sets:
$ oc get machineset -n openshift-machine-api
Example output
NAME DESIRED CURRENT READY AVAILABLE AGE
agl030519-vplxk-infra-us-east-1a 1 1 1 1 11m
agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m
agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m
agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m
agl030519-vplxk-worker-us-east-1d 0 0 55m
agl030519-vplxk-worker-us-east-1e 0 0 55m
agl030519-vplxk-worker-us-east-1f 0 0 55m
When the new compute machine set is available, the DESIRED and CURRENT values match. If the compute machine set is not available, wait a few minutes and run the command again.
Next steps
If you need compute machine sets in other availability zones, repeat this process to create more compute machine sets.
Machine set options for the Amazon EC2 Instance Metadata Service
You can use machine sets to create machines that use a specific version of the Amazon EC2 Instance Metadata Service (IMDS). Machine sets can create machines that allow the use of both IMDSv1 and IMDSv2 or machines that require the use of IMDSv2.
To change the IMDS configuration for existing machines, edit the machine set YAML file that manages those machines. To deploy new compute machines with your preferred IMDS configuration, create a compute machine set YAML file with the appropriate values.
Before configuring a machine set to create machines that require IMDSv2, ensure that any workloads that interact with the AWS metadata service support IMDSv2.
Configuring IMDS by using machine sets
You can specify whether to require the use of IMDSv2 by adding or editing the value of metadataServiceOptions.authentication in the machine set YAML file for your machines.
Procedure
Add or edit the following lines under the providerSpec field:
providerSpec:
  value:
    metadataServiceOptions:
      authentication: Required (1)
1 To require IMDSv2, set the parameter value to Required. To allow the use of both IMDSv1 and IMDSv2, set the parameter value to Optional. If no value is specified, both IMDSv1 and IMDSv2 are allowed.
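If you want to confirm the behavior from a running node, one option (assuming curl is available on the host) is to request an IMDSv2 session token and query the metadata service with it; on a machine that requires IMDSv2, the same query without the token fails:
$ TOKEN=$(curl -s -X PUT "http://169.254.169.254/latest/api/token" -H "X-aws-ec2-metadata-token-ttl-seconds: 60")
$ curl -s -H "X-aws-ec2-metadata-token: $TOKEN" http://169.254.169.254/latest/meta-data/instance-id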
Machine sets that deploy machines as Dedicated Instances
You can create a machine set running on AWS that deploys machines as Dedicated Instances. Dedicated Instances run in a virtual private cloud (VPC) on hardware that is dedicated to a single customer. These Amazon EC2 instances are physically isolated at the host hardware level. The isolation of Dedicated Instances occurs even if the instances belong to different AWS accounts that are linked to a single payer account. However, other instances that are not dedicated can share hardware with Dedicated Instances if they belong to the same AWS account.
Instances with either public or dedicated tenancy are supported by the Machine API. Instances with public tenancy run on shared hardware. Public tenancy is the default tenancy. Instances with dedicated tenancy run on single-tenant hardware.
Creating Dedicated Instances by using machine sets
You can run a machine that is backed by a Dedicated Instance by using Machine API integration. Set the tenancy field in your machine set YAML file to launch a Dedicated Instance on AWS.
Procedure
Specify a dedicated tenancy under the providerSpec field:
providerSpec:
  placement:
    tenancy: dedicated
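To spot-check the result, a sketch using the AWS CLI (assuming it is installed and configured; <instance_id> is a placeholder), which should print dedicated for a Dedicated Instance:
$ aws ec2 describe-instances --instance-ids <instance_id> \
    --query 'Reservations[].Instances[].Placement.Tenancy' --output text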
Machine sets that deploy machines as Spot Instances
You can save on costs by creating a compute machine set running on AWS that deploys machines as non-guaranteed Spot Instances. Spot Instances utilize unused AWS EC2 capacity and are less expensive than On-Demand Instances. You can use Spot Instances for workloads that can tolerate interruptions, such as batch or stateless, horizontally scalable workloads.
AWS EC2 can terminate a Spot Instance at any time. AWS gives a two-minute warning to the user when an interruption occurs. OKD begins to remove the workloads from the affected instances when AWS issues the termination warning.
Interruptions can occur when using Spot Instances for the following reasons:
The instance price exceeds your maximum price
The demand for Spot Instances increases
The supply of Spot Instances decreases
When AWS terminates an instance, a termination handler running on the Spot Instance node deletes the machine resource. To satisfy the compute machine set replicas quantity, the compute machine set creates a machine that requests a Spot Instance.
Creating Spot Instances by using compute machine sets
You can launch a Spot Instance on AWS by adding spotMarketOptions to your compute machine set YAML file.
Procedure
Add the following line under the providerSpec field:
providerSpec:
  value:
    spotMarketOptions: {}
You can optionally set the spotMarketOptions.maxPrice field to limit the cost of the Spot Instance. For example, you can set maxPrice: '2.50'.
If maxPrice is set, this value is used as the hourly maximum spot price. If it is not set, the maximum price defaults to the On-Demand Instance price.
It is strongly recommended to use the default On-Demand price as the maxPrice value and not to set a maximum price for Spot Instances.
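For illustration only, because the recommendation above is to leave the maximum price unset, the stanza with a maximum hourly price set looks like this:
providerSpec:
  value:
    spotMarketOptions:
      maxPrice: '2.50'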
Adding a GPU node to an existing OKD cluster
You can copy and modify a default compute machine set configuration to create a GPU-enabled machine set and machines for the AWS EC2 cloud provider.
The following table lists the validated instance types:
| Instance type | NVIDIA GPU accelerator | Maximum number of GPUs | Architecture |
|---|---|---|---|
| p4d.24xlarge | A100 | 8 | x86 |
| g4dn.xlarge | T4 | 1 | x86 |
Procedure
View the existing nodes, machines, and machine sets by running the following command. Note that each node is an instance of a machine definition with a specific AWS region and OKD role.
$ oc get nodes
Example output
NAME STATUS ROLES AGE VERSION
ip-10-0-52-50.us-east-2.compute.internal Ready worker 3d17h v1.25.4+86bd4ff
ip-10-0-58-24.us-east-2.compute.internal Ready control-plane,master 3d17h v1.25.4+86bd4ff
ip-10-0-68-148.us-east-2.compute.internal Ready worker 3d17h v1.25.4+86bd4ff
ip-10-0-68-68.us-east-2.compute.internal Ready control-plane,master 3d17h v1.25.4+86bd4ff
ip-10-0-72-170.us-east-2.compute.internal Ready control-plane,master 3d17h v1.25.4+86bd4ff
ip-10-0-74-50.us-east-2.compute.internal Ready worker 3d17h v1.25.4+86bd4ff
View the machines and machine sets that exist in the openshift-machine-api namespace by running the following command. Each compute machine set is associated with a different availability zone within the AWS region. The installer automatically load balances compute machines across availability zones.
$ oc get machinesets -n openshift-machine-api
Example output
NAME DESIRED CURRENT READY AVAILABLE AGE
preserve-dsoc12r4-ktjfc-worker-us-east-2a 1 1 1 1 3d11h
preserve-dsoc12r4-ktjfc-worker-us-east-2b 2 2 2 2 3d11h
View the machines that exist in the openshift-machine-api namespace by running the following command. At this time, there is only one compute machine per machine set, though a compute machine set could be scaled to add a node in a particular region and zone.
$ oc get machines -n openshift-machine-api | grep worker
Example output
preserve-dsoc12r4-ktjfc-worker-us-east-2a-dts8r Running m5.xlarge us-east-2 us-east-2a 3d11h
preserve-dsoc12r4-ktjfc-worker-us-east-2b-dkv7w Running m5.xlarge us-east-2 us-east-2b 3d11h
preserve-dsoc12r4-ktjfc-worker-us-east-2b-k58cw Running m5.xlarge us-east-2 us-east-2b 3d11h
Make a copy of one of the existing compute MachineSet definitions and output the result to a JSON file by running the following command. This will be the basis for the GPU-enabled compute machine set definition.
$ oc get machineset preserve-dsoc12r4-ktjfc-worker-us-east-2a -n openshift-machine-api -o json > <output_file.json>
Edit the JSON file and make the following changes to the new MachineSet definition:
Replace worker with gpu. This will be the name of the new machine set.
Change the instance type of the new MachineSet definition to g4dn, which includes an NVIDIA Tesla T4 GPU. To learn more about AWS g4dn instance types, see Accelerated Computing.
$ jq .spec.template.spec.providerSpec.value.instanceType preserve-dsoc12r4-ktjfc-worker-gpu-us-east-2a.json
"g4dn.xlarge"
The <output_file.json> file is saved as preserve-dsoc12r4-ktjfc-worker-gpu-us-east-2a.json.
Update the following fields in preserve-dsoc12r4-ktjfc-worker-gpu-us-east-2a.json. A jq one-liner that applies all four updates is sketched after this list.
.metadata.name to a name containing gpu.
.spec.selector.matchLabels["machine.openshift.io/cluster-api-machineset"] to match the new .metadata.name.
.spec.template.metadata.labels["machine.openshift.io/cluster-api-machineset"] to match the new .metadata.name.
.spec.template.spec.providerSpec.value.instanceType to g4dn.xlarge.
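All four updates can be applied in one pass with jq; this is a sketch, and the NEW_NAME value mirrors the example file name used in this procedure:
$ NEW_NAME=preserve-dsoc12r4-ktjfc-worker-gpu-us-east-2a
$ jq --arg name "$NEW_NAME" '
    .metadata.name = $name |
    .spec.selector.matchLabels["machine.openshift.io/cluster-api-machineset"] = $name |
    .spec.template.metadata.labels["machine.openshift.io/cluster-api-machineset"] = $name |
    .spec.template.spec.providerSpec.value.instanceType = "g4dn.xlarge"
  ' <output_file.json> > "$NEW_NAME.json"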
To verify your changes, perform a diff of the original compute definition and the new GPU-enabled node definition by running the following command:
$ oc -n openshift-machine-api get machineset/preserve-dsoc12r4-ktjfc-worker-us-east-2a -o json | diff preserve-dsoc12r4-ktjfc-worker-gpu-us-east-2a.json -
Example output
10c10
< "name": "preserve-dsoc12r4-ktjfc-worker-gpu-us-east-2a",
---
> "name": "preserve-dsoc12r4-ktjfc-worker-us-east-2a",
21c21
< "machine.openshift.io/cluster-api-machineset": "preserve-dsoc12r4-ktjfc-worker-gpu-us-east-2a"
---
> "machine.openshift.io/cluster-api-machineset": "preserve-dsoc12r4-ktjfc-worker-us-east-2a"
31c31
< "machine.openshift.io/cluster-api-machineset": "preserve-dsoc12r4-ktjfc-worker-gpu-us-east-2a"
---
> "machine.openshift.io/cluster-api-machineset": "preserve-dsoc12r4-ktjfc-worker-us-east-2a"
60c60
< "instanceType": "g4dn.xlarge",
---
> "instanceType": "m5.xlarge",
Create the GPU-enabled compute machine set from the definition by running the following command:
$ oc create -f preserve-dsoc12r4-ktjfc-worker-gpu-us-east-2a.json
Example output
machineset.machine.openshift.io/preserve-dsoc12r4-ktjfc-worker-gpu-us-east-2a created
Verification
View the machine set you created by running the following command:
$ oc -n openshift-machine-api get machinesets | grep gpu
The MachineSet replica count is set to 1, so a new Machine object is created automatically.
Example output
preserve-dsoc12r4-ktjfc-worker-gpu-us-east-2a 1 1 1 1 4m21s
View the Machine object that the machine set created by running the following command:
$ oc -n openshift-machine-api get machines | grep gpu
Example output
preserve-dsoc12r4-ktjfc-worker-gpu-us-east-2a running g4dn.xlarge us-east-2 us-east-2a 4m36s
Note that there is no need to specify a namespace for the node. The node definition is cluster scoped.
Deploying the Node Feature Discovery Operator
After the GPU-enabled node is created, you need to discover the GPU-enabled node so it can be scheduled. To do this, install the Node Feature Discovery (NFD) Operator. The NFD Operator identifies hardware device features in nodes. It solves the general problem of identifying and cataloging hardware resources in the infrastructure nodes so they can be made available to OKD.
Procedure
Install the Node Feature Discovery Operator from OperatorHub in the OKD console.
After installing the NFD Operator from OperatorHub, select Node Feature Discovery from the installed Operators list and select Create instance. This installs the nfd-master and nfd-worker pods, one nfd-worker pod for each compute node, in the openshift-nfd namespace.
Verify that the Operator is installed and running by running the following command:
$ oc get pods -n openshift-nfd
Example output
NAME READY STATUS RESTARTS AGE
nfd-controller-manager-8646fcbb65-x5qgk 2/2 Running 7 (8h ago) 1d
Browse to the installed Operator in the console and select Create Node Feature Discovery.
Select Create to build an NFD custom resource. This creates NFD pods in the openshift-nfd namespace that poll the OKD nodes for hardware resources and catalog them.
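For reference, a minimal sketch of the custom resource that the console flow creates; the exact spec fields depend on the installed NFD Operator version, and an empty spec relies on the Operator defaults:
apiVersion: nfd.openshift.io/v1
kind: NodeFeatureDiscovery
metadata:
  name: nfd-instance
  namespace: openshift-nfd
spec: {}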
Verification
After a successful build, verify that an NFD pod is running on each node by running the following command:
$ oc get pods -n openshift-nfd
Example output
NAME READY STATUS RESTARTS AGE
nfd-controller-manager-8646fcbb65-x5qgk 2/2 Running 7 (8h ago) 12d
nfd-master-769656c4cb-w9vrv 1/1 Running 0 12d
nfd-worker-qjxb2 1/1 Running 3 (3d14h ago) 12d
nfd-worker-xtz9b 1/1 Running 5 (3d14h ago) 12d
The NFD Operator uses vendor PCI IDs to identify hardware in a node. NVIDIA uses the PCI ID 10de.
View the NVIDIA GPU discovered by the NFD Operator by running the following command:
$ oc describe node ip-10-0-132-138.us-east-2.compute.internal | egrep 'Roles|pci'
Example output
Roles: worker
feature.node.kubernetes.io/pci-1013.present=true
feature.node.kubernetes.io/pci-10de.present=true
feature.node.kubernetes.io/pci-1d0f.present=true
10de appears in the node feature list for the GPU-enabled node. This means the NFD Operator correctly identified the node from the GPU-enabled MachineSet.
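To list every feature label that NFD applied to the node, a sketch assuming jq is available (the node name matches the example above):
$ oc get node ip-10-0-132-138.us-east-2.compute.internal -o json \
    | jq '.metadata.labels | with_entries(select(.key | startswith("feature.node.kubernetes.io")))'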