- Creating a compute machine set on GCP
- Sample YAML for a compute machine set custom resource on GCP
- Creating a compute machine set
- Configuring persistent disk types by using machine sets
- Configuring Confidential VM by using machine sets
- Machine sets that deploy machines as preemptible VM instances
- Configuring Shielded VM options by using machine sets
- Enabling customer-managed encryption keys for a machine set
- Enabling GPU support for a compute machine set
- Adding a GPU node to an existing OKD cluster
- Deploying the Node Feature Discovery Operator
Creating a compute machine set on GCP
You can create a different compute machine set to serve a specific purpose in your OKD cluster on Google Cloud Platform (GCP). For example, you might create infrastructure machine sets and related machines so that you can move supporting workloads to the new machines.
You can use the advanced machine management and scaling capabilities only in clusters where the Machine API is operational. Clusters with user-provisioned infrastructure require additional validation and configuration to use the Machine API. Clusters with the infrastructure platform type none cannot use the Machine API. This limitation applies even if the compute machines that are attached to the cluster are installed on a platform that supports the feature. To view the platform type for your cluster, run the following command:
$ oc get infrastructure cluster -o jsonpath='{.status.platform}'
Sample YAML for a compute machine set custom resource on GCP
This sample YAML defines a compute machine set that runs in Google Cloud Platform (GCP) and creates nodes that are labeled with node-role.kubernetes.io/<role>: "", where <role> is the node label to add.
Values obtained by using the OpenShift CLI
In the following example, you can obtain some of the values for your cluster by using the OpenShift CLI.
Infrastructure ID
The <infrastructure_id> string is the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. If you have the OpenShift CLI installed, you can obtain the infrastructure ID by running the following command:
$ oc get -o jsonpath='{.status.infrastructureName}{"\n"}' infrastructure cluster
Image path
The <path_to_image> string is the path to the image that was used to create the disk. If you have the OpenShift CLI installed, you can obtain the path to the image by running the following command:
$ oc -n openshift-machine-api \
-o jsonpath='{.spec.template.spec.providerSpec.value.disks[0].image}{"\n"}' \
get machineset/<infrastructure_id>-worker-a
Sample GCP MachineSet values
apiVersion: machine.openshift.io/v1beta1
kind: MachineSet
metadata:
  labels:
    machine.openshift.io/cluster-api-cluster: <infrastructure_id> (1)
  name: <infrastructure_id>-w-a
  namespace: openshift-machine-api
spec:
  replicas: 1
  selector:
    matchLabels:
      machine.openshift.io/cluster-api-cluster: <infrastructure_id>
      machine.openshift.io/cluster-api-machineset: <infrastructure_id>-w-a
  template:
    metadata:
      creationTimestamp: null
      labels:
        machine.openshift.io/cluster-api-cluster: <infrastructure_id>
        machine.openshift.io/cluster-api-machine-role: <role> (2)
        machine.openshift.io/cluster-api-machine-type: <role>
        machine.openshift.io/cluster-api-machineset: <infrastructure_id>-w-a
    spec:
      metadata:
        labels:
          node-role.kubernetes.io/<role>: ""
      providerSpec:
        value:
          apiVersion: gcpprovider.openshift.io/v1beta1
          canIPForward: false
          credentialsSecret:
            name: gcp-cloud-credentials
          deletionProtection: false
          disks:
          - autoDelete: true
            boot: true
            image: <path_to_image> (3)
            labels: null
            sizeGb: 128
            type: pd-ssd
          gcpMetadata: (4)
          - key: <custom_metadata_key>
            value: <custom_metadata_value>
          kind: GCPMachineProviderSpec
          machineType: n1-standard-4
          metadata:
            creationTimestamp: null
          networkInterfaces:
          - network: <infrastructure_id>-network
            subnetwork: <infrastructure_id>-worker-subnet
          projectID: <project_name> (5)
          region: us-central1
          serviceAccounts:
          - email: <infrastructure_id>-w@<project_name>.iam.gserviceaccount.com
            scopes:
            - https://www.googleapis.com/auth/cloud-platform
          tags:
          - <infrastructure_id>-worker
          userDataSecret:
            name: worker-user-data
          zone: us-central1-a
1 For <infrastructure_id>, specify the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster.
2 For <role>, specify the node label to add.
3 Specify the path to the image that is used in current compute machine sets. To use a GCP Marketplace image, specify the offer to use.
4 Optional: Specify custom metadata in the form of a key:value pair. For example use cases, see the GCP documentation for setting custom metadata.
5 For <project_name>, specify the name of the GCP project that you use for your cluster.
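If you script the creation of this file, you can populate the placeholders with the values obtained earlier. The following is a minimal sketch, assuming the worker role and a file named gcp-machineset.yaml with the callout markers removed; the file name and role are illustrative, not part of the procedure:
$ INFRA_ID=$(oc get -o jsonpath='{.status.infrastructureName}' infrastructure cluster)
$ IMAGE=$(oc -n openshift-machine-api \
    -o jsonpath='{.spec.template.spec.providerSpec.value.disks[0].image}' \
    get machineset/${INFRA_ID}-worker-a)
$ sed -i "s|<infrastructure_id>|${INFRA_ID}|g; s|<path_to_image>|${IMAGE}|g; s|<role>|worker|g" gcp-machineset.yaml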
Creating a compute machine set
In addition to the compute machine sets created by the installation program, you can create your own to dynamically manage the machine compute resources for specific workloads of your choice.
Prerequisites
Deploy an OKD cluster.
Install the OpenShift CLI (oc).
Log in to oc as a user with cluster-admin permission.
Procedure
Create a new YAML file that contains the compute machine set custom resource (CR) sample and is named <file_name>.yaml. Ensure that you set the <clusterID> and <role> parameter values.
Optional: If you are not sure which value to set for a specific field, you can check an existing compute machine set from your cluster.
To list the compute machine sets in your cluster, run the following command:
$ oc get machinesets -n openshift-machine-api
Example output
NAME DESIRED CURRENT READY AVAILABLE AGE
agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m
agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m
agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m
agl030519-vplxk-worker-us-east-1d 0 0 55m
agl030519-vplxk-worker-us-east-1e 0 0 55m
agl030519-vplxk-worker-us-east-1f 0 0 55m
To view values of a specific compute machine set custom resource (CR), run the following command:
$ oc get machineset <machineset_name> \
-n openshift-machine-api -o yaml
Example output
apiVersion: machine.openshift.io/v1beta1
kind: MachineSet
metadata:
  labels:
    machine.openshift.io/cluster-api-cluster: <infrastructure_id> (1)
  name: <infrastructure_id>-<role> (2)
  namespace: openshift-machine-api
spec:
  replicas: 1
  selector:
    matchLabels:
      machine.openshift.io/cluster-api-cluster: <infrastructure_id>
      machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role>
  template:
    metadata:
      labels:
        machine.openshift.io/cluster-api-cluster: <infrastructure_id>
        machine.openshift.io/cluster-api-machine-role: <role>
        machine.openshift.io/cluster-api-machine-type: <role>
        machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role>
    spec:
      providerSpec: (3)
        ...
1 The cluster infrastructure ID.
2 A default node label. For clusters that have user-provisioned infrastructure, a compute machine set can only create worker and infra type machines.
3 The values in the <providerSpec> section of the compute machine set CR are platform-specific. For more information about <providerSpec> parameters in the CR, see the sample compute machine set CR configuration for your provider.
Create a MachineSet CR by running the following command:
$ oc create -f <file_name>.yaml
Verification
View the list of compute machine sets by running the following command:
$ oc get machineset -n openshift-machine-api
Example output
NAME DESIRED CURRENT READY AVAILABLE AGE
agl030519-vplxk-infra-us-east-1a 1 1 1 1 11m
agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m
agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m
agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m
agl030519-vplxk-worker-us-east-1d 0 0 55m
agl030519-vplxk-worker-us-east-1e 0 0 55m
agl030519-vplxk-worker-us-east-1f 0 0 55m
When the new compute machine set is available, the DESIRED and CURRENT values match. If the compute machine set is not available, wait a few minutes and run the command again.
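If you prefer to check availability from a script rather than rereading the table, the machine set status reports the same information. The following is a minimal sketch, assuming <machineset_name> is the machine set you created; when the two numbers match, the machine set is available:
$ oc get machineset <machineset_name> -n openshift-machine-api \
    -o jsonpath='{.spec.replicas} {.status.readyReplicas}{"\n"}'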
Configuring persistent disk types by using machine sets
You can configure the type of persistent disk that a machine set deploys machines on by editing the machine set YAML file.
For more information about persistent disk types, compatibility, regional availability, and limitations, see the GCP Compute Engine documentation about persistent disks.
Procedure
In a text editor, open the YAML file for an existing machine set or create a new one.
Edit the following line under the providerSpec field:
apiVersion: machine.openshift.io/v1beta1
kind: MachineSet
...
spec:
  template:
    spec:
      providerSpec:
        value:
          disks:
          - type: <pd-disk-type> (1)
1 Specify the persistent disk type. Valid values are pd-ssd, pd-standard, and pd-balanced. The default value is pd-standard.
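If you would rather patch an existing machine set than edit the file by hand, a JSON patch can target the first disk entry. A minimal sketch, assuming a machine set named <machineset_name>; the new disk type applies only to machines created after the change:
$ oc patch machineset <machineset_name> -n openshift-machine-api --type=json \
    -p '[{"op": "replace", "path": "/spec/template/spec/providerSpec/value/disks/0/type", "value": "pd-balanced"}]'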
Verification
- Using the Google Cloud console, review the details for a machine deployed by the machine set and verify that the Type field matches the configured disk type.
Configuring Confidential VM by using machine sets
By editing the machine set YAML file, you can configure the Confidential VM options that a machine set uses for machines that it deploys.
For more information about Confidential VM features, functions, and compatibility, see the GCP Compute Engine documentation about Confidential VM.
Confidential VMs are currently not supported on 64-bit ARM architectures.
OKD 4.14 does not support some Confidential Compute features, such as Confidential VMs with AMD Secure Encrypted Virtualization Secure Nested Paging (SEV-SNP).
Procedure
In a text editor, open the YAML file for an existing machine set or create a new one.
Edit the following section under the providerSpec field:
apiVersion: machine.openshift.io/v1beta1
kind: MachineSet
...
spec:
  template:
    spec:
      providerSpec:
        value:
          confidentialCompute: Enabled (1)
          onHostMaintenance: Terminate (2)
          machineType: n2d-standard-8 (3)
...
1 Specify whether Confidential VM is enabled. Valid values are Disabled or Enabled.
2 Specify the behavior of the VM during a host maintenance event, such as a hardware or software update. For a machine that uses Confidential VM, this value must be set to Terminate, which stops the VM. Confidential VM does not support live VM migration.
3 Specify a machine type that supports Confidential VM. Confidential VM supports the N2D and C2D series of machine types.
Verification
- On the Google Cloud console, review the details for a machine deployed by the machine set and verify that the Confidential VM options match the values that you configured.
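You can also confirm the setting from the gcloud CLI. A minimal sketch, assuming placeholder instance and zone names; the field prints true when Confidential VM is enabled:
$ gcloud compute instances describe <instance_name> --zone <zone> \
    --format="value(confidentialInstanceConfig.enableConfidentialCompute)"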
Machine sets that deploy machines as preemptible VM instances
You can save on costs by creating a compute machine set running on GCP that deploys machines as non-guaranteed preemptible VM instances. Preemptible VM instances utilize excess Compute Engine capacity and are less expensive than normal instances. You can use preemptible VM instances for workloads that can tolerate interruptions, such as batch or stateless, horizontally scalable workloads.
GCP Compute Engine can terminate a preemptible VM instance at any time. Compute Engine sends a preemption notice to the user indicating that an interruption will occur in 30 seconds. OKD begins to remove the workloads from the affected instances when Compute Engine issues the preemption notice. An ACPI G3 Mechanical Off signal is sent to the operating system after 30 seconds if the instance is not stopped. The preemptible VM instance is then transitioned to a TERMINATED
state by Compute Engine.
Interruptions can occur when using preemptible VM instances for the following reasons:
There is a system or maintenance event
The supply of preemptible VM instances decreases
The instance reaches the end of the allotted 24-hour period for preemptible VM instances
When GCP terminates an instance, a termination handler running on the preemptible VM instance node deletes the machine resource. To satisfy the compute machine set replicas quantity, the compute machine set creates a machine that requests a preemptible VM instance.
Creating preemptible VM instances by using compute machine sets
You can launch a preemptible VM instance on GCP by adding preemptible to your compute machine set YAML file.
Procedure
Add the following line under the providerSpec field:
providerSpec:
  value:
    preemptible: true
If preemptible is set to true, the machine is labeled as an interruptable-instance after the instance is launched.
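To list the machines that received this label, you can filter on it with a label selector. A minimal sketch, assuming the label key is machine.openshift.io/interruptible-instance; verify the exact key on a running machine before relying on it:
$ oc get machines -n openshift-machine-api -l machine.openshift.io/interruptible-instance=""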
Configuring Shielded VM options by using machine sets
By editing the machine set YAML file, you can configure the Shielded VM options that a machine set uses for machines that it deploys.
For more information about Shielded VM features and functionality, see the GCP Compute Engine documentation about Shielded VM.
Procedure
In a text editor, open the YAML file for an existing machine set or create a new one.
Edit the following section under the providerSpec field:
apiVersion: machine.openshift.io/v1beta1
kind: MachineSet
# ...
spec:
  template:
    spec:
      providerSpec:
        value:
          shieldedInstanceConfig: (1)
            integrityMonitoring: Enabled (2)
            secureBoot: Disabled (3)
            virtualizedTrustedPlatformModule: Enabled (4)
# ...
1 In this section, specify any Shielded VM options that you want.
2 Specify whether integrity monitoring is enabled. Valid values are Disabled or Enabled. When integrity monitoring is enabled, you must not disable virtual trusted platform module (vTPM).
3 Specify whether UEFI Secure Boot is enabled. Valid values are Disabled or Enabled.
4 Specify whether vTPM is enabled. Valid values are Disabled or Enabled.
Verification
- Using the Google Cloud console, review the details for a machine deployed by the machine set and verify that the Shielded VM options match the values that you configured.
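As with the other VM options, you can also inspect the instance from the gcloud CLI. A minimal sketch, assuming placeholder instance and zone names; the command prints the configured Shielded VM settings:
$ gcloud compute instances describe <instance_name> --zone <zone> \
    --format="value(shieldedInstanceConfig)"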
Enabling customer-managed encryption keys for a machine set
Google Cloud Platform (GCP) Compute Engine allows users to supply an encryption key to encrypt data on disks at rest. The key is used to encrypt the data encryption key, not to encrypt the customer’s data. By default, Compute Engine encrypts this data by using Compute Engine keys.
You can enable encryption with a customer-managed key in clusters that use the Machine API. You must first create a KMS key and assign the correct permissions to a service account. The KMS key name, key ring name, and location are required to allow a service account to use your key.
If you do not want to use a dedicated service account for the KMS encryption, the Compute Engine default service account is used instead. You must grant the default service account permission to access the keys if you do not use a dedicated service account. The Compute Engine default service account name follows the service-<project_number>@compute-system.iam.gserviceaccount.com pattern.
Procedure
To allow a specific service account to use your KMS key and to grant the service account the correct IAM role, run the following command with your KMS key name, key ring name, and location:
$ gcloud kms keys add-iam-policy-binding <key_name> \
--keyring <key_ring_name> \
--location <key_ring_location> \
--member "serviceAccount:service-<project_number>@compute-system.iam.gserviceaccount.com" \
--role roles/cloudkms.cryptoKeyEncrypterDecrypter
Configure the encryption key under the providerSpec field in your machine set YAML file. For example:
apiVersion: machine.openshift.io/v1beta1
kind: MachineSet
...
spec:
  template:
    spec:
      providerSpec:
        value:
          disks:
          - type:
            encryptionKey:
              kmsKey:
                name: machine-encryption-key (1)
                keyRing: openshift-encryption-ring (2)
                location: global (3)
                projectID: openshift-gcp-project (4)
              kmsKeyServiceAccount: openshift-service-account@openshift-gcp-project.iam.gserviceaccount.com (5)
1 The name of the customer-managed encryption key that is used for the disk encryption.
2 The name of the KMS key ring that the KMS key belongs to.
3 The GCP location in which the KMS key ring exists.
4 Optional: The ID of the project in which the KMS key ring exists. If a project ID is not set, the projectID of the project in which the machine set was created is used.
5 Optional: The service account that is used for the encryption request for the given KMS key. If a service account is not set, the Compute Engine default service account is used.
When a new machine is created by using the updated providerSpec object configuration, the disk encryption key is encrypted with the KMS key.
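To confirm that a new machine's disk uses the customer-managed key, you can inspect the disk that Compute Engine created. A minimal sketch, assuming placeholder disk and zone names; the command prints the full KMS key resource name when a customer-managed key is in use:
$ gcloud compute disks describe <disk_name> --zone <zone> \
    --format="value(diskEncryptionKey.kmsKeyName)"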
Enabling GPU support for a compute machine set
Google Cloud Platform (GCP) Compute Engine enables users to add GPUs to VM instances. Workloads that benefit from access to GPU resources can perform better on compute machines with this feature enabled. OKD on GCP supports NVIDIA GPU models in the A2 and N1 machine series.
Model name | GPU type | Machine types [1] |
---|---|---|
NVIDIA A100 | nvidia-tesla-a100 | A2 |
NVIDIA K80 | nvidia-tesla-k80 | N1 |
NVIDIA P100 | nvidia-tesla-p100 | N1 |
NVIDIA P4 | nvidia-tesla-p4 | N1 |
NVIDIA T4 | nvidia-tesla-t4 | N1 |
NVIDIA V100 | nvidia-tesla-v100 | N1 |
- For more information about machine types, including specifications, compatibility, regional availability, and limitations, see the GCP Compute Engine documentation about N1 machine series, A2 machine series, and GPU regions and zones availability.
You can define which supported GPU to use for an instance by using the Machine API.
You can configure machines in the N1 machine series to deploy with one of the supported GPU types. Machines in the A2 machine series come with associated GPUs, and cannot use guest accelerators.
GPUs for graphics workloads are not supported.
Procedure
In a text editor, open the YAML file for an existing compute machine set or create a new one.
Specify a GPU configuration under the providerSpec field in your compute machine set YAML file. See the following examples of valid configurations:
Example configuration for the A2 machine series:
providerSpec:
  value:
    machineType: a2-highgpu-1g (1)
    onHostMaintenance: Terminate (2)
    restartPolicy: Always (3)
1 Specify the machine type. Ensure that the machine type is included in the A2 machine series.
2 When using GPU support, you must set onHostMaintenance to Terminate.
3 Specify the restart policy for machines deployed by the compute machine set. Allowed values are Always or Never.
Example configuration for the N1 machine series:
providerSpec:
  value:
    gpus:
    - count: 1 (1)
      type: nvidia-tesla-p100 (2)
    machineType: n1-standard-1 (3)
    onHostMaintenance: Terminate (4)
    restartPolicy: Always (5)
1 Specify the number of GPUs to attach to the machine.
2 Specify the type of GPUs to attach to the machine. Ensure that the machine type and GPU type are compatible.
3 Specify the machine type. Ensure that the machine type and GPU type are compatible.
4 When using GPU support, you must set onHostMaintenance to Terminate.
5 Specify the restart policy for machines deployed by the compute machine set. Allowed values are Always or Never.
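Before you commit to a machine type and GPU type pairing, you can list the accelerators that Compute Engine offers in your target zone. A minimal sketch, assuming the us-central1-a zone used in the samples:
$ gcloud compute accelerator-types list --filter="zone:(us-central1-a)"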
Adding a GPU node to an existing OKD cluster
You can copy and modify a default compute machine set configuration to create a GPU-enabled machine set and machines for the GCP cloud provider.
The following table lists the validated instance types:
Instance type | NVIDIA GPU accelerator | Maximum number of GPUs | Architecture |
---|---|---|---|
a2-highgpu-1g | A100 | 1 | x86 |
| T4 | 1 | x86 |
Procedure
Make a copy of an existing MachineSet.
In the new copy, change the machine set name in metadata.name and in both instances of machine.openshift.io/cluster-api-machineset.
Change the instance type to add the following two lines to the newly copied MachineSet:
machineType: a2-highgpu-1g
onHostMaintenance: Terminate
Example a2-highgpu-1g.json file
file{
"apiVersion": "machine.openshift.io/v1beta1",
"kind": "MachineSet",
"metadata": {
"annotations": {
"machine.openshift.io/GPU": "0",
"machine.openshift.io/memoryMb": "16384",
"machine.openshift.io/vCPU": "4"
},
"creationTimestamp": "2023-01-13T17:11:02Z",
"generation": 1,
"labels": {
"machine.openshift.io/cluster-api-cluster": "myclustername-2pt9p"
},
"name": "myclustername-2pt9p-worker-gpu-a",
"namespace": "openshift-machine-api",
"resourceVersion": "20185",
"uid": "2daf4712-733e-4399-b4b4-d43cb1ed32bd"
},
"spec": {
"replicas": 1,
"selector": {
"matchLabels": {
"machine.openshift.io/cluster-api-cluster": "myclustername-2pt9p",
"machine.openshift.io/cluster-api-machineset": "myclustername-2pt9p-worker-gpu-a"
}
},
"template": {
"metadata": {
"labels": {
"machine.openshift.io/cluster-api-cluster": "myclustername-2pt9p",
"machine.openshift.io/cluster-api-machine-role": "worker",
"machine.openshift.io/cluster-api-machine-type": "worker",
"machine.openshift.io/cluster-api-machineset": "myclustername-2pt9p-worker-gpu-a"
}
},
"spec": {
"lifecycleHooks": {},
"metadata": {},
"providerSpec": {
"value": {
"apiVersion": "machine.openshift.io/v1beta1",
"canIPForward": false,
"credentialsSecret": {
"name": "gcp-cloud-credentials"
},
"deletionProtection": false,
"disks": [
{
"autoDelete": true,
"boot": true,
"image": "projects/rhcos-cloud/global/images/rhcos-412-86-202212081411-0-gcp-x86-64",
"labels": null,
"sizeGb": 128,
"type": "pd-ssd"
}
],
"kind": "GCPMachineProviderSpec",
"machineType": "a2-highgpu-1g",
"onHostMaintenance": "Terminate",
"metadata": {
"creationTimestamp": null
},
"networkInterfaces": [
{
"network": "myclustername-2pt9p-network",
"subnetwork": "myclustername-2pt9p-worker-subnet"
}
],
"preemptible": true,
"projectID": "myteam",
"region": "us-central1",
"serviceAccounts": [
{
"email": "myclustername-2pt9p-w@myteam.iam.gserviceaccount.com",
"scopes": [
"https://www.googleapis.com/auth/cloud-platform"
]
}
],
"tags": [
"myclustername-2pt9p-worker"
],
"userDataSecret": {
"name": "worker-user-data"
},
"zone": "us-central1-a"
}
}
}
}
},
"status": {
"availableReplicas": 1,
"fullyLabeledReplicas": 1,
"observedGeneration": 1,
"readyReplicas": 1,
"replicas": 1
}
}
View the existing nodes, machines, and machine sets by running the following command. Note that each node is an instance of a machine definition with a specific GCP region and OKD role.
$ oc get nodes
Example output
NAME STATUS ROLES AGE VERSION
myclustername-2pt9p-master-0.c.openshift-qe.internal Ready control-plane,master 8h v1.27.3
myclustername-2pt9p-master-1.c.openshift-qe.internal Ready control-plane,master 8h v1.27.3
myclustername-2pt9p-master-2.c.openshift-qe.internal Ready control-plane,master 8h v1.27.3
myclustername-2pt9p-worker-a-mxtnz.c.openshift-qe.internal Ready worker 8h v1.27.3
myclustername-2pt9p-worker-b-9pzzn.c.openshift-qe.internal Ready worker 8h v1.27.3
myclustername-2pt9p-worker-c-6pbg6.c.openshift-qe.internal Ready worker 8h v1.27.3
myclustername-2pt9p-worker-gpu-a-wxcr6.c.openshift-qe.internal Ready worker 4h35m v1.27.3
View the machines and machine sets that exist in the openshift-machine-api namespace by running the following command. Each compute machine set is associated with a different availability zone within the GCP region. The installer automatically load balances compute machines across availability zones.
$ oc get machinesets -n openshift-machine-api
Example output
NAME DESIRED CURRENT READY AVAILABLE AGE
myclustername-2pt9p-worker-a 1 1 1 1 8h
myclustername-2pt9p-worker-b 1 1 1 1 8h
myclustername-2pt9p-worker-c 1 1 8h
myclustername-2pt9p-worker-f 0 0 8h
View the machines that exist in the openshift-machine-api namespace by running the following command. You can only configure one compute machine per set, although you can scale a compute machine set to add a node in a particular region and zone.
$ oc get machines -n openshift-machine-api | grep worker
Example output
myclustername-2pt9p-worker-a-mxtnz Running n2-standard-4 us-central1 us-central1-a 8h
myclustername-2pt9p-worker-b-9pzzn Running n2-standard-4 us-central1 us-central1-b 8h
myclustername-2pt9p-worker-c-6pbg6 Running n2-standard-4 us-central1 us-central1-c 8h
Make a copy of one of the existing compute MachineSet definitions and output the result to a JSON file by running the following command. This will be the basis for the GPU-enabled compute machine set definition.
$ oc get machineset myclustername-2pt9p-worker-a -n openshift-machine-api -o json > <output_file.json>
Edit the JSON file to make the following changes to the new MachineSet definition:
Rename the machine set name by inserting the substring gpu in metadata.name and in both instances of machine.openshift.io/cluster-api-machineset.
Change the machineType of the new MachineSet definition to a2-highgpu-1g, which includes an NVIDIA A100 GPU. You can confirm the change with jq:
$ jq .spec.template.spec.providerSpec.value.machineType ocp_4.14_machineset-a2-highgpu-1g.json
"a2-highgpu-1g"
The <output_file.json> file is saved as ocp_4.14_machineset-a2-highgpu-1g.json.
Update the following fields in ocp_4.14_machineset-a2-highgpu-1g.json:
Change .metadata.name to a name containing gpu.
Change .spec.selector.matchLabels["machine.openshift.io/cluster-api-machineset"] to match the new .metadata.name.
Change .spec.template.metadata.labels["machine.openshift.io/cluster-api-machineset"] to match the new .metadata.name.
Change .spec.template.spec.providerSpec.value.machineType to a2-highgpu-1g.
Add the following line under machineType: "onHostMaintenance": "Terminate". For example:
"machineType": "a2-highgpu-1g",
"onHostMaintenance": "Terminate",
To verify your changes, perform a diff of the original compute definition and the new GPU-enabled node definition by running the following command:
$ oc get machineset/myclustername-2pt9p-worker-a -n openshift-machine-api -o json | diff ocp_4.14_machineset-a2-highgpu-1g.json -
Example output
15c15
< "name": "myclustername-2pt9p-worker-gpu-a",
---
> "name": "myclustername-2pt9p-worker-a",
25c25
< "machine.openshift.io/cluster-api-machineset": "myclustername-2pt9p-worker-gpu-a"
---
> "machine.openshift.io/cluster-api-machineset": "myclustername-2pt9p-worker-a"
34c34
< "machine.openshift.io/cluster-api-machineset": "myclustername-2pt9p-worker-gpu-a"
---
> "machine.openshift.io/cluster-api-machineset": "myclustername-2pt9p-worker-a"
59,60c59
< "machineType": "a2-highgpu-1g",
< "onHostMaintenance": "Terminate",
---
> "machineType": "n2-standard-4",
Create the GPU-enabled compute machine set from the definition file by running the following command:
$ oc create -f ocp_4.14_machineset-a2-highgpu-1g.json
Example output
machineset.machine.openshift.io/myclustername-2pt9p-worker-gpu-a created
Verification
View the machine set you created by running the following command:
$ oc -n openshift-machine-api get machinesets | grep gpu
The MachineSet replica count is set to 1, so a new Machine object is created automatically.
Example output
myclustername-2pt9p-worker-gpu-a 1 1 1 1 5h24m
View the Machine object that the machine set created by running the following command:
$ oc -n openshift-machine-api get machines | grep gpu
Example output
myclustername-2pt9p-worker-gpu-a-wxcr6 Running a2-highgpu-1g us-central1 us-central1-a 5h25m
Note that there is no need to specify a namespace for the node. The node definition is cluster scoped.
Deploying the Node Feature Discovery Operator
After the GPU-enabled node is created, you need to discover the GPU-enabled node so it can be scheduled. To do this, install the Node Feature Discovery (NFD) Operator. The NFD Operator identifies hardware device features in nodes. It solves the general problem of identifying and cataloging hardware resources in the infrastructure nodes so they can be made available to OKD.
Procedure
Install the Node Feature Discovery Operator from OperatorHub in the OKD console.
After installing the NFD Operator into OperatorHub, select Node Feature Discovery from the installed Operators list and select Create instance. This installs the nfd-master and nfd-worker pods, one nfd-worker pod for each compute node, in the openshift-nfd namespace.
Verify that the Operator is installed and running by running the following command:
$ oc get pods -n openshift-nfd
Example output
NAME READY STATUS RESTARTS AGE
nfd-controller-manager-8646fcbb65-x5qgk 2/2 Running 7 (8h ago) 1d
Browse to the installed Operator in the console and select Create Node Feature Discovery.
Select Create to build an NFD custom resource. This creates NFD pods in the openshift-nfd namespace that poll the OKD nodes for hardware resources and catalog them.
Verification
After a successful build, verify that an NFD pod is running on each node by running the following command:
$ oc get pods -n openshift-nfd
Example output
NAME READY STATUS RESTARTS AGE
nfd-controller-manager-8646fcbb65-x5qgk 2/2 Running 7 (8h ago) 12d
nfd-master-769656c4cb-w9vrv 1/1 Running 0 12d
nfd-worker-qjxb2 1/1 Running 3 (3d14h ago) 12d
nfd-worker-xtz9b 1/1 Running 5 (3d14h ago) 12d
The NFD Operator uses vendor PCI IDs to identify hardware in a node. NVIDIA uses the PCI ID 10de.
View the NVIDIA GPU discovered by the NFD Operator by running the following command:
$ oc describe node ip-10-0-132-138.us-east-2.compute.internal | egrep 'Roles|pci'
Example output
Roles: worker
feature.node.kubernetes.io/pci-1013.present=true
feature.node.kubernetes.io/pci-10de.present=true
feature.node.kubernetes.io/pci-1d0f.present=true
10de appears in the node feature list for the GPU-enabled node. This means the NFD Operator correctly identified the node from the GPU-enabled MachineSet.
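To find every node that NFD flagged with an NVIDIA device, you can filter on that label directly. A minimal sketch using the pci-10de label shown above:
$ oc get nodes -l feature.node.kubernetes.io/pci-10de.present=true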